\section{Introduction}
Let $W$ be a Weyl group and $S$ be a set of Coxeter generators of $W$. Consider the Bruhat order with respect to $S$. Let $J$ be a subset of $S$ and $x,y,u \in W$. In \cite{MazMrd}, V. Mazorchuk and R. Mr{\dj}en prove that, if the intersection of the Bruhat interval $[x,y]$ with the parabolic coset $uW_J$ is nonempty, then it has a unique maximal element (see \cite[Lemma 3]{MazMrd}, for which the authors acknowledge help by A. Hultman) and a unique minimal element (see \cite[Lemma 5]{MazMrd}). The proof for the existence of a unique maximal element works also in the case of an arbitrary Coxeter group. On the other hand, the proof for the existence of a unique minimal element makes use of the longest element of $W$, which does not exist in infinite Coxeter groups. In this short note, we give an alternative proof that does not assume finiteness and works for all Coxeter groups.
\bigskip
\section{Notation and preliminaries}
This section reviews the background material that is needed in the proof of Theorem~\ref{teorema}.
Let $(W,S)$ be an arbitrary Coxeter system. The group $W$, under Bruhat order (see, e.g., \cite[\S 2.1]{BB} or \cite[\S 5.9]{Hum}), sometimes also called Bruhat-Chevalley order, is a graded partially ordered set having the length function $\ell$ as its rank function. This means that $W$ has a minimum, which is the identity element $e$, and the function $\ell$ satisfies $\ell (e)=0$ and $\ell (y) =\ell (x)+1$ for all $x,y \in W$ with $x \lhd y$. Here $x \lhd y$, as well as $y \rhd x$, means that the Bruhat interval $[x,y]$ coincides with $\{x,y\}$.
Given $w\in W$, we let $D_R(w)$ denote the right descent set $\{ s \in S : \; \ell(w s) < \ell(w ) \}$ of $w$. Given a subset $J$ of $S$, we let $W_J$ denote the parabolic subgroup of $W$ generated by $J$ and $W^J$ denote the set $\{ w \in W \, : \; D_{R}(w)\subseteq S\setminus J \}$ of minimal left coset representatives. For $x\in W$, we let $W_{\leq x} =\{ w \in W \, : \; w \leq x \}$ and $W_{\geq x} =\{ w \in W \, : \; w \geq x \}$.
The following results are well known (see, e.g., \cite[Proposition~2.2.7]{BB} or \cite[Proposition~5.9]{Hum} for the first one, \cite[\S 2.4]{BB} or \cite[\S 1.10]{Hum} for the second one, and \cite[\S 3.2]{BB} for the lattice property of the weak Bruhat order implying the third one).
\begin{lem}[Lifting Property]
\label{ll}
Let $s\in S$ and $u,w\in W$, $u\leq w$. Then
\begin{enumerate}
\item[(i)]
\label{i}
if $s\in D_R(w)$ and $s\in D_R(u)$ then $us\leq ws$,
\item[(ii)]
\label{ii}
if $s\notin D_R(w)$ and $s \notin D_R(u)$ then $us\leq ws$,
\item[(iii)]
\label{iii}
if $s\in D_R(w)$ and $s\notin D_R(u)$ then $us\leq w$ and $u\leq ws$.
\end{enumerate}
\end{lem}
\begin{pro}
\label{fattorizzo}
Let $J \subseteq S$.
Every $w \in W$ has a unique factorization $w=w^{J} \cdot w_{J}$
with $w^{J} \in W^{J}$ and $w_{J} \in W_{J}$; for this factorization, $\ell(w)=\ell(w^{J})+\ell(w_{J})$.
\end{pro}
\begin{pro}
\label{discese}
Let $x\in W$. If $s_1,s_2 \in D_R(x)$, then the order of the product $s_1 s_2$ is finite.
\end{pro}
Symmetrically, left versions of Lemma~\ref{ll} and Proposition~\ref{fattorizzo} hold, as well as of the following well-known (and immediate to prove) result:
\begin{eqnarray}
\label{minoreinparabolico}
v\leq w \implies v^J\leq w^J
\end{eqnarray}
\section{Arbitrary Coxeter groups}
\begin{thm}
\label{teorema}
Let $(W,S)$ be an arbitrary Coxeter system. The intersection of a Bruhat interval with a parabolic coset is a Bruhat interval.
\end{thm}
\begin{proof}
By (\ref{minoreinparabolico}), it is sufficient to prove that the intersection of a Bruhat interval with a parabolic coset, if nonempty, has a unique maximal element and a unique minimal element. The proof in \cite[Lemma 3]{MazMrd} for the existence of a unique maximal element in Weyl groups works also for arbitrary Coxeter groups.
Let us prove the existence of a unique minimal element in the case of a left coset (the mirrored argument works for a right coset). It is sufficient to prove the following claim: given $x,u \in W$ and a subset $J$ of $S$, the intersection $W_{\geq x} \cap uW_J$, if nonempty, has a unique minimal element.
Let $x$, $u$, and $J$ be as in the claim and suppose $W_{\geq x} \cap uW_J\neq \emptyset$. We may also suppose $u\in W^J$. We use induction on $\ell(x)$. If $\ell(x)=0$, then $x=e$ and $W_{\geq e} \cap uW_J$ has a unique minimal element, which is $u$.
Suppose $\ell(x)>0$ and, towards a contradiction, suppose that $m_1$ and $m_2$ are two distinct minimal elements of $W_{\geq x} \cap uW_J$.
Fix $i\in \{1,2\}$. Clearly $m_i\neq u$. Hence, there exists $s_i\in D_R(m_i)\cap J$. The minimality of $m_i$ implies $xs_i \lhd x$ since otherwise, by Lemma~\ref{ll}(iii), we would have $m_is_i \in W_{\geq x} \cap uW_J$. Let $m^i= \min (W_{\geq xs_i} \cap uW_J)$, which exists by the induction hypothesis and satisfies $m^i\leq m_1$ and $m^i\leq m_2$ since $m_1$ and $m_2$ both belong to $W_{\geq xs_i} \cap uW_J$. Furthermore, $m^is_i \rhd m^i$ since otherwise, by Lemma~\ref{ll}(iii), we would have $m^i \in W_{\geq x} \cap uW_J$, against the minimality of $m_1$ and $m_2$. Again Lemma~\ref{ll}(iii) implies $m^is_i \leq m_i$ while Lemma~\ref{ll}(ii) implies $m^is_i \in W_{\geq x} \cap uW_J$; by the minimality of $m_i$, we have $m^is_i=m_i$. Furthermore, if we let $\bar{i}$ be the element of the singleton $\{1,2\} \setminus \{i\}$, then we have $m_{\bar{i}}s_i \rhd m_{\bar{i}}$ since otherwise the same argument would imply that also $m_{\bar{i}}$ coincides with $m^is_i$, but $m_1\neq m_2$.
The four relations $m_1s_1\lhd m_1$, $m_1s_1\leq m_2$, $m_2s_2\lhd m_2$, $m_2s_2\leq m_1$ imply $\ell(m_1)=\ell(m_2)$, $m_1s_1\lhd m_2$, and $m_2s_2\lhd m_1$. By a repeated use of Lemma~\ref{ll}(i), we conclude that there exists $w\in W^{\{s_1,s_2\}}$ such that $m_1$ and $m_2$ belong to the coset $w W_{\{s_1,s_2\}}$.
Notice that $xs_1 \lhd x$ and $xs_2 \lhd x$; hence $W_{\{s_1,s_2\}}$ is finite by Lemma~\ref{discese}, and $x$ is the top element of the coset $xW_{\{s_1,s_2\}}$.
Lemma~\ref{ll}(i) implies
$$w \, (\underbrace{\cdots s_i s_{\bar{i}} s_i}_{\text{$h$ terms}}) \geq x \iff
w \, (\underbrace{\cdots s_i s_{\bar{i}} }_{\text{$h-1$ terms}}) \geq x s_i \iff
\cdots \iff
w\geq x \, (\underbrace{s_i s_{\bar{i}} s_i \cdots }_{\text{$h$ terms}}) $$
for each $h\in \mathbb N$ smaller than, or equal to, the rank of $W_{\{s_1,s_2\}}$.
Hence, by the four relations $m_1 \geq x$, $m_2 \geq x$, $m_1s_1 \not \geq x$, and $m_2s_2 \not \geq x$, we conclude that the intersection $W_{\leq w} \cap xW_{\{s_1,s_2\}}$ has two distinct maximal elements, which is a contradiction since we know that $W_{\leq w} \cap xW_{\{s_1,s_2\}}$ has a unique maximal element.
\end{proof}
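Theorem~\ref{teorema} can also be checked exhaustively in small rank. The following Python sketch (an illustrative verification, not part of the proof; all names are our own) computes the Bruhat order on $S_4$ via the standard counting criterion $u\leq w$ if and only if $|\{a\leq i : u(a)\geq j\}| \leq |\{a\leq i : w(a)\geq j\}|$ for all $i,j$, and confirms that every nonempty intersection of a Bruhat interval with a left parabolic coset has a unique minimal and a unique maximal element.
\begin{verbatim}
# Brute-force verification of the theorem in S_4 (type A_3).
# Bruhat order via the counting criterion:
# u <= w iff |{a<=i : u(a)>=j}| <= |{a<=i : w(a)>=j}| for all i,j.
from itertools import combinations, permutations

N = 4
W = list(permutations(range(1, N + 1)))

def leq(u, w):
    return all(sum(u[a] >= j for a in range(i + 1))
               <= sum(w[a] >= j for a in range(i + 1))
               for i in range(N) for j in range(2, N + 1))

LEQ = {(u, w): leq(u, w) for u in W for w in W}  # precompute the order

def mult(u, w):  # composition (u*w)(i) = u(w(i))
    return tuple(u[w[i] - 1] for i in range(N))

# adjacent transpositions s_1, ..., s_{N-1}
gens = [tuple(range(1, i + 1)) + (i + 2, i + 1) + tuple(range(i + 3, N + 1))
        for i in range(N - 1)]

def parabolic(J):  # parabolic subgroup W_J generated by {s_j : j in J}
    sub, frontier = {W[0]}, {W[0]}  # W[0] is the identity permutation
    while frontier:
        frontier = {mult(w, gens[j]) for w in frontier for j in J} - sub
        sub |= frontier
    return sub

for r in range(N):
    for J in combinations(range(N - 1), r):
        WJ = parabolic(J)
        cosets, seen = [], set()  # all distinct left cosets u W_J
        for u in W:
            if u not in seen:
                c = frozenset(mult(u, w) for w in WJ)
                cosets.append(c); seen |= c
        for x in W:
            for y in W:
                if not LEQ[(x, y)]:
                    continue
                ival = [w for w in W if LEQ[(x, w)] and LEQ[(w, y)]]
                for c in cosets:
                    inter = [w for w in ival if w in c]
                    if inter:
                        mins = [w for w in inter if not any(
                            LEQ[(v, w)] for v in inter if v != w)]
                        maxs = [w for w in inter if not any(
                            LEQ[(w, v)] for v in inter if v != w)]
                        assert len(mins) == 1 and len(maxs) == 1
print("unique minimum and maximum verified for all cases in S_4")
\end{verbatim}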
\begin{rem}
\begin{enumerate}
\item Prior to \cite{MazMrd}, the special case of the existence of a unique maximal element in $W_{\geq x} \cap W_J$, for all $x\in W$, is proved in \cite[Lemma 7]{Hom74}.
\item The proof of Theorem~\ref{teorema} provides further evidence of the fundamental role of (parabolic) dihedral subgroups and dihedral intervals in understanding the combinatorial properties of Coxeter groups (see, for example, \cite{BCM1}, \cite{CM1}, \cite{CM2}, \cite{Dye2}, \cite{Dye3}, \cite{Dyepreprint}, \cite{Mtrans}, \cite{M}).
\item Fix a subset $J$ of $S$. Let $x\in W$ and $u_1,u_2\in W^J$. While $\min ( W_{\geq x} \cap u_1W_J) \leq \min ( W_{\geq x} \cap u_2W_J)$, as well as $\max ( W_{\geq x} \cap u_1W_J) \leq \max ( W_{\geq x} \cap u_2W_J)$, clearly implies $u_1 \leq u_2$ by (\ref{minoreinparabolico}), the converse does not hold in general, as one may see already in type $A_2$.
\end{enumerate}
\end{rem}
\section{Introduction: photon number splitting attacks and decoy-state protocols}
Quantum key distribution (QKD)~\cite{Wie83,BB84,Eke91,Ben92} allows two parties, Alice and Bob, to establish
a common and secret key $S$ that is informationally secure; see~\cite{May01,SP00,Ren05}
and references therein. A widely used setup for QKD is the one
suggested by Bennett and Brassard (BB84)~\cite{BB84}. BB84 is ideally implemented by
preparing and transmitting
single-photon pulses. Information can be encoded in the state of one of two conjugate
polarization bases,
e.g. vertical/horizontal or diagonal/antidiagonal. Only those {photons}
that were prepared by Alice and detected by Bob in the same basis are useful to build
a sifted key, which forms $S$ after additional steps of information reconciliation and privacy amplification.
Security follows from the inability of {faithfully} copying quantum information~\cite{Zur82}
and the unavoidable information-disturbance trade-off in quantum mechanics.
Nevertheless, realistic implementations of BB84 use weak coherent photon pulses
that could involve many photons, violating the assumptions made in security analyses~\cite{BBBSS92,SWF07,RPH09}.
Such pulses could be exploited by Eve, the eavesdropper, to gain access to the (insecure)
distributed key using a so-called photon-number splitting (PNS) attack~\cite{Lut00,LJ02}.
In a simple proposed PNS attack, Eve measures the number of photons in the pulse, $n$.
If $n=1$, Eve blocks the pulse.
If $n \ge 2$, Eve ``splits'' the pulse to obtain a copy of a single photon with the correct
polarization and keeps it in
her quantum memory. Eve could then obtain a full copy of $S$ by making measurements
of her photons in the correct polarization bases, which are known after a public discussion between Alice and Bob.
Since Alice and Bob cannot measure $n$, a PNS attack may go undetected.
Our goal is to provide a protocol for secure QKD
in the presence of PNS attacks.
A simple approach to overcome a PNS attack
considers reducing the probability
of multi-photon pulses by reducing the coherent-pulse
intensities. The drawback with this approach
is that the probability of creating single-photon pulses
is also reduced. Then, the rate at which the bits used to build $S$
are sifted is far from optimal~\cite{LJ02,GLLP04}.
Another approach is to use decoy states, which allow one
to detect PNS attacks without a substantial
reduction in the rate of sifted bits when Eve is not present~\cite{Hwa03,LMC05,RH09}.
In a decoy-state protocol (DSP), one of several
weak coherent sources is randomly selected for each pulse.
Such sources create pulses of different intensities (mean photon numbers).
This gives Alice and Bob a means to estimate $f_0$ and $f_1$, the number
of Bob's detections due to empty and single-photon pulses prepared by Alice, in the same basis, respectively.
The values of $f_0$ and $f_1$ are important to determine $|S|$, the length of the secure key.
For example, in the discussed PNS attack, $f_1$ is substantially smaller than its
value when Eve is not present, and so is $|S|$.
In more detail, we let $K \gg 1$ be the total number of pulses
prepared by Alice. We first assume that the channel is non-adversarial,
i.e. no eavesdropping attacks
are present.
If the pulse has a random phase,
the number of photons it contains is
sampled according to the Poisson distribution:
\begin{align}
\label{eq:Poisson}
p_n^\mu=\mathrm{Pr}(n|\mu) = e^{-\mu}\frac{ \mu^n}{n!} \; ,
\end{align}
where $\mu$ is the mean photon number
for that source and $\mu \le 1 $ in applications. We let $\eta$ be the transmission/detection efficiency
of the quantum channel shared by Alice and Bob. If $b=1$ ($b=0$) denotes the event in which Bob
detects a non-empty (empty or vacuum) pulse,
\begin{align}
\nonumber
y_n = \mathrm{Pr}(b=1|n)
\end{align}
is the probability of a detection by Bob given that Alice's prepared pulse contained $n$ photons.
$y_n$ is the so-called $n$-photon yield and
$y_n < 1$ due to losses in the channel.
For $n \ge 1$, we may assume
\begin{align}
\nonumber
y_n =1 - (1-\eta)^n \; ,
\end{align}
which is a good approximation in applications. For $n=0$, $y_0>0$ denotes Bob's detector dark-count rate.
The total probability of Bob detecting a pulse (in any one cycle) is the total yield
\begin{align}
\label{eq:totalyield}
Y(\mu) &= \mathrm{Pr}(b=1) \\
\nonumber &= \sum_{n \ge 0} \mathrm{Pr}(n|\mu) y_n \\
\nonumber
& = e^{-\mu} y_0 + 1 - e^{-\mu \eta} \; .
\end{align}
$Y(\mu)$ can be estimated by Alice and Bob, via public discussion, from the frequency of detections
after all pulses were transmitted.
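As a quick numerical check (with illustrative parameter values only), the closed form in Eq.~\eqref{eq:totalyield} agrees with the truncated Poisson sum over $n$; a minimal Python sketch:
\begin{verbatim}
# Numerical check of Eq. (totalyield):
# Y(mu) = e^{-mu} y0 + 1 - e^{-mu*eta}, versus the truncated
# Poisson sum over the photon number n.
from math import exp, factorial

y0, eta = 2e-6, 1e-3                  # illustrative parameters
def p(n, mu):                         # Poisson distribution, Eq. (Poisson)
    return exp(-mu) * mu**n / factorial(n)

def y(n):                             # n-photon yield
    return y0 if n == 0 else 1 - (1 - eta)**n

for mu in (0.063, 0.5, 1.0):
    series = sum(p(n, mu) * y(n) for n in range(60))
    closed = exp(-mu) * y0 + 1 - exp(-mu * eta)
    assert abs(series - closed) < 1e-12
    print(mu, closed)
\end{verbatim}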
In QKD, we allow Eve to manipulate the parameters that characterize the channel
at her will. We use the superscript $\mathcal{E}$ to represent the interaction of Eve with
the communication. For example, $y^\mathcal{E}_n$ denotes the $n$-photon
yield in the presence of Eve. In a general intercept-resend attack, Eve
may intercept a pulse and resend a different one. That is, each detection
by Bob is not guaranteed to come from the same pulse that Alice prepared.
In a simple PNS attack, Eve makes non-demolition measurements of $n$.
With this information, Eve sets $y^\mathcal{E}_1=0 \ne y_1$ and
$y^\mathcal{E}_n \ge y_n$ for $n \ge 2$, so that
\begin{align}
\nonumber
Y(\mu) \approx Y^\mathcal{E}(\mu) \; .
\end{align}
Then, if Alice and Bob can only estimate the total yields,
a PNS attack could be ``invisible'' with the right choices of $y_2^\mathcal{E},y_3^\mathcal{E},\ldots$.
To increase the multi-photon yield,
Eve may use an ideal channel to resend the pulses.
(Note that sophisticated
PNS attacks that do not change the Poisson distribution are possible~\cite{LJ02}.)
A PNS attack allows Eve to have the full key $S$
if
\begin{align}
\nonumber
\mathrm{Pr}(n \ge 2 | \mu) \ge Y(\mu) \; .
\end{align}
In this case, Eve possesses a photon
with the same polarization as that of the pulse detected by Bob
and no single-photon pulses are involved in creating $S$.
Some security guarantees are possible only if $\mathrm{Pr}(n \ge 2 | \mu) < Y(\mu)$~\cite{GLLP04}.
Such an inequality is satisfied when $\mu \approx \eta$, implying a rate for sifted bits
of order $\eta^2$ [Eq.~\eqref{eq:totalyield}]. This is undesirably
small ($\eta \ll 1$).
Remarkably, DSPs give an optimal rate of order $\eta$ with
small resource overheads.
A goal in a DSP is to estimate $y^\mathcal{E}_0$ and $y^\mathcal{E}_1$,
which provide a lower bound on $f_0^\mathcal{E}$ and $f_1^\mathcal{E}$,
respectively. Empty and single-photon pulses cannot be split and the information
carried in their polarization cannot be faithfully copied,
making them useful to create a secure key. For the estimation,
Alice uses photon sources with different values of $\mu$ that are otherwise identical.
In a conventional DSP, it is assumed that Eve's PNS attack treats every $n$-photon pulse
equally and independently, regardless of its source. That is, Eve's attack
is simulated by independent and identically distributed (i.i.d.) random variables.
The total yield in this case is, for any given $\mu$,
\begin{align}
\label{eq:Evetotalyield}
Y^\mathcal{E}(\mu) = \sum_{n \ge 0} p_n^\mu \ y^\mathcal{E}_n \; .
\end{align}
Equation~\eqref{eq:Evetotalyield} describes mathematically what we denote as the i.i.d. assumption.
It follows that
\begin{align}
\nonumber
y^\mathcal{E}_0 &= Y^\mathcal{E}(\mu) |_{\mu=0} \; , \\
\label{eq:singlephotonTE}
y^\mathcal{E}_1 &= \partial_\mu \left[ e^\mu Y^\mathcal{E}(\mu) \right] |_{\mu=0} \; .
\end{align}
Then, if Eve's attack satisfies $Y(\mu) \approx Y^\mathcal{E}(\mu)$ for all $\mu$,
\begin{align}
\nonumber
y^\mathcal{E}_0 \approx y_0 \; , \; y^\mathcal{E}_1 \approx y_1 = \eta \; .
\end{align}
That is, by being able to estimate $Y^\mathcal{E}(\mu)$ for two values of $\mu \ll 1$ via public discussion,
Alice and Bob can restrict Eve's attack so that the dark-count rate
and single-photon yield
are almost unchanged from the non-adversarial case. In addition, if a third source with
$\mu \approx 1$ is randomly invoked,
an optimal key rate of order $\eta$ will be achieved.
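As an illustration of Eqs.~\eqref{eq:singlephotonTE} in the non-adversarial case, the following Python sketch (illustrative parameters; finite statistics are ignored here) replaces the derivative by a first-order finite difference between $\mu=0$ and one small intensity:
\begin{verbatim}
# Estimate y0 and y1 from the total yields at mu = 0 and one small mu,
# using Eq. (singlephotonTE) with a first-order finite difference.
from math import exp

y0, eta = 2e-6, 1e-3                # illustrative channel parameters
def Y(mu):                          # non-adversarial yield, Eq. (totalyield)
    return exp(-mu) * y0 + 1 - exp(-mu * eta)

mu_V = 0.063                        # weak decoy intensity (illustrative)
y0_est = Y(0.0)                     # y0 = Y(0)
y1_est = (exp(mu_V) * Y(mu_V) - y0_est) / mu_V   # finite difference
print(y0_est, y1_est, eta)          # y1_est ~ eta up to O(mu_V) terms
\end{verbatim}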
In reality, the estimation of $y_0^\mathcal{E}$ and $y_1^\mathcal{E}$ is subject to finite statistics and
can be technically involved. Nevertheless, the i.i.d. assumption in Eq.~\eqref{eq:Evetotalyield}
allows Alice
and Bob to gain information about Eve's attack by running the
protocol and analyzing the (binomial) distributions of
the detection events for each source. However, we remark that if Eve were to correlate her attacks,
the i.i.d. assumption and the corresponding security analyses would be invalid.
This is the main motivation behind our analysis.
In this paper, we give an example that shows
how the i.i.d. assumption can be simply bypassed by Eve,
resulting in security parameters that are worse than those obtained
under the assumption. We then analyze the security of DSPs
for general {PNS} attacks.
Our main result is
an estimation procedure that gives
a lower bound on $f_0^\mathcal{E}$ and $f_1^\mathcal{E}$,
with a confidence level that is an input
to the estimation procedure. Our security analysis
does not use the i.i.d. assumption and is particularly relevant
when Eve performs a PNS attack that could correlate different pulses in one session or even
different sessions. We
compare some results obtained by our estimation procedure with those obtained by
using the i.i.d. assumption, and emphasize the importance of our procedure.
\section{The security parameter, the i.i.d. assumption, and finite statistics}
\label{sec:decoys}
Of high significance in cryptographic protocols is $\epsilon$,
the so-called security parameter. $\epsilon$ measures
the deviation of a real protocol implementation from an ideal one.
We use the same definition as in Ref.~\cite{Ren05}, which states that a real QKD
protocol is $\epsilon$-secure if it is $\epsilon$-indistinguishable
from a perfectly secure and ideal one. This definition is equivalent
to a statement on the trace norm of the difference between
the quantum states resulting from the real and ideal protocol, respectively.
It implies that a QKD protocol that is $\epsilon$-secure can be safely reused
on the order of $1/\epsilon$ times without compromising its security.
Usually, one fixes a value for $\epsilon$ and then determines the size
of $S$ based on several protocol performance parameters. These parameters include
the number of pulses sent by Alice, the number of pulses detected by Bob,
and the estimated bit error rates at each mean photon number.
For DSPs, $\epsilon$ has a component $\epsilon_{\rm DSP}$
that determines the confidence
level in the estimation of a lower bound of $f_0^\mathcal{E}$ and $f_1^\mathcal{E}$, due to finite statistics.
A possible way to obtain such lower bounds, under the i.i.d. assumption,
is the one followed in Ref.~\cite{RH09}.
In this case, we consider a DSP with three sources, $i=U , V ,W$.
The mean photon number in each pulse, for each source,
is $\mu^U=0$, $\mu^V \ll 1$, and $\mu^W \in \mathcal{O}(1)$.
Each source $i$ randomly prepares a pulse with probability $q^i$ and
we let $K^i$ be
the total number of pulses for that source. $K^i$
is known to Alice and Bob by public discussion after all pulses are sent
and $K^i \approx q^i K$
when $K \gg 1$.
We write $D^{i,\mathcal{E}}$ for the random variable
that counts the number of pulses from source $i$ detected by Bob
under the presence of Eve~\cite{Note3}.
The exact value that $D^{i,\mathcal{E}}$
takes in a session can also be obtained by Alice and Bob
via public discussion after the pulses were transmitted.
Under the i.i.d. assumption [Eq.~\eqref{eq:Evetotalyield}], $D^{i,\mathcal{E}}$
is sampled according to the binomial distribution. Then, $D^{i,\mathcal{E}}/K^i$
is an estimator of the total yield $Y^\mathcal{E}(\mu^i)=E[D^{i,\mathcal{E}}/K^i]$, where $E[\cdot]$
denotes the mean value. That is, for a given $\bar \epsilon_{\rm DSP}$,
we can establish confidence intervals
\begin{align}
\label{eq:DSPbinomial}
\frac{D^{i,\mathcal{E}}}{K^i} + c(\bar \epsilon_{\rm DSP}) \sigma^{i,\mathcal{E}} \ge Y^\mathcal{E}(\mu^i) \ge \frac{D^{i,\mathcal{E}}}{K^i} - c(\bar \epsilon_{\rm DSP}) \sigma^{i,\mathcal{E}} \; ,
\end{align}
with confidence level $1-\bar \epsilon_{\rm DSP}$. The constant $c$ depends on $\bar \epsilon_{\rm DSP}$
and can be obtained using Chernoff's bound~\cite{Ho63} -- see Appendix~\ref{app:chernoff}.
The standard deviation in this case is
\begin{align}
\label{eq:binvariance}
\sigma^{i,\mathcal{E}} \approx \sqrt{\frac {Y^{\mathcal{E}}(\mu^i) (1-Y^{\mathcal{E}}(\mu^i))}{q^i {K}} } \; .
\end{align}
Using Eq.~\eqref{eq:Evetotalyield} for $Y^{\mathcal{E}}(\mu^i)$,
we can search for the minimum values of $y_0^\mathcal{E}$ and $y_1^\mathcal{E}$
that satisfy Eqs.~\eqref{eq:DSPbinomial}, e.g. by executing a linear program.
Both $y_0^\mathcal{E}$ and $y_1^\mathcal{E}$ can then be used to obtain the desired lower
bounds on $f_0^\mathcal{E}$ and $f_1^\mathcal{E}$, respectively, with corresponding confidence
level $1-\epsilon_{\rm DSP}$. This last step also requires using the i.i.d. assumption.
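A minimal sketch of such a linear program (using scipy, with a photon-number cutoff $n_{\max}$, and with the non-adversarial yields of Eq.~\eqref{eq:totalyield} standing in for the observed frequencies $D^{i,\mathcal{E}}/K^i$; all parameter values are illustrative) is:
\begin{verbatim}
# Linear program for the smallest single-photon yield y_1 compatible
# with the observed total yields, under the i.i.d. assumption.
from math import exp, factorial, log, sqrt
from scipy.optimize import linprog

K, eta, y0 = 10**10, 1e-3, 2e-6
mus = [0.0, 0.063, 0.5]             # sources U, V, W (illustrative)
qs  = [0.01, 0.0275, 0.9625]
n_max, eps_bar = 20, 1e-10
c = 2 * sqrt(abs(log(eps_bar)))     # Eq. (chernofferror)

def poisson(n, mu):
    return exp(-mu) * mu**n / factorial(n)

def Y(mu):                          # stand-in for D^i / K^i
    return exp(-mu) * y0 + 1 - exp(-mu * eta)

# Constraints: |sum_n Pr(n|mu) y_n - Y(mu)| <= c*sigma, per source.
A_ub, b_ub = [], []
for mu, q in zip(mus, qs):
    row = [poisson(n, mu) for n in range(n_max + 1)]
    sig = sqrt(Y(mu) * (1 - Y(mu)) / (q * K))   # Eq. (binvariance)
    A_ub.append(row)
    b_ub.append(Y(mu) + c * sig)
    A_ub.append([-r for r in row])
    b_ub.append(-(Y(mu) - c * sig))

cost = [0.0] * (n_max + 1)
cost[1] = 1.0                        # minimize y_1 (y_0 is analogous)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n_max + 1))
print("smallest y_1 compatible with the data:", res.x[1])
\end{verbatim}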
We remark that Eq.~\eqref{eq:DSPbinomial} does not properly address
the problem of inferring a distribution for $Y^\mathcal{E}(\mu^i)$ from the known
$\frac{D^{i,\mathcal{E}}}{K^i}$, a problem that would require knowledge of the prior
distribution of $Y^\mathcal{E}(\mu^i)$.
\section{Increasing the length of confidence intervals: An attack}
\label{sec:attack}
The analysis in Sec.~\ref{sec:decoys} used the i.i.d. assumption that resulted in a
value for $\sigma^{i,\mathcal{E}}$ given by Eq.~\eqref{eq:binvariance}.
Nevertheless, the actual value of $\sigma^{i,\mathcal{E}}$ could be much higher
in more general PNS attacks. For the same confidence level,
a bigger $\sigma^{i,\mathcal{E}}$ implies a ``wider'' confidence interval for the estimation of
the yield $Y^\mathcal{E}(\mu^i)$ (Appendix~\ref{app:chernoff}),
and thus smaller lower bounds on $f_0^\mathcal{E}$ and $f_1^\mathcal{E}$.
The overall result
in the DSP is a secret key $S$ of smaller size for the same security parameter.
To illustrate how Eve can bypass the i.i.d. assumption,
we suggest a potential attack that results in almost no change in
the total yields (i.e., $Y(\mu^i) \approx Y^{\mathcal{E}}(\mu^i)$)~\cite{Note1}, while the variances $\sigma^{i,\mathcal{E}}$
are increased
with respect to those of the binomial distribution [Eq.~\eqref{eq:binvariance}].
The suggested attack could be detected by Alice and Bob by estimating the variances
directly via public discussion. Nevertheless, it still shows that a better analysis of the security of DSPs
is needed to make rigorous claims.
In the attack,
Eve first picks an integer value for $\tau \ge 1$,
where $\tau^2$ denotes a scale for a variance or ``correlation'' of a particular distribution.
Eve receives all pulses from Alice and we let $k_n$ be the total number
of $n$-photon pulses in the protocol. Note that the exact value of $k_n$ is known to Eve but not to
Alice and Bob. In general, $k_n$ is sampled according to the binomial distribution
\begin{align}
\nonumber
\mathrm{Pr}(k_n) = \begin{pmatrix} K \cr k_n \end{pmatrix} (p_n)^{k_n} (1-p_n)^{K-k_n} \; ,
\end{align}
where $p_n$ is the probability of a pulse containing $n$ photons: $p_n = \sum_i q^i p_n^{\mu^i}$.
The mean and variance of this distribution are
\begin{align}
\nonumber
E[k_n] &=p_n K \; , \\
\nonumber
\sigma^2_{k_n} &= p_n (1-p_n) K \; .
\end{align}
Given $k_n$, Eve randomly picks a value for $d^{\mathcal{E}}_n \in \{0,1\ldots,k_n\}$,
where $d_n^\mathcal{E}=\sum_i d_n^{i,\mathcal{E}}$ is the total number of detections due to $n$-photon pulses prepared by Alice.
In particular, we assume that Eve can control $d^{\mathcal{E}}_0$, which determines the dark-count rate.
The distribution
associated with $d^{\mathcal{E}}_n$ has the following properties:
\begin{align}
\label{eq:dnDIST}
E[d^{\mathcal{E}}_n|k_n] &= y_n k_n \; , \\
\nonumber
\sigma^2_{d^{\mathcal{E}}_n|k_n} & = \tau^2 y_n (1-y_n) k_n \; .
\end{align}
We let $d_n^{i,\mathcal{E}}$ be the number of $n$-photon pulses prepared by Alice's $i$th source only and detected by Bob.
The exact value of $d_n^{i,\mathcal{E}}$ is unknown to all parties. Because Eve does not know the source being used
in the DSP, $d_n^{i,\mathcal{E}}$ is sampled according to the binomial distribution when given $d_n^{\mathcal{E}}$:
\begin{align}
\label{eq:dnidist}
\mathrm{Pr}(d^{i,\mathcal{E}}_n | d^\mathcal{E}_n) = \begin{pmatrix} d^\mathcal{E}_n \cr d_n^{i,\mathcal{E}} \end{pmatrix} (q^i_n)^{d_n^{i,\mathcal{E}}} (1-q^i_n)^{d^\mathcal{E}_n-d_n^{i,\mathcal{E}}} \; ,
\end{align}
where
\begin{align}
q_n^i = \frac {q^i e^{-\mu^i} (\mu^i)^n}{\sum_{i'=U,V,W} q^{i'} e^{-\mu^{i'}} (\mu^{i'})^n} \; .
\end{align}
The distribution associated with $d_n^{i,\mathcal{E}}$ satisfies
\begin{align}
\nonumber
E[d_n^{i,\mathcal{E}}|d_n^{\mathcal{E}}] &= q_n^i d_n^{\mathcal{E}} \; , \\
\nonumber
\sigma^2_{d_n^{i,\mathcal{E}}|d_n^\mathcal{E}} & = q_n^i (1-q_n^i) d_n^\mathcal{E} \; .
\end{align}
As in Sec.~\ref{sec:decoys}, we let $(\sigma^{i,\mathcal{E}})^2$ be the variance
associated with the random variable $Z^{i,\mathcal{E}} =D^{i,\mathcal{E}}/K^i$, where
\begin{align}
\label{eq:constrains}
D^{i,\mathcal{E}} = \sum_{n \ge 0} d_n^{i,\mathcal{E}} \; ,
\end{align}
and $E[Z^{i,\mathcal{E}}]=Y^\mathcal{E}(\mu^i)$.
An accurate estimate of $Z^{i,\mathcal{E}}$ can be obtained
if we approximate $K^i \approx q^i K $, in the limit of large $K$.
In addition, because $K$ is fixed, the variables $k_n$ are not independent.
However, in the large-$K$ limit, $k_n$ can also be approximated by its mean value.
It implies that the $k_n$ are almost independent and so are the $d_n^{\mathcal{E}}$
and $d_n^{i,\mathcal{E}}$ for different values of $n$. Under these approximations,
\begin{align}
\label{eq:dnvariance}
(\sigma^{i,\mathcal{E}})^2 \approx \frac 1 {( q^i K )^2} \sum_{n \ge 0} \sigma^2_{d_n^{i,\mathcal{E}}} \; .
\end{align}
In Appendix~\ref{App:AttackStatistics} we show that
\begin{align}
\label{eq:dnivariance}
\sigma^2_{d_n^{i,\mathcal{E}}} =[ (\tau^2-1) q_n^i (1-y_n) + (1-q_n^i y_n p_n)] q_n^i y_n p_n K \; .
\end{align}
By inserting Eq.~\eqref{eq:dnivariance} into Eq.~\eqref{eq:dnvariance}, we can obtain the variances as a function of $\tau$.
In Fig.~\ref{fig:Attack1} we compute $\sigma^{U,\mathcal{E}}$ and $\sigma^{V,\mathcal{E}}$.
The i.i.d. assumption discussed in Sec.~\ref{sec:decoys}
corresponds to $\tau=1$ -- see Appendix~\ref{App:AttackStatistics}. Using these results in Eq.~\eqref{eq:DSPbinomial} yields
wider confidence intervals
for the same confidence level.
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{FinalVersionAttack-Fig1.pdf}
\caption{The standard deviations $\sigma^{U,\mathcal{E}}$ and $\sigma^{V,\mathcal{E}}$ for an attack
in which Eve correlates $n$-photon pulses according to the value
of $\tau$. The parameters are $1 \le \tau \le 100$, $K=10^{10}$, $q^U=0.01$, $q^V=0.0275$, $\mu^U=0$, $\mu^V=0.063$,
$\eta=10^{-3}$, and $y_0=2\cdot 10^{-6}$~\cite{RH09}. The results in Sec.~\ref{sec:decoys} are recovered for $\tau=1$.}
\label{fig:Attack1}
\end{figure}
To illustrate our point further, we consider a simple attack in which a single
source $U$ is used to estimate the dark-count rate. Here, $\mu^U=0$
and $d_0^\mathcal{E}=D^{U,\mathcal{E}}$ is known. In the non-adversarial case, $d_0^\mathcal{E}$ is sampled according to the binomial distribution
with probability $y_0$ and known sample size $k_0=K^U$. Nevertheless, for the correlated attack, we assume that
Eve ``receives'' the $K^U$ pulses and groups them according to blocks of size $\tau^2$. Then,
Eve will force (prevent) the detection of all pulses in any one block with probability $y_0$ ($1-y_0$).
The random variable $d_0^\mathcal{E}$ for the correlated attack satisfies
\begin{align}
\nonumber
E[d_0^\mathcal{E}] & = y_0 k_0 \; , \\
\nonumber
\sigma^2_{d_0^\mathcal{E}} (\tau)& = [y_0 (\tau^2)^2 - (y_0 \tau^2)^2]\frac{k_0}{\tau^2} = \tau^2 y_0 (1-y_0) k_0 \; ,
\end{align}
and $\tau=1$ corresponds again to the i.i.d. assumption [see Eq.~\eqref{eq:dnDIST}].
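A short Monte Carlo sketch of this blockwise attack (with $y_0$ and $k_0$ chosen large enough to make the statistics visible, not the values quoted above) confirms the $\tau^2$ inflation of the variance:
\begin{verbatim}
# Monte Carlo check: grouping the vacuum pulses into blocks of size
# tau^2 and detecting whole blocks with probability y0 inflates
# Var(d_0^E) from y0(1-y0)k0 to tau^2 y0(1-y0)k0.
import random

y0, k0, tau, trials = 0.01, 10**5, 5, 2000   # illustrative values
samples = []
for _ in range(trials):
    blocks = k0 // tau**2
    detected = sum(tau**2 for _ in range(blocks) if random.random() < y0)
    samples.append(detected)

mean = sum(samples) / trials
var = sum((s - mean)**2 for s in samples) / (trials - 1)
print("empirical mean/var:", mean, var)
print("predicted:", y0 * k0, tau**2 * y0 * (1 - y0) * k0)
\end{verbatim}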
In Fig.~\ref{fig:DarkCount} (A),
we plot the probability that $Z^{U,\mathcal{E}}=D^{U,\mathcal{E}}/K^U$ satisfies
\begin{align}
\nonumber
E[Z^{U,\mathcal{E}}] + c \sigma_{d_0^\mathcal{E}} (1) \ge Z^{U,\mathcal{E}} \ge E[Z^{U,\mathcal{E}}] - c \sigma_{d_0^\mathcal{E}} (1) \; ,
\end{align}
for different values of $c$ and $\tau$. For $\tau=1$, such a probability
corresponds to the confidence level in Eq.~\eqref{eq:DSPbinomial}. $E[Z^{U,\mathcal{E}}]=y_0$ in this example.
For the inverse problem, namely the estimation of $y_0$
from $D^{U,\mathcal{E}}$ and $K^U$, Eq.~\eqref{eq:DSPbinomial} may be incorrect.
We may then assume a uniform prior distribution for $y_0 \in [0,1]$,
and obtain the posterior distribution as
\begin{align}
\label{eq:BayesEst}
\mathrm{Pr} (y_0 | & D^{U,\mathcal{E}}) = \mathrm{Pr} (D^{U,\mathcal{E}}|y_0) \mathrm{Pr}(y_0)/\mathrm{Pr}(D^{U,\mathcal{E}}) \\
\nonumber
& \propto \begin{pmatrix} K^U/\tau^2 \cr D^{U,\mathcal{E}}/\tau^2 \end{pmatrix} y_0^{D^{U,\mathcal{E}}/\tau^2}
(1-y_0)^{(K^U-D^{U,\mathcal{E}})/\tau^2} \; ,
\end{align}
which is plotted in Fig.~\ref{fig:DarkCount} (B).
Our results demonstrate that, for a fixed security parameter, the accuracy in the estimation of the dark-count rate
strongly depends on Eve's attack and can be substantially different from the
one obtained under the i.i.d. assumption ($\tau=1$).
\begin{figure}[ht]
\centering
\includegraphics[width=7.8cm]{DarkCount-Fig}
\caption{Estimation of dark counts. (A) Confidence intervals for different correlated attacks,
parametrized by $\tau$, and confidence bounds, parametrized by $c$. (B) Bayesian estimation of $y_0$, the mean dark-count rate,
assuming a uniform prior and for different correlated attacks [Eq.~\eqref{eq:BayesEst}].}
\label{fig:DarkCount}
\end{figure}
\section{Security of DSP: Correlated PNS attacks}
\label{sec:generalsecurity}
We go beyond the i.i.d. assumption and
study more general and correlated PNS attacks,
in which Eve has full control
over Bob's detection events.
The secure key-rate in a realistic implementation of QKD is~\cite{RH09}
\begin{align}
\label{eq:keyrate}
s \ge f_0^{\mathcal{E} *} + f_1^{\mathcal{E} *} - \kappa_{\rm EC} F^{\mathcal{E}} H_2({\rm BER}) -
\kappa_{\rm PA} f_1^{\mathcal{E} *} H_2(b_1^{\max}) \; ,
\end{align}
which determines the size of the distributed key as $|S|=sK$.
$F^\mathcal{E}$ is the total number of pulses detected by Bob
and prepared by Alice in the same basis, that are useful for the sifted key.
In BB84, $F^\mathcal{E} \approx D^\mathcal{E}/2$, where $D^\mathcal{E}$ is the total number of detections.
$f_n^{\mathcal{E} *}$ is a lower bound on $f_n^\mathcal{E}$, the number of $n$-photon pulses
prepared and detected in the same basis.
$H_2(\cdot)$ is the binary entropy function,
$\kappa_{\rm EC}$ and $\kappa_{\rm PA}$ are coefficients due to the error correction and privacy amplification
steps, BER is the total bit error rate, and $b_1^{\max}$ is an upper
bound on the bit error rate due to single-photon pulses only.
In a DSP, we characterize a general PNS attack
by the distribution
\begin{align}
\mathrm{Pr}(d_0^{\mathcal{E}},d_1^{\mathcal{E}},\ldots|k_0,k_1,\ldots) \; ;
\end{align}
See Fig.~\ref{fig:InterceptResend1} for an example.
Our goal is to build an estimation procedure that places
confidence intervals on $f_0^\mathcal{E} =\sum_i f_0^{i,\mathcal{E}}$ and $f_1^\mathcal{E} = \sum_i f_1^{i,\mathcal{E}}$
from the known $D^{i,\mathcal{E}}$.
These intervals ultimately imply a lower bound on $s$ --
see Eq.~\eqref{eq:keyrate}.
\begin{figure}[ht]
\centering
\includegraphics[width=7.8cm]{InterceptResend-Fig}
\caption{A general PNS attack with three decoy sources, $\mu^U=0$, $\mu^V \ll 1$,
and $\mu^W \in \mathcal{O}(1)$. Each block represents the number of pulses with $n=0,1,2,\ldots$,
respectively.
The random variables $k_n$ indicate
the number of $n$-photon pulses prepared by Alice and the superscript $i$ denotes the source
used for such pulses.
Eve's attack controls the number of detections
by Bob, due to $n$-photon pulses, through $d_n^{\mathcal{E}}$.
}
\label{fig:InterceptResend1}
\end{figure}
We assume that there are three sources satisfying $\mu^U=0 < \mu^V < \mu^W$,
and $\mu^W \in \mathcal{O}(1)$. Nevertheless, our analysis can be easily generalized
to the case in which more sources are present, where the estimation is more accurate.
For each source, Bob's detections satisfy Eq.~\eqref{eq:constrains}.
If a simple relationship between each $d_n^{i,\mathcal{E}}$ and $d_n^{\mathcal{E}}$ could be found, we could execute
a program to solve Eqs.~\eqref{eq:constrains}.
Such a relationship could be obtained from the binomial distribution
associated with $d_n^{i,\mathcal{E}}$, when given ${d_n^{\mathcal{E}}}$ [Eq.~\eqref{eq:dnidist}].
Our estimation procedure uses ${d_n^{i,\mathcal{E}}}$ to determine the confidence
intervals
\begin{align}
\label{eq:upperlowerbound1}
\Phi_{i,n} (d_n^{i,\mathcal{E}}) \ge d_n^{\mathcal{E}} \ge \phi_{i,n} (d_n^{i,\mathcal{E}}) \; .
\end{align}
The corresponding confidence level for each inequality is $1-\epsilon_n/2$.
The upper and lower bounds
are monotonic and invertible functions.
Then,
\begin{align}
\label{eq:upperlowerbound2}
& \phi_{i,n}^{-1}(d_n^\mathcal{E}) \ge d_n^{i,\mathcal{E}} \ge \Phi^{-1}_{i,n}( d_n^\mathcal{E}) \; ,
\end{align}
with the same confidence levels.
Such confidence levels do not result from the binomial
distribution as we are analyzing the inverse problem, namely the estimation of $d_n^\mathcal{E}$
from the available information (i.e., $D^{i,\mathcal{E}}$ and $K^i$).
From Eqs.~\eqref{eq:constrains} and \eqref{eq:upperlowerbound2},
we obtain
\begin{align}
\label{eq:upperlowerbound3}
& \sum_{n \ge 0} \phi_{i,n}^{-1}(d_n^\mathcal{E}) \ge D^{i,\mathcal{E}} \ge \sum_{n \ge 0} \Phi^{-1}_{i,n}( d_n^\mathcal{E}) \; ;
\end{align}
See Fig.~\ref{fig:BoundsFig} for an example.
\begin{figure}[ht]
\centering
\includegraphics[width=7.8cm]{Bounds-Fig}
\caption{Upper and lower bounds on $d_0^{V,\mathcal{E}}
+ d_1^{V,\mathcal{E}} \approx d^{V,\mathcal{E}}$. The yellow blocks
represent the number of pulses from source $V$ with $n=0$ and $n=1$.
The dark and blue blocks represent the total number of pulses
with $n=0$ and $n=1$, respectively. The confidence level for this case
is not smaller than $1-(\epsilon_0 + \epsilon_1)$.}
\label{fig:BoundsFig}
\end{figure}
Next, our estimation procedure executes a program
to obtain $d_0^{\mathcal{E} *}$ and $d_1^{\mathcal{E} *}$, the corresponding
smallest values of $d_0^\mathcal{E}$ and $d_1^\mathcal{E}$,
subject to the constraints given by Eqs.~\eqref{eq:upperlowerbound3}.
From the union bound, the confidence level in such values is $1- \bar \epsilon_{\rm DSP}$, with
\begin{align}
\label{eq:securityparameter0}
\bar \epsilon_{\rm DSP} \le 3 \sum_{n \ge 0} \epsilon_n \; ,
\end{align}
when three sources are used.
Since $f_0^{\mathcal{E}}$ and $f_1^{\mathcal{E}}$ are sampled according to a
binomial distribution when given $F^\mathcal{E}$ (i.e.,
the preparation and detection basis are random),
we obtain
\begin{align}
\label{eq:desiredbound}
f_n^{\mathcal{E} *} = F^\mathcal{E} \frac{ d_n^{\mathcal{E} *} }{D^\mathcal{E}} - c(\bar\delta_{\rm DSP}) \sqrt{F^\mathcal{E} \frac{ d_n^{\mathcal{E} *} }{D^\mathcal{E}} \left(1-\frac{ d_n^{\mathcal{E} *} }{D^\mathcal{E}} \right)} \; ,
\end{align}
where the constant $c( \bar \delta_{\rm DSP}) \ge 0$ can be obtained using
Eq.~\eqref{eq:chernofferror}.
The overall confidence level for the key rate $s$
is $1-\epsilon_{\rm DSP}$, where the security
parameter satisfies
\begin{align}
\label{eq:securityparameter}
\epsilon_{\rm DSP} \le \bar \epsilon_{\rm DSP} + \bar \delta_{\rm DSP} \; .
\end{align}
In the next section we obtain the confidence intervals and levels specifically for our method.
\vspace{0.5cm}
\subsection*{Confidence intervals for the estimation procedure}
\label{sec:confidencebounds}
Our method takes $\epsilon_{\rm DSP}$ as input
and outputs $f_0^{\mathcal{E}*}$ and $f_1^{\mathcal{E}*}$.
To satisfy Eq.~\eqref{eq:securityparameter}, we can set
\begin{align}
c(\bar \delta_{\rm DSP}) & = 2 \sqrt{|\log (\epsilon_{\rm DSP}/2)|}
\end{align}
and
\begin{align}
\label{eq:nphotonerror}
\epsilon_n &= (\epsilon_{\rm DSP}/12) (1/2)^n \;
\end{align}
[see Eqs.~\eqref{eq:chernofferror} and \eqref{eq:firsterror}].
Next, we will find $d_0^{\mathcal{E}*}$ and $d_1^{\mathcal{E}*}$ as required by Eq.~\eqref{eq:desiredbound}.
If $\phi$ depends on $d_n^{i,\mathcal{E}}$ only,
the probability that $d_n^{\mathcal{E}}$ is smaller than $\phi$ is
\begin{align}
\label{eq:errorbound2}
\sum_{d_n^\mathcal{E}=0}^{K} \mathrm{Pr}(d_n^\mathcal{E}) \sum_{d_n^\mathcal{E} \ge d_n^{i,\mathcal{E}} > u_n^i} \mathrm{Pr}(d_n^{i,\mathcal{E}}|d_n^\mathcal{E})
=\frac{\epsilon_n}{2} \; ,
\end{align}
with
\begin{align}
\nonumber
u_n^i = \phi_{i,n}^{-1} ({d_n^\mathcal{E}}) \; .
\end{align}
When given {$d_n^\mathcal{E}$}, the random variable {$d_n^{i,\mathcal{E}}$} is sampled according
to Eq.~\eqref{eq:dnidist}.
From Chernoff's bound (Appendix~\ref{app:chernoff})
\begin{align}
\epsilon_n
\label{eq:Hoeffdingbound}
{ \le 2 \max_{0 \le d_n^\mathcal{E} \le K} \exp \left \{ - \frac {(u_n^i - q_n^i d_n^\mathcal{E} )^2} {4 q_n^i (1-q_n^i) d_n^\mathcal{E}} \right \} \; , }
\end{align}
and we choose the lower bound so that
\begin{align}
\label{eq:lowerbound}
\phi_{i,n} ({d_n^{i,\mathcal{E}}}) = { \frac{d_n^{i,\mathcal{E}}} {q_n^i} - c_n \frac{ 1-q_n^i}{2 q_n^i} \left[ \sqrt{ c_n^2 + \frac{ 4 d_n^{i,\mathcal{E}}} { (1-q_n^i)^2}} -c_n \right] } \; ,
\end{align}
with $c_n \ge 0$.
The error probability satisfies
\begin{align}
\label{eq:errorbound3}
\epsilon_n \le 2 \exp \left \{ - c^2_n /4 \right \}\; ;
\end{align}
See Appendix~\ref{app:errors}.
A similar analysis gives the upper bound
\begin{align}
\label{eq:upperbound}
\Phi_{i,n} ({d_n^{i,\mathcal{E}}}) = { \frac{d_n^{i,\mathcal{E}}} {q_n^i} + c_n \frac{ 1-q_n^i}{2 q_n^i} \left[ \sqrt{ c_n^2 + \frac{ 4 d_n^{i,\mathcal{E}}} { (1-q_n^i)^2}} +c_n \right] } \; ,
\end{align}
with the same confidence level.
Then, to satisfy Eq.~\eqref{eq:nphotonerror},
it suffices to set
\begin{align}
\nonumber
c_n^2 ( \epsilon_{\rm DSP})= 4 | \log( \epsilon_{\rm DSP}/24) + n \log (1/2)| \;.
\end{align}
\vspace{2.cm}
To complete the estimation procedure, we
invert Eqs.~\eqref{eq:lowerbound} and \eqref{eq:upperbound}
and obtain
\begin{align}
\label{eq:finalbounds}
& \sum_{n \ge 0} q_n^i d_n^\mathcal{E} +c_n ( \epsilon_{\rm DSP}) \sqrt{ q_n^i (1-q_n^i) d_n^\mathcal{E}} \ge D^{i,\mathcal{E}} \; , \\
\nonumber
& D^{i,\mathcal{E}} \ge \sum_{n \ge 0} q_n^i d_n^\mathcal{E} - c_n ( \epsilon_{\rm DSP}) \sqrt{ q_n^i (1-q_n^i) d_n^\mathcal{E}} \; .
\end{align}
We can then execute a program
that finds the minimum values of $d_0^\mathcal{E}$ and $d_1^\mathcal{E}$
subject to Eqs.~\eqref{eq:finalbounds}.
For instance, a quadratic program can be used to search
$\sqrt{q_n^i d_n^{\mathcal{E}*}}$. Such minimum values
will be used in Eqs.~\eqref{eq:desiredbound} and \eqref{eq:keyrate} to obtain the key rate.
A technical remark is in order.
When $n \rightarrow \infty$, $q_n^i (1-q_n^i) \rightarrow 0$
exponentially fast in $n$.
Then, the contribution of large-$n$ terms in Eqs.~\eqref{eq:finalbounds} is negligible.
We can set a suitable cutoff $n_{\max} \ge n$ in the number of photons
per pulse in our analysis, to avoid
unnecessary computational overheads
in finding $d_0^{\mathcal{E}*}$ and $d_1^{\mathcal{E}*}$, and with an
insignificant impact on the estimated values.
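As an illustration, the following sketch feeds the constraints of Eqs.~\eqref{eq:finalbounds}, truncated at $n_{\max}$, directly to a general nonlinear solver instead of the quadratic program in $\sqrt{q_n^i d_n^{\mathcal{E}}}$ mentioned above; the inputs are illustrative and generated from the non-adversarial channel, and the choice of solver is not essential:
\begin{verbatim}
# Sketch: smallest d_1 consistent with Eqs. (finalbounds), cutoff n_max.
import numpy as np
from math import exp, factorial
from scipy.optimize import minimize

K, eta, y0 = 10**8, 1e-3, 2e-6
mus = np.array([0.0, 0.063, 0.5])        # illustrative sources
qs  = np.array([0.01, 0.0275, 0.9625])
n_max, eps_dsp = 10, 1e-7
ns = np.arange(n_max + 1)

pois = np.array([[exp(-mu) * mu**n / factorial(n) for n in ns]
                 for mu in mus])
p_n  = qs @ pois                         # p_n = sum_i q^i Pr(n|mu^i)
q_ni = qs[:, None] * pois / p_n          # q_n^i
c_n  = 2 * np.sqrt(np.abs(np.log(eps_dsp / 24) + ns * np.log(0.5)))

y_n = np.where(ns == 0, y0, 1 - (1 - eta)**ns)   # yields without Eve
D = (qs[:, None] * pois * y_n).sum(axis=1) * K   # "observed" D^i

def cons(d):   # all entries must be nonnegative at a feasible point
    spread = (c_n * np.sqrt(np.clip(q_ni * (1 - q_ni) * d, 0, None))
              ).sum(axis=1)
    centre = q_ni @ d
    return np.concatenate([centre + spread - D, D - centre + spread])

res = minimize(lambda d: d[1], x0=y_n * p_n * K,   # minimize d_1
               bounds=[(0, K)] * (n_max + 1),
               constraints={"type": "ineq", "fun": cons},
               method="SLSQP")
print("d_1 lower estimate:", res.x[1])
\end{verbatim}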
\section{Conclusions}
We analyzed general photon-number splitting
attacks and pointed out that previous security analyses on decoy-state
protocols for QKD made a strong assumption on the attack. We provided
an estimation procedure that sets a lower bound on the
size of the secure, distributed key, with the corresponding confidence levels.
Our procedure requires executing a program to find the minimum values
of the number of detections due to empty and single-photon pulses, subject
to constraints that are determined by the results of the protocol and by the
desired security parameter.
It results in rigorous security guarantees even if Eve
correlates her attack according to the number of photons in the pulse.
We emphasize that our estimation procedure is not unique: any time
a confidence interval can be set as a function
of publicly available information for general attacks, an estimation procedure is possible.
In addition, our choice of confidence intervals and
$\epsilon_n$ is not essential and could be further optimized to improve
the size of the secure key.
\section{Acknowledgments}
We thank Jane Nordholt, Kevin McCabe, Raymond Newell, Charles Peterson, and Stephanie Wehner for discussions.
We thank the Laboratory Directed Research and Development (LDRD) Program
at Los Alamos National Laboratory for support.
\begin{appendix}
\section{Properties of $Z^{i,\mathcal{E}}$}
\label{App:AttackStatistics}
We let $X \in \{0,1,\ldots,K\}$ be a random variable and $f(X)$ its probability distribution.
The random variable $Y \in \{0,1,\ldots,K\}$ has the conditional distribution $g(Y|X)$.
The probability of $Y$ is $h(Y) = \sum_{X=0}^K g(Y|X) f(X)$.
Then, it is easy to show
\begin{align}
\sigma^2_Y = E[\sigma^2_{Y|X} ]+ \sigma^2_{E[Y|X]} \; ,
\end{align}
where
\begin{align}
\nonumber
\sigma^2_Y =\sum_{Y=0}^K h(Y) Y^2 - \left( \sum_{Y=0}^K h(Y) Y\right)^2 \;
\end{align}
is the variance of $Y$.
Also,
\begin{align}
\nonumber
E[Y|X] = \sum_{Y=0}^K g(Y|X) Y \;
\end{align}
is the expected value of $Y$ when given $X$,
\begin{align}
\nonumber
\sigma^2_{E[Y|X]} = \sum_{X=0}^K f(X) E[Y|X] - \left (\sum_{X=0}^K f(X) E[Y|X] \right)^2 \;
\end{align}
is the variance of $E[Y|X]$,
\begin{align}
\nonumber
\sigma^2_{Y|X} = \sum_{Y=0}^K g(Y|X) Y^2 - \left (\sum_{Y=0}^K g(Y|X) Y \right)^2 \;
\end{align}
is the variance of $Y$ when given $X$, and
\begin{align}
\nonumber
E[\sigma^2_{Y|X} ]= \sum_{X=0}^K f(X) \sigma^2_{Y|X} \;
\end{align}
is the expected value of such a variance.
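This decomposition can be sanity-checked numerically on a small two-layer binomial example (illustrative parameters):
\begin{verbatim}
# Check of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) for binomial layers:
# X ~ Bin(K, p) and, given X, Y ~ Bin(X, y).
import random

K, p, y, trials = 200, 0.3, 0.4, 50000
xs = [sum(random.random() < p for _ in range(K)) for _ in range(trials)]
ys = [sum(random.random() < y for _ in range(x)) for x in xs]

def var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / (len(v) - 1)

# E[Var(Y|X)] = y(1-y) E[X]  and  Var(E[Y|X]) = y^2 Var(X)
rhs = y * (1 - y) * p * K + y**2 * p * (1 - p) * K
print(var(ys), rhs)   # the two values agree up to sampling error
\end{verbatim}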
In the attack discussed in Sec.~\ref{sec:attack}, $K$ is fixed and the distribution of $k_n$ satisfies
\begin{align}
\nonumber
E[k_n]=p_n K \; ,\\
\nonumber
\sigma^2_{k_n} = p_n (1-p_n) K \; .
\end{align}
Next, $d^{\mathcal{E}}_n$ is chosen such that, when given $k_n$,
\begin{align}
\nonumber
E[d^{\mathcal{E}}_n|k_n] = y_n k_n \; ,\\
\nonumber
\sigma^2_{d_n^\mathcal{E}|k_n} = \tau^2 y_n (1-y_n) k_n \; .
\end{align}
It follows that
\begin{align}
\nonumber
E[\sigma^2_{d_n^\mathcal{E}|k_n} ] = \tau^2 y_n (1-y_n) p_n K \; , \\
\nonumber
\sigma^2_{E[d_n^\mathcal{E}|k_n]}= (y_n)^2 \sigma^2_{k_n}= (y_n)^2 p_n (1-p_n) K \; .
\end{align}
Then,
\begin{align}
\nonumber
\sigma^2_{d_n^\mathcal{E}} = \tau^2 y_n (1-y_n) p_n K + (y_n)^2 p_n (1-p_n) K \; .
\end{align}
When given $d^\mathcal{E}_n$, the distribution for $d_n^{i,\mathcal{E}}$ satisfies
\begin{align}
\nonumber
E[d_n^{i,\mathcal{E}}|d_n^{\mathcal{E}}] = q_n^i d_n^{\mathcal{E}} \; , \\
\nonumber
\sigma^2_{d_n^{i,\mathcal{E}}|d_n^{\mathcal{E}}} = q_n^i (1-q_n^i) d_n^{\mathcal{E}} \; .
\end{align}
Therefore,
\begin{align}
\nonumber
\sigma^2_{E[d_n^{i,\mathcal{E}}|d_n^{\mathcal{E}}]} = (q_n^i)^2 \sigma^2_{d_n^{\mathcal{E}}} \; ,
\\
\nonumber
E[\sigma^2_{d_n^{i,\mathcal{E}}|d_n^{\mathcal{E}}}]= q_n^i (1-q_n^i) y_n p_n K \; .
\end{align}
Also,
\begin{align}
\nonumber
\sigma^2_{d_n^{i,\mathcal{E}}} & =(q_n^i)^2 \sigma^2_{d_n^{\mathcal{E}}} + q_n^i (1-q_n^i) y_n p_n K \\
\nonumber
&= (q_n^i)^2 [ \tau^2 y_n (1-y_n) p_n K + (y_n)^2 p_n (1-p_n) K] + \\
\nonumber & + q_n^i (1-q_n^i) y_n p_n K \\
\nonumber
& =[ (\tau^2-1) q_n^i (1-y_n) + (1-q_n^i y_n p_n)] q_n^i y_n p_n K \; .
\end{align}
The first term on the {\em rhs} vanishes when $\tau=1$.
The second term is
\begin{align}
\nonumber
(1-q_n^i y_n p_n) q_n^i y_n p_n K = (1-q^i y_n p^{\mu^i}_n) q^i y_n p^{\mu^i}_n K\; ,
\end{align}
so that
\begin{align}
\nonumber
\sum_{n \ge 0} \sigma^2_{d_n^{i,\mathcal{E}}} = q^i Y(\mu^i) K
- (q^i)^2 K \sum_{n \ge 0} [y_n p_n^{\mu^i} ]^2 \; ,
\end{align}
for $\tau=1$.
Moreover, since $\sum_{n \ge 0} [y_n p_n^{\mu^i} ]^2 \ll Y(\mu^i) \ll 1$,
then
\begin{align}
\nonumber
\sum_{n \ge 0} \sigma^2_{d_n^{i,\mathcal{E}}} \approx q^i Y(\mu^i) (1-Y(\mu^i) )K \; ,
\end{align}
which shows that the case discussed in Sec.~\ref{sec:decoys}, i.e. the i.i.d. assumption,
corresponds to choosing $\tau=1$ in this case.
\section{Chernoff bound}
\label{app:chernoff}
Chernoff's bound~\cite{Ho63} sets a bound on the probabilities of ``rare'' events
as a function of the standard deviation of the corresponding distribution.
More precisely, we let $X_1,X_2,\dots ,X_n$ be a set of i.i.d. random variables that satisfy $|X_j| \le 1$
and define $X = \sum_j X_j$. A general version of Chernoff's bound implies
\begin{align}
\label{eq:chernofferror0}
\mathrm{Pr}[X > E[X] + c \sigma] \le \exp \{- c^2/4 \} \; ,
\end{align}
where $\sigma=n^{1/2} \sqrt{E[(X_j)^2] -( E[X_j])^2}$ is the standard deviation.
For the special case of the binomial distribution where $X_j=1$ with probability $a$
and $X_j=0$ otherwise,
\begin{align}
\nonumber
\sigma &= \sqrt{n a(1-a) } \; , \\
\nonumber
E[X] & = n a\; ,
\end{align}
and
\begin{align}
\nonumber
\mathrm{Pr} [X> k] &= I_{a} (k,n-k+1) \\
\label{eq:chernoffboundBINOMIAL}
& \le \exp \{- (k - n a )^2/(4 a (1-a) n) \} \; .
\end{align}
Here, $I_{a} (k,n-k+1) $ is the so-called regularized incomplete beta function.
To satisfy $\mathrm{Pr} [X> k] \le \epsilon$, it suffices to choose $c$ such that
\begin{align}
\label{eq:chernofferror}
|c| = 2 \sqrt{ |\log \epsilon|} \; .
\end{align}
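As an illustration (with values of $n$ and $a$ chosen only for this check), the bound in Eq.~\eqref{eq:chernoffboundBINOMIAL} can be compared with the exact binomial tail:
\begin{verbatim}
# Exact binomial tail Pr[X > k] versus the Chernoff-type bound
# exp{-(k - n a)^2 / (4 a (1 - a) n)}, Eq. (chernoffboundBINOMIAL).
from math import comb, exp

n, a = 1000, 0.05                 # illustrative values
def tail(k):                      # Pr[X > k] for X ~ Bin(n, a)
    return sum(comb(n, j) * a**j * (1 - a)**(n - j)
               for j in range(k + 1, n + 1))

for k in (60, 70, 80):
    bound = exp(-(k - n * a)**2 / (4 * a * (1 - a) * n))
    print(k, tail(k), bound)      # the exact tail never exceeds the bound
\end{verbatim}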
\section{Calculations of errors}
\label{app:errors}
If $\epsilon_n \le (\epsilon/12)(1/2)^n$, then
\begin{align}
\label{eq:firsterror}
\bar \epsilon & = \sum_{i}\sum_n \epsilon_n \\
\nonumber
& \le 3 (\epsilon/12) \cdot 2 = \epsilon/2 \; ,
\end{align}
where we considered that three sources $i$ are involved in the DSP.
Chernoff's bound for the binomial distribution [Eq.~\eqref{eq:chernoffboundBINOMIAL}]
implies that
\begin{align}
\nonumber
\epsilon_n & \le 2 \exp \left \{ - \frac {(u_n^i - q_n^i d_n^\mathcal{E} )^2} {4 q_n^i (1-q_n^i) d_n^\mathcal{E}} \right \} \; .
\end{align}
If we set
\begin{align}
\label{eq:maxDU}
u_n^i=\phi_{i,n}^{-1}(d_n^\mathcal{E}) = q_n^i d_n^\mathcal{E} + c_n \sqrt{ q_n^i (1-q_n^i) d_n^\mathcal{E}} \; ,
\end{align}
then
\begin{align}
\nonumber
\epsilon_n & \le 2 \exp\{-c_n^2/4\}\; ,
\end{align}
as in Eq.~\eqref{eq:errorbound3}.
Replacing $u_n^i$ by $d_n^{i,\mathcal{E}}$ and $d_n^\mathcal{E}$
by $\phi_{i,n}(d_n^{i,\mathcal{E}})$ in Eq.~\eqref{eq:maxDU},
and solving the resulting quadratic equation,
we obtain
\begin{align}
\nonumber
&\sqrt{\phi_{i,n}(d_n^{i,\mathcal{E}})} = \left[ - c_n \sqrt{ q_n^i (1-q_n^i)} +\right. \\
\nonumber
& \left. + \sqrt{c_n^2 q_n^i (1-q_n^i) + 4 q_n^i d_n^{i,\mathcal{E}}} \right] / (2q_n^i) \; .
\end{align}
That is,
\begin{align}
\nonumber
\phi_{i,n}(d_n^{i,\mathcal{E}}) &= \frac{d_n^{i,\mathcal{E}}}{q_n^i} + \frac{c_n^2 (1-q_n^i)}{2 q_n^i} - \\
\nonumber
&- \frac{c_n \sqrt{c_n^2 (1-q_n^i) [(1-q_n^i) + 4 d_n^{i,\mathcal{E}}] } }{2 q_n^i} \; ,
\end{align}
which yields Eq.~\eqref{eq:lowerbound}.
Changing $c_n \rightarrow -c_n$ provides the upper bound without changing $\epsilon_n$,
i.e., the confidence level.
\end{appendix}
\section{Introduction}
This work, inspired by \cite{krugelmobileagent,irarandomagents,brandesnetanalysis}, considers the numerical characterization of roving agents in ad hoc pervasive and trustworthy networks. Agents are autonomous, mobile, and intelligent software structures capable of playing a sensitive role in advanced monitoring, computation, and protection systems. We particularly address intrusion detection systems (IDS) \cite{krugelmobileagent}. They appear as a complementary means to the ordinary cryptographic protection tools of computers and networks. Such IDS use software-agent-based monitoring and data collection, watching the internal processes of a computer, registering LOG files of application software systems, and sniffing and recording communication protocols. Watching the behavior of the whole network, they are better suited to warn of approaching attacks and malfunctioning. Data mining agents (DMA) and data fusion agents (DFA) are examples of information integration tools in networks \cite{irarandomagents}. In large networks, especially when the structure is not predefined, as in wireless sensor networks (WSN) \cite{brandesnetanalysis}, it is natural to consider independent, randomly roving agents, requiring that in total they are able to collect enough information to mine the necessary knowledge about the intrusion. This framework is studied in \cite{irarandomagents}, which proves formulas for the number of DMA sufficient to monitor given-size areas of networks. The formulas obtained are complex and impractical because of their use of nested sums over several parameters. Our work aims to prove simple estimates for the same numerical characteristics of WSN.
\section{Roving Agents Model}
A DMA roams randomly around a network and acquires environmental information. It is lightweight, using the simplest mining algorithms. A DFA integrates the actions of a set of DMA. The DFA may act as an intrusion detection tool, and its power then depends on the information collected by the DMA in the network.
Suppose we are given a network $N$ of $n$ nodes $v_{1},\ldots,v_{n}$. Some fixed amount of information $\theta_{i}$ is allocated at node $v_{i}$. There are $k$ DMA $a_{1},\ldots,a_{k}$. Each agent \textit{\textbf{visits exactly $m$ different nodes}} and obtains the unique information content of each such node. The DMA pass all collected information to the DFA. Denote by $P_{k}(n,m,t)$ the probability that the DFA contains exactly $t$ information blocks of network nodes when $k$ agents randomly visit $m$ of the $n$ nodes each. The formula for $P_{k}(n,m,t)$ proven in \cite{irarandomagents} is:
\begin{align}
P_{k} &(n,m,t)= \nonumber \\ &\binom{n}{m}^{-(k-1)}\sum_{m_{2},m_{3},\ldots,m_{k-1}=0}^{m}\binom{m}{m_{2}} \binom{n-m}{m-m_{2}} \cdot \nonumber \\
\cdot &\binom{2m-m_{2}}{m_{3}}\binom{n-2m+m_{2}}{m-m_{3}}\ldots \nonumber \\
\ldots &\binom{(k-2)m-m_{2}-\ldots-m_{k-2}}{m_{k-1}} \cdot \nonumber \\
\cdot &\binom{n-(k-2)m+m_{2}+\ldots+m_{k-2}}{m-m_{k-1}}\cdot \nonumber \\
\cdot &\binom{(k-1)m-m_{2}-\ldots-m_{k-1}}{km-t-m_{2}-\ldots-m_{k-1}}\cdot \nonumber \\
\cdot &\binom{n-(k-1)m+m_{2}+\ldots+m_{k-1}}{t-(k-1)m+m_{2}+\ldots+m_{k-1}},k\geq4. \label{pbigformula}
\end{align}
Formulas for smaller $k$ given in \cite{irarandomagents} look similar to $(\ref{pbigformula})$. Of course, these formulas are unwieldy, and simplifications or approximations are of interest. For the same reason, \cite{irarandomagents} complements the formulas with computer simulations to understand the typical numbers of agents necessary to retrieve the required information in a network. Modifications of the ``\textit{exactly} $t$'' condition in the agent distribution scheme are also important to consider.
\section{Coverage Characterization of Roving Agents}
Suppose we are given the set $N=\{v_{1},\ldots,v_{n}\}$ of nodes, and let $S_{1},\ldots,S_{k}$ be $k$ arbitrary subsets of $N$, each of size $m\leq n$, visited by the $k$ agents, respectively. We consider a probability distribution scheme over $N$ and suppose that the $m$-subsets $S_{j}$ are equiprobable and independent in this scheme. Since there are $C_{n}^{m}$ $m$-subsets in total, the probability of each of them is equal to $1/C_{n}^{m}$. We are interested in the probabilistic characteristics of the union $\cup_{i=1}^{k}S_{i}$ and its size. In particular, what is the probability that the union of those subsets contains exactly $t$ elements?
\begin{align}
P_{k}(n,m,t)=Pr\left(\left|\bigcup_{i=1}^{k}S_{i}\right|=t\right). \label{pt}
\end{align}
To a collection of subsets $S_{1},\ldots,S_{k}$ of the node set $N$ corresponds a matrix $A^{k \times n}=\{a_{ij}\}$, where
\begin{align}
a_{ij} =
\begin{cases}
1 & \text{if } v_{j}\in S_{i} \\
0 & \text{otherwise}
\end{cases}.
\end{align}
As each $S_{i}$ contains exactly $m$ elements, each row of $A^{k \times n}$ contains $m$ $1$s and $n-m$ $0$s. If $\left|\cup_{i=1}^{k}S_{i}\right|=t$, then there are $t$ columns of $A$ that contain at least one $1$ and $n-t$ columns that contain no $1$. The number of $k \times n$ matrices with $m$ ones in each row and with exactly $n-t$ columns containing no $1$ is $C_{n}^{t} \cdot Q(k,m,t)$, where $Q(k,m,t)$ is the number of $k \times t$ matrices with $m$ ones in each row and at least one $1$ in each column.
Alternatively, let us consider the following schematic presentation of roving agents' distribution.
The left-column vertices in the scheme presented in Fig. \ref{fig:distributionk_m} contain all the arrangements $T_{1},T_{2},\ldots$ of $k$ agents roving over the $C_{n}^{m}$ $m$-node-subsets (ordered collections of $k$ $m$-node-subsets).
\begin{figure}[tb]
\begin{center}
\begin{minipage}[h]{\linewidth}
\includegraphics[width=.8\textwidth]{distributionk_m.eps}
\end{minipage}
\end{center}
\caption{Agent sets distribution in terms of trials and node sets. The left column contains the outcomes of $k$ by $m$ trials (each $T_{i}$ is an ordered collection of $k$ $m$-subsets). The right column contains all the subsets of the node set $N$.}
\label{fig:distributionk_m}
\end{figure}
From a combinatorial perspective, agents and nodes are distinguishable, but $m$-node-subsets are considered as usual sets: different elements and no ordering. The total number of arrangements is equal to $\left(C_{n}^{m}\right)^{k}$. Part of these arrangements cover exactly $t$ nodes; let these be the vertices $T_{1},T_{2},\ldots,T_{p}$. In this notation, $p$ is the unknown number that we want to compute. The right-column vertices correspond to all subsets of the node set $N$, and part of these sets are of size $t$. In principle, node subset sizes may vary from $0$ to $n$, but in our experiment they may take values from $m$ to $\min(km,n)$.
We draw an edge between an arrangement and the node subset covered by that arrangement. Each arrangement is incident to exactly one edge (and subset). Each $t$-subset appears in a number of arrangements; this number is common to all $t$-subsets and is given by $Q(k,m,t)$.
$Q(k,m,t)$ can be calculated by the inclusion-exclusion principle. We use the matrix model for arrangements. First, over a $k \times t$ matrix, we take the whole set of unconstrained arrangements as all matrices with $m$ $1$s in each row; then we remove from this all the arrangements where at least one column is filled entirely with $0$s (such matrices do not obey the conditions we require), then add back the arrangements with at least $2$ empty columns, etc. The formula representation of the related quantities is:
\begin{align}
Q(k,m,t) & = \nonumber \\
& \left(C_{t}^{m}\right)^{k}-C_{t}^{1} \cdot \left(C_{t-1}^{m}\right)^{k}+C_{t}^{2} \cdot \left(C_{t-2}^{m}\right)^{k}-\ldots \nonumber \\ &+\left(-1\right)^{t-m}C_{t}^{t-m} \cdot \left(C_{m}^{m}\right)^{k}= \nonumber \\
& \sum_{i=0}^{t-m}\left(-1\right)^{i}C_{t}^{i} \cdot \left(C_{t-i}^{m}\right)^{k}. \label{inclexcl}
\end{align}
We have proven
\begin{theorem}
\begin{align}
P_{k}(n,m,t)=\frac{C_{n}^{t} \cdot \sum_{i=0}^{t-m}\left(-1\right)^{i}C_{t}^{i} \cdot \left(C_{t-i}^{m}\right)^{k}}{\left(C_{n}^{m}\right)^{k}}. \label{pformula}
\end{align}
\end{theorem}
First of all, this is a real simplification of $(\ref{pbigformula})$. The formula obtained is still complex, but it might be approximated, and applying the Markov inequality may give asymptotic estimates of the $t$-subset probabilities \cite{medvedev}.
Another important characteristic, the mean value of subset size $t$, might be computed as:
\begin{align}
&\sum_{t=m}^{\min(km,n)}t \cdot P_{k}(n,m,t)= \nonumber \\
&=\sum_{t=m}^{\min(km,n)}\frac{t \cdot C_{n}^{t} \cdot \sum_{i=0}^{t-m}\left(-1\right)^{i}C_{t}^{i} \cdot \left(C_{t-i}^{m}\right)^{k}}{\left(C_{n}^{m}\right)^{k}}. \label{meanformula}
\end{align}
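Formula $(\ref{pformula})$ and the mean $(\ref{meanformula})$ are easy to validate numerically for small parameters. The following Python sketch (with illustrative values of $n$, $m$, $k$) checks that the probabilities $P_{k}(n,m,t)$ sum to $1$ and agree with a direct Monte Carlo simulation of $k$ random $m$-subsets:
\begin{verbatim}
# Check of Eq. (pformula) against direct simulation of k random
# m-subsets of an n-set, plus the mean of Eq. (meanformula).
from math import comb
import random

n, m, k = 10, 3, 4                               # illustrative values

def P(t):
    s = sum((-1)**i * comb(t, i) * comb(t - i, m)**k
            for i in range(t - m + 1))
    return comb(n, t) * s / comb(n, m)**k

ts = range(m, min(k * m, n) + 1)
assert abs(sum(P(t) for t in ts) - 1.0) < 1e-12  # probabilities sum to 1
mean = sum(t * P(t) for t in ts)                 # Eq. (meanformula)

trials = 200000
counts = {t: 0 for t in ts}
for _ in range(trials):
    union = set()
    for _ in range(k):
        union |= set(random.sample(range(n), m))
    counts[len(union)] += 1
for t in ts:
    print(t, P(t), counts[t] / trials)           # agree to ~1e-2
print("mean coverage:", mean)
\end{verbatim}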
\section{On Node Repetition Limitations in an Agent Roving Scheme}
Let us consider the scheme of random distribution of $m$ agents over the $n$ WSN nodes (here we do not consider $k$ agents but $m$ agents, and each individual agent visits exactly one node). Agents are dropped over the node set one by one, independently, and with equal probabilities for all nodes. After allocating all $m$ agents, we receive a collection of nodes visited by agents, possibly with multiple agents visiting the same node.
The total number of different allocations is $n^{m}$. Among these are the $1$-node allocations (all the agents visit the same node), whose number is $n$; the $2$-node allocations, numbering $C_{n}^{2}\left(2^{m}-2\right)$; and, the largest, the $m$-node allocations ($m$-sets), when the agents are distributed over all different nodes, numbering $n(n-1)\ldots (n-m+1)$. We are interested in the frequencies of allocation sizes when at least $2$ agents are allocated at the same node (sizes from $1$ to $m-1$) or, complementarily, in the share of allocations with all nodes different.
One of the classical approaches to determining the typical cases of distributions is to apply the Markov or Chebyshev inequality. In this way, we consider the scheme presented in Fig. \ref{fig:distributionkm}, similar to the one presented in Fig. \ref{fig:distributionk_m}, to compute the mean of the number of allocated nodes in the random distribution of $m$ agents over the $n$ WSN nodes.
Thus the number of right-hand-column vertices in the scheme, where each vertex is a triple consisting of a node and a pair of agents, is $nC_{m}^{2}$. Edges connect an allocation (left column) to each triple it realizes, i.e.\ to a node together with a pair of agents allocated to it (right column). We compute the mean number $M(v_{n,m})$ of edges incident to an allocation as
\begin{align}
M(v_{n,m})=\frac{nC_{m}^{2} \cdot n^{m-2}}{n^{m}}=\frac{C_{m}^{2}}{n}
\end{align}
Now apply the Markov inequality $Pr\left\{v_{n,m} \geq \epsilon \right\} \leq M(v_{n,m}) / \epsilon$. Taking $\epsilon=1$, we obtain $C_{m}^{2}/n$ as an upper estimate of the probability of repeated agents at nodes. If $C_{m}^{2}/n \rightarrow 0$ as $n,m \rightarrow \infty$, then almost all allocations place the agents at pairwise distinct nodes.
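The bound is easy to probe empirically. The following Monte Carlo sketch (our own, with arbitrary sample sizes chosen for illustration) compares the observed repetition frequency with the Markov estimate $C_{m}^{2}/n$:
\begin{verbatim}
import random

def repeat_freq(n, m, trials=100000):
    # fraction of trials in which some node receives two or more agents
    hits = 0
    for _ in range(trials):
        nodes = [random.randrange(n) for _ in range(m)]
        if len(set(nodes)) < m:
            hits += 1
    return hits / trials

n, m = 1000, 10
bound = m * (m - 1) / (2 * n)   # Markov bound C(m,2)/n
print(repeat_freq(n, m), "<=", bound)
\end{verbatim}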
\begin{figure}[tb]
\begin{center}
\begin{minipage}[h]{\linewidth}
\includegraphics[width=.8\textwidth]{distributionkm.eps}
\end{minipage}
\end{center}
\caption{Agents distribution on WSN node sets. Left column contains outcomes of $m$ trials (each $S_{i}$ is an ordered collection of $m$ nodes), right column contains triples consisting of a node and two different agents}
\label{fig:distributionkm}
\end{figure}
\section{Comparison of Agent Allocation Schemes}
In this section we define and consider two basic probability distributions that are tightly related to each other.
\begin{itemize}
\item The first distribution, $U_{n,k,\left\{m\right\}}$, is composed of $k$ independent consecutive allocations of $m$-node subsets over the WSN area of $n$ nodes. The outcomes of the trials are the $\left(C_{n}^{m}\right)^{k}$ ordered collections of $m$-subsets of WSN nodes. These collections may cover node subsets of all sizes from $m$ to $\min(km,n)$.
\item The second distribution scheme, $U_{n,k,m}$, which we want to consider and compare with the basic distribution $U_{n,k,\left\{m\right\}}$ above, consists of $k$ consecutive and independent stages; each stage allocates $m$ elements consecutively and independently over the WSN area of $n$ nodes. The outcomes of these trials are all $n^{km}$ ordered collections of nodes. These collections may cover node subsets of all sizes from $1$ to $\min(km,n)$.
\end{itemize}
In one individual stage of $U_{n,k,m}$ we have $m!$ orderings of a single $m$-subset allocated in one step of $U_{n,k,\left\{m\right\}}$. This must be taken into account when comparing the schemes $U_{n,k,\left\{m\right\}}$ and $U_{n,k,m}$. The difference can also be seen by comparing the one-stage outcomes of the two schemes. Represent $C_{n}^{m}$ of model $U_{n,k,\left\{m\right\}}$ as
\begin{align}
\frac{n!}{m!(n-m)!}=\frac{n(n-1) \ldots (n-m+1)}{m!}.
\end{align}
The numerator of the last ratio is the counterpart of $n^{m}$ in model $U_{n,k,m}$, and $m!$ is the coefficient mentioned above. Comparing $U_{n,k,\left\{m\right\}}$ and $U_{n,k,m}$, we first note that the outcomes of $U_{n,k,\left\{m\right\}}$ form part of the outcomes of $U_{n,k,m}$ and hence may have higher probabilities.
\begin{figure}[tb]
\begin{center}
\begin{minipage}[h]{\linewidth}
\includegraphics[width=.9\textwidth]{allocations.eps}
\end{minipage}
\end{center}
\caption{Allocations by $U_{n,k,\left\{m\right\}}$ and $U_{n,k,m}$}
\label{fig:allocations}
\end{figure}
Consider the probability $p_{j}$ of the event that in stage $j$ of $U_{n,k,m}$ all $m$ allocated elements are different. Then $P=p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$ is the probability that the allocated $m$ elements are different in all $k$ stages. Allocations in different stages may, of course, intersect. Probabilities of outcomes of $U_{n,k,\left\{m\right\}}$, multiplied by this probability, equal the probabilities under $U_{n,k,m}$ of part $B$ of the intersection of the outcome sets (Fig. \ref{fig:allocations}). $p_{j}$ was estimated in the previous section as a value tending to $1$ asymptotically. We now extend this proposition to the entire value $P$.
Formally, we use the property that the probability of a union of events is at most the sum of the event probabilities:
\begin{align}
Pr & \left\{(v_{n,m} \geq \epsilon | q=1) \vee \ldots \vee (v_{n,m} \geq \epsilon | q=k) \right\} \leq \nonumber \\
&\leq k \cdot Pr\left\{v_{n,m} \geq \epsilon \right\} \leq \frac{k \cdot M(v_{n,m})}{\epsilon}.
\end{align}
Then the final condition (upper estimate) sufficient for the repetition probability to tend to zero is $kC_{m}^{2}/n\rightarrow 0$ as $n,m,k\rightarrow \infty$. The resulting sufficient condition $km^{2}/n\rightarrow 0$ for the allocation of all $m$ agents to different nodes in all $k$ consecutive stages is naturally acceptable in WSN, which as a rule have a very large node set.
The final picture is as follows: allocations of part $B$ (Fig. \ref{fig:allocations}) appear in $U_{n,k,m}$ with probability $P$ tending to $1$; the relative probability distribution among the elements of $B$ is identical in $U_{n,k,\left\{m\right\}}$ and $U_{n,k,m}$; an event probability in model $U_{n,k,\left\{m\right\}}$ is not less than its probability in $U_{n,k,m}$ multiplied by $P$; and the probabilities of $t$-subset allocations under the model $U_{n,k,m}$ obey formulas similar to those for the model $U_{n,k,\left\{m\right\}}$ considered above.
If $R(k,m,t)$ denotes the number of $t$-node allocations in model $U_{n,k,m}$, then a formal representation of $R(k,m,t)$, similar to the formula for $Q(k,m,t)$ considered above, can be obtained by the same inclusion-exclusion method:
\begin{align}
R&(k,m,t)=t^{mk} - C_{t}^{1} \cdot (t-1)^{mk} + C_{t}^{2} \cdot (t-2)^{mk} - \ldots \nonumber \\
&\ldots + (-1)^{t-1}C_{t}^{t-1} \cdot (t-(t-1))^{mk} = \nonumber \\ &=\sum_{i=0}^{t-1}(-1)^{i}C_{t}^{i} \cdot (t-i)^{mk}. \label{rkmtformula}
\end{align}
On this basis we formulate
\begin{theorem}
If $kC_{m}^{2}/n\rightarrow 0$ as $n,m,k\rightarrow \infty$, then the probabilities of $t$-node allocations under the models $U_{n,k,\left\{m\right\}}$ and $U_{n,k,m}$ are related by
\begin{align}
\frac{C_{n}^{t}Q(k,m,t)}{(C_{n}^{m})^k} \cdot P \leq \frac{C_{n}^{t}R(k,m,t)}{n^{km}} \text{, with } P \rightarrow 1.
\end{align}
\end{theorem}
Finally, we note that $R(k,m,t)$ has an equivalent presentation in terms of Stirling numbers of the second kind (\cite{chelluri})
\begin{align}
S(N,K) = \frac{1}{K!} \sum_{j=0}^{K}(-1)^{j}C_{K}^{j} (K-j)^N.
\end{align}
Here we used the fact that allocating elements in $k$ consecutive and independent stages of $m$ elements each over the WSN area of $n$ nodes is equivalent to allocating $km$ elements over that area. Note the difference between the formulas for $Q(k,m,t)$ and $R(k,m,t)$: the summation limits. In the case of $R(k,m,t)$ we may formally add the zero term for $i=t$, and then we obtain
\begin{align}
R(k,m,t)=t!S(mk,t)
\end{align}
which is the final result of this paper.
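Both representations are easy to confirm numerically; the small Python check below (ours, for illustration) tests eq.~(\ref{rkmtformula}) against the identity $R(k,m,t)=t!\,S(mk,t)$:
\begin{verbatim}
from math import comb, factorial

def R(k, m, t):
    # surjection count: k*m labelled agents onto exactly t nodes
    return sum((-1)**i * comb(t, i) * (t - i)**(m * k) for i in range(t))

def S2(N, K):
    # Stirling number of the second kind, inclusion-exclusion form
    return sum((-1)**j * comb(K, j) * (K - j)**N
               for j in range(K + 1)) // factorial(K)

for k, m, t in [(2, 2, 3), (3, 2, 4), (2, 3, 5)]:
    assert R(k, m, t) == factorial(t) * S2(m * k, t)
\end{verbatim}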
\section{Conclusion}
WSN and software agent systems are an important application technique in many areas. Being algorithmically hard and complex at the model level, these systems require special economy regimes, which amounts to knowing the minimal requirements and the maximum effect when resources are limited. For the randomly roving agents model considered above, it is shown that the appearance probabilities can be presented equivalently in terms of combinatorial Stirling numbers; due to the known asymptotic formulas for these numbers (\cite{chelluri}), this allows the monitoring regime to be adapted in an optimal way.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
\input{sections/01_intro_mlsb}
\section{Approach}
\label{sec:approach}
\input{sections/02_approach_mlsb}
\section{Results}
\label{sec:results}
\input{sections/03_results_mlsb}
\section{Discussion}
\label{sec:outro}
\input{sections/04_outro_mlsb}
\begin{ack} We thank Mike Dunne for supporting this project. This work was supported by the U.S. Department of Energy, under DOE Contract No. DE-AC02-76SF00515, the LCLS seed grant ``From Atomic Models to Noisy Images and Back Again: End-to-end Differentiable SPI simulators for CryoEM and XFELs'' (PI: GW), the SLAC LDRD project ``AtomicSPI: Learning atomic scale biomolecular dynamics from single-particle imaging data'' (PI: FP, YN). N.M. acknowledges support from the National Institutes of Health (NIH), grant No. 1R01GM144965-02. We acknowledge the use of the computational resources at the SLAC Shared Scientific Data Facility (SDF).
\end{ack}
{\small
\bibliographystyle{plain}
\subsection{Image Formation Model}
\label{subsec:ifm}
In single particle cryo-EM, probing electrons interact with the electrostatic potential created by the molecules embedded in a thin layer of vitreous ice. Each molecular volume $V_i$ can be seen as a mapping from $\mathbb{R}^3$ to $\mathbb{R}$ and is indexed by $i\in\{1,\ldots,N\}$.
In the sample, each molecule is in an unknown orientation $R_i\in SO(3)\subset \mathbb{R}^{3\times 3}$, and an unknown conformation $z_i \in \mathbb{R}^d$. We assume that the volumes $\{V_i\}_{i=1,\ldots,N}$ are drawn independently from a probability distribution $\mathbb{P}_V$ supported on a low-dimensional subspace (the \textit{conformational space}). More specifically, we assume there exist $d\in\mathbb{N}$ and $\mathcal{V}:\mathbb{R}^3\times\mathbb{R}^d\to\mathbb{R}$ such that $\mathbb{P}_V$ is supported on the conformational space $\{\mathcal{V}(.,z), z\in\mathbb{R}^d\}$.
\paragraph{Elastic Network Models, Normal Mode Analysis and Coarse-Graining.} The structural space of a molecule is $3M$-dimensional, where $M \sim 10^2-10^5$ denotes its number of constitutive atoms. Efficient sampling of this very high-dimensional space to uncover the lower-dimensional subspace defined by a handful of collective variables (the \textit{conformational space} $\mathcal{V}$) has been an area of intense research for many decades \cite{schutte1999direct, levitt2014birth, noe2017collective}. A relatively cheap and efficient option consists in performing a harmonic approximation of the energy landscape around a chosen conformation~\cite{noguti1982collective,levitt1985protein} by representing the molecule with an elastic network model (ENM)~\cite{tirion1996large,atilgan2001anisotropy}. Denoting by $X = \{\mathbf{r}_j\}_{j=1,M}$ the atomic Cartesian coordinates of the molecule, the potential energy $E$ of the ENM is further approximated by a second-order Taylor expansion around a reference conformation $X^{(0)}$, with $H$ the Hessian matrix of $E$ at $X^{(0)}$:
\begin{equation}
E(X) = \frac{1}{2}(X-X^{(0)})^{T} H (X-X^{(0)})~\mbox{ with }~H_{jk} \propto (\mathbf{r}_j^{(0)} - \mathbf{r}_k^{(0)})(\mathbf{r}_j^{(0)} - \mathbf{r}_k^{(0)})^{T}.
\end{equation}
Solving the equations of motion in $E$ leads to the eigenvalue problem $HU = \Lambda U$ where each of the $3M$ eigenpairs $(\lambda_l, U_l)$ defines a \textit{normal mode} of frequency $\sqrt{\lambda_l}$ that deforms the reference conformation along $U_l = \{\mathbf{u}_j^{(l)}\}_{j=1,M}$. Pragmatically, the normal modes provide a linear basis of functions that spans the whole conformational space. Because the variance of each mode scales with the inverse of its frequency, it is customary to perform a low-rank approximation of $H$ by only considering the $d$ lowest frequency normal modes $\tilde{U}_d = \{U_1, ..., U_d\}$ - a choice empirically justified as they often encode relevant functional motions~\cite{tama2001conformational}. Formally, for rank $d$, the reference atomic model can be deformed as follows:
\begin{equation}
\label{eq:lindef}
X(z) = X^{(0)} + \tilde{U}_d.z^{T}.
\end{equation}
In practice, the eigenvalue problem is challenging to solve for large $M$ as the Hessian is a $3M\times 3M$ matrix. Because the lowest-frequency modes usually exhibit strong collectivity~\cite{tama2001conformational}, $H$ is typically computed for a subset of atoms, the \textit{coarse-grained (CG) model}. Its eigenvectors can then be interpolated onto the remaining set of atoms and orthonormalized to yield $\tilde{U}_d$.
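As a minimal numerical illustration of the deformation in Eq.~\ref{eq:lindef} (a toy sketch with random stand-ins for the reference coordinates and the interpolated modes, not our production pipeline):
\begin{verbatim}
import numpy as np

M, d = 100, 16                       # illustrative sizes
X0 = np.random.randn(3 * M)          # flattened reference coordinates X^(0)
U, _ = np.linalg.qr(np.random.randn(3 * M, d))  # stand-in orthonormal modes
z = np.random.randn(d)               # conformational coordinates

X = X0 + U @ z                       # linear deformation along the modes
\end{verbatim}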
\paragraph{Isolated Atom Superposition Approximation, Weak-phase Approximation and Projection Assumption.} The electrostatic potential of the whole molecule can be approximated as the superposition of the electron form factors of its constitutive atoms~\cite{VULOVIC201319}. Each form factor is defined as the sum of 5 Gaussians --- their amplitude $a$ and width $b$ determined by the atomic type \cite{colliex2006electron}.
For typical cryo-EM experiments, the scattered wave can be linearized and simplifies to the integral of the rotated volume along the path of the probing electron \cite{vulovic2014use}, resulting in the following mapping from $\mathbb{R}^2$ to $\mathbb{R}$ where $x_j^{(i)}$ and $y_j^{(i)}$ are the planar coordinates of atom $j$ in volume $i$ derived from $\mathbf{r}_j^{(i)} = R_i.(\mathbf{r}_j^{(0)} + \sum_{l=1}^{d} z_l^{(i)}\mathbf{u}_j^{(l)})^T$ :
\begin{equation}
\label{eq:proj}
I_i(x,y) = \int_t V_i(x,y,t)dt = 4\pi \sum_{j=1}^{M}\sum_{k=1}^{5}\frac{a_{jk}}{b_{jk}}\ \exp{-\frac{4\pi^{2}}{b_{jk}}\big((x-x_j^{(i)})^{2} + (y-y_j^{(i)})^{2}\big)}.
\end{equation}
The interaction between the beam and the microscope's lens is modeled by the Point Spread Function (PSF) $P_i$. Imperfect centering of the molecule in the image is characterized by small translations $\mathbf{t}_{i}\in\mathbb{R}^2$. Finally, taking into account signal arising from the vitreous ice in which the molecules are embedded as well as the non-idealities of the lens and the detector, each image $\hat{Y}_i$ can be modelled as~\cite{VULOVIC201319,Scheres2012RELION}
\begin{eqnarray}
\hat{Y}_i = T_{\mathbf{t}_{i}} * P_i * I_i + Z_i ~~~\Leftrightarrow~~~ \hat{\mathcal{Y}}_i = \mathcal{T}_{\mathbf{t}_{i}} \odot C_i \odot \mathcal{I}_i + \mathcal{Z}_i
\label{eq:ifm}
\end{eqnarray}
where $*$ is the convolution operator, $\odot$ indicates element-wise multiplication, $T_\mathbf{t}$ is the $\mathbf{t}$-translation kernel and $Z_i$ white Gaussian noise on $\mathbb{R}^2$. In Fourier space, we note $\hat{\mathcal{Y}}_i$ the Fourier transform of $\hat{Y}_i$, $\mathcal{I}_i$ the complex Fourier transform of the ideal image $I_i$, $\mathcal{T}_\mathbf{t}$ the $\mathbf{t}$-translation operator or phase-shift in Fourier space, $C_i$ is the Contrast Transfer Function (CTF), Fourier transform of the PSF, and $\mathcal{Z}_i$ the Fourier transform of $Z_i$.
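For concreteness, the following NumPy sketch renders the Fourier-space form of Eq.~\ref{eq:ifm}; the CTF, translation, and noise values are toy stand-ins chosen by us for illustration:
\begin{verbatim}
import numpy as np

D, pixel = 64, 0.8
I = np.random.rand(D, D)                 # placeholder for the ideal image I_i
f = np.fft.fftfreq(D, d=pixel)
kx, ky = np.meshgrid(f, f, indexing="ij")

ctf = np.cos(np.pi * 1.0e4 * (kx**2 + ky**2))          # toy CTF C_i
t = np.array([1.5, -0.7])                              # translation (Angstrom)
shift = np.exp(-2j * np.pi * (kx * t[0] + ky * t[1]))  # phase shift T_t

Y = np.fft.ifft2(shift * ctf * np.fft.fft2(I)).real
Y += 0.1 * np.random.randn(D, D)                       # additive noise Z_i
\end{verbatim}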
\subsection{Overview of the Architecture}
\label{subsec:ovw}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{figures/placeholder.png}
\caption{\textbf{(top) Autoencoder architecture}. The encoder, parameterized by $\psi$, maps an image $Y_i$ to the shift $\mathbf{t}_{i}$ and conformational $z_i$ variables. The decoder is a differentiable simulator that generates noise-free images $\hat{Y}_i$ using the image formation model defined in Eq.~\ref{eq:ifm} from a reference atomic model $X^{(0)}$, using the variables learned by the encoder. The input and output images are compared with the L2 loss, which is batch-minimized for $\psi$. \textbf{(bottom) Results}. GT and predicted distributions over the \textit{uniform} (left), \textit{continuous} (center) and \textit{discontinuous} (right) datasets (see Section. \ref{sec:results}) for $d=1$ (top row) and $d=16$ (bottom row). The explained variance (EV) of PC1 is indicated for $d=16$. The GT (in gray) and $d=16$ predicted atomic models (rainbow-colored) corresponding to the local maxima in the \textit{discontinuous} dataset are displayed together with their RMSD.}
\label{fig:architecture}
\end{figure}
Fig.~\ref{fig:architecture} summarizes the proposed encoder-decoder architecture. Images $Y_i$ are fed into an encoder, parameterized by $\psi$, that predicts a shift $\mathbf{t}_{i}$ and a conformational variable $z_i$. These estimated parameters are then applied to a reference structure as in Eq.~\ref{eq:lindef}. The resulting deformed atomic model is rotated using a known pose $R_i$, and Eq.~\ref{eq:proj} is evaluated at the desired pixel positions. Based on the estimated translation $\mathbf{t}_{i}$ and given CTF parameters $C_i$, the rest of the image formation model described in Eq.~\ref{eq:ifm} is simulated with the physics-aware decoder to obtain $\hat{Y}_i$, a noise-free estimation of $Y_i$. Pairs of measured and reconstructed images are compared using the L2 loss, and gradients are backpropagated throughout the differentiable model in order to optimize the encoder.
\paragraph{Discriminative model.} The encoder acts as a discriminative model by mapping images to estimates of $\mathbf{t}_{i}$ and $z_i$. It is structured sequentially from the following components: a Convolutional Neural Network (CNN) containing $5$ blocks, each consisting of $2$ \emph{Conv3x3} layers followed by batch normalization and average pooling by a factor of $2$, with the filter numbers for the convolutional layers in the blocks set to [32, 64, 128, 256, 512] (the architecture of the CNN is inspired by the first layers of VGG16~\cite{vgg16}, known to perform well on visual tasks); a \emph{translation} Multi-Layer Perceptron (MLP) with $2$ hidden layers of sizes [512, 256] that outputs the shift $\mathbf{t}_{i}$ of dimension $2$; and a \emph{conformation} MLP with $2$ hidden layers of sizes [512, 256] that outputs $z_i$ of dimension $d$, the number of normal modes used.
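A condensed PyTorch sketch of this encoder is given below; it is our own approximate rendering of the layer sizes stated above, not the exact implementation:
\begin{verbatim}
import torch.nn as nn

def block(c_in, c_out):
    # two Conv3x3 layers, batch norm, then average pooling by 2
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        nn.BatchNorm2d(c_out), nn.AvgPool2d(2))

def mlp(out_dim):
    # two hidden layers of sizes [512, 256]
    return nn.Sequential(nn.LazyLinear(512), nn.ReLU(),
                         nn.Linear(512, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

class Encoder(nn.Module):
    def __init__(self, d):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 512]
        self.cnn = nn.Sequential(*[block(a, b)
                                   for a, b in zip(chans, chans[1:])])
        self.head_t = mlp(2)   # translation t_i
        self.head_z = mlp(d)   # conformation z_i

    def forward(self, y):
        h = self.cnn(y).flatten(1)
        return self.head_t(h), self.head_z(h)
\end{verbatim}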
\paragraph{Generative Model.} The generative model implemented in the decoder is a simulation of the image formation model (Section~\ref{subsec:ifm}).
\textit{Pre-processing}. A reference atomic model $X^{(0)}$ is fed to ProDy~\cite{bakan2011prody} to pre-compute the Hessian matrix $H_{CG}$ of the CG model. $H_{CG}$ is diagonalized on the GPU~\cite{pytorch} and its first $d$ eigenvectors are interpolated on the remaining set of atoms with PyKeOps~\cite{pykeops} to yield $U_d$. The electron form factors~\cite{colliex2006electron} are retrieved with Gemmi~\cite{Wojdyr2022}. For each image $i$, the rotation matrix and CTF are provided either by the simulator or by an external software like RELION~\cite{Scheres2012RELION} or cryoSPARC~\cite{Punjani2017CryoSPARC:Determination}.
\textit{Training}. The forward pass is differentiable with respect to $z_i$ and $\mathbf{t}_{i}$. For each image $i$, $X^{(0)}$, $\tilde{U}_d$ and $z_i$ are combined according to Eq.\ref{eq:lindef}. The resulting atomic coordinates are rotated by $R_i$ \emph{via} a matrix multiplication. A fixed grid of $D^2$ coordinates on the x-y plane in real space is used to evaluate Eq.\ref{eq:proj}. The resulting image is Fourier transformed and element-wise multiplied by $\hat{T}_{\mathbf{t}_{i}}$ and $C_i$. A final inverse Fourier transform step is carried out.
\paragraph{Training Procedure.}
For all described datasets, the data was split into training and test set with a 90-10 ratio, resulting in 45,000 images in the training set and 5,000 images in the test set. Training was carried out with the following parameters: we use the Adam optimizer~\cite{Kingma2015Adam:Optimization} with a learning rate of $10^{-3}$ and a minibatch size of 256 images. Each training run was stopped after 200 epochs, each epoch taking approximately 10 minutes on 4 Nvidia Tesla A100 GPUs. Inference results below are reported on the test set.
\subsection{Experimental setup}
We generated a 50-frame trajectory between the atomic models of the open (4AKE~\cite{muller1996adenylate}) and closed (1AKE~\cite{muller1992structure}) forms of Adenylate Kinase (AK) using the \textit{morph} tool in PyMOL~\cite{PyMOL}. For each frame, 10,000 images were generated using our simulator, each image is $192\times192$ with a $0.8$\r{A} pixel size. Particle poses were randomly sampled from a uniform distribution on $SO(3)$, and CTF defocus values were randomly sampled from a Log-normal distribution with $\mu=\SI{1}{\micro\metre}$ and $\sigma=\SI{0.3}{\micro\metre}$. Gaussian noise was added to each image with a Signal-to-Noise Ratio (SNR) set at $-20$ dB. A variable number of images for each frame were then combined to generate 3 datasets, each consisting of 50,000 images, displaying different distributions (following the approach of \cite{Zhong2019ReconstructingModels}): (i) \textit{uniform} --- approximately the same number of images across frames, (ii) \textit{continuous} --- 3 overlapping Gaussians, and (iii) \textit{discontinuous} --- 3 non-overlapping Gaussians.
We tested our proposed architecture on each dataset, using AK in the open form as $X^{(0)}$. For each image $i$, $R_i$ and $C_i$ are given by the simulation, while $\mathbf{t}_{i}$ and $z_i$ are estimated by the encoder $\psi$. We ran three simulations in each case, with $d=0$ (no deformation), $d=1$ and $d=16$. For $d=1$, we compare in Fig.\ref{fig:architecture}-bottom the distribution of $z_i$ to the ground-truth (GT) distribution. For $d=16$, we first perform principal component analysis (PCA) on $z_i$ and display the first component (PC1) against GT. In every case, PC1 explained more than 90\% of the variance in $z_i$.
While the qualitative features of the GT distribution are already recaptured for $d=1$, the predicted distribution deteriorates as the GT deviates from the reference model. This suggests that the morphing trajectory between the open and closed forms of AK cannot be fully captured with the first normal mode of open AK. On the other hand, we see a drastic improvement for $d=16$, which suggests not only that the morphing trajectory is mostly captured with the first 16 modes but also that their amplitudes were correctly estimated by the encoder.
To quantitatively measure the accuracy of the prediction, we selected the 3 predicted atomic models corresponding to the 3 local maxima in the predicted \textit{discontinuous} distribution for $d=16$ and compared them to their GT counterparts by measuring the root-mean-square deviation (RMSD) for each pair. As shown in Fig.\ref{fig:architecture}, the RMSD increases as the maximum recedes from the reference atomic model but stays below 1.5 \r{A}. For reference, the RMSD between the open and closed forms is approximately 7 \r{A}.
\section{Introduction}
Understanding color confinement
in the framework of QCD is one of the most challenging
non-perturbative problems of strong interaction.
Center vortices offer an appealing picture of confinement
\cite{'tHooft:1978hy,Mack:1979rq,Greensite:2003bk}.
The vortex picture of the QCD vacuum was proposed as early as 1978
and has recently received support from lattice calculations.
After center projection in the so-called maximal center gauge one
still obtains the full string tension \cite{DelDebbio:1997mh}.
On the other hand, when center vortices are removed from the
Yang-Mills ensemble the string tension vanishes \cite{deForcrand:1999ms}.
In addition, the signals of confinement disappear from QCD propagators in
both the Landau gauge \cite{Gattnar:2004bf} and the Coulomb gauge
\cite{Greensite:2004ke} when the center vortices are removed.
Furthermore, the deconfinement phase transition in the vortex picture
can be understood as a depercolation transition \cite{Engelhardt:1999fd}
and the topological charge can be understood in terms of intersections
\cite{Engelhardt:1999xw} and writhing \cite{Reinhardt:2001kf} of the
center vortex sheets.
In refs.~\cite{kovacs:00,deForcrand:2001nd} the free energy of center vortices
has been investigated in lattice calculations exploiting the fact that
on the torus a center vortex can be induced by twisted boundary
conditions. In ref.~\cite{kovacs:00} it was shown that
in the confinement regime the free energy of a (thick) center vortex
vanishes. In ref.~\cite{deForcrand:2001nd} the 't~Hooft loop
\cite{'tHooft:1977hy}, which generates a center vortex
\cite{Reinhardt:2002mb}, has been used to calculate the free energy
in the sectors of $SU(2)$ Yang-Mills theory with fixed electric flux
as a function of temperature and spatial volume.
In the present paper we will carry out an analogous
investigation in continuum Yang-Mills theory. We will introduce the
center vortices on the torus by means of twisted boundary conditions.
These boundary conditions are then realized by Abelian background
fields. We calculate the free energy of such center vortex fields as
function of the temperature in one-loop approximation\footnote{In
this context let us also mention that the energy density of an
infinitely thin static magnetic center vortex in continuum $SU(2)$
Yang-Mills theory has been evaluated in \cite{Lange:2003ti} in the
Schr\"odinger picture to one loop order.}.
The paper is organized as follows: In the next section we summarize
the relevant features of gauge fields on the torus satisfying
twisted boundary conditions. In particular we focus on the
derivation of the free energy of configurations with definite
electric flux and its relation to the free energy of a static
quark-antiquark pair. In section \ref{vortex_field} we introduce
Abelian configurations fulfilling twisted boundary conditions.
Such configurations can be interpreted as electric or magnetic
center vortices. Then the operator of fluctuations around these gauge
fields is determined.
The spectrum of the fluctuations around such
background gauge fields with {\em non-zero} instanton number has been
calculated already in ref.~\cite{vbaal:84} and in general it has
negative modes. However, positivity of the fluctuation operator
can be achieved by restriction to {\em zero} instanton number and by a
suitable choice of some free parameters (moduli) of the background
gauge fields. This fact, which was pointed out in \cite{vbaal:96} for gauge
fields on the three torus, can be trivially generalized to the four
torus for gauge fields with zero Pontryagin index. Then the requirement
of a strictly positive fluctuation operator constrains
the range of temperatures. For the cases
of purely spatial and purely temporal twists (where the Pontryagin
index vanishes) the spectrum of the fluctuation operator is calculated.
Finally, in section \ref{calc_free_energy}, the proper time
regularization is used to calculate the free energy of magnetic and
electric vortices to one-loop order. Our results are then confronted
with those of the lattice calculations of ref.~\cite{deForcrand:2001nd}.
\section{QCD on the hypertorus\label{torusqcd}}
To fix our notation we summarize the relevant ingredients of gauge fields
on the torus $\mathbbm{T}^4$. We define the four torus $\mathbbm{T}^4$ as $\mathbbm{R}^4$
modulo the lattice
\begin{align}
\label{eq:02_0042}
\mathscr{L} = \left\{x \in {\mathbbm{R}}^4 | x = n_\mu L^{(\mu)}; n =
(n_\mu), n \in
{\mathbbm Z}^4 \right\} \, ,
\end{align}
where $L^{(\mu)} := L_\mu e_\mu\, , \, \mu = 0,1,2,3$
denote the vectors spanning the lattice and
$\{e_\mu| (e_\mu)_\nu = \delta_{\mu\nu}\}$ is the
canonical basis of $\mathbbm{R}^4$. Throughout this paper we choose
$L_1=L_2=L_3=L$ and $L_0=1/T$ with $T$ being the temperature.
Fields on $\mathbbm{T}^4$ are considered then as fields on $\mathbbm{R}^4$ fulfilling
appropriate boundary conditions.
Local gauge invariants are periodic with respect to a shift by an
arbitrary lattice vector $L^{(\mu)}$, whereas the gauge potential
is (in general) only periodic up to a gauge transformation, i.e.
\begin{align}
\label{eq:02_0045}
A_\lambda(x + L^{(\mu)}) =A_\lambda^{\Omega_\mu}(x)
= \Omega_\mu(x)A_\lambda(x)\Omega_\mu(x)^{\dagger} +
\Omega_\mu(x)\dau{\lambda}\Omega_\mu(x)^{\dagger}.
\end{align}
Here $\Omega_\mu (x)$ denotes the transition function in $\mu$-direction.
Transition functions in different directions have to respect the cocycle
condition
\begin{align}
\label{eq:02_0047}
\Omega_\mu(x + L^{(\nu)})\Omega_\nu(x) =
Z_{\mu\nu}\Omega_\nu(x + L^{(\mu)})\Omega_\mu(x) \, ,
\end{align}
where
\begin{align}
\label{eq:02_0048}
Z_{\mu\nu} = \varexp{\frac{2\pi i}{N} n_{\mu\nu}}\cdot{\mathbbm 1}_N,
\hspace{1cm} n_{\mu\nu} \in {\mathbb Z}(\mathrm{mod}~ N)
\end{align}
is an element of the center of the gauge group $SU(N)$. Here we have
introduced the antisymmetric \emph{twist-tensors} $n_{\mu\nu}$ and
${\mathbbm 1}_N$ is the $N \times N$ unit matrix. The cocycle condition (\ref{eq:02_0047})
expresses compatibility of two successive translations in the
$(\mu,\nu)$-plane, and a non-trivial twist $Z_{\mu\nu}$ induces
a center vortex in this plane. The possible twists are divided into
two groups. First, the spatial twists $n_{i j} , (i,j = 1,2,3)$ which
can be interpreted as the components of a 3-vector
$\zvec{m} = (n_{23},n_{31},n_{12})$ which represents the direction of the
magnetic flux. These twists induce magnetic center vortices. Second,
temporal twists $n_{0 i}$, which again can be interpreted as the
components of a 3-vector $\zvec{k} = (n_{01},n_{02},n_{03})$. These
twists induce ``fat'' electric center vortices whose flux is
homogeneously distributed over the whole torus. \\
\noindent
The Pontryagin index $P_N$ of a gauge potential is given by
\begin{align}
\label{eq:02_0049}
P_N = - \rez{16\pi^2}\int_{\mathbbm{T}^4}\mathrm{d}^4 x \,\,\tr{G_{\mu\nu}
G_{\mu\nu}^*} \, , \,
\end{align}
where
\begin{align}
\label{eq:02_0008}
G_{\mu\nu} = \dau\mu A_\nu - \dau\nu A_\mu +
\commute{A_\mu}{A_\nu} = G^a_{\mu\nu}T_a
\end{align}
is the field strength, $G_{\mu\nu}^* =
\rez{2}\epsilon_{\mu\nu\alpha\beta}G_{\alpha\beta}$ its dual and $T_a$
are the generators of the Lie algebra of the gauge group normalized such
that $\tr{T_a T_b} = -\frac{1}{2} \delta_{a b}$. The
Pontryagin index is fully determined by the transition functions
$\Omega_\mu$. In \cite{vbaal:82} it has been shown that the Pontryagin
index is generally a fractional number
\begin{align}
\label{eq:02_0049a}
P_N = \nu + \left(\frac{N-1}{N}\right)\mathrm{Pf}(n_{\mu\nu})
\, , \, \nu \in \mathbbm{Z} \, ,
\end{align}
where $\nu$ is the integer valued topological charge (instanton number)
and $\mathrm{Pf}(n_{\mu\nu}) = \rez{8} \epsilon_{\mu\nu\alpha\beta}n_{\mu\nu}n_{\alpha\beta}$
is the Pfaffian of the twist tensor $n_{\mu\nu}$. The fractional
part of $P_N$ is obviously due to $n_{\mu \nu}$.
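In terms of the twist vectors introduced above, the Pfaffian is easily evaluated from its definition,
\begin{align}
\mathrm{Pf}(n_{\mu\nu}) = n_{01}n_{23} + n_{02}n_{31} + n_{03}n_{12}
= \zvec{k}\cdot\zvec{m} \, ,
\end{align}
so that the fractional part of $P_N$ is present only if both temporal and spatial twists are switched on; for purely spatial ($\zvec{k}=0$) or purely temporal ($\zvec{m}=0$) twist the Pontryagin index is an integer.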
Under a gauge transformation $U$ the transition functions $\Omega_\mu$
transform as
\begin{align}
\label{eq:02_0052}
\Omega^{U}_\mu(x) = U(x + L^{(\mu)})\Omega_\mu(x)U^{-1}(x).
\end{align}
The twist tensor $n_{\mu\nu}$ is invariant under gauge transformations
of the transition functions.
\subsection{Flux}
\label{flux}
In this subsection we will derive the free energy of a
pure $SU(N)$ gauge field configuration on $\mathbbm{T}^4$ with given
electric and magnetic flux $(\zvec{e}, \zvec{m})$. Thereby we will
mainly follow the ideas outlined by 't~Hooft \cite{thooft:79}.
The partition function of interest is given by the trace over physical
states with well defined electric and magnetic flux $(\zvec{e}, \zvec{m})$.
This partition function can be formally expressed as
\begin{align}
\label{eq:02_0064}
\mathscr{Z}(\zvec{e}, \zvec{m},L_\mu) :=
\mathcal{N} \btr{{\mathsf
P}(\zvec{e},\zvec{m})e^{-\beta H}} \, ,
\end{align}
where ${\mathsf P}(\zvec{e},\zvec{m})$ denotes the projector on definite
electric and magnetic flux, $\beta = L_0$ is the inverse temperature
and $\mathcal{N}$ is a
normalization constant chosen such that $\mathscr{Z}(\zvec{e}=0, \zvec{m},L_\mu)=1$.
To obtain this projector we will first identify the physical
states with given flux $(\zvec{e}, \zvec{m})$. To this end we consider the
canonically quantized theory (in Weyl gauge $A_0 = 0$).
In this gauge the configuration space consists of spatial gauge potentials
$A_i \, , \, i= 1,2,3$ on the
torus $\mathbbm{T}^3$ satisfying the twisted boundary conditions
(\ref{eq:02_0045}) with spatial twist vector $\zvec{m}$. This vector $\zvec{m}$
represents the magnetic flux of the configuration under consideration.
Since electric field
operators do not commute with the gauge potential, it is harder
to identify states with fixed electric flux $\zvec{e}$.
Physical states have to be invariant under (time-independent) gauge
transformations $\Omega(\zvec{x})$. The gauge group of pure $SU(N)$
Yang-Mills theory is $SU(N)/Z(N)$, since the gauge potentials
live in the adjoint representation. Therefore, gauge
transformations take values in $SU(N)/Z(N)$ rather than in $SU(N)$,
i.e.~a gauge transformation $\Omega(\zvec{x})$ need not be periodic on
$\mathbbm{T}^3$ - but can change across the torus by a center element:
\begin{align}
\label{eq:02_0053}
\Omega[\zvec{k}](\zvec{x + L^{(i)}}) = Z_i\Omega[\zvec{k}](\zvec{x}),\hspace{1cm} i =
1,2,3,\hspace{1cm} Z_i := e^{\frac{2\pi i k_i}{N}}{\mathbbm 1}_N \in Z(N) \, .
\end{align}
Correspondingly, since $\pi_1(SU(N)/Z(N)) = Z(N)$, on $\mathbbm{T}^3$
there are $N^3$ homotopically inequivalent classes of gauge transformations.
The different classes are labeled by a vector $\zvec{k} =
(k_1,k_2,k_3) \in {\mathbbm{Z}}^3(\!\!\!\mod N)$ and transformations
belonging to a class $\zvec{k}$ are denoted by $\Omega[\zvec{k}](\zvec{x})$.
One can
choose a representative $\tilde \Omega[\zvec{k}](\zvec{x})$ from each class
which takes values only in the Cartan subgroup of $SU(N)$ such that
\begin{align}
\label{eq:02_0054A}
\tilde \Omega[\zvec{k}_1] \tilde \Omega[\zvec{k}_2] =
\tilde \Omega[(\zvec{k}_1+\zvec{k}_2)\!\!\!\mod N] \, .
\end{align}
A general gauge transformation of class $\zvec{k}$ is then given by
the product of $\tilde \Omega[\zvec{k}](\zvec{x})$ and a topologically
trivial (i.e. truly periodic) gauge transformation (belonging to
class $\zvec{k}=0$). Physical states $\ket{\psi}$
have to be gauge invariant under topologically trivial (or small) gauge
transformations, but they can pick up a phase under a topologically
non-trivial gauge transformation.
Let $\hat\Omega[\zvec{k}](\zvec{x})$ be the unitary operator which, when
acting on the Hilbert space of physical states, generates the gauge
transformation $\tilde \Omega[\zvec{k}](\zvec{x})$. Obviously, the operators
$\hat\Omega[\zvec{k}](\zvec{x})$ with different $\zvec{k}$ commute with
each other and with the Hamiltonian. The eigenstates of
$\hat\Omega[\zvec{k}](\zvec{x})$ possess well defined electric flux
as can be seen as follows. Since the $\hat\Omega[\zvec{k}](\zvec{x})$
are unitary their eigenvalues are pure phase
\begin{align}
\label{eq:02_0054}
\hat\Omega[\zvec{k}]\ket{\psi} = e^{i\sigma(\zvec{k})}\ket{\psi},\hspace{1cm}
\sigma(\zvec{k})\in{\mathbbm R} \, .
\end{align}
and from eq.~(\ref{eq:02_0054A}) follows
that the phase factor $\sigma$ is of the form
\begin{align}
\label{eq:02_0055}
\sigma(\zvec{k}) = \frac{2\pi}{N}\zvec{e}\cdot\zvec{k}
\end{align}
with some vector $\zvec{e}\in{\mathbbm{Z}}^3(\!\!\!\!\!\mod N)$,
i.e. we can label the eigenstate $\ket{\psi}$ by the vector
$\zvec{e}$, i.e.~$\ket{\psi_{\zvec{e}}}$. We will now identify
the vector $\zvec{e}$ as the
electric flux. For this purpose we notice that the Wilson loop operator
\begin{align}
\label{eq:02_0056}
\hat W(C) =
\rez{N}\tr{ \mathcal{P} \varexp{-\oint_{C}A_\mu\mathrm{d} x^\mu}}
\end{align}
is the creation operator of electric flux lines \cite{Polyakov:1975rs}.
Let $C_i,~ i = 1,2,3$ denote a path linearly interpolating between
points $\zvec{x}$ and $\zvec{x} + L^{(i)}$.
The corresponding Wilson loop operator $\hat W(C_i)$ is not
invariant under homotopically non-trivial gauge transformations
$\Omega[\zvec{k}]$:
\begin{align}
\label{eq:02_0058}
\hat W(C_i)^{\Omega[\zvec{k}]} = \rez{N}\tr{\Omega[\zvec{k}](\zvec{x}+
L^{(i)})\mathcal{P}
e^{-\int_{C_i}\zvec{A}\mathrm{d}\zvec{x}}\Omega[\zvec{k}]^\dagger(\zvec{x})}
= Z_i \hat W(C_i) \, ,
\end{align}
\noindent
where $Z_i$ is the center element defined in (\ref{eq:02_0053}).
Consider now the action of $\hat\Omega[\zvec{k}]$ on the state
$\hat W(C_i)\ket{\psi_{\zvec{e}}}$:
\begin{align}
\label{eq:02_0059}
\hat \Omega[\zvec{k}] \hat W(C_i)\ket{\psi_{\zvec{e}}}
=
\hat W(C_i)^{\Omega[\zvec{k}]}
\ket{\psi^{\Omega[\zvec{k}]}_{\zvec{e}}}
=
Z_i \hat W(C_i)\hat\Omega[\zvec{k}] \ket{\psi_{\zvec{e}}}
=
e^{i\frac{2\pi}{N}(\zvec{e}\cdot \zvec{k} + k_i)}
\hat W(C_i)\ket{\psi_{\zvec{e}}} \, .
\end{align}
It is seen that the action of $\hat W(C_i)$ on $\ket{\psi_{\zvec{e}}}$
increases the $i$-th component of the vector ${\zvec{e}}$ by one unit.
Since the Wilson loop operator is the creation operator of electric
flux, we have to identify $\frac{2 \pi}{N} {\zvec{e}}$ with the
electric flux vector. Accordingly the eigenvectors
$\ket{\psi_{\zvec{e}}}$ (\ref{eq:02_0054}) of $\hat \Omega[\zvec{k}]$ are
states with definite electric flux $\zvec{e}$ and the projection
operator on physical states with given electric flux ${\zvec{e}}$
is given by
\begin{align}
\label{eq:02_0064B}
{\mathsf P}(\zvec{e}) =
\rez{N^3}\sum_{\zvec{k}}
e^{-\frac{2\pi i}{N}\zvec{e}\zvec{k}} \hat\Omega[\zvec{k}] \, .
\end{align}
Indeed acting with ${\mathsf P(\zvec{e})}$ on the state
$\ket{\psi_{\zvec{e}'}}$ one obtains:
\begin{align}
\label{eq:02_0064C}
{\mathsf P}(\zvec{e}) \ket{\psi_{\zvec{e}'}}=
\rez{N^3}\sum_{\zvec{k}}
e^{-\frac{2\pi i}{N}\zvec{e}\zvec{k}} \hat\Omega[\zvec{k}]
\ket{\psi_{\zvec{e}'}} =
\rez{N^3}\sum_{\zvec{k}}
e^{-\frac{2\pi i}{N}\zvec{e} \zvec{k}}
e^{\frac{2\pi i}{N}\zvec{e}' \zvec{k}}
\ket{\psi_{\zvec{e}'}} =
\delta^{(3)}_{\zvec{e} , \zvec{e}'}
\ket{\psi_{\zvec{e}'}} \, .
\end{align}
The desired projector ${\mathsf P}(\zvec{e},\zvec{m})$ on definite
electric and magnetic flux $(\zvec{e},\zvec{m})$ is then given by
${\mathsf P}(\zvec{e})$ times the projector onto states with fixed
spatial twist $\zvec{m}$. With the projector ${\mathsf P}(\zvec{e},\zvec{m})$
at our disposal the partition function for fixed electric and
magnetic flux $(\zvec{e},\zvec{m})$ defined by eq.~(\ref{eq:02_0064})
is given by
\begin{align}
\label{eq:02_0065a}
\mathscr{Z}(\zvec{e},\zvec{m}) =
\frac{\mathcal{N}}{N^3}\sum_{\zvec{k}}
e^{-\frac{2\pi i}{N} (\zvec{e}\cdot\zvec{k})}
\mathscr{Z}_{\zvec{k}} (\zvec{m}) =
\frac{\sum_{\zvec{k}} e^{-\frac{2\pi i}{N} (\zvec{e}\cdot\zvec{k})}
\mathscr{Z}_{\zvec{k}}(\zvec{m})}{\sum_{\zvec{k}} \mathscr{Z}_{\zvec{k}}(\zvec{m})} \, ,
\end{align}
\noindent
where
\begin{align}
\label{eq:02_0065b}
\mathscr{Z}_{\zvec{k}}(\zvec{m}) =
\btr{\hat\Omega[\zvec{k}]e^{-\beta H}}
\end{align}
is the partition function of fixed temporal twist $\zvec{k}$. (Recall that
fixed spatial twist $\zvec{m}$ implies also fixed magnetic flux.)
Eq.~(\ref{eq:02_0065a}) defines the $Z(N)$-Fourier transform from the
temporal twist $\zvec{k}$ to the electric flux $\zvec{e}$ and is referred to
as Kramers-Wannier duality.
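For the gauge group $SU(2)$, to which we specialize from section \ref{vortex_field} onwards, the phase factors reduce to signs and eq.~(\ref{eq:02_0065a}) takes the explicit form
\begin{align}
\mathscr{Z}(\zvec{e},\zvec{m}) =
\frac{\sum_{\zvec{k}\in\{0,1\}^3} (-1)^{\zvec{e}\cdot\zvec{k}}\,
\mathscr{Z}_{\zvec{k}}(\zvec{m})}
{\sum_{\zvec{k}\in\{0,1\}^3} \mathscr{Z}_{\zvec{k}}(\zvec{m})} \, .
\end{align}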
Formally the partition function of fixed temporal twist $\zvec{k}$
(\ref{eq:02_0065b}) represents the thermodynamic average of the operator
$\hat\Omega[\zvec{k}]$ generating gauge transformations which are periodic in
spatial direction up to a center element, see eq.~(\ref{eq:02_0053}).
In the ``coordinate'' representation this partition function is given
by (ignoring for the moment the spatial twist)
\begin{align}
\label{eq:02_0065c}
\mathscr{Z}_{\zvec{k}} &=
\int \mathcal{D} \zvec{A} \bra{\zvec{A}}
\hat\Omega[\zvec{k}]e^{-\beta H} P_G \ket{\zvec{A}}
\nonumber \\
&=
\int \mathcal{D} \zvec{A} \bra{\zvec{A}^{\Omega[\zvec{k}]}}
e^{-\beta H} P_G \ket{\zvec{A}} \, ,
\end{align}
where $P_G$ denotes the projector on gauge invariant states
\cite{Reinhardt:1997rm}. The matrix element in the above equation can be
expressed in the standard form by a functional integral over spatial
gauge field configurations satisfying the temporal boundary condition
\begin{align}
\label{eq:02_0065d}
\zvec{A} (\vec{x},\beta) =
\zvec{A}^{\Omega[\zvec{k}]} (\vec{x},0) \, .
\end{align}
Furthermore the projector $P_G$ contains an integration over the gauge
group with the ``Haar'' measure. This integral can be expressed as an
integral over a (temporally constant) temporal gauge field $A_0$. Thereby
the Haar measure becomes the Faddeev-Popov determinant in the gauge
$\partial_0 A_0 = 0$. The resulting functional integral for the
partition function (\ref{eq:02_0065c}) is gauge invariant and can be
expressed in an arbitrary gauge yielding
\begin{align}
\label{eq:02_0065e}
\mathscr{Z}_{\zvec{k}} =
\int_{\zvec{A} (\beta) = \zvec{A}^{\Omega} (0)}
\mathcal{D} A_\mu \delta_{gf} (A) e^{-S_{YM}(A)} \, ,
\end{align}
where the spatial gauge field satisfies the temporally twisted boundary
conditions (\ref{eq:02_0065d}). Adding also spatially twisted boundary
conditions introduces spatial flux $\zvec{m}$ as described at the beginning
of the section. This shows that the partition function
(\ref{eq:02_0065b}) can be calculated from the standard functional
integral supplemented by the twisted boundary conditions
(\ref{eq:02_0053}).
\subsection{Polyakov-loop}
The static $q\overline{q}$-potential at finite temperature
$T = \rez{\beta}$ is defined as the change of the free energy
$F_{q\overline{q}}(\beta)$ upon adding a $q\overline{q}$-pair
to the vacuum \cite{Svetitsky:1985ye}.
This potential can be extracted from the Polyakov-loop correlator
\begin{align}
\label{eq:02_0068}
\langle \mathscr{P}(\zvec{x})\mathscr{P}^\dagger(\zvec{y})\rangle
=
e^{-\beta F_{q\overline{q}}(\zvec{x},\zvec{y})} \, ,
\end{align}
\noindent
where in presence of temporal twist $\Omega_0(x)$ the gauge invariant
Polyakov loop operator is defined by
\begin{align}\label{eq:02_0067}
\mathscr{P}(\zvec{x})
=
\rez{N} \tr{{\mathcal P}\varexp{-\int_0^{\beta=L_0}
{A_0}(x^0,\zvec{x})\mathrm{d} x^0} \Omega_0^\dagger(\zvec{x})}
\, .
\end{align}
\noindent
Using the boundary conditions (\ref{eq:02_0045}) for the gauge potential
and the cocycle condition (\ref{eq:02_0047}) one obtains
the following periodicity properties of the Polyakov loop:
\begin{align}
\mathscr{P}(\zvec{x} + L^{(i)}) &=\nonumber
\rez{N}\tr{\mathcal{P}
e^{-\int_0^\beta \mathrm{d} x^0 A_0(x^0, \zvec{x} + L^{(i)})}
\Omega_0^\dagger(\zvec{x} + L^{(i)})}\\
\label{eq:02_0069}
&=
\rez{N}\tr{\mathcal{P}
e^{-\int_0^\beta \mathrm{d} x^0 \Omega_i(A_0 + \dau{0})\Omega_i^\dagger}
Z_{i0}\Omega_i(x)\Omega_0^\dagger(\zvec{x})
\Omega_i^\dagger(x+\beta)}=e^{-\frac{2\pi
i}{N}k_i}\mathscr{P}(\zvec{x}) \, ,
\end{align}
\noindent
where $k_i$ is the $i$-th component of the temporal twist vector $\zvec{k}$.
Repeated application of this periodicity property yields
\begin{align}
\label{eq:02_0070}
\mathscr{P}(\zvec{x} - L\zvec{e}) = e^{\frac{2\pi
i}{N}\zvec{k}\cdot\zvec{e}}\mathscr{P}(\zvec{x})
\end{align}
\noindent
From this relation it follows that the thermal Polyakov loop correlator
in the presence of magnetic flux $\zvec{m}$ is given by
\begin{align}
\label{eq:02_0071}
\langle\mathscr{P}(\zvec{x})
\mathscr{P}^\dagger(\zvec{x}-L\zvec{e})\rangle =
\frac{\sum_{\zvec{k}}e^{-\frac{2\pi i}{N}(\zvec{e}\cdot\zvec{k})}
\mathscr{Z}_{\zvec{k}}(\zvec{m})}{\sum_{\zvec{k}}\mathscr{Z}_{\zvec{k}}(\zvec{m})}
\end{align}
\noindent
Comparison with eq.~(\ref{eq:02_0065a}) shows that
\cite{deForcrand:2001nd}
\begin{align}
\label{eq:02_0071a}
\langle\mathscr{P}(\zvec{x})
\mathscr{P}^\dagger(\zvec{x}-L\zvec{e})\rangle =
\mathscr{Z}(\zvec{e},\zvec{m}) \equiv
e^{-\beta F(\zvec{e},\zvec{m})} \, .
\end{align}
\noindent
Hence, on the torus the implementation of temporal twisted boundary
conditions enforces static electric flux which can be interpreted as
arising from two homogeneously but oppositely charged planes a distance
$L {\zvec{e}}$ apart, and thus simulates static quark and antiquark
sources a distance $L {\zvec{e}}$ apart (Note that $\langle\mathscr{P}(\zvec{x})
\mathscr{P}^\dagger(\zvec{x}-L\zvec{e})\rangle$ is independent of
$\zvec{x}$). In the following we will realize the twisted boundary
conditions by Abelian background fields.
\section{Abelian vortex field}
\label{vortex_field}
In the remaining part of the paper we restrict ourselves to
the gauge group $SU(2)$.
The implementation of flux on the torus by means of twisted boundary
conditions can be most easily realized by Abelian background gauge
fields\footnote{Here, Abelian gauge potential means a potential that
takes values only in the Cartan subalgebra of the
gauge group.} of constant field strength on $\mathbbm{T}^4$. Such
a configuration can be interpreted as the field of a fat
center vortex whose flux is homogeneously smeared out over the
whole torus.
The most general solution of the equations of motion of pure $SU(2)$
Yang-Mills theory with constant field
strength and twisted boundary conditions on $\mathbbm{T}^4$ reads
\cite{vbaal:84}
\begin{align}\label{eq:03_0008}
{\mathrm{A}^{(0)}_\mu}(x) =
\left(\frac{\pi}{2}\mathscr{F}_{\mu\nu}x_\nu -
\frac{\pi Q_\mu}{L_\mu}\right)T,\hspace{1cm} T :=
-i\left(\begin{array}{cc}1&0\\0&-1
\end{array}\right)
= -i\sigma_3 \, ,
\end{align}
where $Q_\mu \, , \, \mu = 0,1,2,3$ are arbitrary constants (moduli)
and
\begin{align*}
\mathscr{F}_{\mu\nu} = - \frac{n_{\mu \nu}}{L_\mu L_\nu}
\end{align*}
is, up to a constant factor, the field strength
\begin{align}\label{eq:03_0009}
\mathrm{G}^{(0)}_{\mu\nu} = -\pi \mathscr{F}_{\mu\nu} T
\end{align}
induced by the twist tensor $n_{\mu \nu}$.
The gauge field (\ref{eq:03_0008}) fulfills the twisted boundary
conditions (\ref{eq:02_0045},\ref{eq:02_0047}) with transition functions
\begin{align}
\label{eq:03_0015}
\Omega_\mu (x) =
\varexp{\frac{\pi}{2} \sum_\nu n_{\mu\nu} \frac{x_\nu}{L_{\nu}}T}
\end{align}
and twist tensor $n_{\mu\nu}$.
Obviously, each non-zero component $n_{\mu \nu} = - n_{\nu \mu}$
of the twist tensor corresponds to a non-zero field strength component
$\mathrm{G}^{(0)}_{\mu\nu}$ of the gauge potential (\ref{eq:03_0008}),
which represents the field of $|n_{\mu \nu}|$ center vortices
in the $(\mu \nu)$-plane homogeneously smeared out on $\mathbbm{T}^4$.
We will use this field to calculate the free
energy of ``fat'' center vortices in one-loop order. To this end we
will compute first the spectrum and then the determinant of the
operator of fluctuations
of the gauge field around the given center vortex configuration
(\ref{eq:03_0008}). The appearance of negative eigenvalues will be
avoided by appropriately choosing the constants $Q_\mu$.
\subsection{Fluctuation operator}
\label{fluctuation_operator}
The spectrum of the fluctuations around the Abelian potential
(\ref{eq:03_0008}) for non-zero Pontryagin index (\ref{eq:02_0049a})
has been found already in \cite{vbaal:84} and in general it has
negative modes. On the three torus these negative modes can be avoided
\cite{vbaal:96} by appropriately choosing the moduli. Here we will
extend this result to the four torus where one has to restrict oneself
to background gauge fields with zero Pontryagin index and to
choose the constants $Q_\mu$ (moduli) (\ref{eq:03_0008}) appropriately.
We will shortly summarize the essential results needed below.
The fluctuation operator is obtained by expanding the Yang-Mills action
around the background field $\mathrm{A}^{(0)}_\mu$ (\ref{eq:03_0008}).
For this purpose it is convenient to express the fluctuating gauge
potential as
\begin{align}
\label{eq:03_0022}
A_\mu = \mathrm{A}^{(0)}_\mu + \delta A_\mu \, , \hspace{1cm}
\delta A_\mu = \frac{-i}{2}
\left(
\begin{array}{cc}
b_\mu & \sqrt{2} c_\mu \\
\sqrt{2} c_\mu^* & - b_\mu
\end{array}
\right) \, .
\end{align}
\noindent
From eqs.~(\ref{eq:02_0045},\ref{eq:03_0015})
the boundary conditions for $b_\mu$ and
$c_\mu$ follow:
\begin{align}
\label{eq:03_0024}
b_\mu(x + L^{(\lambda)}) = b_\mu(x) \, , \hspace{1cm}
c_\mu(x + L^{(\lambda)}) =
\varexp{-\pi i \sum\limits_\nu \frac{n_{\lambda\nu}x_\nu}{L_\nu}}
c_\mu(x) \, .
\end{align}
\noindent
For later use we write down the transformation properties of
$c_\mu$ under a shift by an arbitrary lattice vector
$ l_\nu L^{(\nu)} $ which can be obtained by successive
use of eq.~(\ref{eq:03_0024}):
\begin{align}
\label{eq:03_0026}
c_\mu\left(x + \sum_\nu l_\nu L^{(\nu)}\right) =
\varexp{-i\pi\sum_{\rho,\nu}L_{\rho}l_\rho
\mathscr{F}_{\rho\nu}x_\nu + i\pi
\sum_{\rho<\nu}l_\rho n_{\rho\nu}l_\nu}c_\mu(x).
\end{align}
\noindent
Noting that $\mathrm{A}^{(0)}_\mu$ is a solution of the equations of motion the
action in terms of the fluctuations $\delta A_\mu$ reads
\begin{align}
\label{eq:03_0027}
S_{YM} = S_0 & -\rez{g^2} \int \mathrm{d}^4 x \left \{
\tr{-2\delta A_\mu\commute{\mathrm{G}^{(0)}_{\mu\nu}}{\delta A_\nu}
- \delta A_\mu(\hat D_\nu^0)^2 \delta A_\mu
-(\hat D_\mu^0 \delta A_\mu)^2 + \mathcal{O}(\delta A^3)}\right\} \, ,
\end{align}
\noindent
where $S_0 = -\rez{2g^2} \int \mathrm{d}^4 x~ \tr{{\mathrm{G}^{(0)}_{\mu\nu}}^2}$ is the
action of the background field $\mathrm{A}^{(0)}_\mu$ and
\begin{align}
\label{eq:03_0006}
\hat D_\mu^0 := \dau\mu + \commute{\mathrm{A}^{(0)}_\mu}{\cdot}
\end{align}
is the covariant derivative with respect to $\mathrm{A}^{(0)}_\mu$ in the
adjoint representation. Adopting background gauge fixing
\begin{align}
\label{eq:03_0007}
\hat D_\mu^0 \delta A_\mu =
\dau\mu \delta A_\mu + \commute{\mathrm{A}^{(0)}_\mu}{\delta A_\mu}
\stackrel{!}{=} 0
\end{align}
\noindent
one finds for the gauge fixed action
\begin{align}
\label{eq:03_0028}
S^{gf} &= S_0 -
\rez{g^2}\!\int\!\!\mathrm{d}^4x
\left\{\tr{\delta A_\muM_A^{\mu\nu}\delta A_\nu} +
2\tr{\bar\PsiM_{gh}\Psi} +
\mathcal{O}(\delta A^3)\right\} \,
\end{align}
with the fluctuation operators
\begin{align}
\label{eq:03_0029}
M_A^{\mu\nu} =
- \delta_{\mu\nu}(\hat D_\lambda^0)^2 -
2 \commute{\mathrm{G}^{(0)}_{\mu\nu}}{\cdot} \, , \hspace{1cm}
M_{gh} = -(\hat D_\lambda^0)^2 \,
\end{align}
and $\Psi$ being the ghost field. Using the parametrization
(\ref{eq:03_0022}) and representing the ghost field as
\begin{align}
\label{eq:03_0030}
\Psi = \frac{-i}{2}\left(\begin{array}{cc}\eta&2\sqrt{2}\phi\\
2\sqrt{2}(\phi')^*&-\eta\end{array}\right)
\end{align}
\noindent
one finally obtains up to terms of third order in the fluctuations
\begin{align}
\label{eq:03_0031}
S^{gf} &= S_0 + \rez{g^2}\int\mathrm{d}^4x
\left\{\rez{2}b_\muM_0b_\mu +
c_\mu^*\left(M_n\delta_{\mu\nu} -
4\pi i \mathscr{F}_{\mu\nu} \right)c_\nu +
\eta^*M_0\eta +
\phi^*M_n\phi +
{\phi'}^*M_n\phi' \right\} \, ,
\end{align}
\noindent
where
\begin{align}
\label{eq:03_0032}
M_0 := \left(\rez{i}\dau \lambda\right)^2\!\!\! , \hspace{1cm}
M_n := \left(\rez{i}\dau\lambda - \pi\mathscr{F}_{\lambda\nu}x_\nu +
\frac{2\piQ_\lambda}{L_\lambda}
\right)^2 \, .
\end{align}
\noindent
Obviously, the operator $M_n$ depends on the twist tensor through
$\mathscr{F}_{\mu\nu} = - \frac{n_{\mu\nu}}{L_\mu L_\nu}$.
Note that the ghost fields $\phi , \phi'$ and $\eta$ respect
the same periodicity properties as the gauge fields $c_\mu$
and $b_\mu$, respectively, see eq.~(\ref{eq:03_0024}).
\subsection{The spectrum of the fluctuation operator}
\label{spectrum}
In this section we will calculate the spectrum of the operators $M_0 ,
M_n$ and $\left(M_n\delta_{\mu\nu}-4\pi i \mathscr{F}_{\mu\nu} \right)$,
see eq.~(\ref{eq:03_0032}), from which the eigenvalues of the
operators $M_A$ and $M_{gh}$ (\ref{eq:03_0029}) follow by using
the decompositions (\ref{eq:03_0022}) and (\ref{eq:03_0030}).
The eigenfunctions of $M_0$ are plane waves and
from the periodicity properties (\ref{eq:03_0024}) of the fields
$b_\mu$ and $\eta$ one obtains the eigenvalues
\begin{align}
\label{eq:03_0033}
\lambda_{l} = \sum_{\mu = 0}^3\left(\frac{2\pi
l_\mu}{L_\mu}\right)^2,\;\;\;l \in {\mathbbm Z}^4 \, .
\end{align}
\noindent
These eigenvalues are also eigenvalues of $M_A$ (\ref{eq:03_0029})
(four-fold degenerate in the index $\mu$)
and of $M_{gh}$ (non-degenerate).
In the general case the operator $M_A$ has negative eigenvalues
implying that the considered background gauge potential $\mathrm{A}^{(0)}_\mu$
is a saddle point of the action. Note that the negative eigenmodes occur
already for covariantly constant background fields, where they signal
the instability of the perturbative vacuum. Only if the map
\begin{align}
\label{eq:03_0017}
\mathscr{F}: \mathbbm{R}^4 \longrightarrow \mathbbm{R}^4 \, , \hspace{1cm}
x_\mu \mapsto \mathscr{F}_{\mu\nu}x_\nu \hspace{1cm}
\end{align}
is degenerate (i.e. $\ker(\mathscr{F})$ is non-trivial) and for suitable choice
of the parameters $Q_\mu$, see eq.~(\ref{eq:03_0008}),
the spectrum of $M_n$ is strictly positive. Therefore, to ensure
positivity, we will consider only the cases of purely spatial (magnetic)
($\zvec{m} \neq 0 , \zvec{k} = 0$) and
purely temporal ($\zvec{m} = 0 , \zvec{k} \neq 0$) twists. In these cases the map
$\mathscr{F}$ is obviously degenerate and we have the orthogonal
decomposition\footnote{This decomposition is orthogonal with respect to
the canonical scalar product in $\mathbbm{R}^4$ because the $4\times 4$ matrix
$\mathscr{F}$ is anti-symmetric.}
\begin{align*}
\mathbbm{R}^4 = \ker(\mathscr{F}) \oplus \mathrm{im}(\mathscr{F}) \, ,
\end{align*}
and the operator $M_n$ decays into two parts
\begin{align*}
M_n & =
\left.M_{n} \right|_{\mathrm{im}(\mathscr{F})} +
\left.M_{n} \right|_{\ker(\mathscr{F})} \, ,
\end{align*}
\noindent
each acting in one of the orthogonal spaces $\ker(\mathscr{F})$ and $\mathrm{im}(\mathscr{F})$.
In $\left.M_{n} \right|_{\ker(\mathscr{F})}$ the linear term
$\pi\mathscr{F}_{\lambda\nu}x_\nu$ is absent (because $(x_\mu) \in \ker(\mathscr{F})$).
Therefore, the eigenfunctions of $\left.M_{n} \right|_{\ker(\mathscr{F})}$ are
plane waves and can be labeled by two integers $k,l$. We denote the
eigenvalues by $\lambda_{(k ,l)}$ and the eigenfunctions by $\ket{k, l}$.
Eigenfunctions of $\left.M_{n} \right|_{\mathrm{im}(\mathscr{F})}$
can be labeled by one integer $m$ and are
denoted by $\ket{m}$ with eigenvalue $\lambda_{m}$. Then the
eigenfunctions of $M_n$ can simply be written as products of
eigenfunctions of $\left.M_{n} \right|_{\mathrm{im}(\mathscr{F})}$ and
$\left.M_{n} \right|_{\ker(\mathscr{F})}$:
\begin{align*}
\ket{m,k,l} &= \ket{m}\ket{k, l}\;\;\; \Rightarrow\;\;\; \\
M_{n}\ket{m,k,l} &=
\ket{k,l} \left.M_{n} \right|_{\mathrm{im}(\mathscr{F})} \ket{m} +
\ket{m} \left.M_{n} \right|_{\ker(\mathscr{F})} \ket{k,l} =
\left( \lambda_{m} + \lambda_{(k,l)} \right) \ket{m,k,l} \, .
\end{align*}
\noindent
The spectra of the operators $\left.M_{n} \right|_{\mathrm{im}(\mathscr{F})}$ and
$\left.M_{n} \right|_{\ker(\mathscr{F})}$ are calculated in Appendix
\ref{app_spec} from which the spectra of the operators $M_A$ and
$M_{gh}$ follow immediately and are summarized in tables \ref{tab02}
and \ref{tab03}.
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|r}
eigenvalue & degeneracy & parameters\\
\hline
$-2\pi f + \left( \zvec{p}(k,l)
- \zvec{q} \right)^2 $
& $2\tilde e$ &
$k,l \in {\mathbbm Z}\vphantom{\Big)}$
\\
$\hphantom{-}2\pi f + \left( \zvec{p}(k,l)
- \zvec{q} \right)^2 $ &
$6\tilde e$ &
$k,l \in {\mathbbm Z}\vphantom{\Big)}$
\\
$2\pi f(2n + 1) + \left(\zvec{p}(k,l)
- \zvec{q} \right)^2 $ &
$8\tilde e$ &
$n = 1,2,3,\ldots; k,l \in {\mathbbm Z}\vphantom{\Big)}$
\\
\hline
from eq.~(\ref{eq:03_0033}):
$\sum\limits_{\mu=0}^3\left(\frac{2\pi l_\mu}{L_\mu}\right)^2$
& $4$ &
$l \in {\mathbbm Z}^4\vphantom{\Big)}$\\
\hline
\end{tabular}
\caption{\label{tab02} Spectrum of the operator $M_A^{\mu\nu}$,
where $\zvec{q}:= - \left . \frac{2\pi Q_\lambda}{L_\lambda}
\right|_{\ker(\mathscr{F})}$. The quantities $f$, $\tilde e$ and $\zvec{p}(k,l)$
are defined respectively in eqs.~(\ref{eq:001}), (\ref{eq:002}) and
(\ref{eq:03_0054}) of appendix \ref{app_spec}.}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|r}
eigenvalue & degeneracy & parameters\\
\hline
$2\pi f(2n + 1) + \left(\zvec{p}(k,l)
- \zvec{q} \right)^2 $
& $2\tilde e$ &
$n = 0,1,2,\ldots; k,l \in {\mathbbm Z}\vphantom{\Big)}$
\\
\hline
from eq.~(\ref{eq:03_0033}):
$\sum\limits_{\mu=0}^3\left(\frac{2\pi l_\mu}{L_\mu}\right)^2$
& $1$ &
$l \in {\mathbbm Z}^4\vphantom{\Big)}$\\
\hline
\end{tabular}
\caption{\label{tab03} Spectrum of the operator $M_{gh}$
(see also the caption of table \ref{tab02}).}
\end{center}
\end{table}
\noindent
\subsection{No twist}
For the sake of completeness let us consider the case without flux
$\zvec{m}=\zvec{k}=0$. We will need this spectrum to normalize the free energy of a
twist configuration with respect to the case of zero field strength.
The eigenvalues of $M_0$ are the same as in the twisted case, i.e.
\begin{align}
\label{eq:03_0059}
\lambda_{l} = \sum_{\mu = 0}^3 \left(\frac{2\pi l_\mu
}{L_\mu} \right)^2,\;\;\;l \in {\mathbbm Z}^4 \, ,
\hspace{1cm} l = (l_\mu)
\end{align}
\noindent
and they appear in the spectrum of $M_A^{\mu \nu}$ (with degeneracy 4)
and of $M_{gh}$ (non-degenerate).
The remaining spectrum consists of the eigenvalues of the operator
$M_n = \left(-i\dau\lambda +
\frac{2\piQ_\lambda}{L_\lambda}\right)^2$ which read
\begin{align}
\label{eq:03_0060}
\lambda_{l} = \sum_{\mu = 0}^3
\left(\frac{2\pi }{L_\mu} (l_\mu-Q_\mu)\right)^2
\, , \;\;\;l \in {\mathbbm Z}^4 \, ,
\hspace{1cm} l = (l_\mu) \, .
\end{align}
These eigenvalues appear in the spectrum of the operator $M_A^{\mu \nu}$
(with degeneracy 8) as well as of the operator $M_{gh}$
(with degeneracy 2).
\section{Calculation of the free energy}
\label{calc_free_energy}
In this section we will evaluate the free energy of a ``fat'' center
vortex induced by the twisted boundary conditions and
examine its dependence on the torus geometry,
and especially on the temperature $T=1/L_0$.
For this purpose we have to evaluate the determinants of the fluctuation
operators. These determinants are ultraviolet singular and need
regularization. We shall adopt the proper-time regularization which
preserves gauge invariance. In the proper time regularization the
determinant of an operator $A$ is given by
\begin{align}
\label{eq:05_0002}
\log\det A := -
\int \limits^\infty_{\rez{\Lambda^2}} \frac{\mathrm{d} \tau}{\tau}
\trp{\varexp{-\tau A}} = - \int \limits^\infty_{\rez{\Lambda^2}}
\frac{\mathrm{d} \tau}{\tau} \sum_{\lambda_i\neq 0}\varexp{-\tau\lambda_i}
\, ,
\end{align}
\noindent
where $\Lambda$ is the ultraviolet cut-off. The symbol $\trp{}$ means
that the trace is taken over non-zero eigenvalues only.
The $\Lambda^4$ and $\Lambda^2$ divergences cancel in the ratio of
determinants, in our case the ratio between the determinants for
non-zero ($n_{\mu\nu}\neq 0$) and zero ($n_{\mu\nu}= 0$) field
strength.
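To make this cancellation explicit, the following minimal Python sketch (an
illustration added here, not part of the original calculation) evaluates the
regularized difference $\log\det A - \log\det B$ for two hypothetical finite
toy spectra, using the identity
$\int_{1/\Lambda^2}^\infty \frac{\mathrm{d}\tau}{\tau}\, e^{-\tau \lambda}
= E_1(\lambda/\Lambda^2)$ with the exponential integral $E_1$. While each term
diverges logarithmically with the cut-off, the difference converges to
$\sum_i \log(\lambda_i^A/\lambda_i^B)$.
\begin{verbatim}
# Sketch: proper-time regularized log-det difference for two
# hypothetical toy spectra (equal numbers of eigenvalues assumed).
import numpy as np
from scipy.special import exp1

def logdet_difference(spec_a, spec_b, cutoff):
    # log det A := -sum_i E1(lambda_i / cutoff^2), cf. the
    # proper-time formula above.
    a = -np.sum(exp1(np.asarray(spec_a) / cutoff**2))
    b = -np.sum(exp1(np.asarray(spec_b) / cutoff**2))
    return a - b

spec_a = [1.0, 2.0, 3.0]   # placeholder eigenvalues of A
spec_b = [1.5, 2.5, 3.5]   # placeholder eigenvalues of B
for cutoff in [10.0, 100.0, 1000.0]:
    print(cutoff, logdet_difference(spec_a, spec_b, cutoff))
print(np.sum(np.log(np.array(spec_a) / np.array(spec_b))))  # limit
\end{verbatim}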
The proper time integral (\ref{eq:05_0002}) is defined only if all
eigenvalues are positive. This requires (see the first line in
table \ref{tab02})
\begin{align}
\label{eq:05_0035}
2\pi f < \left(\zvec{p}(k,l) -
\zvec{q}\right)^2,\hspace{1cm} \forall k,l \in {\mathbbm Z}.
\end{align}
\noindent
For a given twist tensor and parameters $Q_\mu$, this inequality
yields a condition on $L T$, which can be interpreted as the
temperature in units of $1/L$. Large values of $L T$ therefore
correspond to high temperatures.
After a lengthy calculation, which is performed in appendix
\ref{proper_time}, one obtains the free energy of a ``fat'' center
vortex in terms of the renormalized coupling constant:
\begin{align}
\label{free-en}
\frac{F_{\zvec{k}}(\zvec{m} , L T)}{T} &=
\frac{2\pi^2}{g_R^2(T^2)}
\frac{\left(f L^2\right)^2}{L T}
+ \frac{11}{12} \frac{\left(f L^2\right)^2}{L T}
\log\frac{1}{L^2 T^2}
+ C (f L^2, Q_\mu, L T) \, ,
\end{align}
where the function $C$ is defined in eq.~(\ref{eq:11_0004})
of appendix \ref{proper_time}.
In the following we will separately discuss the cases
of purely spatial (magnetic) twist $\zvec{m}$ and purely temporal twist $\zvec{k}$.
\subsection{Spatial twist}
\label{spatial_twist}
For spatial twists $\zvec{k} = 0\, , \, \zvec{m} \neq 0$, which
correspond to fat magnetic center vortices, we choose
$Q_\mu = (\rez 2,\rez 2,\rez 2,\rez 2)$.
In the following table we list the values of the parameter
$\zvec{q}= - \left.\frac{2\pi Q_\lambda}{
L_\lambda}\right|_{\ker(\mathscr{F})}$
needed in the calculation of the function
$C$ (\ref{eq:11_0004}) as well as
the limits for $L T$ resulting from inequality (\ref{eq:05_0035}):
\begin{equation}
\begin{array}{c|c|c}
\zvec{m} & \zvec{q} &
L T
\\
\hline
(1,0,0) &
\frac{\pi}{L}(L T,1,0,0) &
L T \in {\mathbbm{R}}_+
\\
(1,1,0) &
\frac{\pi}{L}(L T,1,1,0) &
L T > 0.95
\\
(1,1,1) &
\frac{\pi}{L}(L T,1,1,1) &
L T > 1.05
\end{array}
\end{equation}
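These bounds on $L T$ can be checked by a direct scan of the lowest
eigenvalues. The following sketch assumes a user-supplied function
\texttt{p\_of}, a placeholder for $\zvec{p}(k,l)$ from the appendix (not
reproduced here), and tests inequality (\ref{eq:05_0035}) over a finite window
of mode numbers:
\begin{verbatim}
# Sketch of a positivity check for the fluctuation spectrum;
# p_of(k, l) is a placeholder returning the 4-vector p(k,l).
import numpy as np

def positivity_holds(two_pi_f, q, p_of, window=20):
    # True if 2 pi f < (p(k,l) - q)^2 for all scanned k, l.
    minimum = min(np.sum((p_of(k, l) - q)**2)
                  for k in range(-window, window + 1)
                  for l in range(-window, window + 1))
    return two_pi_f < minimum
\end{verbatim}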
In figure \ref{abbtwists} the quantity $C$ (\ref{eq:11_0004}) is
plotted as a function of $L T$ for the different values of $\zvec{m}$.
One observes that $C$ is nearly proportional to the
number of twists (more precisely: proportional to $\|\zvec{m}\|^2$).
\begin{figure}[!t]
\textsf{
\begin{center}
\psfragscanon
\psfrag{m = (1,0,0)}{$\zvec{m} = (1,0,0)$}
\psfrag{m = (1,1,0)}{$\zvec{m} = (1,1,0)$}
\psfrag{m = (1,1,1)}{$\zvec{m} = (1,1,1)$}
\psfrag{0}{}
\psfrag{3.5}{}
\psfrag{4.5}{}
\psfrag{0.5}{}
\psfrag{1.5}{}
\psfrag{2.5}{}
\psfrag{20}{}
\psfrag{40}{}
\psfrag{60}{}
\psfrag{80}{}
\psfrag{L T (Temperatur)}{\Large{$L T$ (temperature)}}
\psfrag{Crr(L T, ||m||)}{\Large{$C(L T, \|\zvec{m}\|)$}}
\includegraphics[angle = 270,scale=.5]{spatial.eps}
\caption{$C$ (\ref{eq:11_0004}) as a function of $L T$
for different twist vectors.\label{abbtwists}}
\end{center}
}
\end{figure}
In figure \ref{abbtwists2} the free energy is plotted in
units of $\Lambda_{QCD}$ as a function of $T/\Lambda_{QCD}$ for the different twist
configurations, where the torus extension $L$ has been chosen to be
$L = 1/\Lambda_{QCD}$. The 1-loop expansion is valid if
$T \gg \Lambda_{QCD} = \frac{1}{L}$, i.e.~$L T \gg 1$.
\begin{figure}[!t]
\textsf{
\begin{center}
\psfragscanon
\psfrag{m = (1,0,0)}{$\zvec{m} = (1,0,0)$}
\psfrag{m = (1,1,0)}{$\zvec{m} = (1,1,0)$}
\psfrag{m = (1,1,1)}{$\zvec{m} = (1,1,1)$}
\psfrag{0.5}{}
\psfrag{1.5}{}
\psfrag{5}{}
\psfrag{15}{}
\psfrag{25}{}
\psfrag{35}{}
\psfrag{temperature}{\Large{$\frac{T}{\Lambda_{QCD}}$ (temperature)}}
\psfrag{freeenergy and more}{\Large{$\frac{F(\zvec{m},
T/\Lambda_{QCD})}{\Lambda_{QCD}}$ (free energy)}}
\includegraphics[angle = 270,scale=.5]{spatial2.eps}
\caption{The free energy of thick vortices as a function of
temperature. \label{abbtwists2}}
\end{center}
}
\end{figure}
\noindent
For large $L T$ the free energy of thick magnetic center vortices
is nearly independent of the temperature $T$. Hence, these vortices
cannot be relevant for the deconfinement phase transition.
In fact it is well known that magnetic center vortices (generated here
by spatial twists) do not contribute to the confining properties of the
theory (i.e.~to the temporal string tension). They do, however,
contribute to the spatial string tension, which even slightly increases
across the deconfinement phase transition. Lattice calculations
\cite{Engelhardt:1999fd} show that magnetic vortices (measured in a
spatial volume at
fixed time) do percolate in both the confined and deconfined phase and
thus cannot be used to characterize the deconfinement phase transition.
\subsection{Temporal twists}
\label{temporal_twist}
For electric center vortices induced by temporal
twists $\zvec{k} \neq 0 \, , \, \zvec{m} = 0$ we
choose $Q_\mu = (0,0,\rez 2,\rez 2)$\footnote{Here we have chosen
$Q_\mu$ differently from the purely spatial twist case since for
$\zvec{k}=(1,1,1)$
the vector $Q_\mu=(\rez 2,\rez 2,\rez 2,\rez 2)$ would be in the image
of the map $\mathscr{F}$, see eq.~(\ref{eq:03_0017}), and therefore in
this case we would have $\zvec{q}=0$.}.
In the following table
the parameter $\zvec{q}$ necessary for the
calculation of the free energy is
listed. We also quote the range of temperature $L T$ (last column)
for which all eigenmodes of the fluctuation operator are positive, see
(\ref{eq:05_0035}):
\begin{equation}
\begin{array}{c|c|c}
\zvec{k} &
\zvec{q} &
L T
\\
\hline
(1,0,0) &
\frac{\pi}{L}(0,0,1,1) &
L T < 3.14
\\
(1,1,0) &
\frac{\pi}{2 L}(0,-1, 1, 2) &
L T < 1.67
\\
(1,1,1) &
\frac{\pi}{3L}(0,-2,1,1) &
L T < 0.60
\end{array} \qquad .
\end{equation}
\noindent
\begin{figure}[!t]
\textsf{
\begin{center}
\psfragscanon
\psfrag{k = (1,0,0)}{$\zvec{k} = (1,0,0)$}
\psfrag{k = (1,1,0)}{$\zvec{k} = (1,1,0)$}
\psfrag{k = (1,1,1)}{$\zvec{k} = (1,1,1)$}
\psfrag{0}{$0$}
\psfrag{1}{$1$}
\psfrag{2}{$2$}
\psfrag{-4}{$-4$}
\psfrag{-3}{$-3$}
\psfrag{-2}{$-2$}
\psfrag{-1}{$-1$}
\psfrag{0.5}{$0.5$}
\psfrag{1.5}{$1.5$}
\psfrag{2.5}{$2.5$}
\psfrag{3.5}{$3.5$}
\psfrag{3}{$3$}
\psfrag{4}{$4$}
\psfrag{6}{$6$}
\psfrag{8}{$8$}
\psfrag{10}{$10$}
\psfrag{LT (Temperatur)}{\Large{$L T$ (temperature)}}
\psfrag{Crz(LT, ||k||)}{\Large{$C(L T, \|\zvec{k}\|)$}}
\includegraphics[angle = 270,scale=.66]{temporal.eps}
\caption{The quantity $C$ (\ref{eq:11_0004}) as a function
of $L T$ for different twist
vectors $\zvec{k}$. The lines show the function $C$ when all
eigenvalues of the fluctuation spectrum are included in the
corresponding proper time integral. The graphs plotted with the
symbols ($+,\times,*$) show $C$ when the eigenvalues that become
negative with rising temperature are neglected (see text).
\label{abbtwiststemp}}
\end{center}
}
\end{figure}
\noindent
In figure \ref{abbtwiststemp} the quantity $C$ defined by
eq.~(\ref{eq:11_0004}), which is part of the free energy (\ref{free-en}),
is plotted as a function of $L T$ for different twists.
This quantity (and consequently also the free energy) goes to minus
infinity when $L T$ approaches the region where a single eigenvalue
of the spectrum of the fluctuations becomes negative. This signals the
limit of validity of our calculation.
It is easy to see that the convergence condition (\ref{eq:05_0035})
cannot remain valid for arbitrarily high temperatures: the twisted
boundary conditions enforce a {\bf constant} flux
$\|\zvec{k}\| = f L L_0$ in the
plane of the twist, where the flux is the product of area and (constant)
field strength. Increasing the temperature $T$, i.e.~decreasing the torus
extension $L_0$, therefore increases the field strength until the
inequality (\ref{eq:05_0035}) is violated.\\
\noindent
As can be seen from fig.~\ref{abbtwiststemp}, the free energy of
electric center vortices (temporal twist) increases with the
temperature $L T$. Consequently, electric center vortices
become less and less important as the temperature increases.
This result is consistent with the vanishing of the temporal string
tension above the deconfinement phase transition observed in lattice
calculations, given the fact that electric center vortices are
responsible for the temporal string tension.\\
\noindent
The points ($+,\times,*$) in figure \ref{abbtwiststemp} show $C$ as a
function of $T$ for the different twists when the negative eigenvalues
in the spectrum are neglected.
The appearance of negative modes signals the infrared
instability of the perturbative vacuum. When the negative
eigenvalues are neglected, the free energy is proportional to $T^4$ for
large $T$, which corresponds to the \emph{Stefan-Boltzmann} law
for a free Bose gas.
\begin{figure}[!t]
\vspace*{-.7cm}
\textsf{
\begin{center}
\psfragscanon
\psfrag{ 0}{$0$}
\psfrag{ 0.2}{$0.2$}
\psfrag{ 0.4}{$0.4$}
\psfrag{ 0.6}{$0.6$}
\psfrag{ 0.8}{$0.8$}
\psfrag{ 1}{$1$}
\psfrag{1.2}{}
\psfrag{1.4}{$1.4$}
\psfrag{1.6}{}
\psfrag{1.8}{$1.8$}
\psfrag{2}{}
\psfrag{ 0.1}{$0.1$}
\psfrag{ 0.12}{}
\psfrag{ 0.14}{}
\psfrag{ 0.16}{}
\psfrag{ 0.18}{}
\psfrag{ 0.2}{$0.2$}
\psfrag{ 0.22}{}
\psfrag{ 0.24}{}
\psfrag{ 0.26}{}
\psfrag{ 0.28}{}
\psfrag{ 0.3}{$0.3$}
\psfrag{L (Torusgroesse)}{\Large{$L\Lambda_{QCD}$ (torus extension)}}
\psfrag{exp(-F/T) morespace}{\Large{$\varexp{-F_{\zvec{k}}/T}$}}
\includegraphics[angle = 270,scale=.53]{temp_kov_A.eps}
\caption{Creation probability of the thick electric vortex
as function of the torus extension. \label{abbtwiststemp2}}
\end{center}
}
\end{figure}
Let us consider the partition function
$\mathscr{Z}_{\zvec{k}}(\zvec{m}=0) = \varexp{-F_{\zvec{k}}/T}$ of the temporal
twist $\zvec{k} = (1,0,0)$ on a torus with fixed ratio
$L / L_0 = L T$ as a function of the temperature $T$.
From eq.~(\ref{eq:06_0002}) one obtains
\begin{align}
\label{eq:06_0003}
\mathscr{Z}_{\zvec{k}} (T) &=
e^{- C (L T)} \left( L T \right)^{\frac{11}{6} L T}
\left( \frac{T}{\Lambda_{QCD}} \right)^{-\frac{11}{6} L T} \, ,
\end{align}
i.e.~for large temperature $T$ the partition function $\mathscr{Z}_{\zvec{k}}$
decreases according to a power law with exponent $-\frac{11}{6} L T$.
This is in qualitative agreement with the lattice results of
ref.~\cite{deForcrand:2001nd}. There the partition function $\mathscr{Z}_{\zvec{k}}$
has been calculated as a function of $T$ for different lattice
geometries (corresponding to different values of $L T$) and
thereby the deconfinement phase transition was identified.
Below the critical temperature $\mathscr{Z}_{\zvec{k}}$ is nearly one. As the
temperature increases, $\mathscr{Z}_{\zvec{k}}$ decreases, and above the critical
temperature it drops to zero as predicted by eq.~(\ref{eq:06_0003}). The larger
$L T$, the sharper the drop.
\newline
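A short numerical sketch makes this power-law suppression explicit; the value
of $C(L T)$ is a placeholder input here, since in the actual calculation it
follows from eq.~(\ref{eq:11_0004}):
\begin{verbatim}
# Sketch of the partition function at fixed L*T; c_lt stands in
# for the appendix quantity C(LT).
import numpy as np

def partition_function(t_over_lambda, lt, c_lt):
    exponent = 11.0 / 6.0 * lt
    return np.exp(-c_lt) * lt**exponent * t_over_lambda**(-exponent)

for t in [2.0, 4.0, 8.0]:
    print(t, partition_function(t, lt=0.3, c_lt=1.0))
# doubling T suppresses Z_k by the constant factor 2**(-11/6 * L*T)
\end{verbatim}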
The partition function $\mathscr{Z}_{\zvec{k}}$ in eq.~(\ref{eq:06_0003}) can also be
considered as a function of the torus extension $L$ (with fixed
value $L T$). Figure \ref{abbtwiststemp2} shows $\mathscr{Z}_{\zvec{k}}$ as a
function\footnote{In the figure $L$ is measured in units of
$1/\Lambda_{QCD}$, i.e. with $\Lambda_{QCD} = 600$ MeV the value $L \Lambda_{QCD} = 0.3$
corresponds to $L \approx 0.1$ fm.} of $L \Lambda_{QCD}$ for $L T = 0.3$.
This function can be
interpreted as the creation probability of a thick vortex
\cite{kovacs:00}. It monotonically increases with $L \Lambda_{QCD}$.
Although the present one-loop calculation is, strictly speaking,
reliable only for small $L$ it certainly shows the right tendency of
$\mathscr{Z}_{\zvec{k}}$ for increasing $L$. Lattice calculations performed
by Kov\'acs and Tomboulis \cite{kovacs:00} show that the center
vortex creation probability $\mathscr{Z}_{\zvec{k}} = \varexp{-F_{\zvec{k}}/T}$ approaches
one for large $L$, implying that the Yang-Mills vacuum can be
considered as a condensate of thick center vortices.
With the above results at hand we can calculate the free energy
of an electric flux $\zvec{e}$ and the expectation value of the
\emph{Polyakov}-loop correlator, see eq.~(\ref{eq:02_0071}).
Since the spectrum is invariant under spatial rotations, the contributions
of the twists $\zvec{k} \in \{(1,0,0),(0,1,0),(0,0,1)\}$ are identical
and we will write $\mathscr{Z}_{\zvec{k}} (\zvec{m} =0 ,L_\mu) = \mathscr{Z} (1)$ if
$\zvec{k} \in \{(1,0,0),(0,1,0),(0,0,1)\}$ and
$\mathscr{Z}_{\zvec{k}} (\zvec{m} =0 ,L_\mu) = \mathscr{Z}(2)$ for
$\zvec{k} \in \{(1,1,0),(0,1,1),(1,0,1)\}$ and
$\mathscr{Z}_{\zvec{k}} (\zvec{m} =0 ,L_\mu) = \mathscr{Z}(3)$ for
$\zvec{k} = (1,1,1)$. The partition function for the electric flux
$\zvec{e} = (1,0,0)$ is given by
\begin{align}
\label{eq:05_0037}
\mathscr{Z} ({\zvec{e}},\zvec{m}=0,L_\mu) =
e^{-\beta F ({\zvec{e}},\zvec{m} = 0 ,L_\mu)} =
\frac{1 + \mathscr{Z}(1) - \mathscr{Z}(2) - \mathscr{Z}(3)}
{1 + 3\mathscr{Z}(1) + 3\mathscr{Z}(2) + \mathscr{Z}(3)} \, .
\end{align}
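Given numerical values for $\mathscr{Z}(1)$, $\mathscr{Z}(2)$ and
$\mathscr{Z}(3)$, eq.~(\ref{eq:05_0037}) is straightforward to evaluate. The
following sketch implements the ratio with placeholder inputs; it also shows
the limiting behaviour $\mathscr{Z}(\zvec{e})\to 0$ when all
$\mathscr{Z}(i)\to 1$ and $\mathscr{Z}(\zvec{e})\to 1$ when all
$\mathscr{Z}(i)\to 0$:
\begin{verbatim}
# Direct evaluation of the electric flux partition function;
# z1, z2, z3 are placeholder values for Z(1), Z(2), Z(3).
def electric_flux_partition(z1, z2, z3):
    return (1 + z1 - z2 - z3) / (1 + 3*z1 + 3*z2 + z3)

print(electric_flux_partition(1.0, 1.0, 1.0))  # 0.0
print(electric_flux_partition(0.0, 0.0, 0.0))  # 1.0
\end{verbatim}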
\begin{figure}[!t]
\textsf{
\begin{center}
\psfragscanon
\psfrag{llaq = 1spa}{\Large{\hspace*{-2cm}$L\Lambda_{QCD} = 0.07$}}
\psfrag{llaq = 2spa}{\Large{\hspace*{-2cm}$L\Lambda_{QCD} = 0.1$}}
\psfrag{llaq = 3spa}{\Large{\hspace*{-2cm}$L\Lambda_{QCD} = 0.15$}}
\psfrag{genauso}{}
\psfrag{(a)}{\;\;\;\;\;\;\;\;(a)}
\psfrag{(b)}{(b)}
\psfrag{0}{$0$}
\psfrag{0.5}{}
\psfrag{-0.5}{}
\psfrag{1}{$1$}
\psfrag{1.5}{}
\psfrag{2}{$2$}
\psfrag{2.5}{}
\psfrag{3}{$3$}
\psfrag{3.5}{}
\psfrag{4}{$4$}
\psfrag{4.5}{}
\psfrag{5}{$5$}
\psfrag{6}{$6$}
\psfrag{7}{$7$}
\psfrag{8}{$8$}
\psfrag{LT (Temperatur)}{\Large{$\frac{T}{\Lambda_{QCD}}$ (temperature)}}
\psfrag{Zk(1)}{\Large{$\mathscr{Z}(1)$}}
\psfrag{Ze(1)}{\Large{$\mathscr{Z}(\zvec{e}=(1,0,0))$}}
\includegraphics[angle = 270,scale=.53]{temp_smek2_A.eps}
\includegraphics[angle = 270,scale=.53]{temp_smek_A.eps}\medskip\\
\includegraphics[angle = 0,scale=.9]{titelleiste.eps}
\caption{Partition function of (a) temporal twist $\zvec{k}=(1,0,0)$
and (b) electric flux $\zvec{e} = (1,0,0)$ as function of
temperature. The full lines give the results under
consideration of all eigenvalues. The plots with $+,\times,*$
show the results when the negative eigenvalues are neglected as
in fig.~\ref{abbtwiststemp}.
\label{abbtwiststemp3}}
\end{center}
}
\end{figure}
\noindent
In figure \ref{abbtwiststemp3} the temperature dependences of
$\mathscr{Z}(1)$ (a) and $\mathscr{Z}(\zvec{e}=(1,0,0))$ (b) are plotted for
different values of $L \Lambda_{QCD}$. Obviously, $\mathscr{Z}(1)$ decreases
with increasing temperature $T$, and for larger $L$ this decrease
becomes steeper at low temperatures. In the range of temperature,
where our calculation is valid, one observes a dual behaviour
between $\mathscr{Z}(\zvec{e})$ and $\mathscr{Z}(1)$. This is expected since
$\mathscr{Z}(\zvec{e})$ emerges from $\mathscr{Z}(i) \, , \, i = 1,2,3$ through a
$Z(2)$ Fourier transform. Qualitatively, the same dual behaviour is
observed in the lattice calculation \cite{deForcrand:2001nd}.
Unfortunately, the perturbative ansatz suffers from the problem that
the one-loop approximation is valid only for high temperatures.
On the other hand, the calculation of $\mathscr{Z}(\zvec{e})$ requires all
possible twists $\zvec{k}$; in particular, $\mathscr{Z}(3)$ is known only for
$T/\Lambda_{QCD} < 0.6/(L\Lambda_{QCD})$.
Therefore, $\mathscr{Z}(\zvec{e})$ can hardly be treated within the scope of the
one-loop approximation.
\section{Conclusions}
In this paper we have carried out a one-loop calculation of the free
energy of fat center vortices in $SU(2)$ Yang-Mills theory as a function
of the temperature. The fat center vortices were induced by imposing
twisted boundary conditions on the gauge field defined on a four-torus.
These boundary conditions, in turn, were realized by Abelian background
fields which describe fat center vortices whose flux is homogeneously
distributed over the whole 4-torus. Accordingly the fluctuations around
the Abelian background fields have to satisfy quasi periodic boundary
conditions. For arbitrary combinations of electric and magnetic center
flux, i.e.~for arbitrary twist, negative eigenmodes of the fluctuations
appear. Such modes can be avoided for purely spatial or purely temporal
twists. In the case of purely spatial twist the free
energy is nearly independent of the temperature in agreement with
lattice results \cite{deForcrand:2001nd}. More interesting is the
case of purely temporal twist.
Unfortunately in this case the range of validity of our calculations is
rather restricted: for low temperatures the perturbative ansatz
is not valid while for high temperatures negative modes appear
in the fluctuation spectrum. But nevertheless in the range of validity
of the present calculation our results are in
qualitative agreement with the lattice results given in
\cite{deForcrand:2001nd}. Furthermore, the creation probability
of a thick vortex in dependence on the torus extension $L$ has been
calculated and is in agreement with the lattice results given in
\cite{kovacs:00}.
To summarize, we have been able to reproduce, at least qualitatively,
the lattice results obtained in refs.~\cite{deForcrand:2001nd,kovacs:00} by
a one-loop calculation in continuum Yang-Mills theory.
\section{Acknowledgments}
The authors are grateful to L.~v.~Smekal and K.~Langfeld
for helpful discussions.
This work has been supported by the Deutsche Forschungsgemeinschaft
under grant DFG-Re 856/4-2.
A fundamental problem in information processing is to transmit a message correctly via a noisy channel,
where the noisy channel is mathematically described by a probabilistic relation between input and output symbols.
To address this problem, we employ channel coding, which is composed of two parts:
an encoder and a decoder.
The key point of this technology is the addition of redundancy to the original message to protect it from corruption by the noise.
The simplest channel coding is transmitting the same information three times as shown in Fig. \ref{F1}.
That is, when we need to send one bit of information, $0$ or $1$, we transmit three bits, $0,0,0$ or $1,1,1$.
When an error occurs in only one of the three bits,
we can easily recover the original bit.
The conversion from $0$ or $1$ to $0,0,0$ or $1,1,1$ is called an encoder and
the conversion from the noisy three bits to the original one bit is called a decoder.
A pair of an encoder and a decoder is called a code.
In this example, the code has a large redundancy and the range of correctable errors is limited. For example, if two bits are flipped during the transmission, we cannot recover the original message.
For practical use, we need to improve on this code,
that is, decrease the amount of redundancy and enlarge the range of correctable errors.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.7]{c-channel.pdf}
\end{center}
\caption{Channel coding with three-bit code}
\Label{F1}
\end{figure}%
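As a minimal illustration (added here, not part of the original exposition),
the following Python sketch simulates the three-bit code over a memoryless
binary channel with flip probability $p$; the estimated decoding error
probability should be close to $3p^2(1-p)+p^3$:
\begin{verbatim}
# Three-bit repetition code over a binary symmetric channel.
import random

def transmit(bit, p):
    # Send the bit three times; each copy flips with probability p.
    return [bit ^ (random.random() < p) for _ in range(3)]

def decode(received):
    # Majority vote over the three received bits.
    return int(sum(received) >= 2)

p, trials, errors = 0.1, 100000, 0
for _ in range(trials):
    bit = random.randint(0, 1)
    errors += decode(transmit(bit, p)) != bit
print(errors / trials)  # close to 3p^2(1-p) + p^3 = 0.028
\end{verbatim}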
The reason for the large redundancy in the simple code described above is that the block-length (the number of bits in one block) of the code is only $3$.
In 1948, Shannon \cite{Shannon48} discovered that increasing the block-length $n$ can improve the redundancy and the range of correctable errors.
In particular, he clarified the minimum redundancy required to correct an error with probability almost $1$
with an infinitely large block-length $n$.
To discuss this problem, for a probability distribution $P$,
he introduced the quantity $H(P)$, which is called the (Shannon) entropy and expresses the uncertainty of the probability distribution $P$.
He showed that we can recover the original message by a suitable code
when the noise of each bit is independently generated subject to the probability distribution $P$, the rate of redundancy is the entropy $H(P)$,
and the block-length $n$ is infinitely large.
This fact is called the channel coding theorem.
Under these conditions, the limit of the minimum error probability
depends only on whether the rate of the redundancy is larger than the entropy $H(P)$ or not.
We can consider a similar problem when the channel is given as additive white Gaussian noise.
In this case, we cannot use the term redundancy because its meaning is not clear.
In the following, instead of this term, we employ the transmission rate,
which expresses the number of transmitted bits per one use of the channel, to characterize the speed of the transmission.
In the case of an additive white Gaussian channel, the channel coding theorem states that
the optimal transmission rate is $ \frac{1}{2}\log (1+\frac{S}{N})$, where $\frac{S}{N}$ is the signal-to-noise ratio \cite[Theorem 7.4.4]{Gallager}.
However, we cannot directly apply the channel coding theorem to actual information transmission
because this theorem guarantees only the existence of a code with the above ideal performance.
To construct a practical code, we need another type of theory, which is often called coding theory.
Many practical codes have been proposed, depending on the strength of the noise in the channel, and have been used in real communication systems.
However, although these codes realize a sufficiently small error probability, no code could attain the optimal transmission rate.
Since the 1990s, turbo codes and low-density parity check (LDPC) codes have been actively studied as useful codes \cite{BGT,MN96}.
It was theoretically shown that they can attain the optimal transmission rate
when the block-length $n$ goes to infinity.
However, still no actually constructed code could attain the optimal transmission rate.
Hence, many researchers have questioned what the real optimal transmission rate is.
Here, we should emphasize that any actually constructed code has a finite block-length and
will not necessarily attain the conventional asymptotic transmission rate.
On the other hand, in 1962, Strassen \cite{strassen} addressed this problem
by discussing the coefficient with the order $\frac{1}{\sqrt{n}}$ of the transmission rate,
which is called the second-order asymptotic theory.
The calculation of the second-order coefficient approximately gives the solution of the above problem, that is,
the real optimal transmission rate with finite block-length $n$.
Although he derived the second-order coefficient for the discrete channel,
he could not derive it for the additive white Gaussian channel.
Also, in spite of its importance, many researchers overlooked his result
because his paper was written in German.
Therefore, subsequent researchers had to rederive his result
without the benefit of his derivation.
The present paper explains how this problem has been resolved even for additive white Gaussian channel
by tracing the long history of classical and quantum information theory.
Currently, finite block-length theory is one of hottest topics in information theory
and is discussed more precisely for various situations elsewhere \cite{Pol,Pol2,Hay2,Hay1,PPV2,PPV3,TK,SKT15,KV,TT13,Han5,YHN}.
Interestingly, in the study of finite-block-length theory,
the formulation of quantum information theory becomes closer to that of classical information theory \cite{Haya20}.
In addition to reliable information transmission,
information theory studies data compression (source coding) and (secure) uniform random number generation.
In these problems, we address a code with block-length $n$.
When the information source is subject to the distribution $P$ and
the block-length $n$ is infinitely large, the optimal conversion rate is $H(P)$ in both problems.
Finite-length analysis also plays an important role in secure information transmission.
Typical secure information transmission methods are quantum cryptography and
physical layer security.
The aim of this paper is to review the finite-length analysis in these various topics in information theory.
Further,
finite-length analysis has been developed in conjunction with an unexpected effect from
the theory of quantum information transmission, which is often called quantum information theory.
Hence, we explain the relation between the finite-length analysis and quantum information theory.
The remainder of the paper is organized as follows.
First, Section \ref{S15} outlines the notation used in information theory.
Then, Section \ref{S2} explains how the quantum situation is formulated as a preparation for later sections.
Section \ref{S3} reviews the idea of an information spectrum, which is a general method used in information theory.
The information spectrum plays an important role for developing the finite-length analysis later.
Section \ref{S4} discusses folklore source coding, which is the first application of finite-length analysis.
Then, Section \ref{S5} addresses quantum cryptography, which is the first application to an implementable communication system.
After a discussion of quantum cryptography,
Section \ref{S6} deals with second-order channel coding,
which gives a fundamental bound for finite-length of codes.
Finally, Section \ref{S7} discusses the relation between finite-length analysis and physical layer security.
\section{Basics of information theory}\Label{S15}
As a preparation for the following discussion, we provide the minimum mathematical basis for a discussion of information theory.
To describe the uncertainty of a random variable $X$ subject to the distribution $P_X$ on a finite set ${\cal X}$,
Shannon introduced the Shannon entropy
$H(P_X):=- \sum_{x \in {\cal X}} P_X(x)\log P_X(x)$, which is often written as $H(X)$.
When $-\log P_X(x)$ is regarded as a random variable,
$H(P_X)$ can be regarded as its expectation under the distribution $P_X$.
When two distributions $P$ and $Q$ are given,
the entropy is concave, that is,
$\lambda H(P) +(1-\lambda )H(Q) \le H(\lambda P +(1-\lambda) Q) $ for $ 0<\lambda <1$.
Due to the concavity, the maximum of the entropy is $\log |{\cal X}|$, where
$|{\cal X}|$ is the size of ${\cal X}$.
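As a small numerical check (an added illustration), the following sketch
computes the entropy in bits and verifies the concavity inequality as well as
the maximum value $\log|{\cal X}|$:
\begin{verbatim}
# Shannon entropy (in bits) and a numerical check of concavity.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.1, 0.1, 0.8])
lam = 0.4
mix = lam * P + (1 - lam) * Q
print(lam * entropy(P) + (1 - lam) * entropy(Q) <= entropy(mix))
print(entropy(np.ones(3) / 3), np.log2(3))   # maximum is log |X|
\end{verbatim}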
To discuss the channel coding theorem, we need to consider the
conditional distribution $P_{Y|X}(y|x)=P_{Y|X=x}(y)$
where $Y$ is a random variable in the finite set ${\cal Y}$,
which describes the channel with input system ${\cal X}$ and output system ${\cal Y}$.
In other words, the distribution of the value of the random variable $Y$ depends on the value of the random variable $X$.
In this case, we have the entropy $H(P_{Y|X=x})$ dependent on the input symbol $x \in {\cal X}$.
Now, we fix a distribution $P_X$ on the input system ${\cal X}$;
taking the average of the entropy $H(P_{Y|X=x})$,
we obtain the conditional entropy $\sum_{x \in {\cal X}} P_X(x) H(P_{Y|X=x})$, which is often written as $H(Y|X)$.
That is, the conditional entropy $H(Y|X)$ can be regarded as the uncertainty of the system ${\cal Y}$ when we know the value on ${\cal X}$.
On the other hand,
when we do not know the value on ${\cal X}$,
the distribution $P_Y$ on ${\cal Y}$ is given as
$P_Y(y):=\sum_{x \in {\cal X}} P_X(x) P_{Y|X=x}(y)$.
Then, the uncertainty of the system ${\cal Y}$
is given as the entropy $H(Y):=H(P_Y)$, which is larger than the conditional entropy $ H(Y|X)$
due to the concavity of the entropy.
So, the difference $H(Y)-H(Y|X)$ can be regarded as the amount of knowledge in the system ${\cal Y}$ when we know the value on the system ${\cal X}$.
Hence, this value is called the mutual information between the two random variables $X$ and $Y$,
and is usually written as $I(X;Y)$.
Here, however,
we denote it by $I(P_X,P_{Y|X})$ to emphasize the dependence on the distribution $P_X$ over the
input system ${\cal X}$.
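These definitions are easy to evaluate for a concrete channel matrix. The
following sketch computes $I(P_X,P_{Y|X})=H(Y)-H(Y|X)$, taking a binary
symmetric channel as an example; for the uniform input its mutual information
equals $1-h(p)$ bits:
\begin{verbatim}
# Mutual information I(X;Y) = H(Y) - H(Y|X) for a channel matrix
# W[x][y] = P(y|x); example: binary symmetric channel, flip prob p.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_x, W):
    p_y = p_x @ W                    # output distribution P_Y
    h_y_given_x = sum(p_x[i] * entropy(W[i]) for i in range(len(p_x)))
    return entropy(p_y) - h_y_given_x

p = 0.1
W = np.array([[1 - p, p], [p, 1 - p]])
print(mutual_information(np.array([0.5, 0.5]), W))  # 1 - h(0.1)
\end{verbatim}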
In channel coding, we usually employ the same channel $P_{Y|X}$ repetitively and independently ($n$ times).
The whole channel is written as
the conditional distribution
$$
P_{Y^n|X^n=x^n}(y^n):=P_{Y|X=x_1}(y_1)\cdots P_{Y|X=x_n}(y_n)\;,
$$
where $x^n=(x_1, \ldots, x_n) \in {\cal X}^n$ and $y^n=(y_1, \ldots, y_n) \in {\cal Y}^n$.
This condition is called the memoryless condition.
In information theory, information intended to be sent to a receiver is called a message, and is distinguished from other types of information.
We consider the case that the sender sends a message, which is one element of the set
${\cal M}_n:= \{1, \ldots, M_n\}$, where $M_n$ expresses the number of elements in the set.
Then, the encoder $E_n$ is written as a map from ${\cal M}_n $ to ${\cal X}^n$,
and the decoder $D_n$ is written as a map from ${\cal Y}^n$ to ${\cal M}_n $.
The pair of the encoder $E_n$ and the decoder $D_n$ is called a code.
Under this formulation, we focus on the decoding error probability
$\epsilon(E_n,D_n):=\frac{1}{M_n}\sum_{m=1}^{M_n}
(1- \sum_{y^n: D_n(y^n)=E_n(m)} P_{Y^n|X^n=E_n(m)}(y^n))$, which expresses the performance of a code
$(E_n,D_n)$.
As another measure of the performance of a code $(E_n,D_n)$,
we focus on the size $M_n$, which is denoted by $|(E_n,D_n)|$ later.
Now, we impose the condition $\epsilon(E_n,D_n)\le \epsilon $ on our code $(E_n,D_n)$,
and maximize the size $|(E_n,D_n)|$.
That is, we focus on $M(\epsilon| P_{Y^n|X^n}):= \max_{(E_n,D_n)}\{|(E_n,D_n)|\; | \epsilon(E_n,D_n)\le \epsilon\} $.
In this context, the quantity $\frac{1}{n}\log M(\epsilon| P_{Y^n|X^n})$
expresses the maximum transmission rate under the above conditions.
The channel coding theorem characterizes the maximum transmission rate as follows.
\begin{align}
\lim_{n \to \infty}
\frac{1}{n}\log M(\epsilon| P_{Y^n|X^n}) =\max_{P_X} I(P_X,P_{Y|X}), \quad
0<\epsilon <1 .\Label{9-1-11}
\end{align}
The maximum value of the mutual information is called the capacity.
To characterize the mutual information, we introduce the relative entropy between two distributions $P$ and $Q$ as
$D(P\|Q):=\sum_{x \in {\cal X}}P(x) \log \frac{P(x)}{Q(x)}$.
When we introduce the joint distribution $P_{XY}(x,y):=P_X(x) P_{Y|X}(y|x)$
and the product distribution $(P_{X}\times P_Y)(x,y):= P_{X}(x) P_Y(y)$,
the mutual information is characterized as \cite{Shannon48,Gallager}
\begin{align}
I(P_X,P_{Y|X})= D(P_{XY} \|P_{X}\times P_Y)
=\min_{Q_Y}D(P_{XY} \|P_{X}\times Q_Y )
=\min_{Q_Y}\sum_{x}P_X(x) D(P_{Y|X=x}\| Q_Y ).
\end{align}
That is, the capacity is given as
\begin{align}
\max_{P_X} I(P_X,P_{Y|X})
&=\max_{P_X} D(P_{XY} \| P_{X}\times P_Y )
=\max_{P_X} \min_{Q_Y}D(P_{XY} \| P_{X}\times Q_Y ) \\
&=\max_{P_X} \min_{Q_Y}\sum_{x}P_X(x) D(P_{Y|X=x}\| Q_Y )
= \min_{Q_Y}\max_{P_X} \sum_{x}P_X(x) D(P_{Y|X=x}\| Q_Y ).
\end{align}
The final equation can be shown by using the mini-max theorem.
On the other hand,
it is known that the relative entropy $D(P\|Q)$
characterizes the performance of statistical hypothesis testing
when both hypotheses are given as distributions $P$ and $Q$.
Hence, we can expect an interesting relation between
channel coding and statistical hypothesis testing.
As a typical channel, we focus on an additive channel.
When the input and output systems ${\cal X}$ and ${\cal Y}$ are given as
the module $\mathbb Z/d\mathbb Z $,
given the input $X \in \mathbb Z/d\mathbb Z $,
the output $Y \in \mathbb Z/d\mathbb Z $ is given as
$Y=X +Z$, where $Z$ is the random variable describing the noise and is subject to the distribution $P_Z$ on $\mathbb Z/d\mathbb Z$.
Such a channel is called an additive channel or an additive noise channel.
In this case,
the conditional entropy $H(Y|X)$ is $H(P_Z)$, because
the entropy $H(P_{Y|X=x})$ equals $H(P_Z)$ for any input $x \in {\cal X}$,
and the mutual information $I(P_X,P_{Y|X})$ is given by $H(P_Y) - H(P_Z)$.
When the input distribution $ P_X$ is the uniform distribution,
the output distribution $ P_Y$ is the uniform distribution and achieves the maximum entropy
$\log d$.
So, the maximum mutual information $\max_{P_X} I(P_X,P_{Y|X})$ is given as $\log d - H(P_Z)$.
That is, the maximum transmission rate equals $\log d - H(P_Z)$.
If we do not employ the coding, the transmission rate is $\log d$.
Hence, the entropy $H(P_Z)$ can be regarded as the loss of the transmission rate due to the coding.
In this coding, we essentially add the redundancy $H(P_Z)$ in the encoding stage.
It is helpful to explain concrete constructions of codes in the case $d=2$, in which $\mathbb Z/2 \mathbb Z$ becomes the finite field $\mathbb F_2$, i.e., the set $\{0,1\}$ with the operations of modular addition and multiplication.
We assume that the additive noise $Z^n=(Z_1, \ldots, Z_n)$ is subject to the $n$-fold distribution $P_{Z}^n$ of $n$ independent and identically distributed copies of $Z \sim P_Z$. (From now on, we call such distributions ``iid distributions'' for short.) The possible transmissions are then elements of $\mathbb F_2^n$, which is the set of $n$-dimensional vectors whose entries are either $0$ or $1$.
In this case, we can consider the inner product
in the vector space $\mathbb F_2^n$ using the multiplicative and additive operations of $\mathbb F_2$.
When $P_Z(1)=p$, the entropy $H(P_Z)$ is written as $h(p)$, where
the binary entropy is defined as
$h(p):= - p \log p - (1-p)\log (1-p)$.
Since ${\cal X}^n=\mathbb F_2^n$,
we choose a subspace $C$ of $\mathbb F_2^n$ with respect to addition and we identify the message set ${\cal M}_n$ with $C$.
The encoder is given as a natural imbedding of $C$.
To find a suitable decoder,
for a given element $[y]$ of the coset space $\mathbb F_2^n/C $,
we seek the most probable element $\Gamma([y])$ within the coset $[y]=y+C$.
Hence, when we receive $y \in \mathbb F_2^n$,
we decode it to $y- \Gamma([y])$.
It is typical to employ this kind of decoder.
To identify the subspace $C$,
we often employ a parity check matrix $K$, in which case
the subspace $C$ is given as the kernel of $K$.
Using the parity check matrix $K$,
an element of the coset space $\mathbb F_2^n/C $ can be identified with
its image under the parity check matrix $K$, which is called the syndrome.
In this case, we denote the encoder by $E_K$.
When $\Gamma([y]) $ realizes $\max_{x^n \in [y]} P_Z^n(x^n)$,
the decoder is called the maximum likelihood decoder.
This decoder gives the minimum decoding error $\epsilon(E_K,D)$.
As another decoder, we can choose $\Gamma([y]) $ such that
$\Gamma([y]) $ realizes $\min_{x^n \in [y]} |x^n|$,
where $|x^n|$ is the number of appearances of $1$ among the $n$ entries.
This decoder is called the minimum distance decoder.
When $P_Z(0)>P_Z(1)$, the maximum likelihood decoder is the same as the minimum distance decoder.
We denote the minimum distance decoder by $D_{K,\min}$.
This type of code is often called an error correcting code.
When most of the entries of the parity check matrix $K$ are zero,
the parity check matrix $K$ is called an LDPC matrix.
When the subspace $C$ is given as the kernel of
an LDPC matrix,
the code is called the LDPC code.
In this case, it is known that
a good decoder can be realized with a small calculation complexity
\cite{BGT,MN96}.
Hence, an LDPC code is used for practical purposes.
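The following sketch illustrates the decoding procedure described above for
the $[7,4]$ Hamming code, a standard small example; the code $C$ is the kernel
of the parity check matrix $K$, and the coset leader table realizes the
minimum distance decoder $D_{K,\min}$:
\begin{verbatim}
# Syndrome (minimum distance) decoding for the [7,4] Hamming code.
import itertools
import numpy as np

K = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity check matrix over F_2
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Coset leader table: minimum-weight noise vector for each syndrome.
leaders = {}
for bits in itertools.product([0, 1], repeat=7):
    z = np.array(bits)
    s = tuple(K @ z % 2)
    if s not in leaders or z.sum() < leaders[s].sum():
        leaders[s] = z

def decode(y):
    # Subtract the most likely noise Gamma([y]) from y.
    return (y - leaders[tuple(K @ y % 2)]) % 2

noisy = np.zeros(7, dtype=int)   # 0 is a codeword
noisy[3] ^= 1                    # flip one bit
print(decode(noisy))             # recovers the all-zero codeword
\end{verbatim}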
\section{Information Transmission via Quantum Coding}\Label{S2}
To discuss the information transmission problem,
we eventually need to address the properties of the physical media carrying the information.
When we approach the ultimate limit of the information transmission rate as a theoretical problem,
we need to consider the case when individual particles express each bit of information.
That is, we focus on the information transmission rate under such an extreme situation.
To realize the ultimate transmission rate, we need to use every photon (or every pulse) to describe one piece of information.
Since the physical medium used to transmit the information behaves quantum mechanically under such conditions,
the description of the information system needs to reflect this quantum nature.
Several researchers, such as Takahasi \cite{Takahasi}, started to consider the limit of optical communication in the 1960s.
In 1967, Helstrom \cite{Hel:1,Hel:2} started to systematically formulate this problem as a new type of information processing system based on quantum theory instead of
an information transmission system based on classical mechanical input and output, which obeys conventional probability theory.
The study of information transmission based on such quantum media
is called quantum information theory.
In particular, research on channel coding for quantum media is called quantum channel coding.
In contrast, information theory based on the conventional probability theory
is called classical information theory
when we need to distinguish it from quantum information theory,
even when the devices employ quantum effects in their insides,
because the input and the output are based on classical mechanics.
Quantum information theory in its earlier stage has been studied more deeply by Holevo
and is systematically summarized in his book \cite{HolP} in 1980.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics[scale=1]{clip092.eps}}
\end{center}
\caption{Classical channel coding for optical communication.
Dashed thick arrows indicate quantum state transmission.
Normal thin arrows indicate classical information.}
\Label{F2}
\end{figure}%
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics[scale=1]{clip093.eps}}
\end{center}
\caption{Quantum channel coding for optical communication.
Dashed thick arrows indicate quantum state transmission.
Normal thin arrows indicate classical information.}
\Label{F3}
\end{figure}%
Here, we point out that current optical communication systems are treated in the framework of classical information theory.
However, optical communication can be treated in both
classical and quantum information theory as follows (Figs. \ref{F2} and \ref{F3}).
Because the framework of classical information theory cannot deal with a quantum system,
to consider optical communication within classical information theory,
we need to fix the modulator converting the input signal to the input quantum state
and the detector converting the output quantum state to the outcome, as shown in Fig. \ref{F2}.
Once we fix these,
we have the conditional distribution connecting the input and output symbols,
which describes the channel in the framework of classical information theory.
That is, we can apply classical information theory to the classical channel.
The encoder is the process converting the message (to be sent) to the input signal,
and the decoder is the process recovering the message from the outcome.
On the other hand, when we discuss optical communication within the framework of quantum information theory as shown in Fig. \ref{F3},
we focus on the quantum channel, whose input and output are given as quantum states.
When the quantum system is characterized by the Hilbert space ${\cal H}$,
a quantum state is given as a density matrix $\rho$ on ${\cal H}$, which is a positive semi-definite matrix with trace $1$.
Within this framework, we combine a classical encoder and a modulator into a quantum encoder, in which
the message is directly converted to the input quantum state.
Similarly, we combine
a classical decoder and a detector into a quantum decoder, in which
the message is directly recovered from the output quantum state.
Once the optical communication is treated in the framework of quantum information theory,
our coding operation is given as the combination of a quantum encoder and a quantum decoder.
This framework allows us to employ physical processes across multiple pulses as
a quantum encoder or decoder,
so quantum information theory clarifies how much such a correlating operation enhances the
information transmission speed.
It is also possible to fix only the modulator and discuss
the combination of a classical encoder and a quantum decoder,
which is called classical-quantum channel coding, as shown in Fig. \ref{F4}.
A classical-quantum channel is given as a map from
an element $x$ of the input classical system
${\cal X}$ to an output quantum state $\rho_x$,
which is given as a density matrix on the output quantum system ${\cal H}$.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics[scale=1]{clip159.eps}}
\end{center}
\caption{Classical-quantum channel coding for optical communication.
Dashed thick arrows indicate quantum state transmission.
Normal thin arrows indicate classical information.}
\Label{F4}
\end{figure}%
Here, we remark that the framework of quantum information theory mathematically
contains the framework of classical information theory as the commutative special case, that is, the case when all $\rho_x$ commute with each other.
This character is in contrast to
the fact that a quantum Turing machine does not contain the conventional Turing machine as the commutative special case.
Hence, when we obtain a novel result in quantum information theory
and it is still novel even in the commutative special case,
it is automatically novel in classical information theory.
This is a major advantage and became a driving force for later
unexpected theoretical developments.
A remarkable achievement of the early stage was made by Holevo in 1979, who obtained a partial result for the classical-quantum channel coding theorem \cite{Holevo-bounds,Holevo-bounds2}.
However, this research direction entered a period of stagnation in the 1980s.
In the 1990s,
quantum information theory entered a new phase and
was studied from a new viewpoint.
For example, Schumacher introduced the concept of a typical sequence in a quantum system \cite{Schumacher}.
This idea brought us new developments and
enabled us to extend data compression to the quantum setting \cite{Schumacher}.
Based on this idea,
Holevo \cite{HoCh} and Schumacher and Westmoreland \cite{SW}
independently proved the classical-quantum channel coding theorem, which had been unsolved until that time.
Unfortunately, a quantum operation in the framework of quantum information theory
is not necessarily available with the current technology.
Hence, these achievements remain more theoretical
than the classical channel coding theorem.
However, such theoretical results have, in a sense, brought us more practical results, as we shall see later.
Now, we give a formal statement of the quantum channel coding theorem for the classical-quantum channel $x \mapsto \rho_x$.
For this purpose, we introduce the von Neumann entropy
$H(\rho):= - {\rm Tr}\, \rho \log \rho$ for a given density matrix $\rho$.
It is known that the von Neumann entropy is also concave
just as in the classical case.
When we employ the same classical-quantum channel $n$ times,
the total classical-quantum channel
is given as a map
$x^n(\in {\cal X}^n) \mapsto
\rho^{(n)}_{x^n}:=
\rho_{x_1}\otimes \cdots \otimes \rho_{x_n}$.
While an encoder is given in the same way as in the classical case,
a decoder is defined differently because it is given by a quantum measurement on the output quantum system ${\cal H}$.
The most general description of a quantum measurement on the output quantum system ${\cal H}$ is given by a positive operator-valued measure
$D_n=\{\Pi_m\}_{m=1}^{M_n}$, in which each $\Pi_m$ is a positive semi-definite matrix on ${\cal H}$ and the condition $\sum_{m=1}^{M_n} \Pi_m=I$ holds.
As explained in \cite[(4.7)]{Hay-book}\cite[(8.48)]{book2},
the decoding error probability is given as
$\epsilon(E_n,D_n):=
\frac{1}{M_n}\sum_{m=1}^{M_n} (1- {\rm Tr}\, \Pi_m \rho^{(n)}_{E_n(m)})$.
So, we can define
the maximum transmission size
$M(\epsilon| \rho^{(n)}_{\cdot}):= \max_{(E_n,D_n)}\{|(E_n,D_n)| | \epsilon(E_n,D_n)\le \epsilon\} $.
On the other hand, the mutual information
is defined as
$I(P_X,\rho_{\cdot}):=H(\sum_{x}P_X(x) \rho_x)-\sum_{x}P_X(x) H( \rho_x)$.
So, the maximum transmission
rate is characterized by the quantum channel coding theorem as follows
\begin{align}
\lim_{n \to \infty}
\frac{1}{n}\log M(\epsilon|\rho^{(n)}_{\cdot})
=\max_{P_X} I(P_X,\rho_{\cdot}),
\quad 0<\epsilon <1.
\end{align}
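The following sketch (an added illustration) evaluates the mutual information
$I(P_X,\rho_\cdot)$ for a hypothetical classical-quantum channel with two pure
qubit output states:
\begin{verbatim}
# Holevo quantity H(sum_x P(x) rho_x) - sum_x P(x) H(rho_x)
# for a hypothetical pair of pure qubit states |0> and |+>.
import numpy as np

def von_neumann_entropy(rho):
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]
    return -np.sum(eig * np.log2(eig))

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
p_x = [0.5, 0.5]

rho_avg = sum(p * r for p, r in zip(p_x, rho))
holevo = von_neumann_entropy(rho_avg) - sum(
    p * von_neumann_entropy(r) for p, r in zip(p_x, rho))
print(holevo)   # about 0.601 bits; pure states have zero entropy
\end{verbatim}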
To characterize the mutual information $I(P_X,\rho_{\cdot})$,
we denote the classical system ${\cal X}$
by using the quantum system ${\cal H}_X$ spanned by $|x\rangle$
and introduce the density matrix
$\rho_{XY}:=\sum_{x \in {\cal X}}P_X(x) |x\rangle \langle x| \otimes \rho_x$ on the joint system ${\cal H}_X \otimes {\cal H}$
and
the density matrix
$\rho_{Y}:=\sum_{x \in {\cal X}}P_X(x) \rho_x$ on the quantum system ${\cal H}$.
In this notation, we regard $P_X$ as the density matrix
$\sum_{x \in {\cal X}}P_X(x) |x\rangle \langle x| $ on ${\cal H}_X$.
Using the quantum relative entropy
$D(\rho\|\sigma):= {\rm Tr}\, \rho (\log \rho-\log \sigma)$
between two density matrices $\rho$ and $\sigma$,
the mutual information is written as
\begin{align}
I(P_X,\rho_{\cdot})=
D(\rho_{XY} \| P_X \otimes \rho_Y)
=\min_{\sigma_Y}
D(\rho_{XY} \| P_X \otimes \sigma_Y).
\end{align}
So, the capacity is given by
\begin{align}
\max_{P_X}I(P_X,\rho_{\cdot})=
\max_{P_X}D(\rho_{XY} \| P_X \otimes \rho_Y)
=\max_{P_X}\min_{\sigma_Y}
D(\rho_{XY} \| P_X \otimes \sigma_Y).
\end{align}
Here, it is necessary to discuss the relation between classical and quantum information theory.
For this purpose, we focus on information transmission via communication on an optical fiber.
When we employ coding in classical information theory,
we choose a code based on classical information devices, which are
the input and the output of the classical channel shown in Fig. \ref{F2}.
In contrast,
when we employ coding in quantum information theory,
we choose a code based on quantum information devices, which
are the input and the output of the quantum channel shown in Fig. \ref{F3}.
In the case of Fig. \ref{F4}, we address the classical-quantum channel so
that we focus on the output system as a quantum information device.
That is, the choice between classical and quantum information theory
is determined by the choice of a classical or quantum information device,
respectively.
\section{Information Spectrum}\Label{S3}
The early stage of the development of finite block-length studies
started from a completely different motivation and used
the information spectrum method introduced by Han and Verd\'{u}\cite{resolvability,Han1}.
Conventional studies in information theory
usually impose the iid or memoryless condition on the information source or the channel.
However, neither the information source nor the channel is usually independent in the actual case and
they often have correlations.
Hence, information theory needed to be adapted for such a situation.
To resolve such a problem, Verd\'{u} and Han have discussed
optimal performance in the context of several topics in classical information theory, including channel coding, by using the behavior of the logarithmic likelihood, as shown in Fig. \ref{F8}\cite{Verdu-Han}.
However, they have discussed only the case when the block-length $n$ approaches infinity, and have not studied the case with finite block-length.
It is notable that this study clarified
that the analysis of the iid case
can be reduced to the law of large numbers.
In this way, the information spectrum method has clarified the mathematical structures of many topics in information theory, which has worked as a silent trigger for further developments.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\includegraphics[scale=1]{clip162.eps}}
\end{center}
\caption{Structure of information spectrum:
The information spectrum method discusses the problem in steps.
One is the step to connect the information source and the behavior of the logarithmic likelihood.
The other is the step to connect the behavior of the logarithmic likelihood and the optimal performances in the respective tasks.}
\Label{F8}
\end{figure}%
Another important contribution of the information spectrum method is
the connection of simple statistical hypothesis testing to many topics in classical information theory \cite{Han1}.
Here, simple statistical hypothesis testing is the problem
of deciding which candidate is the true distribution with an asymmetric treatment of two kinds of errors when two candidates for the true distribution are given.
In particular, the information spectrum method has revealed that
the performances of
data compression and uniform random number generation
are given by the behavior of the logarithmic likelihood.
Here, we briefly discuss the idea of the information spectrum approach in the case of
uniform random number generation.
Let ${\cal X}_n$ be the original system, where $n$ is an index.
The product set ${\cal X}^n$ is a typical example of this notation.
In uniform random number generation, we prepare another set ${\cal Y}_n$,
in which we generate an approximate uniform random number $Y_n$.
In this formulation, we focus on the initial distribution $P_{X_n}$ on ${\cal X}_n$.
Then, our operation is given as a map $\phi_n$ from ${\cal X}_n$ to ${\cal Y}_n$.
The resultant distribution on ${\cal Y}_n$
is given as $P_{X_n}\circ \phi_n^{-1}(y):= \sum_{x : \phi_n(x)=y}P_{X_n}(x)$.
To discuss the quality of the resultant uniform random number,
we employ the uniform distribution
$P_{{\cal Y}_n,\mathop{\rm mix}}(y):= \frac{1}{|{\cal Y}_n|}$ on ${\cal Y}_n$.
So, the error of the operation $\phi_n$ is given as
$\gamma(\phi_n):=
\frac{1}{2}\sum_{y \in {\cal Y}_n}
|P_{X_n}\circ \phi_n^{-1}(y)-P_{{\cal Y}_n,\mathop{\rm mix}}(y)|$.
Now, we define the maximum size of the uniform random number with error $\epsilon$ as
$S_n(\epsilon| P_{X_n}):= \max_{\phi_n} \{ |{\cal Y}_n|
| \gamma(\phi_n) \le \epsilon\}$.
Vembu and Verd\'{u} \cite[Section V]{Vembu-Verdu} showed that
\begin{align}
\lim_{n \to \infty}
\frac{1}{n} \log S_n(\epsilon| P_{X_n})
=\sup_R \Big\{
R
\Big| \lim_{n \to \infty}
P_{X_n} \Big\{x \in {\cal X}_n \Big| -\frac{1}{n}\log P_{X_n}(x) \le R\Big\}
\le \epsilon \Big\}.
\end{align}
This fact shows that the generation rate
$\frac{1}{n} \log S_n(\epsilon| P_{X_n})$
is essentially described by the random variable $-\frac{1}{n}\log P_{X_n}(x)$.
When ${\cal X}_n$ is ${\cal X}^n$ and $P_{X_n}$ is the iid distribution $P_X^n$ of $P_X$,
the random variable $-\frac{1}{n}\log P_{X_n}(x)$ converges to
the entropy $H(P_X)$ in probability due to the law of large numbers.
In the iid case, the generation rate equals the entropy $H(P_X)$.
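This reduction to the law of large numbers can be checked by a small
simulation. The following sketch samples an iid binary source and shows that
$-\frac{1}{n}\log P_{X}^n(x^n)$ concentrates around the entropy $H(P_X)$:
\begin{verbatim}
# Concentration of -(1/n) log P^n(X^n) around H(P) for an iid
# binary source with P(1) = 0.3.
import numpy as np

rng = np.random.default_rng(0)
p, n, samples = 0.3, 1000, 2000
x = rng.random((samples, n)) < p              # iid bits
ones = x.sum(axis=1)
loglik = -(ones * np.log2(p) + (n - ones) * np.log2(1 - p)) / n
entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
print(loglik.mean(), loglik.std(), entropy)   # mean ~ h(0.3) = 0.881
\end{verbatim}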
In the channel coding case, we focus on a general conditional distribution
$P_{Y_n|X_n}(y|x)$ as the channel.
Then, Verd\'{u} and Han \cite{Verdu-Han} derived the maximum transmission rate as
\begin{align}
\lim_{n \to \infty}
\frac{1}{n} \log M(\epsilon| P_{Y_n|X_n})
=
\sup_{\{P_{X_n}\}}
\sup_R \Big\{
R\Big|
\lim_{n \to \infty}
P_{X_n,Y_n} \Big\{(x,y) \in {\cal X}_n\times {\cal Y}_n
\Big| \frac{1}{n}\log \frac{P_{Y_n|X_n}(y|x)}{P_{Y_n}(y)} \le R\Big\}
\le \epsilon \Big\}.\Label{8-31-a}
\end{align}
Although we can derive the formula \eqref{9-1-11} from this general formulation,
it is not so easy because
the above formula contains the maximization $\sup_{P_{X_n}}$ of the input distribution on the large system ${\cal X}_n$.
When the channel $P_{Y_n|X_n}$ is given as the additive channel with the additive noise distribution $P_{Z_n}$ as $P_{Y_n|X_n}(y|x)=P_{Z_n}(y-x)$,
the above formula can be simplified as
\begin{align}
\lim_{n \to \infty}
\frac{1}{n} \log M(\epsilon| P_{Y_n|X_n})
=
\sup_R \Big\{
R\Big| \lim_{n \to \infty}
P_{Z_n} \Big\{z \in {\cal Z}_n
\Big| \frac{1}{n}(\log P_{Z_n}(z) + \log |{\cal Z}_n|)\le R\Big\}
\le \epsilon \Big\}.\Label{8-31-ab}
\end{align}
Note that ${\cal Z}_n$ is the same set as ${\cal X}_n$ and ${\cal Y}_n$
when the channel is additive.
As already mentioned, the information spectrum approach was started as a result of
a motivation different from the above.
When Han and Verd\'{u} \cite{resolvability} introduced this method, they considered identification codes, which were initially introduced by Ahlswede and Dueck \cite{AD}.
To resolve this problem, Han and Verd\'{u} introduced another problem---channel resolvability---
which discusses the approximation of a given output distribution by the input uniform distribution on a small subset.
That is, they consider
\begin{align}
T(\epsilon|P_{Y_n|X_n})&:=
\max_{P_{X_n}}
T(\epsilon|P_{X_n},P_{Y_n|X_n}),
\end{align}
and
\begin{align}
&T(\epsilon|P_{X_n},P_{Y_n|X_n})\nonumber \\
:=&
\min_{{\cal T}_n} \min_{\phi_n}
\Bigg\{|{\cal T}_n| ~\Bigg| \frac{1}{2}
\sum_{y \in {\cal Y}_n}
\bigg|
\sum_{x\in {\cal X}_n} P_{Y_n|X_n}(y|x)P_{X_n}(x) -
\sum_{x\in {\cal X}_n} P_{Y_n|X_n}(y|x)
\sum_{u:\phi_n(u)=x}
P_{{\cal T}_n,\mathop{\rm mix}}(x)
\bigg|
\le \epsilon
\Bigg\},
\end{align}
where
$\phi_n$ is chosen as a function from ${\cal T}_n$ to ${\cal X}_n$.
They showed that
\begin{align}
&\lim_{\epsilon \to 0}
\lim_{n \to \infty}
\frac{1}{n} \log T(\epsilon|P_{Y_n|X_n}) \nonumber \\
=&
\lim_{\epsilon \to 0}
\sup_{ \{P_{X_n}\}}
\sup_R \Big\{
R\Big|
\lim_{n \to \infty}
P_{X_n,Y_n} \Big\{(x,y) \in {\cal X}_n\times {\cal Y}_n
\Big| \frac{1}{n}\log \frac{P_{Y_n|X_n}(y|x)}{P_{Y_n}(y)} \le R\Big\}
\le \epsilon \Big\}.
\Label{9-3-1}
\end{align}
By considering this problem, they introduced the new concept of
channel resolvability, which
later played an important role in a completely different topic.
In the next stage, Nagaoka and the author extended the information spectrum method to the quantum case \cite{Nag-Hay,Hay-Nag}.
In this extension, their contribution is not only the non-commutative extension but also the redevelopment of information theory.
In particular, they have given a deeper clarification of the explicit relation between
simple statistical hypothesis testing and channel coding, which is called the
dependence test bound in the later study \cite[Remark 15]{Hay-Nag}.
In this context, Nagaoka \cite{Naga-EQIS} has developed another explicit relation between simple statistical hypothesis testing and channel coding, which is called the meta converse inequality\footnote{Unfortunately,
due to page limitations, the present paper cannot give a detailed derivation.
However, a detailed discussion is available in Section 4.6 of the book \cite{Hay-book}.}.
These two clarifications of the relation between
simple statistical hypothesis testing and channel coding
work as a preparation for the next step of finite-length analysis.
Now, to grasp the essence of these contributions,
we revisit the classical setting
because the quantum situation is more complicated.
To explain the notation of classical hypothesis testing,
we consider testing between two distributions
$P_1$ and $P_0$ on the same system ${\cal X}$.
Generally, our testing method is written by using
a function $T$ from ${\cal X}$ to $[0,1]$ as follows.
When we observe $x \in {\cal X}$,
we support $P_1$ with the probability $T(x)$, and support $P_0$ with the probability $1-T(x)$.
When the function $T$ takes values only in $\{0,1\}$,
our decision is deterministic.
In this problem, we have two types of error probability.
The first one is the probability for erroneously supporting $P_1$ while the true distribution is $P_0$, which is given as
$\alpha(T|P_0\|P_1):=\sum_{x\in {\cal X}}T(x) P_0(x)$.
The second one is the probability for erroneously supporting $P_0$ while the true distribution is $P_1$, which is given as
$\beta(T|P_0\|P_1):=\sum_{x\in {\cal X}}(1-T(x)) P_1(x)$.
Then, we consider the minimum second error probability under the constraint of a constant probability for the first error
as
$\beta(\epsilon|P_0\|P_1):= \min_{T}\{
\beta(T|P_0\|P_1)| \alpha(T|P_0\|P_1) \le \epsilon\}$.
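For a finite set ${\cal X}$, the quantity $\beta(\epsilon|P_0\|P_1)$ can be
computed exactly via the Neyman--Pearson lemma: set $T(x)=1$ on the symbols
with the largest likelihood ratio $P_1(x)/P_0(x)$ until the budget $\epsilon$
for the first error probability is exhausted, randomizing on the boundary
symbol. The following sketch implements this recipe (assuming $P_0(x)>0$ for
all $x$):
\begin{verbatim}
# Neyman-Pearson computation of beta(eps | P0 || P1) on a finite
# alphabet; assumes P0 > 0 everywhere.
import numpy as np

def beta(eps, p0, p1):
    order = np.argsort(-(p1 / p0))   # largest likelihood ratio first
    budget, b = eps, 1.0
    for x in order:
        if p0[x] <= budget:          # support P1 fully on symbol x
            budget -= p0[x]
            b -= p1[x]
        else:                        # randomize on the boundary
            b -= p1[x] * budget / p0[x]
            break
    return b

p0 = np.array([0.5, 0.3, 0.2])
p1 = np.array([0.2, 0.3, 0.5])
print(beta(0.1, p0, p1))             # 0.75
\end{verbatim}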
To overcome the problem with respect to $\sup_{P_{X_n}}$ in \eqref{8-31-a},
for a given channel $P_{Y|X}$,
Nagaoka \cite{Naga-EQIS} derived the meta converse inequality:
\begin{align}
M(\epsilon| P_{Y|X}) \le
\max_{P_X}
\beta(\epsilon|P_{XY}\|P_{X}\times Q_Y)^{-1}\Label{8-31-6}
\end{align}
for any distribution $Q_Y$ on ${\cal Y}$.
Also, the author and Nagaoka derived the dependence test bound as follows \cite[Remark 15]{Hay-Nag}.
For a given distribution $P_X$ on ${\cal X}$
and a positive integer $N$,
there exists a code $(E,D)$ such that $|(E,D)|=N$ and\footnote{In the quantum case, they found a slightly weaker inequality.
However, we can trivially derive \eqref{8-31-5} from their derivation in the commutative case.}
\begin{align}
\epsilon(E,D) \le
\epsilon+ N \beta(\epsilon|P_{XY}\|P_{X}\times P_Y).\Label{8-31-5B}
\end{align}
That is, for any $\delta>0$ and $\epsilon>0$, we have
\begin{align}
M(\epsilon+\delta | P_{Y|X}) \ge
\max_{P_X}
\delta \beta(\epsilon|P_{XY}\|P_{X}\times P_Y)^{-1}.\Label{8-31-5}
\end{align}
Here, \eqref{8-31-5} follows from \eqref{8-31-5B} by putting $\delta= N \beta(\epsilon|P_{XY}\|P_{X}\times P_Y)$.
Then, using \eqref{8-31-5},
the author and Nagaoka derived the $\ge$ part of \eqref{8-31-a}
including the quantum extension.
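The bound \eqref{8-31-5} can be evaluated concretely for finite $n$. For a
binary symmetric channel with uniform input, the likelihood ratio between
$P_{XY}^n$ and $(P_X\times P_Y)^n$ depends only on the number of mismatches
between input and output, so $\beta$ reduces to binomial tail probabilities.
The following sketch computes the resulting lower bound on the transmission
rate; as $n$ grows it approaches the capacity $1-h(p)$ bits:
\begin{verbatim}
# Dependence test bound for n uses of a binary symmetric channel
# with uniform input: under P_XY^n the number of mismatches K is
# Binom(n, p); under (P_X x P_Y)^n it is Binom(n, 1/2).
import numpy as np
from scipy.stats import binom

def dependence_test_rate(n, p, eps, delta):
    # Support "independent" when K >= k; randomize at the boundary.
    for k in range(n + 1):
        alpha = binom.sf(k - 1, n, p)          # P_p(K >= k)
        if alpha <= eps:
            break
    theta = (eps - alpha) / binom.pmf(k - 1, n, p)
    b = binom.cdf(k - 1, n, 0.5) - theta * binom.pmf(k - 1, n, 0.5)
    return np.log2(delta / b) / n              # bits per channel use

print(dependence_test_rate(n=1000, p=0.1, eps=0.005, delta=0.005))
# about 0.45; approaches the capacity 1 - h(0.1) = 0.531 as n grows
\end{verbatim}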
Also,
using \eqref{8-31-6},
the author and Nagaoka derived another expression for
\eqref{8-31-a}:
\begin{align}
&\lim_{n \to \infty}
\frac{1}{n} \log M(\epsilon| P_{Y_n|X_n}) \nonumber \\
=&
\inf_{\{Q_{Y_n}\}}
\sup_{\{P_{X_n}\}}
\sup_R \bigg\{
R\bigg|
\lim_{n \to \infty}
P_{X_n,Y_n} \bigg\{(x,y) \in {\cal X}_n\times {\cal Y}_n
\bigg| \frac{1}{n}\log \frac{P_{Y_n|X_n}(y|x)}{Q_{Y_n}(y)} \le R\bigg\}
\le \epsilon \bigg\}.\Label{8-31-b}
\end{align}
While \eqref{8-31-b} seems more complicated than \eqref{8-31-a},
\eqref{8-31-b} is more useful for proving the impossibility part for the following reason.
In \eqref{8-31-a}, the distribution $P_{Y_n}$ has a complicated form in general.
Hence, it is quite difficult to evaluate the behavior of
$\frac{1}{n}\log \frac{P_{Y_n|X_n}(y|x)}{P_{Y_n}(y)} $.
When we derive the upper bound of
$\lim_{n \to \infty}\frac{1}{n} \log M(\epsilon| P_{Y_n|X_n})$,
it is enough to consider the case with a special $Q_{Y_n}$.
That is, $Q_{Y_n}$ can be chosen to be a distribution for iid random variables so that
the random variable $\frac{1}{n}\log \frac{P_{Y_n|X_n=x}(y)}{Q_{Y_n}(y)} $ can be factorized.
Then, the impossibility part of the channel coding theorem
can be easily shown via \eqref{8-31-b}.
Indeed, since the classical case is not so complicated,
it is possible to recover several important results from \eqref{8-31-a}.
However, use of the formula \eqref{8-31-b} is needed in the quantum case
because everything becomes more complicated.
\section{Folklore in Source Coding}\Label{S4}
When the information source is subject to the iid distribution $P_X^n$ of $P_X$,
the compression rate and the uniform random number generation rate have the same value of $H(P_X)$ asymptotically.
Hence, we can expect that the data compressed up to the entropy rate
$H(P_X)$ would be the uniform random number.
However, this argument does not work as a proof of the statement, so
this conjecture has the status of folklore in source coding,
and its validity remained unconfirmed for a long time.
Han \cite{Han:Folk} tackled this problem by using the method of information spectrum.
Han focused on the normalized relative entropy
$\frac{1}{n}D(P_{X}^n \circ \phi_n^{-1}\| P_{{\cal Z}_n,\mathop{\rm mix}})$
as the criterion to measure
the difference of the generated random number from a uniform random number,
and showed that the folklore in source coding is valid \cite{Han:Folk}.
However, the normalized relative entropy is too loose a criterion to guarantee
the quality of the uniform random number
because it is possible to distinguish a generated random number from a truly uniform random number even though the random number is considered to be uniform by this criterion.
In particular, when a random number is used for cryptography,
we need to employ a more rigorous criterion to judge the quality of its uniformity.
In contrast, the criterion $\gamma(\phi_n) $ is the most popular criterion
which gives the statistical distinguishability between a truly uniform random number and a given random number \cite{R-K}.
That is, when this criterion takes the value $0$,
the random number must be truly uniform.
Hence, when we use a random number for cryptography,
we need to accept only a random number passing this criterion.
Also, Han \cite{Han:Folk} has proved that the folklore conjecture in source coding is not valid
when we adopt the variational distance as our criterion.
On the other hand, to clarify the incompatibility between data compression and uniform random number generation,
the author \cite{Hay2} developed a theory for finite-block-length codes for both topics.
In this analysis,
he applied the method of information spectrum to the second-order $\sqrt{n}$ term, as shown in Fig. \ref{F8}.
That is,
by using the varentropy $V(P_X):=
\sum_{x\in {\cal X}} P_X(x) (- \log P_X(x)-H(P_X))^2$,
the central limit theorem guarantees that
\begin{align}
\lim_{n \to \infty}
P_X^n \{x^n \in {\cal X}^n \mid
(-\log P_{X}^n(x^n)- nH(P_X))/\sqrt{n} \le \sqrt{V(P_X)}\Phi^{-1}(\epsilon)\}
= \epsilon,
\Label{12-9-1}
\end{align}
where the cumulative distribution function $\Phi$ of the standard Gaussian distribution is defined as
$\Phi(a):= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^a e^{-\frac{t^2}{2}}dt$.
So, the generation length $\log S(\epsilon |P_X^n)$ is asymptotically expanded as
\begin{align}
\log S(\epsilon |P_X^n)
= n H(P_X) + \sqrt{n}\sqrt{V(P_X)}\Phi^{-1}(\epsilon)
+ o(\sqrt{n}).\Label{8-31-9}
\end{align}
Now, we consider data compression, in which we define
the minimum compressed size
$ R(\epsilon |P_X^n)$ with decoding error $\epsilon$
in the same way.
Then, the asymptotic expansion is \cite{strassen,Hay2}
\begin{align}
\log R(\epsilon |P_X^n)= n H(P_X) - \sqrt{n} \sqrt{V(P_X)}\Phi^{-1}(\epsilon)
+ o(\sqrt{n}).\Label{8-31-10}
\end{align}
That is, when the converted length has the asymptotic expansion
$ n H(P_X) - \sqrt{n}\sqrt{V(P_X)} R $,
the errors of both settings are illustrated in Fig. \ref{F7}.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\includegraphics[scale=1]{trade-off.eps}}
\end{center}
\caption{Asymptotic trade-off relation between errors of data compression and
uniform random number generation:
When we focus on the second-order coding rate,
the minimum error of data compression is the probability of the exclusive event
of the minimum error of uniform random number generation.}
\Label{F7}
\end{figure}%
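As a rough numerical illustration of the expansions \eqref{8-31-9} and \eqref{8-31-10} (not taken from the original papers), the following Python sketch evaluates the two second-order approximations for a biased coin, dropping the $o(\sqrt{n})$ terms; all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def H(p):   # Shannon entropy (nats)
    p = np.asarray(p, dtype=float); q = p[p > 0]
    return float(-(q * np.log(q)).sum())

def V(p):   # varentropy
    p = np.asarray(p, dtype=float); q = p[p > 0]
    return float((q * (-np.log(q) - H(p)) ** 2).sum())

p, n, eps = [0.11, 0.89], 10**6, 0.01
# (8-31-9): uniform random number generation length
S_len = n * H(p) + np.sqrt(n * V(p)) * norm.ppf(eps)
# (8-31-10): minimum compressed size
R_len = n * H(p) - np.sqrt(n * V(p)) * norm.ppf(eps)
print(S_len / n, R_len / n)   # both rates approach H(p) as n grows
\end{verbatim}
Since $\Phi^{-1}(\epsilon)<0$ for $\epsilon<1/2$, the generation length stays below $nH(P_X)$ while the compression length stays above it, which is exactly the gap behind the trade-off in Fig. \ref{F7}.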
Now, we fix the conversion rate up to the second-order $\frac{1}{\sqrt{n}}$.
When we apply an operation from the system ${\cal X}^n$
to a system with size $e^{nH(P_X)+\sqrt{n}R}$,
the sum of the errors of
the data compression and the uniform random number generation is almost equal to $1$.
This trade-off relation shows that
data compression and uniform random number generation
are incompatible with each other.
Indeed, since the task of data compression has the direction opposite to that of uniform random number generation,
the second-order analysis explicitly clarifies that there is a trade-off relation for their errors rather than compatibility.
Although the evaluation of optimal performance up to the second-order coefficient gives
an approximation of the finite-length analysis,
it also shows the existence of their trade-off relation.
This application shows the importance of the second-order analysis.
Because the evaluation of the uniformity of a random number is closely related to
security,
this type of evaluation has been applied to security analysis \cite{Watanabe}.
This trade-off relation also plays an important role
when we use the compressed data as the scramble random variable for another piece of information \cite{H-RM}.
\section{Quantum cryptography}\Label{S5}
\subsection{Single-photon pulse without noise}\Label{S5A}
Section \ref{S2} has explained that the problem of the ultimate performance of optical communication
can be treated as quantum channel coding.
When the communication media has quantum properties,
it opens the possibility of a new communication style that cannot be realized with the preceding technology.
Quantum cryptography was proposed by Bennett and Brassard \cite{BB84} in 1984 as
a technology to distribute secure keys by using quantum media.
Even when the key is eavesdropped during the distribution,
this method enables us to detect the existence of the eavesdropper with high probability.
Hence, this method realizes secure key distribution,
and is called quantum key distribution (QKD).
Now, we explain the original QKD protocol based on single-photon transmission.
In the QKD,
the sender, Alice, needs to generate four kinds of states in the two-dimensional system $\mathbb C^2$, namely,
$|0\rangle,|1\rangle,$ and $|\pm \rangle:=
\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)$\footnote{In the study of cryptography,
we call the authorized sender, the authorized receiver, and the eavesdropper
Alice, Bob, and Eve, respectively.}.
Here, $\{|0\rangle,|1\rangle\}$ is called the bit basis,
and $\{|\pm\rangle\}$ is called the phase basis.
Also, the receiver, Bob, needs to measure the received quantum state by using either the bit basis or the phase basis.
The original QKD protocol \cite{BB84} is the following.
\begin{description}
\item[(1)] [Preparation] Alice randomly chooses one of four states, and sends it to Bob.
\item[(2)] [Transmission] Bob randomly chooses one of two bases, and measures the received state using the chosen basis.
Alice and Bob repeat Steps (1) and (2) several times.
\item[(3)] [Detection] Alice and Bob exchange their basis information via a public
channel, and they discard bits with disagreed bases.
\item[(4)] [Error check] Alice and Bob randomly choose check bits from among the remaining bits,
and they exchange their values via a public channel.
If they find an error,
they stop the protocol because the error might be caused by eavesdropping.
Otherwise, they use the remaining bits as keys, which are called {\it raw} keys.
\end{description}
In this protocol, if the eavesdropper, Eve, performs a measurement during transmission,
the quantum state would be destroyed with non-negligible probability
because she does not know the basis of the transmitted quantum state a priori.
When the number of qubits measured by Eve is not so small,
Alice and Bob will find disagreements in step (4).
So, the existence of eavesdropping will be discovered by Alice and Bob with high probability.
\subsection{Random hash functions}\Label{S5A2}
The original protocol supposes noiseless quantum communication by a single photon.
So, the raw keys are not necessarily secure when the channel has noise.
To realize secure communication even with a noisy channel,
we need a method to generate secure keys from keys partially leaked to Eve.
Such a process is called privacy amplification.
In this process, we apply a hash function, which maps from a larger set to a smaller set.
In the security analysis,
we often employ a hash function
whose choice is determined by a random variable (a random hash function).
A typical class of random hash functions is the following class.
A random hash function $f_R$
from $\mathbb F_2^{n}$ to $\mathbb F_2^{m}$
is called universal$_2$ \cite{Carter,WC81} when
\begin{align}
{\rm Pr} \{f_R (x)= f_R (x')\} \le 2^{-m}
\end{align}
for distinct elements $x$ and $x'$ in $\mathbb F_2^{n}$.
A typical example of a surjective universal$_2$ hash function is
the concatenated Toeplitz matrix, which is given as follows.
When an $m\times (n-m)$ matrix $T_R=(T_{i,j}) $ is given as
$T_{i,j}=R_{i+j-1}$ by using $n-1$ random variables $R_j$,
it is called a Toeplitz matrix.
Let ${\cal T}=\{T_r\,|\, r\in I\}$ be the set of all $m\times (n-m)$ Toeplitz matrices.
Then let $M_r=(T_r,I_{m})$ be the $m\times n$ matrix defined by the concatenation of
$T_r$ and the $m$-dimensional identity matrix $I_m$.
Then, the concatenated Toeplitz matrix $M_R$ maps
an input $x\in \mathbb F_2^n$ to the output $y=M_R x \in \mathbb F_2^m$.
The concatenated Toeplitz matrix $M_R$ is universal$_2$ when $R$ is a uniform random number. (For a proof, see, e.g., \cite[Appendix II]{Haya5}.)
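As a minimal sketch of this construction (with illustrative sizes $n$ and $m$), the following Python code builds $M_R=(T_R,I_m)$ from $n-1$ seed bits and applies it to an input block over $\mathbb F_2$.
\begin{verbatim}
import numpy as np

def concatenated_toeplitz(R, n, m):
    """Build M_R = (T_R, I_m) from the n-1 seed bits R.

    T_R is the m x (n-m) Toeplitz matrix with T[i, j] = R[i + j]
    (a 0-indexed version of T_{i,j} = R_{i+j-1})."""
    T = np.empty((m, n - m), dtype=np.uint8)
    for i in range(m):
        for j in range(n - m):
            T[i, j] = R[i + j]
    return np.concatenate([T, np.eye(m, dtype=np.uint8)], axis=1)

rng = np.random.default_rng(0)
n, m = 16, 4
R = rng.integers(0, 2, size=n - 1, dtype=np.uint8)   # uniform seed
M = concatenated_toeplitz(R, n, m)
x = rng.integers(0, 2, size=n, dtype=np.uint8)       # input block
y = (M.astype(int) @ x.astype(int)) % 2              # hash value in F_2^m
print(y)
\end{verbatim}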
This class can be relaxed as follows.
A random hash function $f_R$ from $\mathbb F_2^{n}$ to $\mathbb F_2^{m}$ is called
$\delta$-almost universal$_2$ when
\begin{align}
{\rm Pr} \{f_R (x)= f_R (x')\} \le \delta 2^{-m}
\end{align}
for distinct elements $x$ and $x'$ in $\mathbb F_2^{n}$.
Here, ${\rm Pr} \{C \} $ expresses the probability that the condition $C$ holds.
When $\delta=1$, it is universal$_2$.
Here, $R$ denotes the random variable identifying the hash function.
When a random hash function $f_R$ is linear,
it is $\delta$-almost universal$_2$
if and only if
\begin{align}
{\rm Pr} \{ x \in \mathop{\rm Ker} f_R \} \le \delta 2^{-m}
\end{align}
for any non-zero element $x \in \mathbb F_2^{n}$.
Here, $\mathop{\rm Ker} f$ is the kernel of the linear function $f$.
Considering the space $(\mathop{\rm Ker} f)^{\perp}$ orthogonal to $\mathop{\rm Ker} f$ in $\mathbb F_2^{n}$, we introduce another class of random hash functions.
A linear random surjective hash function $f_R$
from $\mathbb F_2^{n}$ to $\mathbb F_2^{m}$
is called $\delta$-almost dual universal$_2$
when
\begin{align}
{\rm Pr} \{ x \in (\mathop{\rm Ker} f_R)^{\perp} \} \le \delta 2^{-n+m}
\end{align}
for any non-zero element $x \in \mathbb F_2^{n}$.
As examples of $\delta$-almost dual universal$_2$ hash functions,
the paper \cite{HT2} proposed hash functions whose calculation complexity
and random seeds are smaller than existing functions for practical use, as shown in Fig. \ref{F6}.
When $R$ is not a uniform random number, the above concatenated Toeplitz matrix $M_R$ is not universal$_2$; fortunately, it is still $\delta$-almost dual universal$_2$.
So, we can evaluate security in the framework of $\delta$-almost dual universal$_2$ hash functions.
That is, for a realistic setting,
the concept of $\delta$-almost dual universal$_2$ works well.
Note that there exists a $2$-almost universal$_2$ hash function whose resultant random number is insecure (Fig. \ref{F6}).
Hence, the concept of $\delta$-almost dual universal$_2$ is more useful than
$\delta$-almost universal$_2$.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\includegraphics[scale=1]{hash.eps}}
\end{center}
\caption{Classes of (dual) universal$_2$ hash functions and security:
A hash function is used to realize privacy amplification.
This picture shows the relations between classes of hash functions and security.
In cryptography theory,
strong security is considered a requirement for a hash function \cite{R-K}.
The class of universal$_2$ hash functions was proposed in \cite{Carter,WC81}.
Using the leftover hash lemma \cite{BBCM,ILL},
Renner \cite{Renner1} proposed to use this class for quantum cryptography.
Tomamichel et al. \cite{TSSR11}
proposed to use the class of $\delta$-almost universal$_2$ hash functions
when $\delta$ is close to $1$.
Tsurumaru et al. \cite{Tsuru} proposed the use
of $\delta$-almost dual universal$_2$ hash functions
when $\delta$ is constant or increases polynomially.
As an example of a $\delta$-almost dual universal$_2$ hash function,
the author and his collaborators \cite{HT2} constructed secure hash functions with a smaller random seed
and less calculation.
Although the security analysis in \cite{TWGR} is based on universal$_2$ hash functions,
that in \cite{H-QKD2,H-N,HT1} is based on $\delta$-almost dual universal$_2$ hash functions.}
\Label{F6}
\end{figure}%
\subsection{Single-photon pulse with noise}\Label{S5B}
To realize the security even with a noisy quantum channel,
we need to modify the original QKD protocol.
Since this modified protocol is related to error correction,
finite-length analysis plays an important role
to guarantee the security of the real QKD system.
Here, for simplicity, we discuss only the finite-length security analysis
with the Gaussian approximation.
The modified QKD protocol is the following.
Steps (1), (2), and (3) are the same as in the original.
\begin{description}
\item[(4)] [Error estimation] Alice and Bob randomly choose check bits from among the remaining bits,
and they exchange their values via a public channel.
\item[(*)] In the following, we give a protocol for the bit basis.
Here, we denote the number of remaining bits with the bit basis measurement
by $n$,
and
we denote the numbers of check bits with the phase and bit basis measurements by $l$ and $l'$.
We denote the numbers of observed errors among
check bits in the phase and bit basis measurements by $c$ and $c'$.
\item[(5)] [Error correction]
Alice and Bob apply error correction based on a $k$-dimensional subspace $C$ and obtain $k$ corrected bits.
That is, Alice sends her syndrome to Bob via a public channel,
and Bob corrects his error.
Here, the length $k$ and a code $C$ are chosen by
the observed error rate $\frac{c'}{l'}$ with the bit basis measurement.
\item[(6)] [Privacy amplification]
Alice and Bob apply a $\delta$-almost dual universal$_2$ hash function from $\mathbb F_2^k$ to $\mathbb F_2^{k-\bar{k}}$.
This protocol sacrifices $\bar{k}$ bits, which is called the sacrifice bit length and is determined by the observed error rate $\frac{c}{l}$ with the phase basis measurement.
Then, Alice and Bob obtain final keys with length $s:=k-\bar{k}$.
\end{description}
To perform the finite-length security analysis approximately,
we consider the following items.
\begin{description}
\item[(i)]
The virtual decoding phase error probability of a code $C$ with an arbitrary decoder
gives the amount of leaked information with privacy amplification by a hash function whose kernel is $C^{\perp}$.
In this correspondence,
the privacy amplification in the bit basis
by a $\delta$-almost dual universal$_2$ hash function from $\mathbb F_2^k$ to $\mathbb F_2^{k-\bar{k}}$
essentially realizes
an error correction code in the phase basis
whose parity check matrix is a $\delta$-almost universal$_2$ hash function
from $\mathbb F_2^n$ to $\mathbb F_2^{\bar{k}}$\cite[Lemmas 2 \& 4]{H-QKD}\cite[Theorem 2]{H-QKD2}\cite[(54)]{Tsuru}\cite[Section 9.4.3]{book2}\cite[Section 5.6.2]{book3}\footnote{To explain this point, we need to discuss a $\delta$-almost universal$_2$ hash function for $\mathbb F_2^n/C^{\perp}$, which requires more work.
To avoid this difficulty, we give only a simplified discussion here.}.
\item[(ii)]
When the total number of bits is $n+l$, the total number of errors is $b$,
and we randomly choose $l$ bits as the observed bits,
the number of observed errors $c$ is subject to the hypergeometric distribution
$P_{b}(c):=\frac{{l \choose c} {n \choose b-c}}{{n+l \choose b}}$.
So, the value $(c-\frac{lb}{n+l})/\sqrt{l} $
approximately obeys the Gaussian distribution with variance
$ \frac{bn(n+l-b)}{(n+l)^2(n+l-1)}$
(a numerical check of this approximation is sketched after this list).
\item[(iii)]
When the parity check matrix is given by a $\delta$-almost universal$_2$ hash function from $\mathbb F_2^{n}$ to $\mathbb F_2^{\bar{k}}$,
the decoder is the minimum distance decoder,
and
the support of the distribution $P_{Z^n}$ of errors on $\mathbb F_2^n$
is included in the set $\{x^n \in \mathbb F_2^n| |x^n|=b-c \}$,
the average decoding error probability is evaluated as
\begin{align}
{\rm E}_R \epsilon(E_{f_R},D_{f_R,\min}|P_{Z^n})
\le \delta e^{n h((b-c)/n)-\bar{k}},
\end{align}
where
${\rm E}_R$ denotes the expectation with respect to the random variable $R$\cite[Lemma 1]{H-QKD}\cite[Theorem 3]{H-QKD2}\cite[(37)]{Tsuru}.
\item[(iv)]
The real distribution of error in the phase basis
for $n$ remaining qubits with the bit basis measurement
and $l$ check qubits with the phase basis measurement
($n+l$ qubits in total)
is written as a probabilistic mixture of
distributions $P_{\bar{k}}$, where $P_{\bar{k}}$ is a distribution on $\{x^n \in \mathbb F_2^{n+l}| |x^n|=\bar{k} \}$\cite[Section IV-B]{H-QKD}\cite[Section III-C]{H-QKD2}\cite[(18)]{HT1}.
(Any distribution on $\mathbb F_2^n$ satisfies this condition.
In the memoryless case, the coefficients form a binomial distribution.)
\end{description}
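The following Python sketch numerically checks the Gaussian approximation of item (ii) by Monte Carlo sampling of the hypergeometric distribution; all sizes are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, l, b = 10000, 1000, 550     # illustrative sizes and total error count
# c ~ hypergeometric: l positions observed out of n+l, b of them erroneous
c = rng.hypergeometric(ngood=b, nbad=n + l - b, nsample=l, size=200000)
stat = (c - l * b / (n + l)) / np.sqrt(l)
var_pred = b * n * (n + l - b) / ((n + l) ** 2 * (n + l - 1))
print(stat.var(), var_pred)    # empirical and predicted variances agree
\end{verbatim}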
To give our security criterion,
we denote the information transmitted via the public channel by $u$,
and introduce its distribution $P_{{\rm pub}}$.
Depending on the public information $u$,
we denote the state on the composite system of Alice's and Eve's systems,
the state on Alice's system,
the state on Eve's system,
and the length of the final key length
by $\rho_{A,E|u}$, $\rho_{A| u}$, $\rho_{E|u}$, and $s(u)$, respectively.
We denote the completely mixed state with length $s(u)$
by $\rho_{A,\mathop{\rm mix}| s(u)}$.
Then, similarly to \cite[(3)]{HT1}, the security criterion is given as
\begin{align}
\frac{1}{2}
\sum_{u} P_{{\rm pub}}(u)
\| \rho_{A,E|u}- \rho_{A,\mathop{\rm mix}| s(u)}\otimes \rho_{E|u}\|_1.
\Label{9-1-1}
\end{align}
Now, as a security condition, we impose the condition that
\eqref{9-1-1} is smaller than $\epsilon$.
Combining the above four items,
depending on $c$,
we can derive the sacrifice bit length $\bar{k}(c)$.
Although the exact formula of $\bar{k}(c)$ is complicated, it can be asymptotically expanded as \cite[(53)]{HT1}
\begin{align}
\bar{k}(c)= nh(\frac{c}{l})
- \frac{\sqrt{n}}{2}h'(\frac{c}{l})\sqrt{
\frac{c}{l}(1-\frac{c}{l})(1+\frac{l}{n})\frac{n}{l}}
\Phi^{-1}(\frac{\epsilon^2}{2})
+o(\sqrt{n}).
\end{align}
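To see the size of the second-order correction, the following Python sketch evaluates this asymptotic formula for $\bar{k}(c)$, dropping the $o(\sqrt{n})$ term and taking $h$ to be the binary entropy; the parameters $n$, $l$, $c$, and $\epsilon$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def h(x):    # binary entropy (bits)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def dh(x):   # derivative of the binary entropy
    return np.log2((1 - x) / x)

def k_bar(n, l, c, eps):
    """Asymptotic sacrifice bit length, without the o(sqrt(n)) term."""
    r = c / l    # observed error rate in the phase basis
    spread = np.sqrt(r * (1 - r) * (1 + l / n) * (n / l))
    return n * h(r) - 0.5 * np.sqrt(n) * dh(r) * spread * norm.ppf(eps**2 / 2)

print(k_bar(n=10**5, l=10**4, c=300, eps=1e-9))
\end{verbatim}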
Here, we should remark that this security analysis does not assume the memoryless condition for the quantum channel.
To avoid this assumption, we introduce a random permutation and the effect of random sampling, which allows us to consider that the errors in both bases are subject to the hypergeometric distribution.
However, due to the required property of hash functions, we do not need to apply the random permutation in the real protocol.
That is, we need to apply only random sampling to estimate the error rates of the phase basis.
Here, we need to consider the reliability, that is, the agreement of the final keys.
For this purpose, we need to attach a key verification step as follows \cite[Section VIII]{Fung}.
\begin{description}
\item[(7)] [Key verification]
Alice and Bob apply a universal$_2$ hash function from $\mathbb F_2^{k-\bar{k}}$ to $\mathbb F_2^{\hat{k}}$ to the final keys.
They exchange their results via a public channel.
They discard their final $\hat{k}$ bits if they do not agree.
Otherwise, they consider that their remaining keys agree.
\end{description}
However, the amount of information leaked about the final keys
cannot be estimated by a similar method.
Hence, the security analysis is more important than checking the agreement of the keys.
\subsection{Weak coherent pulse with noise}\Label{S5C}
Next, we discuss a weak coherent pulse with noise, whose device is illustrated in Fig. \ref{F6B}.
Since the above protocol assumes single-photon pulses,
when the pulse contains multiple photons even occasionally,
the above protocol cannot guarantee security.
Since it is quite difficult to generate a single-photon pulse,
we usually employ weak coherent pulses with phase randomization,
whose states are written as
$\sum_{n=0}^{\infty}e^{-\mu}\frac{\mu^n}{n!} | n \rangle \langle n |$,
where $\mu$ is called the intensity.
That is, weak coherent pulses contain multiple-photon pulses, as shown in Fig. \ref{F9}.
In this case,
there are several multiple-photon pulses among $n$ received pulses.
In optical communication,
only a small fraction of pulses arrive at the receiver side.
That is, the ratio of multiple-photon states
of Alice's side is different
from that of Bob's side.
This is because the detection ratio on Bob's side depends on the number of photons.
As the first step in the security analysis,
we need to estimate the ratios of
vacuum pulses, single-photon pulses, and multiple-photon pulses among
$n$ received pulses.
Indeed, there is a possibility that Bob erroneously detects the pulse even with a vacuum pulse.
To obtain this estimate, we remark that the ratio of multiple-photon pulses depends on the intensity $\mu$.
Hence, it is possible to estimate
the detection ratios of
vacuum pulses, single-photon pulses, and multiple-photon pulses at Bob's side
from the detection ratios of three or more different intensities,
which are obtained by solving simultaneous equations \cite{decoy1,decoy2,decoy3,Ma05,Wang05,H1,decoy4,ODI}.
Observing the error rate of each pulse depending on the intensity and the basis, we can estimate the error rates of both bases for
vacuum pulses, single-photon pulses, and multiple-photon pulses.
This idea is called the decoy method.
Based on this discussion, we change steps (1), (2), (3), and (4).
However, we do not need to change steps (5) and (6), in which we choose the error correcting code and the sacrifice bit length.
As the second step of the security analysis,
when $n$ received pulses are composed of
$n_0$ vacuum pulses, $n_1$ single-photon pulses,
and $n_2$ multiple-photon pulses,
we need to estimate the leaked information after the privacy amplification
with sacrifice bit length $\bar{k}$.
In the current case,
we replace items (i) and (iii) by the following.
\begin{description}
\item[(i')]
When $n$ received pulses are composed of
$n_0$ vacuum pulses, $n_1$ single-photon pulses,
and
$n_2$ multiple-photon pulses,
then, $n_0$ vacuum pulses are converted to noiseless single-photon pulses
and
$n_2$ multiple-photon pulses
are converted to noiseless single-photon pulses
whose error distribution is the uniform distribution \cite[Section III-B]{H-QKD2}.
Then, we have the same statement as (i).
\item[(iii')]
Assume that the parity check matrix is given by a $\delta$-almost universal$_2$ hash function from $\mathbb F_2^{n}$ to $\mathbb F_2^{\bar{k}}$.
We also make an assumption for the distribution $P_{Z^n}$
on $\mathbb F_2^n=\mathbb F_2^{n_0+n_1+n_2}$;
$n_0$ bits have no error,
there are $t_1$ errors among the $n_1$ bits, and
the distribution of errors on the $n_2$ bits
is the uniform distribution.
So, the decoder $\Gamma([y])$ is defined as
\begin{align}
\Gamma([y]):= \mathop{\rm argmin}_{x^n \in [y]:(*)}
\|x^n\|,
\end{align}
where $(*)$ is the condition that
all of the entries among the above $n_0$ bits are $0$,
and $\|x^n\|$ is the number of bits with entry $1$ among the above $n_1$ bits.
Then, the average decoding error probability is evaluated as \cite[Theorem 3]{H-QKD2}
\begin{align}
{\rm E}_R \epsilon(E_{f_R},D_{f_R,\min}|P_{Z^n})
\le \delta e^{n_1 h(t_1/n_1)+ n_2-\bar{k}}.
\end{align}
\end{description}
Finally, we combine the original items (ii) and (iv) with the above modified items (i') and (iii').
However,
due to the complicated estimation process for the partition $n_0,n_1,n_2$ of
$n$ qubits,
we need a very complicated discussion.
Based on such an analysis, after long calculation,
we obtain a formula for the sacrifice bit length, as shown in Fig. \ref{F5}.
\newpage
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\hsize}
\centering
\scalebox{0.5}{\includegraphics[scale=0.3]{NEC.JPG}}
\caption{QKD system developed by NEC. Copyright (2015) by NEC:
This device was used for a long-term evaluation
demonstration in 2015 by the ``Cyber Security Factory" (core facility for counter-cyber-attack activities in NEC) \cite{Nec}.
}
\Label{F6B}
\end{minipage} &
%
\begin{minipage}[t]{0.45\hsize}
\centering
\scalebox{0.5}{\includegraphics[scale=1]{coherent.eps}}
\caption{Multiple photons in a weak coherent pulse:
A weak coherent pulse contains multiple photons with a certain probability, which depends on the intensity of the pulse.}
\Label{F9}
\end{minipage}
\end{tabular}
\end{figure}
\begin{figure}[htbp]
\begin{tabular}{cc}
\begin{minipage}{0.7\hsize}
\begin{center}
\includegraphics[width=0.7\textwidth]{decoy1.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.3\hsize}
\begin{center}
\scalebox{1.0}{\includegraphics[scale=0.6]{decoy2.eps}}
\end{center}
\end{minipage}
\end{tabular}
\caption{Key generation rate with weak coherent pulses:
We employ two intensities: signal intensity and decoy intensity.
Using the difference between detection rates of the pulses with two different intensities,
we can estimate the fraction of multiple photons in the detected pulses.
Here, we set the signal intensity to be $1$.
This graph shows the key generation rate dependent on the decoy intensity.
This graph is based on the calculation formula given in \cite{H-N}.}
\Label{F5}
\end{figure}
\newpage
\subsection{History of developments of QKD}\Label{S5D}
Because the raw keys are not necessarily secure when the channel has noise or two photons are transmitted,
many studies have been done to find a way to guarantee security
when the communication device has such imperfections.
For this purpose,
we need to consider a partial information leakage whose amount is bounded by the amount of the imperfection.
Shor and Preskill \cite{SP} and Mayers \cite{M01} showed that privacy amplification generates secure final keys
even when the channel has noise, provided that the light source correctly generates a single photon.
Gottesman et al. \cite{GLLP} showed that these final keys can be secure
even when the light source occasionally generates multiple photons
if the fraction of multiple-photon pulses is sufficiently small.
The light source used in the actual quantum optical communication
is weak coherent light, which probabilistically generates multiple-photon pulses, as shown in Fig. \ref{F9}.
Hence, this kind of extension had been required for practical use.
Hwang \cite{decoy1}
proposed an efficient method to estimate the fraction of multiple photon pulses, called the decoy method, in which the sender randomly chooses pulses with different intensities.
Until this stage, the studies of QKD were mainly done by individual researchers.
However, project-style research is needed for a realization of QKD
because the required studies need more interactions between theorists and experimentalists.
A Japanese project, the ERATO Quantum Computation and Information Project,
tackled the problem of guaranteeing the security of a real QKD system.
Since this project contained an experimental group as well as theoretical groups,
this project naturally proceeded to a series of studies of QKD from a more practical viewpoint.
First, one project member, Hamada \cite{Ham2002,ou-Ham} studied the relation between the quantum error correcting code and the security of QKD more deeply.
Then, another project member, Wang \cite{decoy3,Wang05} extended the decoy method, which was developed independently
by a group at the University of Toronto \cite{decoy2,Ma05}.
Tsurumaru \cite{decoy4} and the author \cite{H1} have further extended the method.
These extended decoy methods give a design for the choice of the intensity of transmitted pulses.
Further, jointly with the Japanese company NEC, the experimental group demonstrated
QKD with spools of standard telecom fiber over 100 km \cite{Tomi}.
Here, we note that the theoretical results above assume the combination of error correction and privacy amplification
for an infinitely large block-length in steps (5) and (6).
They did not give a quantitative evaluation of the security with finite-block-length.
They also did not address
the construction of privacy amplification, so
these results are not sufficient for the realization of a quantum key distribution system.
To resolve this issue, as a member of this project,
the author \cite{H-QKD} approximately evaluated the security
with finite-block-length $n$
when the channel has noise and the light source correctly generates a single photon.
This idea has two key points.
The first contribution is the evaluation of information leakage via the phase error probability of virtual error correction in the phase basis, which
is summarized as item (i).
This evaluation is based on the duality relation in quantum theory, which typically appears
in the relation between position and momentum.
The other contribution is the approximate evaluation of the phase error probability
via the application of the central limit theorem, which is obtained by
the combination of items (iii) and (iv).
This analysis is essentially equivalent to the derivation of
the coefficient of the transmission rate up to the second-order $\frac{1}{\sqrt{n}}$.
However, this analysis assumed a single-photon source.
Under this assumption, the author discussed the optimization for the ratio of check bits \cite{ORPB}.
Based on a strong request from the project leader of the ERATO project
and helpful suggestions by the experimental group, using the decoy method,
he extended a part of his analysis to the case when the light source sometimes generates multiple photons \cite{H-QKD2} by replacing items (i) and (iii) by (i') and (iii').
Based on this analytical result,
the ERATO project made an experimental demonstration of QKD
with weak coherent pulses on a real optical fiber, whose security is quantitatively guaranteed
in the Gaussian approximation \cite{HHHTT}.
Another Japanese project of the
National Institute of Information and Communication Technology (NICT)
has continuously made efforts toward a realization of QKD.
After the ERATO project, the author joined the NICT project from 2011 to 2016.
The NICT
organized a project in Tokyo (Tokyo QKD Network) by connecting QKD devices operated by
NICT, NEC, Mitsubishi Electric, NTT, Toshiba Research Europe, ID Quantique, the Austrian Institute of Technology, the Institute of Quantum Optics and Quantum Information and the University of Vienna in 2010\cite{UQCC}.
Also, as a part of the NICT project, NEC developed a QKD system, as shown in Fig. \ref{F6B}, and performed a long-term evaluation experiment in 2015 \cite{Nec}.
After the above ERATO project, two main theoretical problems remained,
and their resolutions had been strongly required by the NICT project
because they are linked to the security evaluation of these installed QKD systems.
The first one was the complete design of privacy amplification.
Indeed, in the above security analysis based on the phase error probability,
the range of possible random hash functions was not clarified.
That is, only one example of a hash function was given in the paper \cite{H-QKD2},
and we had only weaker versions of items (i) and (iii) at that time.
To resolve this problem, as members of the NICT project,
Tsurumaru and the author clarified what kind of hash functions can be used to guarantee the security of a QKD system \cite{Tsuru},
which yields the current items (i) and (iii).
They introduced $\delta$-almost dual universal$_2$ hash functions, as explained in Section \ref{S5A2}.
In these studies, Tsurumaru taught the author
the practical importance of the construction of hash functions from an industrial viewpoint
based on his experience obtained as a researcher at Mitsubishi Electric.
The second problem was to remove the Gaussian approximation in \cite{H-QKD}
from the finite-length analysis.
Usually, security analysis requires rigorous evaluation without approximation.
Hence, this requirement was essential for the security evaluation.
In Hayashi and Tsurumaru \cite{HT1}, we succeeded in removing this approximation and obtained
a rigorous security analysis for the single-photon case.
Also, the paper \cite{HT1} clarified
the security criterion and simplified the derivation
in the discussion given in Subsection \ref{S5B}.
Based on a strong request by the NICT project,
the author extended the finite-length analysis to the case with multiple photons by employing the decoy method and performing a complicated statistical analysis \cite{H-N}.
The transmission rate in the typical case is shown in Fig. \ref{F5}.
This study clarified the requirements for physical devices to apply the obtained security formula.
In this study \cite{H-N},
the author also improved an existing decoy protocol.
Under the improved protocol, he optimizes the choice of intensities \cite{ODI}.
Finally, we should remark that
only such a mathematical analysis can guarantee the security of QKD.
This is quite similar to the situation in which the security of conventional schemes,
like RSA, is guaranteed by mathematical analysis of the computational complexity \cite{RSA}.
In this way, QKD is different from conventional communication technology.
Here, we should address the security analysis based on the leftover hash lemma \cite{BBCM,ILL} as another research stream of QKD.
This method came from cryptography theory
and was started by the Renner group at the Swiss Federal Institute of Technology in Zurich (ETH) \cite{Renner1}.
The advantage of this method is the direct evaluation of information leakage without needing to evaluate the virtual phase error probability.
This method also enables a security analysis with finite-block-length \cite{TWGR}.
However, their finite-block-length analysis is looser than our analysis in Hayashi and Tsurumaru \cite{HT1}
because their bound \cite{TWGR} cannot yield the second-order rate based on the central limit theorem
whereas it can be recovered from the bound in Hayashi and Tsurumaru \cite{HT1}.
Further, while their method is potentially precise,
it has very many parameters to be estimated in the finite-block-length analysis.
Although their method improves the asymptotic generation rate \cite{WMU},
the increase in the number of parameters to be estimated enlarges the error of channel estimation in the finite-length setting.
Hence, they need to decrease the number of parameters to be estimated.
In their finite-block-length analysis, they simplified their analysis so that
only the virtual phase error probability has to be estimated.
This simplification improves the approach based on the leftover hash lemma because it gives a security evaluation based on the virtual phase error probability more directly.
However, this approach did not consider security with weak coherent pulses.
As another merit, the approach based on the leftover hash lemma later
influenced the security analysis in the classical setting \cite{Haya5,H-tight,H-ep,HW1,HT15}.
To discuss the future of QKD,
we now describe other QKD projects.
Several projects were organized in Hefei in 2012 and in Jinan in 2013\cite{C-QKD}.
In 2013, a US company, Battelle,
implemented a QKD system for commercial use in Ohio
using a device from ID Quantique\cite{battelle}.
Battelle has a plan to establish a QKD system between Ohio and Washington, DC, over a distance of 700 km\cite{battelle}.
Also, in China, the Beijing-Shanghai project has almost established a QKD system connecting Shanghai, Hefei, Jinan, and Beijing over a distance of 2,000 km \cite{C-QKD}.
Indeed, these implemented QKD networks are composed of a collection of QKD communications over relatively short distances.
However, quite recently,
a Chinese group has succeeded in realizing a satellite for
quantum communications.
Since most of these developments are composed of networks of quantum communication channels,
it is necessary to develop theoretical results to exploit the properties of quantum networks for a QKD system.
\section{Second-order channel coding}\Label{S6}
Now, we return to classical channel coding with the memoryless condition.
In the channel coding, it is important to clarify
the difference between the asymptotic transmission rate
and the actual optimal transmission rate dependent on the block-length, as shown in Fig. \ref{F11}.
This is because, for a long time,
many researchers mistakenly thought that
the actual optimal transmission rate equals the asymptotic transmission rate.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\includegraphics[scale=1]{actual.eps}}
\end{center}
\caption{
Relation between the asymptotic transmission rate
and the actual transmission rate dependent on the block-length:
Usually, the actual transmission rate is smaller than the asymptotic transmission rate.
As the block-length increases, the actual transmission rate approaches
the asymptotic transmission rate.}
\Label{F11}
\end{figure}%
When the channel $P_{Y|X}$ is given as a binary additive noise subject to the distribution $P_Z$
and the channel $P_{Y^n|X^n}$ is the product distribution of the channel $P_{Y|X}$,
the simple combination of \eqref{8-31-ab} and \eqref{12-9-1} yields
the asymptotic expansion of
$\log M(\epsilon| P_{Y^n|X^n})$:
\begin{align}
\log M(\epsilon| P_{Y^n|X^n})
= n (\log 2-H(P_Z))
+ \sqrt{n}\sqrt{V(P_Z)}\Phi^{-1}(\epsilon)+ o(\sqrt{n})
\end{align}
because Eq. \eqref{8-31-ab} does not contain $\sup_{P_{X_n}}$, unlike \eqref{8-31-a}.
In the general case,
using the formula \eqref{8-31-5} or \eqref{8-31-a} with order $\sqrt{n}$,
we can derive the $\ge$ part of the following expansion.
\begin{align}
\log M(\epsilon| P_{Y^n|X^n})
=
\left\{
\begin{array}{ll}
n \max_{P_X}I(P_X,P_{Y|X})
+ \sqrt{n}\sqrt{V_-(P_{Y|X})}\Phi^{-1}(\epsilon)+ o(\sqrt{n})
& \hbox{ if } \epsilon <\frac{1}{2} \\
n \max_{P_X}I(P_X,P_{Y|X})
+ \sqrt{n}\sqrt{V_+(P_{Y|X})}\Phi^{-1}(\epsilon)+ o(\sqrt{n})
& \hbox{ if } \epsilon \ge \frac{1}{2} ,
\end{array}
\right.
\Label{9-1-4}
\end{align}
where $V_{\pm}(P_{Y|X})$ are defined as
\begin{align}
V_+(P_{Y|X}) &:=
\max_{P_X} \sum_{x}P_X(x) \sum_y P_{Y|X}(y|x)
\Big(\log \frac{P_{Y|X}(y|x)}{P_Y(y)}- D(P_{Y|X=x}\|P_Y)\Big)^2
\Label{9-1-5}\\
V_-(P_{Y|X}) &:=
\min_{P_X} \sum_{x}P_X(x) \sum_y P_{Y|X}(y|x)
\Big(\log \frac{P_{Y|X}(y|x)}{P_Y(y)}- D(P_{Y|X=x}\|P_Y)\Big)^2 ,
\Label{9-1-6}
\end{align}
and the minimum and maximum are taken over the $P_X$ satisfying
$I(P_X,P_{Y|X})= \max_{Q} I(Q,P_{Y|X})$.
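As a sketch of how the quantity inside \eqref{9-1-5} and \eqref{9-1-6} is computed for a given input distribution, the following Python code evaluates the dispersion term of a discrete channel $W$; for a binary symmetric channel the uniform input is the unique capacity-achieving distribution, so $V_-=V_+$ there. The channel below is illustrative.
\begin{verbatim}
import numpy as np

def dispersion(PX, W):
    """Dispersion term for input PX and channel W[x, y] = P_{Y|X}(y|x),
    in nats."""
    PX, W = np.asarray(PX, dtype=float), np.asarray(W, dtype=float)
    PY = PX @ W
    V = 0.0
    for x in range(len(PX)):
        m = W[x] > 0
        llr = np.log(W[x, m] / PY[m])
        Dx = float((W[x, m] * llr).sum())     # D(P_{Y|X=x} || P_Y)
        V += PX[x] * float((W[x, m] * (llr - Dx) ** 2).sum())
    return V

# BSC with crossover probability 0.1:
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(dispersion([0.5, 0.5], W))   # = p(1-p) log((1-p)/p)^2 for p = 0.1
\end{verbatim}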
However, it is difficult to derive the $\le$ part of inequality \eqref{9-1-4}
by using \eqref{8-31-a} due to the maximization $\sup_{P_{X_n}}$.
To resolve this problem,
we choose $P_{X}$ as the distribution realizing
the minimum in \eqref{9-1-6}
or the maximum in \eqref{9-1-5},
and substitute the $n$-fold product $P_{Y}^n$ of the corresponding output distribution $P_Y$
into $Q_{Y_n}$ in the formula \eqref{8-31-b}.
Then, we can derive the $\le$ part of the inequality \eqref{9-1-4}.
Although this expansion was first derived by Strassen \cite{strassen} in 1962,
the derivation sketched here is much simpler, which shows
the effectiveness of the method of information spectrum.
The author applied this method to discrete memoryless channels
and derived the second-order coefficient of the transmission rate;
this was published in 2009 \cite{Hay1}.
In fact, at first he obtained only a rederivation of Strassen's result.
When he presented this result at a domestic meeting \cite{second-D},
Uyematsu pointed out Strassen's result.
To go beyond Strassen's result, he applied this idea to the additive white Gaussian noise channel, and obtained the following expansion,
which describes a typical situation in wireless communication.
\begin{align}
\log M(\epsilon|S,N)
=
\frac{n}{2}\log \Big(1+\frac{S}{N}\Big)
+
\sqrt{n}\sqrt{\frac{\frac{S^2}{N^2}+\frac{2S}{N}}{2(1+\frac{S}{N})^2}}
\Phi^{-1}(\epsilon)+ o(\sqrt{n}),
\Label{9-1-8}
\end{align}
where
$M(\epsilon|S,N)$ is the maximum size of transmission when the variance of the Gaussian noise is $N$ and the power constraint is $S$.
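For concreteness, the following Python sketch evaluates the approximation \eqref{9-1-8} (dropping the $o(\sqrt{n})$ term, in nats); the values of $S$, $N$, $n$, and $\epsilon$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def log_M_awgn(n, S, N, eps):
    """Second-order approximation (9-1-8), without the o(sqrt(n)) term."""
    g = S / N                                    # signal-to-noise ratio
    cap = 0.5 * np.log(1 + g)                    # capacity per channel use
    disp = (g**2 + 2 * g) / (2 * (1 + g) ** 2)   # dispersion
    return n * cap + np.sqrt(n * disp) * norm.ppf(eps)

for n in (10**3, 10**4, 10**5):
    # the rate approaches the capacity from below as n grows
    print(n, log_M_awgn(n, S=1.0, N=1.0, eps=1e-3) / n)
\end{verbatim}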
In fact, a group at Princeton University, mainly Verd\'{u} and Polyanskiy,
tackled this problem independently.
In their papers \cite{Pol,Pol2},
they considered the relation between channel coding and simple statistical hypothesis testing,
and independently derived two relations, the dependence test bound and the meta converse inequality,
which are the same as in the classical special case considered by the author and Nagaoka \cite{Hay-Nag} and by Nagaoka \cite{Naga-EQIS}.
Since their results \cite{Pol} are limited to the classical case,
the applicable region of their results is narrower than that of the preceding results in \cite{Hay-Nag,Naga-EQIS}.
Then, Verd\'{u} and Polyanskiy rederived Strassen's result, without use of the method of information spectrum, by the direct evaluation of these two bounds.
They also independently derived the second-order coefficient of the optimal transmission rate for the additive white Gaussian noise channel in 2010 \cite{Pol}.
Since the result by this group at Princeton had a large impact in the
information theory community at that time,
their paper received the best paper award of the IEEE Information Theory Society in 2011
jointly with the earlier paper by the author \cite{Hay1}.
As explained above, the Japanese group obtained some of the same results
several years before the Princeton group but received much less publicity.
Thus, the Princeton group met the demand of the information theory community,
and they presented their results very effectively.
In particular, since their research activity was limited to the information theory community,
their audiences were suitably concentrated so that they could create a scientific boom in this direction.
In contrast to the Princeton group, the Japanese group studied the same topic far from the demands of the community because their study originated in quantum information theory.
In particular, their research funds were intended for the study of quantum information, so they had to present their work to quantum information audiences, who were less interested in these results.
Also, because their work spanned too wide a research area to explain their results effectively,
they could not devote sufficient effort to explaining their results to the proper audiences at that time.
Hence, their papers attracted less attention.
For example, few Japanese researchers knew the paper \cite{Hay1}
when it received the IEEE award in 2011.
After this award, this research direction became much more popular and was applied to very many topics in information theory \cite{PPV2,PPV3,TK,SKT15,KV,HW13,HW14c,HW1,YHN}.
In particular, the third-order analysis has been applied to channel coding \cite{TT13}.
These activities were reviewed in a recent book \cite{T14}.
Although this research direction is attracting much attention,
we need to be careful about evaluating its practical impact.
These studies consider finite-block-length analysis for the optimal rate with respect to all codes including those with too high a calculation complexity to implement.
Hence, the obtained rate cannot necessarily be realized with implementable codes.
To resolve this issue,
we need to discuss the optimal rate among codes whose calculation complexity is not so high.
Because no existing study discusses this type of finite-block-length analysis,
such a study is strongly recommended for the future.
Also, a realistic system is not necessarily memoryless;
so, we need to discuss memory effects.
To resolve this issue, jointly with Watanabe,
the author extended this work to channels with additive Markovian noise,
which covers the case when Markovian memory exists in the channel \cite{HW13}.
While this model covers many types of realistic channel,
it is not trivial to apply the results in \cite{HW13} to the realistic case of wireless communication
because it is complicated to address the effect of fading in the coefficients.
This is an interesting future problem.
After this breakthrough, the
Princeton group extended their idea to many topics in channel coding and data compression \cite{PPV2,PPV3,KV,KV14}.
On the other hand,
in addition to the above Markovian extension,
the author, jointly with Tomamichel, extended this work to the quantum setting \cite{T-M}, providing a unified framework for second-order theory in quantum systems,
covering data compression with side information, secure uniform random number generation,
and simple hypothesis testing.
At the same time, Li \cite{KL} directly derived the second-order analysis for simple statistical hypothesis testing in the quantum case.
However, the second-order theory for simple statistical hypothesis testing
has less meaning in itself;
it is more meaningful in relation to other topics in information theory.
\section{Extension to physical layer security}\Label{S7}
\subsection{Wire-tap channel and its variants}
The quantum cryptography explained above offers secure key distribution based on physical laws.
The classical counterpart of quantum cryptography is physical layer security,
which offers information theoretical security based on several physical assumptions from classical mechanics.
As its typical mode, Wyner \cite{Wyner} formulated the wire-tap channel model,
which was more deeply investigated by Csisz\'{a}r and K\"{o}rner \cite{CK79}.
This model assumes two channels, as shown in Fig. \ref{F10x}:
a channel $P_{Y|X}$ from the authorized sender (Alice) to the authorized receiver (Bob)
and a channel $P_{Z|X}$ from the authorized sender to the eavesdropper (Eve).
When Alice's original signal has a stronger correlation
with Bob's received signal than with Eve's, that is,
when a suitable input distribution $P_X$ satisfies
the condition $I(P_X,P_{Y|X}) > I(P_X,P_{Z|X})$,
the authorized users can communicate without any information leakage
by using a suitable code.
More precisely, secure communication is available
if and only if there exists a suitable joint distribution $P_{VX}$ between the input system ${\cal X}$ and another system ${\cal V}$
such that the condition $I(P_V,P_{Y|V}) > I(P_V,P_{Z|V})$ holds,
where $P_{Y|V}(y|v):=\sum_{x\in {\cal X}}P_{Y|X}(y|x)P_{X|V}(x|v)$
and $P_{Z|V}$ is defined in the same way.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics[scale=1]{clip163.eps}}
\end{center}
\caption{Wire-tap channel model. Eve is assumed to have a weaker connection to Alice than Bob does.}
\Label{F10x}
\end{figure}%
Although we often assume that the channel is stationary and memoryless,
the general setting can be discussed by using information spectrum \cite{Haya6}.
This paper explicitly pointed out that there is a relation between the wire-tap channel and the channel resolvability discussed in Section \ref{S3}.
This idea has been employed in many subsequent studies \cite{BL,HG,HES}.
Watanabe and the author \cite{WH} discussed the second-order asymptotic for the channel resolvability.
Also, extending the idea of the meta converse inequality to the wire-tap channel,
Tyagi, Watanabe, and the author showed a relation between the wire-tap channel and hypothesis testing\cite{HTW1}.
Based on these results, Yang et al. \cite{YSP} investigated finite-block-length bounds for wire-tap channels without the Gaussian approximation.
Also, taking into account the construction complexity,
the author and Matsumoto \cite[Section XI]{HM} proposed another type of finite-length analysis for wire-tap channels.
Its quantum extension has also been discussed \cite{Deve,H-wire}.
However, in the wire-tap channel model, we need to assume that Alice and Bob know the channel $P_{Z|X}$ to Eve.
Hence,
although it is a typical model for information theoretic security,
this model is somewhat unrealistic because Alice and Bob cannot identify Eve's behavior.
That is, it is assumed that Eve has weaker connection to Alice than Bob does, as shown in Fig. \ref{F10x}.
So, it is quite hard to find a realistic situation where the original wire-tap channel model is applicable.
Fortunately, this model has more realistic derivatives:
one is secret sharing\cite{Bla,Sha}, and another is secure network coding\cite{bhattad05,cai11survey,cai02b,cai07securecondition,caiyeung11,HY}.
In secret sharing,
there is one sender, Alice, and $k$ receivers, Bob$1$, $\ldots$, Bob$k$.
Alice splits her information into $k$ parts, and sends them to the respective Bobs
such that
a subset of Bobs cannot recover the original message.
For example, assume that there are two Bobs, $X_1$ is the original message and $X_2$ is an independent uniform random number.
If Alice sends the exclusive or of $X_1$ and $X_2$ to Bob$1$
and sends $X_2$ to Bob$2$,
neither Bob can recover the original message.
When both Bobs cooperate, however, they can recover it.
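The two-Bob example above is the 2-out-of-2 XOR scheme; the following Python sketch implements it directly (the message is illustrative).
\begin{verbatim}
import secrets

def share(message: bytes):
    """Split a message into two shares; each share alone is uniform."""
    pad = secrets.token_bytes(len(message))              # X2: uniform key
    share1 = bytes(a ^ b for a, b in zip(message, pad))  # X1 xor X2
    return share1, pad

def reconstruct(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = share(b"secret")
assert reconstruct(s1, s2) == b"secret"   # both Bobs together recover it
\end{verbatim}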
In the general case,
for any given numbers $k_1<k_2 < k$, we manage our code such that
any set of $k_1$ Bobs cannot recover the original message but any set of $k_2$ Bobs can \cite{Yamamoto}.
Secure network coding is a more difficult task.
In secure network coding, Alice sends her information to the receiver via a network, and
the information is transmitted to the receiver via intermediate links.
That is, each intermediate link transfers a part of the information.
Secure network coding is a method to guarantee security when some of the intermediate links are eavesdropped by Eve.
Such a method can be realized by applying the wire-tap channel to the case when Eve obtains the information from some of intermediate links\cite{bhattad05,cai11survey,cai02b,cai07securecondition,caiyeung11,HY}.
When each intermediate link has the same amount of information,
the required task can be regarded as a special case of secret sharing.
However, this method depends on the structure of the network,
and it is quite difficult for Alice to know this structure.
Hence, it is necessary to develop a coding method that does not depend on the structure of the network.
Such a coding is called universal secure network coding,
and has been developed by several researchers\cite{silva08,silva09,NT,KMU,Kurosawa}.
These studies assume only that the information processes on each node are linear
and the structure of network does not change during transmission.
In particular, the security evaluation can be made even with finite-block-length codes \cite{NT,KMU,Kurosawa}.
Since it is quite hard to tap all of the links,
this kind of security is sufficiently useful for practical use by ordinary people
based on the cost-benefit analysis of performance.
To understand the naturalness of this kind of assumption,
let us consider the daily-life case in which an important message is sent by dividing it into two e-mails,
the first of which contains the message encrypted by a secure key, and the second one contains the secure key.
This situation assumes that only one of two links is eavesdropped.
\subsection{Secure key distillation}
As another type of information theoretical security,
Ahlswede and Csisz\'{a}r\cite{Ahlswede} and Maurer\cite{Mau93} proposed
secure key distillation.
Assume that two authorized users, Alice and Bob, have random variables $X$ and $Y$,
and the eavesdropper, Eve, has another random variable $Z$.
When the mutual information $I(X;Y)$ between Alice and Bob is larger than
the mutual information $I(X;Z)$ or $I(Y;Z)$ between one authorized user and Eve,
and
when their information is given as the $n$-fold iid distribution of a given joint distribution $P_{XYZ}$,
Alice and Bob can extract secure final keys.
Recently, secure key distillation has been developed in a more practical way
by importing the methods developed for or motivated by quantum cryptography
\cite{Haya5,H-tight,H-ep,HW1,HT15,Watanabe,BTH}.
In particular, its finite-block-length analysis
has been much developed, including the Markovian case, for the situation in which Alice's random variable agrees with Bob's random variable
\cite{Haya5,H-tight,H-ep,HW1,HT15}.
Such an analysis has been extended to a more general case in which
Alice's random variable does not necessarily agree with Bob's random variable \cite{HTW2}.
Although some of the random hash functions were originally constructed for quantum cryptography,
they can be used for privacy amplification even in secure key distillation \cite{Carter,WC81,HT2}.
Hence, under several natural assumptions for secure key distillation,
it is possible to precisely evaluate the security based on finite-block-length analysis.
We assume that $X$ is binary, and
all information is given as the $n$-fold iid distribution of a given joint distribution $P_{XYZ}$.
In this case, the protocol is essentially given by steps (5) and (6) of QKD,
where the code $C$, its dimension $k$, and the sacrifice bit length $\bar{k}$ are determined a priori according to the joint distribution $P_{XYZ}$.
Now, we denote the information exchanged via the public channel by $u$
and its distribution by $P_{{\rm pub}}$.
The security is evaluated by the following criterion:
\begin{align}
\gamma(C,\{f_r\}):=
\frac{1}{2}
\sum_{u}\sum_{r} P_R(r)
P_{{\rm pub}}(u) \sum_{x\in \mathbb F_2^{k-\bar{k}}} \sum_{z \in {\cal Z}^n}
| P_{f_{r}(X^n) Z^n|U}(x,z|u) -
P_{ \mathbb F_2^{k-\bar{k}},\mathop{\rm mix}}(x)P_{Z^n|U}(z|u)|
\Label{9-2-1},
\end{align}
where $P_R$ is the distribution of the random variable $R$ used to choose our hash function $f_R$.
To evaluate this criterion, we introduce the conditional R\'{e}nyi entropy
\footnote{Indeed, two kinds of conditional R\'{e}nyi entropy are known.
This type is often called the Gallager \cite{Gallager} or Arimoto type\cite{Arimoto}.}
\begin{align}
H_{1+s}(X|Z|P_{XZ}):= -\frac{1+s}{s} \log
\Big(\sum_{z\in {\cal Z}}P_{Z}(z)
\Big(\sum_{x\in {\cal X}}P_{X|Z}(x|z)^{1+s}
\Big)^{\frac{1}{1+s}}
\Big).
\end{align}
Then, the criterion is evaluated as (\cite[(54) and Lemma 22]{H-ep} and \cite[(21)]{H-q-secure}\footnote{For its detailed derivation, see \cite[Section V-D]{H-wireless}.})
\begin{align}
\gamma(C,\{f_r\})\le
(1+\frac{\sqrt{\delta}}{2})
\min_{s \in [0,1]}
e^{\frac{s}{1+s} (n \log 2-\bar{k} -n H_{1+s}(X|Z|P_{XZ}) )}.\Label{9-2-2}
\end{align}
Its quantum extension has also been discussed in \cite{DW,H-q-secure}.
Here, we should remark that the evaluation \eqref{9-2-2} can be realized by
a random hash function with small calculation complexity.
This is because
the inequality holds for an arbitrary linear code and an arbitrary $\delta$-almost dual universal$_2$ hash function.
Since the paper \cite{HT2} proposed several efficient $\delta$-almost dual universal$_2$ hash functions,
the bound has operational meaning even when we take into account the calculation complexity for its construction.
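As a rough numerical sketch (not a reproduction of the cited derivations), the following Python code evaluates the conditional R\'{e}nyi entropy and the right-hand side of \eqref{9-2-2} for $n$ iid copies of an illustrative joint distribution $P_{XZ}$, optimizing $s$ over a grid in $(0,1]$ and taking $\delta=1$.
\begin{verbatim}
import numpy as np

def renyi_cond(PXZ, s):
    """Gallager/Arimoto conditional Renyi entropy H_{1+s}(X|Z), in nats.
    PXZ[x, z] is the joint distribution of (X, Z)."""
    PZ = PXZ.sum(axis=0)
    PX_given_Z = PXZ / PZ            # column z holds P_{X|Z}(.|z)
    inner = ((PX_given_Z ** (1 + s)).sum(axis=0)) ** (1 / (1 + s))
    return -(1 + s) / s * np.log((PZ * inner).sum())

def security_bound(PXZ, n, k_bar, delta=1.0):
    """Right-hand side of (9-2-2) for n iid copies of PXZ."""
    best = min(
        np.exp(s / (1 + s) * (n * np.log(2) - k_bar - n * renyi_cond(PXZ, s)))
        for s in np.linspace(0.01, 1.0, 100)
    )
    return (1 + np.sqrt(delta) / 2) * best

# One bit X correlated with Eve's binary variable Z (illustrative):
PXZ = np.array([[0.45, 0.05],
                [0.05, 0.45]])
print(security_bound(PXZ, n=10**4, k_bar=5000))
\end{verbatim}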
So, one might consider that secure key distillation is the same as QKD.
However, QKD is different from secure key distillation even with the quantum extension
due to the following points.
The advantage of QKD is that it does not assume anything except for the basic laws of quantum
theory.
Hence, QKD usually does not allow us to make any additional assumptions,
in particular, the iid assumption.
In contrast, in secure key generation, we often make the iid assumption.
As another difference, in secure key distillation we are assumed to know the joint distribution or the density matrix of the whole system,
whereas in QKD we have to estimate the density matrix of the whole system.
The finite-block-length analysis of secure key distillation
is different from that for channel coding in the following respect.
The obtained finite-block-length analysis for channel coding discusses only the optimal performance among all codes, including impractical codes whose calculation complexity is too high.
However, in the finite-block-length analysis for physical layer security,
the obtained bound can be attained by a practical protocol whose calculation complexity is linear in the block-length.
\subsection{Application to wireless communication}
Recently, along with the growing use of wireless communication,
secure wireless communication has been very actively studied \cite{YMRSTM,WS,PJCG,CJ,Trappe,Zeng,WX}.
Physical layer security has been considered as a good candidate for secure wireless communication \cite{BBRM,YPS}.
Typically,
we assume the quasi-static condition, which allows us to assume
the memoryless condition within one coding block.
Even with this condition, a simple application of the wire-tap channel cannot guarantee secure communication
when Eve sets up her antenna between Alice and Bob.
However, when the noise in Bob's output signal is independent of the noise in Eve's output signal,
the mutual information between Alice and Bob is larger than that between Eve and Bob even in this situation.
In this case, when they apply secure key distillation in the reverse direction after the initial wireless communication from Alice to Bob,
they can generate secure keys.
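This mechanism is easy to check in a toy jointly Gaussian model. In the following Python sketch (all variances and the correlation structure are hypothetical choices for illustration only, not taken from the cited works), the channel noises seen by Bob and Eve are independent, and the empirical Gaussian mutual information $I(A;B)$ indeed exceeds $I(B;E)$, which is the gap exploited by key distillation in the reverse direction.
\begin{verbatim}
# Toy Gaussian sketch (all parameters hypothetical): Bob receives
# B = A + Wb + Vb and Eve observes E = A + We + Ve, where the
# channel noises Wb, We are independent of each other and Vb, Ve
# are independent detector noises.
import numpy as np

rng = np.random.default_rng(0)
n = 200000
A  = rng.normal(0, 1.0, n)      # Alice's transmitted signal
Wb = rng.normal(0, 0.3, n)      # channel noise on Bob's side
We = rng.normal(0, 0.3, n)      # channel noise on Eve's side
Vb = rng.normal(0, 0.2, n)      # Bob's detector noise
Ve = rng.normal(0, 0.2, n)      # Eve's detector noise
B, E = A + Wb + Vb, A + We + Ve

def gauss_mi(u, v):             # I(U;V) = -(1/2) log(1 - rho^2) in nats
    r = np.corrcoef(u, v)[0, 1]
    return -0.5 * np.log(1.0 - r * r)

print(gauss_mi(A, B), gauss_mi(B, E))   # the first exceeds the second
\end{verbatim}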
The assumption of the independence between Bob's and Eve's outputs is too strong and unrealistic for practical use
because there is a possibility of interference between the two channels.
Hence, a more realistic assumption is needed.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics[scale=1]{clip157}}
\end{center}
\caption{Model of Eve's attack for secure wireless communication. Eve can inject artificial noise into Bob's observation.
It is also assumed that Eve's detector is noisy, just like Bob's.
It is natural to assume that these detector noises are independent of other random variables.}
\Label{F10}
\end{figure}%
To resolve this problem,
the author had the following idea, based on his experience of interactions with experimentalists studying QKD.
It is natural to assume that
the noises generated inside each detector are independent and Gaussian,
and only the noise generated outside the detector is correlated with Eve's output.
In this situation, the author showed that, even when all of the intermediate space between Alice and Bob is under the control of Eve
and Eve injects artificial noise into Bob's observation, as in Fig. \ref{F10},
Alice and Bob can still generate secure keys provided that the injected noise is sufficiently small \cite{H-wireless}.
Here, after the communication via the noisy wireless channel,
Alice and Bob need to estimate the noise by random sampling.
Once the random sampling guarantees that the noise is sufficiently small, they apply the secure key generation protocol.
This is a protocol to generate secure keys between Alice and Bob under reasonable assumptions for secure wireless communication.
Although the paper \cite{H-wireless} gives such a protocol with a small calculation complexity for construction,
the performance of this protocol has not been studied in realistic situations.
A future task is to estimate the performance of the proposed method in a realistic situation by taking into account several complicated realistic effects, including fading.
Here, we summarize the advantages over modern cryptography based on computational complexity \cite{Trappe}.
When cryptography based on computational complexity is broken by a computer,
any information transmitted with this cryptography can be eavesdropped by using that computer.
To break physical layer security of the above type,
Eve has to set up
an antenna for each communication.
Furthermore, each antenna must be very expensive because
it has to defeat the above noise assumption.
It may not be impossible to break a limited number of specific communications
of a very limited number of persons.
However, due to the cost constraint, it is impossible to eavesdrop on all communications
in a realistic situation.
In this way, physical layer security offers a different type of security from computational security.
\section{Conclusions and future prospects}
In this review article, we have discussed developments of finite-block-length theory in
classical and quantum information theory:
classical and quantum channel coding, data compression,
(secure) random number generation,
quantum cryptography,
and physical layer security.
These subareas have been developed with strong interactions with each other
in unexpected ways.
The required future studies for channel coding and data compression
are completely different from those needed for security topics.
In the former topics,
existing finite-block-length theory discusses only the minimum error among all codes
without considering the calculation complexity of its construction.
Hence, for practical use,
we need a finite-block-length theory for realizable codes whose
construction has less calculation complexity.
Such finite-block-length theory is urgently required.
Fortunately, the latest results obtained for these two topics
\cite{HW13,HW14c} cover the case when a Markovian memory effect exists.
However, their applications to a realistic situation have not been sufficiently studied, and
such practical applications are interesting open problems.
In contrast, in the latter topics, the established finite-block-length theory already
takes into account the calculation complexity of its construction;
hence, it is more practical.
However, these types of security protocols have not been realized for the following reasons.
In the case of quantum cryptography, we need more development on the device side.
Also, to realize secure communication for distances over 2000 km, we might need another type of information-scientific combinatorics.
In the case of physical layer security,
we need more studies to fill the gap between information theoretical security analysis
and device development.
There has recently been one such study \cite{H-wireless}.
Furthermore, the idea of finite-block-length theory is fundamental and can be extended to areas beyond information theory.
For example, it has been applied to
a statistical mechanical rederivation of thermodynamics \cite{TH14,HT15b},
the conversion of entangled states \cite{WK13,WK13b,WK14,IWK15}, and
the analysis of the gap between two classes of local operations \cite{HO}.
Therefore, we can expect more applications of finite-block-length theory to other areas.
\section*{Acknowledgments}
The works reported here were supported in part by
a MEXT Grant-in-Aid for Scientific Research (A) No. 23246071,
a JSPS Grant-in-Aid for Young Scientists (A) No. 20686026,
a JSPS Grant-in-Aid for Young Scientists (B) No. 14750330,
a JSPS Grant-in-Aid for Scientific Research on Priority Area ``Deepening
and Expansion of Statistical Mechanical Informatics (DEX-SMI)'' No. 18079014,
ERATO(-SORST) Quantum Computation and Information Project of the
Japan Science and Technology Agency (JST),
the Brain Science Institute of RIKEN,
the National Institute of Information and Communication Technology (NICT), Japan,
the Okawa Research Grant,
and the Kayamori Foundation of Informational Science Advancement.
The author is grateful to
Professor Akihisa Tomita, who is an expert on the physical implementation of QKD systems,
for discussing the physical model for a real QKD system.
He is also thankful to
Dr. Toyohiro Tsurumaru and Dr. Kiyoshi Tamaki, who are working on QKD from an industrial perspective,
for discussing the physical assumptions for the decoy model.
He is also grateful to
Professor \'{A}ngeles Vazquez-Castro,
Professor Hideichi Sasaoka, and Professor Hisato Iwai,
who are experts in wireless communication,
for discussing the validity of the model of the paper \cite{H-wireless} for
secure wireless communication.
He is also thankful to Dr. Ken-ichiro Yoshino in NEC for providing the picture of the QKD device (Fig. \ref{F6B}).
\section{Introduction}
A major challenge in the current ``big-data era'' is to extract signals from huge databases. Often, an applied researcher proceeds in a two-step fashion: First, in order to decide whether there is any signal in the data at all, one performs an aggregate test of the global null hypothesis of no signal. This global null hypothesis is typically formulated as the high-dimensional target parameter being the zero vector. Second, if the global null hypothesis was rejected by the test, further analysis is undertaken to uncover the precise nature of the signal. Much research has been directed to studying properties of such a sequential rejection principle, cf.~\cite{romano2005exact}, \cite{yekutieli2008hierarchical}, \cite{rosenbaum2008testing}, \cite{meinshausen2008hierarchical}, \cite{goeman2010sequential}, \cite{heller2}, \cite{bogomolov2020hypotheses} and references therein.
Using a powerful test for the global null hypothesis in the first step of such a hierarchical multi-step procedure is of course crucial, and the development of tests for this hypothesis has therefore attracted much research in its own right. A typical choice, employed in, e.g., \cite{heller}, is to use a test based on the Euclidean norm of the estimator. This also leads to the likelihood ratio (LR) test in the Gaussian sequence model they considered, which is also the framework in the present article. Although the LR test is a natural choice, one may ask: \emph{Do tests for the global null exist that are consistent against substantially more alternatives than the LR test?} This question is practically relevant, because one can choose from a large menu of well-established tests, yet precisely which one to use is not obvious: For example, one could use tests based on other norms than the Euclidean one, a natural class of tests being based on~$p$-norms, cf.~the classic monograph of~\cite{ingster}. One could also use a test based on combining different~$p$-norms as suggested by the power enhancement principle of \cite{fan2015} and in~\cite{kp2}. Another test that has gained popularity in recent years is the Higher Criticism. This test dates back to~\cite{tukey1976t13} and its strong power properties against deviations from the global null were first exhibited by~\cite{donoho} and have led to much subsequent research, cf.~\cite{donoho2009feature}, \cite{hall2010innovated}, \cite{ tony2011optimal}, \cite{arias2011global}, \cite{barnett2014analytical}, \cite{li2015higher}, \cite{arias2019detection} and \cite{ porter2020beyond}. Alternatively, one could use tests based on combining~p-values for coordinate-wise zero restrictions. Important early work includes~\cite{fisher1934statistical}, \cite{tippett1931methods}, \cite{pearson1933method}, \cite{stouffer1949american} and \cite{simes1986improved}. For a review of the classic literature see \cite{cousins2007annotated}, more recent contributions are \cite{owen09}, \cite{duan2020interactive} and~\cite{vovk2020combining, vovk2020values}. It is crucial to highlight here that many of the above mentioned tests are consistent against strictly more alternatives than the LR test, i.e., they dominate the LR test in terms of their consistency properties. Hence, the question of interest is not whether one can do better than the LR test at all, but whether one can do \emph{substantially} better.
We consider the question raised in the previous paragraph from a high-dimensional perspective. In the Gaussian sequence model, we investigate
whether aggregate tests can be obtained that are consistent against substantially more alternatives than the likelihood
ratio test. We show that relative to a uniform prior on the parameter space this is impossible: essentially, we prove that for any given test the set of alternatives against which it is consistent, but the LR test is not, has vanishing relative Lebesgue measure. Hence, no test for the global null hypothesis can substantially improve on the LR test. The assumptions on the tests for which we establish this statement are minimal and cover, inter alia, (combinations of)~$p$-norm based tests, the power enhancement principle, the Higher Criticism and typical constructions based on combining p-values. Thus, none of these, while potentially consistent against even strictly more alternatives than the LR test, is consistent against substantially more alternatives. From a technical perspective, our proofs are based on results by~\cite{ss} concerning the asymptotic volume of intersections of~$p$-norm balls and on invariance arguments involving an ``average'' Gaussian correlation inequality due to~\cite{schechtman}.
Our finding is reminiscent of \cite{lecam1953}, who showed (in finite-dimensional settings) that the set of possible superefficiency points of an estimator relative to the maximum likelihood estimator cannot be larger than a Lebesgue null set; cf.~also \cite{vanderVaart1997}. Note that our result does not imply that one should always use the LR test and not think carefully about the choice of test in high-dimensional testing problems. If, for example, one is interested in particular types of deviations from the null, e.g., sparse ones, there may be good reasons to use a test based on the supremum norm or the Higher Criticism. Nevertheless, in analogy to \cite{lecam1953}, regardless of how clever an alternative test is designed, the amount of alternatives against which one achieves an improvement as compared to the LR test cannot be substantial in terms of relative volume. This also supports basing a combination procedure, such as the power enhancement principle by~\cite{fan2015}, on the Euclidean norm.
\section{Framework and terminology}
We consider the Gaussian sequence model
\begin{equation}\label{eqn:model}
y_{i,d} = \theta_{i,d} + \varepsilon_i, \quad i = 1, \hdots, d,
\end{equation}
where~$y_{1,d}, \hdots, y_{d,d}$ are the observations, the parameters~$\theta_{i,d} \in \mathbb{R}$ are unknown, and where the unobserved terms~$\varepsilon_i$ are independent and standard normal. We write~$\bm{y}_d = (y_{1,d}, \hdots y_{d,d})'$,~$\bm{\varepsilon}_d = (\varepsilon_1, \hdots, \varepsilon_d)'$, and~$\bm{\theta}_d = (\theta_{1,d}, \hdots, \theta_{d,d})' \in \mathbb{R}^d$. Although the Gaussian sequence model is an idealization, many fundamental issues of high dimensionality show up already here and insights obtained within this model carry over, at least on a conceptual level, to many other settings. It is therefore widely recognized as an important prototypical framework in high-dimensional statistics, see, for example,~\cite{ingster}, \cite{carpentier2019adaptive}, \cite{ johnstone} or \cite{castillo2020spike}.
In the model~\eqref{eqn:model}, we are interested in the testing problem
\begin{equation}\label{eqn:tp}
H_{0, d}: \bm{\theta}_d = \bm{0}_d \quad \text{ against } \quad H_{1, d}: \bm{\theta}_d \in \mathbb{R}^d \setminus \{\bm{0}_d\},
\end{equation}
where~$\bm{0}_d$ denotes the origin in~$\mathbb{R}^d$. The null hypothesis~$H_{0, d}$ is typically referred to as the ``global null'' of no effect.
For a given~$d \in \mathbb{N}$, a (possibly randomized) test~$\varphi_d$, say, for~\eqref{eqn:tp} is a (measurable) function from the sample space~$\mathbb{R}^d$ to the closed unit interval. In the asymptotic framework we consider, we are interested in properties of sequences of tests~$\{\varphi_d\}$, where $\varphi_d$ is a test for~\eqref{eqn:tp} for every~$d \in \mathbb{N}$. To lighten the notation, we shall write~$\varphi_d$ instead of~$\{\varphi_d\}$ whenever there is no risk of confusion. We are particularly interested in the consistency properties of sequences of tests. As usual, we say that a sequence of tests~$\varphi_d$ is consistent against the \emph{array} of parameters~$\bm{\vartheta} = \{\bm{\theta}_d : d \in \mathbb{N}\}$, where~$\bm{\theta}_d \in \mathbb{R}^d$ for every~$d \in \mathbb{N}$, if and only if (as~$d \to \infty$)
\begin{equation*}
\mathbb{E}\left(\varphi_d(\bm{\theta}_d + \bm{\varepsilon}_d)\right) \to 1.
\end{equation*}
To every sequence of tests~$\varphi_d$ we associate its \emph{consistency set}~$\mathscr{C}(\varphi_d)$, say. The consistency set~$\mathscr{C}(\varphi_d)$ is the set of all arrays of parameters~$\bm{\vartheta}$ the sequence of tests~$\varphi_d$ is consistent against. By definition~$$\mathscr{C}(\varphi_d) \subseteq \bigtimes_{d = 1}^{\infty} \mathbb{R}^d =: \bm{\Theta},$$ the latter denoting the set of all possible arrays of parameters.
Recall that a sequence of tests~$\varphi_d$ is said to have \emph{asymptotic size}~$\alpha \in [0, 1]$ if
\begin{equation*}
\mathbb{E}\left(\varphi_d(\bm{\varepsilon}_d)\right) \to \alpha.
\end{equation*}
In this article, following the Neyman-Pearson paradigm, we focus on the case where~$\alpha \in (0, 1)$, which we shall implicitly assume in the discussions throughout unless mentioned otherwise.
It is well-known that the LR test for~\eqref{eqn:tp} rejects if the Euclidean norm~$\|\cdot\|_2$ of the observation vector~$\bm{y}_d$ exceeds a critical value~$\kappa_{d,2}$ chosen to satisfy the given size constraints. That is, the LR test is given by~$\mathds{1}\{\|\cdot\|_2 \geq \kappa_{d,2}\}$. For notational simplicity, we abbreviate the sequence of tests~$\{\mathds{1}\{\|\cdot\|_2 \geq \kappa_{d,2}\} \}$ by~$\{2, \kappa_{d,2}\}$ and thus write~$\mathscr{C}(\{2, \kappa_{d,2}\})$ for its consistency set. The following result is contained in \cite{ingster},~cf.~also Theorem~3.1 in \cite{kp2} for extensions.
\begin{theorem}\label{thm:ing}
Let~$\kappa_{d,2}$ be a sequence of critical values such that the asymptotic size of~$\{2, \kappa_{d,2}\}$ is~$\alpha \in (0, 1)$. Then
\begin{equation}\label{eqn:2normcons}
\bm{\vartheta} \in \mathscr{C}(\{2, \kappa_{d,2}\}) \quad \Leftrightarrow \quad d^{-1/2} \|\bm{\theta}_d\|_2^2 \to \infty.
\end{equation}
\end{theorem}
Theorem~\ref{thm:ing} shows that the consistency set of the LR test is precisely characterized by the asymptotic behavior of the Euclidean norms of the array of alternatives under consideration. That the consistency set of the LR test can be completely characterized in terms of the norm its test statistic is based on seems natural, but is quite specific to the LR test, see Theorem~3.1 and the ensuing discussion in \cite{kp2}.
\begin{remark}\label{eqn:2normsize01}
Theorem~\ref{thm:ing} shows that~$\mathscr{C}(\{2, \kappa_{d,2}\})$ does not depend on the precise value of~$\alpha$, as long as~$\alpha \in (0, 1)$. Consequently, one easily sees that the equivalence in~\eqref{eqn:2normcons} remains true for any sequence of critical values~$\kappa_{d,2}$ such that
\begin{equation*}
0 < \liminf_{d \to \infty} \mathbb{P}(\|\bm{\varepsilon}_d\|_2 \geq \kappa_{d,2}) \leq \limsup_{d \to \infty} \mathbb{P}(\|\bm{\varepsilon}_d\|_2 \geq \kappa_{d,2}) < 1.
\end{equation*}
\end{remark}
\section{Superconsistency points}\label{sec:supcp}
\subsection{Improving on the LR test}\label{sec:improving}
Although the LR test is a canonical choice of a test for the testing problem~\eqref{eqn:tp}, there are many other reasonable tests available. For example, classic results by~\cite{birnbaum1955} and~\cite{stein1956} show that any test with convex acceptance region (i.e., the complement of its rejection region) is admissible. \citeauthor{anderson1955integral}'s (\citeyear{anderson1955integral}) theorem implies that if the acceptance region is furthermore symmetric around the origin then the test is also unbiased. Thus, any convex symmetric (around the origin) set delivers an admissible unbiased test, which is hence reasonable from a non-asymptotic point of view.
One class of tests that is intimately related to the LR tests consists of tests based on other~$p$-norms than the Euclidean one. For~$\bm{x} = (x_1, \hdots, x_d)' \in \mathbb{R}^d$ and~$p \in (0, \infty]$, define the~$p$-norm as usual via\footnote{Strictly speaking,~$||\cdot||_p$ defines a norm on~$\mathbb{R}^d$ only for~$p\in[1,\infty]$ and a quasi-norm for~$p \in (0, 1)$.}
\begin{equation*}
\|\bm{x}\|_p =
\begin{cases}
\left(\sum_{i = 1}^d |x_i|^p\right)^{\frac{1}{p}} & \text{if } p < \infty, \\
\max_{i = 1, \hdots, d} |x_i| & \text{else}.
\end{cases}
\end{equation*}
In analogy to the LR test,~$p$-\emph{norm based} tests reject if the~$p$-norm of the observation vector exceeds a critical value~$\kappa_{d,p}$. Special cases, which have an established tradition in high-dimensional inference, are the~$1$- and the supremum norm. We shall denote the sequence of tests~$\{\mathds{1}\{\|\cdot \|_p \geq \kappa_{d,p}\}\}$ by~$\{p, \kappa_{d,p}\}$. Clearly,~$p$-norm based tests are unbiased and admissible for~$p\in[1,\infty]$ as a consequence of the discussion in the first paragraph of this section.
Concerning the consistency sets~$\mathscr{C}(\{p, \kappa_{d,p}\})$ of general~$p$-norm based tests, it is a somewhat surprising fact that
\begin{enumerate}[label=(\roman*)]
\item~$\mathscr{C}(\{p, \kappa_{d,p}\})\subsetneqq \mathscr{C}(\{q, \kappa_{d,q}\})$ for~$0 < p < q < \infty$, i.e., strictly larger exponents~$p$ result in \emph{strictly} larger consistency sets; and
\item that this ranking does not extend to~$q = \infty$, in the sense that there are alternatives the supremum norm based test is not consistent against but against which the LR test is consistent and vice versa;
\end{enumerate}
see~\cite{kp2} for formal statements.\footnote{Recall that throughout the present article we implicitly impose the condition that all tests have asymptotic size in~$(0, 1)$ if not otherwise mentioned.} From (i) it follows that any~$p$-norm based test with~$p \in (2, \infty)$ has a \emph{strictly} larger consistency set than the LR test.
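The dichotomy in (ii) is easy to observe numerically. In the following Monte Carlo sketch (dimension, signal strengths, and replication number are arbitrary illustrative choices), the supremum norm based test dominates the LR test against a single large spike, while the ranking is reversed against a dense perturbation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, reps, alpha = 5000, 1000, 0.05
null = rng.normal(size=(reps, d))        # simulate both critical values
crit2 = np.quantile(np.linalg.norm(null, axis=1), 1 - alpha)
critsup = np.quantile(np.abs(null).max(axis=1), 1 - alpha)

for name, theta in [("sparse", np.r_[5.0, np.zeros(d - 1)]),
                    ("dense", np.full(d, 0.2))]:
    y = theta + rng.normal(size=(reps, d))
    print(name,
          (np.linalg.norm(y, axis=1) > crit2).mean(),  # LR test power
          (np.abs(y).max(axis=1) > critsup).mean())    # sup-norm power
\end{verbatim}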
Other tests that strictly dominate the LR test can be obtained, e.g., through combination procedures that enhance the LR test with a sequence of tests~$\eta_d$ that is sensitive against alternatives of a different ``type'' than the LR test in the sense that~$$\mathscr{C}(\eta_d) \setminus \mathscr{C}(\{2, \kappa_{d,2}\}) \neq \emptyset.$$ To see how this can be achieved, note that the consistency set of the sequence of tests~$\psi_d$, say, where~$\psi_d$ rejects if the LR test \emph{or}~$\eta_d$ rejects, contains~$\mathscr{C}(\{2, \kappa_{d,2}\}) \cup \mathscr{C}(\eta_d)$, and hence dominates the LR test in terms of consistency. Essentially, this is the power enhancement principle of~\cite{fan2015}, see Section~\ref{sec:examples} below for further discussion and~\cite{kp1} for related results. Note that if~$\eta_d$ has asymptotic size~$0$, which is an assumption imposed on~$\eta_d$ in the context of the power enhancement principle, nothing is lost in terms of asymptotic size when using~$\psi_d$ instead of the LR test, because both sequences of tests then have the same asymptotic size.\footnote{If~$\eta_d$ has a positive asymptotic size that is smaller than the asymptotic size targeted in the final combination test, one can work with a LR test with small enough asymptotic size in the combination procedure to obtain a test that dominates the LR test in terms of consistency (recall from Theorem~\ref{thm:ing} that the consistency set of the LR test does not depend on the specific value of the asymptotic size).}
To clarify how much can possibly be gained in terms of consistency by using a sequence of tests~$\varphi_d$ other than the LR test, we shall consider the corresponding set~$\mathscr{C}(\varphi_d) \setminus \mathscr{C}(\{2, \kappa_{d,2}\})$, which we refer to as the \emph{superconsistency points} of the sequence of tests~$\varphi_d$ (relative to the LR test). Note that the set of superconsistency points is defined for any sequence of tests, regardless of whether it dominates the LR test or not (in the sense that its consistency set includes that of the LR test). On a conceptual level, superconsistency points are related to superefficiency points of estimators relative to the maximum likelihood estimator in classic parametric theory.
\subsection{The relative volume of the set of superconsistency points}\label{sec:relvol}
The central question we consider in this article is how ``large'' the set of superconsistency points~$\mathscr{C}(\varphi_d) \setminus \mathscr{C}(\{2, \kappa_{d,2}\})$ can possibly be for a sequence of tests~$\varphi_d$ with asymptotic size in~$(0, 1)$. Note that the larger~$\mathscr{C}(\varphi_d) \setminus \mathscr{C}(\{2, \kappa_{d,2}\})$ is, the larger is the set of alternatives the sequence of tests~$\varphi_d$ is consistent against but the LR test is not consistent against. Although we already know from the examples discussed in Section~\ref{sec:improving} that~$\mathscr{C}(\varphi_d) \setminus \mathscr{C}(\{2, \kappa_{d,2}\})$ is non-empty for many~$\varphi_d$, we here investigate whether one can \emph{substantially} enlarge the consistency set by using another test than the LR test.
To make the above question amenable to a formal treatment, note that Theorem~\ref{thm:ing} implies that for any sequence of LR tests~$\{2, \kappa_{d,2}\}$ with asymptotic size~$\alpha \in (0, 1)$, the complement of~$\mathscr{C}(\{2, \kappa_{d,2}\})$ satisfies
\begin{equation*}
\bm{\Theta} \setminus \mathscr{C}(\{2, \kappa_{d,2}\}) \supseteq \bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)
\end{equation*}
if the sequence~$r_d > 0$ is such that~$r_d/d^{1/4}$ is bounded and
where~$\mathbb{B}_2^d(r)$ denotes the Euclidean ball with radius~$r$ centered at the origin. That is, the LR test is inconsistent against any element of~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$. We now investigate how many inconsistency points of the LR test can be removed from any such benchmark~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$ by erasing all superconsistency points of a sequence of tests~$\varphi_d$.
Formally, this is to be understood in the following sense: let~$\varphi_d$ be a sequence of tests with consistency set~$\mathscr{C}(\varphi_d)$ and let~$r_d$ be such that~$r_d/d^{1/4}$ is bounded. Let~$\mathbb{D}_d \subseteq \mathbb{B}_2^d(r_d)$ be such that~$$\bigtimes_{d = 1}^{\infty} \mathbb{D}_d \subseteq \mathscr{C}(\varphi_d).$$ Note that all elements of~$\bigtimes_{d = 1}^{\infty} \mathbb{D}_d$ are superconsistency points of~$\varphi_d$ which are also contained in the benchmark~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$. Denoting by~$\text{vol}_d$ the~$d$-dimensional Lebesgue measure, one could now study the asymptotic behavior of the sequences $\mathrm{vol}_d\left(\mathbb{B}_2^d(r_d)\right) - \mathrm{vol}_d\left(\mathbb{D}_d\right)$ or~$\mathrm{vol}_d\left(\mathbb{D}_d\right)$ in order to determine whether one can substantially improve upon the LR test. However, these sequences both converge to~$0$. To see this, just note that
\begin{equation*}
\mathrm{vol}_d\left(\mathbb{B}_2^d(r_d)\right) = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)} r_d^d \to 0,
\end{equation*}
in case~$r_d/d^{1/4}$ is bounded, as a consequence of Stirling's approximation to the gamma function; together with~$\mathbb{D}_d \subseteq \mathbb{B}_2^d(r_d)$ this also gives~$\mathrm{vol}_d(\mathbb{D}_d) \to 0$. Thus, such ``absolute'' volume measures are uninformative, since even the absolute volume of~$\mathbb{B}_2^d(r_d)$ tends to zero. Hence, we investigate the asymptotic behavior of the~\emph{relative} volume measure
\begin{equation}\label{eqn:limc1p}
\frac{\mathrm{vol}_d\left(\mathbb{D}_d\right)}{\mathrm{vol}_d\left(\mathbb{B}_2^d(r_d)\right)}.
\end{equation}
Obviously, the ratio in~\eqref{eqn:limc1p} is a number in~$[0, 1]$. On the one hand, if this ratio is asymptotically close to~$1$, this means that in terms of relative volume many elements of the benchmark~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$ are superconsistency points of the sequence of tests~$\varphi_d$. That is, one can \emph{substantially} improve upon the LR test by using~$\varphi_d$ (or by combining the LR test with~$\varphi_d$ through the power enhancement principle). On the other hand, if this ratio is asymptotically close to~$0$, this means that in terms of relative volume only few elements of the benchmark are superconsistency points of~$\varphi_d$.
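The vanishing of the absolute volume noted above is easy to confirm numerically. The following Python sketch (an illustration only) evaluates~$\log \mathrm{vol}_d(\mathbb{B}_2^d(r_d))$ in the boundary case~$r_d = d^{1/4}$; after an initial increase, the log-volume becomes negative and diverges to~$-\infty$.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

for d in [100, 1000, 10000, 100000]:
    r = d ** 0.25
    logvol = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1) + d * np.log(r)
    print(d, round(logvol, 1))  # eventually negative, tending to -infinity
\end{verbatim}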
We emphasize that using the (normalized) Lebesgue measure to assess the asymptotic magnitude of the set of superconsistency points is one among many possible choices. Other measures would be possible too, but the uniform prior over~$\mathbb{B}_2^d(r_d)$ is a natural choice as in many situations there is no clear guidance concerning the type of alternative one wishes to favor.\footnote{Our results remain valid if, instead of measuring the magnitude of~$\mathbb{D}_d$ w.r.t.~the uniform probability measure on~$\mathbb{B}_2^d(r_d)$, one measures its magnitude w.r.t.~the uniform probability measure on the Euclidean sphere of radius~$r_d$. We will comment on this in Remark~\ref{rem:surf}, but will focus on the uniform distribution on~$\mathbb{B}_2^d(r_d)$ throughout the article.}
Note that the ratio in~\eqref{eqn:limc1p} depends on two ingredients:
\begin{enumerate}
\item the benchmark~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$;
\item the sequence of superconsistency points~$\bigtimes_{d = 1}^{\infty} \mathbb{D}_d$ which depends on the sequence of tests~$\varphi_d$.
\end{enumerate}
Therefore, one could suspect that the asymptotic behavior of~\eqref{eqn:limc1p} depends in a complicated way on the interplay between these two components. Nevertheless, it turns out that the asymptotic behavior of~\eqref{eqn:limc1p} has a simple description that does not depend on any of the two ingredients just described. In fact, we shall prove in the following two sections that the limit of the sequence is always~$0$ for large classes of sequences of tests~$\varphi_d$. Hence, it is impossible to improve on the LR test in terms of the magnitude of its consistency set apart from a set of superconsistency points that is negligible in a relative volume sense.
In the following Section~\ref{sec:pproof}, we shall first establish this result for~$\varphi_d$ a sequence of~$p$-norm based tests with~$p \in (2, \infty)$. Note that all these tests have a strictly larger consistency set than the LR test as discussed in Section~\ref{sec:improving}. A general result, the proof of which is a bit more involved, will be presented in Section~\ref{sec:general}.
\section{$p$-norm based tests}\label{sec:pproof}
We now consider the asymptotic behavior of the sequence~\eqref{eqn:limc1p} for the special case where~$\varphi_d$ is a sequence of~$p$-norm based tests with~$p \in (2, \infty)$ being fixed. For this class of tests, we can exploit the characterization of their consistency sets provided in Theorem 3.1 and Corollary 3.2 of~\cite{kp2}, together with results from asymptotic geometry developed in~\cite{ss} based on earlier results in~\cite{sz}. These ingredients lead to a direct proof of the limit of the sequence in~\eqref{eqn:limc1p} being~$0$.
\begin{theorem}\label{thm:nogain}
Let~$p \in (2, \infty)$ and let the sequence of critical values~$\kappa_{d,p}$ be such that~$\{p, \kappa_{d,p}\}$ has asymptotic size~$\alpha \in (0, 1)$. Then, for any sequence~$r_d > 0$ such that~$r_d/d^{1/4}$ is bounded, and any sequence of non-empty Borel sets~$\mathbb{D}_d \subseteq \mathbb{B}_2^d(r_d)$ such that~
\begin{equation}\label{eqn:dpf}
\bigtimes_{d = 1}^{\infty} \mathbb{D}_d \subseteq \mathscr{C}(\{p, \kappa_{d,p}\}),
\end{equation}
we have
\begin{equation*}
\lim_{d \to \infty} \frac{\mathrm{vol}_d\left(\mathbb{D}_d\right)}{\mathrm{vol}_d\left(\mathbb{B}_2^d(r_d)\right)} = 0.
\end{equation*}
\end{theorem}
\begin{proof}
Let~$\{p, \kappa_{d,p}\}$,~$r_d$, and~$\mathbb{D}_d$ be as in the statement of the theorem. Corollary~3.2 in~\cite{kp2} shows that~$\bm{\vartheta} \in \mathscr{C}(\{p, \kappa_{d,p}\})$ if and only if~$d^{-1/2} (\|\bm{\theta}_d\|_2^2 \vee \|\bm{\theta}_d\|_p^p) \to \infty$. Together with~$r_d/d^{1/4}$ being bounded,~$\mathbb{D}_d \subseteq \mathbb{B}_2^d(r_d)$, and~\eqref{eqn:dpf}, this guarantees that~$\tilde{s}_d/d^{1/(2p)} \to \infty$ for~$\tilde{s}_d := \inf\{ \|\bm{\theta}_d\|_p: \bm{\theta}_d \in \mathbb{D}_d \}$.
The definition of~$\tilde{s}_d$ implies~$$\mathbb{G}_d := \mathbb{B}_2^d(r_d) \setminus \mathbb{D}_d \supseteq \mathbb{B}_2^d(r_d) \cap \mathbb{B}_p^d(\tilde{s}_d/2).$$
Define the sequence~$s_d := d^{1/(2p) - 1/4} r_d > 0$, so that~$s_d/d^{1/(2p)} = r_d/d^{1/4}$ is bounded. Hence, eventually~$\tilde{s}_d \geq 2s_d$ and thus~$\mathrm{vol}_d(\mathbb{G}_d) \geq \mathrm{vol}_d(\mathbb{B}_2^d(r_d) \cap \mathbb{B}_p^d(s_d))$ holds, so that the quotient~
\begin{equation*}
1 - \frac{\mathrm{vol}_d\left(\mathbb{D}_d\right)}{\mathrm{vol}_d(\mathbb{B}_2^d(r_d))} = \frac{\mathrm{vol}_d\left(\mathbb{G}_d\right)}{\mathrm{vol}_d(\mathbb{B}_2^d(r_d))}
\end{equation*}
is eventually not smaller than
\begin{equation*}
\frac
{\mathrm{vol}_d\left(
\mathbb{B}_2^d(r_d) \cap \mathbb{B}_p^d(s_d)
\right)}
{\mathrm{vol}_d\left(
\mathbb{B}_2^d(r_d)
\right)} = \frac
{\mathrm{vol}_d\left(
\mathbb{B}_2^d(e_{d,2}) \cap \mathbb{B}_p^d(e_{d,2} s_d/r_d)
\right)}
{\mathrm{vol}_d\left(
\mathbb{B}_2^d(e_{d,2})
\right)}= \mathrm{vol}_d
\left(
\mathbb{B}_2^d(e_{d,2}) \cap u_d \mathbb{B}_p^d(e_{d,p})
\right),
\end{equation*}
where~$u_d := \frac{e_{d,2}}{e_{d,p}} \frac{d^{1/(2p)}}{d^{1/4}}$,~$e_{d,p} := \frac{1}{2} \frac{\Gamma(1+d/p)^{1/d}}{\Gamma(1+1/p)}$, and consequently~$\mathrm{vol}_d(
\mathbb{B}_2^d(e_{d,2})
) = 1$. The main result in \cite{ss} shows that for every~$t$ large enough~$\mathrm{vol}_d
(
\mathbb{B}_2^d(e_{d,2}) \cap t \mathbb{B}_p^d(e_{d,p})
) \to 1$, as~$d \to \infty$. Therefore, we are done upon verifying that~$u_d \to \infty$. This follows from the lower bound
\begin{equation*}
\frac{e_{d,2}}{e_{d,p}} =
\left[\frac{\Gamma(1+d/2)}{\Gamma(1+d/p)}\right]^{1/d}
\frac{\Gamma(\frac{1}{p} + 1)}{\Gamma(\frac{1}{2} + 1)} \geq
\left[d/p\right]^{1/2-1/p}
\frac{\Gamma(\frac{1}{p} + 1)}{\Gamma(\frac{1}{2} + 1)}
,
\end{equation*}
where we used the inequality for ratios involving the gamma function in Equation~12 of~\cite{jameson} with~``$x = 1+d/p$'' (which is not smaller than~$1$) and~``$y = d(1/2 - 1/p)$'' (which is not smaller than~$0$).
\end{proof}
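The volume asymptotics from \cite{ss} used in the last step can also be probed by simulation. The following Monte Carlo sketch (an illustration only; $p = 4$ and $t = 1.2$ are arbitrary choices) samples uniformly from~$\mathbb{B}_2^d(e_{d,2})$ and reports the fraction of mass falling into~$t\,\mathbb{B}_p^d(e_{d,p})$, which for these choices increases towards one as~$d$ grows.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def e_radius(d, p):  # e_{d,p}: the p-ball of this radius has volume one
    return 0.5 * np.exp(gammaln(1 + d / p) / d - gammaln(1 + 1 / p))

rng = np.random.default_rng(2)
p, t, reps = 4.0, 1.2, 5000
for d in [10, 100, 1000]:
    g = rng.normal(size=(reps, d))                # uniform direction ...
    u = rng.uniform(size=(reps, 1)) ** (1.0 / d)  # ... and radial factor
    x = e_radius(d, 2) * u * g / np.linalg.norm(g, axis=1, keepdims=True)
    frac = (np.linalg.norm(x, ord=p, axis=1) <= t * e_radius(d, p)).mean()
    print(d, frac)
\end{verbatim}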
Hence, even though~$\mathscr{C}(\{p, \kappa_{d, p}\})$ contains the consistency set of the LR test as a strict subset for every~$p \in (2, \infty)$ as discussed in Section~\ref{sec:improving}, the subset of those alternatives in each benchmark~$\bigtimes_{d = 1}^{\infty} \mathbb{B}_2^d(r_d)$ for which the test~$\{p, \kappa_{d, p}\}$ provides an improvement over the LR test is ``negligible'' in (relative) volume. That this result is not specific to~$p$-norm based tests, but extends to essentially all relevant tests will be shown next.
\section{Tests with acceptance regions that are star-shaped around the origin}\label{sec:general}
The proof of Theorem~\ref{thm:nogain} builds heavily on the particular structure of the consistency set of~$p$-norm based tests. Working with a ``general'' class of tests, one can no longer exploit specific properties of the consistency set. Therefore, we take a different approach, which is based on invariance properties of the LR test and the Lebesgue measure, together with an invariance-inducing construction built on an average version of the Gaussian correlation inequality. Since most tests used in practice are non-randomized (i.e., they are indicator functions of the complements of their acceptance regions) we shall focus on this class of tests.
To specify a sequence of non-randomized tests, let~$A_d \subseteq \mathbb{R}^d$ for~$d \in \mathbb{N}$ be a sequence of (Borel) acceptance regions denoted by~$\mathbb{A} := \{A_d : d \in \mathbb{N}\}$. Such a sequence~$\mathbb{A}$ of acceptance regions corresponds to a sequence of tests the~$d$-th element of which is defined as the indicator function of the complement of~$A_d$. For simplicity, we write~$\mathbb{A}$ for this sequence of tests. Therefore, we also write~$\mathscr{C}(\mathbb{A})$ for the associated consistency set.
We shall establish a general version of Theorem~\ref{thm:nogain} under the following assumption concerning~$\mathbb{A} = \{A_d : d \in \mathbb{N}\}$.
\begin{assumption}\label{as:conv}
For every~$d \in \mathbb{N}$ the set~$A_d$ is star-shaped around the origin.
\end{assumption}
To illustrate the generality of this assumption we first discuss some examples.
\subsection{Examples of tests with acceptance region satisfying Assumption~\ref{as:conv}}\label{sec:examples}
Many commonly used tests satisfy~Assumption \ref{as:conv}. For example, every convex acceptance region containing~$\bm{0}_d$ is obviously star-shaped around this point.\footnote{Tests for the problem~(\ref{eqn:tp}) whose acceptance region does not contain~$\bm{0}_d$ are not generally unbiased.} The following non-exhaustive list contains concrete examples of frequently used tests and related constructions that satisfy~Assumption \ref{as:conv}. It also highlights situations in which allowing the acceptance region~$A_d$ to be star-shaped rather than convex is important.
\begin{itemize}[leftmargin=*]
\item[-] \emph{$p$-norm based tests}: These tests, described in Section~\ref{sec:improving}, reject~$H_{0,d}:\bm{\theta}_d=\bm{0}_d$ if~$||\bm{y}_d||_p$ exceeds a critical value~$\kappa_{d,p}$. Thus, for any~$p\in(0,\infty]$, one has that~$A_d=\cbr[0]{\bm{y}\in\mathbb{R}^d:\ ||\bm{y}||_p < \kappa_{d,p}}$, which is clearly star-shaped, while it is only convex in case~$p\in[1,\infty]$. Note that because Assumption~\ref{as:conv} is imposed for every fixed~$d\in\mathbb{N}$, it is also satisfied for a sequence of tests based on~$p$-norms that vary with~$d$.
\item[-] \emph{Combination of tests with star-shaped acceptance regions}: In high-dimensional testing problems, as we highlighted previously, it is frequently the case that different tests have good power properties against different types of alternatives. One way of constructing a test that is powerful against more alternatives than any single test in a given collection of tests consists of rejecting as soon as at least one of the tests in the given collection rejects. This method has been applied, e.g., by \cite{spokoiny} and has become a frequently employed method of constructing adaptive tests in the minimax literature, cf.~also Chapter~8 of \cite{ginenickl}. In our framework, the abstract idea of such constructions is as follows: For each~$d\in\mathbb{N}$ consider a family of acceptance regions~$A_{d,i}$, each star-shaped around the origin, where~$i$ varies in the abstract (non-empty) index set~$\mathbb{I}_d$. The test that rejects if~$\bm{y}_d$ falls outside any of the~$A_{d,i}$ has acceptance region~$A_d:=\bigcap_{i\in\mathbb{I}_d}A_{d,i}$, which is then also star-shaped around the origin.\footnote{This test can, as is typically done, also be expressed as the maximum of the individual tests with acceptance regions~$A_{d,i},i\in\mathbb{I}_d$.} Hence, such combination procedures satisfy~Assumption~\ref{as:conv}.\footnote{While the intersection~$A_d$ is star-shaped, additional conditions need to be satisfied by specific constructions to guarantee that the sequence of tests has the targeted asymptotic size.}
\item[-] \emph{Power enhancement principle}: In the vein of the previous example, and as motivated in Section~\ref{sec:improving}, \cite{fan2015} added to a test \emph{statistic}~$T_{d,1}:\mathbb{R}^d\to[0,\infty)$ with critical value~$\kappa_d\in(0,\infty)$ another test statistic~$T_{d,2}$ which equals 0 with probability tending to 1 under the null hypothesis (but diverges faster than~$\kappa_d$ under alternatives under which~$T_{d,1}$ does not diverge). This \emph{enhanced} test rejects in case $T_{d,1}(\bm{y})+T_{d,2}(\bm{y})$ exceeds~$\kappa_d$. If~$T_{d,i}(\lambda \bm{y})\leq T_{d,i}(\bm{y})$ for every~$\lambda\in(0,1)$, $i\in\cbr[0]{1,2}$ and~$\bm{y} \in \mathbb{R}^d$, which is a natural condition that is often satisfied as test statistics for the problem under consideration are typically homogeneous, the corresponding acceptance region~$A_d:=\cbr[0]{\bm{y}\in\mathbb{R}^d: T_{d,1}(\bm{y})+T_{d,2}(\bm{y})< \kappa_d}$ is star-shaped.
\item[-] \emph{Higher Criticism}: The Higher Criticism (HC) test was introduced in \cite{tukey1976t13}, and the analysis of it was initiated by~\cite{donoho}. HC has become a popular test in high-dimensional testing problems due to its strong power guarantees, cf., e.g., \cite{hall2010innovated, tony2011optimal, arias2011global, barnett2014analytical, porter2020beyond}. For~$0\leq a < b\leq 1$, a HC-test statistic for the problem~\eqref{eqn:tp} is of the form\footnote{HC-type tests are traditionally studied in mixture models with one-sided alternatives. We have made the appropriate adjustments in the context of our two-sided alternatives in~\eqref{eqn:tp}.}
\begin{align*}
\text{HC}_d(a,b):=\sup_{a<\delta\leq b}\frac{\sum_{i=1}^d(\mathds{1}_{\cbr[0]{|y_{i,d}|\geq z_{1-\delta/2}}}-\delta)}{\sqrt{\delta(1-\delta)d}},
\end{align*}
where~$z_{1-\delta/2}=\Phi^{-1}(1-\delta/2)$,~$\Phi$ being the cdf of the standard normal distribution. The classic HC-test corresponds to~$a=0$ and one often chooses~$b=1/2$. The HC$^+$ variant, on the other hand, uses~$a=1/d$. Since HC-tests reject for large values of~$\text{HC}_d(a,b)$ and since~$\mathds{1}_{\cbr[0]{|\lambda x|\geq c}}\leq \mathds{1}_{\cbr[0]{|x|\geq c}}$ for all~$\lambda\in(0,1)$ as well as~$c,x\in\mathbb{R}$, it follows that the acceptance regions of such tests are star-shaped (a computational sketch of the HC statistic is given after this list). Note, however, that they are \emph{not} generally convex, thus underscoring again the importance of the flexibility that Assumption~\ref{as:conv} allows for.
\item[-] \emph{Tests from meta-analysis and tests based on~p-values}: Many tests from meta-analysis\footnote{In this context one interprets each~$y_{i,d}$ as a z-score.} are based on combining the~p-values~$p_i:=2(1-\Phi(|y_{i,d}|)),\ i=1,\hdots,d$. Observing that each~$p_i$ is decreasing in~$|y_{i,d}|$ and~$|\lambda y_{i,d}|\leq |y_{i,d}|$ for all~$\lambda\in(0,1)$, it follows that \emph{Fisher's combined probability test}, \cite{fisher1934statistical}, which rejects when~$-2\sum_{i=1}^d\ln(p_i)$ exceeds a critical value, has a star-shaped acceptance region. Indeed, since small p-values are evidence against the null hypothesis, \cite{birnbaum1954combining} established that (while there is no best combination procedure) any combined test that is most powerful against \emph{some} deviation from~$H_{0,d}$ and which rejects for the~p-values~$p_1,\hdots,p_d$ must also reject for the~p-values~$\tilde{p}_1,\hdots,\tilde{p}_d$ if~$\tilde{p}_i\leq p_i$ for~$i=1,\hdots,d$. Clearly, such tests have star-shaped acceptance regions. Many classic procedures from meta-analysis, such as the ones of~\cite{tippett1931methods}, \cite{pearson1933method}, \cite{simes1986improved}, \cite{stouffer1949american}, satisfy this reasonable criterion. \cite{cousins2007annotated} provides further examples and discussion of tests based on combining~p-values while~\cite{owen09} contains a specialized review. In fact, methods based on suitably combining~p-values and related quantities have received continued interest in the era of high-dimensional statistics as witnessed by, e.g., the recent works of \cite{duan2020interactive} and~\cite{vovk2020combining, vovk2020values}.
\end{itemize}
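To make the Higher Criticism item above concrete, the following Python sketch implements the two-sided statistic displayed there with the HC$^+$ choice~$a = 1/d$ and~$b = 1/2$ (evaluating the supremum over the grid of realized p-values is one common discretization, used here for simplicity); under a sparse signal the statistic increases markedly.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def higher_criticism(y, a=None, b=0.5):
    d = y.size
    a = 1.0 / d if a is None else a               # HC^+ choice a = 1/d
    pv = np.sort(2 * norm.sf(np.abs(y)))          # two-sided p-values
    grid = pv[(pv > a) & (pv <= b)]               # realized p-values as grid
    counts = np.searchsorted(pv, grid, side="right")
    # counts[j] = #{i : |y_i| >= z_{1 - grid[j]/2}}
    return np.max((counts - d * grid) / np.sqrt(grid * (1 - grid) * d))

rng = np.random.default_rng(3)
y0 = rng.normal(size=1000)                        # global null
y1 = y0 + np.r_[np.full(10, 4.0), np.zeros(990)]  # ten spiked coordinates
print(higher_criticism(y0), higher_criticism(y1))
\end{verbatim}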
\subsection{Superconsistency points of sequences of tests satisfying Assumption \ref{as:conv}}
We now establish the analogue of Theorem~\ref{thm:nogain} for all sequences of non-randomized tests the acceptance regions of which satisfy Assumption~\ref{as:conv}. The proof is based on invariance arguments together with an ``average'' version of the Gaussian correlation inequality established in Proposition~5 of~\cite{schechtman}. The advantage over the Gaussian correlation inequality in its original form is that it applies to a larger class of sets (i.e., star-shaped sets with the origin as a center, as opposed to convex sets that are symmetric around the origin), which is of fundamental importance for our purpose, cf.~the examples discussed in Section~\ref{sec:examples}. The proof of the following theorem can be found in Appendix~\ref{proofmain0s}.
\begin{theorem}\label{thm:nogain2}
Let the sequence of Borel sets~$\mathbb{A}$ satisfy Assumption~\ref{as:conv} and let the associated sequence of tests have asymptotic size~$\alpha \in (0, 1)$. Then, for any sequence~$r_d > 0$ such that~$r_d/d^{1/4}$ is bounded, and any sequence of non-empty Borel sets~$\mathbb{D}_d \subseteq \mathbb{B}_2^d(r_d)$ such that~
\begin{equation}\label{eqn:inclconsset}
\bigtimes_{d = 1}^{\infty} \mathbb{D}_d \subseteq \mathscr{C}(\mathbb{A}),
\end{equation} we have
\begin{equation}\label{eqn:limmain}
\lim_{d \to \infty} \frac{\mathrm{vol}_d\left(\mathbb{D}_d\right)}{\mathrm{vol}_d\left(\mathbb{B}_2^d(r_d)\right)} = 0.
\end{equation}
\end{theorem}
\begin{remark}[Spherical measure instead of relative volume]\label{rem:surf}
One could ask what happens in the context of Theorem~\ref{thm:nogain2} if, instead of considering the asymptotic behavior of~$\mathrm{vol}_d(\mathbb{D}_d)/\mathrm{vol}_d(\mathbb{B}_2^d(r_d))$ in~\eqref{eqn:limmain}, one considers~$\rho_{d,r_d}(\mathbb{D}_d)$, where~$\rho_{d,r_d}$ denotes the uniform probability measure on the sphere~$\{\xi \in \mathbb{R}^d: \|\xi\|_2 = r_d\}$. Inspection of the proof of Theorem~\ref{thm:nogain2} shows that~$$\rho_{d,r_d}\left(\mathbb{D}_d\right) \to 0.$$ In fact, the proof of Theorem~\ref{thm:nogain2} is based on that statement. That is, also with this alternative measure, one reaches qualitatively the same conclusion concerning the magnitude of the set of superconsistency points of a sequence of tests relative to the LR test.
\end{remark}
\section{Conclusion}
In high-dimensional testing problems, the choice of a test implicitly or explicitly determines the type of alternative it prioritizes. In the Gaussian sequence model, the LR test is based on the Euclidean norm. Many tests exist that are consistent against alternatives the LR test is not consistent against (or are even consistent against strictly more alternatives than the LR test), i.e., they possess what we refer to as superconsistency points. We have shown that for any test with an acceptance region that is star-shaped around the origin, the corresponding set of superconsistency points is negligible in an asymptotic sense. This can be interpreted as a high-dimensional testing analogue of Le Cam's famous result that the set of superefficiency points relative to the maximum likelihood estimator is at most a Lebesgue null set, cf.~\cite{lecam1953}. In analogy to that classic finding, our result does not suggest that one should always use the LR test. But it shows that relative to a uniform prior on the parameter space one cannot expect substantial improvements by using other tests.
\bibliographystyle{ims
\section{Introduction}
Let $(M,g,\mathcal F)$ and $(M',g',\mathcal F')$ be foliated Riemannian manifolds and $\phi:M\to M'$ be a smooth foliated map (i.e., $\phi$ is a smooth leaf-preserving map). Let $Q$ be the normal bundle of $\mathcal F$ and $d_{T}\phi=d\phi|_{Q}$, the restriction of $d\phi$ to the normal bundle $Q$. Then $\phi$ is said to be {\it transversally harmonic} if $\phi$ is a solution of the Euler-Lagrange equation $\tau_{b}(\phi)=0$, where $\tau_b(\phi)={\rm tr}_{Q}(\nabla_{\rm tr} d_T\phi)$ is the transversal tension field of $\phi$.
Transversally harmonic maps on foliated Riemannian manifolds have been studied by many authors \cite{CZ,FJ,JU3,JJ1,JJ2,KW1,KW2,OSU}. However, a transversally harmonic map is not a critical point of the transversal energy functional \cite{JJ1}
\begin{align*}
E_{B}(\phi)=\frac{1}{2}\int_{M} | d_T \phi|^2\mu_{M}.
\end{align*}
In 2013, S. Dragomir and A. Tommasoli \cite{DT} defined a new harmonic map, called an {\it $(\mathcal F,\mathcal F')$-harmonic map}, which is a critical point of the transversal energy functional $E_{B}(\phi)$. The two definitions are equivalent when $\mathcal F$ is minimal.
On the other hand, Y. Chiang and R. Wolak \cite{CW} defined the transversally $f$-harmonic map.
Let $f$ be a basic function on $M$. The map $\phi$ is said to be {\it transversally $f$-harmonic} if $\phi$ is a solution of the Euler-Lagrange equation $\tau_{b,f}(\phi)=0$, where $\tau_{b,f}(\phi)$ is the {\it transversal $f$-tension field} of $\phi$ defined by $\tau_{b,f}(\phi) ={\rm tr}_Q(\nabla_{\rm tr}(e^{-f} d_T\phi)).$ From the first variation formula (Theorem 3.4), the transversally $f$-harmonic map is not a critical point of the transversal $f$-energy functional
\begin{align*}
E_{B,f}(\phi) = \frac12\int_M |d_T\phi|^2 e^{-f}\mu_M.
\end{align*}
Sometimes, we use a function $f$ instead of $e^{-f}$ (\cite{CW,JU3}).
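Expanding the trace in the definition of~$\tau_{b,f}$ gives the pointwise identity
\begin{align*}
\tau_{b,f}(\phi) = e^{-f}\big(\tau_b(\phi) - d_T\phi(\nabla_{\rm tr} f)\big),
\end{align*}
where $\nabla_{\rm tr} f$ denotes the transversal gradient of $f$; hence $\phi$ is transversally $f$-harmonic if and only if $\tau_b(\phi) = d_T\phi(\nabla_{\rm tr} f)$.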
Similarly, the map $\phi$ is said to be {\it $(\mathcal F,\mathcal F')_f $-harmonic} if $\phi$ is a critical point of the transversal $f$-energy functional $E_{B,f}(\phi)$.
If $f$ is constant, then a transversally $f$-harmonic (resp. $(\mathcal F,\mathcal F')_f$-harmonic) map is just a transversally harmonic (resp. $(\mathcal F,\mathcal F')$-harmonic) map. It is well known \cite{JU3} that a transversally harmonic map is a critical point of the transversal $f$-energy functional for a special function $f$ satisfying $df=-\kappa_B$ ($\kappa_B$ is the basic part of the mean curvature form $\kappa$ of $\mathcal F$). Hence if $f_\kappa$ is a solution of $df=-\kappa_B$, then an $(\mathcal F,\mathcal F')_{f_\kappa}$-harmonic map is the same as a transversally harmonic map.
Originally, $f$-harmonic maps on Riemannian manifolds were studied by A. Lichnerowicz in 1969 \cite{LI} and later by J. Eells and L. Lemaire in 1977 \cite{EL}.
In this article, we study $f$-harmonic maps (transversally $f$-harmonic maps and $(\mathcal F,\mathcal F')_f$-harmonic maps) on weighted foliations. A {\it weighted foliation} $(M,\mathcal F, g,e^{-f}\nu)$ is a Riemannian foliation endowed with a weighted transversal volume form $e^{-f}\nu$ for some basic function $f$, where $\nu$ is the transversal volume form of $\mathcal F$.
The geometry of a weighted manifold (or a smooth metric measure space) was developed by D. Bakry and M. \'Emery \cite{BE} and studied by many authors \cite{LO,LV,ST,ST2,VI,WW}. Also, the geometry of weighted manifolds is closely related to that of self-shrinkers and gradient Ricci solitons. An important geometric tool is the Bakry-\'Emery Ricci tensor, which was first introduced by A. Lichnerowicz \cite{LI1}.
For the study of weighted foliations, we define the Bakry-\'Emery type Ricci tensor ${\rm Ric}_f^Q$ on $(M,\mathcal F, g,e^{-f}\nu)$ by
\begin{align*}
{\rm Ric}_f^Q = {\rm Ric}^Q +{\rm Hess}_T f,
\end{align*}
where ${\rm Ric}^Q$ is the transversal Ricci tensor and ${\rm Hess}_T f$ is the transversal Hessian \cite{JU3} of $\mathcal F$.
We call ${\rm Ric}_f^Q$ the {\it transversal Bakry-\'Emery Ricci tensor} of a weighted foliation. Then we have the following theorems.
\begin{thm} (cf. Theorem 3.6)
Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a closed Riemannian manifold $M$ with ${\rm Ric}_f^Q\geq 0$ and
$(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature $K^{Q'}$. Then a transversally $f$-harmonic map $\phi:M \rightarrow M'$ is transversally totally geodesic.
In addition,
(1) if ${\rm Ric}_f^Q>0$ at some point, then $\phi$ is transversally constant;
(2) if $K^{Q'}<0$, then $\phi$ is transversally constant or $\phi(M)$ is a transversally geodesic closed curve.
\end{thm}
Let $\mathcal K$ be the set of all basic functions $f$ satisfying $ i(\kappa_B^\sharp)df \leq 0$.
\begin{rem} For $f\in\mathcal K$, Theorem 1.1 holds for $(\mathcal F,\mathcal F')_f$-harmonic maps (cf. Theorem 3.8).
\end{rem}
Moreover, we study Liouville type theorems for $(\mathcal F,\mathcal F')_f$-harmonic and transversally $f$-harmonic maps, respectively. The Liouville type theorem has been studied by many researchers \cite{JU1,EN,SY,YA} on Riemannian manifolds and \cite{FJ,JJ1,JJ2,OSU} on foliations. In particular, see \cite{RV,WX} for $f$-harmonic maps on weighted Riemannian manifolds. Let $\mu_0$ be the infimum of the spectrum of the weighted basic Laplacian $\Delta_{B,f}$ acting on $f$-weighted $L^2$-basic functions on $M$.
\begin{thm} (cf. Theorem 4.2) Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a complete manifold, all of whose leaves are compact, such that the mean curvature form is bounded. Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with non-positive transversal sectional curvature. Let $f\in\mathcal K$ and $\phi:M\to M'$ be a transversally $f$-harmonic map
with $E_{B,f}(\phi)<\infty$. Then
(1) if ${\rm Ric}_f^Q \geq 0$ at all points, then $\phi$ is transversally totally geodesic;
(2) if ${\rm Ric}_f^Q\geq 0$ at all points and $\int_M e^{-f}\mu_M= \infty$, then $\phi$ is transversally constant;
(3) if ${\rm Ric}_f^Q \geq -\mu_0$ at all points and ${\rm Ric}^Q_f>-\mu_0$ at some point, then $\phi$ is transversally constant.
\end{thm}
\begin{rem} Theorem 1.3 holds for $(\mathcal F,\mathcal F')_f$-harmonic maps under the same conditions as in Theorem 1.3 (cf. Theorem 4.4).
\end{rem}
\section{Preliminaries}
Let $(M,g,\mathcal F)$ be a foliated Riemannian
manifold of dimension $n$ with a foliation $\mathcal F$ of codimension $q (=n-p)$ and a bundle-like metric $g$ with respect to $\mathcal F$ \cite{Molino,Tond}. Let $Q=TM/T\mathcal F$ be the normal bundle of $\mathcal F$, where $T\mathcal F$ is the tangent bundle of $\mathcal F$. Let $g_Q$ be the induced metric by $g$ on $Q$, that is, $g_Q = \sigma^*(g|_{T\mathcal F^\perp})$, where $\sigma:Q\to T\mathcal F^\perp$ is the canonical bundle isomorphism. So we consider $Q\cong T\mathcal F^\perp$. Then $g_Q$ is the holonomy invariant metric on $Q$, meaning that $L_Xg_Q=0$ for $X\in T\mathcal F$, where
$L_X$ is the transverse Lie derivative with respect to $X$. Let $\nabla^Q$ be the transverse Levi-Civita
connection on the normal bundle $Q$ \cite{Tond,Tond1} and $R^Q$ be the transversal curvature tensor of $\nabla^Q\equiv\nabla$, which is defined by $R^Q(X,Y)=[\nabla_X,\nabla_Y]-\nabla_{[X,Y]}$ for any $X,Y\in\Gamma TM$. Let $K^Q$ and ${\rm Ric}^Q $ be the transversal
sectional curvature and transversal Ricci operator with respect to $\nabla$, respectively. Let $\kappa$ be the mean curvature form of $\mathcal F$, which is defined by
\begin{align*}
\kappa(X) = g_Q (\sum_{i=1}^p\pi(\nabla^M_{f_j}f_j), \pi(X))
\end{align*}
for any tangent vector $X\in\Gamma TM$, where $\pi:TM\to Q$ is the natural projection and $\{f_j\}$ is a local orthonormal basis of $T\mathcal F$. The foliation $\mathcal F$ is said to be {\it
minimal} if $\kappa=0$ \cite{Tond}.
Let $\Omega_B^r(\mathcal F)$ be the space of all {\it basic
$r$-forms}, i.e., $\omega\in\Omega_B^r(\mathcal F)$ if and only if
$i(X)\omega=0$ and $L_X\omega=0$ for any $X\in\Gamma T\mathcal F$, where $i(X)$ is the interior product. Then $\Omega^*(M)=\Omega_B^*(\mathcal F)\oplus \Omega_B^*(\mathcal F)^\perp$ \cite{Lop}. It is well known that $\kappa_B$ is closed, i.e., $d\kappa_B=0$ \cite{Lop, PJ}, where $\kappa_B$ is the basic part of $\kappa$.
Let $\bar *:\Omega_B^r(\mathcal F)\to \Omega_B^{q-r}(\mathcal F)$ be the star operator given by
\begin{align*}
\bar *\omega = (-1)^{(n-q)(q-r)} *(\omega\wedge\chi_{\mathcal F}),\quad \omega\in\Omega_B^r(\mathcal F),
\end{align*}
where $\chi_{\mathcal F}$ is the characteristic form of $\mathcal F$ and $*$ is the Hodge star operator associated to $g$. Let $\langle\cdot,\cdot\rangle$ be the pointwise inner product on $\Omega_B^r(\mathcal F)$, which is given by
\begin{align*}
\langle\omega_1,\omega_2\rangle \nu = \omega_1\wedge\bar * \omega_2,
\end{align*}
where $\nu$ is the transversal volume form such that $*\nu =\chi_{\mathcal F}$.
Let $\delta_B :\Omega_B^r (\mathcal F)\to \Omega_B^{r-1}(\mathcal F)$ be the operator defined by
\begin{align*}
\delta_B\omega = (-1)^{q(r+1)+1} \bar * (d_B-\kappa_B \wedge) \bar *\omega,
\end{align*}
where $d_B = d|_{\Omega_B^*(\mathcal F)}$. Locally, $\delta_{B}$ is expressed by
\begin{equation}\label{2-2}
\delta_{B} = -\sum_a i(E_a) \nabla_{E_a} + i (\kappa_{B}^\sharp),
\end{equation}
where $(\cdot)^\sharp$ is the dual vector field of $(\cdot)$ \cite{JR} and $\{E_a\}_{a=1,\cdots,q}$ is a local orthonormal basic frame on $Q$.
It is well known \cite{PR} that $\delta_B$ is the formal adjoint of $d_B$ with respect to the global inner product $\ll\cdot,\cdot\gg$, which is defined by
\begin{align}\label{2-1}
\ll \omega_1,\omega_2\gg =\int_M \langle\omega_1,\omega_2\rangle\mu_M
\end{align}
for any compactly supported basic forms $\omega_1$ or $\omega_2$,
where $\mu_M =\nu\wedge\chi_{\mathcal F}$ is the volume form. There exists a bundle-like metric such that the mean curvature form satisfies $\delta_B\kappa_B=0$ on compact manifolds \cite{DO,MMR,MA}.
The basic
Laplacian $\Delta_B$ acting on $\Omega_B^*(\mathcal F)$ is given by
\begin{equation*}
\Delta_B=d_B\delta_B+\delta_B d_B.
\end{equation*}
Let $Y$ be a foliated vector field, i.e., $[Y,Z]\in \Gamma T\mathcal F$ for
all $Z\in \Gamma T\mathcal F$ \cite{Kamber2} and put $\bar Y = \pi (Y)$.
Now we define the bundle map $A_Y:\Gamma Q\to \Gamma Q$ for any $Y\in\Gamma TM$ by
\begin{align}\label{A-operator}
A_Y s =L_Ys-\nabla_Ys,
\end{align}
where $L_Y s = \pi [Y,Y_s]$ for $\pi(Y_s)=s$. It is well-known \cite{Kamber2} that for any foliated vector field $Y$
\begin{align*}
A_Y s = -\nabla_{Y_s}\bar Y,
\end{align*}
where $Y_s$ is the vector field such that $\pi(Y_s)=s$. So $A_Y$ depends only on $\bar Y=\pi(Y)$ and is a linear operator. Moreover, $A_Y$ extends in an obvious way to tensors of any type on $Q$ \cite{Kamber2}.
Then we
have the generalized Weitzenb\"ock formula on $\Omega_B^*(\mathcal F)$ \cite{JU2}: for any $\omega\in\Omega_B^r(\mathcal F),$
\begin{align}\label{2-3}
\Delta_B \omega = \nabla_{\rm tr}^*\nabla_{\rm tr}\omega +
F(\omega)+A_{\kappa_B^\sharp}\omega,
\end{align}
where $F(\omega)=\sum_{a,b}\theta^a \wedge i(E_b)R^Q(E_b,
E_a)\omega$ and
\begin{align}\label{2-4}
\nabla_{\rm tr}^*\nabla_{\rm tr}\omega =-\sum_a \nabla^2_{E_a,E_a}\omega
+\nabla_{\kappa_B^\sharp}\omega.
\end{align}
The operator $\nabla_{\rm tr}^*\nabla_{\rm tr}$
is positive definite and formally self adjoint on the space of
basic forms \cite{JU2}.
If $\omega$ is a basic 1-form, then $F(\omega)^\sharp
={\rm Ric}^Q(\omega^\sharp)$.
Now, let $(M,g,\mathcal F, e^{-f}\nu)$ be a weighted foliation, that is, a Riemannian foliation endowed with a weighted transversal volume form $e^{-f}\nu$, where $f$ is a basic function. The formal adjoint operator $\delta_{B,f}$ of $d_B$ with respect to the weighted volume form $e^{-f}\mu_M$ is given by
\begin{align}\label{2-6}
\delta_{B,f}\omega =e^f \delta_B (e^{-f}\omega) =\delta_B\omega + i(\nabla_{\rm tr} f)\omega
\end{align}
for any basic form $\omega$, where $\nabla_{\rm tr} f = \sum_a E_a(f)E_a$. That is, for any basic forms $\omega\in\Omega_B^r(\mathcal F)$ and $\eta\in\Omega_B^{r+1}(\mathcal F)$,
\begin{align}
\int_M \langle d_B\omega,\eta\rangle e^{-f}\mu_M =\int_M \langle \omega,\delta_{B,f}\eta\rangle e^{-f}\mu_M.
\end{align}
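For the reader's convenience, the second equality in (\ref{2-6}) can be checked directly from the local formula (\ref{2-2}): since $\nabla_{E_a}(e^{-f}\omega)=e^{-f}(\nabla_{E_a}\omega-E_a(f)\,\omega)$, we get
\begin{align*}
e^f\delta_B(e^{-f}\omega)=-\sum_a i(E_a)\nabla_{E_a}\omega+\sum_a E_a(f)\,i(E_a)\omega+i(\kappa_B^\sharp)\omega=\delta_B\omega+i(\nabla_{\rm tr}f)\omega.
\end{align*}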
The {\it weighted basic Laplacian} operator $\Delta_{B,f}$ is defined by
\begin{align}\label{2-7}
\Delta_{B,f} = d_B\delta_{B,f} + \delta_{B,f}d_B.
\end{align}
From (\ref{2-6}) and Cartan's formula $L_X = d_B\, i(X) + i(X)\, d_B$, we have
\begin{align}\label{2-8}
\Delta_{B,f} = \Delta_B + L_{\nabla_{\rm tr} f}.
\end{align}
In particular, $\Delta_{B,f} =\Delta_B + i(\nabla_{\rm tr} f)d_B$ on $\Omega_B^0(\mathcal F)$. Then we have the following.
\begin{lem} Let $(M,g,\mathcal F)$ be a closed, connected Riemannian manifold with a foliation $\mathcal F$. If a basic function $h$ satisfies $(\Delta_{B,f} -\kappa_B^\sharp)h\geq 0$ (or $\leq 0$), then $h$ is constant.
\end{lem}
\begin{proof} The proof is similar to that of \cite[Lemma 2.1]{JLR}. Let $h$ be a basic function on $M$.
Since $\Delta_B-\kappa_B^\sharp =\Delta -\kappa^\sharp$ on $\Omega_B^0(\mathcal F)$ \cite[Proposition 4.1]{PR}, we have that on $\Omega_B^0(\mathcal F)$
\begin{align}
\Delta_{B,f} -\kappa_B^\sharp =\Delta_B -\kappa_B^\sharp + i(\nabla_{\rm tr} f) d_B = \Delta -\kappa^\sharp + i(\nabla f)d,
\end{align}
where $\Delta$ is the Laplace operator on $M$. Since the operator on the right-hand side is a second-order elliptic operator, the claim follows from the maximum principle.
\end{proof}
And the {\it generalized Bakry-\'Emery-Ricci curvature } ${\rm Ric}_f^Q$ is defined by
\begin{align}\label{2-9}
{\rm Ric}_f^Q = {\rm Ric}^Q + {\rm Hess}_T f,
\end{align}
where ${\rm Hess}_T f =\nabla_{tr}d_B f$ is the transversal Hessian \cite{JU3}.
Note that the Bakry-\'Emery-Ricci curvature ${\rm Ric}_f $ on a Riemannian manifold is related to Ricci solitons, especially gradient Ricci solitons. On a foliated Riemannian manifold, the generalized Bakry-\'Emery-Ricci curvature ${\rm Ric}_f^Q$ is related to transversal Ricci solitons, which are special solutions of the transversal Ricci flow \cite{LIN}.
For later use, we recall the transversal divergence theorem
on a foliated Riemannian
manifold.
\begin{thm} \label{thm1-1} \cite{Yorozu}
Let $(M,g,\mathcal F)$ be a closed, oriented Riemannian manifold
with a transversally oriented foliation $\mathcal F$ and a
bundle-like metric $g$ with respect to $\mathcal F$. Then for a vector field $X\in \Gamma TM$,
\begin{equation*}
\int_M \operatorname{div_\nabla}(\bar X) \mu_{M}
= \int_M g_Q(\bar X,\kappa^\sharp)\mu_{M},
\end{equation*}
where $\operatorname{div_\nabla} (\bar{X})$
denotes the transversal divergence of $\bar {X}$ with respect to the
connection $\nabla$.
\end{thm}
\section{Harmonic maps}
\subsection {General facts}
Let $(M, g,\mathcal F)$ and $(M', g',\mathcal F')$ be two foliated Riemannian manifolds and let $\phi:(M,g,\mathcal F)\to (M', g',\mathcal F')$ be a smooth foliated map,
i.e., $d\phi(T\mathcal F)\subset T\mathcal F'$. We define $d_T\phi:Q \to Q'$ by
\begin{align}
d_T\phi := \pi' \circ d \phi \circ \sigma.
\end{align}
Then $d_T\phi$ is a section in $ Q^*\otimes
\phi^{-1}Q'$, where $\phi^{-1}Q'$ is the pull-back bundle on $M$. Let $\nabla^\phi$
and $\tilde \nabla$ be the connections on $\phi^{-1}Q'$ and
$Q^*\otimes \phi^{-1}Q'$, respectively.
The map $\phi:(M, g,\mathcal F)\to (M', g',\mathcal F')$ is called {\it transversally totally geodesic} if it satisfies
\begin{align}
\tilde\nabla_{\rm tr}d_T\phi=0,
\end{align}
where $(\tilde\nabla_{\rm tr}d_T\phi)(X,Y):=(\tilde\nabla_X d_T\phi)(Y)$ for any $X,Y\in \Gamma Q$. Note that if $\phi:(M,g,\mathcal F)\to (M',g',\mathcal F')$ is transversally totally geodesic with $d\phi(Q)\subset Q'$, then, for any transversal geodesic $\gamma$ on $M$, $\phi\circ\gamma$ is also a transversal geodesic.
From now on, we use $\nabla$ instead of all induced connections if we have no confusion.
The {\it transversal tension field} $\tau_{b}(\phi)$ of $\phi$ is defined by
\begin{align}\label{eq3-3}
\tau_{b}(\phi):={\rm tr}_{Q}(\nabla_{\rm tr} d_T\phi)=\sum_a (\nabla_{E_a}d_T\phi)(E_a).
\end{align}
Let $\Omega$ be a compact domain of $M$. The {\it transversal energy} of $\phi$ on $\Omega\subset
M$ is defined by
\begin{align}\label{eq2-4}
E_{B}(\phi;\Omega)={1\over 2}\int_{\Omega} | d_T \phi|^2\mu_{M}.
\end{align}
The map $\phi$ is said to be {\it $(\mathcal F,\mathcal F')$-harmonic} \cite{DT} if $\phi$ is a critical point of the transversal energy functional $E_{B}$.
Let $V\in\Gamma(\phi^{-1}Q')$ and let $\phi_t$ be a foliated variation with $\phi_0=\phi$ and ${d\phi_t\over dt}|_{t=0}=V$. Then we have the first variational formula \cite{JJ1}. That is,
\begin{align}\label{3-5}
{d\over dt}E_{B}(\phi_t;\Omega)|_{t=0}=-\int_{\Omega} \langle V,\tau_{b}(\phi)-d_T\phi(\kappa_B^\sharp)\rangle \mu_{M},
\end{align}
where $\langle\cdot,\cdot\rangle$ is the pull-back metric on $\phi^{-1}Q'$.
Trivially, $\phi$ is an $(\mathcal F,\mathcal F')$-harmonic map if and only if ${\tau}_{b}(\phi)-d_T\phi(\kappa_B^\sharp)=0$. Note, however, that being $(\mathcal F,\mathcal F')$-harmonic is not equivalent to being transversally harmonic.
Let $\Omega_B^r(E)$ be the space of $E$-valued basic $r$-forms on $M$, where $E=\phi^{-1}Q'$. We define $d_\nabla : \Omega_B^r(E)\to \Omega_B^{r+1}(E)$ by
\begin{align}
d_\nabla(\omega\otimes s)=d_B\omega\otimes s+(-1)^r\omega\wedge\nabla s
\end{align}
for any $s\in \Gamma E$ and $\omega\in\Omega_B^r(\mathcal F)$.
Let $\delta_\nabla$ be a formal adjoint of $d_\nabla$ with respect to the inner product induced from (\ref{2-1}).
Trivially, we have the following remark.
\begin{rem} Let $\phi:(M,\mathcal F)\to (M',\mathcal F')$ be a smooth foliated map. Then
\begin{align*}
d_\nabla (d_T\phi)=0,\quad\delta_\nabla d_T\phi=-\tau_b (\phi) +d_T\phi(\kappa_B^\sharp).
\end{align*}
\end{rem}
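The second identity in Remark 3.1 can be verified from the local expression (\ref{2-2}), assuming, as in \cite{JJ1}, that it extends to $\delta_\nabla$ on $\Omega_B^*(E)$: since $i(E_a)\nabla_{E_a}d_T\phi=(\nabla_{E_a}d_T\phi)(E_a)$,
\begin{align*}
\delta_\nabla d_T\phi=-\sum_a i(E_a)\nabla_{E_a}d_T\phi+i(\kappa_B^\sharp)d_T\phi=-\tau_b(\phi)+d_T\phi(\kappa_B^\sharp).
\end{align*}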
Now we define the Laplacian $\Delta$ on $\Omega_B^*(E)$ by
\begin{align}\label{ee8}
\Delta =d_\nabla \delta_\nabla +\delta_\nabla d_\nabla.
\end{align}
Then the generalized Weitzenb\"ock type formula (\ref{2-3}) is extended to $\Omega_B^*(E)$ as follows \cite{JJ1}: for any $\Psi\in\Omega_B^r(E)$,
\begin{align}\label{eq4-6}
\Delta \Psi = \nabla_{\rm tr}^*\nabla_{\rm tr} \Psi
+ A_{\kappa_{B}^\sharp} \Psi + F(\Psi),
\end{align}
where $ \nabla_{\rm tr}^*\nabla_{\rm tr}$, $A_X$ and $F(\Psi)$ are naturally extended to $\Omega_B^r(E)$.
Moreover, for any $ \Psi\in\Omega_B^r(E)$,
\begin{align}\label{weitzenbock}
\frac12\Delta_B|\Psi |^{2}
=\langle\Delta \Psi, \Psi\rangle -|\nabla_{\rm tr} \Psi|^2-\langle A_{\kappa_{B}^\sharp}\Psi, \Psi\rangle -\langle F(\Psi),\Psi\rangle.
\end{align}
Then we have the generalized Weitzenb\"ock type formula as follows.
\begin{prop}\label{th2} \cite{JJ1}
Let $\phi:(M, g,\mathcal F) \to (M', g', \mathcal F')$ be a smooth foliated map. Then
\begin{align}\label{3-11}
\frac12\Delta_B| d_T \phi |^{2}
= - |\nabla_{\rm tr} d_T \phi|^2 -\langle F(d_T\phi),d_T\phi\rangle-\langle d_\nabla \tau_b(\phi),d_T\phi\rangle +\langle\nabla_{\kappa_B^\sharp}d_T\phi,d_T\phi\rangle,
\end{align}
where
\begin{align}\label{3-12}
\langle F(d_T\phi),d_T\phi\rangle&=\sum_a g_{Q'}(d_T \phi({\rm Ric^{Q}}(E_a)),d_T \phi(E_a)) \notag\\
&-\sum_{a,b}g_{Q'}( R^{Q'}(d_T \phi(E_b), d_T \phi(E_a))d_T \phi(E_a), d_T \phi(E_b)).
\end{align}
\end{prop}
\begin{proof} Taking $\Psi=d_T\phi$ in (\ref{weitzenbock}), the assertion follows from Remark 3.1.
\end{proof}
\begin{thm} Let $\phi:(M,g,\mathcal F,e^{-f}\nu) \to (M',g',\mathcal F')$ be a smooth foliated map. Then
\begin{align*}
\frac12\Delta_{B,f}|d_T\phi|^2 =-|\nabla_{\rm tr}d_T\phi|^2 - \langle F_f(d_T\phi),d_T\phi\rangle -\langle d_\nabla(\bar\tau_{b,f}(\phi)),d_T\phi\rangle+ \frac12 \kappa_B^\sharp (|d_T\phi|^2),
\end{align*}
where $\bar\tau_{b,f}(\phi) =\tau_b(\phi) -d_T\phi(\nabla_{\rm tr}f)$ and
\begin{align}\label{3-13}
\langle F_f(d_T\phi),d_T\phi\rangle&=\sum_a g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a)) \notag\\
&-\sum_{a,b}g_{Q'}( R^{Q'}(d_T \phi(E_b), d_T \phi(E_a))d_T \phi(E_a), d_T \phi(E_b)).
\end{align}
\end{thm}
\begin{proof}
From (\ref{2-9}), we know
\begin{align*}
\langle F(d_T\phi),d_T\phi\rangle = \langle F_f (d_T\phi),d_T\phi\rangle - \langle d_T\phi({\rm Hess}_Tf),d_T \phi\rangle,
\end{align*}
where $\langle d_T\phi({\rm Hess}_Tf),d_T \phi\rangle := \sum_a g_{Q'} (d_T\phi(\nabla_{E_a} \nabla_{\rm tr} f),d_T\phi(E_a))$. Hence from (\ref{2-8}) and (\ref{3-11}),
\begin{align}\label{3-14}
\frac12\Delta_{B,f}|d_T\phi|^2=&-|\nabla_{\rm tr}d_T\phi|^2 - \langle F_f(d_T\phi),d_T\phi\rangle+\langle\nabla_{\kappa_B^\sharp} d_T\phi,d_T\phi\rangle -\langle d_\nabla(\tau_b(\phi)),d_T\phi\rangle \notag\\
&+\langle d_T\phi({\rm Hess}_Tf),d_T\phi\rangle+\frac12 \langle \nabla_{\rm tr}|d_T\phi|^2,\nabla_{\rm tr} f\rangle.
\end{align}
Note that $(\nabla_{\rm tr} d_T\phi)(X,Y) =(\nabla_{\rm tr} d_T\phi)(Y, X)$ for any vector fields $X,Y \in \Gamma Q$ . Hence if we choose a local orthonormal basic frame $\{E_a\}$ such that $\nabla E_a=0$ at $x\in M$, then
\begin{align*}
\frac12 \langle \nabla_{\rm tr}|d_T\phi|^2,\nabla_{\rm tr} f\rangle &=\sum_a \langle (\nabla_{\nabla_{tr} f} d_T\phi)(E_a),d_T\phi(E_a)\rangle\\
&=\sum_a\langle (\nabla_{E_a} d_T\phi)(\nabla_{\rm tr}f),d_T\phi(E_a)\rangle\\
&=\sum_a\langle \nabla_{E_a} d_T\phi(\nabla_{\rm tr}f),d_T\phi(E_a)\rangle -\sum_a\langle d_T\phi(\nabla_{E_a}\nabla_{\rm tr}f),d_T\phi(E_a)\rangle\\
&=\langle d_\nabla(d_T\phi(\nabla_{\rm tr}f)),d_T\phi\rangle -\langle d_T\phi({\rm Hess}_Tf),d_T\phi\rangle.
\end{align*}
That is,
\begin{align}\label{3-15}
\langle d_T\phi({\rm Hess}_Tf),d_T\phi\rangle+\frac12 \langle \nabla_{\rm tr}|d_T\phi|^2,\nabla_{\rm tr} f\rangle=\langle d_\nabla(d_T\phi(\nabla_{\rm tr}f)),d_T\phi\rangle.
\end{align}
From (\ref{3-14}) and (\ref{3-15}), the proof follows.
\end{proof}
\subsection{$f$-harmonic maps}
Let $f$ be a basic function on $M$ and let $\phi:(M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be a smooth foliated map. Let $\tau_{b,f}:={\rm tr}_Q (\nabla_{\rm tr}( e^{-f} d_T\phi))$ be the transversal $f$-tension field. Then
\begin{align}
\tau_{b,f}(\phi) =( \tau_b(\phi) - d_T\phi(\nabla_{\rm tr}f))e^{-f}=\bar\tau_{b,f}(\phi)e^{-f}.
\end{align}
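Indeed, since $\nabla_{E_a}(e^{-f}d_T\phi)=e^{-f}(\nabla_{E_a}d_T\phi-E_a(f)\,d_T\phi)$, we have
\begin{align*}
\tau_{b,f}(\phi)=\sum_a(\nabla_{E_a}(e^{-f}d_T\phi))(E_a)=e^{-f}\Big(\tau_b(\phi)-\sum_a E_a(f)\,d_T\phi(E_a)\Big)=e^{-f}\big(\tau_b(\phi)-d_T\phi(\nabla_{\rm tr}f)\big).
\end{align*}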
A {\it transversally $f$-harmonic map} is a solution of the Euler--Lagrange equation $\tau_{b,f}(\phi)=0$ (equivalently, $\bar\tau_{b,f}(\phi)=0$).
The map $\phi$ is said to be an {\it $(\mathcal F,\mathcal F')_f$-harmonic map} if $\phi$ is a critical point of the {\it transversal $f$-energy functional} $E_{B,f}(\phi)$ given by
\begin{align}\label{3-16}
E_{B,f}(\phi) = \frac12\int_\Omega |d_T\phi|^2 e^{-f}\mu_M.
\end{align}
Remark that if $f$ is constant, then transversally $f$-harmonic maps and $(\mathcal F,\mathcal F')_f$-harmonic maps are just transversally harmonic maps and $(\mathcal F,\mathcal F')$-harmonic maps, respectively.
\begin{thm} $(${\rm The first variational formula}$)$ \label{th4}
Let $\phi:(M, g, \mathcal F,e^{-f}\nu)\to (M', g', \mathcal F')$
be a smooth foliated map and $\{\phi_t\}$ be a smooth foliated variation of $\phi$ supported in a compact domain $\Omega$. Then
\begin{align*}
{d\over dt}E_{B,f}(\phi_t;\Omega)|_{t=0}=-\int_{\Omega} \langle V, {\bar\tau}_{b,f}(\phi)-d_T\phi(\kappa_B^\sharp)\rangle e^{-f} \mu_{M},
\end{align*}
where $V$ is the variation vector field of $\phi_t$.
\end{thm}
\begin{proof} It is trivial from \cite[Theorem 3.7]{JU3}.
\end{proof}
From Theorem 3.4, the map $\phi:M\to M'$ is an $(\mathcal F,\mathcal F')_f$-harmonic map if and only if
\begin{align}\label{3-17}
\tilde\tau_{b,f}(\phi) := \bar\tau_{b,f}(\phi) -d_T\phi(\kappa_B^\sharp) =0.
\end{align}
In general, $(\mathcal F,\mathcal F')_f$-harmonic maps and transversally $f$-harmonic maps are not equivalent unless $\mathcal F$ is minimal. For more information about transversally $f$-harmonic maps, see \cite{CW}.
\begin{lem}
Let $(M,\mathcal F,g,e^{-f}\nu)$ be a weighted foliation and $(M',g',\mathcal F')$ be a Riemannian foliation.
(1) If $\phi:M\to M'$ is a transversally $f$-harmonic map, then
\begin{align}\label{3-18}
|d_T\phi|\Delta_{B,f}|d_T\phi|
=|d_{B}|d_T\phi||^{2}-|\nabla_{\rm tr}d_T\phi|^{2}-\langle F_f(d_T\phi),d_T\phi\rangle+|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|).
\end{align}
(2) If $\phi:M\to M'$ is an $(\mathcal F,\mathcal F')_f$-harmonic map, then
\begin{align}\label{3-19}
|d_T\phi|\Delta_{B,f}|d_T\phi|
=&|d_{B}|d_T\phi||^{2}-|\nabla_{\rm tr}d_T\phi|^{2}-\langle F_f(d_T\phi),d_T\phi\rangle-\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle\notag\\
&+|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|).
\end{align}
\end{lem}
\begin{proof}
By a simple calculation, we have
\begin{align*}
\frac12\Delta_{B,f}| d_T \phi |^{2}
=|d_T\phi|\Delta_{B,f}|d_T\phi|-|d_{B}|d_T\phi||^{2}.
\end{align*}
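Indeed, with $u=|d_T\phi|$, the identity $\delta_{B,f}(u\,\omega)=u\,\delta_{B,f}\omega-i(\nabla_{\rm tr}u)\,\omega$ (which follows from (\ref{2-2}) and (\ref{2-6})) gives
\begin{align*}
\Delta_{B,f}u^{2}=\delta_{B,f}(2u\,d_{B}u)=2u\,\Delta_{B,f}u-2|d_{B}u|^{2}.
\end{align*}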
Hence the proofs follow from Theorem 3.3 and (\ref{3-17}).
\end{proof}
Then we have the following.
\begin{thm} Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a closed manifold $M$ with ${\rm Ric}_{f}^Q\geq 0$ and $(M',g',\mathcal F')$ be a Riemannian foliation with $K^{Q'}\leq 0$. Then any transversally $f$-harmonic map $\phi:M\to M'$ is always transversally totally geodesic.
In addition,
(1) if ${\rm Ric}_f^Q>0$ at some point, then $\phi$ is transversally constant.
(2) if $K^{Q'}<0$, then $\phi$ is transversally constant or $\phi(M)$ is a transversally geodesic closed curve.
\end{thm}
\begin{proof}
By the first Kato inequality \cite{BE1}, we have
\begin{align}\label{3-20}
|\nabla_{\rm tr}d_T\phi|\geq|d_{B}|d_T\phi||.
\end{align}
Since $\phi:M\to M'$ is a transversally $f$-harmonic map, $\tau_{b,f}(\phi)=0$. From (\ref{3-18}) and (\ref{3-20}), we have
\begin{align}\label{3-21}
|d_T\phi| (\Delta_{B,f}-\kappa_B^\sharp)|d_T\phi|\leq -\langle F_f(d_T\phi),d_T\phi\rangle.
\end{align}
From the assumptions ${\rm Ric}_f^Q\geq 0$ and $K^{Q'}\leq 0$, $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$ and so
\begin{align*}
(\Delta_{B,f} -\kappa_B^\sharp)|d_T\phi|\leq 0.
\end{align*}
By Lemma 2.1, $|d_T\phi|$ is constant. Again, from (\ref{3-18}), we have
\begin{align}\label{3-22}
|\nabla_{\rm tr} d_T \phi|^2+\langle F_f(d_T\phi),d_T\phi\rangle=0.
\end{align}
Since $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$,
from (\ref{3-22}), we have
\begin{align}\label{3-23}
|\nabla_{\rm tr} d_T \phi|^2=0 \quad\textrm{and}\quad \langle F_f(d_T\phi),d_T\phi\rangle=0.
\end{align}
Thus, $\nabla_{\rm tr}d_T\phi=0$, i.e., $\phi$ is transversally totally geodesic.
Furthermore, from (\ref{3-13}) and (\ref{3-23}), we get
\begin{align}\label{3-24}
\left\{
\begin{array}{ll}
g_{Q'}(d_T\phi({\rm Ric}_f^{Q}(E_a)),d_T\phi(E_a))= 0,\\\\
g_{Q'}(R^{Q'}(d_T\phi(E_a),d_T\phi(E_b))d_T\phi(E_a),d_T\phi(E_b))= 0
\end{array}
\right.
\end{align}
for any indices $a$ and $b$.
If ${\rm Ric}_f^{Q}$ is positive at some point, then $d_T\phi=0$, i.e., $\phi$ is transversally constant, which proves (1). For the statement (2), if the rank of $d_T\phi$ is at least $2$, then there exists a point $x\in M$ at which the image of $d_T\phi$ contains two linearly independent vectors, say $d_T\phi(E_1)$ and $d_T\phi(E_2)$.
Since $K^{Q'}<0$,
\begin{align*}
g_{Q'}(R^{Q'}(d_T\phi(E_1),d_T\phi(E_2))d_T\phi(E_2),d_T\phi(E_1))<0,
\end{align*}
which contradicts (\ref{3-24}). Hence the rank of $d_T\phi <2$, that is, the rank of $d_T\phi$ is zero or one everywhere. If the rank of $d_T\phi$ is zero, then $\phi$ is transversally constant. If the rank of $d_T\phi$ is one, then $\phi(M)$ is a transversally geodesic closed curve.
\end{proof}
\begin{rem}
Note that if $\phi$ is $(\mathcal F,\mathcal F')_f$-harmonic, then
\begin{align*}
\delta_\nabla (e^{-f}d_T\phi) =-\tilde\tau_{b,f}(\phi)=0.
\end{align*}
\end{rem}
Now, we restrict the basic function $f$ in order to study $(\mathcal F,\mathcal F')_f$-harmonic maps. Namely, let $\mathcal K$ be the set of basic functions $f$ such that $i(\kappa_B^\sharp)d_B f\leq 0$, that is,
\begin{align*}
\mathcal K =\{f\in\Omega_B^0(\mathcal F)| i(\kappa_B^\sharp)d_Bf \leq 0\}.
\end{align*}
Trivially, any basic function $f_\kappa$ satisfying $d_Bf_\kappa =-\kappa_B$ (if such a function exists) belongs to $\mathcal K$. And if $\mathcal F$ is taut, then $\mathcal K = \Omega_B^0(\mathcal F)$. Then we have the following.
\begin{thm} Let $(M,\mathcal F,g,e^{-f}\nu)$ and $(M',g',\mathcal F')$ be as in Theorem 3.6. If $f\in\mathcal K$, then any $(\mathcal F,\mathcal F')_{f}$-harmonic map $\phi:M\to M'$ is transversally totally geodesic.
In addition,
(1) if ${\rm Ric}_f^Q>0$ at some point, then $\phi$ is transversally constant.
(2) if $K^{Q'}<0$, then $\phi$ is transversally constant or $\phi(M)$ is a transversally geodesic closed curve.
\end{thm}
\begin{proof}
From (\ref{3-19}) and (\ref{3-20}), we get
\begin{align}\label{3-25}
|d_T\phi| \Delta_{B,f}|d_T\phi| \leq -\langle F_f(d_T\phi),d_T\phi\rangle -\langle d_\nabla i(\kappa_B^\sharp)d_T\phi,d_T\phi\rangle+|d_T\phi| \kappa_B^\sharp(|d_T\phi|).
\end{align}
By the curvature assumptions, $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$. So we get from (\ref{3-25}),
\begin{align}\label{3-26}
|d_T\phi|\Delta_{B,f}|d_T\phi|
\leq-\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle+|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|).
\end{align}
Integrating (\ref{3-26}) with the weighted measure, we have
\begin{align}\label{3-27}
\int_{M}\langle&|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}
\leq-\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle e^{-f}\mu_{M}+\int_{M}|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|)e^{-f}\mu_{M}.
\end{align}
From Remark 3.7, we get
\begin{align}\label{3-28}
\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle e^{-f}\mu_{M}
=\int_{M}\langle i(\kappa_{B}^\sharp)d_T\phi,\delta_\nabla (e^{-f}d_T \phi)\rangle \mu_{M}
=0.
\end{align}
Now, we choose a bundle-like metric $g$ such that $\delta_{B}\kappa_{B}=0$. Then $\delta_{B,f} \kappa_B =i(\nabla_{\rm tr} f)\kappa_B$ from (\ref{2-6}). So
\begin{align*}
\int_{M}|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|)e^{-f}\mu_{M}
&=\frac12\int_M \langle |d_T\phi|^2, \delta_{B,f} \kappa_B\rangle e^{-f}\mu_M\\
&=\frac12\int_M \langle d_B f,\kappa_B\rangle |d_T\phi|^2 e^{-f}\mu_M.
\end{align*}
Since $f\in\mathcal K$, that is, $\langle d_Bf,\kappa_B\rangle\leq 0$, we get
\begin{align}\label{3-29}
\int_{M}|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|)e^{-f}\mu_{M} \leq 0.
\end{align}
Note that $ \int_{M}\langle|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}=\int_M |d_B|d_T\phi||^2 e^{-f}\mu_M\geq 0$. So from (\ref{3-27})$\sim$(\ref{3-29}), we get
\begin{align*}
\int_{M}\langle |d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}=0,
\end{align*}
which yields $ d_{B}|d_T\phi|=0$. That is, $|d_T\phi|$ is constant.
From (\ref{3-19}), we have
\begin{align}\label{3-30}
0=&-|\nabla_{\rm tr}d_T\phi|^{2}-\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle -\langle F_f(d_T\phi),d_T\phi\rangle.
\end{align}
From (\ref{3-28}) and (\ref{3-30}), by integrating we get
\begin{align}\label{3-31}
\int_{M}|\nabla_{\rm tr}d_T\phi|^{2} e^{-f}\mu_{M}
+\int_{M}\langle F_f(d_T\phi),d_T\phi\rangle e^{-f}\mu_{M}=0.
\end{align}
Since $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$,
from (\ref{3-31}), we have
\begin{align}\label{3-32}
|\nabla_{\rm tr} d_T \phi|^2=0 \quad\textrm{and}\quad \langle F_f(d_T\phi),d_T\phi\rangle=0.
\end{align}
Since (\ref{3-32}) coincides with (\ref{3-23}) in Theorem 3.6, the remainder of the proof is the same as there and is omitted.
\end{proof}
\begin{cor} Let $(M,g,\mathcal F,e^{-f_\kappa}\nu)$ be a weighted foliation on a closed manifold $M$ with ${\rm Ric}_{f_\kappa}^Q\geq 0$ and $(M',g',\mathcal F')$ be a Riemannian foliation with $K^{Q'}\leq 0$. If $\mathcal F$ is non-minimal or ${\rm Ric}_{f_\kappa}^Q >0$ at some point, then any $(\mathcal F,\mathcal F')_{f_\kappa}$-harmonic map is transversally constant.
\end{cor}
\begin{proof} Note that $f_\kappa$ is a solution of $d_Bf=-\kappa_B$. So $f_\kappa\in\mathcal K$ and
\begin{align*}
\int_{M}|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|)e^{-f_\kappa}\mu_{M}
&=\frac12\int_M \langle d_B f_\kappa,\kappa_B\rangle |d_T\phi|^2 e^{-f_\kappa}\mu_M\\
& =-\frac12\int_M |\kappa_B|^2 |d_T\phi|^2 e^{-f_\kappa}\mu_M.
\end{align*}
Hence from (\ref{3-27}) and (\ref{3-28}), we get
\begin{align*}
0\leq \int_M \langle |d_T\phi|,\Delta_{B,f_\kappa} |d_T\phi|\rangle e^{-f_\kappa}\mu_M \leq -\frac12\int_M |\kappa_B|^2 |d_T\phi|^2 e^{-f_\kappa}\mu_M\leq 0.
\end{align*}
That is,
\begin{align*}
|\kappa_B| |d_T\phi| =0.
\end{align*}
Hence if $\kappa_B\ne 0$, then $d_T\phi=0$, that is, $\phi$ is transversally constant. In the case ${\rm Ric}_{f_\kappa}^Q >0$ at some point, the assertion follows from Theorem 3.8.
\end{proof}
\begin{rem}
Since $f_\kappa$ satisfies $d_B f_\kappa =-\kappa_B$, we have from (\ref{3-17})
\begin{align*}
\tilde\tau_{b,f_\kappa}(\phi)= \tau_b(\phi).
\end{align*}
Hence a map is transversally harmonic if and only if it is $(\mathcal F,\mathcal F')_{f_\kappa}$-harmonic. So Corollary 3.9 holds for transversally harmonic maps.
\end{rem}
\section{Liouville type theorems}
In this section, we investigate the Liouville type theorems for transversally $f$-harmonic map and $(\mathcal F,\mathcal F')_{f}$-harmonic map on weighted foliations.
Let $B_{l}=\{y\in M\mid\rho(y)\leq l\}$, where $\rho(y)$ is the distance between the leaves through a fixed point $x_{0}$ and through $y$.
Let $\omega_{l}$ be the Lipschitz continuous basic function such that
\begin{align*}
\left\{
\begin{array}{ll}
0\leq\omega_{l}(y)\leq1 \quad {\rm for \, any} \, y\in M\\
{\rm supp}\, \omega_{l}\subset B_{2l}\\
\omega_{l}(y)=1 \quad {\rm for \, any} \, y\in B_{l}\\
\lim\limits_{l\rightarrow\infty}\omega_{l}=1\\
|d\omega_{l}|\leq\frac{C}{l} \quad\textrm {almost everywhere on $M$},
\end{array}
\right.
\end{align*}
where $C$ is a positive constant \cite{Y1}. Therefore, $\omega_{l}\eta$ has compact support for any basic form $\eta\in\Omega_{B}^{*}(\mathcal F)$.
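For instance, one may take
\begin{align*}
\omega_{l}(y)=\min\Big\{1,\ \max\Big\{0,\ 2-\frac{\rho(y)}{l}\Big\}\Big\},
\end{align*}
which equals $1$ on $B_{l}$, vanishes outside $B_{2l}$, and satisfies $|d\omega_{l}|\leq|d\rho|/l\leq C/l$ almost everywhere, assuming $\rho$ is Lipschitz (with $C=1$ when $\rho$ is $1$-Lipschitz, as for a distance function).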
\begin{lem} Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a complete Riemannian manifold all of whose leaves are compact and whose mean curvature form is bounded. Let $f\in\mathcal K$ and $\phi: (M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be a smooth foliated map with $E_{B,f}(\phi) <\infty$. Then
\begin{align*}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^\sharp(|d_T\phi|)\rangle e^{-f}\mu_{M}\leq 0.
\end{align*}
\end{lem}
\begin{proof}
Let $g$ be a bundle-like metric such that $\delta_{B}\kappa_{B}=0$. Then we get
\begin{align}\label{4-1}
\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^{\sharp}(|d_T\phi|)\rangle e^{-f}\mu_{M}
=&\frac12\int_M \langle d_B |d_T\phi|^2,\omega_l^2\kappa_B\rangle e^{-f}\mu_M\notag\\
=&\frac{1}{2}\int_{M}\delta_{B,f}(\omega_l^2 \kappa_{B})|d_T\phi|^{2}e^{-f}\mu_{M}\notag \\
=&-\int_M \Big(\langle d_B\omega_l ,\omega_l\kappa_B\rangle -\frac12 i(\nabla_{\rm tr}f)\omega_l^2\kappa_B\Big) |d_T\phi|^2 e^{-f}\mu_M.
\end{align}
Since $f\in \mathcal K$, that is, $i(\nabla_{\rm tr}f)\kappa_B=i(\kappa_B^\sharp)d_B f\leq 0$, then from (\ref{4-1}),
\begin{align}\label{4-2}
\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^{\sharp}(|d_T\phi|)\rangle e^{-f}\mu_{M}
\leq-\int_{M}\langle d_B\omega_l,\omega_l\kappa_B\rangle |d_T\phi|^2 e^{-f}\mu_M.
\end{align}
By using the Cauchy--Schwarz inequality, we get
\begin{align*}
\Big|\int_M \langle d_B\omega_l,\omega_l\kappa_B\rangle |d_T\phi|^2 e^{-f}\mu_M\Big| &\leq {C\over l}{\rm max} |\kappa_B|\int_M\omega_l |d_T\phi|^2 e^{-f}\mu_M.
\end{align*}
So $E_{B,f}(\phi)<\infty$ implies
\begin{align}\label{4-3}
\lim_{l\to\infty}\int_M\langle d_B\omega_l,\omega_l\kappa_B\rangle |d_T\phi|^2 e^{-f}\mu_M =0.
\end{align}
From (\ref{4-2}) and (\ref{4-3}), by letting $l\rightarrow\infty$,
\begin{align*}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^\sharp(|d_T\phi|)\rangle e^{-f}\mu_{M}\leq 0.
\end{align*}
\end{proof}
Let $\mu_{0}$ be the infimum of the spectrum of the weighted basic Laplacian $\Delta_{B,f}$ acting on basic functions in the weighted $L^{2}$-space on $M$. Then we have the following Liouville type theorem for transversally $f$-harmonic maps.
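In variational terms, $\mu_0$ is characterized by the Rayleigh quotient
\begin{align*}
\mu_{0}=\inf_{h\not\equiv 0}\frac{\int_{M}|d_{B}h|^{2}e^{-f}\mu_{M}}{\int_{M}h^{2}e^{-f}\mu_{M}},
\end{align*}
the infimum being taken over nonzero basic functions $h$ in the weighted $L^{2}$-space; in particular, $\int_{M}|d_{B}h|^{2}e^{-f}\mu_{M}\geq\mu_{0}\int_{M}h^{2}e^{-f}\mu_{M}$ for all such $h$, which is how $\mu_{0}$ enters the proofs below.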
\begin{thm}
Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a complete Riemannian manifold all of whose leaves are compact and whose mean curvature form is bounded. Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with $K^{Q'}\leq 0$.
Let $f\in\mathcal K$ and $\phi : M \rightarrow M'$ be a transversally $f$-harmonic map with $E_{B,f}(\phi)<\infty$. Then
(1) if ${\rm Ric}_f^Q\geq 0$, then $\phi$ is transversally totally geodesic.
(2) if ${\rm Ric}_f^Q\geq 0$ and $\int_M e^{-f}\mu_M= \infty$, then $\phi$ is transversally constant.
(3) if ${\rm Ric}_f^Q \geq -\mu_0$ at all $x$ and ${\rm Ric}_f^Q >-\mu_0$ at some point, then $\phi$ is transversally constant.
\end{thm}
\begin{proof}
(1) Since ${\rm Ric}_f^Q \geq 0$ and $K^{Q'}\leq 0$, from (\ref{3-13}), $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$. Hence from Lemma 3.5 (1) and the first Kato inequality (\ref{3-20}), we have
\begin{align}\label{4-4}
|d_T\phi|\Delta_{B,f} |d_T\phi| \leq |d_B |d_T\phi||^2 -|\nabla_{tr}d_T\phi|^2 + |d_T\phi|\kappa_B^\sharp(|d_T\phi|)\leq |d_T\phi|\kappa_B^\sharp(|d_T\phi|).
\end{align}
Multiplying (\ref{4-4}) by $\omega_{l}^{2}$ and integrating by parts, from Lemma 4.1, we get
\begin{align}\label{4-5}
\lim_{l\to\infty}\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}\leq 0.
\end{align}
On the other hand, by the Cauchy-Schwarz inequality, we have
\begin{align}\label{4-6}
\int_{M}\langle \omega_{l}^{2}&|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}\notag\\
=&\int_M\omega_l^2 |d_B|d_T\phi||^2 e^{-f}\mu_M+ 2\int_M \omega_l\langle |d_T\phi|d_B\omega_l,d_B|d_T\phi|\rangle e^{-f}\mu_M\notag\\
\geq& \int_{M}\omega_{l}^{2}|d_{B}|d_T\phi||^{2}e^{-f}\mu_{M}-2\int_{M}\omega_{l}|d_T\phi||d_{B}\omega_{l}||d_{B}|d_T\phi||e^{-f}\mu_{M}.
\end{align}
By using the inequality $A^2 +B^2 \geq 2AB$, we get
\begin{align}\label{4-7}
2\int_{M}\omega_{l}|d_{B}\omega_{l}||d_T\phi||d_{B}|d_T\phi|| e^{-f}\mu_{M}\leq{C\over l} \int_M\Big(\omega_l^2 |d_T\phi|^2 e^{-f} +|d_B|d_T\phi||^2e^{-f}\Big)\mu_M.
\end{align}
From (\ref{4-5}) $\sim$(\ref{4-7}) and Fatou's lemma, it follows that $|d_{B}|d_T\phi||\in L^{2}(e^{-f})$, that is, $\int_M |d_B|d_T\phi||^2 e^{-f}\mu_M<\infty$. So letting $l\rightarrow\infty$, we get from (\ref{4-7})
\begin{align}\label{4-8}
\lim\limits_{l\rightarrow\infty}\int_{M}\omega_{l}|d_{B}\omega_{l}||d_T\phi||d_{B}|d_T\phi|| e^{-f}\mu_{M}=0.
\end{align}
From (\ref{4-6}) and (\ref{4-8}), by letting $l\to \infty$, we have
\begin{align}\label{4-9}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle& \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}
\geq\int_{M}|d_{B}|d_T\phi||^{2} e^{-f}\mu_{M}.
\end{align}
From (\ref{4-5}) and (\ref{4-9}), we have $d_B|d_T\phi|=0$, that is, $|d_T\phi|$ is constant. From (\ref{4-4}), we have
\begin{align}\label{4-10}
|d_T\phi|\Delta_{B,f} |d_T\phi| \leq -|\nabla_{\rm tr}d_T\phi|^2 + |d_T\phi|\kappa_B^\sharp(|d_T\phi|).
\end{align}
Multiplying (\ref{4-10}) by $\omega_l^2$ and integrating by parts, from Lemma 4.1 together with (\ref{4-9}) we get
\begin{align*}
0\leq \lim\limits_{l\rightarrow\infty}\int_{M}\langle& \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M} \leq -\int_M |\nabla_{\rm tr} d_T\phi|^2 e^{-f}\mu_M,
\end{align*}
which implies that $\nabla_{\rm tr}d_T\phi=0$, that is, $\phi$ is transversally totally geodesic.
(2) From (1), we know that $|d_T\phi|$ is constant. Since $E_{B,f}(\phi)=\frac12 |d_T\phi|^2 \int_M e^{-f}\mu_M<\infty$, $\int_M e^{-f}\mu_M=\infty$ implies $d_T\phi=0$, that is, $\phi$ is transversally constant.
(3) Assume ${\rm Ric}_f^{Q}\geq-\mu_0$ at all points, ${\rm Ric}_f^{Q}>-\mu_0$ at some point $x_{0}$, and $K^{Q'}\leq0$. Then from (\ref{3-13})
\begin{align}\label{4-11}
\langle F_f(d_T\phi),d_T\phi\rangle\geq \sum_a g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a))\geq-\mu_0|d_T\phi|^{2}.
\end{align}
Since $\phi$ is a transversally $f$-harmonic map, from (\ref{3-18}) and (\ref{4-11}), we have
\begin{align}\label{4-12}
|d_T\phi|(\Delta_{B,f}-\kappa_{B}^\sharp)|d_T\phi|\leq-\sum_a g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a))\leq \mu_0|d_T\phi|^{2}.
\end{align}
Multiplying (\ref{4-12}) by $\omega_{l}^{2}$ and integrating by parts, we have
\begin{align}\label{4-13}
\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}-\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^\sharp(|d_T\phi|)\rangle e^{-f}\mu_{M}\notag \\
&\leq -\sum_a \int_{M}\omega_{l}^{2} g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a)) e^{-f}\mu_{M} \notag \\
&\leq \mu_0\int_{M}\omega_{l}^{2}|d_T\phi|^{2} e^{-f}\mu_{M}.
\end{align}
From Lemma 4.1 and (\ref{4-13}), we get
\begin{align}\label{4-14}
\lim_{l\to\infty} \int_M \langle \omega_l^2 |d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_M\leq \mu_0\int_{M}|d_T\phi|^{2} e^{-f}\mu_{M}.
\end{align}
On the other hand, by the variational characterization of $\mu_0$ via the Rayleigh quotient, we have
\begin{align}\label{4-15}
\int_{M}| d_{B}|d_T\phi||^2 e^{-f}\mu_{M}\geq \mu_0 \int_{M} |d_T\phi|^2 e^{-f}\mu_{M}.
\end{align}
From (\ref{4-9}), (\ref{4-13}), (\ref{4-14}) and (\ref{4-15}),
by $l\rightarrow\infty$, we get
\begin{align*}
\mu_{0}\int_{M}|d_T\phi|^{2}e^{-f}\mu_{M}
&\leq-\sum_a \int_{M} g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a)) e^{-f}\mu_{M}\\
&\leq \mu_0\int_{M}|d_T\phi|^{2} e^{-f}\mu_{M}.\notag
\end{align*}
From the above inequality, we have
\begin{align}\label{4-16}
\sum_a \int_{M}g_{Q'}(d_T \phi(({\rm Ric}_f^{Q}+\mu_0)(E_a)),d_T \phi(E_a)) e^{-f}\mu_{M}=0.
\end{align}
Since ${\rm Ric}_f^{Q}>-\mu_0$ at some point, from (\ref{4-16}), $d_T \phi=0$, that is, $\phi$ is transversally constant.
\end{proof}
\begin{lem} Let $(M,g,\mathcal F,e^{-f}\nu)$ be a weighted foliation on a complete Riemannian manifold all of whose leaves are compact and whose mean curvature form is bounded. Let $\phi: (M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be an $(\mathcal F,\mathcal F')_f$-harmonic map with $E_{B,f}(\phi)<\infty$. Then
\begin{align*}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}d_T \phi\rangle e^{-f}\mu_{M}=0.
\end{align*}
\end{lem}
\begin{proof}
Since $\phi$ is $(\mathcal F,\mathcal F')_f$-harmonic, $ \delta_\nabla(e^{-f}d_T\phi)=0$ from Remark 3.7. Hence
\begin{align*}
\delta_\nabla (\omega_{l}^{2}e^{-f}d_T \phi)&=\omega_l^2 \delta_\nabla(e^{-f}d_T\phi) -i(d_{B}\omega_{l}^{2})e^{-f}d_T \phi\\
&=-i(d_{B}\omega_{l}^{2})e^{-f}d_T \phi\\
&=-2\omega_{l}e^{-f}i(d_{B}\omega_{l}) d_T \phi.
\end{align*}
By using the identity
\begin{align*}
|X^{\flat}\wedge d_T\phi|^{2}+|i(X)d_T\phi|^{2}=|X|^{2}|d_T\phi|^{2}
\end{align*}
for any vector $X$, we get
\begin{align*}
\bigg{|}\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}d_T \phi\rangle e^{-f}\mu_{M}\bigg{|}
=&\bigg{|}\int_{M}\langle i(\kappa_{B}^\sharp)d_T\phi,\delta_\nabla (\omega_{l}^{2}e^{-f}d_T \phi)\rangle \mu_{M}\bigg{|}\\
=&\bigg{|}\int_{M}\langle i(\kappa_{B}^\sharp)d_T\phi, -2\omega_{l}i(d_{B}\omega_{l})d_T \phi\rangle e^{-f}\mu_{M}\bigg{|}\notag \\
\leq&2\int_{M}\omega_{l}|i(\kappa_{B}^\sharp)d_T\phi||i(d_{B}\omega_{l})d_T \phi| e^{-f}\mu_{M}\notag \\
\leq&2\int_{M}\omega_{l}|\kappa_{B}||d_{B}\omega_{l}||d_T\phi|^{2}e^{-f}\mu_{M}\notag \\
\leq&\frac{2C}{l}\max|\kappa_{B}|\int_{M}\omega_{l}|d_T\phi|^{2}e^{-f}\mu_{M}.
\end{align*}
By letting $l\rightarrow\infty$, $E_{B,f}(\phi)<\infty$ implies
\begin{align*}
\lim\limits_{l\rightarrow\infty}\int_{M}\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}d_T \phi\rangle e^{-f}\mu_{M}=0,
\end{align*}
which finishes the proof.
\end{proof}
\begin{thm}\label{th5}
Let $(M,g,\mathcal F,e^{-f}\nu)$ and $(M',g',\mathcal F')$ be as in Theorem 4.2.
Let $f\in\mathcal K$ and let $\phi : M \rightarrow M'$ be a $(\mathcal F,\mathcal F')_{f}$-harmonic map of $E_{B,f}(\phi)<\infty$. Then $(1)\sim (3)$ in Theorem 4.2 are satisfied.
\end{thm}
\begin{proof}
(1) Since ${\rm Ric}_f^Q \geq 0$ and $K^{Q'}\leq 0$, from (\ref{3-13}), $\langle F_f(d_T\phi),d_T\phi\rangle \geq 0$. Hence from Lemma 3.5 (2) and the first Kato inequality (\ref{3-20}), we have
\begin{align}\label{4-17}
|d_T\phi|\Delta_{B,f} |d_T\phi| \leq -\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle+ |d_T\phi|\kappa_B^\sharp(|d_T\phi|).
\end{align}
Multiplying (\ref{4-17}) by $\omega_{l}^{2}$ and integrating by parts, from Lemma 4.1 and Lemma 4.3, we get
\begin{align}\label{4-18}
\lim_{l\to\infty}\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}\leq 0.
\end{align}
Inequality (\ref{4-18}) is the same as (\ref{4-5}) in Theorem 4.2, so the rest of the proof of (1) proceeds as there.
(2) The proof is the same as in Theorem 4.2.
(3) Assume ${\rm Ric}_f^{Q}\geq-\mu_0$ at all points, ${\rm Ric}_f^{Q}>-\mu_0$ at some point, and $K^{Q'}\leq0$. Then from (\ref{3-13})
\begin{align*}
\langle F_f(d_T\phi),d_T\phi\rangle\geq \sum_a g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a))\geq-\mu_0|d_T\phi|^{2}.
\end{align*}
From Lemma 3.5 (2) and the first Kato inequality (\ref{3-20}), we get
\begin{align}\label{4-24}
|d_T&\phi|\Delta_{B,f}|d_T\phi|
+\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,d_T \phi\rangle-|d_T\phi|\kappa_{B}^\sharp(|d_T\phi|)\notag\\
&\leq-\sum_a g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a))\leq \mu_0|d_T\phi|^{2}.
\end{align}
Multiplying (\ref{4-24}) by $\omega_{l}^{2}$ and integrating by parts, we get
\begin{align*}
\int_{M}&\langle \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}+\int_{M}\Big(\langle d_\nabla i(\kappa_{B}^\sharp)d_T\phi,\omega_{l}^{2}d_T \phi\rangle -\langle \omega_{l}^{2}|d_T\phi|,\kappa_{B}^\sharp(|d_T\phi|)\rangle \Big)e^{-f}\mu_{M}\notag \\
&\leq -\sum_a \int_{M}\omega_{l}^{2} g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a)) e^{-f}\mu_{M} \notag \\
&\leq \mu_0\int_{M}\omega_{l}^{2}|d_T\phi|^{2} e^{-f}\mu_{M}.
\end{align*}
By letting $l\to \infty$, from Lemma 4.1 and Lemma 4.3, we have
\begin{align}\label{4-25}
\lim_{l\to\infty}\int_{M}\langle \omega_{l}^{2}|d_T\phi|,\Delta_{B,f}|d_T\phi|\rangle e^{-f}\mu_{M}&\leq -\lim_{l\to\infty}\sum_a \int_{M}\omega_{l}^{2} g_{Q'}(d_T \phi({\rm Ric}_f^{Q}(E_a)),d_T \phi(E_a)) e^{-f}\mu_{M} \notag \\
&\leq \mu_0\int_{M}|d_T\phi|^{2} e^{-f}\mu_{M}.
\end{align}
The inequality (\ref{4-25}) is the same as (\ref{4-14}) in Theorem 4.2, so the remainder of the proof proceeds as there and is omitted.
\end{proof}
If $f$ is constant, then $\Delta_{B,f}=\Delta_B$, ${\rm Ric}_f^Q ={\rm Ric^Q}$, and an $(\mathcal F,\mathcal F')_f$-harmonic map is just an $(\mathcal F,\mathcal F')$-harmonic map. Hence we have the following corollary.
\begin{cor}\label{co6}
Let $(M,g,\mathcal F)$ be a complete foliated Riemannian manifold all of whose leaves are compact and whose mean curvature form is bounded.
Let $(M',g',\mathcal F')$ be a foliated Riemannian manifold with $K^{Q'}\leq 0$. Assume that the transversal Ricci curvature ${\rm Ric^{Q}}$ of $M$ satisfies ${\rm Ric^{Q}}\geq-\mu_{0}$ for all points and ${\rm Ric^{Q}}>-\mu_{0}$ at some point.
Then any $(\mathcal F,\mathcal F')$-harmonic map $\phi : (M,g,\mathcal F) \rightarrow (M', g',\mathcal F')$ with $E_{B}(\phi)<\infty$ is transversally constant.
\end{cor}
\begin{rem} (1) For the point foliation, Theorem 4.2 and Theorem \ref{th5} can be found in \cite{RV}.
(2) Corollary \ref{co6} for the transversally harmonic map has been studied by Fu and Jung \cite{FJ}.
(3) For the study of transversally $f$-harmonic maps (in particular, on complete manifolds) and $(\mathcal F,\mathcal F')_f$-harmonic maps, we need to restrict $f$ to the class $\mathcal K$ (cf. Theorem 3.8, Theorem 4.2, Theorem 4.4), except for transversally $f$-harmonic maps on a closed manifold $M$ (cf. Theorem 3.6).
\end{rem}
\section{The stress energy tensor}
Let $\phi:(M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be a smooth foliated map with $M$ compact. We calculate the rate of change of the transversal energy of $\phi$ when the metric $g_Q$ is varied. Let $g_Q (t)$ be a variation of $g_Q$ with $g_Q(0)=g_Q$, and put $h= {\partial g_Q \over \partial t}$, a symmetric 2-tensor on $M$. In transversal coordinates $\{y^a\}\ (a=1,\cdots,q)$, the metric is written as $g_Q(t)=\sum_{a,b} g_{ab}\,dy^a dy^b$. Then we have the following.
\begin{lem} Let $\mu_M (t)$ be the volume form with respect to the metric $g(t) = g_L + g_Q(t)$. Then
\begin{align*}
{d\over dt}\mu_M = \frac12 ({\rm tr}_Q h) \mu_M=\frac12\langle g_Q,h\rangle \mu_M.
\end{align*}
\end{lem}
\begin{proof}
Note that for a nonsingular matrix $A$,
\begin{align}\label{5-1}
{d\over dt} {\rm det}(A) = {\rm tr}[ {\rm det}(A) A^{-1} A'].
\end{align}
From (\ref{5-1}), we get
\begin{align}\label{5-2}
{d\over dt} \sqrt{{\rm det}(g_{ab})} = \frac12 ({\rm tr}_Q h) \sqrt{{\rm det}(g_{ab})}.
\end{align}
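Indeed, applying (\ref{5-1}) to $A=(g_{ab}(t))$ with $A^{\prime}=h$ yields ${d\over dt}\,{\rm det}(g_{ab})={\rm det}(g_{ab})\sum_{a,b}g^{ab}h_{ab}={\rm det}(g_{ab})\,{\rm tr}_{Q}h$, and (\ref{5-2}) follows by the chain rule.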
Since $\chi_{\mathcal F}$ is time independent, ${d\over dt}\mu_M = ({d\over dt}\nu)\wedge \chi_{\mathcal F}$ and
the transversal volume form $\nu$ is $\nu =\sqrt{{\rm det}(g_{ab})} dy^1\wedge\cdots \wedge dy^q$. Hence from (\ref{5-2})
\begin{align*}
{d\over dt}\mu_M = ({d\over dt}\nu)\wedge \chi_{\mathcal F}
=\frac12 ({\rm tr}_Q h) \nu \wedge \chi_{\mathcal F} =\frac12 ({\rm tr}_Q h) \mu_M.
\end{align*}
\end{proof}
Then we have the following variation formula of the transversal metric.
\begin{thm} Let $\phi:(M,g,\mathcal F, e^{-f}\nu)\to (M',g',\mathcal F')$ be a fixed smooth foliated map with $M$ compact. Then
\begin{align}
{d\over dt} E_{B,f}(\phi,g(t))|_{t=0} =\frac12\int_M \langle S_{T}(\phi), h\rangle e^{-f}\mu_M,
\end{align}
where $S_T(\phi) =\frac12|d_T\phi|^2 g_Q -\phi^* g_{Q'}$ is the transversal stress-energy tensor \cite{JU3}.
\end{thm}
\begin{proof} By Lemma 5.1, we have
\begin{align*}
{d\over dt} E_{B,f}(\phi,g(t)) &= \frac12\int_M( {d\over dt}|d_T\phi|^2) e^{-f}\mu_M + \frac14\int_M \langle |d_T\phi|^2 g_Q,h\rangle e^{-f}\mu_M.
\end{align*}
On the other hand, since ${d\over dt} g^{ab} =-h^{ab} (=\sum_{c,d} g^{ac}g^{bd} h_{cd})$, we have
\begin{align}\label{5-4}
{d\over dt}|d_T\phi|^2 = \sum_{a,b}{d\over dt} g^{ab} d_T\phi({\partial\over \partial y^a}) d_T\phi({\partial\over \partial y^b})=- h(d_T\phi, d_T\phi)=-\langle h, \phi^* g_{Q'}\rangle.
\end{align}
From (\ref{5-4}), we have
\begin{align*}
{d\over dt} E_{B,f}(\phi,g(t)) &= \frac12\int_M \langle \frac12|d_T\phi|^2 g_Q- \phi^* g_{Q'},h\rangle e^{-f}\mu_M,
\end{align*}
which completes the proof.
\end{proof}
Now, we put
\begin{align}
S_{T,f}(\phi) = e^{-f} S_T(\phi),
\end{align}
called the {\it transversal stress $f$-energy tensor} of $\phi$.
\begin{cor} Let $\phi:(M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be a nonconstant foliated smooth map with $M$ compact. Then $\phi:M \to M'$ is an extremal of the transversal $f$-energy functional with respect to variations of the transversal metric $g_Q$ if and only if $S_{T,f}(\phi) =0$.
\end{cor}
\begin{lem} Let $\phi:(M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ be a smooth foliated map. Then
\begin{align*}
({\rm div}_\nabla S_{T,f}(\phi) )(X) = - \langle \tau_{b,f}(\phi),d_T\phi(X)\rangle -\frac12 e^{-f}|d_T\phi|^2 d_B f (X)
\end{align*}
for any normal vector $X\in\Gamma Q$.
\end{lem}
\begin{proof} Let $\{E_a\}$ be a local orthonormal basic frame on $Q$ such that $\nabla E_a=0$ at a point $x$. Then at $x$,
\begin{align*}
({\rm div}_\nabla S_{T,f}(\phi))(X) &= \sum_a (\nabla_{E_a} S_{T,f}(\phi))(E_a,X)\\
&=e^{-f}({\rm div}_\nabla S_T(\phi))(X) - e^{-f}S_T(\phi)(\nabla_{\rm tr} f,X)\\
&= -e^{-f} \langle \tau_b(\phi), d_T\phi(X)\rangle -e^{-f}\Big(\frac12 |d_T\phi|^2 g_Q -\phi^* g_{Q'}\Big)(\nabla_{\rm tr} f,X) \\
&=-\langle \tau_b(\phi) - d_T\phi(\nabla_{\rm tr}f), d_T\phi(X)\rangle e^{-f} - \frac12 e^{-f}|d_T\phi|^2 d_B f(X)\\
&=-\langle \tau_{b,f}(\phi),d_T\phi(X)\rangle -\frac12 e^{-f}|d_T\phi|^2 d_Bf (X),
\end{align*}
which implies the proof.
\end{proof}
\begin{rem} Any transversally harmonic map satisfies the transverse conservation law, i.e., ${\rm div}_\nabla S_T (\phi)=0$ \cite{JU3}, but a transversally $f$-harmonic map does not in general satisfy the {\it transverse $f$-conservation law} ${\rm div}_\nabla S_{T,f}(\phi)=0$ (cf. Lemma 5.4).
\end{rem}
Let $F:[0,\infty)\to [0,\infty)$ be a $C^2$-function such that $F'>0$ on $(0,\infty)$. A {\it transversally $F$-harmonic map} $\phi:M\to M'$ is a solution of the Euler--Lagrange equation $\tau_{b,F}(\phi)=0$ \cite{CW}, where $\tau_{b,F}(\phi)$ is the transversal $F$-tension field given by
\begin{align}\label{5-6}
\tau_{b,F}(\phi) = F' ({|d_T\phi|^2\over 2})\tau_b(\phi) + d_T\phi\Big(\nabla_{\rm tr} F'({|d_T\phi|^2\over 2})\Big).
\end{align}
When $F(s) =s$, the transversal $F$-tension field $\tau_{b,F}(\phi)$ is the transversal tension field $\tau_b(\phi)$.
\begin{prop} Any transversally $F$-harmonic map $\phi: (M,g,\mathcal F,e^{-f}\nu)\to (M',g',\mathcal F')$ without critical points is a transversally $f$-harmonic map with $f= -\ln F'({|d_T\phi|^2\over 2})$.
\end{prop}
\begin{proof} Setting $f= -\ln F'({|d_T\phi|^2\over 2})$ in (\ref{5-6}), a direct computation gives $\tau_{b,F}(\phi) =\tau_{b,f}(\phi)$, and the proof follows.
\end{proof}
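As a sample application (this example is ours, not taken from \cite{CW}): for $p>2$, take $F(s)=\frac{1}{p}(2s)^{p/2}$, so that $F'(s)=(2s)^{(p-2)/2}$ and transversally $F$-harmonic maps are the transversally $p$-harmonic maps. By Proposition 5.6, such a map without critical points is transversally $f$-harmonic with $f=-\ln F'({|d_T\phi|^2\over 2})=-(p-2)\ln|d_T\phi|$.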
Now, we define the {\it transversal $F$-stress energy} tensor $S_{T,F}(\phi)$ by
\begin{align}
S_{T,F}(\phi) = F({|d_T\phi|^2\over 2}) g_Q - F'({|d_T\phi|^2\over 2}) \phi^* g_{Q'}.
\end{align}
Then we have the following Lemma \cite{CW}.
\begin{lem} Let $\phi: (M,g,\mathcal F)\to (M',g',\mathcal F')$ be a smooth foliated map. Then
\begin{align}
{\rm div}_\nabla S_{T,F}(\phi) = -\langle \tau_{b,F}(\phi), d_T\phi\rangle.
\end{align}
\end{lem}
\begin{proof}
Let $\{E_a\}$ be a local orthonormal basic frame such that $\nabla E_a=0$ at $x$. Then at $x$,
\begin{align*}
({\rm div}_\nabla S_{T,F}(\phi))(E_a)&=\sum_b\nabla_{E_b}S_{T,F}(E_b,E_a)\\
&=\sum_b \nabla_{E_b}\Big( F({|d_T\phi|^2\over 2})\delta_{ab} - F'({|d_T\phi|^2\over 2})\phi^*g_{Q'}(E_a,E_b)\Big)\\
&=g_Q (\nabla_{\rm tr} F({|d_T\phi|^2\over 2}),E_a) -g_{Q'}( d_T\phi \Big(\nabla_{\rm tr} F'({|d_T\phi|^2\over 2})\Big),d_T\phi(E_a))\\
&- g_{Q'} (F'({|d_T\phi|^2\over 2}) \tau_b(\phi),d_T\phi(E_a)) - \frac12 g_Q (F'({|d_T\phi|^2\over 2})\nabla_{\rm tr} {|d_T\phi|^2}, E_a).
\end{align*}
On the other hand, by the chain rule, we have
\begin{align*}
g_Q (\nabla_{\rm tr} F({|d_T\phi|^2\over 2}),E_a)&=E_a [F({|d_T\phi|^2\over 2})]\\
& = F'({|d_T\phi|^2\over 2}) E_a ({|d_T\phi|^2\over 2})\\
& = \frac12 g_Q (F'({|d_T\phi|^2\over 2})\nabla_{\rm tr}{|d_T\phi|^2},E_a).
\end{align*}
Hence from the above equations, we have
\begin{align*}
({\rm div}_\nabla S_{T,F}(\phi))(E_a)&= -g_{Q'}( d_T\phi \Big(\nabla_{\rm tr} F'({|d_T\phi|^2\over 2})\Big) +F'({|d_T\phi|^2\over 2}) \tau_b(\phi),d_T\phi(E_a))\\
&=- g_{Q'} (\tau_{b,F}(\phi), d_T\phi (E_a)),
\end{align*}
which finishes the proof.
\end{proof}
If $\phi:M\to M'$ satisfies ${\rm div}_\nabla S_{T,F}(\phi) =0$, then we say that $\phi$ satisfies the {\it transverse $F$-conservation law}. In general, a transversally $f$-harmonic map does not satisfy the transverse $f$-conservation law. But we have the following.
\begin{prop} Any transversally $F$-harmonic map satisfies the transverse $F$-conservation law. In particular, if $f=-\ln F'({|d_T\phi|^2\over 2})$, then a transversally $f$-harmonic map satisfies the transverse $F$-conservation law, that is, ${\rm div}_\nabla S_{T,F}(\phi)=0$.
\end{prop}
\begin{proof} From Proposition 5.6 and Lemma 5.7, the proof follows.
\end{proof}
\section{Introduction}
Let $n$ and $k$ be integers with $1\leq k\leq n.$ Write $[n]=\{1,2,\ldots,n\}$ and denote by ${[n]\choose k}$ the family of all $k$-subsets of $[n].$ For any positive integer $t$, a family $\mathcal{F}\subseteq {[n]\choose k}$ is said to be \emph{$t$-intersecting} if $|A \cap B|\geq t$ for all $A, B\in\mathcal{F}.$ A $t$-intersecting family is called \emph{trivial} if all its members contain a common specified $t$-subset of $[n]$, and \emph{non-trivial} otherwise.
The Erd\H{o}s-Ko-Rado Theorem gives the maximum size of a $t$-intersecting family and shows further that any $t$-intersecting family with maximum size is a trivial family consisting of all $k$-subsets that contain a fixed $t$-subset of $[n]$ for $n>(t+1)(k-t +1)$ \cite{Deza-Frankl-1983,Erdos-Ko-Rado-1961-313,Frankl-1978,Frankl-1987,GK,Wilson-1984}. In \cite{Ahlswede-Khachatrian-1997,Frankl--Furedi-1991}, the structure of such extremal families for any positive integers $t,\ k$ and $n$ was described.
Determining the structure of non-trivial $t$-intersecting families of $k$-subsets of $[n]$ with maximum size was a long-standing problem. The first such result is the Hilton-Milner Theorem \cite{Hilton-Milner-1967} which describes the structure of such families for $t=1$. A complete solution to this problem for any $t$ was obtained by Ahlswede and Khachatrian \cite{Ahlswede-Khachatrian-1996}.
Recently, other maximal non-trivial $1$-intersecting families with large size have been studied. In \cite{Han-Kohayakawa}, Han and Kohayakawa determined the structure of the third largest maximal $1$-intersecting families of $k$-subsets of $[n]$ with $6\leq 2k<n$. They also mentioned that it would be natural to investigate such problem for $t$-intersecting families. In \cite{Kostochka-Mubayi}, Kostochka and Mubayi described the structure of $1$-intersecting families of $k$-subsets of $[n]$ for large $n$ whose size is quite a bit smaller than the bound ${n-1\choose k-1}$ given by the Erd\H{o}s-Ko-Rado Theorem.
Define the $t$-\emph{covering number} $\tau_t(\mathcal{F})$ of a family $\mathcal{F}\subseteq {[n]\choose k}$ to be the minimum size of a subset $T$ of $[n]$ such that $|T\cap F|\geq t$ for any $F\in \mathcal{F}$. Let $\mathcal{F}\subseteq{[n]\choose k}$ be any $t$-intersecting family. Note that $t\leq \tau_t(\mathcal{F})\leq k.$ In this paper, we consider maximal non-trivial $t$-intersecting families for any positive integer $t$. If $t=k-1$, it is well known that any maximal non-trivial $(k-1)$-intersecting family is a collection of all $k$-subsets containing a fixed $(k-1)$-subset or a collection of all $k$-subsets contained in a fixed $(k+1)$-subset. Thus, we only consider the case with $1\leq t\leq k-2$.
To present our result let us first introduce the following two constructions of $t$-intersecting families of $k$-subsets of $[n]$.
\
\noindent\textbf{Family I.}\quad Let $X$, $M$ and $C$ be three subsets of $[n]$ such that $X\subseteq M\subseteq C$, $|X|=t$, $|M|=k$ and $|C|=c$, where $c\in\{k+1,k+2,\ldots,2k-t,n\}$. Denote
\begin{align*}
\mathcal{H}_1(X,M,C)=\mathcal{A}(X,M)\cup\mathcal{B}(X,M,C)\cup\mathcal{C}(X,M,C),
\end{align*}
where
\begin{align*}
\mathcal{A}(X,M)=&\left\{F\in{[n]\choose k}\mid X\subseteq F,\ |F\cap M|\geq t+1\right\},\\
\mathcal{B}(X,M,C)=&\left\{F\in{[n]\choose k}\mid F\cap M=X,\ |F\cap C|=c-k+t\right\},\\
\mathcal{C}(X,M,C)=&\left\{F\in{C\choose k}\mid |F\cap X|=t-1,\ |F\cap M|=k-1\right\}.
\end{align*}
\
\noindent\textbf{Family II.}\quad Let $Z$ be a $(t+2)$-subset of $[n]$. Define
\begin{align*}
\mathcal{H}_2(Z)=\left\{F\in{[n]\choose k}\mid|F\cap Z|\geq t+1\right\}.
\end{align*}
It is straightforward to verify that Families I and II are $t$-intersecting families with $t$-covering number $t+1$. Observe that the size of each of these families depends only on $|X|$, $|M|$, $|C|$ and $|Z|$. Let $h_1(t,k,c)=|\mathcal{H}_1(X,M,C)|$, where $c=|C|\in\{k+1,k+2,\ldots,2k-t,n\}$, and let $h_2(t+2)=|\mathcal{H}_2(Z)|$.
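In fact, counting the $k$-subsets $F$ with $|F\cap Z|=t+1$ and $|F\cap Z|=t+2$ separately gives the closed form
\begin{align*}
h_2(t+2)=(t+2){n-t-2\choose k-t-1}+{n-t-2\choose k-t-2}.
\end{align*}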
\begin{remark}\label{s-obs1} {\rm Suppose $X$, $M$ and $C$ are three subsets of $[n]$ satisfying the condition in Family I. If $|C|=k+1$, then
$$
\mathcal{H}_1(X,M,C)=\left\{F\in{[n]\choose k}\mid X\subseteq F,\ |F\cap C|\geq t+1\right\}\cup {C\choose k}.
$$
If $t=k-2$, then $\mathcal{H}_1(X,M,[n])=\mathcal{H}_2(M)$ and $h_1(k-2,k,n)=h_2(k)$. }
\end{remark}
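For small parameters, the families $\mathcal{H}_1$ and $\mathcal{H}_2$ and their sizes $h_1$, $h_2$ can be checked by brute force. The following sketch (ours, for illustration only; all function and variable names are ad hoc) enumerates both families and verifies that they are $t$-intersecting.
\begin{verbatim}
from itertools import combinations

def family_two(n, k, t, Z):
    # H_2(Z): all k-subsets F of [n] with |F & Z| >= t+1
    return [set(F) for F in combinations(range(1, n + 1), k)
            if len(set(F) & Z) >= t + 1]

def family_one(n, k, t, X, M, C):
    # H_1(X,M,C): union of the three subfamilies A, B, C
    c, fam = len(C), []
    for F in combinations(range(1, n + 1), k):
        S = set(F)
        if X <= S and len(S & M) >= t + 1:                # A(X,M)
            fam.append(S)
        elif S & M == X and len(S & C) == c - k + t:      # B(X,M,C)
            fam.append(S)
        elif S <= C and len(S & X) == t - 1 \
                and len(S & M) == k - 1:                  # C(X,M,C)
            fam.append(S)
    return fam

n, k, t = 9, 4, 2
X, M = set(range(1, t + 1)), set(range(1, k + 1))
F1 = family_one(n, k, t, X, M, set(range(1, k + 2)))      # c = k+1
F2 = family_two(n, k, t, set(range(1, t + 3)))
assert all(len(A & B) >= t for A in F1 for B in F1)       # t-intersecting
assert all(len(A & B) >= t for A in F2 for B in F2)
print(len(F1), len(F2))   # h_1(t,k,k+1) and h_2(t+2)
\end{verbatim}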
Our main result describes the structure of all maximal non-trivial uniform $t$-intersecting families with large size for finite sets.
\begin{thm}\label{s-main-1}
Let $1\leq t\leq k-2$, and $\max\left\{{t+2\choose 2},\frac{k-t+2}{2}\right\}\cdot(k-t+1)^2+t\leq n$. If $\mathcal{F}\subseteq{[n]\choose k}$ is a maximal non-trivial $t$-intersecting family with
$$
|\mathcal{F}|\geq (k-t){n-t-1\choose k-t-1}-{k-t\choose 2}{n-t-2\choose k-t-2},
$$
then one of the following holds.
\begin{itemize}
\item[{\rm(i)}] $\mathcal{F}=\mathcal{H}_1(X,M,C)$ for some $t$-subset $X$, $k$-subset $M$ and $c$-subset $C$ of $[n]$, where $c\in\{k+1,k+2,\ldots,2k-t,n\}$.
\item[{\rm(ii)}] $\mathcal{F}=\mathcal{H}_2(Z)$ for some $(t+2)$-subset $Z$ of $[n],$ and $\frac{k}{2}-1\leq t\leq k-2$.
\end{itemize}
\end{thm}
In the special case when $t=1$, Theorem~\ref{s-main-1} gives rise to Theorem 7 in \cite{Kostochka-Mubayi}. From Lemmas~\ref{s-lem3}, \ref{s-lem6}, \ref{s-lem2} and \ref{s-lem2-2}, one can order these families by size.
The rest of this paper is organized as follows. In the next section we will give some properties of the maximal $t$-intersecting families with $t$-covering number $t+1$, and prove a number of inequalities for the sizes of Families I and II. In Section 3 we will prove some upper bounds for the sizes of non-trivial $t$-intersecting families using their $t$-covering number. After these preparations we will prove Theorem~\ref{s-main-1} in Section 4.
\section{$t$-intersecting families with $t$-covering number $t+1$}
In this section, we will give some properties of the maximal $t$-intersecting families with $t$-covering number $t+1$ and prove a number of inequalities for the sizes of Families I and II.
\begin{ass}\label{s-hyp1}
Let $1\leq t\leq k-2$ and $2k\leq n$, let $\mathcal{F}\subseteq {[n]\choose k}$ be a maximal $t$-intersecting family with $\tau_t(\mathcal{F})=t+1.$ Define
$$
\mathcal{T}=\left\{T\in{[n]\choose t+1}\mid |T\cap F|\geq t\ for\ any\ F\in\mathcal{F}\right\}.
$$
\end{ass}
\begin{lemma}\label{s-lem3-1}
Let $n,\ k,\ t,\ \mathcal{F}$ and $\mathcal{T}$ be as in Assumption~\ref{s-hyp1}. Then $\mathcal{T}$ is a $t$-intersecting family with $t\leq\tau_t(\mathcal{T})\leq t+1$. Moreover, the following hold.
\begin{itemize}
\item[{\rm (i)}] If $\tau_t(\mathcal{T})=t$, then there exist a $t$-subset $X$ and an $l$-subset $M$ of $[n]$ with $X\subseteq M$ and $t+1\leq l\leq k+1$ such that \begin{align}\label{s-equ1-1}
\mathcal{T}=\left\{T\in{M\choose t+1}\mid X\subseteq T\right\}.
\end{align}
\item[{\rm (ii)}] If $\tau_t(\mathcal{T})=t+1$, then there exists a $(t+2)$-subset $Z$ of $[n]$ such that $\mathcal{T}={Z\choose t+1}.$
\end{itemize}
\end{lemma}
\proof For any $T\in\mathcal{T}$, by maximality of $\mathcal{F}$, $\mathcal{F}$ contains all $k$-subsets of $[n]$ containing $T$. For any $T_1,T_2\in \mathcal{T},$ if $|T_1\cap T_2|<t,$ then there must exist $F_1,F_2\in\mathcal{F}$ such that $T_1\subseteq F_1$, $T_2\subseteq F_2$ and $|F_1\cap F_2|<t$ from $2k\leq n.$ That is impossible as $\mathcal{F}$ is maximal $t$-intersecting. Hence $|T_1\cap T_2|\geq t,$ and $\mathcal{T}\subseteq {[n]\choose t+1}$ is a $t$-intersecting family with $t\leq\tau_t(\mathcal{T})\leq t+1.$
\
\medskip
\
(i)\quad Suppose that $\tau_t(\mathcal{T})=t.$ Then there exists a $t$-subset $X$ of $[n]$ such that $X$ is contained in every $(t+1)$-subset in $\mathcal{T}.$ Assume that $M=\cup_{T\in\mathcal{T}}T$ and $|M|=l.$ It suffices to prove (\ref{s-equ1-1}) and $t+1\leq|M|\leq k+1$. Since $\tau_t(\mathcal{F})=t+1$, we have $\mathcal{F}\setminus\mathcal{F}_X\neq\emptyset,$ where $\mathcal{F}_X=\{F\in\mathcal{F}\mid X\subseteq F\}.$ Let $F^\prime$ be any $k$-subset in $\mathcal{F}\setminus\mathcal{F}_X.$ Observe that
$|X\cap F^\prime|\leq t-1.$ For any $T\in\mathcal{T},$ since $X\subseteq T$ and $|T\cap F^\prime|\geq t,$ we have $|X\cap F^\prime|=t-1$ and $|T\cap (X\cup F^\prime)|\geq t+1,$ which imply that $|X\cup F^\prime|=k+1$ and $T\subseteq X\cup F^\prime.$ Hence $M=\cup_{T\in\mathcal{T}}T\subseteq X\cup F^\prime$ and $t+1\leq l\leq k+1.$ It is clear that $\mathcal{T}\subseteq\{T\in{M\choose t+1}\mid X\subseteq T\}.$ Let $T^\prime$ be any $(t+1)$-subset of $M$ with $X\subseteq T^\prime.$ For any $F\in\mathcal{F}$, if $X\subseteq F,$ then $|T^\prime\cap F|\geq t$; if $X\nsubseteq F,$ by above discussion, then $T^\prime\subseteq X\cup F,$ which implies that $|T^\prime\cap F|\geq t$ from $|X\cup F|=k+1.$ Hence $T^\prime\in\mathcal{T}$ and (\ref{s-equ1-1}) is proved.
\
\medskip
\
(ii)\quad Suppose that $\tau_t(\mathcal{T})=t+1$. Let $A,B,C\in\mathcal{T}$ be distinct subsets such that $A\cap B$, $A\cap C$ and $B\cap C$ are pair-wise distinct. Since $\mathcal{T}$ is $t$-intersecting, we have $|A\cap B|=|A\cap C|=|B\cap C|=t,$ which implies that $C=(A\cap C)\cup(B\cap C)\subseteq A\cup B$ from $|C|=t+1.$ Hence, we get $A\cup C\subseteq A\cup B$ and $B\cup C\subseteq A\cup B$, which imply that $A\cup B=A\cup C=B\cup C.$
Since $\tau_t(\mathcal{T})=t+1$, there exist three distinct subsets $T_1,T_2,T_3\in\mathcal{T}$ such that $T_1\cap T_2$, $T_1\cap T_3$ and $T_2\cap T_3$ are pair-wise distinct. For any $T\in\mathcal{T}\setminus\{T_1,T_2,T_3\},$ if $T\cap T_1=T\cap T_2=T\cap T_3,$ then $|T\cap T_1|=t,$ $T\cap T_1\subseteq T_2$ and $T\cap T_1\subseteq T_3,$ which imply that $T\cap T_1=T_1\cap T_2=T_1\cap T_3,$ a contradiction. Hence, there exist $T_i, T_j\in\{T_1,T_2,T_3\}$ such that $T\cap T_i\neq T\cap T_j,$ and $
T= (T\cap T_i)\cup (T\cap T_j)\subseteq T_1\cup T_2=T_1\cup T_3=T_2\cup T_3.$
Let $Z=T_1\cup T_2.$ Then $\mathcal{T}\subseteq{Z\choose t+1}.$ In the following, we show that ${Z\choose t+1}\subseteq\mathcal{T}.$ For any $F\in \mathcal{F},$ if $F\cap T_1=F\cap T_2=F\cap T_3,$ then $F\cap T_1\subseteq T_i$ for any $i\in\{1,2,3\}.$ That is impossible as $T_1\cap T_2,$ $T_1\cap T_3$ and $T_2\cap T_3$ are pair-wise distinct and $|F\cap T_1|\geq t$. Hence there exist $T_i,T_j\in\{T_1,T_2,T_3\}$ such that $F\cap T_i\neq F\cap T_j,$ implying that $|F\cap Z|\geq t+1.$ So for any $F\in\mathcal{F}$ and $T^\prime\in{Z\choose t+1},$ we have $|F\cap T^\prime|\geq t.$ Therefore, we have $\mathcal{T}={Z\choose t+1}$ as desired. $\qed$
\begin{lemma}\label{s-prop3}
Let $n,\ k,\ t,\ \mathcal{F}$ and $\mathcal{T}$ be as in Assumption~\ref{s-hyp1}, and set $M=\cup_{T\in \mathcal{T}}T$. Suppose that $\tau_t(\mathcal{T})=t$, $|M|=k+1$ and $X$ is a $t$-subset of $[n]$ which is contained in each $T\in\mathcal{T}.$ Then $\mathcal{F}=\{F\in {[n]\choose k}\mid X\subseteq F,\ |F\cap M|\geq t+1\}\cup{M\choose k}.$
\end{lemma}
\proof It follows from the proof of Lemma~\ref{s-lem3-1} that, for any $F\in\mathcal{F}\setminus\mathcal{F}_X$, we have $M=F\cup X,$ which implies that $F\in{M\choose k}.$ Let $\mathcal{A}^\prime=\{F\in {[n]\choose k}\mid X\subseteq F,\ |F\cap M|\geq t+1\}$ and $F^\prime$ be a fixed $k$-subset in $\mathcal{F}\setminus\mathcal{F}_X$. For any $F\in\mathcal{F}_X,$ since $|F\cap F^\prime|\geq t,$ $|F^\prime\cap X|\leq t-1$ and $M=F^\prime\cup X,$ we have $|F\cap M|\geq t+1,$ which implies that $\mathcal{F}_X\subseteq \mathcal{A}^\prime.$ Note that $\mathcal{A}^\prime\cup {M\choose k}$ is a $t$-intersecting family. By the maximality of $\mathcal{F},$ we have $\mathcal{F}=\mathcal{A}^\prime\cup {M\choose k}.$ $\qed$
By Remark~\ref{s-obs1}, if $\mathcal{F}$ is a maximal $t$-intersecting family satisfying the conditions in Lemma~\ref{s-prop3}, then $\mathcal{F}=\mathcal{H}_1(X,Y,M)$ for any $Y\in{M\choose k}$ with $X\subseteq Y$.
\begin{lemma}\label{s-prop3-1}
Let $n,\ k,\ t,\ \mathcal{F}$ and $\mathcal{T}$ be as in Assumption~\ref{s-hyp1}, set $M=\cup_{T\in \mathcal{T}}T$ and $C=M\cup(\cup_{F\in\mathcal{F}\setminus \mathcal{F}_X}F)$. Suppose that $\tau_t(\mathcal{T})=t$, $|M|=k$ and $X$ is a $t$-subset of $[n]$ which is contained in each $T\in\mathcal{T}.$ Assume that $|C|=c$. Then either $k+2\leq c\leq 2k-t$ or $c=n.$
Moreover, the following hold.
\begin{itemize}
\item[{\rm(i)}] If $k+2\leq c\leq 2k-t,$ then $\mathcal{F}=\mathcal{H}_1(X,M,C).$
\item[{\rm(ii)}] If $c=n$, then $t\neq k-2$ and $\mathcal{F}=\mathcal{H}_1(X,M,[n])$.
\end{itemize}
\end{lemma}
\proof By the proof of Lemma~\ref{s-lem3-1}, for any $F\in\mathcal{F}\setminus\mathcal{F}_X,$ we have $|F\cap X|=t-1$ and $M\subseteq X\cup F$, which imply $|F\cap M|=k-1$ from $X\subseteq M.$ Choose $F_1\in \mathcal{F}\setminus\mathcal{F}_X.$ Then $|F_1\cup M|=k+1.$ If $c> k+1,$ then there exists $F_2\in \mathcal{F}\setminus\mathcal{F}_X$ such that $F_2\nsubseteq F_1\cup M,$ which implies that $F_2\cap (F_1\cup M)=F_2\cap M.$ Similarly, if $c>k+2,$ then there exists $F_3\in \mathcal{F}\setminus\mathcal{F}_X$ such that $F_3\nsubseteq F_1\cup F_2\cup M,$ which implies that $F_3\cap (F_1\cup F_2\cup M)=F_3\cap M.$ By induction, we obtain $F_1,F_2,\ldots, F_{c-k}\in\mathcal{F}\setminus\mathcal{F}_X$ such that $F_i\cap(M\cup(\cup_{j=1}^{i-1}F_j))=F_i\cap M$ for any $i\in\{1,2,\ldots,c-k\}.$ If there exists $F^\prime\in\mathcal{F}$ such that $F^\prime\cap M=X$, then for any $i\in\{1,2,\ldots,c-k\}$, there exists $y_i\in F_i\setminus M$ such that $y_i\in F^\prime$ from $|F^\prime \cap F_i|\geq t$ and $|F^\prime \cap F_i\cap M|=t-1.$ Suppose $X=\{x_1,\ldots,x_t\}$. By the choice of $F_1,F_2,\ldots,F_{c-k},$ the elements $x_1,\ldots,x_t,y_1,\ldots,y_{c-k}$ are pairwise distinct and all lie in $F^\prime$.
Suppose that $c\geq 2k-t+1.$ If there exists $F^\prime\in\mathcal{F}$ such that $F^\prime\cap M=X$, then by the above discussion $F^\prime$ contains the $t+(c-k)\geq k+1$ pairwise distinct elements $x_1,\ldots,x_t,y_1,\ldots,y_{c-k}$, so $|F^\prime|>k$, which is impossible. Hence $|F^{\prime\prime}\cap M|\geq t+1$ for any $F^{\prime\prime}\in\mathcal{F}_X$. By the maximality of $\mathcal{F}$, it is easy to see that any $k$-subset $F^{\prime\prime\prime}$ of $[n]$ satisfying $|F^{\prime\prime\prime}\cap X|=t-1$ and $|F^{\prime\prime\prime}\cap M|=k-1$ is in $\mathcal{F}$. Then we have $C=[n]$ and $c=n.$ On the other hand, we have $c\geq k+2$: otherwise $c=k+1$, and $|T\cap F|\geq t$ for any $F\in\mathcal{F}$ and any $T\in{C\choose t+1}$ with $X\subseteq T$, which implies that $T\subseteq M$, a contradiction.
So far we have proved that either $k+2\leq c\leq 2k-t$ or $c=n.$ It remains to prove (i) and (ii).
(i)\quad Suppose that $k+2\leq c\leq 2k-t$. Since $|F\cap X|=t-1$ and $|F\cap M|=k-1$ for any $F\in\mathcal{F}\setminus\mathcal{F}_X,$ we have $\mathcal{F}\setminus\mathcal{F}_X\subseteq \mathcal{C}(X,M,C).$ For any $F^\prime\in\mathcal{F}_X,$ if $|F^\prime\cap M|\geq t+1$, then $F^\prime\in\mathcal{A}(X,M);$ if $F^\prime\cap M=X,$ then $|F^\prime\cap C|=c-k+t$ by the above discussion, which implies that $F^\prime\in\mathcal{B}(X,M,C).$ Thus, $\mathcal{F}\subseteq \mathcal{H}_1(X,M,C).$ By the maximality of $\mathcal{F}$, we get $\mathcal{F}=\mathcal{H}_1(X,M,C)$.
(ii)\quad Suppose that $c=n$. Then $\mathcal{F}=\mathcal{A}(X,M)\cup\mathcal{C}(X,M,C)$ by the discussion in (i) and the maximality of $\mathcal{F}$. If $t=k-2$, then $\mathcal{F}=\mathcal{H}_1(X,M,[n])=\mathcal{H}_2(M)$, which implies that $\mathcal{T}={M\choose t+1}$ and $\tau_t(\mathcal{T})=t+1,$ a contradiction. $\qed$
\begin{lemma}\label{s-prop4}
Let $n,\ k,\ t,\ \mathcal{F}$ and $\mathcal{T}$ be as in Assumption~\ref{s-hyp1}. Suppose $\tau_t(\mathcal{T})=t+1$ and $\mathcal{T}={Z\choose t+1}$ for some $(t+2)$-subset $Z$ of $[n]$. Then $\mathcal{F}=\mathcal{H}_2(Z).$
\end{lemma}
\proof Since $\mathcal{T}={Z\choose t+1}$, we have $|F\cap Z|\geq t$ for any $F\in\mathcal{F}.$ If there exists $F^\prime\in\mathcal{F}$ such that $|F^\prime\cap Z|=t,$ then there exists a $T^\prime\in\mathcal{T}$ such that $|F^\prime\cap T^\prime|=t-1,$ a contradiction. Hence, $\mathcal{F}\subseteq \mathcal{H}_2(Z).$ Since $\mathcal{F}$ is maximal and $\mathcal{H}_2(Z)$ is $t$-intersecting, we have $\mathcal{F}=\mathcal{H}_2(Z)$. $\qed$
Now we give some equalities and inequalities for the sizes of Families I and II.
\begin{lemma}\label{s-lem1}
Suppose $c\in\{k+1,k+2,\ldots,2k-t,n\}.$ Then the following hold.
\begin{align}
h_1(t,k,c)=&{n-t\choose k-t}-{n-k\choose k-t}+{n-c\choose 2k-c-t}+t(c-k).\label{s-equ2}\\
h_2(t+2)=&(t+2){n-t-2\choose k-t-1}+{n-t-2\choose k-t-2}.\label{s-equ4}
\end{align}
\end{lemma}
\proof Let $X,$ $M$, $C$, $\mathcal{A}(X,M)$, $\mathcal{B}(X,M,C)$ and $\mathcal{C}(X,M,C)$ be as in Family I. Then
\begin{align*}
|\mathcal{A}(X,M)|={n-t\choose k-t}-{n-k\choose k-t},\ |\mathcal{B}(X,M,C)|={n-c\choose 2k-c-t},\ |\mathcal{C}(X,M,C)|=t(c-k).
\end{align*}
Hence, (\ref{s-equ2}) holds.
Consider the family $\mathcal{H}_2(Z),$ where $Z$ is a $(t+2)$-subset of $[n]$. Observe that the number of $k$-subsets $F$ of $[n]$ satisfying $|F\cap Z|=t+1$ is $(t+2){n-t-2\choose k-t-1}$, and the number of $k$-subsets $F$ of $[n]$ satisfying $|F\cap Z|=t+2$ is ${n-t-2\choose k-t-2}$. Hence (\ref{s-equ4}) holds. $\qed$
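The closed forms of Lemma~\ref{s-lem1} are easy to evaluate by machine; the short Python sketch below (our illustration, with function names of our choosing) does so, using a helper binomial with the convention ${a\choose b}=0$ outside $0\leq b\leq a$, which is what makes the term ${n-c\choose 2k-c-t}$ vanish for $c=n$.
\begin{verbatim}
from math import comb

def C(a, b):
    # binomial coefficient with C(a, b) = 0 outside 0 <= b <= a
    return comb(a, b) if 0 <= b <= a else 0

def h1(n, k, t, c):
    # |H_1(X, M, C)|, Eq. (s-equ2); c in {k+1, ..., 2k-t} or c = n
    return C(n - t, k - t) - C(n - k, k - t) \
        + C(n - c, 2 * k - c - t) + t * (c - k)

def h2(n, k, t):
    # |H_2(Z)|, Eq. (s-equ4), for a (t+2)-subset Z of [n]
    return (t + 2) * C(n - t - 2, k - t - 1) + C(n - t - 2, k - t - 2)
\end{verbatim}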
Let
\begin{align*}
f(n,k,t)=(k-t){n-t-1\choose k-t-1}-{k-t\choose 2}{n-t-2\choose k-t-2}.
\end{align*}
\begin{lemma}\label{s-lem3}
Let $1\leq t\leq k-2$ and $2k<n$. Then the following hold.
\begin{itemize}
\item[{\rm(i)}] $h_1(t,k,k+1)>h_1(t,k,k+2)>\cdots>h_1(t,k,2k-t).$
\item[{\rm(ii)}] $\min\{h_1(t,k,2k-t),\ h_1(t,k,n)\}\geq f(n,k,t).$
\end{itemize}
\end{lemma}
\proof (i)\quad
For any $c\in\{k+1,k+2,\ldots,2k-t-1\}$, we have
$$
h_1(t,k,c)-h_1(t,k,c+1)={n-c-1\choose 2k-c-t}-t=\prod_{i=0}^{2k-c-t-1}\frac{n-c-1-i}{2k-c-t-i}-t.
$$
Observe that $\frac{n-c-1-i}{2k-c-t-i}>1$ for any $i\in\{0,1,\ldots,2k-c-t-1\},$ and $\frac{n-c-1-i}{2k-c-t-i}=n-2k+t>t$ when $i=2k-c-t-1$. Hence, we have $h_1(t,k,c)>h_1(t,k,c+1)$ for any $c\in\{k+1,k+2,\ldots,2k-t-1\}$, and (i) holds.
\
\medskip
\
(ii)\quad Let $X$ and $M$ be as in Family I. For any $i\in\{t,t+1,\ldots,k\}$, denote $
\mathcal{A}_i(X,M)=\left\{F\subseteq [n]\mid X\subseteq F,\ |F|=k,\ |F\cap M|=i\right\}$ and
$$
\mathcal{L}_i(X,M)=\left\{(I,F)\in{[n]\choose i}\times{[n]\choose k}\mid X\subseteq I\subseteq M,\ I\subseteq F \right\}.
$$
Double counting $|\mathcal{L}_i(X,M)|$, we obtain
\begin{align*}
|\mathcal{L}_i(X,M)|=\sum_{j=i}^k\left|\mathcal{A}_j(X,M)\right|\cdot{j-t\choose i-t}={k-t\choose i-t}{n-i\choose k-i}.
\end{align*}
Since $\mathcal{A}(X,M)=\cup_{j=t+1}^k\mathcal{A}_j(X,M)$ and
$$
|\mathcal{L}_{t+1}(X,M)|=\sum_{j=t+1}^k\left|\mathcal{A}_j(X,M)\right|+\sum_{j=t+2}^k\left|\mathcal{A}_j(X,M)\right|\cdot\left(j-t-1\right),
$$
we obtain
\begin{align*}
|\mathcal{L}_{t+1}(X,M)|=&(k-t){n-t-1\choose k-t-1}\leq|\mathcal{A}(X,M)|+\sum_{j=t+2}^k\left|\mathcal{A}_j(X,M)\right|\cdot {j-t\choose 2}\\
=&|\mathcal{A}(X,M)|+|\mathcal{L}_{t+2}(X,M)|=|\mathcal{A}(X,M)|+{k-t\choose 2}{n-t-2\choose k-t-2},
\end{align*}
which implies that $|\mathcal{A}(X,M)|\geq f(n,k,t)$. From the construction of $\mathcal{H}_1(X,M,C)$, (ii) follows.
$\qed$
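Reusing \texttt{h1} and \texttt{C} from the sketch after Lemma~\ref{s-lem1}, both parts of Lemma~\ref{s-lem3} can be spot-checked numerically on admissible triples $(n,k,t)$ with $2k<n$ (the chosen triples are arbitrary examples):
\begin{verbatim}
def f(n, k, t):
    # f(n, k, t) as defined before Lemma (s-lem3)
    return (k - t) * C(n - t - 1, k - t - 1) \
        - C(k - t, 2) * C(n - t - 2, k - t - 2)

for n, k, t in [(50, 5, 1), (200, 8, 3), (1000, 10, 5)]:
    vals = [h1(n, k, t, c) for c in range(k + 1, 2 * k - t + 1)]
    assert all(x > y for x, y in zip(vals, vals[1:]))                 # (i)
    assert min(h1(n, k, t, 2 * k - t), h1(n, k, t, n)) >= f(n, k, t)  # (ii)
\end{verbatim}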
\begin{lemma}\label{s-lem6}
Let $1\leq t\leq k-2$ and ${t+2\choose 2}(k-t+1)^2+t\leq n$. Then the following hold.
\begin{itemize}
\item[{\rm(i)}] If $1\leq t\leq \frac{k}{2}-\frac{3}{2}$, then $h_2(t+2)<f(n,k,t)$.
\item[{\rm(ii)}] If $\frac{k}{2}-\frac{3}{2}< t\leq k-2$, then $h_2(t+2)> f(n,k,t)$.
\end{itemize}
\end{lemma}
\proof Note that
\begin{align}\label{s-equ3-4}
h_2(t+2)=(t+2){n-t-1\choose k-t-1}-(t+1){n-t-2\choose k-t-2}.
\end{align}
Let $f_2(n,k,t)=(f(n,k,t)-h_2(t+2))/{n-t-2\choose k-t-2}$. Then
$$
f_2(n,k,t)=\frac{(k-2t-2)(n-t-1)}{k-t-1}-{k-t\choose 2}+t+1.
$$
\medskip
\noindent{\rm (i)}\quad From $1\leq t\leq \frac{k}{2}-\frac{3}{2}$ and ${t+2\choose 2}(k-t+1)^2+t\leq n$, we have
\begin{align*}
f_2(n,k,t)\geq \frac{(k-2t-2)(t+2)(t+1)(k-t+1)^2}{2(k-t-1)}-\frac{(k-t)(k-t-1)}{2}+t+\frac{t+1}{k-t-1}.
\end{align*}
Since $(k-2t-2)(t+2)(t+1)\geq (k-2t-2)+(t+2)=k-t$, we have $f_2(n,k,t)>0$ and (i) holds.
\medskip
\noindent{\rm (ii)}\quad If $t=\frac{k}{2}-1,$ then $f_2(n,k,t)=-t(t+1)/2<0.$ If $\frac{k}{2}-\frac{1}{2}\leq t\leq k-2,$ then $k-2t-2<0$ and
\begin{align*}
f_2(n,k,t)=&\frac{(k-2t-2)n+(t+1)^2}{k-t-1}-{k-t\choose 2}\\
\leq&\frac{(k-2t-2)(t+2)(t+1)(k-t+1)^2+2(t+1)^2}{2(k-t-1)}+\frac{(k-2t-2)t}{k-t-1}-{k-t\choose 2}<0.
\end{align*}
Therefore (ii) holds. $\qed$
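The threshold behaviour at $t\approx\frac{k}{2}-\frac{3}{2}$ in Lemma~\ref{s-lem6} can likewise be verified at the smallest admissible $n$, reusing \texttt{h2}, \texttt{f} and \texttt{C} from the previous sketches:
\begin{verbatim}
for k in range(4, 12):
    for t in range(1, k - 1):
        n = C(t + 2, 2) * (k - t + 1) ** 2 + t   # smallest allowed n
        if 2 * t <= k - 3:                        # t <= k/2 - 3/2
            assert h2(n, k, t) < f(n, k, t)       # part (i)
        else:                                     # k/2 - 3/2 < t <= k-2
            assert h2(n, k, t) > f(n, k, t)       # part (ii)
\end{verbatim}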
\begin{lemma}\label{s-lem2}
Let $1\leq t\leq k-2$ and ${t+2\choose 2}(k-t+1)^2+t\leq n$. Then the following hold.
\begin{itemize}
\item[{\rm(i)}] Suppose that $1\leq t\leq k-3$. Then $h_1(t,k,2k-t-2)>h_1(t,k,n)\geq h_1(t,k,2k-t-1),$ and equality holds only if $t=1$.
\item[{\rm(ii)}] Suppose that $t=k-2$. Then $h_1(t,k,n)\geq h_1(t,k,k+1)$, and equality holds only if $t=1$.
\end{itemize}
\end{lemma}
\proof From Lemma~\ref{s-lem1}, we have
\begin{align*}
h_1(t,k,2k-t-2)-h_1(t,k,n)=&\frac{1}{2}(n-2k+t+2)(n-2k-t+1),\\
h_1(t,k,n)-h_1(t,k,2k-t-1)=&(t-1)(n-2k+t+1).
\end{align*}
Then (i) holds from
\begin{align*}
n-2k-t\geq{t+2\choose 2}(k-t+1)^2-2k\geq 2(t+2)(k-t+1)-2k=2(t+1)(k-t)+4>0.
\end{align*}
If $t=k-2$, then $h_1(t,k,n)-h_1(t,k,k+1)=(n-k-1)(t-1)$ and (ii) holds. $\qed$
\begin{lemma}\label{s-lem2-2}
Let $1\leq t\leq k-2$ and $2k<n.$ Then the following hold.
\begin{itemize}
\item[{\rm(i)}] Suppose that $t=\frac{k}{2}-1$. Then $h_2(t+2)\geq h_1(t,k,k+2)$ and equality holds only if $t=1$; if $t=1$, or $t\geq 2$ and $n$ is sufficiently large, then $h_1(t,k,k+1)>h_2(t+2)$.
\item[{\rm(ii)}] Suppose that $\frac{k}{2}-\frac{1}{2}\leq t\leq k-2$. Then $h_2(t+2)\geq h_1(t,k,k+1)$ and equality holds only if $(t,k)=(1,3)$.
\end{itemize}
\end{lemma}
\proof Since ${n-t\choose k-t}=\sum_{i=0}^{k-t-1}{n-k+i\choose k-t-1}+{n-k\choose k-t}$, we have
\begin{align}\label{s-equ3-3}
h_1(t,k,c)=\sum_{i=0}^{k-t-1}{n-k+i\choose k-t-1}+{n-c\choose 2k-c-t}+t(c-k)
\end{align}
for any $c\in\{k+1,k+2,\ldots,2k-t,n\}$. By (\ref{s-equ3-4}) and (\ref{s-equ3-3}), we have
\begin{align*}
&h_2(t+2)-h_1(t,k,k+1)\\=&(t+1)\left({n-t-1\choose k-t-1}-{n-t-2\choose k-t-2}\right)-\sum_{i=-1}^{k-t-2}{n-k+i\choose k-t-1}-t\\
=&(t+1){n-t-2\choose k-t-1}-\sum_{i=-1}^{k-t-2}{n-k+i\choose k-t-1}-t\\
=&(2t+1-k){n-t-2\choose k-t-1}+\sum_{i=-1}^{k-t-3}\left({n-t-2\choose k-t-1}-{n-k+i\choose k-t-1}\right)-t.
\end{align*}
If $\frac{k}{2}-\frac{1}{2}\leq t\leq k-3$, then $h_2(t+2)-h_1(t,k,k+1)\geq {n-t-2\choose k-t-1}-{n-k-1\choose k-t-1}-t>0.$ If $1=t=k-2,$ then $h_2(t+2)-h_1(t,k,k+1)=0$. If $2\leq t=k-2,$ then $h_2(t+2)-h_1(t,k,k+1)=(t-1)(n-t-2)+1-t>0$.
Suppose that $\frac{k}{2}-1=t$. If $t=1$, then $h_2(t+2)-h_1(t,k,k+1)=\frac{1}{2}(n-5)(-n+8)<0.$ When $t\geq2$, note that $h_2(t+2)-h_1(t,k,k+1)$ is a polynomial in $n$ with negative leading coefficient. Then $h_2(t+2)-h_1(t,k,k+1)<0$ if $t\geq 2$ and $n$ is sufficiently large. By (\ref{s-equ3-4}) and (\ref{s-equ3-3}) again, we have
\begin{align*}
h_2(t+2)-h_1(t,k,k+2)=t{n-t-2\choose t+1}-\sum_{i=0}^{t-1}{n-2t-2+i\choose t+1}-{n-2t-4\choose t}-2t.
\end{align*}
If $t=1$, then $h_2(t+2)-h_1(t,k,k+2)=0$. If $t\geq 2,$ then
\begin{align*}
h_2(t+2)-h_1(t,k,k+2)>{n-t-2\choose t+1}-{n-2t-2\choose t+1}-{n-2t-4\choose t}-2t>0
\end{align*}
from ${n-t-3\choose t}>n-2t-2>2t$ and $
{n-t-2\choose t+1}={n-t-4\choose t+1}+{n-t-4\choose t}+{n-t-3\choose t}.
$ Hence, the desired result follows. $\qed$
\section{Upper bounds for non-trivial $t$-intersecting families}
In this section, we give some upper bounds on the sizes of the maximal non-trivial $t$-intersecting families. For any family $\mathcal{F}\subseteq{[n]\choose k}$ and any subset $S$ of $[n]$, define $
\mathcal{F}_S=\{F\in\mathcal{F}\mid S\subseteq F\}.$
\begin{lemma}\label{s-lem1-1}
Let $\mathcal{F}\subseteq{[n]\choose k}$ be a $t$-intersecting family and $S$ an $s$-subset of $[n]$, where $t-1\leq s\leq k-1.$ If there exists $F^\prime\in \mathcal{F}$ such that $|S\cap F^\prime|=r\leq t-1,$ then for each $i\in\{1,2,\ldots,t-r\}$ there exists an $(s+i)$-subset $T_i$ with $S\subseteq T_i$ such that $|\mathcal{F}_S|\leq {k-r\choose i}|\mathcal{F}_{T_i}|$.
\end{lemma}
\proof For any $i\in\{1,2,\ldots,t-r\}$, let
\begin{align*}
\mathcal{H}_i=\{H\in{S\cup F^\prime \choose s+i}\mid S\subseteq H\}.
\end{align*}
Observe that $|\mathcal{H}_i|={k-r\choose i}$. For any $F\in\mathcal{F}_S,$ since $\mathcal{F}$ is $t$-intersecting, we have $|F\cap F^\prime|\geq t$, implying that $|F\cap (S\cup F^\prime)|\geq s+t-r$ and there exists $H\in\mathcal{H}_i$ such that $H\subseteq F.$ Therefore $\mathcal{F}_S=\cup_{H\in\mathcal{H}_i}\mathcal{F}_H.$ Let $T_i$ be a subset in $\mathcal{H}_i$ such that $|\mathcal{F}_H|\leq|\mathcal{F}_{T_i}|$ for any $H\in\mathcal{H}_i.$ Thus $|\mathcal{F}_S|\leq {k-r\choose i}|\mathcal{F}_{T_i}|$ as desired. $\qed$
Since $|\mathcal{F}_T|\leq {n-|T|\choose k-|T|}$ for any subset $T$ of $[n]$, we can obtain the following lemma.
\begin{lemma}\label{s-lem2-1}
Let $\mathcal{F}\subseteq{[n]\choose k}$ be a $t$-intersecting family and $S$ an $s$-subset of $[n]$ with $t-1\leq s\leq k.$ If there exists $F^\prime\in \mathcal{F}$ such that $|S\cap F^\prime|=r\leq t-1,$ then $|\mathcal{F}_S|\leq {k-r\choose t-r}{n-s-t+r\choose k-s-t+r}$.
\end{lemma}
The following lemma gives some upper bounds on the size of maximal non-trivial $t$-intersecting families $\mathcal{F}$ with $\tau_t(\mathcal{F})=t+1.$
\begin{lemma}\label{s-prop1}
Let $n,\ k,\ t,\ \mathcal{F}$ and $\mathcal{T}$ be as in Assumption~\ref{s-hyp1}. Then the following hold.
\begin{itemize}
\item[{\rm (i)}] If $|\mathcal{T}|=1$, then $|\mathcal{F}|\leq {n-t-1\choose k-t-1}+(t+1)(k-t)(k-t+1){n-t-2\choose k-t-2}$.
\item[{\rm (ii)}] Suppose that $|\mathcal{T}|\geq 2$ and $ \mathcal{T}=\left\{T\in{M\choose t+1}\mid X\subseteq T\right\}$ for some $t$-subset $X$ and $l$-subset $M$ of $[n]$ with $X\subseteq M.$ Then
\begin{align}\label{s-equ9}
|\mathcal{F}|\leq &(l-t){n-t-1\choose k-t-1}+(k-l+1)(k-t+1){n-t-2\choose k-t-2}+t{n-l\choose k-l+1}.
\end{align}
Moreover, if $l=t+2,$ then
\begin{align}\label{s-equ10}
|\mathcal{F}|\leq 2{n-t-1\choose k-t-1}+(k-1)(k-t+1){n-t-2\choose k-t-2}.
\end{align}
\item[{\rm (iii)}] If $|\mathcal{T}|\geq 2$ and $\mathcal{T}={Z\choose t+1}$ for some $(t+2)$-subset $Z$ of $[n]$, then $|\mathcal{F}|=h_2(t+2).$
\end{itemize}
\end{lemma}
\proof (i) Let $T$ be the unique element in $\mathcal{T}$. Since $|T\cap F|\geq t$ for any $F\in\mathcal{F}$, we have
\begin{eqnarray}\label{s-equ7}
\mathcal{F}=\mathcal{F}_T\cup\left(\bigcup_{S\in{T\choose t}}(\mathcal{F}_S\setminus\mathcal{F}_T)\right).
\end{eqnarray}
We now give an upper bound on $|\mathcal{F}_S\setminus\mathcal{F}_T|$ for any fixed $S\in{T\choose t}.$ Since $\tau_t(\mathcal{F})=t+1,$ there exists an $F^\prime\in\mathcal{F}\setminus \mathcal{F}_S$ such that $|S\cap F^\prime|=t-1$ from $|F^\prime \cap T|\geq t.$ Then we have $T=(F^\prime \cap T)\cup S$ and $T\subseteq F^\prime \cup S.$ For any $F\in \mathcal{F}_S\setminus\mathcal{F}_T$, notice that $(F\cap F^\prime)\cup S\subseteq F\cap (F^\prime\cup S).$ Since $|F\cap F^\prime|\geq t$ and $|F\cap F^\prime\cap S|\leq t-1$, we have $|F\cap(F^\prime\cup S)|\geq t+1.$ Hence there exists a $(t+1)$-subset $H$ such that $H\neq T$, $S\subseteq H\subseteq S\cup F^\prime$ and $H\subseteq F.$ Therefore, we have
\begin{eqnarray}\label{s-equ8}
\mathcal{F}_S\setminus\mathcal{F}_T=\bigcup_{S\subseteq H\subseteq S\cup F^\prime, \atop H\neq T, |H|=t+1} \mathcal{F}_H.
\end{eqnarray}
Consider any $(t+1)$-subset $H$ of $[n]$ satisfying $H\neq T$ and $S\subseteq H\subseteq S\cup F^\prime.$ Since $T$ is the unique $(t+1)$-subset of $[n]$ such that $|T\cap F|\geq t$ for any $F\in\mathcal{F}$, there exists $F^{\prime\prime}$ such that $|H\cap F^{\prime\prime}|<t,$ which implies that $|H\cap F^{\prime\prime}|=t-1$ from $|H\cap T|=|S|=t$ and $|T\cap F^{\prime\prime}|\geq t.$ From Lemma~\ref{s-lem2-1}, we have $|\mathcal{F}_{H}|\leq(k-t+1){n-t-2\choose k-t-2}.$ Observe $|\mathcal{F}_T|\leq{n-t-1\choose k-t-1}$ and
\begin{eqnarray*}
\left|\left\{H\in{S\cup F^\prime\choose t+1}\mid S\subseteq H,\ H\neq T\right\}\right|=k-t.
\end{eqnarray*}
Therefore, from (\ref{s-equ7}) and (\ref{s-equ8}), we obtain
\begin{eqnarray*}
|\mathcal{F}|\leq {n-t-1\choose k-t-1}+(t+1)(k-t)(k-t+1){n-t-2\choose k-t-2},
\end{eqnarray*}
as desired.
\
\medskip
\
(ii) We will obtain the upper bound of $|\mathcal{F}|$ by establishing upper bounds on $|\mathcal{F}_X|$ and $|\mathcal{F}\setminus\mathcal{F}_X|$. Since $\tau_t(\mathcal{F})=t+1,$ we have $|F\cap X|\geq t-1$ for any $F\in\mathcal{F}$, and there exists $F^\prime\in\mathcal{F}$ such that $|X\cap F^\prime|=t-1.$ From the proof of Lemma~\ref{s-lem3-1}, we have $X\subseteq M\subseteq X\cup F^\prime.$
For any $F\in\mathcal{F}_X,$ we have $|F\cap(X\cup F^\prime)|\geq t+1$ from $X\subseteq F$ and $|F\cap F^\prime|\geq t.$ So
\begin{eqnarray}\label{s-equ11}
\mathcal{F}_X=\left(\bigcup_{X\subseteq H_1,\ H_1\in{M\choose t+1}}\mathcal{F}_{H_1}\right)\cup\left(\bigcup_{X\subseteq H_2,\ H_2\in{X\cup F^\prime\choose t+1}\setminus{M\choose t+1}}\mathcal{F}_{H_2}\right).
\end{eqnarray}
Since $|\mathcal{F}_{H_1}|\leq {n-(t+1)\choose k-(t+1)}$ for any $H_1\in{M\choose t+1}$, we have $|\bigcup_{X\subseteq H_1,\ H_1\in{M\choose t+1}}\mathcal{F}_{H_1}|\leq(l-t){n-(t+1)\choose k-(t+1)}.$ For any $H_2\in{X\cup F^\prime\choose t+1}\setminus{M\choose t+1}$ with $X\subseteq H_2,$ since $H_2\notin \mathcal{T},$ there exists $F^{\prime\prime}\in\mathcal{F}$ such that $|H_2\cap F^{\prime\prime}|<t,$ which implies that $|H_2\cap F^{\prime\prime}|=t-1$ from $|F^{\prime\prime}\cap X|\geq t-1.$ It follows that $|\mathcal{F}_{H_2}|\leq (k-t+1){n-(t+1)-1\choose k-(t+1)-1}$ from Lemma~\ref{s-lem2-1}. Notice that
\begin{eqnarray*}
\left|\left\{H_2\in{X\cup F^\prime\choose t+1}\setminus{M\choose t+1}\mid X\subseteq H_2\right\}\right|=k-l+1.
\end{eqnarray*}
Therefore, we have
\begin{eqnarray}\label{s-equ12}
|\mathcal{F}_X|\leq (l-t){n-t-1\choose k-t-1}+(k-l+1)(k-t+1){n-t-2\choose k-t-2}.
\end{eqnarray}
For any $F\in\mathcal{F}\setminus\mathcal{F}_X$ and any $T\in\mathcal{T}$, since $|F\cap X|=t-1$ and $X\nsubseteq F\cap T$, we have $T=(F\cap T)\cup X\subseteq F\cup X$. Then for any $F\in\mathcal{F}\setminus\mathcal{F}_X$ we have $M=\cup_{T\in\mathcal{T}}T\subseteq F\cup X,$ which implies that $|M\cap F|=l-1$. Hence, $\mathcal{F}\setminus\mathcal{F}_X\subseteq\{F\in{[n]\choose k}\mid |F\cap M|=l-1,\ X\nsubseteq F\},$ and
\begin{align}
|\mathcal{F}\setminus\mathcal{F}_X|\leq t{n-l\choose k-l+1}. \label{s-equ13}
\end{align}
Combining (\ref{s-equ12}) and (\ref{s-equ13}), we obtain (\ref{s-equ9}).
Now let us consider the case when $l=t+2$. From the discussion above, we have $|M\cap F|=l-1=t+1$ for any $F\in\mathcal{F}\setminus\mathcal{F}_X,$ which implies that $$\mathcal{F}\setminus\mathcal{F}_X\subseteq \bigcup_{X\nsubseteq L,\ L\in{M\choose t+1}} \mathcal{F}_L.$$
For any $L\in{M\choose t+1}$ with $X\nsubseteq L$, since $L\notin\mathcal{T}$ and $|F\cap M|\geq t$ for any $F\in\mathcal{F}$, there exists $F^\prime\in\mathcal{F}$ such that $|F^\prime\cap L|=t-1.$ Then $|\mathcal{F}_L|\leq (k-t+1){n-t-2\choose k-t-2}$ from Lemma~\ref{s-lem2-1}. Since the number of $(t+1)$-subsets $L$ of $M$ with $X\nsubseteq L$ is equal to $t$, we have
\begin{eqnarray}\label{s-equ14}
|\mathcal{F}\setminus\mathcal{F}_X|\leq t(k-t+1){n-t-2\choose k-t-2}.
\end{eqnarray}
Combining (\ref{s-equ12}) and (\ref{s-equ14}), we obtain (\ref{s-equ10}).
\
\medskip
\
(iii) By Lemma~\ref{s-prop4}, the desired result follows. $\qed$
\begin{lemma}\label{s-prop6}
Let $n$, $k$ and $t$ be integers with $1\leq t\leq k-2$ and ${t+2\choose 2}(k-t+1)^2+t\leq n$, and let $\mathcal{F}\subseteq {[n]\choose k}$ be a maximal $t$-intersecting family with $t+2\leq \tau_t(\mathcal{F})=m\leq k$. Then
\begin{align}\label{s-equ15-1}
|\mathcal{F}|\leq k^{m-t-2}(k-t+1)^2{m\choose t}{n-m\choose k-m}.
\end{align}
Moreover, we have
\begin{align}\label{s-equ16-1}
|\mathcal{F}|\leq (k-t+1)^2{t+2\choose 2}{n-t-2\choose k-t-2}.
\end{align}
\end{lemma}
\proof Let $T$ be an $m$-subset of $[n]$ which satisfies $|T\cap F|\geq t$ for any $F\in\mathcal{F}$. Then $\mathcal{F}=\cup_{H\in{T\choose t}}\mathcal{F}_H$ and there exists $H_1\in{T\choose t}$ such that $|\mathcal{F}|\leq {m\choose t}|\mathcal{F}_{H_1}|$. If $m\geq t+3,$ then by applying Lemma~\ref{s-lem1-1} repeatedly, there exist $H_2\in{[n]\choose t+1}$, $H_3\in{[n]\choose t+2}$,\ldots,$H_{m-t-1}\in{[n]\choose m-2}$ such that $H_i\subseteq H_{i+1}$ and $|\mathcal{F}_{H_i}|\leq k|\mathcal{F}_{H_{i+1}}|$ for each $i\in\{1,2,\ldots,m-t-2\}$. Thus there exists $H^\prime\in{[n]\choose m-2}$ such that
\begin{align*}
|\mathcal{F}|\leq{m\choose t}k^{m-t-2}|\mathcal{F}_{H^\prime}|.
\end{align*}
Since $\tau_t(\mathcal{F})>m-2$, the $(m-2)$-subset $H^\prime$ is not a $t$-cover of $\mathcal{F}$; hence there exists $F\in\mathcal{F}$ with $|F\cap H^\prime|\leq t-1.$
\medskip
\noindent\textbf{Case 1.} $|F\cap H^\prime|\leq t-2$ for every $F\in\mathcal{F}$ with $|F\cap H^\prime|\leq t-1$.
In this case, we have $t\geq 2$. For $s\in\{0,1,\ldots,t-2\}$, let
\begin{align*}
g(s)={k-s\choose t-s}{n-m+2-t+s\choose k-m+2-t+s}.
\end{align*}
Since $t+2\leq m$, $s\leq t-2$ and ${t+2\choose 2}(k-t+1)^2+t\leq n$, we have
\begin{align*}
&(t-s)(n-m+3-t+s)-(k-s)(k-m+3-t+s)\\
=&(k-t)m+(t-s)(n-k)-(k-t)(k+3-t+s)\\
>&(k-t)(t+2)+n-k-(k-t)(k+1)\\
=&n-t-(k-t)^2>0,
\end{align*}
which implies that
\begin{align*}
\frac{g(s+1)}{g(s)}=\frac{(t-s)(n-m+3-t+s)}{(k-s)(k-m+3-t+s)}>1
\end{align*}
for $s\in\{0,1,\ldots,t-3\}$. That is, the function $g(s)$ is increasing in $s$ on $\{0,1,\ldots,t-2\}$.
Let $F_1\in\mathcal{F}$ be a fixed $k$-subset with $|F_1\cap H^\prime|\leq t-1$, which exists by the above, and set $s_1=|F_1\cap H^\prime|$. By the assumption of this case, $0\leq s_1\leq t-2.$ By Lemma~\ref{s-lem2-1}, we have
$|\mathcal{F}_{H^\prime}|\leq g(s_1)\leq g(t-2)$, which implies that
\begin{align}\label{s-equ15}
|\mathcal{F}|\leq {m\choose t}k^{m-t-2}g(t-2)=k^{m-t-2}{m\choose t}{k-t+2\choose 2}{n-m\choose k-m}.
\end{align}
\medskip
\noindent\textbf{Case 2.} There exists $F_2\in\mathcal{F}\setminus\mathcal{F}_{H^\prime}$ such that $|F_2\cap H^\prime|=t-1$.
By Lemma~\ref{s-lem1-1}, there exists an $(m-1)$-subset $H^{\prime\prime}$ such that $|\mathcal{F}_{H^\prime}|\leq (k-t+1)|\mathcal{F}_{H^{\prime\prime}}|$. Hence, we have $|\mathcal{F}|\leq{m\choose t}k^{m-t-2}(k-t+1)|\mathcal{F}_{H^{\prime\prime}}|$. Since $\tau_t(\mathcal{F})>m-1$, there exists $F_3\in\mathcal{F}$ such that $|F_3\cap H^{\prime\prime}|\leq t-1$.
If $|F_3\cap H^{\prime\prime}|=t-1,$ then there exists an $m$-subset $H^{\prime\prime\prime}$ of $[n]$ with $H^{\prime\prime}\subseteq H^{\prime\prime\prime}$ such that $|\mathcal{F}_{H^{\prime\prime}}|\leq (k-t+1)|\mathcal{F}_{H^{\prime\prime\prime}}|$. Since $|\mathcal{F}_{H^{\prime\prime\prime}}|\leq{n-m\choose k-m}$, we have
\begin{align}\label{s-equ16}
|\mathcal{F}|\leq k^{m-t-2}(k-t+1)^2{m\choose t}{n-m\choose k-m}.
\end{align}
Suppose that $t\geq 2$ and $|F_3\cap H^{\prime\prime}|=s_2\leq t-2$. By Lemma~\ref{s-lem2-1}, we have
\begin{align*}
|\mathcal{F}_{H^{\prime\prime}}|\leq {k-s_2\choose t-s_2}{n-m+1-t+s_2\choose k-m+1-t+s_2}.
\end{align*}
As in Case 1, it is straightforward to verify that the function ${k-s\choose t-s}{n-m+1-t+s\choose k-m+1-t+s}$ is increasing in $s$ on $\{0,1,\ldots,t-2\}$. Hence
\begin{align}\label{s-equ17}
|\mathcal{F}|\leq k^{m-t-2}(k-t+1){m\choose t}{k-t+2\choose 2}{n-m-1\choose k-m-1}.
\end{align}
If $t=1$, then (\ref{s-equ15-1}) holds from (\ref{s-equ16}). If $t\geq 2$, from ${t+2\choose 2}(k-t+1)^2+t\leq n$, it is straightforward to verify that
\begin{align*}
(k-t+1)^2{n-m\choose k-m}\geq\max\left\{{k-t+2\choose 2}{n-m\choose k-m},\ (k-t+1){k-t+2\choose 2}{n-m-1\choose k-m-1}\right\},
\end{align*}
which together with (\ref{s-equ15}), (\ref{s-equ16}) and (\ref{s-equ17}) yields that (\ref{s-equ15-1}) holds.
Let $p(x)=(x-t+1)(n-x)-k(x+1)(k-x)$ for each $x\in\{t+1,t+2,\ldots,k\}$. Observe that
\begin{align*}
p(t+1)=&2(n-t-1)-k(t+2)(k-t-1)\\
\geq& (t+2)(t+1)(k-t+1)^2-k(t+2)(k-t+1)+2k(t+2)-2>0,
\end{align*}
and
\begin{align*}
p(x+1)-p(x)=&n-k^2+2k+t-2+(2k-2)x\\
\geq&(k-t+1)^2+t-k^2+2k+t-2+(2k-2)(t+1)>0
\end{align*}
for each $x\in\{t+1,t+2,\ldots,k-1\}.$ Hence $p(x)>0$ for any $x\in\{t+1,t+2,\ldots,k\}$. Let
\begin{align*}
q(y)=k^{y-t-2}{y\choose t}{n-y\choose k-y}
\end{align*}
for each $y\in\{t+2,t+3,\ldots,k\}$. From ${t+2\choose 2}(k-t+1)^2+t\leq n$ and $p(x)>0$ for any $x\in\{t+1,t+2,\ldots,k\}$, we have
\begin{align*}
\frac{q(y)}{q(y+1)}=\frac{(y-t+1)(n-y)}{k(y+1)(k-y)}>1
\end{align*}
for any $y\in\{t+2,t+3,\ldots,k-1\}$. That is, the function $q(y)$ is decreasing in $y$ on $\{t+2,t+3,\ldots,k\}$. This together with (\ref{s-equ15-1}) yields (\ref{s-equ16-1}). $\qed$
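As a machine check of this last step, the following lines (again reusing \texttt{C} from the earlier sketch) confirm that $q(y)$ is strictly decreasing on $\{t+2,\ldots,k\}$ for small parameters satisfying the hypothesis of the lemma:
\begin{verbatim}
def q(n, k, t, y):
    # q(y) = k^(y-t-2) * C(y, t) * C(n-y, k-y), from the proof above
    return k ** (y - t - 2) * C(y, t) * C(n - y, k - y)

for k in range(4, 10):
    for t in range(1, k - 1):
        n = C(t + 2, 2) * (k - t + 1) ** 2 + t
        qs = [q(n, k, t, y) for y in range(t + 2, k + 1)]
        assert all(a > b for a, b in zip(qs, qs[1:]))
\end{verbatim}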
\section{The proof of Theorem~\ref{s-main-1}}
Let $\mathcal{F}$ be any maximal non-trivial $t$-intersecting family which is not given in Theorem~\ref{s-main-1}. Set $f_3(n,k,t)=\left(f(n,k,t)-|\mathcal{F}|\right)/{n-t-2\choose k-t-2}.$ It suffices to prove that $f(n,k,t)>|\mathcal{F}|$, or equivalently that $f_3(n,k,t)>0$.
\noindent\textbf{Case 1.}\quad $\tau_t(\mathcal{F})=t+1.$
\
\medskip
\
Let $\mathcal{T}$ be the set of all $(t+1)$-subsets $T$ of $[n]$ which satisfies $|T\cap F|\geq t$ for any $F\in\mathcal{F}$.
Suppose that $|\mathcal{T}|=1$. Since $n\geq{t+2\choose 2}(k-t+1)^2+t$, by Lemma~\ref{s-prop1} (i), we have
$$
f_3(n,k,t)\geq n-t-1-{k-t\choose 2}-(t+1)(k-t)(k-t+1)>0.
$$
Suppose that $|\mathcal{T}|\geq 2$ and $\tau_t(\mathcal{T})=t$. Assume that $l=t+2$. By Lemma~\ref{s-prop1} (ii), we have
\begin{align*}
f_3(n,k,t)\geq \frac{(k-t-2)(n-t-1)}{k-t-1}-{k-t\choose 2}-(k-1)(k-t+1).
\end{align*}
Observe that $n\geq{t+2\choose 2}(k-t+1)^2+t$. If $k=t+3,$ then $f_3(n,k,t)\geq 4(k^2-4k+2)+0.5>0.$ If $k=t+4,$ then $f_3(n,k,t)\geq \frac{5}{3}(5k^2-28k+29)>0.$ If $k>t+4,$ then $n-t-1>(t+2)(k-t+1)(k-t-1)$ and
\begin{align*}
f_3(n,k,t)>&(k-t-2)(t+2)(k-t+1)-(k-1)(k-t+1)-(k-t)(k-t-1)/2\\
=&(k-t-4)(k-t+1)t+(k-3)(k-t+1)-(k-t)(k-t-1)/2\\
=&(k-t-4)(k-t+1)t+(k-t)(k+t-5)/2+(k-3)>0.
\end{align*}
Assume that $t+2<l<k$. Since $\max\left\{{t+2\choose 2},\frac{k-t+2}{2}\right\}\geq \frac{1}{k-t}{t+2\choose 2}+\frac{(k-t-1)(k-t+2)}{2(k-t)}$ and ${n-l\choose k-l+1}<{n-t-2\choose k-t-2}$, by Lemma~\ref{s-prop1}~(ii) we obtain
\begin{align*}
f_3(n,k,t)>& \frac{(k-l)(n-t-1)}{k-t-1}-{k-t\choose 2}-(k-l+1)(k-t+1)-t\\
\geq& \frac{n-t-1}{k-t-1}-{k-t\choose 2}-2(k-t+1)-t\\
\geq & \frac{(k-t+1)^2}{(k-t)(k-t-1)}{t+2\choose 2}-t-1+\frac{(k-t+2)(k-t+1)^2}{2(k-t)}-{k-t\choose 2}-2(k-t+1)\\
>&1+\frac{(k-t+2)(k-t+1)}{2}-{k-t\choose 2}-2(k-t+1)\geq0.
\end{align*}
Suppose $|\mathcal{T}|\geq 2$ and $\tau_t(\mathcal{T})=t+1$. By Lemmas~\ref{s-prop4} and \ref{s-lem6}, we have $f(n,k,t)>|\mathcal{F}|$ if $1\leq t\leq \frac{k}{2}-\frac{3}{2}$.
\
\medskip
\
\noindent\textbf{Case 2.}\quad $t+2\leq\tau_t(\mathcal{F})\leq k.$
\
\medskip
\
Since
\begin{align*}
\max\left\{{t+2\choose 2},\ \frac{k-t+2}{2}\right\}\geq \frac{k-t-1}{k-t}{t+2\choose 2}+\frac{1}{k-t}\cdot\frac{k-t+2}{2},
\end{align*}
by Lemma~\ref{s-prop6}, we have
\begin{align*}
f_3(n,k,t)\geq&\frac{(k-t)(n-t-1)}{k-t-1}-{k-t\choose 2}-(k-t+1)^2{t+2\choose 2}\\
\geq&\frac{(k-t+1)^2(k-t+2)-2(k-t)}{2(k-t-1)}-{k-t\choose 2}>0.
\end{align*}
Hence the desired result follows. $\qed$
\section*{Acknowledgement}
This research is supported by NSFC (11671043) and NSF of Hebei Province (A2019205092).
\addcontentsline{toc}{chapter}{Bibliography}
\section{Introduction\label{introduction}}
The existence of two classes of gamma-ray bursts (GRBs) differing in
observed durations and spectral properties has been established for some
time
\citep[e.g.][]{1981Ap&SS..80..119M,1984Natur.308..434N,1992AIPC..265....3H}.
These populations were quantified using the Burst and Transient Source
Experiment (BATSE), which showed a bimodal distribution in the
durations of GRBs well fit by two lognormal functions
\citep{1994MNRAS.271..662M}, with the divide at $\sim2\,$s
\citep{1993ApJ...413L.101K}.
In addition, there is also contamination in the short burst class from soft gamma-ray repeaters \citep[e.g.][]{2008arXiv0802.0008C}.
It is generally accepted that long GRBs
have their origins in massive star progenitors because of their association with core-collapse supernovae
\citep[SNe,][]{1998Natur.395..670G,2003Natur.423..847H,2003ApJ...591L..17S,2004ApJ...609L...5M,2006Natur.442.1011P,2006ARA&A..44..507W}
and
occurrence in star-forming galaxies \citep{2002AJ....123.1111B} and in
highly star-forming regions therein \citep{2006Natur.441..463F}.
The origin of short GRBs is still open, with mergers of compact
objects being the leading concept
\citep[e.g.][]{2005Natur.437..851G,2005Natur.437..859H,2005Natur.437..845F}.
The detection of
the spectroscopic signatures of SNe in the 4 nearest GRBs and the detection of
bumps consistent with SNe in the lightcurves of most low-redshift bursts
seemed to confirm the paradigm that all long GRBs would be associated with
SNe \citep{2004ApJ...609..952Z,2006ARA&A..44..507W}, as predicted by the
collapsar model of long GRBs \citep{1999A&AS..138..499W}.
Doubts were cast on
this paradigm
by the non-detection of SNe in
two nearby GRBs, GRB\,060505 at $z=0.089$ \citep{2006GCN..5123....1O}
and
GRB\,060614 at $z=0.125$ \citep{Price06_GCN5275}
discovered by the \emph{Swift} \citep{gehrels:2004} Burst Alert Telescope
\citep[BAT,][]{2005SSRv..120..143B}. Due to their long durations, $T_{90}$ of $4\pm1$\,s and $102\pm5$\,s respectively
\citep{2006GCN..5142....1H,Barthelmy06_GCN5256},
SN searches were initiated.
Although a supernova $\sim$100
times fainter than SN1998bw would have been detected, none was found in
either case
\citep{2006Natur.444.1047F,2006Natur.444.1053G,2006Natur.444.1050D,2007ApJ...662.1129O}.
It was suggested that they were short bursts where
the lack of SNe would not be surprising, as short GRBs
have not shown
SN emission
\citep{2005Natur.437..859H,2005Natur.437..845F,2006ApJ...638..354B,2006A&A...447L...5C}.
The classification of GRBs with durations close to the
long/short division is problematic. The argument that GRB\,060614 was a
``short GRB" rests on its extended soft emission component
and on its negligible spectral lag
\citep{2006Natur.444.1044G,2007ApJ...655L..25Z}. When the latter is combined with its
relatively low luminosity, it violates the lag-luminosity relation found by
\citet{2000ApJ...534..248N} for long GRBs.
If the lack of a SN in GRB\,060505 is to be attributed to it being a short burst, it should also have a negligible lag.
We present the spectral lag analysis of the prompt emission of
GRB\,060505 using data from the \emph{Suzaku} Wide Area
Monitor (WAM) and \emph{Swift}-BAT.
\section{Observations and data reduction}
\begin{figure}[t]
\begin{centering}
\epsscale{1}
\plotone{f1.eps}
\caption{The lightcurves of GRB\,060505 with the BAT instrument on
$\emph{Swift}$ (a) and from the WAM0 detector on $\emph{Suzaku}$ (b-e), all at 100\,ms resolution.
Time is relative to the BAT trigger.
A precursor is visible in the lowest energy WAM channel.
\label{fig:BATlightcurves}\label{fig:WAWlightcurves}}
\end{centering}
\end{figure}
GRB\,060505 was detected by the BAT instrument on \emph{Swift}.
The fluence
is (6.2$\pm$1.1)$\times 10^{-7}$ ergs cm$^{-2}$ (15-150\,keV) and the spectrum is fit by a power law with index 1.3$\pm0.3$
\citep{2006GCN..5142....1H}. The trigger fell below the
6.5$\sigma$ threshold for an automatic slew but
ground analysis found an 8.5$\sigma$ excess \citep{2006GCN..5076....1P}.
\emph{Swift} was
repointed at T$_0$+0.6~days and a weak fading X-ray source was identified
\citep{2006GCN..5114....1C}.
We obtained the publicly available data for GRB\,060505 from
the $\emph{Swift}$
archive\footnote{http://swift.gsfc.nasa.gov/docs/swift/archive/}. A mask
weighted lightcurve was generated using the BAT data analysis
tools. The available data contained only 10\,s of event data and the
lightcurve is presented in Fig.~\ref{fig:BATlightcurves}.
$\emph{Swift}$ was approaching the South Atlantic Anomaly when the burst occurred and was subject to a higher than normal background level. Additionally, the partial
coding was only 11\% \citep{2006GCN..5142....1H} meaning that the off-axis
angle with respect to the $\emph{Swift}$ axis was almost 50$\degr$,
substantially reducing the effective area of the instrument. Splitting the
data into energy channels for spectral lag analysis
further
reduces the weak signal.
The WAM is the anti-coincidence shield (ACS) of the Hard X-ray Detector on $\emph{Suzaku}$
\citep{2006SPIE.6266E.122Y} and it also triggered on GRB\,060505.
The WAM consists of four identical walls which act as
individual detectors (WAM0 to WAM3). The detectors have a large effective
area \citep{2006SPIE.6266E.122Y}. They are sensitive in the
energy range 50--5000\,keV, and although the WAM's primary role is to act as
an ACS, WAM is also used as an all-sky monitor for
GRBs. An automated triggering system operates on board
\citep{2006AIPC..836..201Y} and the lightcurves are publicly available at
15.6\,ms resolution in 4 rough energy bands 50--110, 110--240, 240--520,
520--5000\,keV\footnote{http://www.astro.isas.ac.jp/suzaku/HXD-WAM/WAM/}.
The lightcurves in the four energy channels from the WAM0 detector
at 100\,ms resolution are presented in Fig.~\ref{fig:WAWlightcurves}. The T$_{\rm 90}$ of
GRB\,060505 was $\sim$4.8~s in the
50--5000\,keV band$^{8}$.
The burst struck the WAM detector at an angle
such that principally WAM0, but also to a lesser extent WAM3, detected the burst.
The on-axis effective areas of the BAT and WAM instruments are shown in Fig.~2 of
\citet{2006SPIE.6266E.122Y} and the effective area of WAM only exceeds that
of BAT above 300\,keV. However, it should be remembered that GRB\,060505
occurred $\sim50\degr$ off-axis in BAT
and that the effective area of BAT also drops rapidly above 100\,keV.
These factors result in a more significant detection of
GRB\,060505
by WAM than BAT and therefore we rely primarily on the WAM data for our
analysis. However, we show that the results are consistent with those obtained
from the BAT data.
\section{Data Analysis and Results}
The spectral lag was calculated by
cross-correlating the lightcurves in different energy channels
\citep[e.g.][]{1997ApJ...486..928B,2000ApJ...534..248N,Foley2007}. The
cross-correlation function (CCF) was fit with a fourth order polynomial and
the quoted lag value is the peak of this function. Statistical errors were
calculated using a bootstrap method as described in
\citet{2000ApJ...534..248N}.
This involves adding
Poissonian noise based on the observed counts to the
lightcurves in the different energy channels and re-computing
the CCF in 100 realisations for each burst.
The
50th ranked value is the mean lag and the 16th and 84th
ranked values represent $\pm1\sigma$.
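For concreteness, a minimal sketch of this procedure is given below. It is our illustration rather than the code used for this analysis: it assumes two non-negative count lightcurves binned on a common time grid, omits the lightcurve thresholding described below, and the sign convention of the returned lag (here intended to be positive when the hard channel leads) should be calibrated with an artificially shifted test pulse.
\begin{verbatim}
import numpy as np

def ccf_lag(soft, hard, dt, half_window=2.5):
    # Cross-correlate the two channels and fit a 4th-order polynomial
    # around the CCF peak; the lag is the peak of the fit (in seconds).
    a, b = soft - soft.mean(), hard - hard.mean()
    ccf = np.correlate(a, b, mode="full")
    lags = (np.arange(ccf.size) - (a.size - 1)) * dt
    keep = np.abs(lags - lags[ccf.argmax()]) <= half_window
    coef = np.polyfit(lags[keep], ccf[keep], 4)
    grid = np.linspace(lags[keep].min(), lags[keep].max(), 2001)
    return grid[np.polyval(coef, grid).argmax()]

def bootstrap_lag(soft, hard, dt, n=100, seed=None):
    # Median lag and 16th/84th-percentile bounds from n realisations
    # with added Poisson noise, following Norris et al. (2000).
    rng = np.random.default_rng(seed)
    lags = np.sort([ccf_lag(rng.poisson(soft).astype(float),
                            rng.poisson(hard).astype(float), dt)
                    for _ in range(n)])
    return lags[n // 2], lags[int(0.16 * n)], lags[int(0.84 * n)]
\end{verbatim}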
The spectral lag was determined between the 50--110 and 110--240\,keV ($\tau_{110-240,50-110}$)
energy bands for the $\emph{Suzaku}$ WAM detectors over a range of temporal resolutions
(15.6, 31.2, 46.8, 62.4, 78 and 100\,ms). The lightcurves were correlated from $-$4 to +4\,s and
the CCF was fit over a range of $\sim$5\,s.
A lightcurve threshold of 10\% (30\%) is
applied, which means that only data with at least one-tenth (three-tenths) of the peak
count rate is used to calculate the lag, thus reducing the background.
The spectral lag values obtained from WAM0 at the six time resolutions specified above at 10\% threshold agree within the uncertainties and the average value is $0.36\pm{0.05}$\,s.
Applying the 30\% threshold to the same lightcurves increases the average value to 0.44$\pm{0.06}$\,s and all values are within $1\sigma$ of those obtained at the 10\% threshold except at 62.4\,ms and 78\,ms, which are consistent at the $2\sigma$ level.
Above the 50\% threshold the results are unreliable
and the lag is not accurately reproduced.
The burst is detected with lower significance in WAM3 and does not allow an accurate
determination of the lag. We add the signal from WAM3 to that of WAM0 to
test if this gives a consistent result.
The results are consistent with WAM0 alone within $\sim1\sigma$, except at 100\,ms
resolution where the WAM0+3 lag is larger but is consistent at
$\sim3\sigma$ with the WAM0 results (10\% threshold).
The average value obtained from the sum of the WAM0 and WAM3 lightcurves
is 0.42$\pm$0.05\,s and 0.47$^{+0.05}_{-0.06}$\,s at 10\% and 30\% thresholds respectively.
The cross correlation data and fit for WAM0 at 100\,ms is presented in
Fig.~\ref{fig:ccfsWAM}\,a and is inconsistent with the negligible lag expected for
a short burst.
A precursor is evident in the WAM data at $-8$\,s, and including it in the lag analysis over a wider time range results in a consistent lag measurement of 0.47$\pm0.06$\,s.
We note that precursors are not normally detected in short bursts.
\begin{figure}
\begin{centering}
\epsscale{1}
\plotone{f2.eps}
\caption{The cross-correlation data and fit
for a)
the WAM0 data at 100\,ms between the 110-240 and 50-110\,keV energy bands
and b)
the BAT data at 100\,ms
between the 50--100 and 25--50\,keV bands. A 4$^{\rm th}$
order polynomial fit to the data is shown. The vertical lines
denote zero lag. GRB\,060505 is clearly inconsistent with zero
lag.} \label{fig:ccfsWAM}
\end{centering}
\end{figure}
The lag was also measured between the 25--50\,keV and 50--100\,keV energy
bands ($\tau_{50-100,25-50}$) at 100\,ms using the BAT data with the techniques outlined above (Fig.~\ref{fig:ccfsWAM}\,b). The
lightcurve was too weak to determine the lag at finer time resolution. The
spectral lag value of 0.4$\pm$0.1\,s measured using the BAT data is
consistent within $1\,\sigma$ with that obtained from the WAM0 and WAM0+3.
In order to establish the robustness of our result, we determined the lag for 16 additional
GRBs detected by both BAT and WAM, for which the lightcurve data were sufficient for lag analysis in both instruments.
The analysis was performed in a similar manner to GRB\,060505. The derived lags ranged from $-3$\,ms to 0.94\,s in the WAM and 0 to 0.86\,s in the BAT.
The lags are compatible considering the differing instruments and off-axis angles, energy ranges and the spectra of the bursts, except in 2 cases where the BAT lag was significantly longer. The sample consisted of 12 long and 4 short bursts.
Crucially, the short bursts were always found to have negligible lag in both instruments. This shows that
our analysis is sensitive to short lags.
\section{Discussion}
\subsection{Spectral lags}
\begin{figure}[t]
\epsscale{1}
\plotone{f3.eps}
\caption{
The lag-luminosity relation using data (diamonds) and fit from \citet{2000ApJ...534..248N}.
In addition, GRB\,060505 (open circle), GRB\,060614 \citep[filled square:][]{2006Natur.444.1044G}, short GRBs (open squares)
and 3 GRBs associated with SNe (filled-circles) are included. The lag values are from the following: GRB\,060218: \citet{2006ApJ...653L..81L},
GRB\,031203: \citet{Foley2007}, GRB\,980425: \citet{2002ApJ...579..386N}.
The spectral lag of GRB 060505 is significantly longer than those measured for short GRBs, and it falls on the lag-luminosity plot in a position similar to that of some SN-GRBs.
The diamond and filled-circle lag values are determined between the 25--50 and 100--300\,keV energy ranges. Lags for GRB\,060614 and the short bursts are measured between the 15--25 and 50--100\,keV ranges. No $k$-correction is applied. }
\label{fig:lag-lum}
\end{figure}
The spectral lags in GRBs have been discussed by many authors
\citep[e.g.][]{1997ApJ...486..928B,2000ApJ...534..248N,2000HEAD....5.3402N,2006ApJ...643..266N,2006ApJ...646.1086H,2007ApJS..169...62H,Foley2007}.
Using BATSE data, \citet{2002ApJ...579..386N} and
\citet{2006ApJ...643..266N} found that long duration GRBs had both
measurable and zero lags but that short GRBs had lags around zero.
\citet{2006ApJ...643..266N} calculated the lags of 260 short GRBs using
BATSE data and found that 90--95\% of the values were consistent with zero
and suggest that bursts with positive lag may result from contamination by the
long GRB class.
It was also argued that if short GRBs had lags proportionally as large as
long GRBs, such lags would be detectable, i.e.\ that this result was not
simply an effect of the duration of short bursts. This is not to say
that bursts with short lags are necessarily in the short GRB class. In the
sample of published lags of BATSE data by \citet{2007ApJS..169...62H}, 1427
bursts have $T_{\rm 90}\geq2$\,s and a measured lag ($\tau_{100-50,20-50}$).
Of these bursts 214
have lags in the range from $-10$ to $+10$\,ms (79 with uncertainties of $\pm10$\,ms) and 348 have lags in the range from $-20$ to $+20$\,ms (217 with
uncertainties of $\pm20$\,ms), showing that there are many
long GRBs with very short lag.
In summary,
long bursts are expected to have predominantly positive lags ranging from zero to several seconds.
Short GRBs have almost exclusively negligible lags.
However, it is not possible to exclude that GRB\,060505 could be an outlier: i.e. a short duration GRB with a positive lag
or due to a process which does not fit into the lag classification scheme.
There have been
difficulties in classifying a number of bursts
and the lag has been used to discriminate
in a
number of cases \citep[e.g.][]{2006astro.ph..5570D}. For example, GRB\,060912A has a T$_{\rm 90}$ of $\sim6$\,s and
was initially thought to have occurred
in a nearby elliptical galaxy; however, \citet{2007MNRAS.378.1439L} recently
found that it was more likely to come from a star forming galaxy at
$z=0.937$ and report a lag ($\tau_{100-350,25-50}$) of $83\pm43$\,ms.
Various strategies have been proposed to distinguish bursts more effectively
than the duration alone
\citep[e.g.][]{2000ApJ...534..248N,2006astro.ph..5570D,2007ApJ...655L..25Z}.
However, none have seen widespread adoption.
\subsection{What was the progenitor of GRB\,060505?}
It was argued that GRB\,060505 was probably part of the tail of the short
burst population and connected to mergers of compact objects.
At a redshift of $z$=0.089, GRB\,060505 has an isotropic peak
luminosity of $\sim9\times10^{47}\,\rm{erg}\,\rm{s}^{-1}$ (50-300\,keV).
Having a low luminosity and relatively long lag of 0.36$\pm{0.05}$\,s, GRB\,060505 falls
below the lag-luminosity relation of~\citet{2000ApJ...534..248N}
as shown in Fig.~\ref{fig:lag-lum}.
The spectral lag of GRB 060505 is significantly longer than those measured for short GRBs and GRB\,060614
and it falls on the lag-luminosity plot in a position similar to that of some (but not all) SN-GRBs (e.g. GRB\,031203).
\citet{2007ApJ...662.1129O} argue that the simplest interpretation for GRB\,060505
is that it is
related to a merger event rather than a short-lived massive star and point out
that the maximum allowable distance of GRB\,060505
from a star-forming knot is consistent with the shortest merger timescales.
\citet{2007astro.ph..3407T} claim that GRB\,060505 occurred in a star-forming region of the host galaxy which resembles long GRB host galaxies, and argue for a massive star origin for this event. It has
also been argued that the host galaxy of GRB\,060505 is more
similar to a short burst host in terms of metallicity and ionisation state
\citep{2007ApJ...667L.121L}.
However, their short GRB host region in the emission line ratio
diagram is based on only two burst host galaxies, one of which is that of
GRB 050416A, which has photometric evidence for an associated SN
\citep{2007ApJ...661..982S} and is argued to be a long GRB due to its spectral
softness and location on the Amati plot \citep{2006ApJ...636L..73S}.
The host galaxy studies alone do not resolve the classification issue for
GRB\,060505.
The optical luminosity at 12 hours in the source frame is similar to
those of short GRB afterglows, but optical luminosity alone is also
not a valid classification tool \citep{Kann2008}.
In our opinion, the lag measurement suggests that this burst is similar to long GRBs implying a massive star progenitor, despite the lack of a SN detection.
It has been argued that the absence of a SN signature in GRB\,060505
is evidence of a new, quiet endpoint for some massive
stars \citep{2006Natur.444.1047F,2007astro.ph..3678W}.
The existence of a SN was a feature of the early collapsar model. However,
the complete absence of a SN may be expected where the $^{56}$Ni does not
have sufficient impetus to escape the black hole
\citep{2003ApJ...591..288H,2006ApJ...650.1028F} or
in jet-induced explosions with narrow jets when the deposited energy is small
\citep{2007astro.ph..2472N,2007ApJ...657L..77T}.
Progenitor stars with
relatively low angular momentum could also produce GRBs without supernovae
\citep{2003AIPC..662..202M}.
These seem attractive
explanations at least for GRB\,060505.
In the absence of a GRB explosion, the detection of such quiet-death massive stars, if they exist,
is a challenge for current instrumentation \citep[e.g.][]{2008arXiv0802.0456K}.
\acknowledgements
SMB acknowledges an EU
Marie Curie Fellowship in Framework 6.
The Dark Cosmology Centre is funded by the DNRF. DM acknowledges IDA for support.
\section{Introduction\label{Sec1}}
The Casimir force \cite{Cas48} between uncharged metallic plates
attracts considerable attention as a macroscopic manifestation of
the quantum vacuum \cite{Mil94,Mos97,Mil01,Kar99,Bor01}. With the
development of microtechnologies, which routinely control the
separation between bodies smaller than 1~$\mu$m, the force became a
subject of systematic experimental investigation. Modern precision
experiments have been performed using different techniques such as
torsion pendulum \cite{Lam97}, atomic force microscope (AFM) \cite
{Moh98,Har00}, microelectromechanical systems (MEMS) \cite
{Cha01,Dec03a,Dec03b,Dec05,Ian04,Ian05} and different geometrical
configurations: sphere-plate \cite{Lam97,Har00,Dec03b}, plate-plate
\cite{Bre02} and crossed cylinders \cite{Ede00}. The relative
experimental precision of the most precise of these experiments is
estimated to be about 0.5\% for the recent MEMS measurement
\cite{Dec05} and 1\% for the AFM experiments \cite{Har00,Cha01}.
In order to come to a valuable comparison between the experiments
and the theoretical predictions, one has to calculate the force
with a precision comparable to the experimental accuracy. This is a
real challenge to the theory because the force is material, surface,
geometry and temperature dependent. Here we will only focus on the
material dependence, which is easy to treat on a level of some
percent precision but which will turn out difficult to tackle on a
high level of precision since different uncontrolled factors are
involved.
In its original form, the Casimir force per unit surface
\cite{Cas48}
\begin{equation}
F_{c}\left( L\right) =-\frac{\pi ^{2}}{240}\frac{\hbar c}{L^{4}}
\label{Fc}
\end{equation}
\noindent was calculated between ideal metals. It depends only on
the fundamental constants and the distance between the plates $L$.
The force between real materials differs significantly
from~(\ref{Fc}) for mirror separations smaller than 1~$\mu$m.
For mirrors of arbitrary material, which can be described by
reflection coefficients, the force per unit area can be written as
\cite{Lam00}:
\begin{eqnarray}
F&=& 2\sum_{\mu}\int \frac{\mathrm{d}^{2}\mathbf{k}}{4\pi ^{2}}
\int_{0}^{\infty }\frac{\mathrm{d}\zeta }{2\pi } \hbar\kappa
\frac{r_{\mu}\left[ i\zeta ,\mathbf{k} \right]^2 e^{-2\kappa
L}}{1-r_{\mu}
\left[ i\zeta ,\mathbf{k} \right]^2 e^{-2\kappa L}}\nonumber \\
&&\kappa=\sqrt{\mathbf{k}^{2}+ \frac{\zeta ^{2}}{c^2}} \label{Force}
\end{eqnarray}
\noindent where $r_{\mu}=(r_s,r_p)$ denotes the reflection
amplitude for a given polarization $\mu=s,\;p$
\begin{eqnarray}
r_{s} &=&-\frac{\sqrt{\mathbf{k}^{2}+\varepsilon\left( i\zeta\right)\frac{\zeta^{2}}{c^{2}}}-\kappa}{\sqrt{\mathbf{k}^{2}+\varepsilon\left( i\zeta\right)\frac{\zeta^{2}}{c^{2}}}+\kappa}
\nonumber \\
r_{p} &=&\frac{\sqrt{\mathbf{k}^{2}+\varepsilon\left( i\zeta\right)\frac{\zeta^{2}}{c^{2}}}-\kappa\,\varepsilon\left( i\zeta\right)}{\sqrt{\mathbf{k}^{2}+\varepsilon\left( i\zeta\right)\frac{\zeta^{2}}{c^{2}}}+\kappa\,\varepsilon\left( i\zeta\right)} \label{rThick}
\end{eqnarray}
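To make the structure of Eqs.~(\ref{Force}) and (\ref{rThick}) concrete, a minimal numerical sketch (ours, in SI units) is given below for two identical thick mirrors described by a permittivity $\varepsilon(i\zeta)$ on the imaginary frequency axis, discussed in the next paragraph. The finite integration cutoffs, the small lower limit in $\zeta$ and the plain nested quadrature are conveniences adequate only for a rough estimate.
\begin{verbatim}
import numpy as np
from scipy import integrate

hbar, c = 1.054571817e-34, 2.99792458e8        # SI units

def casimir_pressure(L, eps):
    # Force per unit area, Eq. (Force); eps(zeta) is the permittivity
    # at imaginary frequency zeta (rad/s).  Only r^2 enters the force,
    # so the overall signs of r_s and r_p are irrelevant here.
    def integrand(k, zeta):
        kappa = np.sqrt(k**2 + (zeta / c)**2)
        km = np.sqrt(k**2 + eps(zeta) * (zeta / c)**2)
        rs = (kappa - km) / (kappa + km)
        rp = (eps(zeta) * kappa - km) / (eps(zeta) * kappa + km)
        e = np.exp(-2 * kappa * L)
        s = sum(r**2 * e / (1 - r**2 * e) for r in (rs, rp))
        return hbar * kappa * k * s / (2 * np.pi**2)   # d2k = 2 pi k dk
    inner = lambda z: integrate.quad(integrand, 0, 30 / L, args=(z,))[0]
    # small lower limit avoids the Drude divergence at zeta = 0
    return integrate.quad(inner, 1e-3 * c / L, 30 * c / L)[0]

# Drude mirrors with the parameters discussed below (9.0 eV, 0.035 eV);
# ideal mirrors give pi^2*hbar*c/(240 L^4) ~ 13 Pa at L = 100 nm, and
# gold is expected to yield roughly half of that.
eV = 1.519e15                                   # rad/s per eV
eps = lambda z: 1 + (9.0 * eV)**2 / (z * (z + 0.035 * eV))
print(casimir_pressure(100e-9, eps))
\end{verbatim}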
The force between dielectric materials had first been derived by
Lifshitz~\cite {Lif56,LP9}. The material properties enter these
formulas via the dielectric function $\varepsilon \left( i\zeta
\right)$ at imaginary angular frequencies $\omega=i\zeta$, which
is related to the physical quantity $\varepsilon ^{\prime \prime
}\left( \omega \right)= \mathrm{Im}\left( \varepsilon \left( \omega
\right)\right) $ with the help of the dispersion relation
\begin{equation}
\varepsilon \left( i\zeta \right) -1=\frac{2}{\pi
}\int\limits_{0}^{\infty } d\omega\frac{\omega \varepsilon ^{\prime
\prime }\left( \omega \right) }{\omega ^{2}+\zeta ^{2}}.
\label{K-K}
\end{equation}
\noindent For metals $\varepsilon ^{\prime \prime }\left( \omega
\right)$ is large at low frequencies, thus the main contribution to
the integral in Eq. (\ref{K-K}) comes from the low frequencies even
if $\zeta $ corresponds to the visible frequency range. For this
reason the low-frequency behavior of $\varepsilon(\omega)$ is of
primary importance.
The Casimir force is often calculated using the optical data taken
from \cite{HB1}, which provides real and imaginary parts of the
dielectric function within some frequency range, typically between
0.1 and $10^4$~eV for the most commonly used metals, Au, Cu and Al,
corresponding to a frequency interval $[1.519\cdot 10^{14},1.519
\cdot10^{19}]$~rad/s (1~eV=$1.519 \cdot10^{15}$~rad/s \footnote{In
\protect{\cite{Lam00}} a conversion factor $1.537 \cdot
10^{15}$~rad/s was used, leading however to a negligible difference
in the Casimir force (well below 1\%).}). When the two plates are
separated by a distance $L$, one may introduce a characteristic
imaginary frequency $\zeta_{\rm ch}=c/2L$ of electromagnetic field
fluctuations in the gap. Fluctuations of frequency $\zeta \sim
\zeta _{\rm ch}$ give the dominant contribution to the Casimir
force. For example, for a plate separation of $L=100$~nm the
characteristic imaginary frequency is $\zeta _{\rm ch}=0.988$~eV.
Comparison with the frequency interval where optical data is
available shows that the high frequency data exceeds the
characteristic frequency by 3 orders of magnitude, which is
sufficient for the calculation of the Casimir force. However, in the
low frequency domain, optical data exists only down to frequencies
which are one order of magnitude below the characteristic frequency,
which is not sufficient to evaluate the Casimir force. Therefore for
frequencies lower than the lowest tabulated frequency, $\omega _{\rm
c}$, the data has to be extrapolated. This is typically done by a
Drude dielectric function
\begin{equation}
\varepsilon \left( \omega \right) =1-\frac{\omega _{\rm
p}^{2}}{\omega \left( \omega +i\omega _{\tau }\right) },
\label{Drude}
\end{equation}
\noindent which is determined by two parameters, the plasma
frequency $\omega _{\rm p}$ and the relaxation frequency $\omega
_{\tau }$.
Different procedures to get the Drude parameters have been discussed
in the literature. They may be estimated, for example, from
information in solid state physics or extracted from the optical
data at the lowest accessible frequencies. The exact values of the
Drude parameters are very important for the precise evaluation of
the force. Lambrecht and Reynaud \cite{Lam00} fixed the plasma
frequency using the relation
\begin{equation}
\omega _{\rm p}^{2}=\frac{Ne^{2}}{\varepsilon _{0}m_{e}^{\ast }},
\label{Omp}
\end{equation}
\noindent where $N$ is the number of conduction electrons per unit
volume, $e $ is the charge and $m_{e}^{\ast }$ is the effective mass
of the electron. The plasma frequency was evaluated using the bulk
density of Au, assuming that each atom gives one conduction electron
and that the effective mass coincides with the mass of the free
electron. The optical data at the lowest frequencies were then used
to estimate $\omega _{\tau }$ with the help of Eq. (\ref{Drude}). In
this way the plasma frequency $\omega _{\rm p}=9.0$~eV and the
relaxation frequency $\omega _{\tau }=0.035$~eV have been found.
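This estimate is easy to reproduce: with one conduction electron per atom and handbook values for the bulk density and molar mass of Au (our assumed inputs), Eq.~(\ref{Omp}) indeed gives $\omega_{\rm p}\approx 9.0$~eV, as the following lines show.
\begin{verbatim}
from scipy import constants as sc

rho, M = 19.3e3, 196.967e-3                  # kg/m^3, kg/mol (assumed)
N = rho / M * sc.N_A                         # ~ 5.9e28 electrons/m^3
omega_p = (N * sc.e**2 / (sc.epsilon_0 * sc.m_e)) ** 0.5
print(omega_p / 1.519e15)                    # ~ 9.0 (eV)
\end{verbatim}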
This procedure was largely adopted in subsequent works
\cite{Har00,Ede00,Cha01,Bre02,Dec03a}. However, on the example of
Cu, it was stressed in \cite{Lam00} that the optical data may vary
from one reference to another and a different choice of parameters
for the extrapolation procedure to low frequencies can influence the
Casimir force significantly.
Bostr\"om and Sernelius \cite{Bos00b} and Svetovoy and Lokhanin
\cite{Sve00b} extracted the low-frequency optical data by fitting
them with Eq. (\ref{Drude}). For one set of data from Ref.
\cite{HB2} the result \cite{Sve00b} was close to that found by the
first approach, but using different sources for the optical data
collected in Ref. \cite{HB2} an appreciable difference was found
\cite{Sve00a,Sve00b}. This difference was attributed to the defects
in the metallic films which appear as the result of the deposition
process. It was indicated that the density of the deposited films is
typically smaller and the resistivity larger than the corresponding
values for the bulk material. The dependence of optical properties
of Au films on the details of the deposition process, annealing,
voids in the films, and grain size was already discussed in the
literature \cite{Sve03b}.
In this paper we analyze the optical data for Au from several
available sources, where the mid-infrared frequency range was
investigated. The purpose is to establish the variation range of the
Drude parameters and calculate the uncertainty of the Casimir force
due to the variation of existing optical data. This uncertainty is
of great importance in view of the recent precise Casimir force
measurements \cite{Che04,Dec05}, which have been performed with high
experimental accuracy. On the other hand, sophisticated theoretical
calculations predict the Casimir force at the level of 1\% or
better. These results illustrate the considerable progress achieved
in the field in only one decade. In order to assure a comparison
between theory and experiment at the same level of precision, one
has to make sure that the theoretical calculation considers
precisely the same system investigated in the experiment. This is
the key point we want to address in our paper. With our current
investigation we find an intrinsic force uncertainty of the order of
5\% coming from the fact that the Drude parameters are not precisely
known. These parameters may vary from one sample to another,
depending on many details of the preparation conditions. In order to
assure a comparison at the level of 1\% or better between
theoretical predictions and experimental results for the Casimir
force, the optical properties of the mirrors have to be measured in
the experiment.
The paper is organized as follows. In Sec. \ref{Sec2} we explain and
discuss the importance of the precise values of the Drude
parameters. In Sec. \ref{Sec3} the existing optical data for gold
are reviewed and analyzed. The Drude parameters are extracted from
the data by fitting both real and imaginary parts of the dielectric
function at low frequencies in Sec. \ref{Sec4}. In Section
\ref{Sec5} the Drude parameters are estimated by a different method
using Kramers-Kronig analysis. The uncertainty in the Casimir force
due to the sample dependence is evaluated in Sec. \ref{Sec6} and we
present our conclusions in Sec. \ref{Sec7}.
\section{Importance of the values of the Drude parameters\label{Sec2}}
In Figure \ref{fig1} (left) we present a typical plot of the
imaginary part of the dielectric function, which comprises Palik's
Handbook data for gold \cite{HB1}. The solid line shows the actual
data taken from two original sources: the points to the right of the
arrow are those by Th\`{e}ye \cite{The70} and to the left by Dold
and Mecke \cite{Dol65}. No data is available for frequencies smaller
than the cutoff frequency $\omega _{\rm c}$ ($0.125$~eV for this
data set) and $\varepsilon ^{\prime \prime }\left( \omega \right) $
has to be extrapolated into the region $\omega <\omega _{\rm c}$.
The dotted line shows the Drude extrapolation with the parameters
$\omega _{\rm p}=9.0$~eV and $\omega _{\tau }=0.035$~eV obtained in
Ref.~\cite{Lam00}.
\begin{figure}[tbp]
\epsfig{file=figure1a.eps,height=6cm,width=6.5cm}
\epsfig{file=figure1b.eps,height=6cm,width=6.5cm} \caption{Left
panel: Palik's Handbook data for Au \cite{HB1} (solid line)
extrapolated to low frequencies (dotted line) with the Drude
parameters indicated in the corner. Right panel: contributions of
different real frequency domains to the dielectric function on the
imaginary axis $\varepsilon(i\zeta)$.} \label{fig1}
\end{figure}
One can separate three frequency regions in Fig.~\ref{fig1}~(left
panel). The region marked as {1} corresponds to the frequencies
smaller than $\omega _{\rm c}$. The region {2} defining the Drude
parameters extends from the cutoff frequency to the edge of the
interband absorption $\omega _{0}$. The high energy domain
$\omega>\omega _{0}$ is denoted by {3}.
We may now deduce the dielectric function at imaginary
frequencies~(\ref{K-K}) using the Kramers-Kronig relation
\begin{equation}
\varepsilon \left( i\zeta \right) =1+\varepsilon _{1}\left( i\zeta \right)
+\varepsilon _{2}\left( i\zeta \right) +\varepsilon _{3}\left( i\zeta
\right) , \label{split}
\end{equation}
\noindent where the indices 1, 2, and 3 indicate respectively the
integration ranges $0\leq \omega <\omega _{\rm c}$, $\omega _{\rm
c}\leq\omega <\omega _{0}$, and $\omega _{0}\leq \omega <\infty $.
$\varepsilon _{1}$ can be derived using the Drude model
(\ref{Drude}) leading to
\begin{equation}
\varepsilon _{1}\left( i\zeta \right) =\frac{2}{\pi }\frac{\omega
_{p}^{2}}{\zeta ^{2}-\omega _{\tau }^{2}}\left[ \tan ^{-1}\left( \frac{\omega
_{c}}{\omega _{\tau }}\right) -\frac{\omega _{\tau }}{\zeta }\tan ^{-1}\left(
\frac{\omega _{c}}{\zeta }\right) \right] . \label{eps1}
\end{equation}
\noindent The two other functions $\varepsilon _{2}$ and
$\varepsilon _{3}$ have to be calculated numerically. The results
for all three functions as well as for $ \varepsilon \left( i\zeta
\right) $ are shown in Fig. \ref{fig1}~(right). One can clearly see
that $\varepsilon _{1}\left( i\zeta \right) $ dominates the
dielectric function at imaginary frequencies up to $\zeta \approx
5$~eV. $\varepsilon _{2}\left( i\zeta \right) $ gives a perceptible
contribution to $\varepsilon \left( i\zeta \right)$, while
$\varepsilon_{3}\left( i\zeta \right)$ produces a minor contribution,
which is negligible for $\zeta<0.5$~eV.
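To make the decomposition (\ref{split}) concrete, the following minimal
numerical sketch (in Python; the tabulated grid below is a synthetic
stand-in for the actual handbook data and must be replaced by measured
values) evaluates the analytic contribution (\ref{eps1}) together with
the Kramers-Kronig integral over a tabulated region:
\begin{verbatim}
import numpy as np

# Drude parameters of Ref. [Lam00] and the Palik cutoff, in eV
# (values quoted in the text)
w_p, w_tau, w_c = 9.0, 0.035, 0.125

def eps1(zeta):
    """Analytic region-1 contribution, Eq. (eps1)."""
    return (2.0 / np.pi) * w_p**2 / (zeta**2 - w_tau**2) * (
        np.arctan(w_c / w_tau) - (w_tau / zeta) * np.arctan(w_c / zeta))

def kk_region(zeta, x, e2):
    """(2/pi) * int x eps''(x) / (x^2 + zeta^2) dx over a tabulated
    region (trapezoidal rule); used for regions 2 and 3."""
    f = x * e2 / (x**2 + zeta**2)
    return (2.0 / np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

# synthetic region-2 table (w_c <= x < w_0); substitute measured data
x2 = np.linspace(w_c, 2.5, 2000)
e2_tab = w_p**2 * w_tau / (x2 * (x2**2 + w_tau**2))

zeta_ch = 0.988   # characteristic frequency at L = 100 nm
print(1.0 + eps1(zeta_ch) + kk_region(zeta_ch, x2, e2_tab))
\end{verbatim}
With measured tables for regions 2 and 3 in place of the synthetic one,
the same routine can be used to reproduce the curves of
Fig.~\ref{fig1}~(right panel).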
As mentioned in the Introduction, we may introduce a characteristic
imaginary frequency $\zeta_{\rm ch}=c/2L$ of field fluctuations
which give the dominant contribution to the Casimir force between
two plates at a distance $L$. For a plate separation of $L=100$~nm
the characteristic imaginary frequency is $\zeta _{\rm
ch}=0.988$~eV. At this frequency the contributions of different
frequency domains to $\varepsilon \left( i\zeta _{\rm ch}\right) $ are
$\varepsilon _{1}=68.42$, $\varepsilon _{2}=15.65$, and $\varepsilon
_{3}=5.45$. This means that for all experimentally investigated
situations, $L\gtrsim100$~nm, region {1}, corresponding to the
extrapolated optical data, gives the main contribution to
$\varepsilon \left( i\zeta \right)$. It is therefore important to
know precisely the Drude parameters.
\section{Analysis of different optical data for gold\label{Sec3}}
The optical properties of gold were extensively investigated in the
1950s--1970s. In many of those works the importance of sample preparation
methods was recognized and carefully discussed. A complete
bibliography of the publications up to 1981 can be found in Ref.
\cite{Wea81}. Regrettably, contemporary studies of gold
nanoclusters produce data that are inappropriate for our purposes. Among
recent experiments let us mention the measurement of normal
reflectance for evaporated gold films \cite{Sot03}, which was
performed in the wide wavelength range $0.3-50$ $\mu$m, but
unfortunately does not permit independent evaluation of both the real
and imaginary parts of the dielectric function. In contrast, the use
of new ellipsometric techniques~\cite{An02,Xia00} has produced data
for the real and imaginary part of the dielectric function for
energy intervals $1.5-4.5$~eV \cite{Wan98} and $1.5-3.5$~eV
\cite{Ben99}.
A significant amount of data in the interband absorption region
(domain {3}) has been obtained by different methods under different
conditions \cite{Pel69,The70,Joh72,Gue75,Asp80,Wan98,Ben99}. Though
this frequency band is not very important for the Casimir force, it
provides information on how the data may vary from one sample to
another. On the other hand, there are only a few sources where optical
data was collected in the mid-infrared (domain 2) and from which the
dielectric function can be extracted. The data available for $
\varepsilon ^{\prime }\left( \omega \right) $ and $ \varepsilon
^{\prime \prime }\left( \omega \right) $ in the range $\omega
<1.5$~eV and interband absorption domain {3} are presented
respectively in the left and right graph of Fig. \ref{fig2}. These
data sets demonstrate considerable variations of the dielectric
function from one sample to another.
\begin{figure}[tbp]
\epsfig{file=figure2a.eps,height=6cm,width=6.5cm}
\epsfig{file=figure2b.eps,height=6cm,width=6.5cm} \caption{Left
panel: Available optical data in the mid-infrared region. The dots
represent the Dold and Mecke data for $\omega<1$~eV \cite{Dol65} and
Th\`{e}ye data \cite{The70} for higher frequencies. The squares
denote the Weaver data \cite{Wea81}. The circles stand for the data
from~\cite{Mot64}. The triangles represent the data \cite{Pad61}.
Solid squares, circles, and triangles are used to mark $ \varepsilon
^{\prime \prime }\left( \omega \right) $ while the open symbols are
used for $ \varepsilon ^{\prime }\left( \omega \right) $. Right
panel: $\varepsilon^{\prime\prime}(\omega)$ in the interband region
for different samples. The solid line represents the data measured
with the well annealed bulk-like film by Th\`{e}ye \cite{The70}. The
dots are the data by Johnson and Christy \cite{Joh72} found for
unannealed films. The dashed and dash-dotted lines are recent data
sets by Wang et al. \cite{Wan98} for unannealed films. They
correspond to films deposited with e-beam and thermal evaporation
methods, respectively. } \label{fig2}
\end{figure}
Let us briefly discuss the sets of
data~\cite{HB1,Wea81,Mot64,Pad61} used in our analysis and the
corresponding samples. The commonly used Handbook of Optical
Constants of Solids~\cite{HB1} comprises the optical data covering
the region from $0.125$ to $9184$~eV (dots in Fig. \ref{fig2}). The
experimental points are assembled from several sources. For
$\omega<1$~eV they are reported by Dold and Mecke~\cite {Dol65}. For
higher frequencies up to $6$~eV they correspond to the Th\`{e}ye
data \cite{The70}. Dold and Mecke give only little information about
the sample preparation, reporting that the films were evaporated
onto a polished glass substrate and measured in air by using an
ellipsometric technique \cite{Dol65}. Annealing of the samples was
not reported.
Th\`{e}ye \cite{The70} described her films very carefully. The
samples were semitransparent Au films with a thickness of
$100-250$~\AA\ evaporated in ultrahigh vacuum on supersmooth fused
silica. The substrate was kept in most cases at room temperature.
After the deposition the films were annealed in the same vacuum at $
100-150^{\circ }$~C. The structure of the films was investigated by
X-ray and transmission-electron-microscopy methods. The dc
resistivity of the films was found to be very sensitive to the
preparation conditions. The errors in the optical characteristics of
the films were estimated at the level of a few percent.
The handbook~\cite{Wea81} comprises the optical data from $0.1$~eV
to $28.6$~eV (marked with squares in Fig.~\ref{fig2}). The data in the domain
$\omega<4$~eV is provided by Weaver et al. \cite{Wea81}. The values
of $\varepsilon(\omega)$ were found for the electropolished bulk
Au(110) sample. Originally the reflectance was measured in a broad
interval $0.1\leq \omega \leq 30$~eV and then the dielectric
function was determined by a Kramers-Kronig analysis. Due to
indirect determination of $\varepsilon $ the recommended accuracy of
these data sets is only 10\%.
The optical data of Motulevich and Shubin \cite{Mot64} for Au films
is marked with circles in Fig.~\ref{fig2}. In this paper the films were
carefully described. Gold was evaporated on polished glass at a
pressure of $\sim 10^{-6}$~Torr. The investigated films were $0.5-1\
\mu$m thick. The samples were annealed in the same vacuum at
$400^{\circ }$~C for more than 3 hours. The optical constants $n$
and $k$ ($n+ik=\sqrt{\varepsilon }$) were measured by polarization
methods in the spectral range $1-12\ \mu$m. The errors in $n$ and
$k$ were estimated as 2-3\% and 0.5-1\%, respectively.
Finally, the triangles represent the Padalka and Shklyarevskii data \cite{Pad61}
for unannealed Au films evaporated onto glass.
The variation of the data points from different sources cannot be
explained by experimental errors. The observed deviation is the
result of different preparation procedures and reflects genuine
differences between samples. The deposition method, the type of
substrate, its temperature and quality, and the deposition rate all
influence the optical properties. When we are speaking about a
precise comparison between theory and experiment for the Casimir
force at the level of 1\% or better, there is no such material as
gold in general any more. There is only a gold sample prepared under
definite conditions.
\section{Evaluation of the Drude parameters through extrapolation\label{Sec4}}
We will now use the available data in the mid-infrared region to
extrapolate into the low frequency range. If the transition between
inter- and intraband absorption in gold is sharp, the data below
$\omega _{0}$ should be well described by the Drude function
\begin{equation}
\varepsilon ^{\prime }\left( \omega \right) =1-\frac{\omega
_{p}^{2}}{\omega ^{2}+\omega _{\tau }^{2} },\ \ \varepsilon ^{\prime
\prime }\left( \omega \right) =\frac{\omega _{p}^{2}\omega _{\tau
}}{\omega \left( \omega ^{2}+\omega _{\tau }^{2}\right) }.
\label{ImDrude}
\end{equation}
\noindent For $\omega \gg \omega _{\tau }$, the data on the log-log
plot should fit straight lines with the slopes $-2$ and $-3$ for
$\varepsilon ^{\prime }$ and $\varepsilon ^{\prime\prime }$,
respectively, shifted along the ordinate due to variation of the
parameters for different samples. The data points in the right graph
of Fig.~\ref{fig2} are in general agreement with these expectations.
The onset values for $\varepsilon ^{\prime\prime }$,
$\ln(\omega_{\rm p}^2\omega_{\tau})$, vary more significantly due to
a significant change in $\omega_{\tau}$ for different samples, but
the Casimir force is in general not very sensitive to the relaxation
parameter \cite{Lam00}. The onset values for $-\varepsilon ^{\prime
}$, $\ln(\omega_{\rm p}^2)$, vary less but this variation is more
important for the Casimir force, which is particularly sensitive to
the value of the plasma frequency $\omega_{\rm p}$. The Drude
parameters can be found by fitting both $\varepsilon ^{\prime }$ and
$ \varepsilon ^{\prime \prime }$ with the functions (\ref{ImDrude}).
This procedure is discussed below.
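Explicitly, for $\omega _{\tau }\ll \omega \ll \omega _{0}$ (and
$-\varepsilon ^{\prime }\gg 1$) the Drude forms (\ref{ImDrude}) reduce to
\begin{align*}
-\varepsilon ^{\prime }\left( \omega \right) \approx \frac{\omega _{\rm
p}^{2}}{\omega ^{2}},\qquad \varepsilon ^{\prime \prime }\left( \omega
\right) \approx \frac{\omega _{\rm p}^{2}\omega _{\tau }}{\omega ^{3}},
\end{align*}
\noindent so that on the log-log scale
\begin{align*}
\ln \left( -\varepsilon ^{\prime }\right) \approx \ln \omega _{\rm
p}^{2}-2\ln \omega ,\qquad \ln \varepsilon ^{\prime \prime }\approx \ln
\left( \omega _{\rm p}^{2}\omega _{\tau }\right) -3\ln \omega ,
\end{align*}
\noindent which makes explicit both the slopes and the onset values
quoted above.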
The dielectric function for low frequencies, $\omega < \omega_{\rm
c}$, is found by the extrapolation of the optical data from the
mid-infrared domain, $\omega_{\rm c}<\omega<\omega_0$. The real and
imaginary parts of $\varepsilon $ follow from Eq. (\ref{ImDrude})
with an additional polarization term ${\cal P}$ in $\varepsilon
^{\prime }$:
\begin{equation}
\varepsilon ^{\prime }\left( \omega \right) ={\cal P}-\frac{\omega
_{p}^{2}}{\omega ^{2}+\omega _{\tau }^{2}},\ \ \varepsilon ^{\prime
\prime }\left( \omega \right) =\frac{\omega _{p}^{2}\omega _{\tau
}}{\omega \left( \omega ^{2}+\omega _{\tau }^{2}\right) }.
\label{DrudeRI}
\end{equation}
\noindent The polarization term appears here due to the
following reason. The total dielectric function $\varepsilon
=\varepsilon _{\left( c\right) }+\varepsilon _{\left( i\right) }$
includes contributions due to conduction electrons $\varepsilon
_{\left( c\right) }$ and the interband transitions $\varepsilon
_{\left( i\right) }$. The polarization term consists of the atomic
polarizability and polarization due to the interband transitions $
\varepsilon _{\left( i\right) }^{\prime }$
\begin{equation}
{\cal P}=1+\frac{N_{a}\alpha }{\varepsilon _{0}}+\varepsilon _{\left(
i\right) }^{\prime }\left( \omega \right) , \label{polariz}
\end{equation}
\noindent where $\alpha $ is the atomic polarizability and $N_{a}$
the concentration of atoms. If the transition from intra- to
interband absorption is sharp, the polarization can be considered as
constant, because the interband transitions have a threshold
behavior with an onset frequency $\omega _{0}$ and the
Kramers-Kronig relation allows one to express $\varepsilon _{\left(
i\right) }^{\prime }$ as
\begin{equation}
\varepsilon _{\left( i\right) }^{\prime }\left( \omega \right) =\frac{2}{\pi
}\int\limits_{\omega _{0}}^{\infty }dx\frac{x\varepsilon _{\left( i\right)
}^{\prime \prime }\left( x\right) }{x^{2}-\omega ^{2}}. \label{KKi}
\end{equation}
\noindent For $\omega \ll \omega _{0}$ this integral does not depend
on $\omega $, leading to a constant $\varepsilon _{\left( i\right)
}^{\prime }\left( \omega \right) $. In reality the situation is more
complicated because the transition is not sharp and many factors can
influence the transition region. We will assume here that ${\cal P}$
is a constant but the fitting procedure will be shifted to
frequencies where the transition tail is not very important. In
practice Eq. (\ref{DrudeRI}) can be applied for $\omega <1$ eV.
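As an illustration of this fitting procedure, a minimal least-squares
sketch is given below (Python with SciPy; the data arrays are synthetic
placeholders for one of the tabulated sets of Fig.~\ref{fig2} and must
be replaced by measured values):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def drude_stacked(w, w_p, w_tau, P):
    """Real and imaginary parts of Eq. (DrudeRI), stacked so that
    both are fitted simultaneously in one least-squares call."""
    eps_re = P - w_p**2 / (w**2 + w_tau**2)
    eps_im = w_p**2 * w_tau / (w * (w**2 + w_tau**2))
    return np.concatenate([eps_re, eps_im])

# placeholder mid-infrared table (w < 1 eV)
w = np.linspace(0.125, 1.0, 33)
eps_re_data = 1.0 - 7.5**2 / (w**2 + 0.061**2)
eps_im_data = 7.5**2 * 0.061 / (w * (w**2 + 0.061**2))

y = np.concatenate([eps_re_data, eps_im_data])
popt, pcov = curve_fit(drude_stacked, w, y, p0=[9.0, 0.035, 1.0])
print(popt)                    # fitted (w_p, w_tau, P)
print(np.sqrt(np.diag(pcov)))  # naive 1-sigma errors
\end{verbatim}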
Our purpose is now to establish the magnitude of the force change
due to reasonable variation of the optical properties. To this end
the available low-frequency data for $\varepsilon ^{\prime }\left(
\omega \right) $ and $\varepsilon ^{\prime\prime }\left( \omega
\right) $ presented in the left graph of Fig. \ref{fig2} were fitted
with Eq. (\ref{DrudeRI}). The results together with the expected
errors are collected in Table \ref{tab1}.
\begin{table}
\centering
\begin{tabular}{l||l|l|l|l}
N & $\omega _{p}$~(eV) & $\omega _{\tau }\cdot 10^{2}$~(eV) & ${\cal P}$ & data set \\
\hline\hline
1 & $7.50\pm 0.02$ & $6.1\pm 0.07$ & $-27.67\pm 5.79$ & Palik, 66 points, $\cdot$ \\
2 & $8.41\pm 0.002$ & $2.0\pm 0.005$ & $7.15\pm 0.035$ & Weaver, 20 points, $\blacksquare, \Box$ \\
3 & $8.84\pm 0.03$ & $4.2\pm 0.06$ & $12.94\pm 16.81$ & Motulevich, 11 points, $\bullet, \circ$ \\
4 & $6.85\pm 0.02$ & $3.6\pm 0.05$ & $-12.33\pm 9.13$ & Padalka, 11 points, $\blacktriangledown,\triangledown$
\end{tabular}
\caption{The Drude parameters found by fitting the available
infrared data for $\varepsilon ^{\prime }\left( \omega \right)$ and
$\varepsilon ^{\prime \prime }\left( \omega \right) $ with Eq.
(\ref{DrudeRI}). The error is statistical.}\label{tab1}
\end{table}
The error in Table \ref{tab1} is the statistical uncertainty. It was
found using a $\chi ^{2}$ criterion for joint estimation of 3
parameters \cite{PatDat}. For a given parameter the error
corresponds to the change $\Delta\chi ^{2}=1$ when two other
parameters are kept constant. The parameter ${\cal P}$
enters~(\ref{DrudeRI}) as an additive constant and in the considered
frequency range its value is smaller than 1\% of $\varepsilon
^{\prime }\left( \omega \right)$. That is why the present fitting
procedure cannot resolve it with reasonable errors.
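Continuing the sketch above, the $\Delta \chi ^{2}=1$ interval for a
single parameter (here $\omega _{\rm p}$, with the two other parameters
kept at their best-fit values) can be scanned as follows; unit data
weights are assumed:
\begin{verbatim}
import numpy as np

# reuses drude_stacked, w, y and popt from the previous sketch
def chi2(w_p, w_tau, P):
    return np.sum((y - drude_stacked(w, w_p, w_tau, P))**2)

chi2_min = chi2(*popt)
grid = np.linspace(popt[0] - 0.01, popt[0] + 0.01, 2001)
profile = np.array([chi2(g, popt[1], popt[2]) for g in grid])
inside = grid[profile <= chi2_min + 1.0]  # Delta chi^2 = 1 region
print(popt[0], inside.min(), inside.max())
\end{verbatim}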
As mentioned before, in the case of the Weaver data \cite{Wea81} the
recommended precision in $\varepsilon^{\prime}$ and
$\varepsilon^{\prime\prime}$ is 10\% while Motulevich and Shubin
reported 2-3\% and 0.5-1\% errors in $n$ and $k$. We did not take
these errors explicitly into account as we do not know if they are
of statistical or systematic nature or a combination of both. But to
illustrate their possible influence let us just mention that if we
interpret them as systematic errors, we can propagate the errors in
$\varepsilon$ or $n,k$ to the values of $\omega_{\rm p}$ and
$\omega_{\tau}$, leading to an additional error in $\omega_{\rm p}$
of about 5\% for the Weaver data and 1\% for the Motulevich data and
twice as large in $\omega_{\tau}$.
Significant variation of the plasma frequency, well above the
errors, is a distinctive feature of the table. The bulk and annealed
samples (rows 2 and 3) demonstrate larger values of $\omega _{\rm
p}$. The rows 1 and 4 corresponding to the evaporated unannealed
films give rise to considerably smaller plasma frequencies $\omega
_{\rm p}$. Note that our fitted values are in agreement with those
given by the authors \cite{Dol65,Pad61} themselves.
To have an idea of the quality of the fitting procedure, we show in Fig.
\ref{fig5} the experimental points and the best fitting curves
for Dold and Mecke data \cite{Dol65,HB1} (full circles and solid
lines) and Motulevich and Shubin data \cite{Mot64} (open circles and
dashed lines). Only 25\% of the points from \cite{HB1} are shown for
clarity. One can see that for $\varepsilon ^{\prime \prime }$ at
high frequencies the dots lie above the solid line, demonstrating
the presence of a wide transition between inter- and intraband absorption.
Coincidence of the solid and dashed lines for $\varepsilon ^{\prime
\prime }$ is accidental. The fits for $\varepsilon ^{\prime }$ are
nearly perfect for both data sets.
It is interesting to see on the same figure how well the parameters
$\omega _{\rm p}=9.0$ eV, $\omega _{\tau }=0.035$ eV agree with the
data in the mid-infrared range. The curves corresponding to this
set of parameters are shown in Fig. \ref{fig5} as dotted lines. One
can see that the dotted line, which describes $\varepsilon ^{\prime
\prime }$ is very close to the solid line. However, the dotted line
for $ \varepsilon ^{\prime }$ does not describe well the handbook
data (full circles). It agrees much better with Motulevich and
Shubin data \cite{Mot64} (open circles). The reason for this is that
$\omega _{\rm p}=9.0$ eV is the maximal plasma frequency for Au. Any
real film may contain voids, leading to a smaller density of electrons
and, therefore, to a smaller $\omega _{\rm p}$. Motulevich and Shubin
\cite{Mot64} annealed their films which reduced the number of
defects and made the plasma frequency close to its maximum. A plasma
frequency $\omega _{\rm p}=9.0$ eV was also reported in Ref.
\cite{Ben66}, where the authors checked the validity of the Drude
theory by measuring reflectivity of carefully prepared gold films in
ultrahigh vacuum in the spectral range $0.04<\omega<0.6$ eV.
Therefore, this value is appropriate if well prepared samples are at
hand.
\begin{figure}[tbp]
\epsfig{file=figure3.eps,width=9cm}\newline \caption{The infrared
optical data by Dold and Mecke \cite{Dol65} (full circles) and by
Motulevich and Shubin \cite{Mot64} (open circles) together with the
best Drude fits given by the solid and dashed lines, respectively.
The dotted lines present the fit with $\omega_{\rm p}=9$ eV and
$\omega_{\tau}=35$ meV which agrees better with the Motulevich and
Shubin data (open circles) than with the handbook data (full
circles). } \label{fig5}
\end{figure}
\section{The Drude parameters from Kramers-Kronig analysis\label{Sec5}}
Because the values of the Drude parameters are crucial for a
reliable prediction of the Casimir force, it is important to verify
that different methods to determine the parameters give the same
results. Alternatively to the extrapolation procedure of the
previous section we will now discuss a procedure based on a
Kramers-Kronig analysis. To this aim we will extrapolate only the
imaginary part of the dielectric function to low frequencies
$\omega<\omega_{\rm c}$. The dispersion relation between
$\varepsilon^{\prime}$ and $\varepsilon^{\prime\prime}$
\begin{equation}\label{KKrel}
\varepsilon^{\prime}(\omega)-1=\frac{2}{\pi
}P\int\limits_{0}^{\infty }dx\frac{x\varepsilon ^{\prime \prime
}\left( x\right) }{x^{2}-\omega ^{2}}
\end{equation}
\noindent can then be used to predict the behavior of
$\varepsilon^{\prime}(\omega)$ and compare it with the one observed in
the experiments. From this comparison the Drude parameters can be
extracted.
The low-frequency behavior of $\varepsilon^{\prime\prime}(\omega)$
is important for the prediction of $\varepsilon^{\prime}$ because
for metals $\varepsilon^{\prime\prime}(\omega)\gg1$ in the low
frequency range. Therefore, at $\omega<\omega_{\rm c}$ we are using
$\varepsilon^{\prime\prime}(\omega)$ from Eq. (\ref{ImDrude}). At
higher frequencies the experimental data from different sources
\cite{HB1,Wea81,Mot64,Pad61} are used. The data in Refs.
\cite{Mot64,Pad61} must be extended to high frequencies starting
from $\omega=1.25$~eV. We do this using the handbook data
\cite{HB1}.
Let us start from the data for bulk Au(110) \cite{Wea81}. This data
set is given in the interval $0.1<\omega<30$~eV. Below
$\omega=0.1$~eV we use the Drude model for
$\varepsilon^{\prime\prime}$ and above $\omega=30$~eV the cubic
extrapolation $C/\omega^3$. The Drude parameters are practically
insensitive to the high frequency extrapolation. The data set was
divided into overlapping segments containing 12 points. Each segment
was fitted with a polynomial of fourth order in frequency. The first
segment, where $\varepsilon^{\prime\prime}(\omega)$ increases very
fast, was fitted with a polynomial in $1/\omega$. Then, in the
range of overlap (4 points) a new polynomial smoothly connecting two
segments was chosen. In this way we have fitted the experimental
data with a function which is smooth up to the first derivative.
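A simplified version of this segment-wise smoothing is sketched below
(Python); the triangular blending used here replaces the exact
smooth-connection step of the text, and the special $1/\omega$
treatment of the first segment is omitted:
\begin{verbatim}
import numpy as np

def smooth_segments(x, y, seg=12, overlap=4, deg=4):
    """Fit overlapping seg-point segments of tabulated eps''(x) with
    degree-deg polynomials and blend them with triangular weights in
    the overlap region (a simplified stand-in for the procedure
    described in the text)."""
    step = seg - overlap
    out = np.zeros(len(y))
    wgt = np.zeros(len(y))
    tri = np.minimum(np.arange(1, seg + 1),
                     np.arange(seg, 0, -1)).astype(float)
    for s in range(0, len(x) - seg + 1, step):
        p = np.polynomial.Polynomial.fit(x[s:s+seg], y[s:s+seg], deg)
        out[s:s+seg] += tri * p(x[s:s+seg])
        wgt[s:s+seg] += tri
    # points not covered by any segment keep their raw values
    return np.where(wgt > 0, out / np.maximum(wgt, 1e-12), y)
\end{verbatim}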
The real part of the dielectric function
$\varepsilon^{\prime}(\omega)$ is predicted by Eq. (\ref{KKrel}) as
a function of the Drude parameters $\omega_p$ and $\omega_{\tau}$.
These parameters are chosen such as to minimize the difference
between observed and predicted values of
$\varepsilon^{\prime}(\omega)$, leading to $\omega_{\rm p}=8.40$~eV
and $\omega_{\tau}=0.020$~eV. These parameters are in reasonable
agreement with the ones indicated in Tab. \ref{tab1}. In Fig.
\ref{fig6} the experimental data (dots) and
$|\varepsilon^{\prime}(\omega)|$ found from Eq. (\ref{KKrel}) (solid
line) are plotted, showing perfect agreement at low frequencies,
while at high frequencies $\omega>2.6$~eV the agreement is not very
good. This may be fixed by choosing an appropriate high frequency
extrapolation. We do not give these details here as this
extrapolation has practically no influence on the Drude parameters.
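For reference, the principal-value integral in Eq. (\ref{KKrel}) can be
evaluated on a tabulated grid by singularity subtraction; a minimal
sketch (Python, assuming $x_{1}<\omega <x_{\rm max}$ and that
$\varepsilon ^{\prime \prime }$ has already been extended below
$\omega _{\rm c}$ by the Drude form and above the data range by the
cubic tail) is:
\begin{verbatim}
import numpy as np

def kk_real(omega, x, e2):
    """eps'(omega) from Eq. (KKrel): subtract f(omega), with
    f(x) = x*eps''(x), to regularize the integrand, integrate the
    remainder by the trapezoidal rule, and add back the subtracted
    piece analytically, using
      PV int_a^b dx/(x^2-w^2) = ln|(b-w)(a+w)/((b+w)(a-w))| / (2w)."""
    f = x * e2
    f_w = np.interp(omega, x, f)
    with np.errstate(divide='ignore', invalid='ignore'):
        g = (f - f_w) / (x**2 - omega**2)
    g = np.where(np.isfinite(g), g, 0.0)  # removable point x = omega
    reg = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))
    a, b = x[0], x[-1]
    pv = np.log(abs((b - omega) * (a + omega) /
                    ((b + omega) * (a - omega)))) / (2.0 * omega)
    return 1.0 + (2.0 / np.pi) * (reg + f_w * pv)
\end{verbatim}
Minimizing the squared difference between this prediction and the
measured $\varepsilon ^{\prime }(\omega )$ over $(\omega _{\rm
p},\omega _{\tau })$, which enter through the low-frequency extension
of $\varepsilon ^{\prime \prime }$, then yields the parameters quoted
above.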
\begin{figure}[tbp]
\epsfig{file=figure4.eps,width=9cm}\newline
\caption{$|\varepsilon^{\prime}|$ as a function of $\omega$ for bulk
gold. Dots are the experimental data \cite{Wea81}. The solid line is
the prediction according to Eq. (\ref{KKrel}) with the Drude
parameters $\omega_{\rm p}=8.40$~eV, $\omega_{\tau}=0.02$~eV. }
\label{fig6}
\end{figure}
When applying the same procedure to the handbook data \cite{HB1}, we
find $\omega_p=7.54$~eV and $\omega_{\tau}=0.051$~eV, again in
agreement with the parameters indicated in Tab.~\ref{tab1}.
Fig.~\ref{fig7} shows a plot of $\varepsilon^{\prime}(\omega)$
predicted with these parameters. At low frequencies the agreement
with the experimental data is good but it becomes worse when the
intraband data \cite{Dol65} joins the interband (high frequency)
data \cite{The70}. These two data sets correspond to samples with
different optical properties. In this case the dispersion relation
(\ref{KKrel}) is not necessarily very well satisfied. In contrast
with the previous case, high frequency extrapolation cannot improve
the situation; it influences the curve only marginally.
\begin{figure}[tbp]
\epsfig{file=figure5.eps,width=9cm}\newline
\caption{$|\varepsilon^{\prime}|$ as a function of $\omega$ for
handbook data \cite{HB1} (dots). The solid line is found from
Kramers-Kronig relation. The Drude parameters correspond to minimal
deviations between experimental data and calculations.} \label{fig7}
\end{figure}
Following the same procedure for the Motulevich and Shubin data
\cite{Mot64}, we find the Drude parameters $\omega_{\rm p}=8.81$~eV,
$\omega_{\tau}=0.044$~eV which are close to the values in
Tab.~\ref{tab1}. The experimental data and calculated function
$|\varepsilon^{\prime}(\omega)|$ are shown in Fig.~\ref{fig8}. There
is good agreement for frequencies $\omega<4$~eV as the data in Ref.
\cite{Mot64} matches very well the Th\`{e}ye data \cite{The70}.
Deviations at higher frequencies are again quite sensitive to
high-frequency extrapolation as already noted before.
Similar calculations done for the Padalka and Shklyarevskii data
\cite{Pad61} give the Drude parameters $\omega_{\rm p}=6.88$~eV and
$\omega_{\tau}=0.033$~eV, producing good agreement only in the range
$\omega<1.3$~eV because this data set matches only poorly the
Th\`{e}ye data \cite{The70}.
\begin{figure}[tbp]
\epsfig{file=figure6.eps,width=9cm}
\newline
\caption{$|\varepsilon^{\prime}|$ as a function of $\omega$ for
Motulevich and Shubin data \cite{Mot64} extended by the handbook
data \cite{HB1} for $\omega>1.25$~eV (dots). The solid line is found
from Kramers-Kronig relation. } \label{fig8}
\end{figure}
Using the Kramers-Kronig analysis for the determination of the Drude
parameters leads essentially to the same values as the direct fitting
procedure for all four sets of experimental data. Experimental and calculated curves for
$\varepsilon^{\prime}(\omega)$ are in very good agreement at low
frequencies. At high frequencies the agreement is not so good for
two different reasons. First, at high frequencies the calculated
curve is sensitive to the high-frequency extrapolation and thus a
better choice of this extrapolation can significantly reduce high
frequency deviations. The other reason is that one has to combine
the data from different sources to make a Kramers-Kronig analysis
possible. These data sets do not always match each other well as it
is for example the case of the Dold and Mecke data and the Th\`{e}ye
data. In this case significant errors might be introduced in the
dispersion relation. Indeed the Kramers-Kronig analysis is a
valuable tool only for data taken from the same sample.
\section{Uncertainty in the Casimir force due to variation of optical properties\label{Sec6}}
We will now assess how the values of the Casimir force are
influenced by the different values of the Drude parameters. As an
example we consider as input the optical data for Au from
\cite{HB1}.
Instead of calculating the absolute value of the Casimir force, we
will give the factor which measures the reduction of the Casimir
force with respect to the ideal Casimir force between perfect
mirrors as introduced in \cite{Lam00}
\begin{equation}\label{eta}
\eta_F=\frac{120 L^4}{c\pi^4}\int\limits_0^{\infty}d\kappa\,\kappa^2
\int\limits_0^\kappa
d\zeta\sum_{\mu}\frac{r_{\mu}^2}{e^{2\kappa}-r_{\mu}^2},
\end{equation}
\noindent where the sum runs over the two field polarizations $\mu$ and
$r_{\mu}$ denotes the corresponding reflection coefficient of the
mirrors. The dielectric function at imaginary frequencies
$\varepsilon(i\zeta)$ is calculated using the Kramers-Kronig
relation~(\ref{K-K}) and the integration region is divided in two
parts
\begin{equation}\label{imfreq}
\int_0^{\infty}\frac{x\,
\varepsilon''(x)}{x^2+\omega^2}dx\rightarrow
\left\{\int_{0}^{x_c}+\int_{x_c}^{x_{max}}\right\}\frac{x\,
\varepsilon''(x)}{x^2+\omega^2}dx=I_1+I_2.
\end{equation}
\noindent We assume that for $x<x_{\rm c}$ the Drude
model~(\ref{ImDrude}) is applicable. Then the integration in $I_1$
may be carried out explicitly, see~(\ref{eps1}). In $I_2$ we
integrate from $x_{\rm c}=0.125$~eV to $x_{\rm max}=9000$~eV
(corresponding to the range of available optical data in
\cite{HB1}).
For the calculation of the reduction factor (\ref{eta}) the
integration range was chosen as $10^{-4}-10^{3}$~eV. We also varied
the integration range by half an order of magnitude, which changed
the result by less than $0.1\%$. The results of the numerical
integration are collected in Table~\ref{Tab3}.
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c}
\hline &$\omega_{\rm p},\,\omega_{\tau}$~(eV)\,$\backslash$\,$L$~($\mu$m)&$ \quad
0.1 \quad $& $\quad 0.3\quad $ &$\quad 0.5 \quad$& $\quad 1.0 \quad
$& $\quad 3.0 \quad $
\\
\hline
\hline
1.&$\omega_{\rm p}=7.50$, $\omega_{\tau}=0.061$& 0.43& 0.66 & 0.75 & 0.85 & 0.93 \\[3mm]
\hline
2.&$\omega_{\rm p}=8.41$, $\omega_{\tau}=0.02$& 0.45& 0.69 &0.79 &0.88 &0.95 \\[3mm]
\hline
3.&$\omega_{\rm p}=8.84$, $\omega_{\tau}=0.0422$& 0.46 & 0.69 & 0.78 & 0.87 &0.94 \\[3mm]
\hline
4.&$\omega_{\rm p}=6.85$, $\omega_{\tau}=0.0357$& 0.42 & 0.65 & 0.75 & 0.84 &0.93 \\[3mm]
\hline
5.&$\omega_{\rm p}=9.00$, $\omega_{\tau}=0.035$& 0.47& 0.71 & 0.79 & 0.88 & 0.95 \\[3mm]
\hline
6.&$\omega_{\rm p}=7.50\pm15\%$& 0.45& 0.68 & 0.77 & 0.86& 0.94\\
&$\omega_{\tau}=0.061$&0.41 & 0.63 & 0.73 & 0.83 & 0.92 \\[3mm]
\hline
7.&$\omega_{\rm p}=7.50$& 0.42& 0.65 & 0.74 & 0.84& 0.92\\
&$\omega_{\tau}=0.061\pm30\%$& 0.44& 0.67 & 0.76 & 0.86 & 0.93 \\
\hline
\end{tabular}
\caption{The reduction factors at different plate separations
calculated with the different pairs of values of the Drude
parameters corresponding to different data. The last two rows show
the variation of the reduction factor when either the plasma
frequency or the relaxation parameter is varied.}\label{Tab3}
\end{table}
The first four rows of the table present the reduction factors for
four pairs of the Drude parameters that were obtained by fitting the
optical data from different sources. The next row shows the result
obtained for $\omega_{\rm p}=9$~eV and $\omega_{\tau}=35$~meV. The
last two rows show the variation of the reduction factor if the
plasma frequency $\omega_{\rm p}$ or the relaxation parameter
$\omega_{\tau}$ are varied by $\pm 15\%$ and $\pm 30\%$,
respectively. The upper (lower) line corresponds here to the upper
(lower) sign.
The variation of the optical data and the associated Drude
parameters introduces a variation in the Casimir force ranging from
5.5\% at short distances (100~nm) to 1.5\% at long distances
(3~$\mu$m). The distance dependence is of course related to the fact
that the material properties influence the Casimir force much more
at short than at long plate separation. The strongest variation of
5.5\% gives an indication of the genuine sample dependence of the
Casimir force. For this reason it is necessary to measure the
optical properties of the plates used in the Casimir force
measurement if a precision of the order 1\% or better in the
\textit{comparison} between experimental values and theoretical
predictions is aimed at. Incidentally, let us note that the plasma
frequency $\omega_{\rm p}=7.5$~eV, which is found here to fit best
Palik's handbook data \cite{HB1}, is basically the same as the one
proposed alternatively in \cite{Lam00} for Cu, which has very
similar optical properties to Au concerning the Casimir force
\cite{PRLComment}. For Cu, the variation of the plasma frequency
from $\omega_{\rm p}=9$~eV to $\omega_p=7.5$~eV introduced a
variation of the Casimir force up to 5\% \cite{Lam00}.
In order to assess more quantitatively the role of the two Drude
parameters, we show in the last two rows of table \ref{Tab3} the
variation of the reduction factor when either the plasma frequency
or the relaxation parameter is varied with the other parameter kept
constant. One can see that the increase (decrease) of the relaxation
parameter by $\delta\omega_{\tau}=30\%$ lowers (increases) the
reduction factor $\eta_F$ at $L=0.1~\mu$m by only
$\delta\eta_F=1.6\%$. However, the $15\%$ variation of the plasma
frequency leads to $4.2\%$ change in the reduction factor. Thus the
Casimir force is much more sensitive to the variation of the plasma
frequency, basically as the plasma frequency determines the
reflection quality of the plates (an infinite plasma frequency
corresponds to perfectly reflecting mirrors).
\section{Conclusions\label{Sec7}}
In this paper we have performed the first systematic and detailed
analysis of optical data for Casimir force measurements. We have
studied the relative importance of the different frequency regions
for the Casimir force as a function of the plate separation and
established the critical role of the Drude parameters in particular
for short distance measurements. We have then analyzed and compared
four different sets of optical data. For each set we have extracted
the corresponding plasma frequency and relaxation parameter either
by fitting real and imaginary part of the dielectric function at low
frequencies or by using a detailed Kramers-Kronig analysis. Both
methods lead essentially to the same results. The Kramers-Kronig
analysis proves to be a powerful tool for the estimation of
the low frequency Drude parameters for data coming from the same
sample.
A variation of the values of the Casimir force up to 5.5\% is found
for different optical data sets. This gives an intrinsic unknown
parameter for the Casimir force calculations and demonstrates the
genuine sample dependence of the Casimir force. Existing numerical
and analytical calculations of the Casimir force are in themselves
very precise. In the same way, measurements of the
Casimir force have achieved high accuracy over the last decade. In
order to compare the results of the achievements in theory and
experiment at a level of 1\% precision or better, the crucial point
is to make sure that calculations and experiments are performed for
the same physical sample. One therefore has to know the optical and
material properties of the sample used in the experiment. These
properties must be measured for frequencies as low as possible. In
practice, the material properties have to be known over an interval
of about 4 orders of magnitude around the characteristic frequency
$\zeta_{\rm ch}=c/2L$. For a plate separation of $L=100$~nm this
means an interval [10~meV, 100~eV]. If measurements at low
frequencies are not possible, the low frequency Drude parameters
should be extracted from the measured data, by one of the two
methods discussed here.
\textbf{Acknowledgements} Part of this work was funded by the
European Contract STRP 12142 NANOCASE. We wish to thank S. Reynaud
and A. Krasnoperov for useful discussions.
\section*{References}
\label{sec:intro}
Discovering statistically significant variables in high dimensional data is an important problem for many applications such as bioinformatics, materials informatics, and econometrics, to name a few.
To achieve this, for example in a regression model, data analysts often attempt to reduce the dimensionality of the model by utilizing a particular {\it model selection} or {\it variable selection} method.
For example, the Lasso \citep{Tib96} and marginal screening \citep{FanLv08} are frequently used in model selection contexts.
In many applications, data analysts conduct statistical inference based on the selected model as if it is known a priori, but this practice has been referred to as ``a quiet scandal in the statistical community'' in \cite{Bre92}.
If we select a model based on the available data, then we have to pay heed to the effect of model selection when we conduct a statistical inference.
This is because the selected model is no longer deterministic, i.e., it is random, and statistical inference after model selection is affected by {\it selection bias}.
In hypothesis testing of the selected variables, the validity of the inference is compromised when a test statistic is constructed without taking account of the model selection effect.
This means that, as a consequence, we can no longer effectively control type I error or the false positive rate.
This kind of problem falls under the banner of {\it post-selection inference} in the statistical community and has recently attracted a lot of attention \citep[see, e.g.,][]{Ber13, Efr14, Bar16, Lee16}.
Post-selection inference consists of the following two steps:
\begin{description}
\item[Selection:] The analyst chooses a model or subset of variables and constructs hypothesis, based on the data.
\item[Inference:] The analyst tests the hypothesis by using the selected model.
\end{description}
Broadly speaking, the selection step determines what issue to address, i.e., a hypothesis selected from the data, and the inference step conducts hypothesis testing to enable a conclusion to be drawn about the issue under consideration.
To navigate the issue of selection bias, there are several approaches for conducting the inference step.
{\it Data splitting} is the most common procedure for selection bias correction.
In a high dimensional linear regression model, \cite{WasRoe09} and \cite{Mei09} succeed in assigning a $p$-value for each selected variable by splitting the data into two subsets.
Specifically, they first reduce the dimensionality of the model using the first subset, and then make the final selection using the second subset of the data, by assigning a $p$-value based on classical least squares estimation.
While such a data splitting method is mathematically valid and straightforward to implement, it leads to low power for extracting truly significant variables because only sub-samples, whose size is obviously smaller than that of the full sample, can be used in each of the selection and inference steps.
As an alternative, {\it simultaneous inference}, which takes into account all possible subsets of variables, has been developed for correcting selection bias.
\cite{Ber13} showed that the type I error can be successfully controlled even if the full sample is used in both the selection and inference steps by adjusting for the multiplicity of model selection.
Since the number of all possible subsets of variables increases exponentially, computational costs associated with this method become excessive when the dimension of parameters is greater than 20.
On the other hand, {\it selective inference}, which only takes the selected model into account, is another approach for post-selection inference, and provides a new framework for combining selection and hypothesis testing.
Since hypothesis testing is conducted only for the selected model, it makes sense to condition on an event that ``a certain model is selected''.
This event is referred to as a {\it selection event}, and we conduct hypothesis testing conditional on the event.
Thus, we can avoid having to compare coefficients across two different models.
Recently, \cite{Lee16} succeeded in using this method to conduct hypothesis testing through constructing confidence intervals for variables selected by the Lasso in a linear regression modeling context.
When a specific confidence interval is constructed, the corresponding hypothesis testing can be successfully conducted.
They also show that the type I error, which is also conditioned on the selection event and is called {\it selective type I error}, can be appropriately controlled.
It is noteworthy that by conditioning on the selection event in a certain class, we can construct exact $p$-values in the sense of conditional inference based on a truncated normal distribution.
Almost all studies which have followed since the seminal work by \cite{Lee16}, however, focus on linear regression models.
In particular, normality of the noise is crucial for controlling the selective type I error.
To relax this assumption, \cite{TiaTay15} developed an asymptotic theory for selective inference in a generalized linear modeling context.
Although their results are available for high dimensional and low sample size data, we can only test a global null hypothesis, that is, the hypothesis that all regression coefficients are zero, just like the covariance test \citep{Loc14}.
On the other hand, \cite{Tay16} proposed a procedure to test individual hypotheses in a logistic regression model with the Lasso.
By debiasing the Lasso estimator for both the active and inactive variables, they derive a joint asymptotic distribution of the debiased Lasso estimator and conduct hypothesis testing for regression coefficients individually.
However, the method is justified only for low dimensional scenarios since they exploit standard fixed dimensional asymptotics.
Our main contribution is that, by utilizing marginal screening as a variable selection method, we can show that the selective type I error rate for the logistic regression model is appropriately controlled even in a high dimensional asymptotic scenario.
In addition, our method is applicable not only to testing the global null hypothesis but also to hypotheses pertaining to individual regression coefficients.
Specifically, we first utilize marginal screening for the selection step in a similar way to \cite{LeeTay14}.
Then, by considering a logistic regression model for the selected variables, we derive a high dimensional asymptotic property of a maximum likelihood estimator.
Using the asymptotic results, we can conduct selective inference of a high dimensional logistic regression, i.e., valid hypothesis testing for the selected variables from high dimensional data.
The rest of the paper is organized as follows.
Section \ref{sec:SI_and_related} briefly describes the notion of selective inference and introduces several related works.
In Section \ref{sec:setting}, the model setting and assumptions are described.
An asymptotic property of the maximum likelihood estimator of our model is discussed in Section \ref{sec:propose}.
In Section \ref{sec:simulation}, we conduct several simulation studies to explore the performance of the proposed method before application to real world empirical data sets in Section \ref{sec:real}.
Theorem proofs are relegated to Section \ref{sec:proof}.
Finally, Section \ref{sec:conclusion} offers concluding remarks and suggestions for future research in this domain.
\subsection*{Notation}
Throughout the paper, row and column vectors of $X\in \mathbb{R}^{n\times d}$ are denoted by $\bm{x}_i~(i=1,\ldots, n)$ and $\tilde{\bm{x}}_j,~(j=1,\ldots, d)$, respectively.
An $n\times n$ identity matrix is denoted by $I_n$.
The $\ell_2$-norm of a vector is denoted by $\|\cdot\|$ provided there is no confusion.
For any subset $J\subseteq\{1,\ldots, d\}$, its complement is denoted by $J^\bot=\{1,\ldots,d\}\backslash J$.
We also denote $\bm{v}_J=(v_i)_{i\in J}\in \mathbb{R}^{|J|}$ and $X_J=(\bm{x}_{J,1},\ldots,\bm{x}_{J,n})^\top\in\mathbb{R}^{n\times |J|}$ as a sub-vector of $\bm{v}$ and a sub-matrix of $X$, respectively.
For a differentiable function $f$, we denote $f'$ and $f''$ as the first and second derivatives and so on.
\section{Selective Inference and Related Works}
\label{sec:SI_and_related}
In this section, we provide an overview of the fundamental notions of selective inference through a simple linear regression model \citep{Lee16}.
We also review related existing works on selective inference.
\subsection{Selective Inference in Linear Regression Model}
\label{subsec:SI_in_LR}
Let $\bm{y}\in\mathbb{R}^n$ and $X\in\mathbb{R}^{n\times d}$ be a response and non-random regressor, respectively, and let us consider a linear regression model
\begin{align*}
\bm{y}=X\bm{\beta}^*+\bm{\varepsilon},
\end{align*}
where $\bm{\beta}^*$ is the true regression coefficient vector and $\bm{\varepsilon}$ is distributed according to ${\rm N}(\bm{0},\sigma^2 I_n)$ with known variance $\sigma^2$.
Suppose that a subset of variables $S$ is selected in the selection step (e.g., Lasso or marginal screening as in \citet{Lee16, LeeTay14}) and let us consider hypothesis testing for $j\in \{1,\ldots, |S|\}$:
\begin{align}
\text{H}_{0,j}:\beta_{S, j}^*=0
\qquad
\text{vs.}
\qquad
\text{H}_{1,j}:\beta_{S, j}^*\neq 0.
\label{eq;stest}
\end{align}
If $S$ is non-random, a maximum likelihood estimator $\hat{\bm{\beta}}_S=(X_S^\top X_S)^{-1}X_S^\top\bm{y}$ is distributed according to ${\rm N}(\bm{\beta}_S^*,\sigma^2(X_S^\top X_S)^{-1})$, as is well-known.
However, we cannot use this sampling distribution when $S$ is selected based on the data, since the selected set of variables $\hat{S}$ is then random.
If a subset of variables, i.e., the active set, $\hat{S}$ is selected by the Lasso or marginal screening, the event $\{\hat{S}=S\}$ can be written as an affine set with respect to $\bm{y}$, that is, in the form of $\{\bm{y};A\bm{y}\leq \bm{b}\}$ for some non-random matrix $A$ and vector $\bm{b}$ \citep{Lee16, LeeTay14}, in which the event $\{\hat{S}=S\}$ is called a {\it selection event}.
\cite{Lee16} showed that if $\bm{y}$ follows a normal distribution and the selection event can be written as an affine set, the following lemma holds:
\begin{lem}[Polyhedral Lemma; \cite{Lee16}]
\label{lem1}
Suppose $\bm{y}\sim{\rm N}(\bm{\mu},\Sigma)$.
Let $\bm{c}=\Sigma\bm{\eta}(\bm{\eta}^\top\Sigma\bm{\eta})^{-1}$ for any $\bm{\eta}\in\mathbb{R}^n$, and let $\bm{z}=(I_n-\bm{c}\bm{\eta}^\top)\bm{y}$.
Then we have
\begin{align*}
\{\bm{y};A\bm{y}\leq \bm{b}\}=\{\bm{y}; L(\bm{z})\leq \bm{\eta}^\top\bm{y}\leq U(\bm{z}),\;N(\bm{z})\geq 0\},
\end{align*}
where
\begin{align*}
L(\bm{z})
=\max_{j:(A\bm{c})_j<0}\frac{b_j-(A\bm{z})_j}{(A\bm{c})_j},~~~~~
U(\bm{z})
=\min_{j:(A\bm{c})_j>0}\frac{b_j-(A\bm{z})_j}{(A\bm{c})_j}
\end{align*}
and $N(\bm{z})=\max_{j:(A\bm{c})_j=0}b_j-(A\bm{z})_j$.
In addition, $(L(\bm{z}),U(\bm{z}),N(\bm{z}))$ is independent of $\bm{\eta}^\top\bm{y}$.
\end{lem}
\noindent
By using the lemma, we can find that the distribution of the pivotal quantity for $\bm{\eta}^\top\bm{\mu}$ is given by a truncated normal distribution.
Specifically, let $F^{[L,U]}_{\mu,\sigma^2}$ be a cumulative distribution function of a truncated normal distribution ${\rm TN}(\mu, \sigma^2, L, U)$, that is,
\begin{align*}
F^{[L,U]}_{\mu,\sigma^2}(x)
=\frac{\Phi((x-\mu)/\sigma)-\Phi((L-\mu)/\sigma)}{\Phi((U-\mu)/\sigma)-\Phi((L-\mu)/\sigma)},
\end{align*}
where $\Phi$ is a cumulative distribution function of a standard normal distribution.
Then, for any value of $\bm{z}$, we have
\begin{align*}
\left[
F^{[L(\bm{z}),U(\bm{z})]}_{\bm{\eta}^\top\bm{\mu},\bm{\eta}^\top\Sigma\bm{\eta}}(\bm{\eta}^\top\bm{y})
\mid A\bm{y}\leq \bm{b}
\right]
\sim{\rm Unif}(0,1),
\end{align*}
where $L(\bm{z})$ and $U(\bm{z})$ are defined in the above lemma.
This pivotal quantity allows us to construct a so-called {\it selective $p$-value}.
Precisely, by choosing $\bm{\eta}=X_S(X_S^\top X_S)^{-1}\bm{e}_j$, we can construct a right-side selective $p$-value as
\begin{align*}
P_j
=1-F^{[L(\bm{z}_0), U(\bm{z}_0)]}_{0,\bm{\eta}^\top\Sigma\bm{\eta}}(\bm{\eta}^\top\bm{y}),
\end{align*}
where $\bm{e}_j\in\mathbb{R}^{|S|}$ is a unit vector whose $j$-th element is 1 and 0 otherwise, and $\bm{z}_0$ is a realization of $\bm{z}$.
Note that the value of $P_j$ represents a right-side $p$-value conditional on the selection event under the null hypothesis $\text{H}_{0,j}:\beta_{S,j}^*=\bm{\eta}^\top \bm{\mu}=0$ in (\ref{eq;stest}).
In addition, for the $j$-th test in (\ref{eq;stest}), a two-sided selective $p$-value can be defined as
\begin{align*}
\tilde{P}_j=2\min\{P_j, 1-P_j \},
\end{align*}
which also follows the standard uniform distribution under the null hypothesis.
Therefore, we reject the $j$-th null hypothesis at level $\alpha$ when $\tilde{P}_j\leq \alpha$, and the probability
\begin{align}
\text{P}(\text{H}_{0,j}~\text{is falsely rejected}\mid \hat{S}=S)
=\text{P}(\tilde{P}_j\leq \alpha\mid \hat{S}=S)
\label{eq:sFPR}
\end{align}
is referred to as a {\it selective type I error}.
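For reference, the construction above admits a direct numerical
implementation; the following minimal sketch (Python with NumPy/SciPy;
\texttt{selective\_p} is a hypothetical helper name, and numerical
underflow of the truncated-normal tails is not handled) computes the
two-sided selective $p$-value from $(\bm{y},A,\bm{b},\bm{\eta},\Sigma)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def selective_p(y, A, b, eta, Sigma):
    """Two-sided selective p-value for H0: eta^T mu = 0, given the
    affine selection event {Ay <= b} and y ~ N(mu, Sigma)
    (polyhedral lemma of Lee et al.)."""
    s2 = eta @ Sigma @ eta                 # Var(eta^T y)
    c = Sigma @ eta / s2
    z = y - c * (eta @ y)
    Ac, resid = A @ c, b - A @ z
    L = np.max((resid / Ac)[Ac < 0], initial=-np.inf)
    U = np.min((resid / Ac)[Ac > 0], initial=np.inf)
    sd = np.sqrt(s2)
    den = norm.cdf(U / sd) - norm.cdf(L / sd)
    F = (norm.cdf(eta @ y / sd) - norm.cdf(L / sd)) / den
    return 2.0 * min(F, 1.0 - F)
\end{verbatim}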
\subsection{Related Works}
In selective inference, we use the same data in variable selection and statistical inference.
Therefore, the selected model is not deterministic and we cannot apply classical hypothesis testing due to selection bias.
To navigate this problem, {\it data splitting} has been commonly utilized.
In data splitting, the data are randomly divided into two disjoint sets, and one of them is used for variable selection and the other is used for hypothesis testing.
This is a particularly versatile method and is widely applicable if we can divide the data randomly \citep[see e.g.,][]{Cox75, WasRoe09, Mei09}.
Since the data are split randomly, i.e., independent of the data, we can conduct hypothesis testing in the inference step independent of the selection step.
Thus, we do not need to be concerned with selection bias.
It is noteworthy that data splitting can be viewed as a method of selective inference because the inference is conducted only for the selected variables in the selection step.
However, a drawback of data splitting is that only a part of the data are available for each split, precisely because the essence of this approach involves rendering some data available for the selection step and the remainder for the inference step.
Because only a subset of the data can be used in variable selection, the risk of failing to select truly important variables increases.
Similarly, the power of hypothesis testing would decrease since inference proceeds on the basis of a subset of the total data.
In addition, since data splitting is executed at random, it is possible and plausible that the final results and conclusions will vary non-trivially depending on exactly how this split is manifested.
On the other hand, in the traditional statistical community, {\it simultaneous inference} has been developed for correcting selection bias \citep[see e.g.,][]{Ber13, Dic14}.
In simultaneous inference, type I error is controlled at level $\alpha$ by considering all possible subsets of variables.
Specifically, let $\hat{S}\subseteq \{1,\ldots, d\}$ be the set of variables selected by a certain variable selection method and $P_j(\hat{S})$ be a $p$-value for the $j$-th selected variable in $\hat{S}$.
Then, in simultaneous inference, the following type I error should be adequately controlled:
\begin{align}
{\rm P}(P_j(\hat{S})\leq \alpha ~\text{for any}~\hat{S}\subseteq\{1,\ldots,d\})\leq \alpha.
\label{eq:FPR}
\end{align}
To examine the relationship between selective inference and simultaneous inference, note that the left-hand side in (\ref{eq:FPR}) can be rewritten as
\begin{align*}
&{\rm P}(P_j(\hat{S})\leq \alpha ~\text{for any}~\hat{S}\subseteq\{1,\ldots,d\}) \\
&=\sum_{S\subseteq\{1,\ldots,d\}}{\rm P}(P_j(S)\leq \alpha \mid \hat{S}=S){\rm P}(\hat{S}=S).
\end{align*}
The right-hand side in the above equality is simply a weighted sum of selective type I errors over all possible subsets of variables.
Therefore, if we control selected type I errors for all possible subsets of variables, we can also control type I errors in the sense of simultaneous inference.
However, because the number of all possible subsets of variables is $2^d$, it becomes overly cumbersome to compute the left-hand side in (\ref{eq:FPR}) even for $d=20$.
In contrast to simultaneous inference, selective inference only considers the selected variables, and thus the computational cost is low compared to simultaneous inference.
Following the seminal work of \cite{Lee16}, selective inference for variable selection has been intensively studied \citep[e.g.,][]{FitSunTay14, LeeTay14, taylor2016inference, tian2018selective}.
All these methods, however, rely on the assumption of normality of the data.
\subsection{Beyond Normality}
\label{subsec:non-normality}
It is important to relax the assumption of normality in order to apply selective inference to more general cases such as generalized linear models.
To the best of our knowledge, there is death of research into selective inference in such a generalized setting.
Here, we discuss the few studies which do exist in this respect.
\cite{FitSunTay14} derived an exact post-selection inference for a natural parameter of exponential family, and obtained the uniformly most powerful unbiased test in the framework of selective inference.
However, as suggested in their paper, the difficulty in constructing exact inference in generalized linear models emanates from the discreteness of the response distribution.
Focusing on an asymptotic behavior in a generalized linear model context with the Lasso penalty, \cite{TiaTay15} directly considered the asymptotic property of a pivotal quantity.
Although their work can be applied in high dimensional scenarios, we can only test a global null, that is, ${\rm H}_0:\bm{\beta}^*=\bm{0}$, except for the linear regression model case.
This is because, when we conduct selective inference for an individual coefficient, the selection event does not form a simple structure such as an affine set.
On the other hand, \cite{Tay16} proposed a procedure to test individual hypotheses fin logistic regression model context based on the Lasso.
Their approach is fundamentally based on solving the Lasso by approximating the log-likelihood up to the second order, and on debiasing the Lasso estimator.
Because the objective function now becomes quadratic as per the linear regression model, the selection event reduces to a relatively simple affine set.
After debiasing the Lasso estimator, they derive an asymptotic joint distribution of active and inactive estimators.
However, since they required $d$ dimensional asymptotics, high dimensional scenarios cannot be supported by their theory.
In this paper, we extend selective inference for logistic regression in \cite{Tay16} to high dimensional settings in the case where variable selection is conducted by marginal screening.
We do not consider asymptotics for a $d$ dimensional original parameter space, but for a $K$ dimensional selected parameter space.
Unfortunately, however, we cannot apply this asymptotic result directly to the polyhedral lemma (Lemma \ref{lem1}) in \cite{Lee16}.
To tackle this problem, we consider a score function for constructing a test statistic for our selective inference framework.
We first define a function $\bm{T}_n(\bm{\beta}_{S}^*)$ based on a score function as a ``source'' for constructing a test statistic.
To apply the polyhedral lemma to $\bm{T}_n(\bm{\beta}_{S}^*)$, we need to asymptotically ensure that
i) the selection event is represented by affine constraints with respect to $\bm{T}_n(\bm{\beta}_{S}^*)$, and
ii) the function in the form of $\bm \eta^\top \bm{T}_n(\bm{\beta}_{S}^*)$ is independent of the truncation points.
Our main technical contribution herein is that, by carefully analyzing problem configuration and by introducing reasonable additional assumptions, we can show that those two requirements for the polyhedral lemma are satisfied asymptotically.
Figure \ref{fig:selective_p} shows the asymptotic distribution of selective $p$-values in our setting and in \cite{Tay16} based on 1,000 Monte-Carlo simulation.
While the theory in \cite{Tay16} does not support high dimensionality, their selective $p$-value (red solid line) appears to be effective in high dimensional scenarios, although it is slightly more conservative compared to the approach developed in this paper (black solid line).
Our high dimensional framework means that the number of selected variables grows with the sample size in an appropriate order, and the proposed method allows us to test (\ref{eq;stest}) individually even in high dimensional contexts.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.45, bb=0 0 360 360]{pval3000.pdf}
\end{center}
\caption{Comparison between empirical distributions of selective $p$-values in (\ref{eq;pval}) (black solid line) and \cite{Tay16} (red solid line).
The dashed line shows the cumulative distribution function of the standard uniform distribution.
Data were simulated for $n=50$ and $d=3{,}000$ under the global null and $x_{ij}$ was independently generated from a normal distribution $\text{N}(0, 1)$.
Our proposed method appears to offer superior approximation accuracy compared to the extant alternative.
}
\label{fig:selective_p}
\end{figure*}
\section{Setting and Assumptions}
\label{sec:setting}
As already noted, our objective herein is to develop a selective inference approach applicable to logistic regression models when the variables are selected by marginal screening.
Let $(y_i,\bm{x}_i)$ be the $i$-th pair of the response and regressor.
We assume that the $y_i$'s are independent random variables which take values in $\{0,1\}$, and the $\bm{x}_i$'s are $d$ dimensional vectors of known constants.
Further, let $X=(\bm{x}_1,\ldots,\bm{x}_n)^\top\in\mathbb{R}^{n\times d}$ and $\bm{y}=(y_1,\ldots,y_n)^\top \in\{0,1\}^n$.
Unlike \cite{Tay16}, we do not require that the dimension $d$ be fixed; that is, $d$ may increase along with the sample size $n$.
\subsection{Marginal Screening and Selection Event}
\label{subsec:MS}
In this study, we simply select variables based on the scores $\bm{z}=X^\top\bm{y}$ between the regressors and the response, as in a linear regression problem.
Specifically, we select the top $K$ coordinates of absolute values in $\bm{z}$, that is,
\begin{align*}
\hat{S}=\{j;|z_j|~ \text{is among the first}~ K~ \text{largest of all}\}.
\end{align*}
To avoid computational issues, we consider the event $\{(\hat{S}, \bm{s}_{\hat{S}})=(S, \bm{s}_S)\}$ as a selection event (see, e.g., \cite{LeeTay14, TiaTay15, Lee16}).
Here, $\bm{s}_S$ is the vector of signs of $z_j~(j\in S)$.
Then, the selection event $\{(\hat{S}, \bm{s}_{\hat{S}})=(S, \bm{s}_S)\}$ can be rewritten as
\begin{align*}
|z_j|\geq |z_k|,
\qquad
\forall (j,k)\in S\times S^\bot,
\end{align*}
which is equivalent to
\begin{align*}
-s_jz_j\leq z_k\leq s_jz_j,
\quad
s_j z_j\geq 0,
\qquad
\forall(j,k)\in S\times S^\bot.
\end{align*}
Therefore, $\{(\hat{S}, \bm{s}_{\hat{S}})=(S, \bm{s}_S)\}$ is reduced to an affine set $\{\bm{z};\;A\bm{z}\leq \bm{0}\}$ for an appropriate $\{2K(d-K)+K\}\times d$ dimensional matrix $A$.
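To make the construction concrete, the following sketch (Python;
\texttt{marginal\_screening\_event} is a hypothetical helper name)
performs the screening and assembles the $\{2K(d-K)+K\}\times d$
matrix $A$ row by row:
\begin{verbatim}
import numpy as np

def marginal_screening_event(X, y, K):
    """Select the top-K coordinates of z = X^T y and build the matrix
    A of the affine representation {z : Az <= 0} of the selection
    event {(S_hat, s_hat) = (S, s)}."""
    d = X.shape[1]
    z = X.T @ y
    S = np.argsort(-np.abs(z))[:K]
    s = np.sign(z[S])
    comp = np.setdiff1d(np.arange(d), S)
    rows = []
    for j, sj in zip(S, s):
        for k in comp:
            for sgn in (1.0, -1.0):   # -s_j z_j <= z_k <= s_j z_j
                a = np.zeros(d)
                a[j], a[k] = -sj, sgn
                rows.append(a)
        a = np.zeros(d)               # s_j z_j >= 0
        a[j] = -sj
        rows.append(a)
    return S, s, np.vstack(rows)
\end{verbatim}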
In the following, we assume that a sure screening property holds.
This is a desirable property for variable selection \citep[see e.g.,][]{FanLv08, FanSon10}, and the statement is as follows:
\begin{description}
\item[(C0)] For the true active set $S^*=\{j;\beta_j^*\neq 0\}$, the probability ${\rm P}(\hat{S}\supset S^*)$ converges to 1 as $n$ goes to infinity.
\end{description}
In the above assumption, we denote by $\bm{\beta}^*\in\mathbb{R}^d$ the true value of the coefficient vector.
This assumption requires that the set of selected variables contain the set of true active variables with probability tending to 1.
In the linear regression model, (C0) holds under some regularity conditions in high dimensional settings \cite[see, e.g.,][]{FanLv08}.
A sufficient condition concerning high dimensionality for (C0) is $\log d={\rm O}(n^\xi)$ for some $\xi\in(0,1/2)$, and thus we allow $d$ to be exponentially large in $n$.
Because (C0) is not directly related to selective inference, we do not discuss it further.
\subsection{Selective Test}
For a subset of variables $\hat{S}~(=S)$ selected by marginal screening, we consider $K$ selective tests (\ref{eq;stest}) for each variable $\beta_j^*,~j\in S$.
Let us define the loss function of logistic regression with the selected variables as follows:
\begin{align}
\ell_n(\bm{\beta}_S)
=\sum_{i=1}^n\{y_i\bm{x}_{S,i}^\top\bm{\beta}_S-\psi(\bm{x}_{S,i}^\top\bm{\beta}_S)\},
\label{eq;loss}
\end{align}
where $\psi(\bm{x}_{S,i}^\top\bm{\beta}_S)=\log(1+\exp(\bm{x}_{S,i}^\top\bm{\beta}_S))$ is a cumulant generating function.
Observe that $\ell_n(\bm{\beta}_S)$ is concave with respect to $\bm{\beta}_S$.
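In fact, writing $\psi(t)=\log(1+e^t)$, direct differentiation gives
\begin{align*}
\psi'(t)=\frac{e^t}{1+e^t},
\qquad
\psi''(t)=\psi'(t)\{1-\psi'(t)\}\in(0,1/4],
\end{align*}
so that $\ell''_n(\bm{\beta}_S)=-\sum_{i=1}^n\psi''(\bm{x}_{S,i}^\top\bm{\beta}_S)\bm{x}_{S,i}\bm{x}_{S,i}^\top$ is negative semidefinite; these derivatives are used repeatedly below.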
Thus we can define the maximum likelihood estimator of $\bm{\beta}_S$ as the optimal solution that attains the maximum of the following optimization problem:
\begin{align}
\hat{\bm{\beta}}_S
=\mathop{\rm arg~max}\limits_{\bm{\beta}_S\in{\cal B}}\ell_n(\bm{\beta}_S),
\label{eq;estimator}
\end{align}
where ${\cal B}\subseteq\mathbb{R}^K$ is a parameter space.
\begin{rem}
Suppose that $S~(\supset S^*)$ is fixed.
Then, since $\beta_j^*=0$ for every $j\in S\setminus S^*$, it holds that
\begin{align*}
\psi'(\bm{x}_{S,i}^\top\bm{\beta}_S^*)=\psi'(\bm{x}_{S^*,i}^\top\bm{\beta}_{S^*}^*),~~~
\psi''(\bm{x}_{S,i}^\top\bm{\beta}_S^*)=\psi''(\bm{x}_{S^*,i}^\top\bm{\beta}_{S^*}^*),
\end{align*}
and thus, we have
\begin{align*}
{\rm P}(y_i=1)={\rm E}[y_i]=\psi'(\bm{x}_{S^*,i}^\top\bm{\beta}_{S^*}^*),~~~
{\rm V}[y_i]=\psi''(\bm{x}_{S^*,i}^\top\bm{\beta}_{S^*}^*).
\end{align*}
\end{rem}
We construct test statistics for (\ref{eq;stest}) by deriving an asymptotic distribution of $\hat{\bm{\beta}}_S$.
To develop our asymptotic theory, we further assume the following conditions in addition to (C0) for a fixed $S$ with $|S|=K$:
\begin{description}
\item[(C1)]
$\max_{i}\|\bm{x}_{S,i}\|={\rm O}(\sqrt{K})$.
In addition, for a $K\times K$ dimensional matrix
\begin{align*}
\Xi_{S,n}=\frac{1}{n}X_S^\top X_S=\frac{1}{n}\sum_{i=1}^{n}\bm{x}_{S,i}\bm{x}_{S,i}^\top \in\mathbb{R}^{K\times K},
\end{align*}
the following holds:
\begin{align*}
0<C_1<\lambda_{\rm min}(\Xi_{S,n})\leq \lambda_{\rm max}(\Xi_{S,n})<C_2<\infty,
\end{align*}
where $C_1$ and $C_2$ are constants that depend on neither $n$ nor $K$.
\item[(C2)]
There exists a constant $\xi\;(<\infty)$ such that $\max_i|\bm{x}_{S,i}^\top\bm{\beta}_S^*|<\xi$.
In addition, parameter space ${\cal B}$ is
\begin{align*}
{\cal B}=\{\bm{\beta}_S\in\mathbb{R}^{K};\max_i|\bm{x}_{S,i}^\top\bm{\beta}_S|<\tilde{\xi}\}
\end{align*}
for some constant $\tilde{\xi}\;(\in(\xi,\infty))$.
\item[(C3)]
$K^3/n={\rm o}(1)$.
\item[(C4)]
For any $p\times q$ dimensional matrix $A$, we denote the spectral norm of $A$ by $\|A\|=\sup_{\bm{v}\neq \bm{0}}\|A\bm{v}\|/\|\bm{v}\|$.
Then the following holds:
\begin{align*}
\Bigl\|\frac{1}{\sqrt{n}}X_{S^\bot}^\top X_S\Bigr\|={\rm O}(K).
\end{align*}
\end{description}
The condition (C1) pertains to the design matrix.
Note that the high dimensional, small sample size regime applies only to the original data set, and not to the selected variables.
Since the number of selected variables grows slowly relative to the sample size, this assumption on the selected design matrix is reasonable.
(C2) requires that ${\rm P}(y_i=1)$ not converge to 0 or 1 for any $i=1,\ldots,n$.
Observe that the parameter space ${\cal B}$ is an open and convex set with respect to $\bm{\beta}_S$.
This assumption naturally holds when the space of regressors is compact and $\bm{\beta}_S$ does not diverge.
In addition, if the maximum likelihood estimator $\hat{\bm{\beta}}_S$ is $\sqrt{n/K}$-consistent, then $\hat{\bm{\beta}}_S$ lies in ${\cal B}$ with probability converging to 1.
The condition (C3) represents the relationship between the sample size and the number of selected variables for high dimensional asymptotics in our model.
As related conditions, \cite{FanPen04} employs $K^5/n\to0$, and \cite{DasKhaGho14} employs $K^{6+\delta}/n\to0$ for some $\delta>0$ to derive an asymptotic expansion of a posterior distribution in a Bayesian setting.
Furthermore, \cite{Hub73} employs the same condition as in (C3) in the context of $M$-estimation.
Finally, (C4) requires that regressors of selected variables and those of unselected variables be only weakly correlated.
A similar assumption is required in \cite{Hua08} for deriving an asymptotic distribution for a bridge estimator.
This type of assumption, e.g., a restricted eigenvalue condition \citep{bickel2009simultaneous}, is essential for handling high dimensional behavior of the estimator.
\section{Proposed Method}
\label{sec:propose}
In this section, we present the proposed method for selective inference for high dimensional logistic regression with marginal screening.
We first consider a subset of features $\hat{S} = S (\supset S^*)$ as a fixed set, and derive an asymptotic distribution of $\hat{\bm{\beta}}_S$ under the assumptions (C1) -- (C3).
Then, we introduce the ``source'' of the test statistic $\bm{T}_n(\bm{\beta}_S^*)$, which is defined through the score function, and apply it to the polyhedral lemma, where we show under assumption (C4) that the truncation points are asymptotically independent of $\bm{\eta}^\top \bm{T}_n(\bm{\beta}_S^*)$.
As stated above, to extend the selective inference framework to logistic regression, we treat the selected subset $\hat{S}=S~(\supset S^*)$ as fixed.
From (\ref{eq;loss}), let us define a score function and observed information matrix by
\begin{align*}
\bm{s}_n(\bm{\beta}_S)
&=\frac{1}{\sqrt{n}}\ell'_n(\bm{\beta}_S)
=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bm{x}_{S,i}(y_i-\psi'(\bm{x}_{S,i}^\top\bm{\beta}_S))
\intertext{and}
\Sigma_n(\bm{\beta}_S)
&=-\frac{1}{n}\ell''_n(\bm{\beta}_S)
=\frac{1}{n}\sum_{i=1}^{n}\psi''(\bm{x}_{S,i}^\top\bm{\beta}_S)\bm{x}_{S,i}\bm{x}_{S,i}^\top,
\end{align*}
respectively.
To simplify the notation, we write $\bm{s}_n=\bm{s}_n(\bm{\beta}_S^*)$ and $\Sigma_n=\Sigma_n(\bm{\beta}_S^*)$ when these quantities are evaluated at the true value $\bm{\beta}_S^*$.
Because $\psi''(\bm{x}_{S,i}^\top\bm{\beta}_S^*)$ is uniformly bounded away from zero under (C2), $\Sigma_n$ is a symmetric and positive definite matrix when (C1) holds.
Then, by the same argument as in \cite{FanPen04}, if $K^2/n\to 0$ (which is implied by (C3)), we have
\begin{align}
\|\hat{\bm{\beta}}_S-\bm{\beta}_S^*\|={\rm O}_{{\rm p}}(\sqrt{K/n}).
\label{eq;consistency}
\end{align}
By using Taylor's theorem, we have
\begin{align*}
\bm{0}
=\ell'_n(\hat{\bm{\beta}}_S)
\approx \sqrt{n}\bm{s}_n-n\Sigma_n(\hat{\bm{\beta}}_S-\bm{\beta}_S^*),
\end{align*}
and thus
\begin{align*}
\sqrt{n}(\hat{\bm{\beta}}_S-\bm{\beta}_S^*)
\approx \Sigma_n^{-1}\bm{s}_n.
\end{align*}
As per Remark 1, $S\supset S^*$ implies
\begin{align*}
{\rm E}[\bm{s}_n]
=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bm{x}_{S,i}({\rm E}[y_i]-\psi'(\bm{x}_{S,i}^\top\bm{\beta}_S^*))
=\bm{0}.
\end{align*}
In addition, because the $y_i$'s are independent of each other, we observe that
\begin{align*}
{\rm V}[\bm{s}_n]
=\frac{1}{n}\sum_{i=1}^{n}{\rm V}[y_i]\bm{x}_{S,i}\bm{x}_{S,i}^\top
=\Sigma_n.
\end{align*}
Therefore, by recalling the asymptotic normality of the score function, we expect that the distribution of $\Sigma_n^{-1}\bm{s}_n$ can be approximated by a normal distribution with mean $\bm{0}$ and covariance matrix $\Sigma_n^{-1}$.
Indeed, if $S$ is fixed, this approximation is true under the conditions (C1) -- (C3):
\begin{thm}
\label{thm1}
Suppose that the conditions (C1) -- (C3) hold.
Then, for any fixed $S~(\supset S^*)$ and $\bm{\eta}\in\mathbb{R}^K$ with $\|\bm{\eta}\|<\infty$, we have
\begin{align}
\sqrt{n}\sigma_n^{-1}\bm{\eta}^\top(\hat{\bm{\beta}}_S-\bm{\beta}^*_S)
=\sigma_n^{-1}\bm{\eta}^\top\Sigma_n^{-1}\bm{s}_n+{\rm o}_{\rm p}(1) \stackrel{{\rm d}}{\to}
{\rm N}(0,1),
\label{eq;adist}
\end{align}
where $\sigma_n^2=\bm{\eta}^\top\Sigma_n^{-1}\bm{\eta}$ and ${\rm o}_{{\rm p}}(1)$ is a term that converges to 0 in probability uniformly with respect to $\bm{\eta}$ and $S$.
\end{thm}
Note that, under the conditions (C1), (C2) and $d^3/n\to 0$, Theorem \ref{thm1} also holds when we do not enforce variable selection (see e.g., \cite{FanPen04}).
To formulate a selective test, let us consider
\begin{align}
\bm{T}_n(\bm{\beta}_S^*)
=\Sigma_n^{-1}\bm{s}_n
=\Sigma_n^{-1}\left\{
\frac{1}{\sqrt{n}}X_S^\top(\bm{y}-\bm{\psi}'(\bm{\beta}_S^*))
\right\}
\label{eq;test_stat}
\end{align}
as a ``source" of a test statistic, where $\bm{\psi}'(\bm{\beta}_S^*)=(\psi'(\bm{x}_{S,i}^\top\bm{\beta}_S^*))_{i=1,\ldots, n}$.
The term ``source" means that we cannot use it as a test statistic directly because $\bm{T}_n(\bm{\beta}_S^*)$ depends on $\bm{\beta}_S^*$.
In the following, for notational simplicity, we denote $\bm{T}_n(\bm{\beta}_S^*)$ and $\bm{\psi}'(\bm{\beta}_S^*)$ by $\bm{T}_n$ and $\bm{\psi}'$, respectively.
As noted in Section \ref{subsec:MS}, by using an appropriate non-random matrix $A\in\mathbb{R}^{K(2d-2K+1)\times d}$, the marginal screening selection event can be expressed as an affine constraint with respect to $\bm{z}=X^\top\bm{y}$, that is, $\{\bm{z};A\bm{z}\leq \bm{0}\}$.
Then, by appropriately dividing $A$ and $X$ based on the selected $S$, we can rewrite it as follows:
\begin{align*}
A\bm{z}\leq \bm{0}
\quad
\Leftrightarrow
\quad
A_SX_S^\top\bm{y}+A_{S^\bot}X_{S^\bot}^\top\bm{y}\leq \bm{0}
\quad
\Leftrightarrow
\quad
\tilde{A}\bm{T}_n\leq \tilde{\bm{b}}.
\end{align*}
The last equivalence follows from the identity $X_S^\top(\bm{y}-\bm{\psi}')=\sqrt{n}\Sigma_n\bm{T}_n$, which is immediate from (\ref{eq;test_stat}); it yields an affine constraint with respect to $\bm{T}_n$, where
\begin{align*}
\tilde{A}=A_S\Sigma_n
\qquad
\text{and}
\qquad
\tilde{\bm{b}}=-\frac{1}{\sqrt{n}}(A_SX_S^\top\bm{\psi}'+A_{S^\bot}X_{S^\bot}^\top\bm{y}).
\end{align*}
Unlike the polyhedral lemma in Section \ref{subsec:SI_in_LR}, $\tilde{\bm{b}}$ depends on $\bm{y}$ and so is a random vector.
By using (C4), we can prove that $\tilde{\bm{b}}$ is asymptotically independent of $\bm{\eta}^\top\bm{T}_n$, which implies the polyhedral lemma holds asymptotically.
\begin{thm}
\label{thm2}
Suppose that (C1) -- (C4) all hold.
Let $\bm{c}=\Sigma_n^{-1}\bm{\eta}/\sigma_n^2$ for any $\bm{\eta}\in\mathbb{R}^K$ with $\|\bm{\eta}\|<\infty$, and $\bm{w}=(I_K-\bm{c}\bm{\eta}^\top)\bm{T}_n$, where $\sigma_n^2=\bm{\eta}^\top\Sigma_n^{-1}\bm{\eta}$.
Then, for any fixed $S~(\supset S^*)$, the selection event can be expressed as
\begin{align*}
\{\bm{T};\tilde{A}\bm{T}\leq \tilde{\bm{b}}\}
=\{\bm{T};L_n\leq \bm{\eta}^\top\bm{T}\leq U_n,N_n=0\},
\end{align*}
where
\begin{align}
L_n
=\max_{l:(\tilde{A}\bm{c})_l<0}\frac{\tilde{b}_l-(\tilde{A}\bm{w})_l}{(\tilde{A}\bm{c})_l},
\qquad
U_n
=\min_{l:(\tilde{A}\bm{c})_l>0}\frac{\tilde{b}_l-(\tilde{A}\bm{w})_l}{(\tilde{A}\bm{c})_l},
\label{eq;truncation}
\end{align}
and $N_n=\max_{l:(\tilde{A}\bm{c})_l=0}\tilde{b}_l-(\tilde{A}\bm{w})_l$.
In addition, $(L_n,U_n,N_n)$ is asymptotically independent of $\bm{\eta}^\top\bm{T}_n$.
\end{thm}
As a result of Theorem \ref{thm1}, Theorem \ref{thm2}, and (C0), we can asymptotically construct a pivotal quantity based on a truncated normal distribution; that is, letting $\bm{\eta}=\bm{e}_j\in\mathbb{R}^K$, we have
\begin{align*}
\left[F^{[L_n,U_n]}_{0,\sigma_n^2}(\bm{\eta}^\top\bm{T}_n) \mid \tilde{A}\bm{T}_n\leq\tilde{\bm{b}}\right]
\stackrel{{\rm d}}{\to}{\rm Unif}(0,1)
\end{align*}
for any $\bm{w}$, under ${\rm H}_{0,j}$.
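Here, as in \cite{Lee16}, $F^{[L,U]}_{0,\sigma^2}$ denotes the cumulative distribution function of a ${\rm N}(0,\sigma^2)$ random variable truncated to $[L,U]$; writing $\Phi$ for the standard normal cumulative distribution function, it is given explicitly by
\begin{align*}
F^{[L,U]}_{0,\sigma^2}(x)
=\frac{\Phi(x/\sigma)-\Phi(L/\sigma)}{\Phi(U/\sigma)-\Phi(L/\sigma)},
\qquad x\in[L,U].
\end{align*}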
Therefore, we can define an asymptotic selective $p$-value for selective test (\ref{eq;stest}) under ${\rm H}_{0,j}$ as follows:
\begin{align}
P_{n,j}=2\min\Bigl\{F^{[L_n,U_n]}_{0,\sigma_n^2}(\bm{\eta}^\top\bm{T}_n), 1-F^{[L_n,U_n]}_{0,\sigma_n^2}(\bm{\eta}^\top\bm{T}_n) \Bigr\},
\label{eq;pval}
\end{align}
where $L_n$ and $U_n$ are evaluated at the realization of $\bm{w}=\bm{w}_0$.
Unfortunately, because $\bm{T}_n$, $\Sigma_n$, $L_n$ and $U_n$ are still dependent on the true value of $\bm{\beta}_S^*$, we construct a test statistic by introducing a maximum likelihood estimator (\ref{eq;estimator}), which is a consistent estimator of $\bm{\beta}_S^*$.
\subsection{Computing Truncation Points}
In practice, we need to compute truncation points in (\ref{eq;truncation}).
When we utilize marginal screening for variable selection, it becomes difficult to compute $L_n$ and $U_n$ because $\tilde{A}$ becomes a $\{2K(d-K)+K\}\times K$ dimensional matrix.
For example, even when $d=1{,}000$ and $K=20$, we need to handle a 39,220 dimensional vector.
To reduce the computational burden, we derive a simple form of (\ref{eq;truncation}) in this section.
We first derive $A_S$.
As noted in Section \ref{subsec:MS}, the selection event $\{(\hat{S}, \bm{s}_{\hat{S}})=(S, \bm{s}_S)\}$ can be rewritten as
\begin{align*}
-s_jz_j\leq z_k\leq s_jz_j,~s_j z_j\geq 0,
~~~~~
\forall (j,k)\in S\times S^\bot,
\end{align*}
where $s_j={\rm sgn}(z_j)$ is the sign of the $j$-th element of $\bm{z}=X^\top\bm{y}$.
Let $S=\{j_1,\ldots, j_K\}$ and $q=2(d-K)+1$.
Then, by a simple calculation, we have
\begin{align*}
A_S
=\left( \begin{array}{ccc}
-s_{j_1}\bm{1}_{q}&&O \\
&\ddots& \\
O&& -s_{j_K}\bm{1}_{q}
\end{array} \right)
=-J\otimes\bm{1}_{q},
\end{align*}
where $J$ is a $K\times K$ dimensional diagonal matrix whose $j$-th diagonal element is $s_j$ and $\otimes$ denotes a Kronecker product.
Since $\tilde{A}=A_S\Sigma_n$ and $\bm{c}=\Sigma_n^{-1}\bm{\eta}/\sigma_n^2$, the denominator in (\ref{eq;truncation}) reduces to $\tilde{A}\bm{c}=A_S\bm{\eta}/\sigma_n^2$.
For $\bm{\eta}=\bm{e}_j$, we can further evaluate $A_S\bm{\eta}$ as
\begin{align*}
A_S\bm{\eta}=-s_j(\bm{0}_{(j-1)q}^\top,\bm{1}_{q}^\top,\bm{0}_{(K-j)q}^\top)^\top\in\mathbb{R}^{Kq}.
\end{align*}
Further, by the definition of $\tilde{A},~\tilde{\bm{b}}$, and $\bm{w}$, we have
\begin{align*}
\tilde{\bm{b}}-\tilde{A}\bm{w}
=\tilde{\bm{b}}-\tilde{A}\bm{T}_n+(\bm{\eta}^\top\bm{T}_n)\tilde{A}\bm{c}
=-\frac{1}{\sqrt{n}}A\bm{z}+T_{n,j}\tilde{A}\bm{c}.
\end{align*}
Because $\sigma_n^2$, the $j$-th diagonal element of $\Sigma_n^{-1}$, is positive, it is straightforward to observe that
\begin{align*}
\{l:(\tilde{A}\bm{c})_l<0\}
=
\begin{cases}
\{(j-1)q+1,\ldots,jq\}, & \text{if}~s_j=1 \\
\emptyset, & \text{otherwise}
\end{cases}
\end{align*}
for $j=1,\ldots,K$.
Note that, for each $j=1,\ldots,K$, $(A\bm{z})_{l=(j-1)q+1,\ldots,jq}$ consists of the $q$ elements $-s_jz_j$ and $-s_jz_j\pm z_k$ for $k\in S^\bot$.
Therefore, since $s_jz_j=|z_j|$, for each $j=1,\ldots,K$ we have
\begin{align*}
\max_{l=(j-1)q+1,\ldots,jq}(A\bm{z})_l
=\max_{k\in S^\bot}\{-|z_j|,\,-|z_j|\pm z_k\}
=-|z_j|+\max_{k\in S^\bot}|z_k|.
\end{align*}
As a consequence, we obtain
\begin{align}
L_n
&=\max_{l:(\tilde{A}\bm{c})_l<0}\frac{\tilde{b}_l-(\tilde{A}\bm{w})_l}{(\tilde{A}\bm{c})_l} \nonumber \\
&=\max_{l:(A_S\bm{\eta})_l<0}\frac{-(A\bm{z})_l/\sqrt{n}}{(A_S\bm{\eta})_l/\sigma_n^2}+T_{n,j} \nonumber \\
&=\frac{\sigma_n^2}{\sqrt{n}} \max_{l=(j-1)q+1,\ldots,jq}(A\bm{z})_l+T_{n,j} \nonumber \\
&=\frac{\sigma_n^2}{\sqrt{n}}\Bigl(\max_{k\in S^\bot}|z_k|-|z_j|\Bigr)+T_{n,j},
\label{eq;lower}
\end{align}
if $s_j=1$, and $L_n=-\infty$, otherwise.
Similarly, we obtain
\begin{align}
U_n
&=\min_{l:(\tilde{A}\bm{c})_l>0}\frac{\tilde{b}_l-(\tilde{A}\bm{w})_l}{(\tilde{A}\bm{c})_l} \nonumber \\
&=\min_{l:(A_S\bm{\eta})_l>0}\frac{-(A\bm{z})_l/\sqrt{n}}{(A_S\bm{\eta})_l/\sigma_n^2}+T_{n,j} \nonumber \\
&=-\frac{\sigma_n^2}{\sqrt{n}} \max_{l=(j-1)q+1,\ldots,jq}(A\bm{z})_l+T_{n,j} \nonumber \\
&=\frac{\sigma_n^2}{\sqrt{n}}(|z_j|-\max_{k\in S^\bot}|z_k|)+T_{n,j},
\label{eq;upper}
\end{align}
if $s_j=-1$, and $U_n=\infty$, otherwise.
Because of this simple form, we can calculate truncation points efficiently.
We summarize the procedure for computing the selective $p$-values of the $K$ selective tests in Algorithm \ref{alg1}.
\begin{algorithm}[t]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Data $(\bm{y},X)\in\{0,1\}^n\times\mathbb{R}^{n\times d}$, \# of selected variables $K$}
\Output{Selective $p$-values for the $K$ selective tests (\ref{eq;stest})}
{
$\bm{z}\leftarrow X^\top\bm{y}$\;
$S\leftarrow \{j;|z_j|~ \text{is among the first}~ K~ \text{largest of all}\}$\;
$\hat{\bm{\beta}}_S\leftarrow \mathop{\rm arg~max}\limits_{\bm{\beta}_S\in{\cal B}}\ell_n(\bm{\beta}_S)$\;
$\bm{p}\leftarrow \bm{0}$\;
}
\For{$j=1,\ldots,K$}
{
$\bm{\eta}\leftarrow \bm{e}_j$\;
Compute $\bm{\eta}^\top\bm{T}_n,\sigma_n^2, L_n$ and $U_n$ based on (\ref{eq;lower}) and (\ref{eq;upper})\;
$p_j\leftarrow2\min\Bigl\{F^{[L_n,U_n]}_{0,\sigma_n^2}(\bm{\eta}^\top\bm{T}_n), 1-F^{[L_n,U_n]}_{0,\sigma_n^2}(\bm{\eta}^\top\bm{T}_n) \Bigr\}$
}
{\bf Return} $\bm{p}\in[0,1]^K$
\caption{Selective Inference for Classification}
\label{alg1}
\end{algorithm}
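To make the procedure concrete, the following is a minimal {\tt Python} sketch of Algorithm \ref{alg1} (an illustration under the assumptions above, not a reference implementation): the unknown $\bm{\beta}_S^*$ is replaced by the maximum likelihood estimator, so that $\bm{\eta}^\top\bm{T}_n$ is approximated by $\sqrt{n}\hat{\beta}_j$, and the simplified truncation points (\ref{eq;lower}) and (\ref{eq;upper}) are used.
\begin{verbatim}
# Minimal sketch of Algorithm 1 (illustration only).  beta*_S is
# replaced by the MLE, so eta^T T_n is approximated by sqrt(n)*betahat_j.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def logistic_mle(X, y):
    # Maximize the logistic log-likelihood over beta (no intercept).
    nll = lambda b: -np.sum(y * (X @ b) - np.logaddexp(0.0, X @ b))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

def tn_cdf(x, sd, lo, hi):
    # CDF of N(0, sd^2) truncated to the interval [lo, hi].
    a, b = norm.cdf(lo / sd), norm.cdf(hi / sd)
    return (norm.cdf(x / sd) - a) / (b - a)

def asics(X, y, K):
    n, d = X.shape
    z = X.T @ y                                 # marginal scores
    S = np.argsort(-np.abs(z))[:K]              # top-K screening
    out = np.setdiff1d(np.arange(d), S)
    XS = X[:, S]
    bh = logistic_mle(XS, y)
    mu = 1.0 / (1.0 + np.exp(-(XS @ bh)))       # psi'(x^T betahat)
    Sigma = XS.T @ (XS * (mu * (1 - mu))[:, None]) / n
    Sinv = np.linalg.inv(Sigma)
    T = np.sqrt(n) * bh                         # plug-in source statistic
    zmax_out = np.max(np.abs(z[out]))
    p = np.empty(K)
    for j in range(K):
        sd = np.sqrt(Sinv[j, j])
        # One-sided truncation; the gap is nonnegative because |z_j|
        # is among the K largest absolute scores.
        gap = sd**2 / np.sqrt(n) * (np.abs(z[S[j]]) - zmax_out)
        L, U = (T[j] - gap, np.inf) if z[S[j]] > 0 else (-np.inf, T[j] + gap)
        F = tn_cdf(T[j], sd, L, U)
        p[j] = 2.0 * min(F, 1.0 - F)
    return S, p
\end{verbatim}
In our experiments we instead fit the logistic model with the {\tt glm} function in {\tt R}; the sketch above is only meant to clarify the flow of Algorithm \ref{alg1}.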
\subsection{Controlling Family-wise Error Rate}
\label{subsec:FWER}
Since selective test (\ref{eq;stest}) consists of $K$ hypotheses, we may be concerned about multiplicity when $K>1$.
In this case, instead of selective type I error, we control the {\it family-wise error rate} (FWER) in the sense of selective inference and we term it selective FWER.
For the selected variables $\hat{S}=S$, let us denote the family of true nulls by ${\cal H}=\{{\rm H}_{0,j}:j\in S~\text{and}~{\rm H}_{0,j}~\text{is a true null}\}$.
Then, let us define the selective FWER by
\begin{align}
{\rm sFWER}
={\rm P}(\text{at least one}~{\rm H}_{0,j}\in{\cal H}~\text{is rejected}\mid \hat{S}=S)
\label{eq;FWER}
\end{align}
in the same way as the classic FWER.
Next, we asymptotically control the selective FWER at level $\alpha$ by utilizing Bonferroni correction for $K$ selective tests.
Specifically, we adjust selective $p$-values (\ref{eq;pval}) as follows.
Let us define $\tilde{\alpha}=\alpha/K$.
Since selective $p$-value $P_{n,j}$ is asymptotically distributed according to ${\rm Unif}(0,1)$, we have that a limit superior of (\ref{eq;FWER}) can be bounded as follows:
\begin{align*}
\limsup_{n\to\infty}{\rm P}\Bigl(\bigcup_{j:{\rm H}_{0,j}\in{\cal H}}\{P_{n,j}\leq \tilde{\alpha} \} \mid \hat{S}=S\Bigr)
&\leq \limsup_{n\to\infty}\sum_{j:{\rm H}_{0,j}\in{\cal H}}{\rm P}(P_{n,j}\leq \tilde{\alpha} \mid \hat{S}=S) \\
&\leq \sum_{j:{\rm H}_{0,j}\in{\cal H}}\limsup_{n\to\infty}{\rm P}(P_{n,j}\leq \tilde{\alpha} \mid \hat{S}=S) \\
&\leq |{\cal H}|\tilde{\alpha}
\leq \alpha.
\end{align*}
In the last inequality, we simply use $|{\cal H}|\leq K$.
Accordingly, letting $p_{n,j}$ be a realization of (\ref{eq;pval}), we reject a null hypothesis when $\{p_{n,j}\leq \tilde{\alpha} \}$.
In the following, we refer to $\tilde{p}_{n,j}=\min\{1,Kp_{n,j}\}$ as an {\it adjusted selective $p$-value}.
Note that we can utilize not only Bonferroni's method but also other methods for correcting multiplicity, such as Scheff\'{e}'s method, Holm's method, and so on.
We use Bonferroni's method for expository purposes.
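For instance, with $K=10$ and $\alpha=0.05$, each selective test is carried out at level $\tilde{\alpha}=0.005$; equivalently, ${\rm H}_{0,j}$ is rejected when the adjusted selective $p$-value $\tilde{p}_{n,j}=\min\{1,10p_{n,j}\}$ is at most $0.05$.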
\section{Simulation Study}
\label{sec:simulation}
Through simulation studies, we explore the performance of the proposed method in Section \ref{sec:propose}, which we term ASICs (Asymptotic Selective Inference for Classification) here.
We first identify if the ASICs can control selective type I error.
We also check the selective type I error when data splitting (DS) and nominal test (NT) methods are used.
In DS, we first randomly divide the data into two disjoint sets.
Then, after selecting $\hat{S}=S$ with $|S|=K$ by using one of these sets, we construct a test statistic $\bm{T}_n(\hat{\bm{\beta}}_S)$ based on the other set and reject the $j$-th selective test (\ref{eq;stest}) when $|T_{n,j}/\sigma_n|\geq z_{\alpha/2}$, where $z_{\alpha/2}$ is the upper $\alpha/2$-percentile of the standard normal distribution.
In NT, we cannot control type I errors since selection bias is ignored:
it selects $K$ variables by marginal screening first, then rejects the $j$-th selective test (\ref{eq;stest}) when $|T_{n,j}/\sigma_n|\geq z_{\alpha/2}$, where the entire data set is used for both selection and inference steps.
Finally, we explore whether the ASICs can effectively control selective FWER, and at the same time, confirm its statistical power by comparing it with that of DS.
The simulation settings are as follows.
As $d$ dimensional regressor $\bm{x}_i$ ($i=1,\ldots,n$), we used vectors obtained from ${\rm N}(\bm{0},\Sigma)$, where $\Sigma$ is a $d\times d$ dimensional covariance matrix whose $(j,k)$-th element is set to $\rho^{|j-k|}$.
We set $\rho=0$ or $0.5$ in Case 1 and Case 2, respectively.
Note that each element of $\bm{x}_i$ is independent in Case 1 but correlated in Case 2.
Then, for each $\bm{x}_i$, we generate $y_i$ from ${\rm Bi}(\psi'(\bm{x}_i^\top\bm{\beta}^*))$, where $\bm{\beta}^*$ is a $d$ dimensional true coefficient vector and ${\rm Bi}(p)$ is a Bernoulli distribution with parameter $p$.
In the following, we conduct simulations using 1,000 Monte-Carlo runs.
We use the {\tt glm} function in {\tt R} for parameter estimation.
\subsection{Controlling Selective Type I Error}
\label{subsec:selective_type_I_error}
To check if ASICs can control selective type I error, we consider a selective test (\ref{eq;stest}).
Specifically, we first select $K=1$ variable by marginal screening and then conduct a selective test at the 5\% level.
By setting $\bm{\beta}^*=\bm{0}\in\mathbb{R}^d$, we can confirm selective type I error because the selective null is always true.
Therefore, we assess the following index as an estimator of the selective type I error:
letting $\beta^*$ be the coefficient of the variable selected in each run, we evaluate the average and standard deviation of
\begin{align}
I\{{\rm H}_0~\text{is rejected}\},
\label{eq;FP}
\end{align}
where $I$ is an indicator function and ${\rm H}_0:\beta^*=0$ is a selective null.
We construct a selective test at the 5\% level in all simulations.
In the same manner as classical type I error, it is desirable when the above index is less than or equal to 0.05, with particularly small values indicating that the selective test is overly conservative.
Table \ref{tab:FPR} presents averages and standard deviations of (\ref{eq;FP}) based on 1,000 runs.
It is clear that NT cannot control selective type I error;
it becomes larger as the dimension $d$ increases.
In addition, NT does not improve even if the sample size becomes large, because the selection bias in the selection step does not vanish.
On the other hand, both ASICs and DS adequately control selective type I error, although the latter appears slightly more conservative than the former.
Moreover, unlike NT, these two methods can adequately control selective type I error, even when the covariance structure of $\bm{x}_i$ and the number of dimensions change.
\begin{table}[t]
\caption{Method comparison using simulated data based on 1,000 Monte-Carlo runs.
Each cell denotes an average with standard deviations of (\ref{eq;FP}) in parentheses.}
\begin{tabular}{@{\extracolsep{-9pt}}ccrrrrrrr}
&&&\multicolumn{6}{c}{sample size} \\ \cline{4-9}
&$d$&method&50&100&200&500&1,000&1,500 \\ \hline\hline
Case 1&200&ASICs&.029 {\footnotesize (.168)}&.049 {\footnotesize (.216)}&.038 {\footnotesize (.191)}&.031 {\footnotesize (.173)}&.028 {\footnotesize (.165)}&.033 {\footnotesize (.179)} \\
&&DS&.012 {\footnotesize (.109)}&.015 {\footnotesize (.122)}&.004 {\footnotesize (.063)}&.004 {\footnotesize (.063)}&.011 {\footnotesize (.104)}&.011 {\footnotesize (.104)} \\
&&NT&.184 {\footnotesize (.388)}&.226 {\footnotesize (.418)}&.219 {\footnotesize (.414)}&.261 {\footnotesize (.439)}&.255 {\footnotesize (.436)}&.256 {\footnotesize (.437)} \\ \cline{2-9}
&500&ASICs&.028 {\footnotesize (.165)}&.043 {\footnotesize (.203)}&.039 {\footnotesize (.194)}&.039 {\footnotesize (.194)}&.032 {\footnotesize (.176)}&.036 {\footnotesize (.186)} \\
&&DS&.012 {\footnotesize (.109)}&.006 {\footnotesize (.077)}&.008 {\footnotesize (.089)}&.009 {\footnotesize (.094)}&.005 {\footnotesize (.071)}&.008 {\footnotesize (.089)} \\
&&NT&.267 {\footnotesize (.044)}&.273 {\footnotesize (.446)}&.304 {\footnotesize (.460)}&.301 {\footnotesize (.459)}&.326 {\footnotesize (.469)}&.325 {\footnotesize (.469)} \\ \cline{2-9}
&1,000&ASICs&.041 {\footnotesize (.198)}&.044 {\footnotesize (.205)}&.023 {\footnotesize (.150)}&.032 {\footnotesize (.176)}&.038 {\footnotesize (.191)}&.044 {\footnotesize (.205)} \\
&&DS&.006 {\footnotesize (.077)}&.011 {\footnotesize (.104)}&.010 {\footnotesize (.100)}&.009 {\footnotesize (.094)}&.013 {\footnotesize (.113)}&.010 {\footnotesize (.100)} \\
&&NT&.294 {\footnotesize (.456)}&.345 {\footnotesize (.476)}&.390 {\footnotesize (.488)}&.402 {\footnotesize (.491)}&.411 {\footnotesize (.492)}&.405 {\footnotesize (.491)} \\ \hline
Case 2&200&ASICs&.038 {\footnotesize (.191)}&.038 {\footnotesize (.191)}&.040 {\footnotesize (.196)}&.032 {\footnotesize (.176)}&.028 {\footnotesize (.165)}&.031 {\footnotesize (.173)} \\
&&DS&.012 {\footnotesize (.109)}&.007 {\footnotesize (.083)}&.012 {\footnotesize (.109)}&.010 {\footnotesize (.100)}&.012 {\footnotesize (.109)}&.004 {\footnotesize (.063)} \\
&&NT&.177 {\footnotesize (.382)}&.207 {\footnotesize (.405)}&.234 {\footnotesize (.424)}&.211 {\footnotesize (.408)}&.219 {\footnotesize (.414)}&.210 {\footnotesize (.408)} \\ \cline{2-9}
&500&ASICs&.049 {\footnotesize (.216)}&.038 {\footnotesize (.191)}&.030 {\footnotesize (.171)}&.030 {\footnotesize (.171)}&.039 {\footnotesize (.194)}&.034 {\footnotesize (.181)} \\
&&DS&.007 {\footnotesize (.083)}&.006 {\footnotesize (.077)}&.010 {\footnotesize (.100)}&.009 {\footnotesize (.094)}&.007 {\footnotesize (.083)}&.007 {\footnotesize (.083)} \\
&&NT&.247 {\footnotesize (.431)}&.269 {\footnotesize (.443)}&.291 {\footnotesize (.454)}&.295 {\footnotesize (.456)}&.309 {\footnotesize (.462)}&.318 {\footnotesize (.466)} \\ \cline{2-9}
&1,000&ASICs&.049 {\footnotesize (.216)}&.047 {\footnotesize (.212)}&.031 {\footnotesize (.173)}&.034 {\footnotesize (.181)}&.024 {\footnotesize (.153)}&.046 {\footnotesize (.210)} \\
&&DS&.009 {\footnotesize (.094)}&.008 {\footnotesize (.089)}&.013 {\footnotesize (.113)}&.006 {\footnotesize (.077)}&.006 {\footnotesize (.077)}&.010 {\footnotesize (.100)} \\
&&NT&.290 {\footnotesize (.454)}&.350 {\footnotesize (.477)}&.375 {\footnotesize (.484)}&.396 {\footnotesize (.489)}&.407 {\footnotesize (.492)}&.414 {\footnotesize (.493)} \\ \hline\hline
\end{tabular}
\label{tab:FPR}
\end{table}
\subsection{FWER and Power}
\label{subsec:FWER_Power}
Here, we explore selective FWER and statistical power with respect to ASICs and DS for $K$ selective tests (\ref{eq;stest}), where we set $K=5, 10, 15$, and $20$.
Note that, as discussed in the above section, NT is disregarded here because it does not adequately control selective type I error.
We adjust multiplicity by utilizing Bonferroni's method as noted in Section \ref{subsec:FWER}.
The true coefficient vector is set to be $\bm{\beta}^*=(2\times\bm{1}_5^\top,\bm{0}_{d-5}^\top)^\top$ and $\bm{\beta}^*=(2\times\bm{1}_5^\top,-2\times\bm{1}_5^\top,\bm{0}_{d-10}^\top)^\top$ in Model 1 and Model 2, respectively.
In the following, we assess the indices as an estimator of selective FWER and power.
Letting $\hat{S}=S$ be the subset of selected variables for each simulation, we evaluate an average of
\begin{align}
I\{\text{at least one}~{\rm H}_{0,j}\in{\cal H}~\text{is rejected}\}
\label{eq;FWER.est}
\end{align}
and
\begin{align}
\frac{1}{|S^*|}\sum_{j\in S}I\{{\rm H}_{0,j}\not\in{\cal H}~\text{is rejected}\},
\label{eq;TPR}
\end{align}
where, for each $j\in S$, ${\rm H}_{0,j}:\beta_j^*=0$ is the selective null and ${\cal H}$ is a family of true nulls.
Note that, by using Bonferroni's method, we use $\tilde{\alpha}=\alpha/K$ as an adjusted significance level for $\alpha=0.05$.
Similar to the selective type I error, it is desirable when (\ref{eq;FWER.est}) is less than or equal to $\alpha$.
In addition, higher values of (\ref{eq;TPR}) are desirable, as with classical power.
We evaluate (\ref{eq;TPR}) as the proportion of rejected hypotheses for false nulls to that of true active variables.
We employ this performance index because it is important to identify how many truly active variables are extracted in practice.
Figure \ref{fig:FWER} shows the average (\ref{eq;FWER.est}) for each method.
ASICs and DS are both evaluated with respect to four values of $K$, thus eight lines are plotted in each graph.
Because of simulation randomness, some of the ASICs results exceed 0.05, especially in cases with small sample sizes and high dimensions.
For both methods, it is clear that selective FWER tends to be controlled at the desired significance level, although DS is more conservative than ASICs.
To accord with our asymptotic theory, the number of selected variables must be $K={\rm o}(n^{1/3})$, which means that the normal approximation is not ensured in the case of $K=15$ and $20$.
However, we observe that selective FWER is correctly controlled even in these cases, which suggests that assumptions (C3) and (C4) can be relaxed.
\begin{figure*}[t]
\begin{center}
Case 1
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-FWER200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-FWER500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-FWER1000p.pdf}}
\end{center}
\begin{center}
Case 2
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-FWER200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-FWER500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-FWER1000p.pdf}}
\caption{Method comparison using simulated data based on 1,000 Monte-Carlo runs.
The vertical and horizontal axes represent an average of (\ref{eq;FWER.est}) and sample size, respectively.
The dotted line shows the significance level ($\alpha=0.05$).}
\label{fig:FWER}
\end{center}
\end{figure*}
Figures \ref{fig:Power1} and \ref{fig:Power2} show the average of (\ref{eq;TPR}) for each method and settings in Model 1 and Model 2, respectively.
In Case 1 of Figure \ref{fig:Power1}, ASICs and DS have almost the same power for each $K$ and $d$.
In addition, ASICs is clearly superior to DS in Case 2.
This is reasonable since DS uses only half of the data for inference.
On the other hand, in all cases, the power of ASICs becomes higher as the number of selected variables $K$ decreases.
This can be explained by the condition (C3), that is, we need a much larger sample size when $K$ becomes large for assuring the asymptotic result in Theorem \ref{thm2}.
In Figure \ref{fig:Power2}, it is clear that the power of ASICs is superior in almost all settings.
However, neither ASICs nor DS appears to perform well when $K=5$.
In this case, the power of ASICs and DS cannot exceed $50\%$.
This is because we can only select at most $5$ true nonzero variables, while there are $10$ true nonzero variables.
\begin{figure*}[t]
\begin{center}
Case 1
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-TPR200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-TPR500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-indep-TPR1000p.pdf}}
\end{center}
\begin{center}
Case 2
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-TPR200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-TPR500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model1-TPR1000p.pdf}}
\caption{Method comparison using simulated data based on 1,000 Monte-Carlo runs.
The vertical and horizontal axes represent an average of (\ref{eq;TPR}) and sample size, respectively.}
\label{fig:Power1}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
Case 1
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-indep-TPR200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-indep-TPR500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-indep-TPR1000p.pdf}}
\end{center}
\begin{center}
Case 2
\end{center}
\vspace{-35pt}
\begin{center}
\subfloat[$d=200$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-TPR200p.pdf}}
\subfloat[$d=500$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-TPR500p.pdf}}
\subfloat[$d=1,000$]{\includegraphics[scale=0.16, bb=0 0 720 720]{model2-TPR1000p.pdf}}
\caption{Method comparison using simulated data based on 1,000 Monte-Carlo runs.
The vertical and horizontal axes represent an average of (\ref{eq;TPR}) and sample size, respectively.}
\label{fig:Power2}
\end{center}
\end{figure*}
\section{Empirical Applications}
\label{sec:real}
We further explore the performance of the proposed method by applying it to several empirical data sets, all of which are available at LIBSVM\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}}.
In all experiments, we standardize the design matrix $X$ to make the scale of each variable the same.
We report adjusted selective $p$-values for selected variables.
To explore the selection bias, we also report naive adjusted $p$-values.
That is, we first compute $p$-values for selected variables based on NT, then we adjust these $p$-values by multiplying the number of selected variables.
The results are plotted in Figures \ref{fig:real1} -- \ref{fig:real3}.
The results show that almost all adjusted nominal $p$-values are smaller than those of selective inference, and the difference between these $p$-values can be interpreted as the effect of selection bias.
\begin{figure*}[h!]
\centering
{\bf Dexter Data Set} ($n=600, d=20,000$)
\vskip -10pt
\subfloat[$K=5$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dexter5K.pdf}}
\subfloat[$K=10$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dexter10K.pdf}} \\
\subfloat[$K=15$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dexter15K.pdf}}
\subfloat[$K=20$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dexter20K.pdf}}
\caption{Comparison between adjusted selective $p$-values and nominal $p$-values.
The vertical and horizontal axes represent adjusted $p$-values and indices of selected variables, respectively, and the black dotted line shows the significance level ($\alpha=0.05$).
In each figure, black circles and red triangles respectively indicate adjusted nominal $p$-values and selective $p$-values.
}
\label{fig:real1}
\end{figure*}
\begin{figure*}[h!]
\centering
{\bf Dorothea Data Set} ($n=1,150, d=100,000$)
\vskip -10pt
\subfloat[$K=5$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dorothea5K.pdf}}
\subfloat[$K=10$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dorothea10K.pdf}} \\
\subfloat[$K=15$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dorothea15K.pdf}}
\subfloat[$K=20$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{Dorothea20K.pdf}} \\
\vspace{20pt}
{\bf FarmAds Data Set} ($n=4,143, d=54,877$)
\vskip -10pt
\subfloat[$K=5$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{FarmAds5K.pdf}}
\subfloat[$K=10$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{FarmAds10K.pdf}} \\
\subfloat[$K=15$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{FarmAds15K.pdf}}
\subfloat[$K=20$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{FarmAds20K.pdf}}
\caption{Comparison between adjusted selective $p$-values and nominal $p$-values.
The vertical and horizontal axes represent adjusted $p$-values and indices of selected variables, respectively, and the black dotted line shows the significance level ($\alpha=0.05$).
In each figure, black circles and red triangles respectively indicate adjusted nominal $p$-values and selective $p$-values.
}
\label{fig:real2}
\end{figure*}
\begin{figure*}[h!]
\centering
{\bf GISETTE Data Set} ($n=1,000, d=5,000$)
\vskip -10pt
\subfloat[$K=5$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{GISETTE5K.pdf}}
\subfloat[$K=10$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{GISETTE10K.pdf}} \\
\subfloat[$K=15$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{GISETTE15K.pdf}}
\subfloat[$K=20$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{GISETTE20K.pdf}} \\
\vspace{20pt}
{\bf rcv1.Binary Data Set} ($n=20,242, d=47,236$)
\vskip -10pt
\subfloat[$K=5$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{rcv1-Binary5K.pdf}}
\subfloat[$K=10$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{rcv1-Binary10K.pdf}} \\
\subfloat[$K=15$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{rcv1-Binary15K.pdf}}
\subfloat[$K=20$]{\includegraphics[scale=0.165, bb=0 0 1080 576]{rcv1-Binary20K.pdf}}
\caption{Comparison between adjusted selective $p$-values and nominal $p$-values.
The vertical and horizontal axes represent adjusted $p$-values and indices of selected variables, respectively, and the black dotted line shows the significance level ($\alpha=0.05$).
In each figure, black circles and red triangles respectively indicate adjusted nominal $p$-values and selective $p$-values.
}
\label{fig:real3}
\end{figure*}
\section{Theoretical Analysis}
\label{sec:proof}
In this section, we provide proofs of the theoretical results derived herein.
For $p,q\in\mathbb{R}$, we use the notation $p\lesssim q$ to mean that there exists a constant $r>0$ such that $p\leq rq$; $p\gtrsim q$ is defined similarly.
All proofs are based on fixed $S~(\supset S^*)$;
thus we simply denote $\hat{\bm{\beta}}_S$ and $X_S$ by $\hat{\bm{\beta}}$ and $X$, respectively.
This is because we need to verify several asymptotic conditions before selection, in the same way as in \cite{TiaTay15, Tay16}.
\subsection{Proof of (\ref{eq;consistency})}
\label{subsec:consistency}
Let $\alpha_n=\sqrt{K/n}$ and define a $K$ dimensional vector $\bm{u}$ satisfying $\|\bm{u}\|=C$ for a sufficiently large $C>0$.
The concavity of $\ell_n$ implies
\begin{align*}
{\rm P}\left(\|\hat{\bm{\beta}}-\bm{\beta}^*\|\leq \alpha_nC\right)
\geq {\rm P}\Bigl(\sup_{\|\bm{u}\|=C}\ell_n(\bm{\beta}^*+\alpha_n\bm{u})<\ell_n(\bm{\beta}^*)\Bigr),
\end{align*}
and thus we need to show that for any $\varepsilon>0$, there exists a sufficiently large $C>0$ such that
\begin{align}
{\rm P}\left(\sup_{\|\bm{u}\|=C}\ell_n(\bm{\beta}^*+\alpha_n\bm{u})<\ell_n(\bm{\beta}^*)\right)\geq 1-\varepsilon.
\label{eq;claim}
\end{align}
In fact, the above inequality implies that, with probability at least $1-\varepsilon$, $\hat{\bm{\beta}}\in\{\bm{\beta}^*+\alpha_n\bm{u};\;\|\bm{u}\|\leq C\}$, that is, $\|\hat{\bm{\beta}}-\bm{\beta}^*\|={\rm O}_{\rm p}(\alpha_n)$.
Observe that $|\psi'(\bm{x}_i^\top\bm{\beta})|,|\psi''(\bm{x}_i^\top\bm{\beta})|$ and $|\psi'''(\bm{x}_i^\top\bm{\beta})|$ are bounded uniformly with respect to $\bm{\beta}\in{\cal B}$ and $i$.
By using Taylor's theorem, we have
\begin{align*}
&\ell_n(\bm{\beta}^*+\alpha_n\bm{u})-\ell_n(\bm{\beta}^*) \\
&=\sum_{i=1}^n\Bigl[\alpha_ny_i\bm{x}_i^\top\bm{u}-\bigl\{\psi(\bm{x}_i^\top(\bm{\beta}^*+\alpha_n\bm{u}))-\psi(\bm{x}_i^\top\bm{\beta}^*)\bigr\}\Bigr] \\
&=\alpha_n\sum_{i=1}^{n}(y_i-\psi'(\bm{x}_i^\top\bm{\beta}^*))\bm{x}_i^\top\bm{u}
-\frac{\alpha_n^2}{2}\sum_{i=1}^{n}\psi''(\bm{x}_i^\top\bm{\beta}^*)(\bm{x}_i^\top\bm{u})^2
-\frac{\alpha_n^3}{6}\sum_{i=1}^{n}\psi'''(\theta_i)(\bm{x}_i^\top\bm{u})^3 \\
&\equiv I_1+I_2+I_3,
\end{align*}
where for $i=1,2,\ldots,n$, $\theta_i$ is in the line segment between $\bm{x}_i^\top\bm{\beta}^*$ and $\bm{x}_i^\top(\bm{\beta}^*+\alpha_n\bm{u})$.
From (C1) and (C2), we observe that
\begin{align*}
{\rm E}\left[\left\{\sum_{i=1}^{n}(y_i-\psi'(\bm{x}_i^\top\bm{\beta}^*))\bm{x}_i^\top\bm{u}\right\}^2\right]
&=\sum_{i=1}^{n}{\rm E}\bigl[(y_i-\psi'(\bm{x}_i^\top\bm{\beta}^*))^2(\bm{x}_i^\top\bm{u})^2\bigr] \\
&=\sum_{i=1}^{n}\psi''(\bm{x}_i^\top\bm{\beta}^*)(\bm{x}_{i}^\top\bm{u})^2
\lesssim n\bm{u}^\top\Xi_n\bm{u}
\lesssim n\|\bm{u}\|^2,
\end{align*}
and thus we have $|I_1|={\rm O}_{\rm p}(\alpha_n\sqrt{n}\|\bm{u}\|)={\rm O}_{\rm p}(\sqrt{K}\|\bm{u}\|)$.
Next, by using (C1) again, $I_2$ can be bounded as
\begin{align*}
I_2
\lesssim -\alpha_n^2\sum_{i=1}^{n}(\bm{x}_i^\top\bm{u})^2
\lesssim -K\|\bm{u}\|^2
<0.
\end{align*}
Finally, for $I_3$, we have
\begin{align*}
|I_3|
&=\left|\frac{\alpha_n^3}{6}\sum_{i=1}^{n}\psi'''(\theta_i)(\bm{x}_i^\top\bm{u})^3\right|
\lesssim \alpha_n^3\sum_{i=1}^{n}|\bm{x}_i^\top\bm{u}|^3
\leq n\alpha_n^3\bm{u}^\top\Xi_n\bm{u}\max_{1\leq i\leq n}|\bm{x}_i^\top\bm{u}| \\
&\lesssim n\alpha_n^3\sqrt{K}\|\bm{u}\|^3
={\rm O}\left(\frac{K^2}{\sqrt{n}}\|\bm{u}\|^3\right).
\end{align*}
Combining all the above, if $K^2/n\to0$ is satisfied, we observe that for sufficiently large $C$, $I_1$ and $I_3$ are dominated by $I_2~(<0)$.
As a result, we obtain (\ref{eq;claim}).
\begin{rem}
From (\ref{eq;consistency}), (C1), and (C2), we have
\begin{align*}
|\bm{x}_i^\top\hat{\bm{\beta}}|
\leq |\bm{x}_i^\top(\hat{\bm{\beta}}-\bm{\beta}^*)|+|\bm{x}_i^\top\bm{\beta}^*|
={\rm O}_{{\rm p}}(K/\sqrt{n})+\xi,
\end{align*}
and thus, with probability tending to 1, $\hat{\bm{\beta}}\in{\cal B}$ holds.
\end{rem}
\subsection{Proof of Theorem \ref{thm1}}
\label{subsec:thm1}
First, we prove that $\sqrt{n}(\hat{\bm{\beta}}-\bm{\beta}^*)$ is asymptotically equivalent to $\Sigma_n^{-1}\bm{s}_n$.
By using Taylor's theorem, we have
\begin{align}
\bm{0}
=\ell_n'(\hat{\bm{\beta}})
=\ell_n'(\bm{\beta}^*)+\ell''_n(\bm{\beta}^*)(\hat{\bm{\beta}}-\bm{\beta}^*)
+\frac{1}{2}\sum_{i=1}^n\psi'''(\tilde{\theta}_i)\bm{x}_i\{\bm{x}_i^\top(\hat{\bm{\beta}}-\bm{\beta}^*)\}^2,
\label{eq;taylor}
\end{align}
where for $i=1,2,\ldots,n$, $\tilde{\theta}_i$ is in the line segment between $\bm{x}_i^\top\bm{\beta}^*$ and $\bm{x}_i^\top\hat{\bm{\beta}}$.
In addition, (\ref{eq;taylor}) can be rewritten as
\begin{align*}
\sqrt{n}(\hat{\bm{\beta}}-\bm{\beta}^*)
=\Sigma_n^{-1}\bm{s}_n+R_n,
\end{align*}
where
\begin{align*}
R_n
=-\frac{1}{2\sqrt{n}}\Sigma_n^{-1}\sum_{i=1}^n\psi'''(\tilde{\theta}_i)\bm{x}_i\{\bm{x}_i^\top(\hat{\bm{\beta}}-\bm{\beta}^*)\}^2.
\end{align*}
Noting that, from (C1),
\begin{align*}
\lambda_{{\rm min}}(\Sigma_n)
\gtrsim \lambda_{{\rm min}}(\Xi_n)
>C_1
>0,
\end{align*}
(C1), (C3) and (\ref{eq;consistency}) imply
\begin{align*}
\|R_n\|
&\lesssim \frac{1}{\sqrt{n}}\max_{1\leq i\leq n}|\bm{x}_i^\top(\hat{\bm{\beta}}-\bm{\beta}^*)|
\times\Bigl\|\sum_{i=1}^{n}\Sigma_n^{-1}\psi'''(\tilde{\theta}_i)\bm{x}_i\bm{x}_i^\top(\hat{\bm{\beta}}-\bm{\beta}^*)\Bigr\| \\
&\lesssim \frac{1}{\sqrt{n}}\max_{1\leq i\leq n}\|\bm{x}_i\|\|\hat{\bm{\beta}}-\bm{\beta}^*\|\times n\lambda_{{\rm max}}(\Xi_n)\|\hat{\bm{\beta}}-\bm{\beta}^*\| \\
&={\rm O}_{{\rm p}}\Bigl(\frac{K\sqrt{K}}{\sqrt{n}} \Bigr)
={\rm o}_{{\rm p}}(1).
\end{align*}
Now we can prove the asymptotic normality of $\sigma_n^{-1}\Sigma_n^{-1}\bm{s}_n$.
For any $K$ dimensional vector $\bm{\eta}$ with $\|\bm{\eta}\|<\infty$, define $\sigma_n^2=\bm{\eta}^\top\Sigma_n^{-1}\bm{\eta}$ and $\omega_n$ such that
\begin{align*}
\bm{\eta}^\top\Sigma_n^{-1}\bm{s}_n
=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i(y_i-\psi'(\bm{x}_i^\top\bm{\beta}^*))
=\sum_{i=1}^{n}\omega_{ni}.
\end{align*}
Then, since $S\supset S^*$, we observe that
\begin{align*}
\sum_{i=1}^{n}{\rm E}[\omega_{ni}]
=\sum_{i=1}^{n}\frac{1}{\sqrt{n}}\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i{\rm E}[y_i-\psi'_i]
=0,
\end{align*}
and
\begin{align*}
\sum_{i=1}^{n}{\rm V}[\omega_{ni}]
=\frac{1}{n}\sum_{i=1}^n\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i{\rm V}[y_i]\bm{x}_i^\top\Sigma_n^{-1}\bm{\eta}
=\sigma_n^2.
\end{align*}
To state the asymptotic normality of $\sigma_n^{-1}\Sigma_n^{-1}\bm{s}_n$, we check the Lindeberg condition for $\omega_n$: for any $\varepsilon>0$,
\begin{align}
\label{eq;Lindeberg}
\frac{1}{\sigma_n^2}\sum_{i=1}^{n}{\rm E}[\omega_{ni}^2I(|\omega_{ni}|>\sigma_n\varepsilon)]
={\rm o}(1).
\end{align}
For any $\varepsilon>0$, we have
\begin{align*}
&\frac{1}{\sigma_n^2}\sum_{i=1}^{n}{\rm E}[\omega_{ni}^2I(|\omega_{ni}|>\sigma_n\varepsilon)] \\
&=\frac{1}{\sigma_n^2}\cdot\frac{1}{n}\sum_{i=1}^{n}(\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i)^2{\rm E}[(y_i-\psi'_i)^2I(|\omega_{ni}|>\sigma_n\varepsilon)] \\
&\leq \frac{1}{\sigma_n^2}\max_{1\leq i\leq n}{\rm E}[(y_i-\psi'_i)^2I(|\omega_{ni}|>\sigma_n\varepsilon)]
\times\frac{1}{n}\sum_{i=1}^{n}(\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i)^2.
\end{align*}
By using the Cauchy-Schwarz inequality and (C1),
\begin{align*}
\frac{1}{n}\sum_{i=1}^{n}(\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i)^2
\leq \frac{1}{n}\sum_{i=1}^{n}(\bm{\eta}^\top\Sigma_n^{-1}\bm{\eta})(\bm{x}_i^\top\Sigma_n^{-1}\bm{x}_i)
\lesssim\frac{1}{n}\sum_{i=1}^{n}\|\bm{x}_i\|^2
={\rm O}(K).
\end{align*}
Noting that each $y_i$ is distributed according to a Bernoulli distribution with parameter $\psi'$, ${\rm E}[(y_i-\psi'_i)^4]$ is uniformly bounded on ${\cal B}$ for any $i=1,\ldots,n$ by a simple calculation.
Thus, by using the Cauchy-Schwarz inequality and Chebyshev's inequality, we have
\begin{align*}
\max_{1\leq i\leq n}{\rm E}[(y_i-\psi'_i)^2I(|\omega_{ni}|>\sigma_n\varepsilon)]
&\leq \max_{1\leq i\leq n}{\rm E}[(y_i-\psi'_i)^4]^{1/2}{\rm P}(|\omega_{ni}|>\sigma_n\varepsilon)^{1/2} \\
&\lesssim \frac{1}{\sigma_n}\max_{1\leq i\leq n}{\rm E}[\omega_{ni}^2]^{1/2} \\
&=\frac{1}{\sigma_n\sqrt{n}}\max_{1\leq i\leq n}|\bm{\eta}^\top\Sigma_n^{-1}\bm{x}_i|\sqrt{\psi'_i(1-\psi'_i)} \\
&\lesssim \frac{1}{\sqrt{n}}\max_{1\leq i\leq n}\|\bm{x}_i\|
={\rm O}\left(\frac{\sqrt{K}}{\sqrt{n}}\right).
\end{align*}
Finally, since
\begin{align*}
\sigma_n^2
=\bm{\eta}^\top\Sigma_n^{-1}\bm{\eta}
\leq \lambda_{{\rm max}}(\Sigma_n^{-1})\|\bm{\eta}\|^2
=\frac{\|\bm{\eta}\|^2}{\lambda_{{\rm min}}(\Sigma_n)}
={\rm O}(1),
\end{align*}
we have
\begin{align*}
\frac{1}{\sigma_n^2}\sum_{i=1}^{n}{\rm E}[\omega_{ni}^2I(|\omega_{ni}|>\sigma_n\varepsilon)]
={\rm O}\left(\frac{\sqrt{K}}{\sqrt{n}}\cdot K \right).
\end{align*}
From (C3), this implies the Lindeberg condition (\ref{eq;Lindeberg}).
\subsection{Proof of Theorem \ref{thm2}}
\label{subsec:thm2}
First, we prove that, for any $K$ dimensional vector $\bm{\eta}$, the selection event can be expressed as an inequality with respect to $\bm{\eta}^\top\bm{T}_n$.
Let us define $\bm{w}=(I_K-\bm{c}\bm{\eta}^\top)\bm{T}_n$, where $\bm{c}=\Sigma_n^{-1}\bm{\eta}/\sigma_n^2$.
Then, since $\bm{T}_n=(\bm{\eta}^\top\bm{T}_n)\bm{c}+\bm{w}$, we have
\begin{align*}
\tilde{A}\bm{T}_n\leq \tilde{\bm{b}}
&\Leftrightarrow (\bm{\eta}^\top\bm{T}_n)\tilde{A}\bm{c}\leq \tilde{\bm{b}}-\tilde{A}\bm{w} \\
&\Leftrightarrow (\bm{\eta}^\top\bm{T}_n)(\tilde{A}\bm{c})_j\leq (\tilde{\bm{b}}-\tilde{A}\bm{w})_j,~\forall j \\
&\Leftrightarrow
\begin{cases}
\bm{\eta}^\top\bm{T}_n\leq (\tilde{\bm{b}}-\tilde{A}\bm{w})_j/(\tilde{A}\bm{c})_j,& j:(\tilde{A}\bm{c})_j>0 \\
\bm{\eta}^\top\bm{T}_n\geq (\tilde{\bm{b}}-\tilde{A}\bm{w})_j/(\tilde{A}\bm{c})_j,& j:(\tilde{A}\bm{c})_j<0 \\
0=(\tilde{\bm{b}}-\tilde{A}\bm{w})_j,& j:(\tilde{A}\bm{c})_j=0 \\
\end{cases}
\end{align*}
and this implies the former result in Theorem \ref{thm2}.
To prove the theorem, we need to verify the asymptotic independence between $(L_n, U_n, N_n)$ and $\bm{\eta}^\top\bm{T}_n$.
By the definition of $\bm{w}$ and Theorem~\ref{thm1},
\begin{align*}
\left( \begin{array}{c}
\bm{\eta}^\top\bm{T}_n \\
\bm{w}
\end{array} \right)
=
\left( \begin{array}{c}
\bm{\eta}^\top \\
I_K-\bm{c}\bm{\eta}^\top
\end{array} \right)\bm{T}_n
\end{align*}
is asymptotically distributed according to a Gaussian distribution.
Thus, $\bm{w}$ and $\bm{\eta}^\top\bm{T}_n$ are asymptotically independent since
\begin{align*}
{\rm Cov}[\bm{w},\bm{\eta}^\top\bm{T}_n]
=(I_K-\bm{c}\bm{\eta}^\top){\rm E}[\bm{T}_n\bm{T}_n^\top]\bm{\eta}
=(I_K-\bm{c}\bm{\eta}^\top)\Sigma_n^{-1}\bm{\eta}
=\bm{0}.
\end{align*}
Now we only need to prove the asymptotic independence between $\tilde{\bm{b}}$ and $\bm{\eta}^\top\bm{T}_n$.
Let $\bm{\psi}'=\bm{\psi}'(\bm{\beta}^*)$ and let $\Psi={\rm diag}(\psi''(\bm{x}_1^\top\bm{\beta}^*),\ldots,\psi''(\bm{x}_n^\top\bm{\beta}^*))$, so that ${\rm Cov}[\bm{y}]=\Psi$ and $\Sigma_n=X_S^\top\Psi X_S/n$.
Then, since $\bm{T}_n=\Sigma_n^{-1}X_S^\top(\bm{y}-\bm{\psi}')/\sqrt{n}$, we have
\begin{align*}
{\rm Cov}[\bm{y},\bm{T}_n^\top]
=\frac{1}{\sqrt{n}}{\rm Cov}[\bm{y}]X_S\Sigma_n^{-1}
=\frac{1}{\sqrt{n}}\Psi X_S\Sigma_n^{-1}.
\end{align*}
Since $\tilde{\bm{b}}=-(A_SX_S^\top\bm{\psi}'+A_{S^\bot}X_{S^\bot}^\top\bm{y})/\sqrt{n}$ is, like $\bm{w}$, an affine function of $\bm{y}$ and is asymptotically jointly Gaussian with $\bm{\eta}^\top\bm{T}_n$, the theorem holds when the covariance between $\tilde{\bm{b}}$ and $\bm{\eta}^\top\bm{T}_n$ converges to $\bm{0}$ as $n$ goes to infinity.
By noting that $(X_S^\top\Psi X_S)^{-1}=\Sigma_n^{-1}/n$, we have
\begin{align*}
{\rm Cov}[\tilde{\bm{b}},\bm{\eta}^\top\bm{T}_n]
&=-\frac{1}{\sqrt{n}}A_{S^\bot}X_{S^\bot}^\top{\rm Cov}[\bm{y},\bm{T}_n^\top]\bm{\eta} \\
&=-\frac{1}{n}A_{S^\bot}X_{S^\bot}^\top\Psi X_S\Sigma_n^{-1}\bm{\eta}
=-A_{S^\bot}(X_{S^\bot}^\top\Psi X_S)(X_S^\top\Psi X_S)^{-1}\bm{\eta}.
\end{align*}
In addition, letting $\bm{a}=(1,-1)^\top$, it is straightforward that
\begin{align*}
A_{S^\bot}
=\bm{1}_{K}\otimes
\left( \begin{array}{ccc}
0&\cdots&0 \\
\bm{a}&&O \\
&\ddots& \\
O&&\bm{a}
\end{array} \right)
=\bm{1}_{K}\otimes \tilde{J}
\end{align*}
by the definition of the selection event, where $\tilde{J}=(\bm{0}_{d-K}, I_{d-K}\otimes\bm{a}^\top)^\top$.
This implies $A_{S^\bot}^\top A_{S^\bot}=2KI_{d-K}$.
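This follows from the mixed-product property of the Kronecker product: since $\bm{a}^\top\bm{a}=2$, the zero column of $\tilde{J}$ contributes nothing and $\tilde{J}^\top\tilde{J}=(I_{d-K}\otimes\bm{a}^\top)(I_{d-K}\otimes\bm{a})=2I_{d-K}$, so that
\begin{align*}
A_{S^\bot}^\top A_{S^\bot}
=(\bm{1}_{K}^\top\bm{1}_{K})\otimes(\tilde{J}^\top\tilde{J})
=2KI_{d-K}.
\end{align*}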
Finally, (C1), (C3), and (C4) imply
\begin{align*}
\|{\rm Cov}[\tilde{\bm{b}},\bm{\eta}^\top\bm{T}_n]\|^2
=2K\|(X_{S^\bot}^\top\Psi X_S)(X_S^\top\Psi X_S)^{-1}\bm{\eta}\|^2
\lesssim K\Bigl\|\frac{1}{n}X_{S^\bot}^\top X_S\Bigr\|^2
={\rm O}(K^3/n),
\end{align*}
and this proves the asymptotic independence between $\tilde{\bm{b}}$ and $\bm{\eta}^\top\bm{T}_n$.
\section{Concluding Remarks and Future Research}
\label{sec:conclusion}
Recently, methods for data driven science such as selective inference and adaptive data analysis have become increasingly important as described by \cite{Bar16}.
Although there are several approaches for carrying out post-selection inference, we have developed a selective inference method for high dimensional classification problems, based on the work in \cite{Lee16}.
In the same way as that seminal work, the polyhedral lemma (Lemma \ref{lem1}) plays an important role in our study.
By considering high dimensional asymptotics concerning sample size and the number of selected variables, we have shown that a similar result to the polyhedral lemma holds even for high dimensional logistic regression problems.
As a result, we could construct a pivotal quantity whose sampling distribution is represented as a truncated normal distribution which converges to a standard uniform distribution.
In addition, simulation experiments have shown that the performance of our proposed method is, in almost all cases, superior to that of alternatives such as data splitting.
As suggested by the results from the simulation experiments, conditions might be relaxed to accommodate more general settings.
In terms of future research in this domain, while we considered the logistic model in this paper, it is important to extend the results to other models, for example, generalized linear models.
Further, higher order interaction models are also crucial in practice.
In this situation, the size of the matrix in the selection event becomes very large, and thus it is cumbersome to compute truncation points in the polyhedral lemma.
\cite{suzumura2017selective} have shown that selective inference can be constructed in such a model by utilizing a pruning algorithm.
In this respect, it is desirable to extend their result not only to linear regression modeling contexts but also to other models.
\bibliographystyle{econ}
In 1946 Paul Erd\H os \cite{E1} posed the following question:
given a finite subset $P$ of
${\mathbb {R}}^2$ or ${\mathbb {R}}^3$, what is the maximum number of pairs $(p_1 ,p_2 )$ with $p_1 ,p_2 \in P$ and $|p_1 -p_2 |=1$?
The Erd\H os unit distance conjecture in ${\mathbb {R}}^2$ is the estimate
\begin{equation}\label{UDC}
\big|\{ (p_1 ,p_2 )\in P^2 :|p_2 -p_1 |=1\}\big| \le C\, |P|\sqrt{\log (|P|)}.
\end{equation}
(We will use $|\cdot |$ for the cardinality of a finite set as well as Lebesgue measure on ${\mathbb {R}}^d$.)
In two dimensions the best currently-known partial result, due to Spencer, Szemer\'edi, and Trotter \cite{SST}, is
\begin{equation*}
\big|\{ (p_1 ,p_2 )\in P^2 :|p_2 -p_1 |=1\}\big| \le C\, |P|^{4/3},
\end{equation*}
while the current best estimate for the analogous problem in ${\mathbb {R}} ^3$ has the exponent $3/2+\epsilon $
(for any $\epsilon >0$ and $C$ depending on $\epsilon$) in
place of $4/3$ - see Clarkson {\it et al.}
\cite{CEGSW}. In four or more dimensions it follows from an example we learned in \cite{IJL} that one cannot significantly improve the trivial $|P|^2$ bound: let $\tilde P$ be any set of $N$ points ${\tilde x}_n$ in ${\mathbb {R}}^2$ satisfying $|{\tilde x}_n |=2^{-1/2}$. Let $P$ be the subset of ${\mathbb {R}}^4$ given by
\begin{equation*}
P\dot= \{({\tilde x}_n ;0,0),(0,0;{\tilde x}_m ):{\tilde x}_n ,{\tilde x}_m \in \tilde P \}.
\end{equation*}
Then the left hand side of \eqref{UDC} is at least $N^2$ while $|P|^2 =4N^2$.
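Indeed, for any ${\tilde x}_n ,{\tilde x}_m \in \tilde P$,
\begin{equation*}
|({\tilde x}_n ;0,0)-(0,0;{\tilde x}_m )|^2 =|{\tilde x}_n |^2 +|{\tilde x}_m |^2 =\tfrac{1}{2}+\tfrac{1}{2}=1,
\end{equation*}
so each of the $N^2$ ordered pairs of this form contributes a unit distance.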
Our first result shows that if we
ban a salient feature of this example - many points in low-dimensional subspaces - then a nontrivial estimate is still possible:
\begin{theorem}\label{discretetheorem}
Fix $d\geq 2$. There is a positive constant $C_d$ such that if $P\subset {\mathbb {R}}^d$ and if every
$d$-element subset of $P$ is affinely independent, then
\begin{equation}\label{result}
\big|\{ (p_1 ,p_2 )\in P^2 :|p_2 -p_1 |=1\}\big| \leq C_d \, |P|^{(2d-1)/d}.
\end{equation}
\end{theorem}
\noindent (The proofs of the results described in this section can be found in \S \ref{proofs}.)
Another famous problem of Erd\H os is his distinct distance conjecture, the estimate
\begin{equation}\label{DDC}
\big|\{ |p_1 -p_2 |:(p_1 ,p_2 )\in P^2 \}\big| \ge c\, \frac{|P|}{\sqrt{\log (|P|)}}.
\end{equation}
An easy pigeon-hole argument shows that \eqref{UDC} implies \eqref{DDC}. But while the conjecture \eqref{UDC}
is still far from resolved, Guth and Katz \cite{GK} have recently come very close to \eqref{DDC} by showing that
\begin{equation*}
\big|\{ |p_1 -p_2 |:(p_1 ,p_2 )\in P^2 \}\big| \ge c\, \frac{|P|}{{\log (|P|)}}.
\end{equation*}
This distinct distance problem has a continuous analog known as the Falconer distance set problem (\cite{F}):
if $K$ is a compact subset
of ${\mathbb {R}}^d$ and if we define the distance set $\Delta (K)$ by
\begin{equation*}
\Delta (K)=\{ |k_1 -k_2 |: (k_1 ,k_2 )\in K^2 \},
\end{equation*}
then what can we say about lower bounds for $\dim \big(\Delta (K)\big)$ in terms of $\dim (K)$?
For example, Wolff proves in \cite{W} that
if $K\subset{\mathbb {R}}^2$ and $\dim (K)>4/3$ then $\Delta (K)$ has positive Lebesgue measure and so dimension one, while
Erdo\u{g}an \cite{B1} contains analogous results in ${\mathbb {R}}^d$.
The primary purpose of this paper is to study the following continuous analog of the {\bf unit} distance problem: if
\begin{equation*}\label{Ddef}
D=D(K)=\{(k_1 ,k_2 )\in K^2 :|k_2 -k_1 |=1\}, \, K\subset {\mathbb {R}}^d ,
\end{equation*}
find
\begin{equation}\label{g_d}
g_d (\alpha )\dot=\sup \{\dim (D): \text{$K$ is a compact subset of ${\mathbb {R}}^d$ with $\dim (K)=\alpha$}\}.
\end{equation}
When $d=1$ this is trivial: the projection $(k_1 ,k_2)\mapsto k_1$ is at most two-to-one on $D$ and so
it follows that $\dim (D)\le\alpha$.
If $\tilde K \subset{\mathbb {R}}$, $\dim (\tilde K )=\alpha$, and if $K=\tilde K \cup (\tilde K +1)$, then
$\dim (D)=\alpha =\dim (K)$. Thus $g_1 (\alpha )=\alpha$.
Here is a trivial bound in higher dimensions: the map
$$
(k_1 ,k_2 )\mapsto (k_1 ,k_2 -k_1 )
$$
shows that $D$ and
\begin{equation}\label{Gdef}
G\doteq\{(k,y): k\in K,\ y\in S^{d-1},\ k+y\in K\}
\end{equation}
have the same dimension. Since $G\subset K\times S^{d-1}$, this gives the bound
\begin{equation}\label{trivbd}
\dim (D)\le \alpha +d-1 .
\end{equation}
More interestingly,
$D$ is the intersection of $K\times K$ with the variety
\begin{equation*}
\{(x_1 ,x_2 )\in {\mathbb {R}}^d \times {\mathbb {R}}^d :|x_2 -x_1 |=1\}.
\end{equation*}
Thus one might conjecture that
\begin{equation*}
\dim (D)\le 2\alpha -1
\end{equation*}
and so
\begin{equation}\label{largealpha.}
g_d (\alpha )\le 2\alpha -1.
\end{equation}
Of course this cannot always be correct since $g_d (\alpha )\ge \alpha$
if $0\le\alpha\le 1$ (because $g_d (\alpha )\geq g_1 (\alpha )$
since ${\mathbb {R}}^d$ contains a copy of ${\mathbb {R}}$).
But here is an example related to \eqref{largealpha.}:
suppose $C\subset B(0,1/2)\subset {\mathbb {R}}^{d-1}$ has $\dim (C)=\gamma$ and put
$K=C\times [0,2]\subset{\mathbb {R}}^d$. Then $\alpha =\dim (K)=1+\gamma$. Also
$$
D=\{(c_1 ,t_1 ;c_2 ,t_2):c_1 ,c_2 \in C,\ t_1 ,t_2 \in [0,2],\, |t_1 -t_2 |=\sqrt{1-|c_1 -c_2 |^2}\}.
$$
Since for each fixed $(c_1 ,t_1 ;c_2)$ with $c_1 ,c_2 \in C,\ 0\le t_1 \le 1$ there is a $t_2 \in [0,2]$ which works in
$|t_1 -t_2 |=\sqrt{1-|c_1 -c_2 |^2}$, it follows that
$$
\dim (D)=\dim (C\times C )+1\ge 2\gamma+1 =2\alpha-1.
$$
Thus when $\alpha\ge 1$ it is at least not possible to do better than \eqref{largealpha.}. This example has another implication too: there are sets $C\subset {\mathbb {R}}$ with $\dim (C)=0$ and $\dim (C\times C)=1$. (That is a manifestation of the fact that Hausdorff dimension does not always behave well when forming Cartesian products.) It follows that there are sets $K\subset {\mathbb {R}}^2$ with $\dim (K)=1$ and $\dim (D)=2$, discouraging news when looking for something better than the trivial estimate \eqref{trivbd}.
To rule out this sort of degeneracy we will assume that our $\alpha$-dimensional sets $K$ have a certain regularity - defining
$K_\delta =K+B(0,\delta )$, we will assume for the remainder of the paper that $K_\delta$ is a $\delta$-discrete $\alpha$-set in the sense of Katz and Tao \cite{KT2}. This means that
\begin{equation}\label{alphaset}
|K_\delta \cap B(x,r)|\le C(K) \, (r/\delta )^\alpha \delta^d
\end{equation}
for any $x\in{\mathbb {R}}^d $ and $r\ge \delta $. In particular, we will now assume that the $\alpha$-dimensional sets figuring in \eqref{g_d} all satisfy \eqref{alphaset}.
With the assumption \eqref{alphaset} in place we will obtain some nontrivial estimates on the upper Minkowski dimension $\dim_M (D)$ of $D$. But first we record another trivial estimate.
Since $|K_\delta |\lesssim \delta^{d-\alpha}$ by \eqref{alphaset}, it follows from $D\subset K\times K$
that $|D_\delta |\lesssim \delta^{2d-2\alpha}$. Thus $\dim_M (D)\le 2\alpha$ and so
\begin{equation}\label{trivest.}
g_d (\alpha )\le 2\alpha .
\end{equation}
Our first nontrivial bound for $g_d$ concerns large values of $\alpha$:
\begin{theorem}\label{largealpha}
If $(d+1)/2 \le \alpha \le d$, then $g_d (\alpha )= 2\alpha -1$; if $\alpha \le (d+1)/2$, then
$g_d (\alpha )\le \alpha +(d-1)/2$.
\end{theorem}
\noindent The proof uses the Fourier transform.
The second statement of Theorem \ref{largealpha} is only interesting when $\alpha +(d-1)/2$ is less than the $2\alpha$ in \eqref{trivest.} and so only when $\alpha >(d-1)/2$.
On the other hand, the first statement of Theorem \ref{largealpha} shows that
the conjecture \eqref{largealpha.} is correct for $\alpha\ge (d+1)/2$. In particular, and in contrast to the discrete unit distance problem, when $\alpha$ is sufficiently large there are positive results
available in ${\mathbb {R}}^d$ even when $d\ge 4$. But the same example
which rules out positive results on the discrete unit distance problem for $d\ge 4$ can be easily modified to show that there are no
nontrivial results on the continuous problem when $d\ge 4$ and $\alpha$ is small. In particular we have the following statement.
\begin{equation}\label{example1}
\text{If $d\ge 4$ and $\alpha \le \lfloor d/2 \rfloor -1$, then $g_d (\alpha )=2\alpha$.}
\end{equation}
(To see why \eqref{example1} is true, first note that the inequality
$g_{d+1}(\alpha )\ge g_d (\alpha )$ shows that it is enough to consider only the case when $d$ is even. In this case let $\tilde K$
be an appropriate $\alpha$-dimensional subset of $S^{ d/2 -1}\subset{\mathbb {R}}^{d/2}$ and define $K$ by
$$
K=2^{-1/2}\{(\tilde k_1 ,0),\, (0,\tilde k_2 )\in {\mathbb {R}}^{ d/2}\times {\mathbb {R}}^{ d/2}: \tilde k_1 ,\tilde k_2 \in \tilde K \}.
$$
)
If $d\ge 4$ and $\alpha\in (\lfloor \frac{d}{2}\rfloor -1 ,\frac{d-1}{2})$ we do not know if the trivial estimate
\eqref{trivest.} can be improved.
For $d=2$ or $d=3$ we have the following theorems, which contain nontrivial results for small $\alpha$.
\begin{theorem}\label{d=2}
For $0<\alpha \le 1$ we have
\begin{equation*}\label{2dest}
\frac{3\alpha }{2}\le g_2 (\alpha )\le \min \Big\{\frac{5\alpha}{3} ,\frac{\alpha (2+\alpha )}{1+\alpha}\Big\}.
\end{equation*}
Additionally, for $1\le\alpha \le 3/2$ we have $g_2 (\alpha )=\alpha +1/2$ and for
$3/2\le\alpha\le 2$ we have $g_2 (\alpha )=2\alpha -1$.
\end{theorem}
\noindent Except for the fact that
$g_2 (\alpha )\ge \alpha +1/2$ when $1\le \alpha \le 3/2$, the second statement here is a
consequence of Theorem \ref{largealpha}.
Parts of the proofs of Theorem \ref{d=2} and of Theorem \ref{d=3} below employ incidence geometry in the continuous setting - see \cite{S} for other examples.
\begin{theorem}\label{d=3}
We have
$g_3 (\alpha )\le \frac{15\alpha}{8}$.
\end{theorem}
\noindent We note that, in addition to improving \eqref{trivest.} and
improving \eqref{trivbd} for $\alpha <16/7$,
the estimate in Theorem \ref{d=3} improves the second bound in Theorem \ref{largealpha} when $\alpha\le 8/7$.
\section{Proofs}\label{proofs}
{\it Proof of Theorem \ref{discretetheorem}:}
Modifying \eqref{Gdef} to fit the context of Theorem \ref{discretetheorem} gives
\begin{equation*}
G=\{(p,b): p\in P,\, b\in S^{d-1},\, p+b\in P\}.
\end{equation*}
The correspondence $(p_1 ,p_2 )\longleftrightarrow (p_1 ,b)\dot=(p_1 ,p_2 -p_1 )$ shows that \eqref{result} is equivalent to
\begin{equation}\label{result'}
\big|G\big| \leq C_d \, |P|^{(2d-1)/d}.
\end{equation}
Define
\begin{equation*}\label{Vdef}
V\dot= \{(p,b_1 ,\dots ,b_d ):(p,b_j )\in G,\, j=1,\dots ,d\}.
\end{equation*}
Then \eqref{result'} is a consequence of the two inequalities
\begin{equation}\label{result1}
\frac{|G|^d}{|P|^{d-1}}\le |V|
\end{equation}
and
\begin{equation}\label{result2}
|V|\leq C_d \, |P|^d .
\end{equation}
Inequality \eqref{result1} follows from a H\"older's inequality argument in the spirit of \cite{KT}:
\begin{equation*}\label{ineq1}
|G|=\sum_{p\in P,|b|=1}\chi_{G}(p,b)\leq\,
\Big(\sum_{p\in P} \big(\sum_{|b|=1} \chi_G (p,b)\big)^d \Big)^{1/d} \,|P|^{(d-1)/d}.
\end{equation*}
To see \eqref{result2}, write $V$ as the disjoint union $V' \cup V''$ where $V'$ is the subset of $V$
consisting of all $(p,b_1 ,\dots ,b_d )$ for which $b_i =b_j$ for some $i\not= j$. Since $(p,b)\in G$
implies $b\in P-p$, it is clear that
\begin{equation*}
|V' |\leq C_d \, |P|^d .
\end{equation*}
To obtain a similar estimate for $V''$, consider the mapping
\begin{equation*}
\Phi:(p,b_1 ,\dots ,b_d )\mapsto (p+b_1 ,\dots ,p+b_d )
\end{equation*}
of $V''$ into $P^d$. It will be enough to show that $\Phi$ is at most two-to-one.
Since $(p,b)\in G$ implies $b\in P-p$, it follows from
$(p,b_1 ,\dots ,b_d )\in V''$ that there are distinct $p_1 ,\dots ,p_d \in P$ such that
\begin{equation*}
(b_2 -b_1 ,\dots ,b_d -b_1 )=(p_2 -p_1 ,\dots ,p_d -p_1 )\dot= (a_2 ,\dots ,a_d ).
\end{equation*}
Our hypothesis concerning affine independence implies that the vectors $a_2 ,\dots ,a_d $
are linearly independent.
Next, suppose that
\begin{equation*}
\Phi(p',b'_1 ,\dots ,b'_d )=\Phi(p,b_1 ,\dots ,b_d ).
\end{equation*}
Then
\begin{equation*}
b'_j -b'_1 =(p'+b'_j )-(p'+b'_1 )=(p +b_j )-(p +b_1 )=a_j
\end{equation*}
for $j=2,\dots ,d$. The desired multiplicity estimate for $\Phi$ now follows from Lemma \ref{lemma1}
below (an analog of the fact that there are at most two chords of a circle which are congruent under translation).
\begin{lemma}\label{lemma1}
Suppose that $a_2 ,\dots ,a_d \in {\mathbb {R}} ^d$ are linearly independent. Then there are at most two $d$-tuples
$(b_1 ,\dots ,b_d )$ with $b_j \in{\mathbb {R}}^d$ such that
\begin{equation}\label{bcond}
|b_1 |=\cdots =|b_d|=1, \text{ and } b_j -b_1 =a_j ,\ j=2,\dots ,d.
\end{equation}
\end{lemma}
\begin{proof}
Let $H$ be the hyperplane in ${\mathbb {R}}^d$ spanned by $a_2 ,\dots ,a_d$ and fix a nonzero vector $v$ with
$v\perp H$.
Our first goal is to prove the following statement:
\begin{multline}\label{2t's}
\text{there is $\{t_1 ,t_2\}\subset{\mathbb {R}}$ depending only on $\{ a_2 ,\dots ,a_d \}$ and $v$} \\
\text{such that if \eqref{bcond}
holds, then $\{ b_1 ,\dots ,b_d \}\subset (tv+H) \cap S^{d-1}$} \\
\text{for some $t\in\{t_1 ,t_2 \}$.
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }
\end{multline}
To see \eqref{2t's} we begin by noting that
if $w\in{\mathbb {R}}^d$ and $t\in{\mathbb {R}}$ then the intersection
$$
(tw+H) \cap S^{d-1}
$$
is either a $(d-2)$-sphere or empty.
In particular, if $r_0$ is the radius of the $(d-2)$-sphere determined by $\{0,a_2 ,\dots ,a_d \}$
(the linear independence of $a_2, \ldots, a_d$ guarantees that there is
exactly one $(d-2)$-sphere containing these points)
then there is $\{t_1 ,t_2 \}\subset {\mathbb {R}}$, depending only on $a_2 ,\dots ,a_d$ and $v$,
such that $(tv+H) \cap S^{d-1}$ is a $(d-2)$-sphere of radius $r_0$
exactly when $t\in\{t_1 ,t_2 \}$.
Suppose that \eqref{bcond} holds.
Since $(b_1 +H) \cap S^{d-1}$ contains $b_1$, $(b_1 +H) \cap S^{d-1}$ is a $(d-2)$-sphere. Since
\begin{equation*}
b_1 +\{ 0,a_2 ,\dots ,a_d \}\subset (b_1 +H) \cap S^{d-1},
\end{equation*}
it follows that
$(b_1 +H) \cap S^{d-1}$ is a $(d-2)$-sphere of radius $r_0$. Then (by \eqref{bcond})
\begin{equation*}
\{b_1 ,\dots ,b_d \}=b_1 +\{ 0,a_2 ,\dots ,a_d \}\subset (b_1 +H) \cap S^{d-1}=(tv+H) \cap S^{d-1}
\end{equation*}
for some $t\in\{t_1 ,t_2 \}$. This establishes \eqref{2t's}.
Given \eqref{2t's}, the proof of the lemma will be complete if we show that for fixed $t\in{\mathbb {R}}$ there is at most
one $d$-tuple $(b_1 ,\dots ,b_d )$ such that both
\eqref{bcond} and
\begin{equation}\label{bcond1}
\{b_1 ,\dots ,b_d \}\subset (tv+H) \cap S^{d-1}
\end{equation}
hold. So suppose that \eqref{bcond} and \eqref{bcond1} hold for $(b_1 ,\dots ,b_d )$
and also for $(b'_1 ,\dots ,b'_d )$. Let $r_0$ and $c$
be the radius and center of the $(d-2)$-sphere $(tv+H) \cap S^{d-1}$.
Then the points $\{0,a_2,\ldots,a_d\}$ are contained in the $(d-2)$-spheres
in $H$
of radius $r_0$ centered at $c-b_1$ and $c-b'_1$.
Again appealing to the fact that $\{0,a_2,\ldots, a_d\}$ determines a unique $(d-2)$-sphere we see that $b_1 = b'_1$ and hence $(b_1,\ldots,b_d) = (b'_1, \ldots, b'_d).$
\end{proof}
{\it Proof of Theorem \ref{largealpha}:}
Recalling the definition \eqref{g_d} of $g_d$, we will bound $g_d$ by estimating
$\dim_M (D)$. Since $\dim_M (D)\le \gamma$ will follow from
$|D_\delta |=|D+B(0,\delta )|\lesssim \delta^{2d-\gamma-\epsilon}$
for all $\epsilon >0$ and since $D_\delta \subset D^\delta$
where $D^\delta$ is defined by
\begin{equation*}
D^\delta \doteq \{(k_1 ,k_2 )\in K_\delta \times K_\delta : 1-2\delta\le |k_2 -k_1 |\le 1+2\delta \},
\end{equation*}
we will be interested in estimating $|D^\delta |$.
Without loss of generality assume that $K=-K$ and write
\begin{equation}\label{Dest}
|D^\delta |=\int_{K_\delta}\int_{K_\delta}1_{A(0,\delta )}(x_2 -x_1 )\, dx_1 \, dx_2 =
\langle 1_{K_\delta}\ast 1_{K_\delta},1_{A(0,\delta )}\rangle
\end{equation}
where, for $c\in{\mathbb {R}}^d$, $A(c,\delta ) =\{x\in{\mathbb {R}}^d :1-2\delta \le |x-c|\le 1+2\delta \}$. Let $\rho$ be a symmetric Schwartz function with
\begin{equation}\label{rhoprops}
1_{B(0,C)}\lesssim |\hat\rho |\lesssim 1_{B(0,2C)},\
1_{B(0,C' )}(x)\le \rho (x)
\lesssim \sum_{j=1}^\infty 2^{-jd}\,1 _{B(0,2^j )}(x) .
\end{equation}
Write $\sigma$ for Lebesgue measure on $S^{d-1}$.
If $\rho_r (x)=r^{-d}\, \rho (x/r)$, and if $C'$ is chosen appropriately, then
\begin{multline}\label{Dest2}
|D^\delta |\lesssim \delta\,\langle 1_{K_\delta}\ast1_{K_\delta},\rho_\delta \ast
\rho_\delta \ast \sigma\rangle =\delta\,
\langle (1_{K_\delta}\ast\rho_\delta)\ast (1_{K_\delta}\ast\rho_\delta), \sigma\rangle \lesssim \\
\delta\, \int_{B(0,2C/\delta )}
\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, \frac{d\xi}{1+|\xi |^{(d-1)/2}}.
\end{multline}
We will control the last integral by estimating $\|1_{K_\delta}\ast \rho_\delta \|_2$
and we begin by estimating $\|1_{K_\delta}\ast 1_{B(0,r)}\|_2$ for $r\ge\delta$.
Using \eqref{alphaset} we have
$$
\|1_{K_\delta}\ast 1_{B(0,r)}\|_1
\lesssim r^d \, \delta^{d-\alpha},\ \|1_{K_\delta}\ast 1_{B(0,r)}\|_\infty
\lesssim r^\alpha \delta^{d-\alpha},
$$
and so
\begin{equation}\label{est}
\|1_{K_\delta}\ast 1_{B(0,r)}\|_2 \lesssim r^{(d+\alpha )/2}\,\delta^{d-\alpha},\ r\ge \delta .
\end{equation}
Then \eqref{rhoprops} and \eqref{est} show that
for $r\ge \delta$ we have
\begin{equation}\label{est2}
\|1_{K_\delta}\ast \rho_r\|_2 \lesssim r^{(\alpha -d)/2}\delta^{d-\alpha}.
\end{equation}
Since $\hat{\rho_r}$ is supported on $B(0,C/r)$, \eqref{est2} implies
\begin{equation*}
\int\limits_{\small\frac{C}{2r}\le |\xi |\le \small\frac{C}{r}}
\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, d\xi \lesssim
\int\limits_{\small\frac{C}{2r}\le |\xi |\le \small\frac{C}{r}}
\big| \widehat {1_{K_\delta}\ast \rho_r}(\xi )\big|^2 \, d\xi\lesssim r^{\alpha -d}\, \delta^{2(d-\alpha )},\ r\ge \delta .
\end{equation*}
Thus
\begin{multline}\label{est3}
\int\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, \frac{d\xi}{|\xi |^{d-\alpha}}=
\int\limits_{|\xi |\le \small\frac{2C}{\delta}}
\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, \frac{d\xi}{|\xi |^{d-\alpha}}= \\
\int\limits_{\{|\xi |\le C\}}
\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, \frac{d\xi}{|\xi |^{d-\alpha}}+
\sum\limits_{\small\frac{1}{2}\le 2^j \le\small\frac{1}{\delta}}
\int\limits_{\small\frac{C}{2^{j+1} \delta}\le |\xi |\le \small\frac{C}{2^{j}\delta}}
\big| \widehat {1_{K_\delta}\ast \rho_\delta}(\xi )\big|^2 \, \frac{d\xi}{|\xi |^{d-\alpha}}\lesssim \\
\delta^{2(d-\alpha )}+\sum\limits_{\small\frac{1}{2}\le 2^j \le\small\frac{1}{\delta}}
\Big(\frac{C}{2^j \delta}\Big)^{\alpha -d}\big(2^j \delta \big)^{\alpha -d}\,\delta^{2(d-\alpha )}\lesssim
\log \big(\small\frac{1}{\delta}\big)\,\delta^{2(d-\alpha )}.
\end{multline}
(So normalized Lebesgue measure on $K_\delta$ behaves like an $\alpha$-dimensional measure from the Fourier transform point of view.) We will use \eqref{est3} to estimate $|D^\delta |$ via \eqref{Dest2}
and thus to obtain the upper bounds on $g_d$ in Theorem \ref{largealpha}.
If $\alpha\ge (d+1)/2$, so that $(d-1)/2 \ge d-\alpha$, then the integral in \eqref{Dest2} is dominated by the integral estimated in \eqref{est3}. That leads to
$|D^\delta |\lesssim \log \big(\small\frac{1}{\delta}\big)\,\delta^{2(d-\alpha )+1}$ and so, by the remarks at the beginning of this proof, to $g_d (\alpha )\le 2\alpha -1$. With the example described
after \eqref{largealpha.}, this gives
$g_d (\alpha )= 2\alpha -1$. If $\alpha\le(d+1)/2$, then when $|\xi |\le C/\delta$ we have
$$
\frac{1}{|\xi |^{(d-1)/2}}
\lesssim\frac{ \delta^{\alpha -(d+1)/2}}{|\xi |^{d-\alpha}}
$$
which leads as above to $g_d (\alpha )\le \alpha +(d-1)/2$.
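In detail, inserting this bound into \eqref{Dest2} and using \eqref{est3} gives
\begin{equation*}
|D^\delta |\lesssim \delta\cdot\delta^{\alpha -(d+1)/2}\,
\log \big(\small\frac{1}{\delta}\big)\,\delta^{2(d-\alpha )}=
\log \big(\small\frac{1}{\delta}\big)\,\delta^{2d-\alpha -(d-1)/2},
\end{equation*}
and hence $\dim_M (D)\le \alpha +(d-1)/2.$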
{\it Proof of Theorem \ref{d=2}:}
We begin by claiming that it is enough to prove the upper bounds for $g_2 (\alpha )$ under the additional assumption that
\begin{equation*}\label{diam}
\text{diam} (K)\le 2-\eta
\end{equation*}
for some fixed $\eta >0$. (The purpose of this restriction is to avoid the possibility of external
tangencies of certain annuli and thus to allow the use of estimates like \eqref{intersectionest} below.)
To see that this reduction is legitimate, let $\{C_1 ,\dots ,C_7 \}$ be a partition of the unit circle into arcs each having length less than $.9$ and let
$$
G_i =B(0,1/100) \cup \big(B(0,1/100)+C_i\big) .
$$
Then
$$
D\subset\cup_{k\in K}\cup_{1\le i\le 7}(k+G_i )^2 .
$$
As $D$ is compact, it is contained in some finite union of sets
$$
(k+G_i )^2 .
$$
Since $\text{diam} \big(k+G_i \big)\le 2-\eta$ for some fixed $\eta >0$, our claim is established.
By renaming $\eta$ and assuming that $\delta>0$ is small enough, we can (and do) assume
for the remainder of this proof that
\begin{equation}\label{diam'}
\text{diam} (K_\delta )\le 2-\eta .
\end{equation}
We now turn to the proof of the upper bound
\begin{equation}\label{UB1}
g_2 (\alpha )\le \frac{\alpha (2+\alpha )}{1+\alpha}.
\end{equation}
Under the assumption that $K$ satisfies \eqref{alphaset} for $d=2$, it is enough to establish the estimate
\begin{equation*}
|D^\delta |\lesssim \log (1/\delta )\,\delta ^{4-\alpha (2+\alpha )/(1+\alpha)}.
\end{equation*}
(Throughout this argument the constants implied by the symbol $\lesssim$ depend only on $K$).
With
\begin{equation*}\label{Gdelta}
G^\delta = \{(k,y): k\in K_\delta , \, 1-2\delta \le |y|\le 1+2\delta, \, k+y\in K_\delta \},
\end{equation*}
the correspondence $(k_1 ,k_2 )\longleftrightarrow (k_1 ,y )\dot= (k_1 ,k_2 -k_1 )$ shows that
$|D^\delta |=|G^\delta |$. Thus it suffices to show that
\begin{equation}\label{UB1.1}
|G^\delta |\lesssim \log (1/\delta )\,\delta ^{4-\alpha (2+\alpha )/(1+\alpha)}.
\end{equation}
Recall that $A(0,\delta ) =\{x\in{\mathbb {R}}^d :1-2\delta \le |x|\le 1+2\delta \}$. For $k\in K_\delta$ we will write
$(G^\delta )_k$ for the $k$-section of $G^\delta$ given by $\{y\in A(0,\delta ):k+y\in K_\delta \}$.
Since $K_\delta$ is a $\delta$-discrete $\alpha$-set we can assume that the two-dimensional
Lebesgue measure of $(G^\delta )_k$ satisfies $\delta^2 \lesssim |(G^\delta )_k |\lesssim \delta^{2-\alpha}$.
Find $M\lesssim \log (1/\delta )$ positive numbers $\{\lambda_m \}_{m=1}^M$ such that $\lambda_{m+1}=2\,\lambda_m$
and such that for each $k\in K_\delta$ we have $\lambda _m \le |(G^\delta )_k |\le \lambda_{m+1}$ for some $m$.
Then define
$$
K^m = \{k\in K_\delta : \lambda _m \le |(G^\delta )_k |\le \lambda_{m+1}\}.
$$
The estimate \eqref{UB1.1} (of the four-dimensional Lebesgue measure of $G^\delta$) will follow from the following estimate of the two-dimensional Lebesgue measure of $K^m$:
\begin{equation}\label{UB1.2}
\lambda_m \,|K^m |\lesssim \delta^{4-\alpha (2+\alpha )/(1+\alpha)}.
\end{equation}
So fix $m$. Choose $N=N(m)$ disjoint balls $B(c_n ,\delta )$ with $c_n \in K^m$ for which
\begin{equation}\label{UB1.3}
\lambda\doteq \lambda_m\le
|A(c_n ,\delta ) \cap K_\delta |
\end{equation}
and such that
\begin{equation}\label{UB1.35}
|K^m |\lesssim N\, \delta^2 .
\end{equation}
Our goal is the estimate
\begin{equation}\label{UB1.4}
\lambda\,N\,\delta^2 \lesssim \frac{\delta^{4}}{\lambda^{\alpha}}
\end{equation}
which when interpolated with the trivial estimate
\begin{equation}\label{trivest}
\lambda\,N\,\delta^2 \lesssim \lambda\, \delta^{2-\alpha}
\end{equation}
gives \eqref{UB1.2} via \eqref{UB1.35}.
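For the reader's convenience, the interpolation here is just the geometric mean of \eqref{UB1.4} and \eqref{trivest} with weights $1/(1+\alpha )$ and $\alpha /(1+\alpha ),$ chosen so that the powers of $\lambda$ cancel:
\begin{equation*}
\lambda\,N\,\delta^2 \lesssim
\Big(\frac{\delta^{4}}{\lambda^{\alpha}}\Big)^{1/(1+\alpha )}
\big(\lambda\, \delta^{2-\alpha}\big)^{\alpha /(1+\alpha )}
=\delta^{4-\alpha (2+\alpha )/(1+\alpha )}.
\end{equation*}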
To prove \eqref{UB1.4} we begin by fixing $r=C\delta^2 /\lambda$.
Since $|K_\delta \cap B(x,r)|\lesssim (r/\delta )^\alpha \delta^{2}$, any
$B(x,r)$ contains $\lesssim (r/\delta )^\alpha$ of the $B(c_n ,\delta )$'s. Thus there is an $r$-separated subcollection
$\{\tilde c_n \}$ containing $\tilde N$ of the $c_n$'s, where
\begin{equation}\label{Ntilde}
N\delta^\alpha /r^\alpha \lesssim \tilde N .
\end{equation}
The bound \eqref{UB1.4} will follow from a certain estimate from below of the two-dimensional Lebesgue measure
\begin{equation*}
|\cup_n \big(A(\tilde c_n ,\delta ) \cap K_\delta \big)|.
\end{equation*}
Part of the strategy here is the general estimate
\begin{equation}\label{unionest}
|\cup_n E_n |\ge \sum_n |E_n | -\sum_{n_1 <n_2 }|E_{n_1}\cap E_{n_2}|.
\end{equation}
We will take $E_n =A(\tilde c_n ,\delta ) \cap K_\delta$ and use the estimate
\begin{equation}\label{intersectionest}
|A(\tilde c_{n_1},\delta ) \, \cap \ A(\tilde c_{n_2},\delta )|
\lesssim\frac{\delta^2}{\delta+|\tilde c_{n_1}-\tilde c_{n_2}|}
\end{equation}
(in which the implied constant depends on $\eta$ in \eqref{diam'}\,)
to bound $|E_{n_1}\cap E_{n_2}|$. For this reason we are interested in controlling the quantity
\begin{equation*}\label{sum}
\sum_{n\not= n_0}\frac{\delta^2}{ |\tilde c_{n}-\tilde c_{n_0}|}.
\end{equation*}
We are assuming that the sets $K_{\delta '}$ are uniformly $\delta '$-discrete - that they satisfy \eqref{alphaset} uniformly in $\delta '$ - and so, in particular, $K_r$ is $r$-discrete. Thus for each $\tilde c_{n_0}$ there are at most
$C_2 \, 2^{k\alpha}$ of the $r$-separated $\tilde c_n$'s within distance $2^k r$ of $\tilde c_{n_0}$.
Therefore, since $\alpha <1$,
\begin{equation*}
\sum_{n\not= n_0}\frac{\delta^2}{ |\tilde c_{n}-\tilde c_{n_0}|}\lesssim \delta^2 \sum _{k=1}^\infty \frac{2^{k\alpha}}{2^k r}\lesssim
\frac{\delta^2 }{r}
\end{equation*}
and so
\begin{equation}\label{sumest}
\sum_{n\not= n_0}\frac{\delta^2}{ |\tilde c_{n}-\tilde c_{n_0}|}\le c\,\lambda
\end{equation}
by our choice of $r$. Thus \eqref{intersectionest} and \eqref{sumest} imply
\begin{equation*}
\sum_{n_1 <n_2 }|A(\tilde c_{n_1},\delta ) \, \cap \ A(\tilde c_{n_2},\delta )|\leq
C'\tilde{N} c\, \lambda = \tilde{N}c' \lambda.
\end{equation*}
On the other hand, because of \eqref{UB1.3} and \eqref{Ntilde} we have
\begin{equation*}
\sum_n |A(\tilde c_n ,\delta )\cap K_\delta |\ge \tilde{N} \lambda
\end{equation*}
and so, by \eqref{unionest},
\begin{equation*}
|\cup_n \big(A(\tilde c_n ,\delta )\cap K_\delta \big)|\ge (1 - c') \tilde{N} \lambda \gtrsim
(1 - c') \Big(\frac{N\delta^\alpha}{r^\alpha}\Big) \lambda .
\end{equation*}
If $C$ (figuring in the choice of $r$) is large enough, then $1 - c' >0$ and so this
last estimate and the fact that
$|K_\delta |\lesssim \delta^{2-\alpha}$, together with our choice of $r$, yield \eqref{UB1.4}.
This completes the proof of \eqref{UB1}.
Next we give the proof of the upper bound
\begin{equation}\label{UB2}
g_2 (\alpha )\le \frac{5\alpha}{3}.
\end{equation}
Part of the argument is analogous to the proof of Theorem \ref{discretetheorem}.
Let $K^m$, $\lambda =\lambda_m$, and the $B(c_n ,\delta ),\, 1\le n\le N,$ be as in the proof of \eqref{UB1}.
Instead of \eqref{UB1.4} we will now prove
\begin{equation}\label{UB2.1}
\lambda\,N\,\delta^2 \lesssim \frac{\delta^{2+3(2-\alpha )}}{\lambda ^2}.
\end{equation}
As above, interpolation with \eqref{trivest} will then lead to
\begin{equation*}
|D^\delta |\lesssim \log (1/\delta )\,\delta ^{4-5\alpha /3}
\end{equation*}
and so to \eqref{UB2}.
Choose a maximal $\delta$-separated subset $J$ of $K_\delta$. For each $c_n$ let
\begin{equation*}
S_{c_n}=\{a\in J :1-3\delta \le |a-c_n |\le 1+3\delta \},
\end{equation*}
so that $S_{c_n}$ is like a discretized $c_n$-section of $D^\delta$.
Define
\begin {equation*}
V=\big\{(c_n ,a_1 ,a_2 ): 1\le n\le N,\, a_1 ,a_2 \in S_{c_n},\, |a_1-a_2 |\ge
c\, \Big(\small\frac{\lambda}{\delta^{2-\alpha}}\Big)^{1/\alpha }\big\},
\end{equation*}
where $c$ is a small positive constant. We will prove \eqref{UB2.1} by comparing upper and
lower estimates for $|V|$.
Since
\begin{equation*}
|\{k\in K_\delta : 1-2\delta \le |k-c_n | \le 1+2\delta\}|\ge\lambda
\end{equation*}
by the choice of $c_n$ it follows that $|S_{c_n}|\gtrsim \lambda /\delta^2$. Since \eqref{alphaset} implies that
\begin{equation*}
|K_\delta \cap B\big(a, c(\lambda / \delta^{2-\alpha })^{1/\alpha}\big)| \lesssim c^\alpha \, \lambda
\end{equation*}
for any $a$, it follows that
\begin{equation}\label{Tlower}
|V|\gtrsim N\,\Big(\frac{\lambda}{\delta^2}\Big)^2
\end{equation}
if $c$ is small enough.
To obtain an upper bound for $|V|$ we begin by noting that if
$$
(c_{n_0} ,a_1 ,a_2 ) \in V
$$
then $c_{n_0}$ is in
\begin{equation}\label{inter}
A(a_1 ,3\delta )\cap A(a_2 ,3\delta ).
\end{equation}
Because $|a_1 -a_2 |\le 2-\eta <2$ it follows that if $|a_1 -a_2 |\gtrsim \delta$ then \eqref{inter} is a union of two connected components, one on either side of the line through $a_1$ and $a_2$ and each having diameter bounded above by
\begin{equation}\label{multiplicity}
C \frac{\delta}{|a_1 -a_2 |}\lesssim \Big(\frac{\delta^2}{\lambda}\Big)^{1/\alpha},
\end{equation}
where the inequality comes from the definition of $V$. The hypothesis \eqref{alphaset} then implies that each connected component of \eqref{inter} contains $\lesssim \delta^{2-\alpha}/\lambda$ points from $\{c_n\}$. Thus the projection
$$
(c_n ,a_1 ,a_2 )\mapsto (a_1 ,a_2 )
$$
of $V$ into $J\times J$ has multiplicity at most $C\, \delta^{2-\alpha}/\lambda$. Therefore
\begin{equation}\label{Tupper}
|V|\lesssim |J|^2 \, \frac{\delta^{2-\alpha}}{\lambda}\lesssim \delta^{-2\alpha}\, \frac{\delta^{2-\alpha}}{\lambda}.
\end{equation}
Comparison of \eqref{Tlower} and \eqref{Tupper} yields \eqref{UB2.1}.
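Explicitly, \eqref{Tlower} and \eqref{Tupper} combine to give
\begin{equation*}
N\,\Big(\frac{\lambda}{\delta^2}\Big)^2 \lesssim
\delta^{-2\alpha}\, \frac{\delta^{2-\alpha}}{\lambda},
\quad\text{that is}\quad
\lambda\,N\,\delta^2 \lesssim \frac{\delta^{8-3\alpha}}{\lambda^2}
=\frac{\delta^{2+3(2-\alpha )}}{\lambda^2}.
\end{equation*}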
This completes the proof of \eqref{UB2}.
To complete the proof of Theorem \ref{d=2} we need to establish the two lower bounds on $g_2 (\alpha )$
\begin{equation}\label{LB1}
\text{$g_2 (\alpha )\ge 3\alpha /2$ if $0<\alpha \le 1$ }
\end{equation}
and
\begin{equation}\label{LB2}
\text{$g_2 (\alpha )\ge \alpha +1/2$ if $1<\alpha \le 3/2$. }
\end{equation}
These will be consequences of the following lemma.
\begin{lemma}
Suppose $0<\beta , \gamma <1$ are rational and let $\alpha '=\beta +\gamma$. There is a compact set
$K\subset {\mathbb {R}}^2$ which satisfies \eqref{alphaset} with $\alpha '$ instead of $\alpha$ and for which we have
$|D^\delta |\gtrsim \delta^{4-(\beta +3\gamma /2)}$ for some sequence of $\delta$'s tending to $0$.
\end{lemma}
\noindent To deduce \eqref{LB1}, approximate $\alpha$ by $\alpha '$ with $\beta$ very close to $0$;
to deduce \eqref{LB2}, approximate $\alpha$ by $\alpha '$ with $\gamma$ very close to $1$.
\begin{proof} We will require compact subsets $A,B\subset [0,1]$ which satisfy \eqref{alphaset} with $\alpha$
replaced by $\beta$ in the case of $A$ and by $\gamma$ in the case of $B$. We will also need $A$ and $B$ to satisfy
the two lower bounds
\begin{equation}\label{L0}
\int_{A_{\delta_n}}\int_{A_{\delta_n}}1_{\{2\delta_n \le |x_1 -x_2 |\le 5\delta_n /2\}}\ dx_1 \, dx_2
\gtrsim \delta_n^{2-\beta}
\end{equation}
and
\begin{equation}\label{L1}
\int_{B_{\delta_n}}\int_{B_{\delta_n}}1_{\{ \sqrt{7\delta_n /2}\le |t_1 -t_2 |\le 2\sqrt{\delta_n} \}}\ dt_1 \, dt_2
\gtrsim \delta_n^{2-2\gamma}\delta_n^{\gamma /2}.
\end{equation}
for a sequence of $\delta_n$'s tending to $0$. (At the end of this proof we will say a few words
about how to obtain $A$ and $B$.) Put $F=A\cup (A+1)$. Then
\begin{equation}\label{L2}
\int_{F_{\delta_n}}\int_{F_{\delta_n}}1_{\{2\delta_n \le 1-|x_1 -x_2 |\le 5\delta_n /2\}}\ dx_1 \, dx_2
\gtrsim \delta_n^{2-\beta}.
\end{equation}
Let $K=F\times B$. Then \eqref{alphaset} holds with $\alpha ' =\beta +\gamma$ in place of $\alpha$
by our choices of $F$ and $B$.
Now
$$
1-\delta \le \sqrt{(x_1 -x_2 )^2 +(t_1 -t_2 )^2}\le 1+\delta
$$
is equivalent to
$$
\sqrt{(1-\delta )^2 -|x_1 -x_2 |^2 }\le |t_1 -t_2 |\le \sqrt{(1+\delta )^2 -|x_1 -x_2 |^2 }.
$$
If
$$
2\delta\le 1-|x_1 -x_2 |\le 5\delta /2
$$
then
$$
2\delta\le 1-|x_1 -x_2 |^2 \le 5\delta
$$
and so if $\delta <1/2$ some algebra shows that
\begin{equation*}
\sqrt{(1-\delta )^2 -|x_1 -x_2 |^2 }\le \sqrt{7\delta /2}<2\sqrt{\delta} \le \sqrt{(1+\delta )^2 -|x_1 -x_2 |^2 }.
\end{equation*}
Thus if
$$
2\delta\le 1-|x_1 -x_2 |\le 5\delta /2 \text{ and }
\sqrt{7\delta /2}\le |t_1 -t_2 |\le 2\sqrt{\delta}
$$
it follows that
$$
\sqrt{(1-\delta )^2 -|x_1 -x_2 |^2 }\le |t_1 -t_2 |\le \sqrt{(1+\delta )^2 -|x_1 -x_2 |^2 }.
$$
With \eqref{L2} and \eqref{L1} this gives
$$
\int_{F_{\delta_n}}\int_{F_{\delta_n}}\int_{B_{\delta_n}} \int_{B_{\delta_n}}
1_{\{1-\delta_n \le \sqrt{(x_1 -x_2 )^2 +(t_1 -t_2 )^2}\le 1+\delta_n \}}\, dt_1 \, dt_2 \, dx_1 \, dx_2
\gtrsim \delta_n^{4-(\beta +3\gamma /2)}
$$
and so $|D^{\delta_n} |\gtrsim \delta_n^{4-(\beta +3\gamma /2)}$.
We conclude the proof of this lemma by describing a construction (which, though tedious, we include for the sake of completeness) of the required sets $A$ and $B$. For positive integers $p<q$ consider the Cantor set
$C=C(p,q)$ constructed by removing $(2^p -1)$ equally spaced open intervals from $C_0 = [0,1]$
to obtain $C_1 =[0,2^{-q}]\cup\cdots\cup [1-2^{-q},1]$ and then continuing in the usual way, so that at the $j$th stage of the construction we have a set $C_j$ which is the union of $2^{jp}$
closed intervals of length $2^{-jq}$. Then
\eqref{alphaset} holds with $C=\cap C_j$ instead of $K$ and with $\alpha =p/q$. Also,
since $C_j \subset C+B(0,2^{-qj})=C_{2^{-qj}}$, for any $0<\kappa_1 <\kappa_2 <1$ we have
\begin{equation*}
\int_{C_{2^{-qj}}}\int_{C_{{2^{-qj}}}}1_{\{\kappa_1 2^{-qj} \le |x_1 -x_2 |\le \kappa_2 2^{-qj} \}}\ dx_1 \, dx_2
\gtrsim (2^{-qj})^{(2-p/q)}
\end{equation*}
and then also
\begin{equation}\label{L3}
\int_{C_{2^{-qj-2}}}\int_{C_{{2^{-qj-2}}}}1_{\{\kappa_1 2^{-qj} \le |x_1 -x_2 |\le \kappa_2 2^{-qj} \}}\ dx_1 \, dx_2
\gtrsim (2^{-qj})^{(2-p/q)},
\end{equation}
where the implied constant depends on $\kappa_1$ and $\kappa_2$. One then sees that
\begin{multline}\label{L4}
\int_{C_{2^{-2qj-2}}}\int_{C_{2^{-2qj-2}}}1_{\{\kappa2^{-qj} \le |x_1 -x_2 |\le 2^{-qj} \}}\ dx_1 \, dx_2
\gtrsim \\
2^{2(p-q)j}(2^{-qj})^{(2-p/q)} =(2^{-2qj})^{(2-\frac{3}{2}\frac{p}{q})}.
\end{multline}
If $p_2$ and $q_2$ are chosen so that $\gamma =p_2 /q_2$, if $B=C(p_2 ,q_2 )$, and if
$$
\delta_n = 2^{-2q_1 q_2 n-2}
$$
%
then
$$
\sqrt{\frac{7\delta_n} {2}}=\sqrt{\frac{7}{8}}\, 2^{-q_1 q_2 n},\ 2\sqrt{\delta_n}=2^{-q_1 q_2 n}
$$
and so \eqref{L4} with $q=q_2$, $j=nq_1$, and
$\kappa =\sqrt{7/8}$ shows that \eqref{L1} holds.
If $p_1$ and $q_1$ are chosen so that $\beta =p_1 /q_1$ and if $A=C(p_1 ,q_1 )$, then
$$
2\delta_n =\frac{1}{2}2^{-2q_1 q_2 n},\, \frac{5}{2}\delta_n =\small\frac{5}{8}2^{-2q_1 q_2 n}
$$
and so \eqref{L3} with $q=q_1$, $j=2nq_2$, $\kappa_1 =1/2$, and $\kappa_2 = 5/8$ shows that \eqref{L0} holds. This completes the proof of the lemma.
\end{proof}
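For concreteness, here is a short Python sketch (ours, not part of the argument; it assumes $p\ge 1$ and $p<q$) that generates the intervals of the $j$th stage $C_j$ of $C(p,q)$:
\begin{verbatim}
def cantor_intervals(p, q, stages):
    # Intervals of the stage-j approximation C_j of C(p, q).
    # Each interval [a, b] is replaced by 2**p closed subintervals of
    # relative length 2**(-q), equally spaced so that the first starts
    # at a and the last ends at b; p < q keeps the gaps positive.
    intervals = [(0.0, 1.0)]
    for _ in range(stages):
        new = []
        for a, b in intervals:
            child = (b - a) * 2.0 ** (-q)
            gap = ((b - a) - 2 ** p * child) / (2 ** p - 1)
            for i in range(2 ** p):
                start = a + i * (child + gap)
                new.append((start, start + child))
        intervals = new
    return intervals
\end{verbatim}
Such a list makes it easy to check estimates like \eqref{L3} numerically for small $j$.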
{\it Proof of Theorem \ref{d=3}:} The proof is similar to the proof of the bound $g_2 (\alpha )\leq 5\alpha /3$ of
Theorem \ref{d=2}. We begin by letting $K^m$, $\lambda =\lambda_m$, and the balls $B(c_n ,\delta )$, $1\le n\le N$,
be the three-dimensional analogs of the quantities defined in the proof of Theorem \ref{d=2}.
Instead of
\eqref{UB2.1} we will now establish
\begin{equation}\label{UB3.1}
\lambda \, N\, \delta^3 \lesssim \frac{\delta^{5(3-\alpha )}\delta^{\alpha /2}}{\lambda^3}.
\end{equation}
Interpolation with the trivial bound
\begin{equation}\label{trivest2}
\lambda \, N\, \delta^3 \lesssim \lambda\, \delta^{3-\alpha}
\end{equation}
gives
\begin{equation}\label{UB3.2}
\lambda \, N\, \delta^3 \lesssim \delta^{6-15\alpha /8}.
\end{equation}
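(As before, \eqref{UB3.2} is the geometric mean of \eqref{UB3.1} and \eqref{trivest2} with weights $1/4$ and $3/4,$ chosen so that the powers of $\lambda$ cancel.)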
Then an argument completely analogous to the one in the proof of
Theorem \ref{d=2} leads to $g_3 (\alpha )\le 15 \alpha /8$.
Again choose a maximal $\delta$-separated subset $J$ of $K_\delta$. For each $c_n$ let
\begin{equation*}
S_{c_n}=\{a\in J :1-3\delta \le |a-c_n |\le 1+3\delta \}=J\cap A(c_n ,3\delta ).
\end{equation*}
Define
\begin {multline*}
V=\\
\big\{(c_n ,a_1 ,a_2 ,a_3 ): 1\le n\le N,\, a_1 ,a_2 , a_3 \in S_{c_n},\, |a_i-a_j |\ge
c\, \Big(\small\frac{\lambda}{\delta^{3-\alpha}}\Big)^{1/\alpha }
\text{ if }1\le i<j\le 3
\big\},
\end{multline*}
where $c$ is a small positive constant. We will prove \eqref{UB3.1} by again comparing upper and
lower estimates for $|V|$. Before continuing we note that it suffices to prove \eqref{UB3.1} under the assumption that
\begin{equation}\label{largedist}
\Big(\small\frac{\lambda}{\delta^{3-\alpha}}\Big)^{1/\alpha }\gtrsim \delta^{1/2-\epsilon}
\end{equation}
for some small $\epsilon >0$ - otherwise \eqref{UB3.2} follows from \eqref{trivest2}.
By using \eqref{alphaset} just as in the proof of \eqref{Tlower} we get the lower bound
\begin{equation}\label{Tlower2}
|V|\gtrsim N\,\Big(\frac{\lambda}{\delta^3}\Big)^3 .
\end{equation}
As before we will obtain an upper bound for $|V|$ by controlling the multiplicity of the projection
$$
(c_n ,a_1 ,a_2 ,a_3 )\mapsto (a_1 ,a_2 ,a_3 )
$$
of $V$ into $J^3$. In fact we will show that
\begin{equation}\label{multiplicity2}
\text{$(c_n ,a_1 ,a_2 ,a_3 )\mapsto (a_1 ,a_2 ,a_3 )$ has multiplicity bounded by
$C\,\delta^{-\alpha /2}\frac{\delta^{3-\alpha}}{\lambda}$.}
\end{equation}
Since $|J|\lesssim \delta^{-\alpha}$ it will then follow that
\begin{equation*}
|V|\lesssim \delta^{-3\alpha}\delta^{-\alpha /2}\frac{\delta^{3-\alpha}}{\lambda}.
\end{equation*}
Comparing this with \eqref{Tlower2} then gives \eqref{UB3.1}. Thus the proof of
Theorem \ref{d=3} will be complete when \eqref{multiplicity2} is established.
We will establish \eqref{multiplicity2} by estimating the diameter of an intersection
\begin{equation}\label{shellintersect}
I\dot= A(a_1 ,3\delta )\cap A(a_2 ,3\delta )\cap A(a_3 ,3\delta ).
\end{equation}
To begin, the intersection $A(a_1 ,0 )\cap A(a_2 ,0 )$ of the unit spheres centered at $a_1$ and $a_2$
is a circle contained in the hyperplane
\begin{equation}\label{hplane}
P_{1,2}= \big\{x\in{\mathbb {R}}^3 :(x-a_1 )\cdot (a_2 -a_1 )=\frac{1}{2}|a_2 -a_1 |^2\big\}.
\end{equation}
If $x\in A(a_i ,3\delta )$ then $x\in A(a_i +e_i ,0)$ with $|e_i |\le 3\delta$. It follows that if
$x\in A(a_1 ,3\delta )\cap A(a_2 ,3\delta )$, then
\begin{equation*}
\big| (x-a_1 )\cdot (a_2 -a_1 )-\frac{1}{2}|a_2 -a_1 |^2 \big|\lesssim \delta ,
\end{equation*}
and so
\begin{equation*}
A(a_1 ,3\delta )\cap A(a_2 ,3\delta )\subset P_{1,2}+B\big(0, C\frac{\delta} {|a_2 -a_1 |}\big).
\end{equation*}
Similarly,
\begin{equation*}
A(a_1 ,3\delta )\cap A(a_3 ,3\delta )\subset P_{1,3}+B\big(0, C\frac{\delta} {|a_3 -a_1 |}\big).
\end{equation*}
If the $a_i$ are affinely independent, it follows that the intersection \eqref{shellintersect} is contained in an extrusion
(in the direction perpendicular to $a_2 -a_1$ and $a_3 -a_1$) of a parallelogram $P$ contained in the plane
$a_1 +\text{span} (a_2 -a_1 ,a_3 -a_1 )$. This parallelogram has two sides of length
$\frac{C\delta}{|a_2 -a_1 |\sin (\theta )}$ perpendicular to $a_3 -a_1$ and
two sides of length
$\frac{C\delta}{|a_3 -a_1 |\sin (\theta )}$ perpendicular to $a_2 -a_1$ where $\theta\in (0,\pi )$ is the
angle between $a_2 -a_1$ and $a_3 -a_1$.
We will need the estimate
\begin{equation}\label{sine}
\sin (\theta )\gtrsim |a_3 -a_2|.
\end{equation}
This is the point at which \eqref{largedist} will come into play: we will be assuming that
$
(c_n ,a_1 ,a_2 ,a_3 )\in V
$
and so it will follow that
\begin{equation}\label{largeprod}
|a_2 -a_1 |, \, |a_3 -a_1 |, \, |a_3 -a_2 |\gtrsim \delta^{1/2-\epsilon}
\end{equation}
for some $\epsilon>0$.
With no loss of generality we can write $a_1 =(0 ,0 ,0)$, $a_2 =(x_2 ,0 ,0)$, $a_3 =(x_3 ,y_3 ,0)$
and then assume that these points lie in the first octant, that $y_3 >0$,
and that $|a_2 |\ge |a_3 |$. We will now observe that if $\sin (\theta )$ and therefore
$\tan (\theta )=y_3 /x_3$ are small compared to $|a_2 -a_3 |$, then the extrusion fails to intersect the
shells $A(a_i ,3\delta )$.
To show this we begin by observing that
the center $p$ of the parallelogram $P$ is the point of intersection of the perpendicular bisectors
of the segments $[a_1 ,a_2]$ and $[a_1 ,a_3 ]$ and has $y$ coordinate equal to
$$
p_y \doteq\frac{y_3}{2}-\frac{x_3}{2y_3}(x_2 -x_3 )=\frac{y_3}{2}-\frac{1}{2\tan (\theta )}(x_2 -x_3 ).
$$
If $\tan (\theta )$ is small compared to $|a_2 - a_3|$, then $x_2 -x_3 \ge |a_2 -a_3 |/2$. Thus,
it follows that $|p_y |$ is large.
Since we have assumed that $|a_2 -a_1 | \geq |a_3 -a_1|$, the diameter of $P$ is bounded by
$$
\frac{2C\delta}{|a_3 -a_1 |\sin (\theta )}\lesssim \frac{\delta^{1/2+\epsilon}}{\sin (\theta )},
$$
where we have used \eqref{largeprod}.
This will be small compared to $|p_y |$ (since
$$
\frac{|x_2 -x_3 |}{\tan (\theta )}\gtrsim \frac{\delta^{1/2-\epsilon}}{\sin (\theta )},
$$
again by \eqref{largeprod}). In this case the distance $\rho$ from $P$ to the $x$-axis will be comparable to $|p_y |$. But if $\rho>2$, say, the extrusion will miss the shells $A(a_i ,3\delta )$
(whose centers lie in the $xy$-plane, on or near the $x$-axis).
With \eqref{sine} it now follows from the definition of $V$ that the diameter of $P$ is bounded by
$C\delta\big(\frac{\delta^{3-\alpha}}{\lambda}\big)^{2/\alpha}$. The following estimate is a consequence of
the subadditivity of the square root function on $(0,\infty)$:
\begin{multline*}
\Big|\sqrt {(1+\epsilon_1 )^2 -(x_1 ^2 +y_1^2 )}-\sqrt {(1+\epsilon_2 )^2 -(x_2 ^2 +y_2^2 )}\Big|\le \\
\sqrt{\big|2\epsilon_1 +\epsilon_1^2 -2\epsilon_2 -\epsilon_2^2 \big|} +\sqrt {\big|
x_2^2 +y_2^2 -(x_1^2 +y_1^2 )\big|}.
\end{multline*}
With $|\epsilon_1 |,|\epsilon_2 |\le 2\delta$ this shows that if $(x_i ,y_i ,z_i )\in A(0,\delta )$ for $i=1,2$ and
$|(x_1 ,y_1 )-(x_2 ,y_2 )|\le \kappa$ then
$$
|(x_1 ,y_1 ,z_1 )-(x_2 ,y_2 ,z_2)|\lesssim\max (\delta^{1/2},\kappa^{1/2}).
$$
Thus it follows from our bound on the diameter of $P$ that
\begin{equation}\label{diamest}
\text{diam} (I)\le C\, \Big(\delta\big(\frac{\delta^{3-\alpha}}{\lambda}\big)^{2/\alpha}\Big)^{1/2}
=C\,\delta\,\delta^{-1/2}\big(\frac{\delta^{3-\alpha}}{\lambda}\big)^{1/\alpha}.
\end{equation}
Now if $(c_n ,a_1 ,a_2 ,a_3 ), (c_{n'},a_1 ,a_2 ,a_3 )\in V$, we have $c_n ,c_{n'}\in I$. Thus
\eqref{diamest}, the fact that the $c_n$'s are $\delta$-separated, and \eqref{alphaset}
together yield \eqref{multiplicity2}. This completes the proof of Theorem \ref{d=3}.
The Density Matrix Renormalization Group (DMRG) method is a successful
technique for simulating large low-dimensional quantum mechanical
systems \cite{dmrg}. Developed for computing ground states of 1D
Hamiltonians, it is equivalent to a variational ansatz known as Matrix
Product States (MPS) \cite{ostlund95,verstraete04a}. This relation
has been recently exploited to develop a much wider family of
algorithms for simulating quantum systems, including time evolution
\cite{vidal04,verstraete04b}, finite temperature
\cite{zwolak04,verstraete04b} and excitation spectra \cite{porras06}.
Some of these algorithms have been translated back to the DMRG
language \cite{white93,daley04,gobert05} using optimizations developed
in that field and introducing other techniques such as Runge-Kutta or
Lanczos approximations of the evolution operator
\cite{feiguin04,schmitteckert04,manmana05,manmana06}.
The MPS are a hierarchy of variational spaces, ${\cal S}_D,$ [See
Eq.~(\ref{SD})] sorted by the size of their matrices, $D.$ MPS can
efficiently represent many-body states of 1D systems, even when the
Hilbert space is so big that the coefficients of a pure state on an
arbitrary basis cannot be stored in any computer. While the accuracy
of this representation has been proven for ground
states \cite{verstraete05}, evolution of an arbitrary state changes the
entanglement among its parties, and a MPS description with moderate
resources (small $D$) may cease to be feasible.
We will take a pragmatic approach. First of all, most algorithms in
this work can compute truncation errors so that the accuracy of
simulations remains controlled. Second, we are interested in
simulating \textit{physically} small problems, such as the dynamics of
atoms and molecules in optical lattices. For such problems small $D$
are sufficient to get a qualitatively and even quantitatively good
description of the observables in our systems. As we will see below,
the biggest problem is the accumulation of truncation errors and not
always the potential accuracy of a given MPS space for representing
our states.
The outline of the paper is as follows. In Sect.~\ref{sec:mps} we
briefly introduce MPS and review some of their properties. In
Subsect.~\ref{sec:projection} we present the optimal projection onto a
MPS space, which is the keystone of most evolution algorithms. In
Sect.~\ref{sec:dmrg} we introduce for completeness the DMRG algorithm,
focusing on the concepts which are essential for time evolution. In
particular, we concentrate on the difficulties faced when implementing
DMRG simulations and how those techniques relate to MPS. In
Sect.~\ref{sec:time-evolution} we review almost all recently developed
simulation algorithms under the common formalism based on the optimal
truncation operator. Additionally we introduce three new methods: two
of them are based on Taylor and Pad\'e expansions of the evolution
operator while the other one uses an ``Arnoldi'' basis of MPS to increase
the accuracy. Sect.~\ref{sec:comparison} is a detailed comparison of
MPS and DMRG algorithms using spin-$\frac{1}{2}$ models. Our study
shows that all methods are strongly limited by truncation and rounding
errors. However, among all techniques, MPS methods and in particular
our Arnoldi method perform best for fixed resources, measured by the
size of matrices $D$ or size of basis in DMRG. In the last part of our
paper, Sect.~\ref{sec:atoms}, we present a real-world application of
the Arnoldi evolution algorithm, which is to study a model of
hard-core bosonic atoms going through a Feshbach resonance. Current
experiments \cite{thalhammer06,stoferle06,volz06} with such systems
have focused on the number and stability of the formed molecules. In
this work we focus on the 1D many-body states and show that coherence
is transferred from the atomic component to the molecular one, so that
this procedure can be used to probe higher order correlations in the
atomic cloud. Finally, in Sect.~\ref{sec:conclusions} we summarize our
results and open lines of research.
\section{Matrix Product States (MPS)}
\label{sec:mps}
In this first section we introduce the notion of Matrix Product State,
together with some useful properties and an important operator, the
projection onto a MPS space. This section is a brief review of the
concepts found in Refs. \cite{verstraete04a,verstraete04b}.
\subsection{Variational space}
The techniques in this work are designed for the study of
one-dimensional or quasi-one-dimensional quantum mechanical lattice
models. If $N$ is the number of lattice components, the Hilbert space of states will
have a tensor product structure, ${\cal H} = {\cal H}_1^{\otimes N},$
where $d=\mathrm{dim}\,{\cal H}_1$ is the number of degrees of freedom of a
single site. We will consider two examples here: one with spin-$1/2$
particles, where $d=2$ (Sect.~\ref{sec:comparison}), and later on a
study of atoms and molecules in an optical lattice where $d=25$
(Sect.~\ref{sec:atoms}).
Given those prerequisites, the space of MPS of size $D$ is defined as
the set of states of the form
\begin{equation}
\label{SD}
{\cal S}_D := \left\{
\left(\Tr\prod_k A_k^{s_k}\right)|s_1\ldots s_N\rangle,~
A_k^{s_k} \in \mathbb{C}^{D_k\times D_{k+1}}\right\},
\end{equation}
where $s_k=1\ldots d$ labels the physical state of the $k$-th lattice
site. The $A_k$ are complex matrices of dimensions that may change
from site to site but are of bounded size, $D_{k}\times D_{k+1} \leq
D^2.$ Throughout the paper we will use different notation for the
matrices $\{A_k^{s_k}\}.$ An index $k$ or $l$ will always label the
site it belongs to and, whenever the expression is not ambiguous, the
site index will be dropped: $A^{s_k}:= A_k^{s_k}.$ The MPS components
can also be regarded as tensors, $A_{\alpha\beta}^{s_k},$ the Greek
indices denoting virtual dimensions $\alpha,\beta = 1\ldots D.$
Finally, at some point we will rearrange all values forming a vector
$\vec{A}_k^{t}:=(A_{11}^1,A_{11}^2,\ldots, A_{DD}^d)$ in a complex
space $\mathbb{C}^{d\times D\times D}.$
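As an illustration of Eq.~(\ref{SD}), the following Python/NumPy fragment (a deliberately naive sketch in our own notation, feasible only for very small $N$) reconstructs the dense coefficients of a MPS by brute force:
\begin{verbatim}
import numpy as np

def mps_to_dense(A):
    # Coefficients psi[s_1,...,s_N] = Tr prod_k A[k][s_k].
    # A is a list of N arrays, A[k] of shape (d, D_k, D_{k+1});
    # the dense tensor has d**N entries, so keep N small.
    N, d = len(A), A[0].shape[0]
    psi = np.empty([d] * N, dtype=complex)
    for s in np.ndindex(*([d] * N)):
        M = A[0][s[0]]
        for k in range(1, N):
            M = M @ A[k][s[k]]
        psi[s] = np.trace(M)
    return psi
\end{verbatim}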
The first important property of the MPS is that they do not form a
vector space. In general, a linear combination of $M$ states with size
$D$ requires matrices of size $MD$ to be represented. It is easy to give
a constructive proof of the previous fact, but the reader may convince
himself with a simple example, made of the two product states
$|0\rangle^{\otimes N}$ and $|1\rangle^{\otimes N},$ which live in
${\cal S}_1,$ and the GHZ state $|0\rangle^{\otimes N} +
|1\rangle^{\otimes N}\in {\cal S}_2.$
The previous remark leads us to another property, which is that the
dimension $D$ required to represent a state faithfully\footnote{By
this we mean with absolutely no error.} is related to the amount of
entanglement in the system. More precisely, that dimension is equal to
the maximum Schmidt number of the state with respect to any
bipartition, and is thus an entanglement monotone \cite{vidal03}. Indeed,
creating entanglement forces us to use bigger and bigger MPS and this
is the reason why some problems cannot be simulated efficiently using
MPS.
The third important property is that we can efficiently compute scalar
products, distances and in general expectation values of the form
$\langle \psi|O_1 \otimes O_2 \otimes \cdots \otimes O_N|\phi\rangle,$
where $\psi,\phi\in{\cal S}_D$ and $\{O_i\}_{i=1}^N$ are local
operators acting on the individual qubits or components of our tensor
product Hilbert space. For instance, given that we know the matrices
of those states, $\{A^{s_k}_k\}$ for $\psi$ and $\{B^{s_k}_k\}$ for
$\phi,$ the previous expectation value is made of a product of $N$
matrices,
\begin{equation}
\langle \psi|\otimes_{k=1}^N O_k|\phi\rangle =
\Tr \left[\prod_{k=1}^N E(O_k,A_k,B_k)\right],
\label{expected-value}
\end{equation}
where the ``transfer matrices'' are defined as follows
\begin{equation}
E(O_k,A_k,B_k) := \sum_{s_k ,s_k'} \langle s_k'|O_k|s_k\rangle\,
(A_k^{s_k'})^\star \otimes B_k^{s_k}.
\label{transfer}
\end{equation}
Since all usual Hamiltonians and correlators can be decomposed as sums
of products of local terms, the previous formulas are very useful. An
important remark is that when computing Eq.~(\ref{expected-value}) one should
not directly multiply the matrices $E$, but cleverly contract the $A$
and $B$ tensors alternately, so as to achieve a performance of
${\cal O}(dD^3)$ operations per site.
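As an illustration, a NumPy sketch of this contraction (assuming open boundary conditions, $D_1 =D_{N+1}=1$; the function name is ours) could read:
\begin{verbatim}
import numpy as np

def expectation(A, B, ops):
    # <psi| O_1 x ... x O_N |phi> by sequential contraction.
    # A, B: lists of site tensors of shape (d, D_left, D_right) for
    # |psi> and |phi>, with boundary dimensions equal to one.
    # ops: list of d x d single-site operators. Each step costs
    # O(d D^3), instead of the O(D^6) cost of multiplying the full
    # D^2 x D^2 transfer matrices E.
    L = np.ones((1, 1), dtype=complex)
    for Ak, Bk, O in zip(A, B, ops):
        tmp = np.einsum('ab,sbc->sac', L, Bk)          # attach ket tensor
        tmp = np.einsum('ts,sac->tac', O, tmp)         # apply O_k
        L = np.einsum('tab,tac->bc', Ak.conj(), tmp)   # close with bra tensor
    return L[0, 0]
\end{verbatim}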
The last property is that expectation values, distances
$\Vert\psi-\phi\Vert^2,$ fidelities $|\langle\psi|\phi\rangle|$ and
norms $\Vert\psi\Vert^2,$ are quadratic forms with respect to each of
the matrices in the MPS. Regarding the matrices of the state as
elements of a complex vector, we can rewrite
Eq.~(\ref{expected-value}) for the $k$-th site as
\begin{equation}
\label{quadratic-1}
\langle \psi|\otimes_{k=1}^N O_k|\phi\rangle = \vec{A}_k^\dagger Q \vec{B}_k,
\end{equation}
where the quadratic form $Q$ is built as follows
\begin{equation}
\label{quadratic-2}
Q_{[s\alpha\alpha'][s'\beta\beta']} := \langle s|O_k|s'\rangle
\left[\prod_{j=\{k+1\ldots N,1\ldots k-1\}} E(O_{j},A_{j},B_{j})
\right]_{\alpha\beta,\alpha'\beta'},
\end{equation}
where $[s\alpha\alpha']$ and $[s'\beta\beta']$ denote grouping of indices consistent with the
ordering of the elements of $\vec{A}$ and $\vec{B}$. This formula is
used on all algorithms, from the computation of ground states
\cite{verstraete04a} to the time evolution \cite{verstraete04b}. An
important optimization is to avoid computing the full matrix $Q$, but
to use the structure in (\ref{quadratic-2}) together with the sparsity
of the transfer matrices.
\subsection{Projector operator}
\label{sec:projection}
Even if the MPS do not form a vector space, they are embedded in a
bigger Hilbert space and provided with a distance. It is therefore
feasible, given an arbitrary state vector, to ask for the best
approximation in terms of MPS of a fixed size. The optimal projection
onto ${\cal S}_D$ is a highly nonlinear operation and it will be
denoted in this work by ${\cal P}_D.$ Following
Ref.~\cite{verstraete04b}
\begin{equation}
\label{PDk}
{\cal P}_{D}\sum_k c_k |\phi^{(k)}\rangle
:=\underset{\psi\in{\cal S}_{D}}{\argmin}
\left\Vert |\psi\rangle - \sum_k c_k|\phi^{(k)}\rangle\right\Vert^2
\end{equation}
If we rather want to approximate the action of an operator that can be
decomposed as $U=X^{-1} Y,$ we will apply a generalization of the
correction vector method \cite{dmrg}
\begin{equation}
\label{PD}
{\cal P}_{D}\left(X^{-1}Y | \phi \rangle \right)
:= \underset{\psi \in S_{D}}{\argmin}
\Vert X|\psi\rangle - Y|\phi\rangle\Vert.
\end{equation}
This formula is simple to apply and behaves well for a singular
operator $X.$ Its use will become evident when studying
time evolution algorithms in Sect.~\ref{sec:time-evolution}.
One may quickly devise a procedure for computing the
minimum of Eq.~(\ref{PD}) based on the definition of distance:
\begin{eqnarray}
\Vert X|\psi\rangle - Y|\phi\rangle\Vert^2
=\langle\psi|X^\dagger X |\psi\rangle
-2\Re\langle\psi|X^\dagger Y|\phi\rangle
+ \langle\phi|Y^\dagger Y |\phi\rangle.\label{distance}
\end{eqnarray}
All scalar products in Eq.~(\ref{distance}) are quadratic forms with
respect to each of the matrices in the states $|\phi\rangle$ and
$|\psi\rangle.$ The distance is minimized by optimizing these
quadratic forms site by site, or two sites at a time\footnote{The
two-sites alternative, borrowed from DMRG, has the advantage of not
getting trapped in subspaces of conserved quantities that commute
with both $X$ and $Y.$}, sweeping over all lattice sites until
convergence to a small value which will be the truncation error
\cite{verstraete04b}.
In short the algorithm looks as follows: (i) Compute some initial
guess for the matrices of the optimized state $\psi.$ (ii) Focus on
the site $k=1.$ (iii) Use Eq.~(\ref{quadratic-1})-(\ref{quadratic-2})
to find out the quadratic form associated to Eq.~(\ref{distance}) for
the first matrix
\begin{equation}
\epsilon :=
\vec{A}_k^\dagger Q_{X^\dagger X} \vec{A}_k - 2\Re \vec{A}_k^\dagger
Q_{X^\dagger Y} \vec{B}_k + \vec{B}_k^\dagger Q_{Y^\dagger Y}\vec{B}_k.
\label{error}
\end{equation}
(iv) The stationary points of the error are given by the equation
\begin{equation}
Q_{X^\dagger X} \vec{A}_k = Q_{X^\dagger Y}\vec{B}_k.
\label{optimization}
\end{equation}
Solve this equation and use the outcome as the new value of
$A_k.$ (v) Estimate the error using Eq.~(\ref{error}).
If $\epsilon$ is small enough or does not improve significantly,
stop. Otherwise move to another site, $k=k\pm 1,$ and continue with
step (iii).
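In pseudocode the sweep looks as follows (a sketch; the helpers that assemble the quadratic forms of Eqs.~(\ref{quadratic-1})-(\ref{quadratic-2}) are assumed and their names are ours):
\begin{verbatim}
import numpy as np

def sweep(psi, build_QXX, build_QXY, n_sweeps=4, tol=1e-10):
    # Alternating minimization of Eq. (error), steps (i)-(v).
    # psi: list of site tensors (initial guess, step (i)).
    # build_QXX(psi, k), build_QXY(psi, k): assumed helpers returning
    # the matrix Q_{X^dag X} and the vector Q_{X^dag Y} B_k of
    # Eq. (optimization), with the tensors at sites j != k held fixed.
    N, last_err = len(psi), np.inf
    for _ in range(n_sweeps):
        for k in list(range(N)) + list(range(N - 2, -1, -1)):
            QXX = build_QXX(psi, k)
            rhs = build_QXY(psi, k)
            Ak = np.linalg.solve(QXX, rhs)      # step (iv)
            psi[k] = Ak.reshape(psi[k].shape)
        # truncation error of Eq. (error), up to the constant term
        err = np.real(np.vdot(Ak, QXX @ Ak)) - 2 * np.real(np.vdot(Ak, rhs))
        if abs(err - last_err) < tol:           # step (v)
            break
        last_err = err
    return psi
\end{verbatim}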
\section{MPS and Density Matrix Renormalization
Group (DMRG)}
\label{sec:dmrg}
Even though the numerical techniques for dealing with MPS seem very
different from those of DMRG \cite{dmrg}, both methods are intimately
connected. First of all, the DMRG produces MPS at each step of its
algorithms. Second, particular algorithms such as the search for
ground states in open boundary condition problems are equivalent in
DMRG and MPS \cite{verstraete04a}. Third, other concepts, such as
basis adaptation and state transformation from DMRG are analogous to
the MPS ones, even though they are less powerful and less accurate. In
this section we will elaborate on these statements.
\subsection{DMRG builds matrix product states}
\begin{figure}[t]
\includegraphics[width=0.5\linewidth]{pra-fig-dmrg.eps}%
\caption{(a) DMRG view of a state, with the basis for the left
$\ket{\alpha_{k-1}}$ and right block $\ket{\alpha_{k+1}},$ and the
states of the central spins, $\ket{s_k}, \ket{s_{k+1}}.$ (b) On a
renormalization step, one spin is incorporated to the left block
and a new basis is built, $\ket{\alpha_{k}} := A_{k-1,k}^{s_k}
\ket{\alpha_{k-1}}\ket{s_k}.$}
\label{fig-dmrg}
\end{figure}
The DMRG algorithms are based on the key idea that interesting states
can be expressed using a basis with a small number of vectors. Take
for instance the chain in Fig.~\ref{fig-dmrg}a. Any state of this
chain can be decomposed in the form
\begin{equation}
\label{dmrg-state}
\ket{\psi} = \psi(\alpha_{k-1}s_ks_{k+1}\beta_{k+1})
\ket{\alpha_{k-1}}\ket{s_k}\ket{s_{k+1}}\ket{\beta_{k+1}},
\end{equation}
where we sum over repeated indices. While for describing individual
spins we use all possible states, $s_{k},s_{k+1}=1\ldots d,$ in DMRG the
states of the left and right blocks are expressed in a finite basis,
$\alpha_k,\beta_k = 1\ldots M,$ where $M,$ the number of states kept,
is the basic control parameter. DMRG algorithms build those basis
states recursively, by taking a smaller block, adding a site and
truncating the basis of the bigger block ``optimally'', in a way to be
made precise later. Thus we have the relations \numparts
\begin{eqnarray}
\ket{\alpha_k} := A_{\alpha_{k-1}\alpha_{k}}^{s_k}
\ket{\alpha_{k-1}}\ket{s_{k}},\label{basis-left}\\
\ket{\beta_k} := A_{\beta_k\beta_{k+1}}^{s_{k+1}}
\ket{s_{k+1}}\ket{\beta_{k+1}}.\label{basis-right}
\end{eqnarray}
\endnumparts
It is trivial to see, substituting those equations into
Eq.~(\ref{dmrg-state}), that all DMRG states are matrix product states
\cite{ostlund95,verstraete04a}.
\subsection{Targetting}
\label{sec:targetting}
An important question in DMRG is how many states we have to keep for
the left and right blocks and how to optimize them. The criterion is
that the basis describing those blocks has to represent accurately a
family of target states, $\ket{\phi_n}.$ The algorithm consists of a
series of sweeps over the lattice
\cite{feiguin04,manmana05,manmana06,schollwoeck06} with a recipe to
achieve an optimal representation. For instance, let us say that we
have an approximate basis around sites $k$ and $k+1$ and we will
improve it by sweeping from left to right, moving to sites $k+1$ and $k+2$
[Fig.~\ref{fig-dmrg}]. The first step is to build the weighted density
matrix of the left piece of the chain
\begin{eqnarray}
\fl \rho_L := \sum_n w_n \phi_n(\alpha_{k-1}s_ks_{k+1}\beta_{k+1})
\phi_n(\alpha_{k-1}'s_k's_{k+1}\beta_{k+1})^\star
\ket{\alpha_{k-1}s_k}\bra{\alpha_{k-1}'s_k'}\nonumber\\
=\sum_n w_n \Tr_{s_{k+1}\beta_{k+1}}
\ket{\phi_n}\bra{\phi_n}, \label{rho}
\end{eqnarray}
with some normalized weights, $\sum_n w_n=1.$ Second, take the $M$
most significant eigenvectors of this matrix
\begin{equation}
\rho_L\ket{\alpha_k} = \lambda_k\ket{\alpha_k},\quad
\lambda_k \geq \lambda_l\quad \forall k<l;\;k,l=1\ldots M\times d.
\end{equation}
These vectors become the improved new basis for the enlarged left
block and are related by a transformation matrix to the basis elements
of the smaller block (\ref{basis-left}). Matrix
elements of observables and of the initial and target states have to
be recomputed using this isometry and one continues until the end of
the lattice. A similar procedure is employed to create the vectors
$\ket{\beta_k}$ recursively from right-to-left. Multiple sweeps can be
performed this way.
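A minimal NumPy sketch of one such renormalization step (our own illustration; the tensor layout is an assumption) is:
\begin{verbatim}
import numpy as np

def renormalize_left(targets, weights, M):
    # One step of Eqs. (rho)-(basis-left). Each target is a tensor
    # phi[alpha, s, sp, beta] in the current superblock basis. Returns
    # the isometry whose M columns are the dominant eigenvectors of
    # the weighted left density matrix.
    Da, d = targets[0].shape[0], targets[0].shape[1]
    rho = np.zeros((Da * d, Da * d), dtype=complex)
    for w, phi in zip(weights, targets):
        m = phi.reshape(Da * d, -1)   # group (alpha, s) against (sp, beta)
        rho += w * (m @ m.conj().T)   # trace out the right part
    vals, vecs = np.linalg.eigh(rho)  # eigenvalues in ascending order
    return vecs[:, ::-1][:, :M]       # keep the M most significant states
\end{verbatim}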
For static problems one uses as target states, $\ket{\phi_n},$ the
ground state and a number of excitations, computed with the
Hamiltonian in the truncated basis. In some time evolution algorithms
\cite{feiguin04,schmitteckert04,manmana05,manmana06,schollwoeck06} the
targetting is done with respect to approximations of the time evolved
state $\ket{\psi(t)}.$ In this work we have used $\ket{\phi_k} :=
\ket{\psi(k\Delta t/(N_v-1))},$ where the intermediate and final
states, $\ket{\psi(\tau)},$ are computed using the Hamiltonian on the
truncated basis [See Sect.~\ref{sec:rk}].
\subsection{Targetting vs. projection}
\label{sec:mps-vs-dmrg}
There are a number of complicated subtleties and facts that are rarely
mentioned in the DMRG literature. The first one is that in most
implementations of the targetting algorithm, the state itself is
updated and rewritten in the new basis after each step
\begin{equation}
\ket{\psi} \to
\psi(\alpha_{k-1}s_ks_{k+1}\beta_{k+1})
A^{s_k\star}_{\alpha_{k-1}\alpha_k}
\ket{\alpha_k}\ket{s_{k+1}}\ket{\beta_{k+1}}.
\end{equation}
However, this leads to wrong answers because the initial state
$\psi(0)$ deteriorates during the algorithm, and it only works when
the bases of the left and right blocks are already large enough. A more
accurate procedure consists of keeping the initial state in the
original basis and, on each step of the algorithm, computing its
matrix elements in the new basis. As an example, in
Fig.~\ref{fig-targetting} we plot the error of an evolution algorithm
with the trivial algorithm and with the correct one, for an initial
product state $\ket{1}^{\otimes N}$ in the $\sigma^z_i$ basis and
using a simple Hamiltonian $H=\sum_i \sigma^x_i.$ In both cases we
begin with a basis of $M=1$ vectors per block, letting the basis
grow up to $2^{L/2}.$ However only the second method is capable of
doing the update.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{pra-fig-dmrg-errors.eps}%
\caption{Error in the DMRG Runge-Kutta algorithm [See
Sect.~\ref{sec:rk}], using as initial state
$\ket{\psi}=\ket{0}^{\otimes N}$ and Hamiltonians
$H=\sum_i\sigma^x_i$ (solid, dashed) or $H=\prod_i \sigma^x_i$
(dotted). The algorithm used either a single basis (dashed) or two
set of basis states (solid, dotted) for targetting. The dashed line
shows errors due to using a single basis, the dotted line fails
because the Hamiltonian does not have nonzero elements on the
initial DMRG basis.}
\label{fig-targetting}
\end{figure}
An even more important issue is that in the current literature this
method is formulated in an intuitive way and little is known about why
it works and how accurate it is. In this respect, a close look at the
matrices (\ref{basis-left})-(\ref{basis-right}) created by the
targetting with multiple vectors, $\ket{\phi_t},$ shows that their
procedure is quite similar to the projection operator for some linear
combination ${\cal P}_M (\sum_k c_k \ket{\phi_k}),$ but the steps for
computing the optimal approximation are wrong if there are more than
one target state.
Another consequence of being tied to the notion of ``optimal basis''
instead of treating MPS as a variational family of wavefunctions, is
that the targetting procedure does not work when the operators $O_n$
and the target states $\ket{\phi_n}$ have no elements in the initial
basis. A trivial example consists of the same product state
$\ket{\psi(0)} = \ket{1}^{\otimes N}$ as before, beginning with $M=1$
states per block and evolved with $H=\prod_i \sigma^x_i.$ As shown in
Fig.~\ref{fig-targetting}, a conventional DMRG sweep cannot enlarge
the basis in the appropriate way and the method fails to follow the
evolution of the state. An MPS algorithm, on the other hand, even if it
begins with a state with $D=1$, grows the state up to the maximal
size $D=2$ and makes absolutely no error in the simulation.
The final practical difference is that in current DMRG works
\cite{feiguin04,schmitteckert04,manmana05,manmana06} the powers of the
Hamiltonian acting on the original state, $H^n\ket{\phi},$ are
approximated by the truncation of $H$ to the current basis. This is
another potential source of errors which we cleverly avoid in our
Arnoldi and Runge-Kutta methods shown below [Sects.~\ref{sec:rkmps} and
\ref{sec:arnoldi}], and its effects are evident in some of the
simulations [Sect.~\ref{sec:comparison}].
The previous paragraphs can be rephrased as follows: DMRG algorithms
need not only an initial input state, but also a suitable and large
enough basis for doing the time evolution. How to construct this basis
is largely heuristic, and this contrasts with the systematic way in which
MPS work.
\section{Time evolution}
\label{sec:time-evolution}
Since ${\cal S}_D$ is not a vector space, we cannot solve a
Schr\"odinger equation directly on it. Our goal is rather to
approximate the evolution at short times by a formula like
\begin{equation}
|\psi(t+\Delta t)\rangle \simeq {\cal P}_D \left[U_n(\Delta t)
|\psi(t)\rangle\right].
\label{evolution}
\end{equation}
Here, ${\cal P}_D$ is the optimal projection operator defined before
and $U_n(\Delta t) = \exp(-i H \Delta t) + {\cal O}(\Delta t^n)$ is
itself an approximation of $n$-th order to the evolution operator.
Even though this formulation applies qualitatively to all recently
developed MPS and DMRG algorithms, there are subtle differences that
make some methods more accurate than others. The actual implementation
of time evolution is thus the topic of the following subsection.
\subsection{Trotter decomposition}
While there had been previous works on simulating time evolution using
DMRG algorithms \cite{cazalilla02,cazalilla03,luo03}, an important
breakthrough happened with the techniques in Ref.~\cite{vidal04},
which was later on translated to the DMRG language
\cite{white93,daley04,gobert05} and have since been applied to a
number of interesting problems
\cite{winkler06,kollath05,micheli04,clark04}. Vidal's seminal paper
suggested doing the time evolution with a mixture of two-qubit quantum
gates and truncation operations. The idea is to split a Hamiltonian
with nearest-neighbor interactions into terms acting on even and odd
lattice edges
\begin{equation}
H = \sum_{k=1}^{L/2} H_{2k} + \sum_{k=1}^{L/2} H_{2k-1}
=: \Heven + \Hodd.
\end{equation}
This leads to a Suzuki-Trotter decomposition of the time evolution
operator into a sequence of unitaries acting on even and odd lattice
bonds:
\begin{equation}
e^{-iH\Delta t} \simeq
e^{-i \Heven \Delta t}
e^{-i \Hodd \Delta t}= \prod_{k=1}^{L/2} e^{-iH_{2k}\Delta t}
\prod_{k=1}^{L/2} e^{-iH_{2k-1}\Delta t}.
\label{trotter-2}
\end{equation}
Inserting optimal truncations in between the applications of these
unitaries, $U_j := \exp(-iH_j\Delta t),$ Vidal's algorithm can be
recast in the form of Eq.~(\ref{evolution})
\begin{equation}
|\psi(t+\Delta t)\rangle :=
\prod_{k=1}^{L/2} {\cal P}_D U_{2k}
\prod_{k=1}^{L/2} {\cal P}_D U_{2k-1}
|\psi(t)\rangle.
\end{equation}
Applying each of the unitaries $U_{j}$ is a relatively inexpensive task,
and in particular, for open boundary condition problems, the
combination ${\cal P}_D U_{j}$ can be done in a couple of steps which
amount to contracting neighboring matrices and performing a singular
value decomposition \cite{vidal04,white93,daley04}.
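For open boundary conditions, a minimal sketch of the combination
${\cal P}_D U_{j}$ reads as follows; the tensor layout of the MPS
matrices is an assumption made for illustration.
\begin{verbatim}
import numpy as np

# Apply a two-site gate U (d^2 x d^2 matrix) to neighboring tensors
# A (Dl,d,D) and B (D,d,Dr), then truncate the bond to Dmax by SVD.
def apply_gate_and_truncate(A, B, U, Dmax):
    Dl, d, D = A.shape
    Dr = B.shape[2]
    theta = np.einsum('lsa,atr->lstr', A, B)           # contract bond
    theta = np.einsum('uvst,lstr->luvr',
                      U.reshape(d, d, d, d), theta)    # apply the gate
    u, s, vh = np.linalg.svd(theta.reshape(Dl * d, d * Dr),
                             full_matrices=False)
    k = min(Dmax, len(s))                              # truncation
    A_new = u[:, :k].reshape(Dl, d, k)
    B_new = (np.diag(s[:k]) @ vh[:k]).reshape(k, d, Dr)
    return A_new, B_new
\end{verbatim}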
In Ref.~\cite{verstraete04b} we developed a variant of this method
which does not insert so many truncation operators, but waits until
the end
\begin{equation}
|\psi(t+\Delta t)\rangle :=
{\cal P}_D \prod_{k=1}^{L/2} U_{2k}
\prod_{k=1}^{L/2} U_{2k-1}
|\psi(t)\rangle.
\end{equation}
This procedure has a similar computational cost, ${\cal O}(ND^3),$ but
the solution is then expected to be optimal for a given Trotter
decomposition and can be generalized to problems with periodic
boundary conditions.
The accuracy of either method may be increased by an order of
magnitude using a different second order decomposition
\begin{equation}
e^{-iH\Delta t} =
e^{-i \Heven \Delta t/2}
e^{-i \Hodd \Delta t}
e^{-i \Heven \Delta t/2}
+ {\cal O}(\Delta t^3),
\label{trotter-2b}
\end{equation}
but it is better to apply a Forest-Ruth formula \cite{white93,omelyan02}
\begin{eqnarray}
\fl e^{-iH\Delta t} = e^{-i\Heven \theta \Delta t/2}
e^{-i\Hodd \theta \Delta t} e^{-i\Heven (1- \theta) \Delta t/2}
e^{-i\Hodd (1- 2\theta) \Delta t}\times\nonumber\\
\times
e^{-i\Heven (1- \theta) \Delta t/2}
e^{-i\Hodd \theta \Delta t}
e^{-i\Heven \theta \Delta t/2},
\label{forest-ruth}
\end{eqnarray}
with the constant $\theta=1/(2-2^{1/3})$, which has a Trotter error
of order ${\cal O}(\Delta t^5).$ Note that better Suzuki-Trotter
formulas can be designed but they do not provide a big improvement in
the number of exponentials or accuracy \cite{omelyan02}.
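For reference, the sequence of exponentials in Eq.~(\ref{forest-ruth})
can be tabulated as in the following transcription, with 'E' and 'O'
standing for $\Heven$ and $\Hodd$; it is only a restatement of the
formula above.
\begin{verbatim}
def forest_ruth_sequence():
    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    return [('E', theta / 2), ('O', theta), ('E', (1 - theta) / 2),
            ('O', 1 - 2 * theta), ('E', (1 - theta) / 2),
            ('O', theta), ('E', theta / 2)]

# sanity check: the coefficients of each part add up to one
seq = forest_ruth_sequence()
assert abs(sum(c for p, c in seq if p == 'E') - 1) < 1e-12
assert abs(sum(c for p, c in seq if p == 'O') - 1) < 1e-12
\end{verbatim}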
\subsection{Runge-Kutta and Lanczos with DMRG}
\label{sec:rk}
After the development of the Trotter methods, a new algorithm in the
field of DMRG appeared in Ref.~\cite{feiguin04}. The idea now is to use
not a Trotter formula, but a Runge-Kutta iteration:
\numparts
\begin{eqnarray}
\ket{k_1}&:=&\Delta t\, H(t) \ket{\psi(0)},\\
\ket{k_2}&:=&\Delta t\, H(t) [\ket{\psi(0)}+\case{1}{2}\ket{k_1}],\\
\ket{k_3}&:=&\Delta t\, H(t) [\ket{\psi(0)}+\case{1}{2}\ket{k_2}],\\
\ket{k_4}&:=&\Delta t\, H(t) [\ket{\psi(0)}+\ket{k_3}].
\end{eqnarray}
\endnumparts
These vectors are then used to interpolate the state at other
times\footnote{Notice the typo in Eq.~(4) in Ref.~\cite{feiguin04}.}
\numparts
\begin{eqnarray}
\ket{\psi(\case{\Delta t}{3})} &\simeq&
\ket{\psi(0)} + \case{1}{162}[31\ket{k_1}+14(\ket{k_2}+\ket{k_3})
-5\ket{k_4}],\\
\ket{\psi(2\case{\Delta t}{3})} &\simeq&
\ket{\psi(0)} + \case{1}{81}[16\ket{k_1}+20(\ket{k_2}+\ket{k_3})
-2\ket{k_4}],\\
\ket{\psi(\Delta t)} &\simeq&
\ket{\psi(0)} + \case{1}{6}[\ket{k_1}+2(\ket{k_2}+\ket{k_3})
+\ket{k_4}].
\label{rk1}
\end{eqnarray}
\endnumparts
These three vectors, together with $\ket{\psi(0)},$ are used to find
an optimal basis using the targetting procedure explained in
Sect.~\ref{sec:targetting}. Once the basis is fixed, the time
evolution is performed with the same formulas but smaller time step,
$\Delta t/10.$ There are variants of this technique which approximate
the time evolved state using a Lanczos method
\cite{noack05,hochbruck96} with the truncated matrix of the
Hamiltonian in the DMRG basis. The different submethods differ in
whether the preparation of the basis is done only using the final and
initial state \cite{manmana06} or multiple intermediate time steps
\cite{schmitteckert04,manmana05}.
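For concreteness, a dense-matrix transcription of the formulas above
reads as follows; here 'H' is a plain matrix (in an actual DMRG run,
the Hamiltonian truncated to the current basis), and any $-i$ factor
from the Schr\"odinger equation is assumed to be absorbed into it.
\begin{verbatim}
import numpy as np

def rk_vectors(H, psi0, dt):
    k1 = dt * (H @ psi0)
    k2 = dt * (H @ (psi0 + k1 / 2))
    k3 = dt * (H @ (psi0 + k2 / 2))
    k4 = dt * (H @ (psi0 + k3))
    psi_13 = psi0 + (31*k1 + 14*(k2 + k3) - 5*k4) / 162
    psi_23 = psi0 + (16*k1 + 20*(k2 + k3) - 2*k4) / 81
    psi_1  = psi0 + (k1 + 2*(k2 + k3) + k4) / 6
    return psi_13, psi_23, psi_1   # states at dt/3, 2dt/3 and dt
\end{verbatim}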
\subsection{Runge-Kutta like method with MPS}
\label{sec:rkmps}
Our implementation of a 4-th order method with Runge-Kutta like
formulas uses several simplifications. First of all, since our
Hamiltonian is constant, a Runge-Kutta expansion becomes equivalent to
a fourth-order Taylor expansion of the exponential
\begin{eqnarray}
\ket{\psi(\Delta t)} &=& \sum_{n=0}^{4} \frac{1}{n!} (-iH\Delta t)^n
\ket{\psi(0)} + {\cal O}(\Delta t^5)\nonumber\\
&\simeq& (-iH\Delta t - z_1)(-iH\Delta t - z_1^\star)
(-iH\Delta t - z_2)(-iH\Delta t - z_2^\star) \ket{\psi(0)}\nonumber\\
&=:& Y_1 Y_2 Y_3 Y_4 \ket{\psi(0)}.
\label{rk2}
\end{eqnarray}
Here we have rewritten the fourth order polynomial in terms of its
roots, $z_1$ and $z_2.$ Using the fact that we know an efficient
algorithm to compute ${\cal P}_D Y_i$ acting on a MPS, we can write
\begin{equation}
\ket{\psi(\Delta t)} := \prod_{i=1}^4 {\cal P}_D Y_i \ket{\psi(0)},
\end{equation}
which is our MPS Runge-Kutta-like algorithm. There are multiple
reasons to proceed this way. On the one hand, we do not want to
approximate higher powers of the Hamiltonian using a truncated basis
[See Sect.~\ref{sec:targetting}]. On the other hand, if we
expand the Hamiltonian to all powers, there will be too many operators
and the complexity of the state will increase enormously. The previous
decomposition has proven to be a good compromise.
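A minimal sketch of this decomposition is given below, with a generic
'apply_H' and an assumed callback 'project' standing for ${\cal P}_D$;
note the overall $1/4!$ factor that relates the monic product of root
factors to the Taylor polynomial.
\begin{verbatim}
import numpy as np

def taylor4_step(apply_H, project, psi, dt):
    # roots of 1 + x + x^2/2 + x^3/6 + x^4/24 (two conjugate pairs)
    roots = np.roots([1/24, 1/6, 1/2, 1, 1])
    for z in roots:
        psi = project(-1j * dt * apply_H(psi) - z * psi)
    return psi / 24    # restore the leading coefficient 1/4!
\end{verbatim}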
\subsection{Pad\'e approximations}
\label{sec:pade}
Runge-Kutta and in general polynomial approximations to the
exponential are not unitary. Indeed, if we look at the eigenvalues of
the evolution operator in either Eq.~(\ref{rk1}) or Eq.~(\ref{rk2}),
we will see that they are of the form $\lambda_n = \sum_{k=0}^4
(-iE_n\Delta t)^k/k!,$ where $E_n$ are the eigenvalues of the
Hamiltonian $H.$ From this equation we see that $|\lambda_n| \neq 1$
and some eigenmodes may grow exponentially, which is another way to
say that Runge-Kutta algorithms are numerically unstable.
There exist multiple implicit methods that eliminate the lack of
unitarity and produce stable approximations. They receive the name
``implicit'' because the value of the state at a later time step
$\Delta t$ is obtained by solving an equation or inverting an
operator. We will focus on Pad\'e approximations to the exponential
\begin{equation}
\label{pade}
U_n(\Delta t) = \frac{\sum_k \alpha_k H^k}{\sum_k \beta_k H^k},
\end{equation}
which are computed with polynomials of the same order in the numerator
and denominator \cite{moler03}. The lowest order method is known as the
Crank-Nicolson scheme; it arises from a second order discretization
of the Schr\"odinger equation and has the well known form
\begin{equation}
\UCN(\Delta t) = \frac{1 - i H\Delta t/2} {1 + iH\Delta t /2}.
\label{cn2}
\end{equation}
It is easy to verify that the eigenvalues of this operator are just
phases and that the total energy is a conserved quantity. The other
method that we have used and which we compare in this work is a fourth
order expansion
\begin{equation}
\UPADE(\Delta t) =
\frac{1 - i\Delta t H/2 - (\Delta t H)^2/12}{1 + i\Delta t H/2 -
(\Delta tH)^2/12}.
\label{cn4}
\end{equation}
Applying either $\UCN$ or $\UPADE$ on a matrix product state is
equivalent to solving the problem in Eq.~(\ref{PD}), where the
operators $X$ and $Y$ are the denominator and the numerator in the
previous quotients.
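In a dense-matrix setting, applying $\UCN$ amounts to one linear solve
per time step, as in the sketch below; with MPS the same division is
performed variationally through the minimization of Eq.~(\ref{PD}).
\begin{verbatim}
import numpy as np

def crank_nicolson_step(H, psi, dt):
    I = np.eye(H.shape[0])
    Y = I - 0.5j * dt * H    # numerator of U_CN
    X = I + 0.5j * dt * H    # denominator of U_CN
    return np.linalg.solve(X, Y @ psi)
\end{verbatim}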
\subsection{Arnoldi method}
\label{sec:arnoldi}
The last and most important method that we present in this paper
combines many of the ideas explained before. First of all, we will use
the fact that a linear combination of MPS, such as $\sum_{k=1}^{N_v}
c_k|\phi_D^{(k)}\rangle,$ resides in a space of bigger matrix product
states, ${\cal S}_{N_vD},$ and it is thus a more accurate
representation of the evolved state than a single vector of size $D.$
In the language of DMRG, $N_v$ vectors each of size $D$ provide us
with an effective basis of size $N_vD.$ This optimistic estimate is
only possible when the vectors are indeed linearly independent. Our
choice for an optimal decomposition will therefore be a Gram-Schmidt
orthogonalization of the Krylov subspace,
$\{\psi,H\psi,H^2\psi\ldots\},$ performed using MPS
\begin{equation}
|\phi_{k+1}\rangle \simeq {\cal P}_D\left(H|\phi_{k}\rangle -
\sum_{j\leq k} \frac{\langle\phi_j|H|\phi_k\rangle}{\langle\phi_j|\phi_j
\rangle}
|\phi_j\rangle\right),
\label{phikp1}
\end{equation}
with initial condition $|\phi_0\rangle := |\psi(0)\rangle.$ Defining
the matrices $N_{ik} := \langle \phi_i |\phi_k\rangle$ and $H_{ik} :=
\langle \phi_i | H |\phi_k\rangle$ we compute an Arnoldi estimate of
the exponential
\begin{equation}
\label{Arnoldi}
|\psi(\Delta t)\rangle :=
{\cal P}_D \sum_k [e^{-i\Delta tN^{-1}H}]_{k0} |\phi_k\rangle.
\end{equation}
This algorithm involves several types of errors, all of which can be
controlled. First, the error due to using only $N_v$ basis vectors is
proportional to the norm of the vector $\phi_{N_v},$ as in ordinary
Lanczos algorithms \cite{noack05,hochbruck96}. Truncation errors
arising from ${\cal P}_D$ can also be computed during the numerical
simulation. Out of these errors, in our experience, the final
truncation in Eq.~(\ref{Arnoldi}) is the most critical one, since the
other ones may be compensated by adding more and more vectors.
Finally, for completeness, in this work we have also implemented a
Lanczos method. It differs from the previous one in that the basis is
built orthogonalizing only with respect to the two previous vectors,
so that Eq.~(\ref{phikp1}) contains only three summands, as in
ordinary Lanczos iterations. One then assumes that, due to the
Hermiticity of the Hamiltonian, orthogonality to the rest of the
Krylov basis is preserved. Furthermore, if this is true, the effective
matrices for $H$ and $N$ are tridiagonal and can be constructed with a
simple recurrence. This method has a potential gain of
${\cal O}(1/N_v)$ in speed due to the simplifications in
Eq.~(\ref{phikp1}). However, as we will see later, truncation errors
spoil the orthogonality of the Lanczos vectors and make the method
useless for small matrices.
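A schematic dense version of Eqs.~(\ref{phikp1}) and (\ref{Arnoldi}) is
sketched below; with MPS, every vector would be a matrix product state
and 'project' would be the truncation ${\cal P}_D$ (taken here as the
identity). All names are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def arnoldi_step(H, psi0, dt, Nv, project=lambda v: v):
    phis = [psi0]
    for k in range(Nv - 1):
        Hphi = H @ phis[k]
        v = Hphi.copy()
        for phi in phis:   # orthogonalize against previous vectors
            v = v - (np.vdot(phi, Hphi) / np.vdot(phi, phi)) * phi
        phis.append(project(v))
    N  = np.array([[np.vdot(pj, pi) for pi in phis] for pj in phis])
    He = np.array([[np.vdot(pj, H @ pi) for pi in phis] for pj in phis])
    c = expm(-1j * dt * np.linalg.solve(N, He))[:, 0]
    return project(sum(ck * pk for ck, pk in zip(c, phis)))
\end{verbatim}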
\section{Comparison}
\label{sec:comparison}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{pra-fig-1-trotter.eps}%
\includegraphics[width=0.5\linewidth]{pra-fig-1-mps.eps}
\includegraphics[width=0.5\linewidth]{pra-fig-1-dmrg.eps}%
\includegraphics[width=0.38\linewidth]{pra-fig-1-legend.eps}
\caption{Error, $\varepsilon,$ vs. time step, $\Delta t,$ for
simulations of model (\ref{Hamiltonian}) with 8 spins,
$\theta=0.35,$ $\Delta=0,$ $D=16$ and $T=10.$ We compare (a) Trotter
methods, (b) MPS algorithms and (c) DMRG algorithms. As a reference,
all plots contain the error of the MPS Arnoldi and Lanczos methods
(solid black line).}
\label{fig-exact}
\end{figure}
We tested all algorithms by simulating the evolution of the same state
under a family of spin-$\case{1}{2}$ Hamiltonians with
nearest-neighbor interactions
\begin{equation}
\label{Hamiltonian}
H = \sum_k \left[ \cos(\theta) (s^x_k s^x_{k+1} + s^y_k
s^y_{k+1} + \Delta s^z_k s^z_{k+1})+ \sin(\theta) s^z_k\right].
\end{equation}
As initial state we take the product $|\psi(0)\rangle \propto
(|0\rangle + |1\rangle)^{\otimes L},$ where $|0\rangle$ and
$|1\rangle$ are the eigenstates of $s^z.$ By restricting ourselves to
``small'' problems ($L\leq 20$), we can compare all algorithms with
accurate solutions based on exact diagonalizations and the Lanczos
algorithm \cite{noack05,hochbruck96}. Notice that we measure the error
in the full wavefunction, $\varepsilon := \Vert \psi_D(T) -
U(T)\psi(0)\Vert^2$ and not on the expectation values of simple
correlators whose exact evolution is known \cite{manmana06}. The
outcome of some of the simulations is in Figs.~\ref{fig-exact},
\ref{fig-truncated} and \ref{fig-truncated-20}, which we will discuss
in the following paragraphs.
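For completeness, a small exact-diagonalization reference for the model
of Eq.~(\ref{Hamiltonian}) can be built as in the sketch below; the
open-chain range of the nearest-neighbor sum is our assumption.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(o, k, L):
    # embed the single-site operator o at site k of an L-site chain
    m = np.eye(1)
    for i in range(L):
        m = np.kron(m, o if i == k else np.eye(2))
    return m

def hamiltonian(L, theta, delta):
    H = np.zeros((2**L, 2**L), dtype=complex)
    for k in range(L - 1):    # nearest-neighbor couplings
        H += np.cos(theta) * (site_op(sx,k,L) @ site_op(sx,k+1,L)
                              + site_op(sy,k,L) @ site_op(sy,k+1,L)
                              + delta * site_op(sz,k,L) @ site_op(sz,k+1,L))
    for k in range(L):        # field term
        H += np.sin(theta) * site_op(sz,k,L)
    return H
\end{verbatim}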
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{pra-fig-3-0.eps}%
\includegraphics[width=0.5\linewidth]{pra-fig-3-1.eps}
\includegraphics[width=0.5\linewidth]{pra-fig-3-2.eps}%
\includegraphics[width=0.5\linewidth]{pra-fig-3-3.eps}
\caption{Wavefunction error vs. time step, $\Delta t,$ for simulations
of model (\ref{Hamiltonian}) with 16 spins, $\theta=0.35,$
$\Delta=0,$ and $T=10.$ In Fig.~(a)-(d) we plot the outcome for
different MPS sizes or DMRG basis, denoted by $D$. The association
between methods and line types is that of Fig.~\ref{fig-exact}.}
\label{fig-truncated}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{pra-fig-3b-0.eps}%
\includegraphics[width=0.5\linewidth]{pra-fig-3b-1.eps}
\includegraphics[width=0.5\linewidth]{pra-fig-3b-2.eps}%
\includegraphics[width=0.5\linewidth]{pra-fig-3b-3.eps}
\caption{Same as in Fig.~\ref{fig-truncated} but for $N=20$ spins.}
\label{fig-truncated-20}
\end{figure}
The first set of simulations was done for 8 spins using matrices of
size $D=16$ and a DMRG basis of similar size. Since ${\cal S}_{D}$ contains
all possible states, ${\cal P}_{D} = 1$, we expect no truncation
errors in any of the algorithms. As shown in Fig.~\ref{fig-exact} for
the XY model with $\theta=0.35$ and $\Delta=0,$ for medium to long
time steps most errors show the expected behavior. Thus, the errors of
the Trotter methods of second and fourth order follow the laws ${\cal
O}(\Delta t^2)$ and ${\cal O}(\Delta t^4)$. Runge-Kutta, Taylor and
Pad\'e approximations have an error of ${\cal O}(\Delta t^2)$, and for
the Arnoldi and Lanczos methods with $N_v=4$ and 8 vectors we have a
qualitative behavior ${\cal O}(\Delta t^{N_v-1}).$ Since the size of
the matrices and of the DMRG basis is very large, all these laws are
followed, irrespective of whether the implementation uses DMRG or
MPS. The only exception seems to be the DMRG Lanczos when implemented
with only two target states. This method behaves more poorly than the
counterpart with $N_v$ target states given by
$\ket{\phi_k}:=\ket{\psi(k\Delta t/(N_v-1))},\,k=0,\ldots,N_v-1$.
Out of the methods that work as expected, all of them break the ideal
laws at some point, acquiring an error of order ${\cal O}(\Delta
t^{-2}).$ This error is exponential in the number of steps $T/\Delta
t$ and it signals the finite accuracy of the optimization algorithms,
due to the limited precision of the computer. Roughly, since current
computers cannot compute the norms of vectors, $\Vert \psi\Vert^2,$
with a relative error better than $10^{-16},$ a worst case estimate is
$\varepsilon = (T/\Delta t)^2\, 10^{-16}$, which perfectly fits these
lines.
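For instance, with the illustrative values $T=10$ and $\Delta t=10^{-2}$
this estimate gives
\[
\varepsilon \approx \left(\frac{10}{10^{-2}}\right)^2 10^{-16}
= 10^{6}\cdot 10^{-16} = 10^{-10},
\]
which sets the scale at which the ideal error laws are expected to
break down.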
Theoretically, the performance measured in computation time and memory
use of all algorithms is of order ${\cal O}(N_v N_H D^3/N),$ where
$N_H$ is related to the number of operators in $\{X, Y, X^\dagger X,
Y^\dagger Y,\ldots\},$ $N$ is the size of the problems and the
additional factor $N_v$ only appears for Arnoldi and Lanczos
methods. For periodic boundary conditions the cost increases by ${\cal
O}(D^2).$ However, we have explicitly avoided PBC problems because
DMRG cannot handle them as accurately. In practice, out of all
methods, the Pad\'e expansions are the slowest ones, while the Trotter
formulas, being local, are the fastest. In between we find the Arnoldi
and Lanczos methods, which are nevertheless competitive if we consider
their accuracy and the fact that they allow for longer time steps.
In the remaining simulations we dropped the methods from
Sect.~\ref{sec:rkmps} and \ref{sec:pade}, because they have a similar
computational cost and worse performance than the MPS Arnoldi
method. We keep, on the other hand, all Trotter and DMRG methods and
continue our study with bigger problems in which the MPS spaces and
the DMRG basis are smaller than the limit required to represent all
states accurately, $D=d^{N/2}$ and $M=d^{N/2-1}$. More precisely, we
choose $N=16$ and $20$ spins and try with $M,D=32,64,80$ and $128.$
Now that the methods are potentially inexact, our previous error laws
are modified by the introduction of truncation errors. The first thing
we notice in Fig.~\ref{fig-truncated}a-c is that the second order
Trotter method is rather stable, its error being the same for MPS and
DMRG. However, when we go to higher orders and increase the number of
exponentials, truncation errors affect more strongly the Vidal and
DMRG implementations than the MPS one. This shows the difference
between making local truncations after each bond unitary (DMRG and
Vidal) versus delaying truncations until the end and using the optimal
projection \cite{verstraete04b}.
Another important conclusion is that all DMRG methods are very
sensitive to truncation errors. As shown in
Figs.~\ref{fig-truncated}a-c, there is not a very big difference in
accuracy between the DMRG Runge-Kutta and the Lanczos implementation,
and neither method is more accurate than the DMRG Forest-Ruth
formula. The main reason why the higher order methods from
Sect.~\ref{sec:rk} do not improve the results is that they estimate the
evolution with the matrix of the Hamiltonian on the truncated DMRG
basis. Another reason is that, under truncation, the Lanczos recurrence
no longer produces an orthogonal set of vectors.
To prove those statements we compared with a Lanczos method
implemented with MPS as explained in Sect.~\ref{sec:arnoldi}. This
method does indeed have a better accuracy than the DMRG ones, which
can be attributed to the use of the full Hamiltonian. The errors of
the Lanczos are however larger than those of the Arnoldi method for a
similar $D$ and we have checked that under truncation the vectors of
the Lanczos basis are not truly orthogonal. These small errors
accumulate for shorter time steps, and only the Arnoldi method can
correct them.
As for the Arnoldi method, it has the greatest accuracy and seems to
be stable as $D$ becomes small. For very small matrices the error
remains constant as we decrease the time step, but this is only
because there is a lower bound in the approximation error given by
$\Vert{\cal P}_D\ket{\psi(T)} - \ket{\psi(T)}\Vert^2$.
Figure~\ref{fig-truncation}a shows how the errors in the Arnoldi
method are correlated to the errors made when approximating the exact
solution with a MPS of fixed size. This plot illustrates the fact that
all errors in this method are due to the final truncation.
Summing up, one should use the method that allows for the longest time
steps and the fewest truncations (or applications of ${\cal
P}_D$) and mathematical operations. All methods have an optimal time
step which is a compromise between the errors in $U_n$ and the
rounding and truncation errors made on each step. Regarding
performance and accuracy, the two winning methods are MPS algorithms
using either the fourth order Forest-Ruth decomposition or the Arnoldi
basis. The latter method, however, has two advantages. One is that it can
deal with nonlocal interactions, and the second one is its potential
for parallelizability, roughly ${\cal O}(N_vN_H/L),$ which all other
presented algorithms lack. This can make it competitive with, for
instance, increasing the size of the matrices in the Forest-Ruth
method. Regarding DMRG methods, we find that they give comparable
results only for large bases. When truncation errors set in, their
behavior is less predictable and it does not seem worthwhile to use
more elaborate algorithms (Lanczos, Runge-Kutta) instead of an ordinary
Trotter formula.
\begin{figure}[t]
\includegraphics[width=\linewidth]{figure-2.eps}%
\caption{Simulations of $L=16$ spins with $\theta=0.35$,
$\Delta=0$. (a) Minimal truncation error $\varepsilon=\Vert (1-{\cal
P}_{64})\exp(-iHt)|\psi(0)\rangle\Vert^2$ of the time evolved state
and accumulated error made when simulating this Schr\"odinger
equation using the Arnoldi method with 8 vectors, $D=64$ and $\Delta
t=0.16$ (dashed). (b) Similar as before, errors in the Arnoldi
method for varying number of vectors, $T=10$ and $\Delta t= 0.04,
0.16, 0.32$ (bottom to top).}
\label{fig-truncation}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.5\linewidth]{fig-scan-1.eps}%
\includegraphics[width=0.5\linewidth]{fig-scan-2.eps}
\caption{Errors for various other spin $s=1/2$ models as parametrized
in Eq.~(\ref{Hamiltonian}). We have used a second order Trotter
method (circles) and an Arnoldi method (solid), with 16 spins, $T=5$
and $D=32$.}
\label{fig-scan}
\end{figure}
The fact that previous results are model-independent has been
confirmed by a systematic scanning of all possible Hamiltonians in
Eq.~(\ref{Hamiltonian}). A selection is shown in
Fig.~\ref{fig-scan}. The Arnoldi method is shown to be accurate,
even for gapless problems. When the Arnoldi method fails it is due to
truncation errors. In those situations the evolved state cannot be
accurately represented by MPS (and, for that matter, not by DMRG either),
simply because $\Vert(1-{\cal P}_D)\ket{\psi(t)}\Vert^2$ is finite and
large [Fig.~\ref{fig-exact}d]. Increasing the number of Arnoldi
vectors will also not help (See Fig.~\ref{fig-truncation}b) for the
same reason.
\section{Simulation of Feshbach resonances}
\label{sec:atoms}
As a real-world application we have used the MPS Trotter and Arnoldi
methods to simulate the conversion of bosonic atoms into molecules,
when confined in an optical lattice and moving through a Feshbach
resonance \cite{thalhammer06,stoferle06,volz06}. The goal is to study
how correlation properties are transferred from the atoms to the
molecules and how this dynamics is affected by atom motion and
conversion efficiency.
The effective model combines the soft-core Bose-Hubbard model used to
describe the Tonks gas experiments \cite{paredes04}, with a coupling to
a molecular state \cite{dickerscheid05}
\begin{eqnarray}
\fl
H = -J \sum_{\langle i,j\rangle,\,\sigma} a^\dagger_{i\sigma} a_{j\sigma}
+ \sum_{i,\sigma,\sigma'}\frac{U_{\sigma,\sigma'}}{2}
a^{\dagger}_{i\sigma} a^{\dagger}_{i\sigma'} a_{i\sigma'} a_{i\sigma}
\label{Hubbard}
\\
+ \sum_i \left\{ (E_m + U_m n^{(a)}_i) n^{(m)}_i +
\Omega [b^\dagger_i a_{i\uparrow} a_{i\downarrow} +
H.~c.]\right\}.
\nonumber
\end{eqnarray}
Here, $a_{i\uparrow},$ $a_{i\downarrow}$ and $b_i$ are bosonic
operators for atoms in two internal states and for the molecule; $n^{(a)}_i$
and $n^{(m)}_i$ are the total numbers of atoms and of molecules on each
site, and we have the usual two-level coupling with Rabi frequency
$\Omega$ and detuning $\Delta := E_m - U_m.$ For simplicity, we will
assume that atoms and molecules interact strongly among themselves
($U_{\uparrow,\uparrow},U_{\downarrow,\downarrow},U_{m} \to \infty$),
so that we can treat them as hard-core,
$a_{i,\sigma}^2,b_i^2,a_{i\sigma}b_i=0.$ Also since molecules are
heavier, we have neglected their tunneling amplitude, although that
could be easily included.
\begin{figure}[t]
\includegraphics[width=\linewidth]{figure-3.eps}%
\caption{Dynamics of a site with two atoms when the energy of the
molecules is ramped linearly: $E_m = U_{\uparrow\downarrow} +4
\Omega (1 - 2t/T).$ We plot (a) the instantaneous energy levels
(solid), and (b) the fraction of atoms converted into molecules.}
\label{fig-molec-L2}
\end{figure}
As the energy of the molecular state is shifted from $E_m \gg U_m$
down to $E_m \ll U_m,$ the ground state of Eq.~(\ref{Hubbard}) changes
from a pair of coupled Tonks gases, to a purely molecular
insulator. We want to study the dynamics of this crossover as $E_m$ is
ramped slowly from one phase to the other.
The simplest situation corresponds to no hopping: isolated atoms
experience no dynamics, while sites with two atoms may produce a
molecule. The molecular and atomic correlations at the end of the
process are directly related to two-body correlations in the initial
state \cite{altman05,barankov05},
\begin{eqnarray}
\langle m_k^\dagger m_k\rangle_{t=T} &\sim& \langle n_{k\uparrow}
n_{k\downarrow}\rangle_{t=0},\label{corr}\\
\langle a_k^\dagger a_k\rangle_{t=T} &\sim&
\langle a_k^\dagger a_k\rangle_{t=0} - \langle n_{k\uparrow}
n_{k\downarrow}\rangle_{t=0}.
\end{eqnarray}
This process can thus be used as a tool to probe quantum
correlations between atoms. Studying the two-level system
$\{a^\dagger_{k\uparrow}a^{\dagger}_{k\downarrow}|0\rangle,
b_k^\dagger|0\rangle\},$ we conclude that for this process to work
with a $90\%$ efficiency, the ramping time should be larger than
$T\sim 1.5/\Omega$ [See Fig.~\ref{fig-molec-L2}(b)].
\begin{figure}[t]
\includegraphics[width=\linewidth]{figure-4.eps}%
\caption{(a) Fraction of atoms converted into molecules vs.
dimensionless hopping amplitude, for
$U_{\uparrow,\downarrow}=0$ and $U_{\uparrow\downarrow}=1,$ from
top to bottom. Circles show the outcome of the numerical
experiment, while solid lines contain the ideal fraction
(\ref{corr}). The case $J=0$ uses an initial condition
$J=0.1\Omega$ and then switches off tunneling before ramping. (b)
Correlations of the molecular state, $\langle m_i^\dagger
m_j\rangle_{t=T}$ (circles) after the ramp, and those of the
initial atomic state, $\langle
a^\dagger_{i\downarrow}a^\dagger_{i\uparrow}
a_{j\uparrow}a_{j\downarrow}\rangle_{t=0}$
(solid line). The plot for $U_{\uparrow\downarrow}=5J$ has been
shifted up by $0.2.$}
\end{figure}
We have simulated numerically the ramping of small lattices, $L=10$ to
$32$ sites, with an initial number of atoms $N_{\uparrow,\downarrow} =
L/2, 3L/4.$ The value of the molecular coupling has been fixed to
$\Omega = 1$ and the interaction has been ramped according to $E_m =
U_{\uparrow\downarrow} + 4 \Omega (1 - 2 t/T)$ using the ideal ramp
time $T=1.5/\Omega.$ We have used two particular values of the
inter-species interaction, $U_{\uparrow,\downarrow}/\Omega = 0,2,$ and
scanned different values of the hopping $J/\Omega\in [0,0.4].$ The
initial condition was always the ground state of the model with these
values of $U_{\uparrow\downarrow}/J$ and no coupling. These states
contain the correlations that we want to measure.
The main conclusion is that indeed the correlations of the molecules
are almost those of the initial state of the atoms (\ref{corr}), even
for $J=0.4\Omega$ when the process has not been adiabatic. An
intuitive explanation is that hopping is strongly suppressed as we
approach the resonance, due to the mixing between atomic states, which
can hop, and molecular states, which are slower. We can say that the
molecules thus pin the atoms and \textit{measure} them. This
explanation is supported by a perturbation analysis at $J\ll \Omega,$
where one finds that a small molecular contamination slows the atoms
on the lattice. This analysis breaks down, however, for $J \sim
\Omega,$ the regime in which the numerical simulations are required.
\section{Conclusions}
\label{sec:conclusions}
We have performed a rather exhaustive comparison of different methods
for simulating the evolution of large one-dimensional quantum systems
\cite{vidal04,verstraete04b,white93,daley04,gobert05,feiguin04,schmitteckert04,manmana05,manmana06}
with three other methods developed in this work. We find the MPS
methods to be optimal both in accuracy and performance among the
formulas of similar order. All procedures are substantially affected
by truncation and rounding errors, and to fight the latter we must
choose large integration time-steps. However, the only algorithm which
succeeds for very large time-steps is an Arnoldi method developed in
this work. Finally, this algorithm can be applied to problems with
long range interactions.
Using this algorithm, we have simulated the dynamics of cold atoms in
a 1D optical lattice when crossing a Feshbach resonance. The main
conclusion is that with rather fast ramp times it is possible to map
the correlations of the atomic cloud (two Tonks gases in this case)
and use this as a measuring tool in current experiments. This result
connects with similar theoretical predictions for fermions in
Ref.~\cite{altman05,barankov05}. Simple generalizations of this work
will allow us in the future to analyze losses and creation of strongly
correlated states with the help of the molecular component.
As a possible outlook, we envision the development of new
algorithms in which the state is approximated by a linear combination
of MPS at all times. This should be more efficient than increasing the
size of the matrices, and could support distributed computations in a
cluster.
\section*{References}
\section*{Introduction}
Partial actions of groups have been introduced in the theory of
operator algebras as a general approach to study $C^{*}$-algebras by
partial isometries (see, in particular, \cite{E1} and \cite{E3}),
and crossed products classically, as well-pointed out in
\cite{DES1}, are the center of the rich interplay between dynamical
systems and operator algebras (see, for instance, \cite{M1} and
\cite{Q1}). The general notion of (continuous) twisted partial
action of a locally compact group on a $C^{*}$-algebra and the
corresponding crossed product were introduced in \cite{E1}.
Algebraic counterparts for some notions mentioned above were
introduced and studied in \cite{DE}, stimulating further
investigations, see for instance, \cite{LF}, \cite{CCF}, \cite{laz e mig}
and references therein. In particular, twisted partial actions of
groups on abstract rings and corresponding crossed products were
recently introduced in \cite{DES1}.
In \cite{wag e fer}, partial skew polynomial rings and partial skew Laurent polynomial rings were introduced, and the authors studied their prime and maximal ideals. In \cite{CFMH}, the Goldie property was investigated in partial skew polynomial rings and partial skew Laurent polynomial rings. In \cite{Gobbi}, the concept of partial skew power series rings was introduced and the authors studied when they are Bezout and distributive.
The authors in \cite{Letzter1} and \cite{Letzter2} studied the Goldie rank and prime ideals in skew power series rings and skew Laurent series rings under the assumption of noetherianity on the base ring. In this article, we consider twisted partial actions of $\mathbb{Z}$ and we introduce the twisted partial skew power series rings and twisted partial skew Laurent series rings $R [[x;\alpha,w]]$ and $R\langle x;\alpha,w\rangle$, respectively, where $\alpha$ is a twisted partial action of $\mathbb{Z}$ on an unital ring $R$. We study the Goldie property, prime ideals, primality and semiprimality in these rings, which generalizes the results presented in \cite{Letzter1} and \cite{Letzter2}.
This article is organized as follows:
In Section 1, we give some preliminaries and results that will be used throughout this paper.
In Section 2, we study the primality and semiprimality of twisted partial skew power series rings and twisted partial skew Laurent series rings. We describe the prime radical of twisted partial skew Laurent series rings and we study the prime ideals of these rings.
In Section 3, we study the Goldie rank of twisted partial skew power series rings and twisted partial skew Laurent series rings and, as a consequence, the Goldie property of these rings. Moreover, we study when the twisted partial skew power series ring is semiprime and we give a description of the prime radical of twisted partial skew power series rings when the unital twisted partial action of $\mathbb{Z}$ has an enveloping action.
\section{ Preliminaries}
In this section, we recall some notions about twisted partial actions on rings; more details can be found in \cite{DE}, \cite{DES1} and \cite{DES2}. We also introduce the twisted partial skew power series rings and twisted partial skew Laurent series rings.
From now on, $R$ will always be an unital ring, unless otherwise stated.
We begin with the following definition, which is a particular case of (\cite{DES2}, Definition 2.1).
\begin{defin} \label{def1}
An unital \textit{twisted partial action} \index{twisted partial action} of the additive abelian group
$\mathbb{Z}$ on a ring $R$ is a triple
\begin{center}
\vskip-1mm $\alpha = \big(\{D_i\}_{i \in \mathbb{Z}}, \{\alpha_i\}_{i \in \mathbb{Z}}, \{w_{i,j}\}_{(i,j) \in \mathbb{Z}\times \mathbb{Z}}\big)$,
\end{center}
\vskip-1mm where for each $i \in \mathbb{Z}$, $D_i$ is a two-sided ideal in
$R$ generated by a central idempotent $1_i$, $\alpha_i:D_{-i} \rightarrow D_i$ is an isomorphism of
rings and for each $(i,j) \in \mathbb{Z} \times \mathbb{Z}$, $w_{i,j}$ is an
invertible element of $D_i D_{i+j}$, satisfying the
following postulates, for all $i,j,k \in \mathbb{Z}$:
\begin{itemize}
\item [$(i)$] \vskip-2.2mm $D_{0} = R$ and $\alpha_{0}$ is the identity map of $R$;
\item [$(ii)$] \vskip-2.2mm $\alpha_i(D_{-i} D_j)= D_i D_{i+j}$;
\item [$(iii)$] \vskip-2.2mm $\alpha _i \circ \alpha _j (a)= w_{i,j} \alpha_{i+j}(a) w_{i,j}^{-1}$, for all $a \in D_{-j} D_{-j-i}$;
\item [$(iv)$] \vskip-2.2mm $w_{i,0}=w_{0,i}=1$;
\item [$(v)$] \vskip-2.2mm $\alpha_i(a w _{j,k}) w _{i,j+k}= \alpha_i(a)w_{i,j} w _{i+j,k}$, for all $a \in D_{-i} D_{j}D_{j+k}$.
\end{itemize}
\end{defin}
\begin{obs}
If $w_{i,j} = 1_i1_{i+j}$, for all $i,j\in \mathbb{Z}$, then we
have a partial action which is a particular case of
(\cite{DE}, Definition 1.1) and when $D_i = R$, for all $i\in \mathbb{Z}$,
we have that $\alpha$ is a twisted global action.
\end{obs}
Let $\beta = \big(T, \{\beta_i\}_{i \in \mathbb{Z}}, \{u_{i, j}\}_{(i, j)
\in \mathbb{Z} \times \mathbb{Z}}\big)$ be a twisted global action of a group $\mathbb{Z}$
on a (non-necessarily unital) ring $T$ and $R$ an ideal of $T$
generated by a central idempotent $1_R$. We can restrict $\beta$
to $R$ as follows: putting \linebreak $D_i = R\cap \beta_{i}(R) = R
\beta_{i}(R)$, $i\in \mathbb{Z}$, each $D_{i}$ has an identity element
$1_R\beta_{i}(1_R)$. Then defining
$\alpha_{i}=\beta_{i}|_{D_{-i}}$, $\forall i\in \mathbb{Z}$, the items ($i$),
($ii$) and ($iii$) of \linebreak Definition \ref{def1} are satisfied.
Furthermore, defining $w_{i, j} = u_{i,
j}1_R\beta_i(1_R)\beta_{i+j}(1_R)$, $\forall \,\, i,j\in \mathbb{Z}$, the items ($iv$)
and ($v$) of Definition \ref{def1} are also satisfied. So, we obtain a twisted partial
action of $\mathbb{Z}$ on $R$.
The following definition appears in (\cite{DES2}, Definition 2.2).
\begin{defin}\label{def2}
A twisted global action $\big(T, \{\beta_i\}_{i\in \mathbb{Z}},
\{u_{i,j}\}_{(i,j)\in \mathbb{Z}\times \mathbb{Z}}\big)$ of a group $\mathbb{Z}$ on an
associative (non-necessarily unital) ring $T$ is said to be an
enveloping action \index{enveloping action} (or a globalization)
of an unital twisted partial action $\alpha$ of $\mathbb{Z}$ on a ring $R$ if
there exists a monomorphism $\varphi:R\rightarrow T$ such that,
for all $i$ and $j$ in $\mathbb{Z}$:
\begin{itemize}
\item [$(i)$] \vskip-2.2mm $\varphi(R)$ is an ideal of $T$;
\item [$(ii)$] \vskip-2.2mm $T = \displaystyle\sum_{i\in \mathbb{Z}}\beta_i(\varphi(R))$;
\item [$(iii)$] \vskip-2.2mm $\varphi(D_i) = \varphi(R)\cap \beta_i(\varphi(R))$;
\item [$(iv)$] \vskip-2.2mm $\varphi\circ \alpha_i(a) = \beta_i\circ \varphi(a)$, for all $a\in D_{-i}$;
\item [$(v)$] \vskip-2.2mm $\varphi(aw_{i,j}) = \varphi(a)u_{i,j}$ and $\varphi(w_{i,j}a) = u_{i,j}\varphi(a)$, for all $a\in D_iD_{i+j}$.
\end{itemize}
\end{defin}
In (\cite{DES2}, Theorem 4.1), the authors studied necessary and
sufficient conditions for an unital twisted partial action $\alpha$ of the
group $\mathbb{Z}$ on a ring $R$ to have an enveloping action. Moreover, they
studied which rings satisfy such conditions.
Suppose that $(R, \alpha, w)$ has an enveloping action
$(T,\beta,u)$. In this case, we may assume that $R$ is an ideal of
$T$ and we can rewrite the conditions of the Definition \ref{def2}
as follows:
\begin{itemize}
\item [$(i')$] \vskip-3mm $R$ is an ideal of $T$;
\item [$(ii')$] \vskip-3mm $T = \displaystyle\sum_{i\in \mathbb{Z}}\beta_i(R)$;
\item [$(iii')$] \vskip-3mm $D_i = R\cap \beta_i(R)$, for all $i\in \mathbb{Z}$;
\item [$(iv')$] \vskip-3mm $\alpha_i(a) = \beta_i(a)$, for all $a\in D_{-i}$ and $i\in \mathbb{Z}$;
\item [$(v')$] \vskip-3mm $aw_{i,j} = au_{i,j}$ and $w_{i,j}a = u_{i,j}a$, for all $a\in D_iD_{i+j}$ and $i, j \in \mathbb{Z}$.
\end{itemize}
Given an unital twisted partial action $\alpha$ of $\mathbb{Z}$ on a ring
$R$, we define the twisted partial skew Laurent series ring
$R\langle x;\alpha,w\rangle$ whose elements are the series
\begin{center}
\vskip-1mm $\displaystyle\sum_{j\geq s} a_jx^j$, with $s\in \mathbb{Z}$ and $a_j\in D_j$,
\end{center}
\vskip-1mm
with the usual addition and multiplication defined by
\vskip-5mm
$$(a_ix^i)(b_jx^j) = \alpha_i(\alpha^{-1}_i(a_i)b_j)w_{i,j}x^{i+j}.$$
\vskip2mm
Using techniques similar to those of (\cite{DES1}, Theorem 2.4), $R\langle x;\alpha,w\rangle$ is an associative
ring whose identity is $1_Rx^{0}$. Note that we have the
injective morphism $\phi: R\rightarrow R\langle x;\alpha,w\rangle$, defined
by $r\mapsto rx^0$, and we can consider $R\langle x;\alpha,w\rangle$ as
an extension of $R$. Moreover, we consider the twisted partial skew power series ring as a subring of $R\langle x;\alpha,w\rangle$, which we denote by $R [[x;\alpha,w]]$, whose elements are the series $\displaystyle\sum_{i\geq 0} b_ix^i$ with the sum and multiplication rules defined as before.
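As a simple illustration of the multiplication rule, take $i=1$ and $j=-1$: for $a_1\in D_1$ and $b_{-1}\in D_{-1}$ we get
$$(a_1x)(b_{-1}x^{-1}) = \alpha_1\big(\alpha_1^{-1}(a_1)b_{-1}\big)w_{1,-1}x^{0},$$
and the coefficient indeed lies in $R$, since $\alpha_1^{-1}(a_1)b_{-1}\in D_{-1}$, so that $\alpha_1\big(\alpha_1^{-1}(a_1)b_{-1}\big)\in D_1$, and $w_{1,-1}\in D_1D_0=D_1$.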
Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on a ring
$R$. An ideal $S$ of $R$ is said to be an $\alpha$-ideal (an $\alpha$-invariant ideal) if
$\alpha_i(S\cap D_{-i}) \subseteq S\cap D_i$, for all $i\geq
0$ ($\alpha_i(S\cap D_{-i}) = S\cap D_i$, for all $i\in
\mathbb{Z}$).
If $S$ is an $\alpha$-ideal ($\alpha$-invariant ideal), then
\begin{center}
$S [[x;\alpha,w]]=\left\{\displaystyle\sum_{i\geq 0}a_ix^i\, |\, a_i\in S\cap D_i\right\}$ ($S\langle x;\alpha,w\rangle=\left\{\displaystyle\sum_{i\geq m}a_ix^i\, |\, a_i\in S\cap D_i,\, m\in\mathbb{Z}\right\}$)
\end{center}
is an ideal of $R [[x;\alpha,w]]$ ($R\langle x;\alpha,w\rangle$). Note that, if $I$ is a right ideal of $R$, then $I [[x;\alpha,w]]=\{\displaystyle\sum_{i\geq 0} a_ix^i: a_i\in I\cap D_i\}$ and $I\langle x;\alpha,w\rangle=\{\displaystyle\sum_{i\geq m}b_ix^i:b_i\in I\cap D_i\}$ are right ideals of $R [[x;\alpha,w]]$ and $R\langle x;\alpha,w\rangle$, respectively.
Note that for each $\alpha$-invariant ideal $I$ of $R$, the unital twisted partial action $\alpha$ can be extended to an unital twisted partial action $\overline{\alpha}$ of $\mathbb{Z}$ on $R/I$ as follows: for each $i\in \mathbb{Z}$, we define $\overline{\alpha}_i:D_{-i}+I\longrightarrow D_i+I$, putting $\overline{\alpha}_i(a+I)=\alpha_{i}(a)+I$, for all $a\in D_{-i}$, and for each $(i,j)\in \mathbb{Z}\times \mathbb{Z}$, we extend each $w_{i,j}$ to $R/I$ by $\overline{w}_{i,j}=w_{i,j}+I$.
Moreover, when $(R,\alpha,w)$ has enveloping action $(T,\beta,u)$, then by similar methods presented in Section 2 of \cite{laz e mig}, we have that $(T/I^{e}, \overline{\beta},\overline{u})$ is the enveloping action of $(R/I,\overline{\alpha},\overline{w})$, where $I^e$ is the $\beta$-invariant ideal such that $I^{e}\cap R=I$.
We finish this section with some comments about twisted partial actions of finite type that will be necessary in this paper.
The following definition is a particular case of (\cite{lmsw}, Definition 4.13).
\begin{defin} Let $\alpha$ be an unital twisted partial action. We say that $\alpha$ is of finite type if there exists a finite subset $\{s_1,s_2, \cdots ,s_n\}$ of $\mathbb{Z}$ such that \begin{center} $\displaystyle\sum_{i=1}^n D_{j+s_i} =R$,\end{center} for all $j\in \mathbb{Z}$. \end{defin}
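For instance, every unital twisted global action of $\mathbb{Z}$ is of finite type: taking the singleton $\{s_1\}=\{0\}$, we have $D_{j+s_1}=D_j=R$ for all $j\in \mathbb{Z}$, since in the global case all the ideals $D_i$ coincide with $R$.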
It is convenient to point out that, in the same way as in (\cite{laz e mig}, Proposition 1.2), as proved in \cite{lmsw}, an unital twisted partial action $\alpha$ of $\mathbb{Z}$ on an unital ring $R$ with an enveloping action $(T,\beta,u)$ is of finite type if, and only if, there exist $s_1,\cdots, s_n\in \mathbb{Z}$ such that $T=\displaystyle\sum_{i=1}^n \beta_{s_{i}} (R)$, and this is equivalent to saying that $T$ has an identity element.
\section{Primality and semiprimality}
In this section, $\alpha$ will denote an unital twisted partial action of $\mathbb{Z}$ on an unital ring $R$, unless otherwise stated.
We begin this section with the following proposition, whose proof is standard; we include it here for the sake of completeness.
\begin{prop}\label{quociente} If $I$ is an $\alpha$-invariant ideal of $R$, then $\frac{R [[x;\alpha,w]]}{I [[x;\alpha,w]]}\simeq (\frac{R}{I}) [[x;\overline{\alpha},\overline{w}]]$. Moreover, the same result holds to $R\langle x;\alpha,w\rangle$.
\end{prop}
\begin{dem}
We define $\varphi:\frac{R [[x;\alpha,w]]}{I [[x;\alpha,w]]}\rightarrow (\frac{R}{I}) [[x;\overline{\alpha},\overline{w}]]$ by $\varphi(\displaystyle\sum_{i\geq 0}a_ix^i+I[[x;\alpha,w]])=\displaystyle\sum_{i\geq 0} (a_i+ I)x^i$. We easily have that $\varphi$ is an isomorphism. So, $\frac{R [[x;\alpha,w]]}{I [[x;\alpha,w]]}\simeq (\frac{R}{I}) [[x;\overline{\alpha},\overline{w}]]$. The proof for $R\langle x;\alpha,w\rangle$ is analogous.
\end{dem}
The following definition first appeared in \cite{wag e fer} for ordinary partial actions.
\begin{defin} Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$ and $I$ an ideal of $R$.
(i) $I$ is $\alpha$-prime if $I$ is an $\alpha$-invariant ideal and, for all $\alpha$-invariant ideals $J$ and $K$ of $R$, $JK\subseteq I$ implies that either $J\subseteq I$ or $K\subseteq I$.
(ii) $I$ is strongly $\alpha$-prime if $I$ is $\alpha$-invariant and, for each ideal $M$ of $R$ and each $\alpha$-ideal $N$ of $R$, $MN\subseteq I$ implies that either $M\subseteq I$ or $N\subseteq I$.
\end{defin}
Let $a\in R$. Then we define the $\alpha$-invariant ideal generated by $a$ as $J=\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(a1_{-i})R$.
In the next result, we study necessary and sufficient conditions for $\alpha$-primality and strongly $\alpha$-primality.
\begin{lema}\label {primo2} (1) Let $P$ be an $\alpha$-invariant ideal of $R$. The following conditions are equivalent:
(a) $P$ is $\alpha$-prime;
(b) for each $a, b\in R$, if $\alpha_j(a1_{-j})R\alpha_i(b1_{-i})\subseteq P$ for all $i, j\in \mathbb{Z}$, then either $a\in P$ or $b\in P$;
(c) $R/P$ is $\overline{\alpha}$-prime, where $\overline{\alpha}$ is the extension of the twisted partial action $\alpha$ to $R/P$.
(2) Let $P$ be an $\alpha$-invariant ideal of $R$. The following conditions are equivalent:
(a) $P$ is strongly $\alpha$-prime;
(b) for each $a, b\in R$, if $aR\alpha_j(b1_{-j})\subseteq P$ for all $j\geq 0$, then either $a\in P$ or $b\in P$;
(c) $R/P$ is strongly $\overline{\alpha}$-prime, where $\overline{\alpha}$ is the extension of the twisted partial action $\alpha$ to $R/P$. \end{lema}
\begin{dem} (1) $(a)\Rightarrow (b)$
Let $a, b\in R$ such that $\alpha_j(a1_{-j})R\alpha_i(b1_{-i})\subseteq P$, for all $i, j\in \mathbb{Z}$. Then, if we fix $j$, we have that $$\alpha_j(a1_{-j})\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(b1_{-i})R\subseteq P$$ and consequently, we get $$\displaystyle\sum_{j\in \mathbb{Z}}R\alpha_j(a1_{-j})R\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(b1_{-i})R\subseteq P.$$ Since the ideals $\displaystyle\sum_{j\in \mathbb{Z}}R\alpha_j(a1_{-j})R$ and $\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(b1_{-i})R$ are $\alpha$-invariant, by assumption we have that either $\displaystyle\sum_{j\in \mathbb{Z}} R\alpha_j(a1_{-j})R\subseteq P$ or $\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(b1_{-i})R\subseteq P$. So, either $a\in P$ or $b\in P$.
$(b)\Rightarrow (a)$
Let $I,J$ be $\alpha$-invariant ideals of $R$ such that $IJ\subseteq P$, take $a\in I$ and suppose that there exists $b\in J\setminus P$. Then, $(\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(a1_{-i})R)(\displaystyle\sum_{j\in \mathbb{Z}}R\alpha_j(b1_{-j})R)\subseteq P$. Thus, by assumption, we have that either \begin{center} $\displaystyle\sum_{i\in \mathbb{Z}}R\alpha_i(a1_{-i})R\subseteq P$ or $\displaystyle\sum_{j\in \mathbb{Z}}R\alpha_j(b1_{-j})R\subseteq P$. \end{center} Hence, $a\in P$, because $b\notin P$. So, $I\subseteq P$.
$(a)\Rightarrow (c)$
Let $a, b\in R$ such that \begin{center} $\overline{\alpha}_j((a+P)(1_{-j}+P))(R/P)\overline{\alpha}_i((b+P)(1_{-i}+P))=\overline{0}$, \end{center} for all $i, j\in \mathbb{Z}$. Then, $\alpha_j(a1_{-j})R\alpha_i(b1_{-i})\subseteq P$, for all $i, j\in \mathbb{Z}$. Thus, by assumption, we have that either $a\in P$ or $b\in P$. So, either $a+P=\overline{0}$ or $b+P=\overline{0}$.
$(c)\Rightarrow (a)$
Let $I$ and $J$ be $\alpha$-invariant ideals of $R$ such that $IJ\subseteq P$. Thus, $\overline{I}\overline{J}=\overline{0}$ in $R/P$. Hence, by assumption, we have that either $\overline{I}=\overline{0}$ or $\overline{J}=\overline{0}$. So, either $I\subseteq P$ or $J\subseteq P$.
The proof of item (ii) is analogous.
\end{dem}
It is convenient to point out that $R$ is $\alpha$-prime (strongly $\alpha$-prime) if the zero ideal is $\alpha$-prime (strongly $\alpha$-prime). Next, we have an easy consequence of Lemma \ref{primo2}.
\begin{lema}\label{lemaprimo} (i) Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$. Then $R$ is $\alpha$-prime if, and only if, for each $a, b\in R$ such that $\alpha_j(a1_{-j})R\alpha_i(b1_{-i})=0$ for all $i,j\in \mathbb{Z}$, we have that either $a=0$ or $b=0$.
(ii) Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$. Then $R$ is strongly $\alpha$-prime if, and only if, for each $a, b\in R$ such that $aR\alpha_i(b1_{-i})=0$ for all $i\geq 0$, we have that either $a=0$ or $b=0$.
\end{lema}
It is convenient to point out that if $L$ is a nonzero right ideal of $R\langle x;\alpha,w\rangle$, then $L\cap R [[x;\alpha,w]]$ is a nonzero right ideal of $R [[x;\alpha,w]]$, because for each nonzero element $f\in L$ there exists $s\geq 0$ such that $0\neq f1_sx^s \in L\cap R [[x;\alpha,w]]$. Moreover, if a right ideal $M$ of $R\langle x;\alpha,w\rangle$ is such that $M\cap R [[x;\alpha,w]]=0$, then we have that $M=0$. We use these facts without further mention.
In the next result, we study conditions for the primality of $R [[x;\alpha,w]]$ and $R\langle x;\alpha,w\rangle$ which partially generalizes (\cite{Letzter2}, Propositions 2.5 and 2.7).
\begin{prop}\label{anelprimo}The following statements hold.
\begin{description}
\item[(a)] $R$ is $\alpha$-prime if and only if $R\langle x;\alpha,w\rangle$ is prime.
\item[(b)] $R [[x;\alpha,w]]$ is prime if and only if $R$ is strongly $\alpha$-prime. In particular, if $R [[x;\alpha,w]]$ is prime, then $R$ is $\alpha$-prime.
\item[(c)] If $R [[x;\alpha,w]]$ is prime, then $R\langle x;\alpha,w\rangle$ is prime
\end{description}
\end{prop}
\begin{dem}
\begin{description}
\item[(a)] Suppose that $R\langle x;\alpha,w\rangle$ is prime and let $I$ and $J$ be $\alpha$-invariant ideals of $R$ such that $IJ=0$. Then $$I\langle x; \alpha,w\rangle J\langle x;\alpha,w\rangle \subseteq (IJ)\langle x;\alpha,w\rangle =0.$$ By the fact that $R\langle x;\alpha,w\rangle$ is prime, we have that either $I\langle x;\alpha,w\rangle=0$ or $J\langle x;\alpha,w\rangle=0$. Hence, either $I=0$ or $J=0$. So, $R$ is $\alpha$-prime.
Conversely, let $f, g \in R\langle x;\alpha,w\rangle$ be nonzero elements, suppose that $fR\langle x;\alpha,w \rangle g=0$ and consider $m$ and $n$ the smallest integers such that $f_m\neq 0$ and $g_n\neq 0$ where $f=\displaystyle\sum_{i\geq m}f_ix^i$ and $g=\displaystyle\sum_{i\geq n}g_ix^i$. Note that, for each $i\in \mathbb{Z}$, $fD_ix^ig\subseteq f R\langle x;\alpha,w\rangle g=0$ and we have that $f_mD_i\alpha_i(1_{-i}g_n)=0$, for all $i\in \mathbb{Z}$. Hence, for each $j\in \mathbb{Z}$, we have that $\alpha_{j}(f_m1_{-j})R\alpha_i(g_n1_{-i})=0$,
for all $i\in \mathbb{Z}$.
Consequently, by Lemma \ref{lemaprimo}, we have that either $f_m=0$ or $g_n=0$, which is a contradiction. So, $R\langle x;\alpha,w\rangle$ is prime.
\item[(b)] The proof is similar of the item (a).
\item[(c)] Let $I$ and $J$ be ideals of $R\langle x;\alpha,w\rangle$ such that $IJ=0$. Thus, $$ (I\cap R [[x;\alpha,w]])(J\cap R [[x;\alpha,w]])=0.$$
Since $I\cap R [[x;\alpha,w]]$ and $J\cap R [[x;\alpha,w]]$ are ideals of $R [[x;\alpha,w]]$, then we have that either $I\cap R [[x;\alpha,w]]=0$ or $J\cap R [[x;\alpha,w]]=0$. Hence either $I=0$ or $J=0$. So, $R\langle x;\alpha,w\rangle$ is prime.
\end{description}
\end{dem}
\begin{obs} The authors in (\cite{Letzter2}, Propositions 2.5 and 2.7) used noetherianity to obtain the equivalences mentioned there. For us to obtain the same equivalences in the case of twisted partial actions, we would need to know whether the following question has a positive answer, which until now we do not.
\begin{center} Are all $\alpha$-ideals of $R$ $\alpha$-invariant ideals when $R$ is Noetherian?\end{center}
So, if this question has a positive answer we would have that $R$ is $\alpha$-prime $\Leftrightarrow$ $R [[x;\alpha,w]]$ is prime $\Leftrightarrow$ $R\langle x;\alpha,w\rangle$ is prime.
\end{obs}
The following result is a direct consequence of the last proposition.
\begin{cor} If $R$ is a prime ring, then $R [[x;\alpha,w]]$ is a prime ring.\end{cor}
The proof of the following result is similar to that of Proposition \ref{anelprimo} and it partially generalizes (\cite{Letzter2}, Corollary 2.12).
\begin{cor} \label{primeideals} The following statements hold.
(a) Suppose that $I$ is $\alpha$-invariant ideal of $R$. Then $I$ is $\alpha$-prime if and only if $I\langle x;\alpha,w\rangle$ is prime.
(b) Suppose that $I$ is an $\alpha$-invariant ideal of $R$. Then $I$ is strongly $\alpha$-prime if and only if $I [[x;\alpha,w]]$ is prime.
\end{cor}
The following result generalizes (\cite{Letzter1}, Theorem 3.18) and is a direct consequence of the last corollary.
\begin{cor} Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$ and $I$ a strongly $\alpha$-prime ideal of $R$. Then, there exists a prime ideal $P$ of $R [[x;\alpha,w]]$ such that $P\cap R=I$. Moreover, if $I$ is $\alpha$-prime, then there exists a prime ideal $Q$ of $R\langle x;\alpha,w\rangle$ such that $Q\cap R=I$. \end{cor}
In (\cite{Letzter2}, Proposition 2.11), noetherianity is used to prove the result in the case of skew Laurent series rings, but in that proof the assumption was not necessary. The next result generalizes (\cite{Letzter2}, Proposition 2.11).
\begin{prop} \label{primality1} If $K$ is a prime ideal of $R\langle x;\alpha,w\rangle$, then $K\cap R$ is an $\alpha$-prime ideal of $R$.
\end{prop}
\begin{dem} Let $K$ be a prime ideal of $R\langle x;\alpha,w\rangle$. Then, we easily have that $K\cap R$ is an ideal of $R$. We claim that $K\cap R$ is an $\alpha$-prime ideal of $R$. In fact, let $a\in (K\cap R)\cap D_{-i}$, for $i\in \mathbb{Z}$. Then, $1_ix^iax^{-i}\in K$. Thus,
$$1_ix^iax^{-i}=1_i\alpha_i(a)w_{i, -i}=\alpha_i(a)w_{i, -i} \in K\cap D_i$$ and since $w_{i, -i}$ is an invertible element of $D_i$, we get that $\alpha_i(a)=\alpha_i(a)w_{i,-i}w_{i,-i}^{-1} \in (K\cap R)\cap D_i$. Hence, $\alpha_i(a)\in (K\cap R)\cap D_i$ and it follows that $\alpha_i((K\cap R)\cap D_{-i})\subseteq (K\cap R)\cap D_i$. By similar methods, we show that $\alpha_{i}^{-1}((K\cap R)\cap D_i)\subseteq (K\cap R)\cap D_{-i}$. Consequently, $\alpha_i((K\cap R)\cap D_{-i})= (K\cap R)\cap D_i$, for all $i\in \mathbb{Z}$, and we have that $K\cap R$ is an $\alpha$-invariant ideal of $R$.
By Proposition \ref{quociente}, we have that \begin{center} $\Psi:(R/(K\cap R))\langle x;\overline{\alpha},\overline{w}\rangle \rightarrow (R\langle x;\alpha,w\rangle)/((K\cap R)\langle x; \alpha,w\rangle)$\end{center} defined by $\Psi(\sum_{i\geq s} \overline{a_i}x^i)=\sum_{i\geq s}a_ix^i+(K\cap R)\langle x;\alpha,w\rangle$ is an isomorphism. Note that $K/((K\cap R)\langle x;\alpha,w\rangle)$ is a prime ideal and we have that $\Psi^{-1}(K/((K\cap R)\langle x;\alpha,w\rangle))=\overline{K}=\{\sum_{i\geq s} (a_i+(K\cap R))x^i:\sum_{i\geq s} a_ix^i\in K\}$ is a prime ideal in $(R/(K\cap R))\langle x;\overline{\alpha},\overline{w}\rangle$ and $\overline{K}\cap (R/(K\cap R))=\overline{0}$. Thus, we may assume that $K\cap R =0$ and in this case we only need to show that $R$ is $\alpha$-prime. In fact, let $I$ and $J$ be $\alpha$-invariant ideals of $R$ such that $IJ=0$. Hence, $IR\langle x;\alpha,w\rangle J\langle x;\alpha,w \rangle\subseteq I\langle x;\alpha,w\rangle J\langle x;\alpha,w\rangle \subseteq (IJ)\langle x;\alpha,w\rangle=0\subseteq K$. By the fact that $K$ is a prime ideal we have that either $IR\langle x;\alpha,w\rangle\subseteq K$ or $J\langle x;\alpha,w \rangle\subseteq K$ and it follows that either $I\subseteq K$ or $J\subseteq K$. So, either $I=0$ or $J=0$ and we have that $R$ is $\alpha$-prime.
\end{dem}
The following notion appears in \cite{CM}.
\begin{defin} Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$. Then the $\alpha$-nil radical $Nil_{\alpha}(R)$ of $R$ is the intersection of all $\alpha$-prime ideals of $R$.\end{defin}
From now on, for a ring $S$ we denote its prime radical by $Nil_{*}(S)$.
Now, we are in conditions to describe the prime radical of $R\langle x;\alpha,w\rangle$.
\begin{prop}\label{primeradical5} Let $\alpha$ be an unital twisted partial action of $\mathbb{Z}$ on $R$. Then $Nil_{*}(R\langle x;\alpha,w\rangle)=Nil_{\alpha}(R)\langle x;\alpha,w\rangle$. \end{prop}
\begin{dem} Let $P$ be a prime ideal of $R\langle x;\alpha,w\rangle$. Then, by Proposition \ref{primality1}, we have that $P\cap R$ is $\alpha$-prime. Thus, $Nil_{*}(R\langle x;\alpha,w\rangle)\supseteq Nil_{\alpha}(R)\langle x;\alpha,w\rangle$.
On the other hand, let $I$ be an $\alpha$-prime ideal of $R$. Then, by Corollary \ref{primeideals}, we have that $I\langle x;\alpha,w\rangle$ is prime. Hence, $Nil_{\alpha}(R)\langle x;\alpha,w\rangle\supseteq Nil_{*}(R\langle x;\alpha,w\rangle)$. So, $Nil_{*}(R\langle x;\alpha,w\rangle)=Nil_{\alpha}(R)\langle x;\alpha,w\rangle$. \end{dem}
\begin{prop} \label{semiprime1} Let $\alpha$ be a unital twisted partial action of $\mathbb{Z}$ on $R$.
(i) If $R$ is semiprime, then $R\langle x;\alpha,w\rangle$ is semiprime. Moreover, if $R$ is Noetherian and $R\langle x;\alpha,w\rangle$ is semiprime, then $R$ is semiprime.
(ii) Let $I$ be an $\alpha$-invariant ideal of $R$. If $I$ is semiprime, then $I\langle x;\alpha,w\rangle$ is semiprime.
\end{prop}
\begin{dem}
(i) Assume, by way of contradiction, that there exists $f=\displaystyle\sum_{i\geq s}f_ix^i$, with $f_s\neq 0$, such that $fR\langle x;\alpha,w\rangle f=0$. Take any $c\in D_{s}$ and write $b=\alpha_{-s}(c)\in D_{-s}$. Thus, $fbx^{-s}f=0$ and we have that $f_s\alpha_{s}^{-1}(b)w_{s,-s}f_s = 0$. Hence, $f_scw_{-s,s}f_s = 0$ and we get that $f_sD_sf_s = 0$. Since $R$ is a semiprime ring, $D_s$ is also a semiprime ring. Consequently, $f_s=0$ because $f_s\in D_s$, a contradiction. So, $R\langle x;\alpha,w\rangle$ is semiprime.
For the second part, since $R$ is Noetherian, then by (\cite{lam1}, Theorem 4.10.30) the prime radical $Nil_{*}(R)$ is nilpotent. As a consequence, there exists $n\geq 1$ such that for every $\alpha$-prime ideal $P$ of $R$ we have that $Nil_{*}(R)^n\subseteq P$, and it follows that $Nil_{*}(R)\subseteq P$ for every $\alpha$-prime ideal of $R$, because (\cite{laz e mig}, Remark 3.2) says that $Nil_{*}(R)$ is an $\alpha$-invariant ideal of $R$. Hence, we get that $Nil_{*}(R)\subseteq Nil_{\alpha}(R)$. By assumption and Proposition \ref{primeradical5} we have that $Nil_{\alpha}(R)=0$ and, consequently, $Nil_{*}(R)=0$. So, $R$ is semiprime.
(ii) The proof is similar to that of item (i).
\end{dem}
From now on, we proceed to give a closer description of the prime ideals of
$R[[x;\alpha,w]]$ and $R\langle x;\alpha,w\rangle$. The proof of the next result is similar to (\cite{wag e fer}, Proposition 2.6).
\begin{prop} \label{lem25} Let $P$ be a prime ideal of
$R[[x;\alpha,w]]$ (resp. $R\langle x;\alpha,w\rangle$). Then we have one of the
following possibilities:
(i) $P=Q\oplus \displaystyle\sum_{i\geq 1} D_ix^{i}$, where $Q$ is a prime
ideal of $R$
(resp. $P=Q\oplus \displaystyle\sum_{i\not =0} D_ix^{i}$, where $Q$ is a prime
ideal of $R$ with $D_j\subseteq Q$, for any $j\not =0$).
(ii) $1_ix^{i}\notin P$, for some $i\geq 1$.
\end{prop}
It is clear that for any prime ideal $Q$ of $R$, the ideal $Q\oplus
\displaystyle\sum_{i\geq 1} D_ix^{i}$ is a prime ideal of $R[[x;\alpha,w]]$. Thus,
we are in case (i) of Proposition \ref{lem25}. If, in addition,
$D_j\subseteq Q$, for all $j\not =0$, it is easy to see that
$P=Q\oplus \displaystyle\sum_{i\not =0} D_ix^{i}$ is an ideal of $R\langle x;\alpha,w\rangle$
which is obviously prime.
From now on, we proceed to study the case of item (ii) of the last proposition, and we have the following two results.
\begin{prop} Let $P$ be an ideal of $R\langle x;\alpha,w\rangle$. If $P\cap R$ is $\alpha$-prime and either
$P=(P\cap R)\langle x;\alpha,w\rangle$ or $P$ is maximal amongst the ideals $N$ of
$R\langle x;\alpha,w\rangle$ with $N\cap R=P\cap R$, then $P$ is prime.
\end{prop}
\begin{dem} If $P=(P\cap R)\langle x;\alpha,w\rangle$, then the result follows from Corollary \ref{primeideals}. Now, suppose that $P\neq (P\cap R)\langle x;\alpha,w\rangle$ and let $I,J$ be ideals of $R\langle x;\alpha,w\rangle$ such that $IJ\subseteq P$. Suppose that $I\nsubseteq P$ and $J\nsubseteq P$; then $P\subsetneq I+P$ and $P\subsetneq J+P$. Note that $((I+P)\cap R)((J+P)\cap R)\subseteq P\cap R$ because $(I+P)(J+P)\subseteq P$. By assumption, we have that either $((I+P)\cap R)\subseteq P\cap R$ or $((J+P)\cap R)\subseteq P\cap R$. Thus, either $(I+P)\cap R=P\cap R$ or $(J+P)\cap R=P\cap R$, which contradicts the maximality assumption on $P$. Hence, either $I\subseteq P$ or $J\subseteq P$. So, $P$ is prime.
\end{dem}
The proof of the following result is similar to the proof of the last proposition.
\begin{prop} Let $P$ be an ideal of $R[[x;\alpha,w]]$ such that
$1_ix^{i}\notin P$, for some $i\geq 1$ and $P\cap R$ is an $\alpha$-invariant ideal. If $P\cap R$ is prime and either $P=(P\cap
R)[[x;\alpha,w]]$ or $P$ is maximal amongst the ideals $N$ of
$R[[x;\alpha,w]]$ with $N\cap R=P\cap R$, then $P$ is prime.
\end{prop}
We finish this section with the following remark.
\begin{obs} Until now, we do not know whether the following natural converses of the last two propositions are true:
(i) If $P$ is a prime ideal of $R\langle x;\alpha,w\rangle$ and $P\neq (P\cap R)\langle x;\alpha,w\rangle$, then $P$ is maximal amongst the ideals $N$ of
$R\langle x;\alpha,w\rangle$ with $N\cap R=P\cap R$.
(ii) Let $P$ be an ideal of $R[[x;\alpha,w]]$ such that
$1_ix^{i}\notin P$, for some $i\geq 1$ and $P\cap R$ is a strongly $\alpha$-prime ideal of $R$. If $P$ is a prime ideal of $R [[x;\alpha,w]]$ and $P\neq (P\cap R) [[x;\alpha,w]]$, then $P$ is maximal amongst the ideals $N$ of
$R[[x;\alpha,w]]$ with $N\cap R=P\cap R$.
\end{obs}
\section{Goldie twisted partial skew power series rings}
In this section, $\alpha$ is a unital twisted partial action of $\mathbb{Z}$ on $R$, unless otherwise stated.
Let $S$ be a ring and $M$ a right $S$-module. We recall that $M$ is uniform if the intersection of any two nonzero submodules is nonzero; see (\cite{mcconnel robson}, pg. 52) for more details. According to (\cite{mcconnel robson}, pg. 57), a ring $S$ is right Goldie if it satisfies the ACC on right annihilators and does not contain an infinite direct sum of nonzero right ideals. In this section, we study the Goldie property in twisted partial skew Laurent series rings and twisted partial skew power series rings. We begin with the following lemma, which generalizes (\cite{Letzter2}, Lemma 2.8) and will be important in proving the principal results of this section.
\begin{lema}\label{submoduloordenado} Let $V$ be a simple right $R$-module. Then $VR [[x;\alpha,w]]$ is a right $R[[x;\alpha,w]]$-module whose only submodules are ordered as $$ VR[[x; \alpha,w]]\supset V(\sum_{i\geq 1}D_ix^i)\supset V(\sum_{i\geq 2}D_ix^i)\supset \ldots.$$
\end{lema}
\begin{dem} We easily have that $VR [[x;\alpha,w]]$ is a right $R [[x;\alpha,w]]$-module and note that $ VR [[x;\alpha,w]]\supset V(\displaystyle\sum_{i\geq 1}D_ix^i)\supset V(\displaystyle\sum_{i\geq 2} D_ix^i)\supset \ldots$.
Let $S$ be an $R [[x;\alpha,w]]$-submodule of $VR [[x;\alpha,w]]$ such that $S \neq V\displaystyle\sum_{i\geq 1}D_ix^i$, and let $f=\displaystyle\sum_{i\geq 0} v_i x^i$ be a nonzero element of $S$ with $0\neq v_0\in V$. Since $V$ is a simple right $R$-module, $v_0R=V$. Thus, there exist $a_i\in R$ such that $v_i=v_0a_i$ for all $i\geq 1$. Let $g=1+ u_1x+ u_2x^2+ \ldots$ be an element of $R [[x;\alpha,w]]$ such that
\begin{eqnarray*}fg&=& (v_0 + v_0a_1x + v_0a_2x^2+ \ldots)(1 + u_1x + u_2x^2+ \ldots)\\
&=& v_0 + (v_0u_1 + v_0a_1)x + (v_0u_2+ \alpha_1(\alpha^{-1}_1(v_0a_1)u_1)w_{1,1}+ v_0a_2)x^2 \\
&+& (v_0u_3 + \alpha_1(\alpha^{-1}_1(v_0a_1)u_2)w_{1,2} + \alpha_2(\alpha^{-1}_2(v_0a_2)u_1)w_{2,1} + v_0a_3)x^3+ \ldots.
\end{eqnarray*}
If we take $u_1=-a_1$, $u_2=-a_2-a_1\alpha_1(u_11_{-1})w_{1,1}$, $u_3=-a_3-a_2\alpha_2(u_11_{-2})w_{2,1}-a_1\alpha_1(u_21_{-1})w_{1,2}$, \ldots, $u_n= -a_n-a_{n-1}\alpha_{n-1}(u_11_{-n+1})w_{n-1,1}-\cdots-a_1\alpha_1(u_{n-1}1_{-1})w_{1,n-1}$, \ldots, then we get that $fg=v_0$.
Note that $$VR [[x;\alpha,w]]=(v_0R)R [[x;\alpha,w]]\subseteq v_0R [[x;\alpha,w]]=fgR [[x;\alpha,w]]\subseteq fR [[x;\alpha,w]]$$ and $fR [[x;\alpha,w]]\subseteq VR [[x;\alpha,w]]$. Hence, $VR [[x;\alpha,w]]=fR [[x;\alpha,w]]$ for any such $f\in S$, and it follows that $VR [[x;\alpha,w]]\subseteq SR [[x;\alpha,w]]\subseteq S$. So, $V\displaystyle\sum_{i\geq 1}D_ix^i$ is the unique maximal submodule of $VR [[x;\alpha,w]]$. Finally, iterating this argument, we get the result.
\end{dem}
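In the untwisted global case (all $D_i=R$, $\alpha_i=\mathrm{id}$, $w\equiv 1$), the recursion for the coefficients $u_n$ in the above proof reduces to the usual recursion for inverting a power series with invertible constant term. The following sketch (our own illustration in Python, over the rationals and truncated at a chosen order; it is not part of the proof) may help the reader check the recursion:
\begin{verbatim}
# Inverting f = 1 + a_1 x + a_2 x^2 + ... modulo x^{N+1}: the
# coefficients u_n of g = f^{-1} satisfy
#   u_0 = 1,  u_n = -(a_1 u_{n-1} + a_2 u_{n-2} + ... + a_n u_0),
# which is the recursion of the proof with all twists dropped.
from fractions import Fraction

def invert_unit_series(a, N):
    """Assumes a[0] == 1; returns u[0..N] with f*g = 1 mod x^{N+1}."""
    u = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        u[n] = -sum(Fraction(a[k]) * u[n - k] for k in range(1, n + 1))
    return u

a = [1, 2, -1, 3]                       # f = 1 + 2x - x^2 + 3x^3
u = invert_unit_series(a + [0] * 3, 6)
# check: the convolution of a and u gives [1, 0, 0, ...]
for n in range(7):
    s = sum(Fraction(ak) * u[n - k]
            for k, ak in enumerate(a) if k <= n)
    assert s == (1 if n == 0 else 0)
\end{verbatim}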
Next, we study the uniformity of $VR [[x;\alpha,w]]$ and $VR\langle x;\alpha,w\rangle$.
\begin{prop}\label{idealsimples} Suppose that $V$ is a simple right ideal of $R$. The following statements hold.
\begin{description}
\item[(a)]$VR [[x;\alpha,w]]$ is uniform as $R [[x;\alpha,w]]$-module.
\item[(b)]$VR\langle x; \alpha,w\rangle$ is uniform as $R\langle x;\alpha,w\rangle$-module.
\end{description}
\end{prop}
\begin{dem}
\begin{description}
\item[(a)] By Lemma \ref{submoduloordenado}, the only submodules of $VR [[x;\alpha,w]]$ are $V\displaystyle\sum_{i\geq m}D_ix^i$, for $m\geq 0$, and note that $V\displaystyle\sum_{j\geq i}D_jx^j\supset V\displaystyle\sum_{j\geq i+s}D_jx^j$, for all $s\geq 0$. Thus, $$V\displaystyle\sum_{j\geq s}D_jx^j \cap V\displaystyle\sum_{j\geq t}D_jx^j=V\displaystyle\sum_{j\geq t}D_jx^j\neq 0,$$ whenever $s\geq t$. So, $VR [[x;\alpha,w]]$ is uniform.
\item[(b)] Let $L$ be a nonzero submodule of $VR\langle x;\alpha,w\rangle$. Then, $L\cap VR [[x;\alpha,w]]$ is a nonzero submodule of $VR [[x;\alpha,w]]$. Thus, for any nonzero submodules $C$ and $D$ of $VR\langle x;\alpha,w\rangle$, we have that $C\cap VR [[x;\alpha,w]]\neq 0$ and $D\cap VR [[x;\alpha,w]]\neq 0$, and it follows that $(C\cap D)\cap VR [[x;\alpha,w]]=(C\cap VR [[x;\alpha,w]])\cap (D\cap VR [[x;\alpha,w]])\neq 0$. Hence, $C\cap D\neq0$. So, $ VR\langle x;\alpha,w\rangle$ is uniform.
\end{description}
\end{dem}
According to (\cite{mcconnel robson}, 2.2.10), the right Goldie rank of a ring $S$ is $n$ if there exists a direct sum $\displaystyle\bigoplus_{i=1}^n I_i$ of uniform right submodules of $S$ such that $\displaystyle\bigoplus_{i=1}^n I_i$ is right essential in $S$, and we denote it by $rankS$.
In (\cite{Letzter2}, Theorem 2.8) the authors used the Noetherian hypothesis. In the next result, we replace it by a weaker condition, namely the Goldie property, thereby generalizing (\cite{Letzter2}, Theorem 2.8).
\begin{teo}\label{dimensaouniforme} If $R$ is semiprime Goldie, then $rankR=rankR [[x;\alpha,w]]=rank R\langle x;\alpha,w\rangle$.
\end{teo}
\begin{dem} By the fact that $R$ is semiprime Goldie, we have, by (\cite{mcconnel robson}, Theorem 2.3.6), that the classical quotient ring $E$ of $R$ exists and is semisimple. Note that $rankR=rankE$, because of (\cite{mcconnel robson}, Lemma 2.2.12). Since $R\subseteq R\langle x;\alpha,w\rangle\subseteq E\langle x;\alpha^{*},w^{*}\rangle$, then $$rank E=rank R\leq rank R\langle x;\alpha,w\rangle \leq rank E\langle x;\alpha^{*},w^{*}\rangle,$$ where $\alpha^{*}$ is the extension of the unital twisted partial action $\alpha$ of $R$ to $E$; see (\cite{lmsw}, Theorem 3.12).
Let $d=rankR$; we may suppose, without loss of generality, that $R=E$ and $\alpha=\alpha^{*}$. Then, we can write $$R=V_1\oplus \cdots \oplus V_d,$$ where $V_i$ is a simple right ideal of $R$, for all $i=1, \ldots, d$.
Hence, $$R\langle x;\alpha,w\rangle=V_1R\langle x;\alpha,w\rangle\oplus \cdots \oplus V_dR\langle x;\alpha,w\rangle$$ and, by Proposition \ref{idealsimples}, item $(b)$, each $V_iR\langle x;\alpha,w\rangle$ is uniform as a right $R\langle x;\alpha,w\rangle$-module. So, $rank R\langle x;\alpha,w\rangle=d$.
By similar methods, we have that $R [[x;\alpha,w]]= V_1R [[x;\alpha,w]]\oplus \cdots \oplus V_dR[[x;\alpha,w]]$ and, by Proposition \ref{idealsimples}, item $(a)$, each $V_iR [[x;\alpha,w]]$ is a uniform submodule of $R [[x;\alpha,w]]$, for all $i=1, \ldots, d$. So, $rank R [[x;\alpha,w]]=d$.
\end{dem}
Let $S$ be a ring and $a\in S$. The right annihilator of $a$ in $S$ is $Ann_S(a)=\{x\in S: ax=0\}$. Moreover, according to (\cite{mcconnel robson}, Definition 2.2.4), the singular ideal of $S$ is $Z(S)=\{a\in S: Ann_S(a) \mbox{ is right essential in } S\}$.
Now, we are ready to prove the second principal result of this section.
\begin{teo}\label{goldie} Let $R$ be a semiprime ring. The following conditions are equivalent:
\begin{description}
\item[(a)] $R$ is Goldie.
\item[(b)] $R [[x;\alpha,w]]$ is Goldie.
\item[(c)] $R\langle x;\alpha,w\rangle$ is Goldie.
\end{description}
\end{teo}
\begin{dem}
$(a)\Rightarrow (c)$ By assumption, Theorem \ref{dimensaouniforme}, and Proposition \ref{semiprime1}, item $(i)$, we have that $rankR\langle x;\alpha,w\rangle=rankR<\infty$ and $R\langle x;\alpha,w\rangle$ is semiprime. We claim that $R\langle x;\alpha,w\rangle$ is nonsingular. In fact, let $f\in Z(R\langle x;\alpha,w\rangle)$, where $f= a_{-j}x^{-j}+ \ldots + a_0+ a_1x+ \ldots$, and let
$I$ be a nonzero right ideal of $R$. Then $I\langle x;\alpha,w\rangle$ is a right ideal of $R\langle x;\alpha,w\rangle$ and we obtain that $Ann_{R\langle x;\alpha,w\rangle}(f)\cap I\langle x;\alpha,w\rangle\neq 0$. Thus, there exists $0\neq h\in I\langle x;\alpha,w\rangle\cap Ann_{R\langle x;\alpha,w\rangle}(f)$, i.e., $fh=0$. Write $h=b_{-k}x^{-k}+ \ldots + b_0 + b_1x+ \ldots$ and suppose, without loss of generality, that $b_{-k}\neq 0$. Hence, looking at the smallest degree of the product $fh$, we get
$$ a_{-j}\alpha_{-j}(1_jb_{-k})w_{-j, -k} x^{-j-k}=0,$$ which implies that $a_{-j}\alpha_{-j}(1_jb_{-k})=0$. Consequently, $$\alpha^{-1}_{-j}(a_{-j})\alpha^{-1}_{-j}(\alpha_{-j}(1_jb_{-k}))=0 \Longrightarrow \alpha^{-1}_{-j}(a_{-j})1_jb_{-k}=0, $$ and we have that $\alpha^{-1}_{-j}(a_{-j})b_{-k}=0$. So, $0\neq b_{-k}\in Ann_R(\alpha^{-1}_{-j}(a_{-j}))$ and we obtain that $Ann_R(\alpha^{-1}_{-j}(a_{-j}))\cap I\neq 0$, which shows that $\alpha^{-1}_{-j}(a_{-j})\in Z(R)$. By the fact that $R$ is Goldie, we have that $\alpha^{-1}_{-j}(a_{-j})=0$. Since $\alpha^{-1}_{-j}$ is an isomorphism, $a_{-j}=0$. Now, following similar methods, we obtain that $f=0$. Hence, $Z(R\langle x;\alpha,w\rangle)=0$. Therefore, by (\cite{mcconnel robson}, Theorem 2.3.6), we get that $R\langle x;\alpha,w\rangle$ is Goldie.
We now show that $(c)\Rightarrow (b)\Rightarrow (a)$. In fact, note that $$R\subset R [[x;\alpha,w]]\subset R\langle x;\alpha,w\rangle$$ and, by the fact that $R$ is semiprime and Goldie, we have, by Theorem \ref{dimensaouniforme}, that $rank R=rankR [[x;\alpha,w]]=rankR\langle x;\alpha,w\rangle.$ Since the chain condition on right annihilators is inherited by subrings, we obtain the desired result.
\end{dem}
In the article
\cite{lmsw}, the authors worked with twisted partial actions of finite type on rings satisfying some finiteness conditions, such as the Goldie property. At that time, the authors did not notice that such an assumption implies the existence of the enveloping action. So, in the next result, we show that unital twisted partial actions of finite type on rings with finite Goldie rank have an enveloping action.
\begin{teo}\label{envolvente} Let $R$ be a ring with finite uniform dimension and $\alpha$ a twisted partial action of $\mathbb{Z}$ on $R$. If $\alpha$ is of finite type, then $\alpha$ has enveloping action.
\end{teo}
\begin{dem} By assumption, there exists a finite subset $\{g_1, \ldots , g_n\}$ of $\mathbb{Z}$ such that $$R=D_{g+g_1} + \ldots + D_{g+g_n},$$
for every $g\in \mathbb{Z}$.
We claim that $R$ can be written as a direct sum of indecomposable rings. In fact, each $D_{g_i}$ has identity $1_{g_i}$ and, by methods similar to (\cite{laz e mig}, Remark 1.11), we can write $$R=F_1\oplus\ldots \oplus F_n,$$ where each $F_i$ is an ideal of $D_{g+g_i}$, $i=1,\ldots, n$, generated by a central idempotent. Now, if each $F_i$ is indecomposable, we are done. Next, if there exists $1\leq j\leq n$ such that $F_j$ is not indecomposable, then we may write $F_j=F_j^{1}\oplus F_j^{2}$, and we get $$R= F_1\oplus\cdots\oplus F_j^{1}\oplus F_j^{2}\oplus\cdots\oplus F_n.$$
Proceeding in this manner with all other decomposable components, we may write $$R=A_1\oplus \cdots\oplus A_n.$$ Now, if all $A_i$ are indecomposable, then we are done. If not, we proceed as before. Since $rankR$ is finite, the process must stop, and we have that $R$ is a direct sum of indecomposable rings, each of which is generated by a central idempotent of $R$. So, by (\cite{DES2}, Theorem 7.2), $(R,\alpha,w)$ has an enveloping action.
\end{dem}
Let $\alpha$ be a unital twisted partial action of $\mathbb{Z}$ on $R$ that admits an enveloping action $(T,\beta,u)$. Following \cite{DE} and \cite{DES2}, we exhibit an explicit Morita context between
$R\langle x;\alpha,w\rangle$ and $T\langle x; \beta,u \rangle$ whose restriction to $T[[x;\beta,u]]$
gives also a Morita context between $R[[x;\alpha,w]]$ and
$T[[x;\beta,u]]$.
Recall that given two rings $R$ and $S$, bimodules ${_R}U_S$ and
${_S}V_R$ and maps $\theta:U\otimes_{S} V\rightarrow R$ and
$\psi:V\otimes_{R} U\rightarrow S$, the collection
$(R,S,U,V,\theta,\psi)$ is said to be a Morita context if the
array
\[ \left[ \begin{array}{cc}
R & V \\
U & S
\end{array} \right], \]
with the usual formal operations of $2\times 2$ matrices, is a
ring.
The following result is proved in (\cite{mcconnel robson}, Theorem 3.6.2) for
rings with an identity element. Actually, the proof does not use the fact
that the rings have an identity element or that the modules $U$ and $V$ are
unital. So, we can easily see that the following is true for rings which do
not necessarily have an identity.
\begin{teo} \label{morita} Let $(R,S,U,V,\theta,\psi)$ be a Morita context.
Then there is an order preserving one-to-one correspondence
between the sets of prime ideals $P$ of $R$ with $P\nsupseteqq UV$
and prime ideals $P^{'}$ of $S$ with $P^{'}\nsupseteqq VU$. The
correspondence is given by $P\longmapsto \{s\in S:UsV\subseteq
P\}$ and $P^{'}\longmapsto \{r\in R: VrU\subseteq
P^{'}\}$.\end{teo}
Following ideas similar to (\cite{DE}, Section 5), we put $U=\{\displaystyle\sum_{i\in
\mathbb{Z}}a_{i}x^{i}: a_{i}\in R \mbox{ for all } i\in \mathbb{Z}\}$ and $V=\{\displaystyle\sum_{i\in
\mathbb{Z}}a_{i}x^{i}: a_{i}\in \beta_{i}(R) \mbox{ for all } i\in \mathbb{Z}\}$. Then, it can easily be seen that
$UT\langle x;\beta,u \rangle \subseteq U$, $T\langle x;\beta,u\rangle V\subseteq V$,
$R\langle x;\alpha,w\rangle U\subseteq U$ and $VR\langle x;\alpha,w\rangle\subseteq V$ (to show
the relations recall that $\beta_{j}(R)$ is an ideal of $T$ and
$S_{j}=\beta_{j}(S_{-j})$). In case we want to consider
$R[[x;\alpha,w]]$ and $T[[x;\beta,u]]$ we restrict $U$ and $V$ to have
just power series and we have similar relations.
Thus, we have the Morita contexts
$(R\langle x;\alpha,w\rangle,T\langle x;\beta,u\rangle ,U,V,\theta,\psi)$ and
$(R[[x;\alpha,w]],T[[x;\beta,u]],U,V,\theta,\psi)$, where $\theta$ and
$\psi$ are obvious.
The proof of the following lemma is similar to (\cite{wag e fer}, Lemma 2.2).
\begin{lema} \label{prim1} Let $P$ be a prime ideal of $R[[x;\alpha,w]]$.
Then, there exists a unique prime ideal $P^{'}$ of $T[[x;\beta,u]]$, given by Theorem \ref{morita}, which satisfies $P^{'}\cap R[[x;\alpha,w]]=P$.
\end{lema}
We have the following easy consequence.
\begin{cor} \label{prim2} There is a one-to-one correspondence, via contraction,
between the set of all prime ideals of $R[[x;\alpha,w]]$ and the set
of all prime ideals of $T[[x;\beta,u]]$ which do not contain $R$.
\end{cor}
The next result is important to prove the last main result of this article and it is an easy consequence of Lemma \ref{prim1} and Corollary \ref{prim2}.
\begin{cor}\label{primeradical} Let $\alpha$ be a unital twisted partial action of $\mathbb{Z}$ on $R$ with enveloping action $(T,\beta,u)$. Then $Nil_{*}(T [[x;\beta,u]])\cap R [[x;\alpha,w]]=Nil_{*}(R [[x;\alpha,w]])$. \end{cor}
Based on the last results, we will proceed to describe the prime radical of $R [[x;\alpha,w]]$ when $(R,\alpha,w)$ has enveloping action $(T,\beta,u)$ and for this we need the following result.
\begin{lema}\label{primeradical1} Let $\beta$ be a twisted global action of $\mathbb{Z}$ on a ring $S$ with cocycle $u$. Then, the prime radical of $S[[x;\beta,u]]$ is $Nil_*(S[[x;\beta,u]])=(Nil_{*}(S)\cap N_{\beta}(S))\oplus \displaystyle\sum_{i\geq 1}N_{\beta}(S)x^i$, where $N_{\beta}(S)$ is the intersection of all strongly $\beta$-prime ideals of $S$.\end{lema}
\begin{dem} We have two classes of prime ideals in $S [[x;\beta,u]]$, namely \begin{center} $\mathcal{F}_1=\{ P : P \mbox{ prime ideal such that } S [[x;\beta,u]]x\subseteq P\} $ \end{center} and \begin{center}$\mathcal{F}_2=\{P : P \mbox{ prime ideal such that } S [[x;\beta,u]]x\nsubseteq P\}$.\end{center} Note that $\displaystyle\bigcap_{P\in \mathcal{F}_1}P=Nil_{*}(S)\oplus \displaystyle\sum _{i\geq 1}Sx^i$. Now, for each strongly $\beta$-prime ideal $Q$ of $S$, we have, by methods similar to Corollary \ref{primeideals}, that $Q [[x;\beta,u]]$ is prime, and we easily get that, for each prime ideal $P\in\mathcal{F}_2$, the contraction $P\cap S$ is a strongly $\beta$-prime ideal of $S$. Thus, $\displaystyle\bigcap_{P\in \mathcal{F}_2} P \supseteq N_{\beta}(S)[[x;\beta,u]]$. Hence, $Nil_{*}(S [[x;\beta,u]])=(\displaystyle\bigcap_{P\in \mathcal{F}_1} P)\cap (\displaystyle\bigcap_{Q\in \mathcal{F}_2}Q)\supseteq (Nil_{*}(S) +\displaystyle\sum_{i\geq 1}Sx^i)\cap N_{\beta}(S)[[x;\beta,u]]\supseteq (Nil_{*}(S)\cap N_{\beta}(S))\oplus \displaystyle\sum_{i\geq 1}N_{\beta}(S)x^i$.
On the other hand, since for each prime ideal $L$ of $S$ we have that $L\oplus \displaystyle\sum _{i\geq 1}Sx^i$ is a prime ideal of $S [[x;\beta,u]]$, and, in the same way as in Corollary \ref{primeideals}, we have that $N [[x;\beta,u]]$ is prime for each strongly $\beta$-prime ideal $N$ of $S$, then $Nil_{*}(S [[x;\beta,u]]) \subseteq (Nil_{*}(S)\cap N_{\beta}(S)) \oplus \displaystyle\sum_{i\geq 1}N_{\beta}(S)x^i$.
So, $Nil_{*}(S [[x;\beta,u]])=(Nil_{*}(S)\cap N_{\beta}(S)) \oplus \displaystyle\sum_{i\geq 1}N_{\beta}(S)x^i$.
\end{dem}
\begin{prop}\label{primeradical4} Let $\alpha$ be a unital twisted partial action with enveloping action $(T,\beta,u)$. Then the prime radical of $R [[x;\alpha,w]]$ is $Nil_{*}(R [[x;\alpha,w]])=(N_{\alpha}(R)\cap Nil_{*}(R))\oplus \displaystyle\sum_{i\geq 1}(N_{\alpha}(R)\cap D_i)x^i$, where $N_{\alpha}(R)$ is the intersection of all strongly $\alpha$-prime ideals of $R$.\end{prop}
\begin{dem} Using the same methods as in (\cite{wag e fer}, Lemma 2.9), we have, for each strongly $\beta$-prime ideal $Q$ of $T$, that $Q\cap R$ is a strongly $\alpha$-prime ideal of $R$, and since $R$ is an ideal of $T$ we have that $Nil_{*}(T)\cap R=Nil_{*}(R)$. Thus, by Corollary \ref{primeradical} we easily get that $Nil_{*}(R [[x;\alpha,w]])=Nil_{*}(T [[x;\beta,u]])\cap R [[x;\alpha,w]]= (Nil_{*}(R)\cap N_{\alpha}(R))\oplus \displaystyle\sum_{i\geq 1}(N_{\alpha}(R)\cap D_i)x^i$.\end{dem}
\begin{obs} In the last result, we used the fact that the twisted partial action has an enveloping action, but we do not know whether Proposition \ref{primeradical4} is true for unital twisted partial actions of $\mathbb{Z}$ without enveloping action. To solve this problem, we need to know whether the following statement is true:
\begin{center} Let $P$ be a prime ideal of $R [[x;\alpha,w]]$ such that $1_ix^i\notin P$ for some $i\geq 1$. Then $P\cap R$ is strongly $\alpha$-prime.\end{center}
\end{obs}
As in (\cite{CFMH}, Example 2.6), a similar example shows that twisted partial skew power series rings over semiprime Goldie rings are not necessarily semiprime; but, if we impose the ``finite type'' condition, we get the following.
\begin{teo} Let $\alpha$ be a unital twisted partial action. If $R$ is semiprime Goldie and $\alpha$ is a twisted partial action of finite type, then $R [[x;\alpha,w]]$ is semiprime Goldie.
\end{teo}
\begin{dem} Since $\alpha$ is of finite type and $rank(R)$ is finite, then by Theorem \ref{envolvente} $\alpha$ has an enveloping action $(T,\beta,u)$. In this case, by (\cite{lmsw}, Corollary 4.18), $T$ is semiprime Goldie, and we claim that $T [[x;\beta,u]]$ is semiprime. In fact, suppose that $Nil_{*}(T [[x;\beta,u]])$ is not zero. Then, by (\cite{lam1}, Lemma 10.10.29), $Nil_{*}(T [[x;\beta,u]])$ contains a nonzero nilpotent ideal $L$, since by Theorem \ref{goldie} we have that $T[[x;\beta,u]]$ is Goldie. By the fact that $T$ is semiprime, we have that $Nil_{*}(T [[x;\beta,u]])=\displaystyle\sum_{i\geq 1}N_{\beta}(T)x^i$. Now, consider $ H=\{0\neq a\in N_{\beta}(T):\exists\, 0\neq f\in L \mbox{ such that } f=ax^{j}+\ldots\}\cup \{0\}$. It is not difficult to see that $H$ is a nonzero ideal of $T$ with $ \beta_i(H)\subseteq H$, for all $i\in\mathbb{Z}$. Since $L$ is nilpotent, we obtain that $H$ is nilpotent, and consequently $H=0$, because $T$ is semiprime, which is a contradiction. So, $Nil_{*}(T [[x;\beta,u]])=0$.
By Corollary \ref{primeradical} we have that $$Nil_{*}(T [[x;\beta,u]])\cap R [[x;\alpha,w]]=Nil_{*}(R [[x;\alpha,w]])$$ which implies that $Nil_{*}(R [[x;\alpha,w]])=0$. Therefore, $R [[x;\alpha,w]]$ is semiprime Goldie.
\end{dem}
Perovskite manganites with a composition of $R_{1-x}A_{x}$MnO$_{3}$ ($R$ stands for a rare-earth ion, $A$ an alkaline earth ion, $x$ the band filling) as illustrated in Fig.~1(a) exhibit a wide variety of ordered structures in spin, charge, and orbital degrees of freedom depending on the band filling and the band width~\cite{Tokura2006}. Various unique phenomena observed in these systems originate from the strong correlation between these electron degrees of freedom. A well-known example of such phenomena is the colossal magnetoresistance, which occurs due to the phase transition between a charge-orbital-ordered antiferromagnetic insulating state and a double-exchange ferromagnetic metallic state induced by a magnetic field. The conduction electron in the ferromagnetic-metallic state has a nearly 100~\% spin polarization, and hence it is recognized as a representative half-metal~\cite{Park1998}. Another example is the multiferroicity which appears in the Mott-insulating phase at $x = 0$. In this phase, the spontaneous electric polarization is induced by a non-collinear spiral order of the local spins, leading to the emergence of non-trivial electromagnetic responses, such as the large modulation of the magnetization by an electric field~\cite{Cheong2007,Tokura2010}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=8cm]{Fig1_modified.eps}
\caption{(a) A schematic of the crystal structure of perovskite manganite. (b) A schematic spin texture of a single skyrmionic bubble. (c) Stripe domain width in a ferromagnetic thin film as a function of $Q$ factor calculated using Eq.~1. The parameters we used for the calculation are described in the main text. The inset shows the evolution of the characteristic spin configuration with $Q$ factor; spins pointing lateral direction with large domain ($Q < 1$), twisted configuration ($Q\sim1$), and spins pointing perpendicular direction with large domain ($Q > 1$).}
\end{center}
\end{figure}
When a spin-polarized conduction electron passes through a non-collinear local spin order, non-trivial electromagnetic coupling emerges, as typified by the topological Hall effect (THE) observed in compounds hosting magnetic skyrmions~\cite{Bruno2004,Binz2008}. This Hall effect originates from the Berry phase acquired by the conduction electron whose spin is aligned to the local spin by the Hund's-rule coupling. The half-metallic manganite is an attractive system to examine the THE because its magnitude is proportional to the spin polarization. The magnetic skyrmion is a particle-like object with a whirling spin texture as illustrated in Fig.~1(b). The exact definition of the magnetic skyrmion is a solitonic state stabilized by a competition between the exchange interaction enforcing a parallel spin alignment and the Dzyaloshinskii-Moriya interaction (DMI) twisting the parallel spin alignment. Hence, it is observed in noncentrosymmetric magnets (chiral skyrmion)~\cite{Bogdanov1994,Resler2006,Muhlbauer2009,Yu2010}. A similar but slightly different particle-like magnetic texture is observed in centrosymmetric magnets, known as the magnetic bubble, which is stabilized by the magnetic dipolar interaction instead of the DMI~\cite{Malozemoff,Hubert}. Although the chiral skyrmion and the magnetic bubble differ in the magnetization profile of the core region~\cite{Kiselev2011}, their dynamical response and THE can be quantified by a common topological invariant called the skyrmion number ($N$), which is defined as the number of times the constituent spins wrap the unit sphere~\cite{Moutafis2009,Buttner2015,Nagaosa2013}. Hence, a magnetic bubble can also be regarded as a skyrmion in a broad sense, and is called a `skyrmionic bubble'. Although the study of the skyrmionic bubble has a long history, it has recently attracted renewed interest, in particular due to the richer skyrmion textures which are brought about by the underlying helicity degree of freedom~\cite{Nagaosa2013,Ezawa2010}.
Furthermore, the skyrmionic bubbles can form not only in single-phase bulk
crystals~\cite{Nagai2012,Nagao2013,Yu2013,Morikawa2015,Kotani2016,Yu2012,Wang2016}, but also in thin film multilayers, sometimes in conjunction with DMI~\cite{Finazzi2013,Li2014,Jiang2015,Gilbert2015,Woo2016,Lee2016,Soumyanarayanan2016}. The wide variety of materials choice is a major advantage of the skyrmionic bubble from the application point of view.
Although there are several reports on the observation of skyrmionic bubbles in bulk crystals of
manganites~\cite{Nagai2012,Nagao2013,Yu2013,Morikawa2015,Kotani2016}, neither the control of the skyrmion size nor the observation of the THE has been achieved yet. Thin film structure provides an excellent platform for such a study. The size of the skyrmionic bubble in a thin film crucially depends on the magnitude of the uniaxial magnetic anisotropy. The quality factor $Q = K_{\mathrm{u}}/\Omega$, where $K_{\mathrm{u}}$ is the uniaxial magnetic anisotropy energy and $\Omega=2\pi M_{\mathrm{s}}^{2}$ ($M_{\mathrm{s}}$ is the saturation magnetization) is the dipolar interaction energy, is known to be a good measure of the domain size~\cite{Yafet1988,Vedmedenko2000}. A stripe domain structure with perpendicular magnetization can be formed only when $Q\geq1$, and the domain width ($L$) becomes larger with $Q$ as shown in Fig.~1(c). $L$ is calculated based on the following analytical solution,
\begin{equation}
L=\frac{5nJ\pi^{2}}{6\Omega_{\mathrm{L}}}\frac{\exp\left(\sqrt{nJ\pi^{4}(Q-1)\Omega_{\mathrm{S}}/\Omega_{\mathrm{L}}^{2}}+1\right)}{\sqrt{nJ\pi^{4}(Q-1)\Omega_{\mathrm{S}}/\Omega_{\mathrm{L}}^{2}}+1},
\end{equation}
where the film has $n$ layers ($n=t/a$, $t$ is the film thickness and $a$ the lattice constant), $J$ the exchange interaction energy, $\Omega_{\mathrm{L}}=2\pi(nM_{\mathrm{s}})^2$, and $\Omega_{\mathrm{S}}=2\pi nM_{\mathrm{s}}^2$~\cite{Wu2004,Won2005}. The $Q$ dependence of the domain width shown in Fig.~1(c) is estimated using $J=$2.5~meV~\cite{Moussa2007}, $n$~=~75 ($t$~=~30~nm) and $M_{\mathrm{s}}$~=~2.5~$\mu_{\mathrm{B}}$/f.u.. In this study, we tune the $Q$ value in thin films of manganite by controlling $K_{\mathrm{u}}$ using both the single-ion anisotropy induced by Ru doping and the epitaxial strain. We find that large THE appears when the perpendicular magnetic anisotropy and dipolar interaction are in keen competition.
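For the reader's convenience, the $Q$ dependence in Fig.~1(c) can be reproduced by evaluating Eq.~(1) directly. The sketch below (our own, in Python) is a literal transcription of Eq.~(1); the inputs $J$ and $M_{\mathrm{s}}$ must be expressed in the consistent unit system of Refs.~\cite{Wu2004,Won2005}, so we use placeholder values here and only illustrate the monotonic growth of $L$ with $Q$:
\begin{verbatim}
import math

def domain_width(Q, n, J, Ms):
    # Eq. (1): L = (5 n J pi^2 / 6 Omega_L) * exp(s + 1) / (s + 1),
    # with s = sqrt(n J pi^4 (Q - 1) Omega_S / Omega_L^2),
    # Omega_L = 2 pi (n Ms)^2 and Omega_S = 2 pi n Ms^2.
    omega_L = 2.0 * math.pi * (n * Ms) ** 2
    omega_S = 2.0 * math.pi * n * Ms ** 2
    s = math.sqrt(n * J * math.pi ** 4 * (Q - 1.0)
                  * omega_S / omega_L ** 2)
    return (5.0 * n * J * math.pi ** 2 / (6.0 * omega_L)
            * math.exp(s + 1.0) / (s + 1.0))

# placeholder J = Ms = 1: the absolute scale is meaningless, but the
# monotonic growth of L with Q (as in Fig. 1(c)) is visible.
for Q in (1.01, 1.1, 1.5, 2.0):
    print(Q, domain_width(Q, 75, 1.0, 1.0))
\end{verbatim}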
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{Fig2_modified.eps}
\caption{(a) Temperature dependence of the resistivity for La$_{0.7}$Sr$_{0.3}$Mn$_{1-y}$Ru$_{y}$O$_{3}$ films on LSAT(001) substrate with $y = 0$, 0.05, and 0.1. (b)(c)(d) Temperature dependence of the magnetization for the same samples measured in a magnetic field of 0.1~T applied parallel ($M_{\mathrm{in}}$) and perpendicular ($M_{z}$) to the film surface.}
\end{center}
\end{figure}
\section{Sample Preparation}
Thin films of La$_{0.7}$Sr$_{0.3}$Mn$_{1-y}$Ru$_{y}$O$_{3}$ (LSMRO) with a thickness of 30~nm were grown on the (001) surface of (LaAlO$_{3}$)$_{0.3}$(SrAl$_{0.5}$Ta$_{0.5}$O$_{3}$)$_{0.7}$ (LSAT) substrates by a pulsed laser deposition technique. The Ru concentration $y$ was varied from 0 to 0.1. As reported in Ref.~\cite{Yamada2005}, a too high growth temperature ($T_{\mathrm{growth}}$) or a low oxygen pressure ($P_{\mathrm{O}_{2}}$) causes a deficiency of Ru. We optimized the growth conditions at $T_{\mathrm{growth}}$ = 720~$^{\circ}$C and $P_{\mathrm{O}_{2}}$ = 40~mTorr to obtain
thin films with a stoichiometric composition and an atomically flat surface. We verified by x-ray diffraction measurements that the films grown under the optimum condition have pseudomorphic structures with their $c$-axis being elongated due to the compressive strain imposed by LSAT substrate whose lattice constant is smaller than that of LSMRO~\cite{Sahu2000}.
\section{Magnetic and Transport Properties}
Figure~2 displays the temperature dependence of the resistivity and magnetization for the films with $y$ = 0, 0.05, and 0.1. All films show a metallic behavior concomitant with the appearance of the ferromagnetic state, whereas the magnetic easy axis changes its direction. At $y$ = 0, the magnetic easy axis lies along the in-plane direction ($M_{\mathrm{in}}$). With increasing Ru concentration, $M_{\mathrm{in}}$ decreases and alternatively the out-of-plane component ($M_{z}$) increases. $M_{\mathrm{in}}$ and $M_{z}$ are comparable at $y=0.05$, and $M_{z}$ dominates at $y=0.1$. The switching of the magnetic easy axis with Ru concentration is also apparent in the magnetic-field dependence of the magnetization shown in Figs.~3(a), (b), and (c).
We thus consider that the film with $y=0.05$ is in a state of $Q\sim1$, the ideal condition for the generation of small skyrmions.
To confirm the effect of the epitaxial strain on the magnetic anisotropy, we fabricated a thin film with $y$ = 0.05 on a SrTiO$_{3}$ (STO) substrate to apply a tensile strain.
As shown in Fig.~4, the $M$-$H$ curve of the film grown on STO substrate indicates a robust in-plane magnetic anisotropy with a large coercive magnetic field, being consistent with the result in Ref.~\cite{Yamada2005}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{Fig3_modified.eps}
\caption{(a)(b)(c) Magnetic-field ($H$) dependence of magnetization measured at a temperature of 100~K for La$_{0.7}$Sr$_{0.3}$Mn$_{1-y}$Ru$_{y}$O$_{3}$ (LSMRO) films grown on LSAT(001) substrates. $M_{\mathrm{in}}$ ($M_{z}$) denotes the magnetization measured by applying $H$ parallel (perpendicular) to the film surface. The spin texture expected from the magnetization curves at each Ru concentration is shown in the inset. (d)(e)(f) $H$ dependence of the Hall resistivity ($\rho_{yx}$) measured at 100~K. The inset of Fig.~3(d) is a picture of the Hall-bar device.}
\end{center}
\end{figure}
It has been known that the magnetic anisotropy in manganite films can be controlled by the epitaxial strain~\cite{Kwon1997,Wu1999,Dho2003}. However, the realization of a perpendicularly magnetized state only by the epitaxial strain requires a sizable compressive strain imposed by a largely lattice-mismatched substrate. This, however, causes a partial strain relaxation and an inhomogeneous magnetic state. Furthermore, it is difficult to realize a continuous variation of the magnetic anisotropy due to the limited choice of substrates. Therefore, there has been no report on dense and small skyrmion formation in manganite films. Our results indicate that the combination of the compressive strain and Ru doping enables the perpendicularly magnetized state by a modest strain and a continuous control of the magnetic anisotropy by changing the Ru concentration. The compressive biaxial epitaxial strain imposed by the LSAT substrate lifts the $t_{\mathrm{2g}}$ orbital degeneracy of the doped Ru$^{4+}$ ion, with the $xy$ orbital being at the lowest energy. The restored orbital angular momentum in the heavy element Ru induces the large single-ion anisotropy necessary for the perpendicular magnetic anisotropy.
Figures~3(d)-(f) show the Hall resistivity ($\rho_{yx}$) of these films. We used photolithography and ion-milling to pattern the films into Hall-bar shapes (inset of Fig.~3(d)). The $\rho_{yx}$ of the films with $y = 0$ and 0.1 are almost proportional to $M_{z}$. On the contrary, the film with $y$ = 0.05 shows a large contribution of an additional Hall signal in the small magnetic field region. In conventional magnets, the ordinary Hall resistivity ($\rho_{yx}^{\mathrm{O}}$) proportional to the magnetic field and the anomalous Hall resistivity ($\rho_{yx}^{\mathrm{A}}$) proportional to $M_{z}$ contribute to $\rho_{yx}$. In compounds hosting non-zero spin chirality such as skyrmions, another contribution to $\rho_{yx}$ emerges, that is, the THE. Since the film with $y = 0.05$ is located near the critical point of $Q = 1$, the skyrmion density is expected to be enhanced compared to the other compositions. We thus consider that the additional Hall signal in this compound stems from the THE.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{Fig4_modified.eps}
\caption{(a) Magnetic-field dependence of the magnetization of LSMRO($y = 0.05$)/STO(001) film measured at 100~K. (b) Magnetic-field dependence of the Hall resistivity for the same film measured at 100~K.}
\end{center}
\end{figure}
\begin{figure}[bp]
\begin{center}
\includegraphics[width=8cm]{Fig5_modified.eps}
\caption{(a) $H$ dependence of $\rho_{yx}$, ordinary Hall resistivity ($\rho_{yx}^{\mathrm{O}}$), and $M_{z}$ for LSMRO film with $y$ = 0.05 at 100~K. $\rho_{yx}^{\mathrm{O}}$ was derived by linear fitting of $\rho_{yx}$-$H$ curve above 1~T of magnetic field. The scale of the vertical axis for $\rho_{yx}$ (left axis) and that for $M_{z}$ is adjusted so that they overlap at 0.5~T. (b) Temperature evolution of topological Hall resistivity ($\rho_{yx}^{\mathrm{T}}$) versus $H$. (c) Temperature dependence of $\rho_{yx}^{\mathrm{T}}$ and $\rho_{yx}^{\mathrm{A}}$ (left axis) and $M_{\mathrm{in}}$ and $M_{z}$. (right axis). $M_{\mathrm{in}}$ and $M_{z}$ are measured at a magnetic field of 0.1~T. (d) Temperature dependence of $\rho_{yx}^{\mathrm{T}}$ and $\rho_{yx}^{\mathrm{O}}$ at $H$ = 1~T as well as the effective field ($B_{\mathrm{eff}}$) which is derived by $\rho_{yx}^{\mathrm{T}}/\rho_{yx}^{\mathrm{O}}$($H$ = 1~T). The relation between $B_{\mathrm{eff}}$ and effective skyrmion diameter ($d_{\mathrm{eff}}$) is shown for several representative values of $d_{\mathrm{eff}}$. }
\end{center}
\end{figure}
Figure~5(a) shows the $H$ dependence of $\rho_{yx}$, $\rho_{yx}^{\mathrm{O}}$, and $M_{z}$ for the film with $y=0.05$ at 100~K. $\rho_{yx}^{\mathrm{O}}$ was derived from the linear fitting of $\rho_{yx}$ measured at magnetic fields above 1~T, where $\rho_{yx}$ shows a linear dependence on $H$. The contribution of $\rho_{yx}^{\mathrm{A}}$ is derived by fitting $\rho_{yx}-\rho_{yx}^{\mathrm{O}}$ with $\alpha M_{z}$ ($\alpha$ is a constant) in $H$ outside of the hysteresis. The topological Hall resistivity ($\rho_{yx}^{\mathrm{T}}$) was derived by subtracting $\rho_{yx}^{\mathrm{O}}$ and $\rho_{yx}^{\mathrm{A}}$ from the total $\rho_{yx}$, \textit{i.e.}, $\rho_{yx}^{\mathrm{T}} = \rho_{yx}-\rho_{yx}^{\mathrm{O}}-\rho_{yx}^{\mathrm{A}}$~\cite{Lee2009,Neubauer2009,Kanazawa2011,Huang2012,Porter2014,Yokouchi2014}. The derivation of $\rho_{yx}^{\mathrm{T}}$ was carried out by the same procedure at other temperatures, and the $\rho_{yx}^{\mathrm{T}}$ traces are shown in Fig.~5(b). Figure~5(c) plots the peak value of $\rho_{yx}^{\mathrm{T}}$ as well as $M_{\mathrm{in}}$, $M_{z}$, and $\rho_{yx}^{\mathrm{A}}$ as a function of temperature. $\rho_{yx}^{\mathrm{A}}$ appears at the ferromagnetic transition temperature ($T_{\mathrm{C}}=280$~K) and decreases monotonically with lowering temperature. By contrast, $\rho_{yx}^{\mathrm{T}}$ appears at 200~K, which is apparently lower than $T_{\mathrm{C}}$, and has a peak at 150~K. The temperature dependence of $\rho_{yx}^{\mathrm{T}}$ is related to the variation of the magnetic anisotropy. The temperature dependence of the magnetization indicates that $M_{\mathrm{in}}$ and $M_{z}$ become comparable at around 150~K, and $M_{\mathrm{in}}$ ($M_{z}$) dominates above (below) this temperature. Above 150~K, $\rho_{yx}^{\mathrm{T}}$ is absent because the magnetic easy axis lies in-plane ($Q < 1$) and skyrmions are not formed. As the system approaches the spin reorientation temperature, $\rho_{yx}^{\mathrm{T}}$ appears and reaches the maximum when the $Q\sim1$ state is realized. At this point, the skyrmion density is expected to be largest. At lower temperatures, the perpendicular magnetic anisotropy is further enhanced, which causes the increase of $Q$, leading to the reduction of the skyrmion density and accordingly of the magnitude of $\rho_{yx}^{\mathrm{T}}$.
Additional evidence for the skyrmion formation in the film with $y=0.05$ is found in the dependence of the THE on the magnetic field direction~\cite{Yokouchi2014}. Figure~6(a) shows that the THE vanishes with increasing tilt angle of the magnetic field ($\theta$), whereas the anomalous Hall effect remains almost constant. The reduction of $\rho_{yx}^{\mathrm{T}}$ under an inclined field is defined by $\Delta\rho_{yx}^{\mathrm{T}}(\theta)=\rho_{yx}(\theta)-\rho_{yx}(\theta=30^{\circ})$, and $\Delta\rho_{yx}^{\mathrm{T}}(\theta)/\Delta\rho_{yx}^{\mathrm{T}}(\theta=0^{\circ})$ is plotted as a function of $\theta$ in Fig.~6(b). $\rho_{yx}^{\mathrm{T}}$ decreases gradually and then drops rapidly at around $\theta = 5^{\circ}\sim10^{\circ}$.
The skyrmion diameter ($d$) is estimated as $d=t/\sin\theta_{\mathrm{s}}$ ($\theta_{\mathrm{s}}$ is the angle where THE disappears and $t$ is the film thickness)~\cite{Ohuchi2015}. The top axis of Fig.~6(b) shows the scale of $t/\sin\theta$. It indicates that the typical value of $d$ is about 200-300~nm.
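As a back-of-the-envelope check (our own), the estimate $d=t/\sin\theta_{\mathrm{s}}$ with $t=40$~nm, as quoted in the caption of Fig.~6, gives:
\begin{verbatim}
import math

t = 40.0  # film thickness used in the tilt analysis (nm)
for theta_deg in (5.0, 7.5, 10.0):
    d = t / math.sin(math.radians(theta_deg))
    print(theta_deg, round(d))   # ~459, ~306, ~230 nm
\end{verbatim}
Reading $\theta_{\mathrm{s}}$ at the upper end of the $5^{\circ}$--$10^{\circ}$ drop thus reproduces the quoted 200--300~nm.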
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=8cm]{Fig6_modified.eps}
\caption{(a) $\rho_{yx}$-$H$ curves for LSMRO ($y = 0.05$) film at 100~K measured in magnetic fields with various inclination angle ($\theta$). Data are plotted as a function of the perpendicular component of magnetic field ($\mu_{0}H\cos\theta$). The inset depicts the relation between $\theta$ and $H$. (b) The reduction of the $\rho_{yx}^{\mathrm{T}}$ under inclined field is defined by $\Delta\rho_{yx}^{\mathrm{T}}(\theta) = \rho_{yx}(\theta)-\rho_{yx}(\theta=30^{\circ})$, and $\Delta\rho_{yx}^{\mathrm{T}}(\theta)/\Delta\rho_{yx}^{\mathrm{T}}(\theta=0^{\circ})$ is plotted as a function of $\theta$. The top axis shows a scale of $t/\sin\theta$ ($t$ = 40~nm).}
\end{center}
\end{figure}
The size of the skyrmion can also be estimated from the magnitude of the THE. A single skyrmion gives an effective field of the magnetic flux quantum $\Phi_{0}=h/e$ to a conduction electron, where $h$ is Planck's constant and $e$ is the elementary charge. Therefore, $\rho_{yx}^{\mathrm{T}}$ and the skyrmion density $\Phi$ are related as $\rho_{yx}^{\mathrm{T}}=PR_{0}\Phi_{0}\Phi$~\cite{Neubauer2009}, where $R_{0}$ is the ordinary Hall coefficient and $P$ is the spin polarization. $P\sim 1$ in the metallic state of the perovskite manganites~\cite{Park1998}. We derive the effective field ($B_{\mathrm{eff}}=\Phi_{0}\Phi$) by comparing the temperature dependence of $\rho_{yx}^{\mathrm{T}}$ and $\rho_{yx}^{\mathrm{O}}$ shown in Fig.~5(d).
The density of skyrmion estimated from $B_{\mathrm{eff}}$ is about 2000~$\mu\mathrm{m}^{-2}$ at 100~K (1300~$\mu\mathrm{m}^{-2}$ at 10~K), which means that the effective diameter of skyrmion ($d_{\mathrm{eff}}$) is 24~nm (30~nm) assuming a close-packed hexagonal lattice. This value is several times smaller than that estimated from the angle dependence of THE. We shall discuss the possible origins of this discrepancy later.
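The conversion from the skyrmion density to $d_{\mathrm{eff}}$ can be reproduced as follows (our own sketch; the close-packed hexagonal lattice is the assumption stated above):
\begin{verbatim}
import math

PHI0 = 4.136e-15            # flux quantum h/e (Wb = T m^2)

def skyrmion_density(B_eff):
    """Skyrmions per m^2 from the effective field B_eff (T)."""
    return B_eff / PHI0

def d_eff_nm(n_per_um2):
    # hexagonal close packing: area per skyrmion = (sqrt(3)/2) d^2
    n_per_nm2 = n_per_um2 * 1e-6
    return math.sqrt(2.0 / (math.sqrt(3.0) * n_per_nm2))

print(d_eff_nm(2000))       # ~24 nm, density quoted at 100 K
print(d_eff_nm(1300))       # ~30 nm, density quoted at 10 K
\end{verbatim}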
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=13cm]{Fig7_modified.eps}
\caption{(a) $M_{z}$-$H$ and (b) $\rho_{yx}^{\mathrm{T}}$-$H$ curves for LSMRO($y = 0.05$) film measured at 10~K. Data points shown in red indicate the scan direction traced during the real-space observations. The green dots are data points at which real-space magnetic domain images shown in Figs. 7(c) are recorded. (c) Magnetic-force microscope (MFM) images taken at 10~K and (d) Lorentz transmission microscope (L-TEM) images taken at 100~K at various magnetic fields; $\mu_{0}H$ = 0~T (7(c)-1 and 7(d)-1), 50~mT (7(c)-2 and 7(d)-2), 70~mT (7(c)-3 and 7(d)-3), and 300~mT (7(c)-4 and 7(d)-4). (e)-(g) Result of the transport-of-intensity equation (TIE) analyses for L-TEM images. Figures~7(e), 7(f) are TIE analyses for domain-a, domain-b denoted in Fig.~7(d)-3, respectively. The domain-a is a single skyrmion with skyrmion number $N = 1$ and the domain-b is a biskyrmion with $N = 2$. Figure~7(g) is a L-TEM image taken at different area of the sample. Figure~7(h) is TIE analysis for domain-c denoted in Fig.~7(g), which indicates a chain of 4 skyrmions.}
\end{center}
\end{figure*}
\section{Real-Space Observations}
We now describe the real-space observation of spin textures using magnetic-force microscopy (MFM) and Lorentz transmission electron microscopy (L-TEM). These two techniques offer complementary information on the spin texture; MFM is sensitive to the out-of-plane magnetization component, whereas L-TEM can detect the in-plane component~\cite{Yu2010,Milde2013,Soumyanarayanan2016}.
MFM observation was performed using an Attocube low-temperature atomic-force platform (AttoAFMI). During the scan, we detected the resonant frequency shift of the cantilever, which originates from the interaction between the magnetization of the tip and the stray magnetic field from the film. The frequency shift is proportional to the second derivative of the local magnetic field with respect to the $z$ direction. We employed a Nanosensors PPP-MFMR cantilever. The magnetization direction of the tip was defined by applying $-$5~T in the $z$ direction at 10~K. Then, the magnetic field was turned back to zero and MFM images were recorded at positive magnetic fields. We did not observe the magnetization reversal of the tip up to +300~mT. The excitation amplitude of the cantilever was 5~nm. Before the MFM observation, we recorded the surface topography and stored the tilt of the sample. Then, the MFM images were taken in the constant-height (100~nm) mode. It was not necessary to correct the MFM data by the topography signal because the sample is atomically flat.
Figures~7(c) show the MFM images taken at four representative magnetic fields. The corresponding $H$ dependence of $M_{z}$ and $\rho_{yx}^{\mathrm{T}}$ are shown in Figs.~7(a) and 7(b). A more detailed $H$ dependence of the MFM image is displayed in Fig.~8. The MFM image at zero field shows a stripe domain (Fig.~7(c)-1). By applying a magnetic field, modulation patterns appear inside of the stripe domain as seen in the image at 50~mT (Fig.~7(c)-2). They are pinched off and become discrete domains as seen in the image at 70~mT (Fig.~7(c)-3). Near this magnetic field, $\rho_{yx}^{\mathrm{T}}$ reaches its maximum value. The domain structure finally disappears when $H$ exceeds the field value necessary to saturate $M_{z}$, as seen in the image at 300~mT (Fig.~7(c)-4), and accordingly, $\rho_{yx}^{\mathrm{T}}$ vanishes.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=6.5cm]{Fig8_modified.eps}
\caption{Magnetic-field dependence of MFM images for the LSMRO($y = 0.05$)/LSAT(001) film taken at 10~K. }
\end{center}
\end{figure}
To clarify the possible existence of a periodic structure in the skyrmion phase as well as in the stripe phase, an autocorrelation function analysis was performed for the MFM image at 70~mT as displayed in Fig.~9. The average distance between adjacent skyrmions derived from the autocorrelation function is 300~nm (390~nm) along the red (blue) line (Fig.~9(c)). The skyrmion diameters estimated from the MFM image spread between 90 and 200~nm, and the average size is about 150~nm.
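The autocorrelation maps in Fig.~9(b) can be obtained from the raw MFM images with a standard FFT-based routine. A minimal sketch (ours, using the Wiener--Khinchin relation; periodic boundary conditions are implicitly assumed, and zero-padding would avoid wrap-around artifacts):
\begin{verbatim}
import numpy as np

def autocorrelation_2d(img):
    """Normalized 2D autocorrelation of an MFM image (2D array)."""
    f = np.fft.fft2(img - img.mean())      # remove the mean first
    ac = np.fft.ifft2(f * np.conj(f)).real
    ac /= ac[0, 0]                         # zero-lag value -> 1
    return np.fft.fftshift(ac)             # zero lag at the center

# The distance from the central peak to the first satellite peak along
# a given direction gives the average skyrmion spacing (300 nm and
# 390 nm along the two lines in Fig. 9(c)).
\end{verbatim}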
L-TEM observation was performed in the Lorentz TEM mode of a conventional transmission electron microscope (JEM-2100F, JEOL).
The sample for L-TEM observation was prepared by thinning the substrate of the film.
The substrate was first thinned by mechanical polishing, and then a part of the substrate was further thinned by Ar ion milling at a low temperature.
Since the compressive epitaxial strain from the LSAT substrate is important to induce the perpendicular magnetic anisotropy, the ion milling was stopped leaving a substrate thickness of about 200~nm. The magnetic field was controlled by changing the objective lens current. The magnetization textures were obtained by analyzing defocused L-TEM images with the software Qpt, based on the transport-of-intensity equation (TIE)~\cite{Ishizuka2005}.
The L-TEM images shown in Figs.~7(d) exhibit a similar evolution of the domain structure in the magnetic field as observed with MFM; stripe domain at zero field, isolated circular domain at 70~mT, and single domain state at 300~mT. The typical diameter of the isolated domain is about 100~nm. The size and density of magnetic textures observed by L-TEM are slightly different from those observed by MFM, probably caused by the partial strain relaxation in L-TEM sample which arose during the thinning process of the substrate. Another reason can be a different measurement temperature. The TIE analysis for a discrete domain denoted by domain-a in Fig.~7(d)-3 indicates that at the center the magnetic moments are pointing normal to the film surface and the outside moments have a swirling structure (Fig.~7(e)). This is a characteristic spin texture of the skyrmion with a skyrmion number $N = 1$. Furthermore, the TIE analysis for another discrete domain denoted by domain-b reveals a bound state of two skyrmions with opposite helicities, namely the biskyrmion state (Fig.~7(f))~\cite{Yu2013,Wang2016}, which has $N = 2$ and therefore contributes doubly to the THE. We also found a domain in which four skyrmions are bound and possibly characterized by $N = 4$ as shown in Figs.~7(g) and 7(h).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7cm]{Fig9_modified.eps}
\caption{(a) MFM images for the LSMRO($y = 0.05$)/LSAT(001) film taken at three different magnetic fields. (b) Two-dimensional autocorrelation function for the MFM images shown in Fig.~9(a). (c) Line profiles of the autocorrelation function along the lines in Fig.~9(b).}
\end{center}
\end{figure}
The diameter of the skyrmion ($d$) observed by MFM and L-TEM is in the range of 100-200~nm, which is consistent with that estimated from the magnetic-field angle dependence of the THE, but $d_{\mathrm{eff}}$ estimated from the magnitude of the THE is much smaller, in the range of 20-30~nm. This discrepancy in the skyrmion size indicates that the THE is enhanced several times for some reason. One apparent reason is the existence of skyrmions having multiple topological charges, as revealed by the L-TEM images. Another possible reason is the THE in the momentum space, as observed in frustrated and disordered ferromagnets~\cite{Taguchi2001,Lyanda-Geller2001}. In the latter case, the Hall effect is usually observed only near $T_{\mathrm{C}}$ in perovskite manganites, associated with a thermally-driven hedgehog spin configuration~\cite{Lyanda-Geller2001}. In our film with $Q\sim1$, the solid angle of the nearest-neighbor local moments can remain finite even at low temperatures due to a keen competition between the exchange and the dipolar interactions, which may induce the non-trivial Hall effect.
\section{Conclusions}
In conclusion, we controlled the magnetic anisotropy in thin films of a half-metallic perovskite manganite.
We found the emergence of a large THE in the film in which the perpendicular magnetic anisotropy is balanced with the magnetic dipolar interaction. MFM and L-TEM observations revealed the existence of skyrmionic bubbles a few hundred nanometers in size in the film.
We find a several-fold enhancement in the magnitude of the THE compared with that expected from the real-space observations,
indicating a possibility of other mechanisms enhancing the THE in perovskite manganites.
\begin{acknowledgments}
We appreciate D. Shindo (Tohoku Univ. and RIKEN), T. Akashi (Hitachi Ltd.), and T. Tanigaki (Hitachi Ltd.) for their support in the L-TEM observation. We also appreciate W. Koshibae (RIKEN) for fruitful discussions and D. Maryenko (RIKEN) for critical reading of our manuscript. This work was partially supported by PRESTO JST (JPMJPR16R5).
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
The measurement of the top quark-antiquark pair (\ttbar) cross section provides a test of the hadro\-production
of top quark pairs as predicted by quantum chromodynamics (QCD). At the CERN LHC, measurements have been performed
in many different decay channels and at three different proton-proton collision energies~\cite{CMStt13,CMStt12,CMStt11,
CMStt10,CMStt9,CMStt8,CMStt6,CMStt5,CMStt4,CMStt3,CMStt2,top15003,topPAS13004,ATLAStt14,ATLAStt13,ATLAStt12,ATLAStt11,
ATLAStt10,ATLAStt9,ATLAStt8,ATLAStt6,ATLAStt3,ATLAStt1, ATLAStt13TeV2}. Precision measurements of these cross sections
allow for a test of their energy dependence as predicted by QCD; they can also place constraints on the parton distribution
functions (PDFs)~\cite{Czakon:2013tha}. In combination with theoretical predictions, they also provide unambiguous
measurements of interesting quantities, such as the top quark pole mass~\cite{ATLAStt6, topPAS13004}, which is difficult to
determine by other means. A detailed understanding of the production cross section is also required in searches for evidence
of new physics beyond the standard model, as \ttbar production is often the dominant background process. This is especially
important if the signature for the new physics is similar to that of \ttbar production~\cite{Aad:2015pfx, topPAS13004}. This
paper presents a measurement of the \ttbar production cross section ($\sigma_{\ttbar}$) in the \empm decay channel using
an event-counting method, based on observed yields. The analysis closely follows Ref.~\cite{top15003}, and uses the full data set recorded
by CMS at 13\TeV during 2015, which corresponds to an integrated luminosity of 2.2\fbinv. This represents a factor of 50 increase in the amount of data over the original analysis and
allows for more detailed studies of the experimental and theory uncertainties.
\section{The CMS detector and Monte Carlo simulation}
\label{sec:detector}
The CMS detector~\cite{Chatrchyan:2008zzk} has a superconducting solenoid
in its central region that provides an axial magnetic field
of 3.8\unit{T}. The silicon pixel and strip trackers cover
$0 < \phi <2\pi$ in azimuth and $\abs{\eta}<2.5$ in pseudorapidity.
The lead tungstate crystal electromagnetic calorimeter, and the brass and scintillator
hadron calorimeter are located inside the solenoid. These are used to identify electrons,
photons and jets. Muons are measured in gas-ionization detectors embedded in the steel
flux-return yoke outside the solenoid. The detector is nearly hermetic, providing reliable
measurement of the momentum imbalance in the plane transverse to the beams. A two-level
trigger system selects the most interesting $\Pp\Pp$ collisions for offline analysis.
A more detailed description of the CMS detector, together with a definition
of the coordinate system used and the relevant kinematic variables, can be
found in Ref.~\cite{Chatrchyan:2008zzk}.
{\tolerance=800
Different Monte Carlo (MC) event generators are used to simulate signal and background
events. The next-to-leading-order (NLO) \POWHEG~(v2)~\cite{powheg,powheg2} generator is used for
\ttbar events, with the top quark mass ($m_{\cPqt}$) set to 172.5\GeV. The NNPDF3.0
NLO~\cite{nnpdf} PDFs are used.
For the reference \ttbar sample, the events are interfaced
with \PYTHIA~(v8.205) \cite{Sjostrand:2006za,Sjostrand:2014zea}
with the CUETP8M1 tune~\cite{CMS-PAS-GEN-14-001, Skands:2014pea}
to simulate parton showering, hadronization, and the underlying
event. Additional samples are produced by showering the events in the reference sample
with \HERWIGpp ~(v2.7.1)~\cite{herwigpp} or by generating events using
\amcatnlo~(v5\_2.2.2)~\cite{Alwall:2014hca} interfaced with \madspin ~\cite{madspin} to account for
spin correlations in the decays of the top quarks, and using
\PYTHIA for parton showering and hadronization.
\par}
The \amcatnlo\ generator is also used
to simulate \PW+jets events and Drell--Yan (DY) quark-antiquark
annihilation into lepton-antilepton pairs through a virtual photon
or a Z boson exchange; for these backgrounds the event yields are estimated from data.
Single top quark events are simulated using
\POWHEG~(v1)~\cite{powheg1,powheg3} and \PYTHIA, and the event yields are normalized to
the approximate next-to-next-to-leading order (NNLO) cross
sections from Ref.~\cite{Kidonakis:2013zqa}. The diagram removal approach~\cite{Frixione:2008yi} is used to handle the interference between the \ttbar and tW final states starting at NLO. The contributions
from \PW\PW, \PW\cPZ, and \cPZ\cPZ\ (referred to as ``VV'') processes
are simulated with \PYTHIA, and the event rates are normalized to the NLO cross sections from Ref.~\cite{mcfm}.
Other contributions from \PW\ and \cPZ\ boson production in association with \ttbar events
(referred to as ``$\ttbar$V'') are simulated using \amcatnlo~and \PYTHIA.
The simulated samples include additional interactions per bunch crossing (pileup), with the distribution matching that observed in data,
with an average of about 11 collisions per bunch crossing.
{\tolerance=1200
The SM prediction for $\sigma_{\ttbar}$
at 13\TeV is $832^{+20}_{-29}\,\text{(scales)}\pm 35\,$(PDF$+\alpha_s)\unit{pb}$
for $m_{\cPqt}=172.5\GeV$, as calculated with the \textsc{Top++} program~\cite{top++} at
NNLO in perturbative QCD, including
soft-gluon resummation at next-to-next-to-leading-log
order~\cite{mitov}.
The first uncertainty reflects uncertainties in the
factorization ($\mu_\mathrm{F}$) and renormalization ($\mu_\mathrm{R}$) scales.
The second one is associated with possible choices of PDFs and the value of the strong coupling constant,
following the PDF4LHC prescriptions~\cite{pdf4lhcReport, pdf4lhcInterim}, using the MSTW2008 68\% confidence level
NNLO~\cite{Martin:2009iq,mstw08}, CT10 NNLO~\cite{Lai:2010vv,pdfsets},
and NNPDF2.3 5f FFN~\cite{Ball:2012cx} PDF sets.
The expected event yields for signal in all figures and tables are normalized to this cross section.
\par}
\section{Event selection}
\label{sec:event_selection_yields}
In the SM, top quarks in $\Pp\Pp$ collisions are mostly produced as \ttbar pairs, where
each top quark decays predominantly to a \PW\ boson and a bottom quark.
In \ttbar events where both \PW\ bosons decay leptonically, the final state contains two leptons
of opposite electric charge and at least two jets coming from the hadronization of the bottom quarks.
At the trigger level, a combination of the single lepton and dilepton triggers is used. Events are
required to contain either one electron with transverse momentum $\pt > 12\GeV$
and one muon with $\pt > 17\GeV$ or one electron with $\pt> 17\GeV$ and one muon with $\pt > 8\GeV$.
In addition, single-lepton triggers with one electron (muon) with $\pt > 23\,(20)\GeV$ are used in order to increase the efficiency.
The efficiency for the combination of the single lepton and dilepton triggers is measured in data using
triggers based on \pt imbalance in the event. The trigger efficiency is measured
to be \ensuremath{0.99 \pm 0.01}\xspace\ (combined statistical and systematic uncertainties) when the selection on the leptons described below is applied. The
trigger efficiency in simulation is corrected using a multiplicative data-to-simulation scale
factor (SF), derived from the trigger efficiency measured in data with independent monitoring triggers.
The particle-flow (PF) event algorithm~\cite{CMS:2009nxa, CMS:2010byl} reconstructs and identifies each individual particle
with an optimized combination of information from the various elements of the CMS detector. Selected dilepton events are required to contain one isolated electron~\cite{Khachatryan:2015hwa} and one isolated
muon~\cite{Chatrchyan:2012xi} with opposite electric charge and $\pt > 20\GeV$ and $\abs{\eta} < 2.4$.
Isolation requirements are based on
the scalar sum of the transverse momenta of all PF candidates reconstructed inside a
cone centered on the lepton, excluding the contribution from the lepton candidate itself.
This isolation variable is required to be smaller than 7\%\,(15\%) of the electron (muon) \pt.
In events with more than one pair of leptons passing the selection, the two
opposite-sign different-flavour leptons with the largest \pt are selected for further study.
Events with \PW\ bosons decaying into \Pgt\ leptons contribute to the measurement
only if the \Pgt\ leptons decay into electrons or muons that
satisfy the selection requirements.
The efficiency of the lepton selection is measured using a ``tag-and-probe''~\cite{Khachatryan:2010xn} method
in a sample of same-flavour dilepton events, which is enriched in Z boson candidates.
The measured \pt- and $\eta$-dependent values for the combined identification and isolation
efficiencies average to about 80\%\ for electrons and 90\%\ for muons.
To account for the difference in efficiencies determined using data and simulation, the event yield in simulation is corrected using
\pt- and $\eta$-dependent SFs based on a comparison of lepton selection efficiencies
in data and simulation. These have an average of 0.99\ for electrons and 0.98\ for muons.
In order to suppress backgrounds from DY production of $\tau$ lepton pairs
with low invariant dilepton mass, \ttbar candidate events are further required to have
a dilepton pair of invariant mass $m_{\Pe\PGm} > 20\GeV$.
Jets are reconstructed from the PF particle candidates using the anti-$k_t$ clustering algorithm~\cite{Cacciari:2008gp, Cacciari:2011ma} with a distance
parameter of 0.4. The jet momentum is determined from the vectorial sum of all particle momenta in the jet, and is found from simulation to be within 5 to 10\% of the true momentum over the whole \pt spectrum and detector acceptance. An offset correction is applied to jet energies to take into account the contribution from additional proton-proton interactions within the same or nearby bunch crossings. Jet energy corrections are derived from simulation, confirmed with in situ measurements of the energy balance in dijet and photon + jet events, and
are applied as a function of the jet
\pt and $\eta$~\cite{Khachatryan:2016kdb} to both data and simulated events. The \ttbar candidate events
are required to have at least
two reconstructed jets with $\pt> 30\GeV$ and $\abs{\eta}< 2.4$.
Since \ttbar events decay into final states containing a bottom quark-antiquark pair,
requiring the presence of jets identified as originating
from \cPqb\ quarks (``\cPqb\ jets'') reduces backgrounds from DY and \PW+jets production.
Jets are identified as \cPqb\ jets using the combined secondary vertex
algorithm~\cite{Chatrchyan:2012jua, BTV15001}, with an operating point that yields an identification efficiency of 67\% and a misidentification (mistag)
probability of about 1\% and 15\%~\cite{BTV15001} for light-flavour jets (\cPqu, \cPqd, \cPqs,
and gluons) and \cPqc\ jets, respectively. The selection requires the presence of at least one \cPqb\ jet in the event.
\section{Background determination}
\label{sec:bkgd}
Background events arise primarily from single top quark, DY, and VV events
in which at least two prompt leptons are produced by Z or \PW\ boson decays. The single
top quark and VV contributions are estimated from simulation.
The DY event yield is estimated from data using the ``$R_\text{out/in}$''
method~\cite{CMStt8,CMStt12,CMStt13}, where events with same-flavour leptons
are used to normalize the yield of \empm\ pairs from DY production of $\tau$ lepton pairs. A data-to-simulation normalization
factor is estimated from the number of events in data
within a 15\GeV window around the Z boson mass
and extrapolated to the number of events outside the Z mass window
with corrections applied using control regions enriched in DY events
in data. The SF is found to be $0.95 \pm 0.05$ (statistical uncertainty) after applying the final event selection.
Other background sources, such as \ttbar or \PW+jets events in the lepton+jets final state,
can contaminate the signal sample if a jet is incorrectly
reconstructed as a lepton, or the lepton is incorrectly identified as being isolated. This is
more important for electrons. For muons, the dominant contribution comes from the semileptonic
decay of bottom or charm quarks. These events are grouped into the nonprompt-lepton category (``non-\PW/Z
leptons''), since prompt leptons are defined as originating from decays of \PW\ or Z
bosons; this category also includes contributions that can arise, for example, from
decays of mesons or photon conversions.
The contribution of non-\PW/Z lepton events is estimated from a control region
of same-sign (SS) events and propagated to the opposite-sign (OS) signal region.
The SS control region is defined using the same criteria as the
nominal signal region, except for requiring \Pe\Pgm\ pairs with the same electric charge.
The SS dilepton events are predominantly events containing misidentified leptons.
Other SM processes
produce prompt SS or charge-misidentified dilepton events with significantly smaller rates;
these are estimated using simulation and subtracted from the observed number of
events in data.
The scaling from the SS control region in data to the signal region is
performed through the ratio of the numbers of OS to SS events with misidentified leptons in simulation.
This ratio is calculated using simulated \ttbar and \PW+jets samples, which are rich in
nonprompt dilepton events,
and is measured to be $1.4 \pm 0.1\stat$.
In data, 152 SS events are observed, with a contribution of $79.8\pm 1.9\stat$
prompt lepton SS events as evaluated from simulation. In total $104 \pm 8\,\text{(stat + syst)}$ events with misidentified leptons
contaminating the signal region are predicted. This agrees within the uncertainties with predictions from the
simulation.
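With the rounded numbers quoted above, this estimate can be reproduced schematically as
\begin{equation*}
\left(152-79.8\right)\times 1.4\approx 101~\mbox{events},
\end{equation*}
where the small difference with respect to the quoted $104 \pm 8$ events reflects the rounding of the inputs.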
Figure~\ref{fig:dilepton} shows the multiplicity of jets
for events passing the dilepton criteria. The MC simulation does not describe the data well
for events with $\geq$4 jets, the region in which parton shower effects are expected
to dominate the prediction.
After requiring at least two jets, Fig.~\ref{fig:leptons} shows the \pt and
$\abs{\eta}$ distributions of the selected leptons, and
Fig.~\ref{fig:jets} shows the \pt (a, c) and
$\abs{\eta}$ (b, d) distributions of the two most energetic jets; Fig.~\ref{fig:jets} (e) shows the
scalar sum of the transverse momenta of all jets ($H_\mathrm{T}$) and Fig.~\ref{fig:jets} (f) the \cPqb\ jet multiplicity.
Good agreement between data and the predictions for signal and background is observed.
\begin{figure}[htbp!]
\centering
{\includegraphics[width=0.49\textwidth]{Figure_001.pdf}}
\caption{Distribution of the jet multiplicity
in events passing the dilepton selection criteria.
The expected distributions for \ttbar signal and individual backgrounds are shown
after corrections based on control regions in data are applied; the
last bin contains the overflow events.
The ratio of data to the sum of the expected yields is given at the bottom of
the figure. The error bars, which are within the size of the points, indicate the statistical uncertainties.
}
\label{fig:dilepton}
\end{figure}
\begin{figure*}[htbp!]
\centering
{\includegraphics[width=0.49\textwidth]{Figure_002-a.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_002-b.pdf}}\\
\vspace{0.1cm}
{\includegraphics[width=0.49\textwidth]{Figure_002-c.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_002-d.pdf}}
\caption{The distributions of (a) \pt and (b) $\abs{\eta}$
of the electron, and (c) \pt and (d) $\abs{\eta}$ of the muon after the selection of jets and before
the \cPqb\ jet requirement. The expected distributions for \ttbar signal and individual
backgrounds are shown after corrections based on control regions in data are applied; for
the left plots (a, c) the last bin contains the overflow events.
The ratios of data to the sum of the expected yields are given at the bottom of each
panel. The error bars indicate the statistical uncertainties.
}
\label{fig:leptons}
\end{figure*}
\begin{figure*}[htbp!]
\centering
{\includegraphics[width=0.49\textwidth]{Figure_003-a.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_003-b.pdf}}\\
{\includegraphics[width=0.49\textwidth]{Figure_003-c.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_003-d.pdf}}\\
{\includegraphics[width=0.49\textwidth]{Figure_003-e.pdf}}
{\includegraphics[width=0.49\textwidth]{Figure_003-f.pdf}}
\caption{The distributions of (a) \pt and (b) $\abs{\eta}$
for the leading jet, (c) \pt and (d) $\abs{\eta}$ for the
sub-leading jet, (e) $H_\mathrm{T}$, and (f) \cPqb\ jet multiplicity after the jet selection
and before the \cPqb\ jet requirement.
The expected distributions for \ttbar signal and individual
backgrounds are shown after corrections based on control regions in data are applied; in each plot the
last bin contains the overflow events.
The ratios of data to the sum of the expected yields are given at the bottom of each panel.
The error bars indicate the statistical uncertainties.
}
\label{fig:jets}
\end{figure*}
\section{Sources of systematic uncertainty}
\label{sec:systematics}
Table~\ref{tab:breakdown_comb} summarizes the statistical uncertainty and the different
sources of systematic uncertainties in the measured \ttbar production cross section.
\begin{table}[!htb]
\centering
\topcaption{Summary of the individual contributions to the uncertainty in
the $\sigma_{\ttbar}$ measurement.
The first and second columns correspond to the absolute and relative uncertainties, respectively.
The total uncertainty in the result, calculated as the quadratic sum of the individual components, is also given.}
\begin{tabular}{lcc}
\hline
Source & $\Delta\sigma_{\ttbar}$ (pb) & $\Delta\sigma_{\ttbar} / \sigma_{\ttbar}$ (\%) \\
\hline
\multicolumn{3}{c}{Experimental}\\
\hline
Trigger efficiencies & 9.9 & 1.2 \\
Lepton efficiencies & 18.9 & 2.3 \\
Lepton energy scale & $<$1 & $\leq$0.1\\
Jet energy scale & 17.4 & 2.1 \\
Jet energy resolution & 0.8 & 0.1 \\
b tagging & 11.0 & 1.3 \\
Mistagging & $<$1 & $\leq$0.1\\
Pileup & 1.5 & 0.2 \\
\hline
\multicolumn{3}{c}{Modeling}\\
\hline
$\mu_\mathrm{F}$ and $\mu_\mathrm{R}$ scales & $<$1 & $\leq$0.1 \\
\ttbar NLO generator & 17.3 & 2.1 \\
\ttbar hadronization & 6.0 & 0.7 \\
Parton shower scale & 6.5 & 0.8 \\
PDF & 4.9 & 0.6 \\
\hline
\multicolumn{3}{c}{Background}\\
\hline
Single top quark & 11.8 & 1.5 \\
VV & $<$1 & $\leq$0.1\\
Drell--Yan & $<$1 & $\leq$0.1 \\
Non-\PW/Z leptons & 2.6 & 0.3 \\
$\ttbar$V & $<$1 & $\leq$0.1 \\
\hline
Total systematic & \multirow{2}{*}{37.8} & \multirow{2}{*}{4.6} \\
(no integrated luminosity) & & \\
Integrated luminosity & 18.8 & 2.3 \\
Statistical & 8.5 & 1.0 \\
\hline
Total & 43.0 & 5.3 \\
\hline
\end{tabular}
\label{tab:breakdown_comb}
\end{table}
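The total uncertainty in the last row of Table~\ref{tab:breakdown_comb} corresponds to the sum in quadrature of the systematic, integrated luminosity, and statistical components:
\begin{equation*}
\sqrt{(37.8)^2+(18.8)^2+(8.5)^2}\unit{pb}\approx 43.0\unit{pb}.
\end{equation*}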
The uncertainty in the trigger efficiency SF applied to simulation to correct for
differences with respect to data is 1.1\%. The uncertainty in the SF applied to
correct the electron (muon) identification efficiency is found to be about 1.8\%\,(1.5\%),
with some dependence on the lepton \pt and $\eta$.
The modeling of lepton energy scales was studied using $\cPZ
\to \Pe\Pe / \PGm\PGm$
events in data and simulation, resulting in an uncertainty for the
electron (muon) energy scale of 1.0\,(0.5)\%.
These values are used to obtain the effect on the signal acceptance, which is taken
as a systematic uncertainty.
The impact of uncertainties in jet energy scale (JES) and jet energy
resolution (JER) is estimated from the change observed in the number
of simulated \ttbar events selected after changing the jet momenta within
the JES uncertainties, and for JER by an $\abs{\eta}$-dependent variation
of the JER scale factors within their uncertainties.
The uncertainties resulting from the \cPqb\ tagging efficiency and misidentification rate
are determined by varying the \cPqb\ tagging SF of the \cPqb\ jets and the light-flavour jets, respectively.
These uncertainties depend on the \pt and $\eta$ of the jet and amount to approximately 2\% for \cPqb\ jets and 10\% for
mistagged jets~\cite{BTV15001} in \ttbar signal events. They are propagated to the \ttbar selection
efficiency using simulated events.
The uncertainty assigned to the number of pileup events in simulation is obtained by changing the
inelastic proton-proton cross section, which is used to estimate the pileup in data, by
$\pm$5\%~\cite{Aaboud:2016mmw}.
The systematic uncertainty related to the missing higher-order diagrams
in \POWHEG is estimated as follows: the uncertainty in the signal acceptance
is determined by changing the $\mu_\mathrm{F}$ and $\mu_\mathrm{R}$ scales in
\POWHEG independently up and down by a factor of two,
with the uncertainty taken as the maximum observed difference.
The predictions of the NLO generators \POWHEG and \amcatnlo\ for
\ttbar production are compared, where both use \PYTHIA for hadronization, fragmentation, and
additional radiation description. The difference in the signal acceptance between the two is taken as an
uncertainty.
The uncertainty arising from the hadronization model
mainly affects the JES and the fragmentation
of \cPqb\ quark jets.
The uncertainty in the JES already
contains a contribution from the uncertainty in the hadronization.
In addition, we determine a related uncertainty by comparing
samples
of events generated with \POWHEG, where the hadronization is modeled
with \PYTHIA or \HERWIGpp. In what follows we refer to this difference
as the hadronization uncertainty.
The impact of the choice of the parton shower scale is studied by changing the scale of the parton
shower (initial and final state radiation) by factors of 2 and 1/2 from its default value. The maximum variation with
respect to the central value of the signal acceptance at particle level~\cite{Khachatryan:2015oqa} for the fiducial volume
of the analysis is taken as an uncertainty.
The uncertainty from the choice of PDF
is determined by reweighting the sample of simulated
\ttbar events according to the replicas of the NNPDF3.0 PDF set~\cite{nnpdf}. The
root-mean-square of the resulting distribution of acceptances is taken as an uncertainty.
Based on recent measurements of the production cross section for single top
quark~\cite{CMStopPublicationstop4, Chatrchyan:2014tua, Chatrchyan:2012zca} and
VV~\cite{CMSWWZZPublication8, CMSWWPublication7, CMSWWHiggsPublication7,
Khachatryan:2016txa, Khachatryan:2016tgp, ATLASWWPublication,ATLASWZPublication,ATLASZZPublication}
we use an uncertainty of 30\% for these background processes.
For DY production, an uncertainty of 15\%, which covers the variation of the SF at different levels of the selection, is assumed.
A 30\% systematic uncertainty is estimated for the non-\PW/Z lepton background
derived from the uncertainty in the ratio of the numbers of OS to SS events with misidentified leptons in the MC simulation.
The uncertainty in the integrated luminosity is 2.3\%~\cite{lumiPAS15}.
\section{Results}
\label{sec:measure}
The \ttbar production cross section is measured by counting events and applying
the expression
\begin{equation*}
\sigma_{\ttbar} = \frac{N - N_\mathrm{B}}{\mathcal{A} \, {\mathcal{L}}},
\label{eqn:xsecA}
\end{equation*}
where $N$ is the total number of dilepton events observed in data, $N_\mathrm{B}$ is the
number of estimated background events, $\mathcal{A}$ is the product of the mean acceptance,
the selection efficiency, and the
branching fraction into the \empm final state,
and $\mathcal{L}$ is the integrated luminosity.
Table~\ref{tab:yields} shows
the total number of events observed in data together with the total number of signal
and background events determined from simulation or estimated from
data. The value of $\mathcal{A}$, determined from simulation assuming $m_{\cPqt}= 172.5\GeV$,
is $(0.55\pm 0.03)\%$, including statistical and systematic uncertainties.
The measured cross section is
\begin{equation*}
\ensuremath{\sigma_{\ttbar} = 815 \pm 9\stat \pm 38\syst \pm 19\lum\unit{pb}}\xspace,
\end{equation*}
for a top quark mass of 172.5\GeV.
\begin{table}[!htb]
\centering
\topcaption{Number of dilepton events obtained after applying the full selection.
The results are given for the individual sources of background, \ttbar signal
with a top quark mass of 172.5\GeV and \ensuremath{\sigma_{\ttbar} = 832^{+40}_{-46}\unit{pb}}\xspace, and data. The
first and second uncertainties correspond to the statistical and systematic components, respectively.
}
\begin{tabular}{lc}
\hline
& Number of \\
Source & \empm\ events \\
\hline
Drell--Yan & 46 $\pm$ 5 $\pm$ 7\phantom{0} \\
Non-\PW/Z leptons & 104 $\pm$ 8 $\pm$ 31\phantom{0} \\
Single top quark & 452 $\pm$ 6 $\pm$ 141 \\
VV & 14 $\pm$ 2 $\pm$ 5\phantom{0} \\
$\ttbar$V & 30 $\pm$ 1 $\pm$ 9\phantom{0} \\ \hline
Total background & 646 $\pm$ 11 $\pm$ 145 \\ \hline
\ttbar signal & 9\,921 $\pm$ 14 $\pm$ 436\phantom{0} \\ \hline
Data & 10368\phantom{0} \\
\hline
\end{tabular}
\label{tab:yields}
\end{table}
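Inserting the event yields of Table~\ref{tab:yields} in the expression above illustrates the counting measurement: with $N=10368$, $N_\mathrm{B}=646$, $\mathcal{A}=0.55\%$, and $\mathcal{L}=2.2\fbinv$, one obtains
\begin{equation*}
\sigma_{\ttbar}\simeq\frac{10368-646}{0.0055\times 2200\unit{pb}^{-1}}\approx 803\unit{pb},
\end{equation*}
where the small difference with respect to the quoted central value of 815\unit{pb} reflects the rounding of $\mathcal{A}$.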
As a cross-check, analogous measurements have been performed using independent data samples with
same-flavour leptons in the final state.
The results obtained in the \eepm\ and \mmpm\ channels are consistent with the result in the \empm\ channel. Given their
larger uncertainties, the results are not combined with the main one in the \empm\ channel.
The measured fiducial cross section for \ttbar production with two
leptons (one electron and one muon) in the
range $\pt > 20\GeV$ and $\abs{\eta} < 2.4$, at least two jets with $\pt > 30\GeV$ and
$\abs{\eta} < 2.4$, and at least one b jet is $\sigma^\text{fid}_{\ttbar} = 12.4 \pm 0.1\stat \pm 0.5\syst\pm 0.3\lum\unit{pb}$.
The acceptance is determined in the range $m_{\cPqt}=166.5$--178.5\GeV and is parameterized as a linear function of $m_{\cPqt}$.
The cross section varies by 3.7\unit{pb} when the top quark mass is changed by 0.5\GeV.
\section{Summary}
\label{sec:conclusions}
A measurement of the \ttbar production cross section in proton-proton collisions
at $\sqrt{s} =13\TeV$ is presented for events containing an oppositely charged electron-muon pair,
and two or more jets, of which at least one is tagged as originating from a \cPqb\ quark.
The measurement is performed through an event-counting method based on a data
sample corresponding to an integrated luminosity of 2.2\fbinv.
The measured cross section is
\begin{equation*}
\ensuremath{\sigma_{\ttbar} = 815 \pm 9\stat \pm 38\syst \pm 19\lum\unit{pb}}\xspace ,
\end{equation*}
with a total relative uncertainty of 5.3\%.
The measurement, which supersedes the one in Ref.~\cite{top15003}, is consistent with recent measurements from the ATLAS~\cite{ATLAStt13TeV2} and CMS~\cite{top15003}
experiments and with the standard model prediction of \ensuremath{\sigma_{\ttbar} = 832^{+40}_{-46}\unit{pb}}\xspace\ for
a top quark mass of 172.5\GeV.
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
\end{acknowledgments}
In the context of higher dimensional field theories, topological defects have been used to explain localization
of matter on four-dimensional branes. If one considers a scalar field defined on a non trivial vacuum, with
the shape of a kink centered on the brane, it is well known that massless chiral fermions coupled to the scalar field
are localized on the corresponding brane \cite{local}.
In addition, this localization procedure can be used to define chiral fermions on the lattice, by
using a kink-like mass term in the extra dimension \cite{lattice1}.
Similar non trivial topological effects, including sphalerons in real time simulations,
also lead to the description of chiral fermions on the lattice \cite{latticefarakos}.
Analytical arguments for this localization process are given at the classical level,
and we consider here the quantization of
scalar fluctuations above the kink, in order to exhibit non-perturbative properties. This quantization
is stable in 1+1 dimensions only, if we consider the scalar field alone \cite{jackiw}, which we will
do here, since the corresponding toy model exhibits the main features we are interested in.
In order to explain more specifically our motivations, though, we
set up, in Appendix B, the first steps of the generalization to a d+1 dimensional Yukawa model, where fermions
interact with scalar fluctuations above the kink.
The present treatment does not take into account the collective coordinate corresponding to the
translation invariance of the kink \cite{jackiw}, since, in the spirit of the above mentioned
fermion localization problem, we consider here quantum fluctuations above one specific kink only,
centered on $z=0$, and we do not quantize the whole scalar theory, which contains a degenerate
family of kinks. This is done in different papers \cite{kink1+1}, using canonical quantization.
In more than 1+1 dimensions, the stability of quantum fluctuations necessitates the presence of a
field other than the scalar, and, in this context, the
BRST quantization of the non-linear O(3) model was studied in \cite{kink2+1}.
The method we use here is an alternative to exact Wilsonian renormalization \cite{Wilson}, where, instead of
having fixed bare parameters and a running cut off, we keep a fixed cut off and consider a running bare mass,
in the spirit of ``functional Callan--Symanzik equations'' \cite{initial}.
The non-perturbative feature of this method, together with the absence of a
running cut off, led to the derivation of a cut-off-independent dynamical mass generated in the
framework of a Kaluza-Klein model \cite{kaluzaklein}. This method was also used
for the description of time-dependent bosonic string actions \cite{string}, where a world sheet cut
off needed to be avoided, and where this alternative approach leads to new results,
by studying the evolution of the quantum theory with the amplitude of the string tension.
In the present work, the cut off which regulates the evolution equation for the effective theory
will not appear in the evolution of the dressed parameters, and the
logarithmic divergences expected in 1+1 dimensions are absent from our flows in the bare mass. Indeed,
these flows are obtained after a differentiation
with respect to the bare mass, which is equivalent to inserting an additional propagator
in the graphs and therefore has the effect of reducing their degree of divergence.
The physical interpretation of the evolution of the quantum theory with a bare mass
is to control the amplitude of quantum fluctuations: when the bare mass is large,
quantum fluctuations are frozen and the system is almost classical. As the bare mass
decreases, quantum fluctuations gradually appear in the system, which therefore becomes dressed.
A review can be found in \cite{review}.
From a technical point of view, this method can be seen as a tool, used to investigate properties of the quantum theory:
the evolution in the bare parameter leads to a functional partial differential equation, which is then
split into a series of differential equations, involving the dressed parameters
which describe the effective theory. The integration of these non-perturbative differential equations leads
to the effective theory, which exhibits the quantum properties of the system.
\vspace{0.5cm}
Section 2 describes the scalar model we study here, and shows the derivation of the evolution equation for the
quantum theory with the bare mass of the quantum field which
fluctuates above a classical background kink.
The evolution equation we arrive at technically looks like an exact Wilsonian renormalization
equation, but is actually very different in essence, as explained above.
Section 3 derives the evolution of the dressed parameters defining the quantum system, and
discusses different properties of the quantum theory. We show there that no odd power of the
classical field is present in the effective action, whereas a cubic interaction is present in the bare action.
We also compare our results to one-loop predictions, and give new relations on the dressed parameters,
beyond one loop, as a consequence of the resummation provided by our evolution equations.
Finally, section 4 contains a general discussion on our results, based on symmetry properties of the quantum
theory. Appendix A shows the derivation of the evolution equations and Appendix B displays the first steps
on how to generalize the method to a d+1 dimensional Yukawa model.
\section{Model and evolution of the effective theory}
The bare action in 1+1 dimensions is
\begin{equation}
S_0=\int dt dz\left\{\frac{1}{2}\partial_\mu\Phi\partial^\mu\Phi-U_B(\Phi)\right\}
\end{equation}
where $z$ is the space coordinate, and the bare potential $U_B(\Phi)$
implements a spontaneous symmetry breaking:
\begin{equation}\label{barepot}
U_B(\Phi)=-\frac{m_0^2}{2}\Phi^2+\frac{\lambda_0}{24}\Phi^4.
\end{equation}
In 1+1 dimensions, the scalar field has mass dimension 0, which leads to an important
renormalization property: all the powers of the field are (classically) relevant operators,
and all the coupling constants have mass dimension 2. As a consequence, the bare potential
(\ref{barepot}) is not chosen on the basis of relevance/irrelevance of the interactions, but
rather on the assumption of a small amplitude of the fluctuations above the kink. This assumption will
prove to be valid, as will be seen from the effective theory that is obtained.\\
The classical equation of motion for the field is
\begin{equation}\label{eq.mot.}
\partial_\mu\partial^\mu\Phi+U_B^{'}(\Phi)=0,
\end{equation}
where a prime denotes a derivative with respect to $\Phi$.
We concentrate on the kink solution of eq.(\ref{eq.mot.}) which depends on $z$ only and reads
\begin{equation}\label{kink}
\Phi_{bg}(z)=m_0\sqrt\frac{6}{\lambda_0}\tanh(\zeta),
\end{equation}
where the dimensionless coordinate $\zeta$ is defined as
\begin{equation}
\zeta=\frac{m_0z}{\sqrt 2}.
\end{equation}
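As a consistency check, using $\partial^2_z\tanh(\zeta)=-m_0^2\tanh(\zeta)\left[1-\tanh^2(\zeta)\right]$ and $\frac{\lambda_0}{6}\Phi_{bg}^2=m_0^2\tanh^2(\zeta)$, one verifies that
\begin{equation*}
-\partial^2_z\Phi_{bg}+U_B^{'}(\Phi_{bg})
=m_0^2\Phi_{bg}\left[1-\tanh^2(\zeta)\right]-m_0^2\Phi_{bg}+\frac{\lambda_0}{6}\Phi_{bg}^3=0,
\end{equation*}
so that the static configuration (\ref{kink}) indeed satisfies eq.(\ref{eq.mot.}).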
We consider then the quantum fluctuations $\tilde\Phi$ around $\Phi_{bg}$ and write
\begin{equation}
\Phi(t,z)=\Phi_{bg}(z)+\tilde\Phi(t,z).
\end{equation}
If we take into account the equation of motion (\ref{eq.mot.}), the action depending on the
dynamical variable $\tilde\Phi$ is
\begin{eqnarray}
S&=&\int dt dz\Bigg\{\frac{1}{2}\partial_\mu\tilde\Phi\partial^\mu\tilde\Phi
-m_0^2\tilde\Phi^2-\frac{\lambda_0}{24}\tilde\Phi^4\nonumber\\
&&~~~~~~~~~~~+\frac{3}{2}m_0^2\left[1-\tanh^2(\zeta)\right]\tilde\Phi^2
-m_0\sqrt\frac{\lambda_0}{6}\tanh(\zeta)\tilde\Phi^3\Bigg\}.
\end{eqnarray}
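The $z$-dependent quadratic and cubic terms simply encode the Taylor expansion of the bare potential around the kink, since
\begin{equation*}
U_B^{''}(\Phi_{bg})=-m_0^2+\frac{\lambda_0}{2}\Phi_{bg}^2=m_0^2\left[3\tanh^2(\zeta)-1\right],~~~~~~
U_B^{'''}(\Phi_{bg})=\lambda_0\Phi_{bg}=m_0\sqrt{6\lambda_0}\tanh(\zeta).
\end{equation*}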
We are interested in studying the quantum theory on the kink background,
and we will derive for this the evolution of the effective action with the bare mass $m_0$.
We will therefore start with the following bare action
\begin{eqnarray}\label{bareaction}
S_\xi&=&\int dt dz\Bigg\{\frac{1}{2}\partial_\mu\tilde\Phi\partial^\mu\tilde\Phi
-\xi m_0^2\tilde\Phi^2-\frac{\lambda_0}{24}\tilde\Phi^4\nonumber\\
&&~~~~~~~~~~~+\frac{3}{2}m_0^2\left[1-\tanh^2(\zeta)\right]\tilde\Phi^2
-\frac{g_0}{6}\tanh(\zeta)\tilde\Phi^3\Bigg\},
\end{eqnarray}
where the dimensionless parameter $\xi$ controls the amplitude of the mass term $m_0^2\tilde\Phi^2$,
and $g_0=m_0\sqrt{6\lambda_0}$.
We will show that it is possible to derive an exact evolution equation for the effective action with $\xi$.
The corresponding flows describe the evolution from $\xi\gg 1$, where
the mass term dominates the Lagrangian and the theory is almost classical,
to the expected quantum theory, obtained for $\xi=1$.
\vspace{0.5cm}
We now proceed to the quantization of the system, integrating over the dynamical field $\tilde\Phi$.
The partition function is
\begin{eqnarray}
Z_\xi&=&\int{\cal D}[\tilde\Phi]\exp\left(iS_\xi[\tilde\Phi]
+i\int dt dz~j\tilde\Phi\right)\nonumber\\
&=&\exp\left(iW_\xi[j]\right),
\end{eqnarray}
where $j$ is the source and $W_\xi$ is the connected graphs generator functional.
The functional derivative of the latter defines the classical field $\phi$:
\begin{eqnarray}\label{derivativesW}
\frac{\delta W_\xi}{\delta j}&=&\left<\tilde\Phi\right>_\xi=\phi_\xi\nonumber\\
\frac{\delta^2 W_\xi}{\delta j\delta j}&=&-i\phi_\xi\phi_\xi+i\left<\tilde\Phi\tilde\Phi\right>_\xi\nonumber,
\end{eqnarray}
where
\begin{equation}
\left<\cdot\cdot\cdot\right>_\xi=\frac{1}{Z_\xi}\int{\cal D}[\tilde\Phi](\cdot\cdot\cdot)
\exp\left(iS_\xi+i\int dt dz~j\tilde\Phi\right).
\end{equation}
The effective action $\Gamma_\xi$ (the proper graphs generator functional) is defined as the Legendre transform of $W_\xi$:
after inverting the relation $j\to\phi_\xi$ to $\phi\to j_\xi$, one writes
\begin{equation}
\Gamma_\xi=W_\xi-\int dt dz~j_\xi\phi,
\end{equation}
where the source $j_\xi$ has now to be seen as a functional of $\phi$, parametrized by $\xi$.
The functional derivatives of $\Gamma$ are then:
\begin{eqnarray}\label{derivativesG}
\frac{\delta\Gamma_\xi}{\delta\phi}&=&-j_\xi\nonumber\\
\frac{\delta^2\Gamma_\xi}{\delta\phi\delta\phi}&=&-\frac{\delta j_\xi}{\delta\phi}=
-\left(\delta^2 W_\xi\right)^{-1}_{jj},\nonumber
\end{eqnarray}
The evolution equation for $W_\xi$ with the parameter $\xi$ is
\begin{eqnarray}
\dot W_\xi&=&-m_0^2\int dt dz~ \left<\tilde\Phi^2\right>\nonumber\\
&=&-m_0^2\int dt dz~\phi^2+im_0^2\mbox{Tr}\left\{\frac{\delta^2 W_\xi}{\delta j\delta j}\right\},
\end{eqnarray}
where a dot over a letter represents a derivative with respect to $\xi$.
For the evolution of the effective action $\Gamma$, one should remember that its independent variables are
$\xi,\phi$, such that
\begin{equation}
\dot\Gamma_\xi=\dot W_\xi+\int dt dz\frac{\delta W_\xi}{\delta j}\partial_\xi j-\int dt dz~\partial_\xi j\phi
=\dot W_\xi.
\end{equation}
Using the previous results, we finally obtain
\begin{equation}\label{evolG}
\dot\Gamma_\xi+m_0^2\int dt dz~\phi^2
=-im_0^2\mbox{Tr}\left\{\left(\frac{\delta^2\Gamma_\xi}{\delta\phi\delta\phi}\right)^{-1}\right\}.
\end{equation}
We stress here that, although the right-hand side of eq.(\ref{evolG}) has the structure of a one-loop
correction, this evolution equation provides a resummation to all orders in $\hbar$, since the
effective action appearing in the trace is the dressed one, and not the bare action.
Eq.(\ref{evolG}) is therefore a self-consistent equation, in the spirit of a differential
Schwinger-Dyson equation, and is thus non perturbative.\\
In order to extract the evolution of the dressed parameters defining the quantum theory, though, we need to
adopt an approximation scheme and
we assume, in the framework of the gradient expansion, the local potential
approximation for $\Gamma$, with the kinetic term frozen to its classical expression, such that
\begin{equation}\label{parametrization}
\Gamma_\xi=\int dt dz\Bigg\{\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-U_\xi(\phi)
+\left[1-\tanh^2(\zeta)\right]V_\xi(\phi)-\tanh(\zeta)Y_\xi(\phi)\Bigg\},
\end{equation}
where $U_\xi,V_\xi,Y_\xi$ are dressed potentials which define the quantum theory living on the kink,
and depend on the parameter $\xi$.
These potentials will be determined by plugging the ansatz (\ref{parametrization}) into the evolution
equation (\ref{evolG}), and they read, at the tree-level,
\begin{eqnarray}\label{tree_level}
U^{tree}(\phi)&=&\xi m_0^2\phi^2+\frac{\lambda_0}{24}\phi^4\nonumber\\
V^{tree}(\phi)&=&\frac{3}{2}m_0^2\phi^2\nonumber\\
Y^{tree}(\phi)&=&\frac{g_0}{6}\phi^3.
\end{eqnarray}
In order to respect the symmetries of the bare action, in what follows we consider even potentials
$U_\xi,V_\xi$ and an odd potential $Y_\xi$.
\section{Evolution of the dressed parameters}
In order to derive the evolution of the dressed parameters, we
have to compute the trace appearing in the evolution equation (\ref{evolG}), for a given configuration $\phi$.
Because of the antisymmetry of the function $\tanh(\zeta)$, a constant configuration for $\phi$ is not
appropriate, as in such a case the derivative $\dot Y_\xi$ does not appear in the left hand side of eq.(\ref{evolG}).
The appropriate choice here is the step-like configuration
\begin{equation}\label{step1}
\phi_{step}=\mbox{sign}(z)\phi_0,
\end{equation}
where $\phi_0$ is a constant. This configuration has a singular kinetic term, but the corresponding
singularity is $\xi$-independent in the framework of the gradient expansion (\ref{parametrization}),
and therefore has no influence on the evolution in $\xi$.
With such a configuration, the left hand side of the evolution equation (\ref{evolG}) is
\begin{equation}
LT\left[m_0^2\phi_0^2-\dot U_\xi(\phi_0)-\dot Y_\xi(\phi_0)\right]
+\frac{T}{m_0}\left[2\dot V_\xi(\phi_0)+\ln 2 ~\dot Y_\xi(\phi_0)\right],
\end{equation}
where $T$ is the length of the time dimension and $L$ is the length of the space dimension.
These lengths being independent, one can independently identify in eq.(\ref{evolG})
the terms proportional to $T$ and the terms proportional to $LT$.
The second derivative of the effective action is, for the configuration (\ref{step1}),
\begin{eqnarray}\label{2nd}
\frac{\delta^2\Gamma_\xi}{\delta\phi_1\delta\phi_2}&=&
-\left\{\partial_\mu\partial^\mu+U^{''}_\xi(\phi_0)\right\}\delta(t_1-t_2)\delta(z_1-z_2)\\
&&+\left\{\left[1-\tanh^2(\zeta)\right]V^{''}_\xi(\phi_0)-|\tanh(\zeta)|Y^{''}_\xi(\phi_0)\right\}
\delta(t_1-t_2)\delta(z_1-z_2).\nonumber
\end{eqnarray}
We need then the Fourier transform of the functions $|\tanh(\zeta)|$ and $1-\tanh^2(\zeta)$, and we find in Appendix A
\begin{eqnarray}
\int_{-\infty}^\infty dz~e^{-ikz}\left[1-\tanh^2(\zeta)\right]&\simeq&
4\frac{m_0^2}{k^3}\sin\left(\frac{k}{m_0}\right)-4\frac{m_0}{k^2}\cos\left(\frac{k}{m_0}\right)\nonumber\\
\int_{-\infty}^\infty dz~e^{-ikz}|\tanh(\zeta)|
&\simeq&2\pi\delta(k)+2\frac{m_0}{k^2}\left[\cos\left(\frac{k}{m_0}\right)-1\right].
\end{eqnarray}
We are interested in the limit of a strongly localized topological defect, and therefore consider
the first order in $1/m_0$ only, where the previous Fourier transforms are
\begin{eqnarray}
\int_{-\infty}^\infty dz~e^{-ikz}\left[1-\tanh^2(\zeta)\right]
&\simeq&\frac{4}{3m_0}\nonumber\\
\int_{-\infty}^\infty dz~e^{-ikz}|\tanh(\zeta)|
&\simeq&2\pi\delta(k)-\frac{1}{m_0}.
\end{eqnarray}
The Fourier transform of the second functional derivative (\ref{2nd}) is then
\begin{eqnarray}\label{2ndF}
\frac{\delta^2\Gamma_\xi}{\delta\phi_1\delta\phi_2}&\simeq&
\left\{\omega_1^2-k_1^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)\right\}2\pi\delta(\omega_1+\omega_2)2\pi\delta(k_1+k_2)\\
&&+\frac{1}{3m_0}\left\{4V_\xi^{''}(\phi_0)+3Y_\xi^{''}(\phi_0)\right\}2\pi\delta(\omega_1+\omega_2)\nonumber
\end{eqnarray}
where we observe that, since translation invariance is broken in the space dimension,
there is no conservation of momentum $k$ in this direction.
In what follows, we give the main steps of the derivations only, and the details can be found in Appendix A.
\subsection{Evolution of the potentials}
For the step-like configuration (\ref{step1}), we evaluate the
inverse of the second derivative (\ref{2ndF}) using the expansion
\begin{equation}\label{ABA}
(A+B)^{-1}=A^{-1}-A^{-1}BA^{-1}+A^{-1}BA^{-1}BA^{-1}+\cdot\cdot\cdot,
\end{equation}
where $A$ is proportional to $\delta(\omega_1+\omega_2)\delta(k_1+k_2)$ and thus is diagonal,
and $B$ is proportional to $\delta(\omega_1+\omega_2)$ only and thus is off-diagonal in the
space dimension. In the previous expansion, the small parameter is $k/m_0$, where $k$ is a typical IR momentum:
the diagonal term $A^{-1}$ generates the contribution to the trace proportional to $LT$, whereas the term
$A^{-1}BA^{-1}$, suppressed by one power of $1/m_0$, generates the contribution proportional to $T$.
The identification of the terms proportional to $LT$ in the trace of eq.(\ref{evolG}) gives then:
\begin{equation}\label{evolUeffL}
\dot U_\xi(\phi_0)+\dot Y_\xi(\phi_0)=m_0^2\phi_0^2
+\frac{m_0^2}{4\pi}\ln\left( 1+\frac{\Lambda^2}{U_\xi^{''}(\phi_0)+Y^{''}_\xi(\phi_0)}\right),
\end{equation}
where a prime denotes a derivative with respect to the constant configuration $\phi_0$,
and $\Lambda$ is the UV cut off.
The latter will actually not appear in the evolution equations
for the parameters, since the expansion of eq.(\ref{evolUeffL}) in powers of $\phi_0$ leads to a
field-independent divergence. We choose then the potentials such that $U_\xi(0)=0,Y_\xi(0)=0$, and
subtract the corresponding evolution equation from eq.(\ref{evolUeffL}) to obtain,
in the limit $\Lambda\to\infty$,
\begin{equation}\label{evolU}
\dot U_\xi(\phi_0)+\dot Y_\xi(\phi_0)=m_0^2\phi_0^2
+\frac{m_0^2}{4\pi}\ln\left(\frac{U_\xi^{''}(0)+Y^{''}_\xi(0)}{U_\xi^{''}(\phi_0)+Y^{''}_\xi(\phi_0)}\right).
\end{equation}
The cut off does not appear in our evolution equation as a consequence of the derivative with respect
to a bare mass term, whereas we could expect logarithmic divergences in a 1+1 dimensional field theory.
The projection of the equation (\ref{evolU}) on the subspace of
even functions of $\phi_0$ gives the evolution of $U_\xi$, and its projection on the subspace
of odd functions gives the evolution of $Y_\xi$.
The evolution equation obtained after identification of the terms proportional to $T$ is
\begin{equation}\label{evolV}
\dot V_\xi(\phi_0)+\frac{\ln 2}{2}~\dot Y_\xi(\phi_0)
=-\frac{m_0^2}{24\pi}\left( \frac{4V_\xi^{''}(\phi_0)+3Y_\xi^{''}(\phi_0)}{U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}
-\frac{4V_\xi^{''}(0)+3Y_\xi^{''}(0)}{U_\xi^{''}(0)+Y_\xi^{''}(0)}\right),
\end{equation}
where the constant term has been chosen so as to respect $V_\xi(0)=0$. In this latter equation also,
the projection on the subspace of
even functions gives the evolution of $V_\xi$, and its projection on the subspace
of odd functions gives the evolution of $Y_\xi$.
As is clear from eqs.(\ref{evolU},\ref{evolV}), a consistent solution for the potentials can be found
only if $Y_\xi=0$: these two evolution equations cannot give identical evolutions for $Y_\xi$.
As a consequence, no odd function of the field appears in the effective theory. This property will be discussed in the
last section, where we show that it is a consequence of symmetries of the quantum theory.
Finally, the effective action is
\begin{equation}\label{finalGamma}
\Gamma_\xi=\int dtdz\Bigg\{\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-U_\xi(\phi)
+\left[1-\tanh^2(\zeta)\right]V_\xi(\phi)\Bigg\},
\end{equation}
where the dressed potentials $U_\xi$ and $V_\xi$ satisfy the evolution equations
\begin{eqnarray}\label{evolUV}
\dot U_\xi(\phi_0)&=&m_0^2\phi_0^2
+\frac{m_0^2}{4\pi}\ln\left(\frac{U_\xi^{''}(0)}{U_\xi^{''}(\phi_0)}\right)\\
\dot V_\xi(\phi_0)&=&-\frac{m_0^2}{6\pi}\left(\frac{V_\xi^{''}(\phi_0)}{U_\xi^{''}(\phi_0)}
-\frac{V_\xi^{''}(0)}{U_\xi^{''}(0)}\right).\nonumber
\end{eqnarray}
We observe that, in the framework of the gradient expansion (\ref{parametrization}),
the evolution equation for $U_\xi$ is independent of $V_\xi$. A further step in the gradient expansion
would consist in taking into account quantum fluctuations in the kinetic term, and
write a general operator of the form $Z_\xi(\phi)\partial_\mu\phi\partial^\mu\phi$ in the effective action.
The function $Z_\xi$ would then couple the evolution equations for $U_\xi$ and $V_\xi$.
Finally, we note that taking into account additional terms in the expansion (\ref{ABA}) would not
influence the evolution of the effective potential $U_\xi$, but would add corrections of higher orders
in $1/m_0$ to the evolution of $V_\xi$.
\subsection{Truncation of the dressed potentials}
Quantum fluctuations generate all the powers of the field in the dressed potentials $U_\xi,V_\xi$. As discussed already,
no operator is irrelevant here, in the Wilsonian sense, and therefore in principle one should
take into account all the powers of $\phi$. But if we assume small quantum fluctuations,
we consider then the following truncation of the dressed potentials:
\begin{eqnarray}
U_\xi(\phi_0)&=&\frac{M^2}{2}\phi_0^2+\frac{\lambda}{24}\phi_0^4+\frac{\beta}{6!}\phi_0^6\nonumber\\
V_\xi(\phi_0)&=&\frac{v_1}{2}\phi_0^2+\frac{v_2}{24}\phi_0^4+\frac{v_3}{6!}\phi_0^6,
\end{eqnarray}
where the parameters $M^2,\lambda,\beta,v_1,v_2,v_3$ depend on $\xi$.
This truncation takes into account the interactions which appear in the bare theory, as well as the lowest interaction ($\phi_0^6$) generated by quantum fluctuations.
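For this truncation, $U_\xi^{''}(\phi_0)=M^2+\frac{\lambda}{2}\phi_0^2+\frac{\beta}{24}\phi_0^4$, and the logarithm in eqs.(\ref{evolUV}) expands as
\begin{equation*}
\ln\left(\frac{U_\xi^{''}(0)}{U_\xi^{''}(\phi_0)}\right)
=-\frac{\lambda}{2M^2}\phi_0^2
+\left(\frac{\lambda^2}{8M^4}-\frac{\beta}{24M^2}\right)\phi_0^4
+\frac{\lambda}{24M^4}\left(\frac{\beta}{2}-\frac{\lambda^2}{M^2}\right)\phi_0^6
+{\cal O}\left(\phi_0^8\right).
\end{equation*}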
An expansion in powers of $\phi_0$ in the evolution equation (\ref{evolUV}) for $U_\xi$ gives, after identification
of the different powers,
\begin{eqnarray}\label{sol1}
&&\mbox{order}~\phi_0^2:~~~~~~~~M\dot M=m_0^2-\frac{\lambda m_0^2}{8\pi M^2}\nonumber\\
&&\mbox{order}~\phi_0^4:~~~~~~~~\dot\lambda=\frac{3m_0^2}{4\pi M^2}\left( \frac{\lambda^2}{M^2}-\frac{\beta}{3}\right) \nonumber\\
&&\mbox{order}~\phi_0^6:~~~~~~~~\dot\beta=\frac{15\lambda m_0^2}{2\pi M^4}\left( \frac{\beta}{2}-\frac{\lambda^2}{M^2}\right)
\end{eqnarray}
The identification of the powers of $\phi_0$ in the evolution equation (\ref{evolUV}) for $V_\xi$ gives
\begin{eqnarray}\label{sol2}
&&\mbox{order}~\phi_0^2:~~~~~~~~\dot v_1=\frac{m_0^2}{6\pi M^2}\left( \frac{\lambda v_1}{M^2}-v_2\right) \\
&&\mbox{order}~\phi_0^4:~~~~~~~~\dot v_2=-\frac{m_0^2}{\pi M^2}\left( \frac{\lambda^2 v_1}{M^4}
-\frac{\beta v_1}{6M^2}-\frac{\lambda v_2}{M^2}+\frac{v_3}{6}\right) \nonumber\\
&&\mbox{order}~\phi_0^6:~~~~~~~~\dot v_3=-\frac{15m_0^2}{\pi M^4}\left[ \frac{\lambda v_1}{M^2}
\left( \frac{\beta}{2}-\frac{\lambda^2}{M^2}\right) +v_2\left( \frac{\lambda^2}{M^2}-\frac{\beta}{6}\right)
-\frac{\lambda v_3}{6}\right]. \nonumber
\end{eqnarray}
If one desires to obtain the evolution of the dressed parameters with $\xi$, it is possible to
solve the equations (\ref{sol1},\ref{sol2}) numerically, but we give in what follows approximate analytical solutions,
which contain the essential properties of the quantum theory.
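As an illustration, the system (\ref{sol1},\ref{sol2}) can be integrated numerically from a large value $\xi_0$, where the theory is almost classical, down to $\xi=1$. The following minimal sketch (in Python, with hypothetical bare values $m_0=1$ and $\lambda_0=0.1$, chosen such that $\lambda_0\ll m_0^2$) shows the structure of such an integration:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m0, lam0 = 1.0, 0.1      # hypothetical bare values, lam0 << m0**2

def flow(xi, y):
    # y = (M^2, lambda, beta, v1, v2, v3); returns d y / d xi
    M2, lam, beta, v1, v2, v3 = y
    c = m0**2 / (np.pi * M2)
    dM2   = 2*m0**2 - lam*c/4          # from 2 M Mdot
    dlam  = 0.75*c*(lam**2/M2 - beta/3)
    dbeta = 7.5*lam*c/M2 * (beta/2 - lam**2/M2)
    dv1   = c/6 * (lam*v1/M2 - v2)
    dv2   = -c*(lam**2*v1/M2**2 - beta*v1/(6*M2)
                - lam*v2/M2 + v3/6)
    dv3   = -15*c/M2 * (lam*v1/M2*(beta/2 - lam**2/M2)
                        + v2*(lam**2/M2 - beta/6) - lam*v3/6)
    return [dM2, dlam, dbeta, dv1, dv2, dv3]

xi0 = 100.0                                       # almost classical regime
y0 = [2*xi0*m0**2, lam0, 0.0, 3*m0**2, 0.0, 0.0]  # tree-level values
sol = solve_ivp(flow, (xi0, 1.0), y0, rtol=1e-8)  # integrate down to xi=1
\end{verbatim}
The dressed parameters at $\xi=1$ are then read off from the last column of \texttt{sol.y}; for $\lambda_0\ll m_0^2$ they stay close to the approximate analytical solutions derived below.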
\subsection{One loop approximation}
We study here the one-loop approximation of the non-perturbative evolution equations for $M,\lambda,\beta$.
For this, we note that the right hand side of eq.(\ref{evolG}) contains the
quantum corrections, such that the one-loop approximation
is obtained by replacing on the right hand side the dressed parameters by the bare ones: $M\to\sqrt{2\xi}m_0$,
$\lambda\to\lambda_0$ and $\beta\to 0$. We obtain then
for the one-loop parameters $M^{(1)},\lambda^{(1)},\beta^{(1)}$
\begin{eqnarray}\label{oneloop}
M^{(1)}\dot M^{(1)}&=&m_0^2-\frac{\lambda_0}{16\pi\xi}\nonumber\\
\dot\lambda^{(1)}&=&\frac{3\lambda_0^2}{16\pi\xi^2 m_0^2}\nonumber\\
\dot\beta^{(1)}&=&-\frac{15\lambda_0^3}{16\pi\xi^3 m_0^4}.
\end{eqnarray}
It is interesting to compare these results with usual Feynman diagrams, obtained from the bare theory
\begin{equation}
\int dtdz\left\lbrace \frac{1}{2}\partial_\mu\phi\partial^\mu\phi-\xi m_0^2\phi^2-\frac{\lambda_0}{24}\phi^4\right\rbrace ,
\end{equation}
i.e. the initial bare theory without the $z$-dependent quadratic and cubic terms.
The one-loop correction to the parameter $M^2$ is generated by the interaction
$\phi^4$ which is represented by the tadpole diagram
\begin{eqnarray}
(M^{(1)})^2-2\xi m_0^2
&=&\frac{i\lambda_0}{2}\int\frac{d^2p}{(2\pi)^2}\frac{1}{p^2-2\xi m_0^2}\\
&=&\frac{\lambda_0}{2}\frac{\Omega_2}{(2\pi)^2}\int_0^\Lambda \frac{qdq}{q^2+2\xi m_0^2}\nonumber\\
&=&\frac{\lambda_0}{8\pi}\ln\left( 1+\frac{\Lambda^2}{2\xi m_0^2}\right),\nonumber
\end{eqnarray}
where the factor $1/2$ takes into account the symmetry factor of the graph.
It can be checked that the derivative of the latter result with respect to $\xi$ indeed gives the
expected expression (\ref{oneloop}) for $\partial_\xi[(M^{(1)})^2-2\xi m_0^2]=2(M\dot M^{(1)}-m_0^2)$, in the limit
$\Lambda\to\infty$.
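Explicitly,
\begin{equation*}
\partial_\xi\left[\frac{\lambda_0}{8\pi}\ln\left(1+\frac{\Lambda^2}{2\xi m_0^2}\right)\right]
=-\frac{\lambda_0}{8\pi\xi}~\frac{\Lambda^2}{\Lambda^2+2\xi m_0^2}
\longrightarrow-\frac{\lambda_0}{8\pi\xi}~~~~~~\mbox{for}~\Lambda\to\infty,
\end{equation*}
which reproduces the first of eqs.(\ref{oneloop}).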
The one-loop correction to the coupling $\lambda$ is given by
\begin{eqnarray}
\lambda^{(1)}-\lambda_0&=&\frac{3i\lambda_0^2}{2}\int\frac{d^2p}{(2\pi)^2}\frac{1}{(p^2-2\xi m_0^2)^2}\\
&=&-\frac{3\lambda_0^2}{2}\frac{\Omega_2}{(2\pi)^2}\int_0^\infty \frac{qdq}{(q^2+2\xi m_0^2)^2}\nonumber\\
&=&-\frac{3\lambda_0^2}{16\pi\xi m_0^2},\nonumber
\end{eqnarray}
where $3\lambda_0/2$ in the first line takes into account the symmetry factor and the 3 permutations
of vanishing incoming momenta.
The derivative of the latter result with respect to $\xi$ indeed gives the above expression
(\ref{oneloop}) for $\dot\lambda^{(1)}$.
The coupling $\beta$ is generated by quantum fluctuations, and its one-loop expression is, taking into
account the symmetry and permutation factors,
\begin{eqnarray}
\beta^{(1)}&=&\frac{6!}{8\times 6}(i\lambda_0)^3\int\frac{d^2p}{(2\pi)^2}\frac{1}{(p^2-2\xi m_0^2)^3}\\
&=&15\lambda_0^3\frac{\Omega_2}{(2\pi)^2}\int_0^\infty \frac{qdq}{(q^2+2\xi m_0^2)^3}\nonumber\\
&=&\frac{15\lambda_0^3}{32\pi\xi^2 m_0^4},\nonumber
\end{eqnarray}
The derivative of the latter result with respect to $\xi$ indeed gives the above expression
(\ref{oneloop}) for $\dot\beta^{(1)}$.
We checked here that our non perturbative evolution equations (\ref{sol1}) are consistent, at one loop,
with usual Feynman graphs. This feature is a consequence of the fact that, in the framework of the
gradient expansion (\ref{parametrization}), the evolution of $U_\xi$ is independent of the
evolution of $V_\xi$. Beyond one loop, the gradient expansion does not give the same results than
the loop expansion, as it is based on an expansion in powers of the momentum.
\subsection{Approximate analytical solution}
We are interested here in approximate analytical solutions for the parameters $\lambda$ and $v_1$.
An approximate solution for $\lambda$ given in eqs.(\ref{sol1}) can be obtained by keeping the bare values
for the other parameters: $M\to\sqrt{2\xi}m_0$ and $\beta\to 0$, in which case the equation for $\lambda$ reads
\begin{equation}\label{approxlm1}
\frac{\dot\lambda}{\lambda^2}=\frac{3}{16\pi\xi^2 m_0^2}
\end{equation}
We see that, as quantum fluctuations arise ($\xi$ decreases),
$\lambda$ decreases ($\dot\lambda>0$), which was expected, as a scalar self-coupling is known to decrease
in the IR. If we define the renormalized coupling $\lambda_R=\lambda(1)$, the solution
of eq.(\ref{approxlm1}) can easily be found and reads:
\begin{equation}\label{lambdaapprox}
\lambda(\xi)=\lambda_R\left[1+\frac{3\lambda_R}{16\pi m_0^2}\left( \frac{1}{\xi}-1\right) \right]^{-1}.
\end{equation}
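Indeed, integrating eq.(\ref{approxlm1}) between $\xi$ and 1 gives
\begin{equation*}
\frac{1}{\lambda(\xi)}=\frac{1}{\lambda_R}+\frac{3}{16\pi m_0^2}\left(\frac{1}{\xi}-1\right),
\end{equation*}
which is equivalent to eq.(\ref{lambdaapprox}).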
In the spirit of the present functional method,
the bare coupling $\lambda_0$ should be found in the limit $\xi\to\infty$, which leads to the following expression
for the dressed coupling in terms of the bare coupling
\begin{equation}\label{lambdaR}
\lambda_R=\lambda_0\left( 1+\frac{3\lambda_0}{16\pi m_0^2}\right)^{-1}.
\end{equation}
Using a similar approximation in the evolution equation for the parameter $v_1$,
i.e. $M\to\sqrt{2\xi}m_0$ and $v_2\to 0$, we find the following evolution for $v_1$
\begin{equation}
\frac{\dot v_1}{v_1}=\frac{\lambda}{24\pi\xi^2m_0^2},
\end{equation}
where $\lambda$ is given by eq.(\ref{lambdaapprox}). The integration of this equation gives then
\begin{equation}
v_1(\xi)=3m_R^2\left[1+\frac{3\lambda_R}{16\pi m_0^2}\left(\frac{1}{\xi}-1\right)\right]^{-2/9},
\end{equation}
where we define $3m_R^2=v_1(1)$. As previously, a relation between the renormalized parameters
$\lambda_R$ and $m_R^2$ can be obtained, by taking the limit $\xi\to\infty$ in the previous equation,
with $v_1\to 3m_0^2$, such that
\begin{equation}
m_R^2=m_0^2\left(1-\frac{3\lambda_R}{16\pi m_0^2}\right)^{2/9}.
\end{equation}
From the relation (\ref{lambdaR}), we obtain then the following expression for $m_R^2$ in terms of the bare parameters only
\begin{equation}\label{mR}
m_R^2=m_0^2\left(\frac{16\pi m_0^2}{3\lambda_0+16\pi m_0^2}\right)^{2/9}.
\end{equation}
Eqs.(\ref{lambdaR},\ref{mR}) constitute a resummation to all orders in $\hbar$, and are derived in the
limit of a highly localized topological defect, $m_0^2\gg\lambda_0$.
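As a numerical illustration of eqs.(\ref{lambdaR},\ref{mR}), for the hypothetical bare values $m_0=1$ and $\lambda_0=0.1$ the dressing of the parameters is weak, as can be checked with a few lines:
\begin{verbatim}
import math
m0, lam0 = 1.0, 0.1   # hypothetical bare values, lam0 << m0**2
lamR = lam0 / (1 + 3*lam0/(16*math.pi*m0**2))           # dressed coupling
mR2 = m0**2 * (16*math.pi*m0**2
               / (3*lam0 + 16*math.pi*m0**2))**(2.0/9)  # dressed mass^2
print(lamR, mR2)      # ~0.0994 and ~0.9987: weak dressing
\end{verbatim}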
\section{Discussion}
We now discuss the vanishing of the dressed term $\tanh(\zeta)Y_\xi(\phi)$ in the effective action,
as a consequence of a discrete symmetry of the theory.
In this work, we considered the quantization of fluctuations above the kink $\Phi_{bg}$ given in eq.(\ref{kink}), but
the classical equation of motion (\ref{eq.mot.}) has actually two kink solutions centered on $z=0$, which
are $\pm\Phi_{bg}$. We now discuss the symmetry of the quantum theory under the transformation
$\Phi_{bg}\to -\Phi_{bg}$, and we denote by a superscript $^{(\pm)}$ the different quantities
defined respectively on the backgrounds $\pm\Phi_{bg}$.
We note here that the vacuum of the theory, the constant configuration $\Phi_0=m_0\sqrt{6/\lambda_0}$, does not
respect the symmetry $\Phi_0\to -\Phi_0$, since this symmetry is spontaneously broken.
The bare action corresponding to the background $-\Phi_{bg}$ is
\begin{eqnarray}
S_\xi^{(-)}[\phi]&=&\int dt dz\Bigg\{\frac{1}{2}\partial_\mu\phi\partial^\mu\phi
-\xi m_0^2\phi^2-\frac{\lambda_0}{24}\phi^4\nonumber\\
&&~~~~~~~~~~~+\frac{3}{2}m_0^2\left[1-\tanh^2(\zeta)\right]\phi^2
+\frac{g_0}{6}\tanh(\zeta)\phi^3\Bigg\}\nonumber\\
&=&S_\xi^{(+)}[\psi],
\end{eqnarray}
where $\psi(t,z)=\phi(t,-z)$. The source term can then be written
\begin{equation}
\int dt dz ~j\phi=\int dt dz ~g\psi,
\end{equation}
where $g(t,z)=j(t,-z)$, such that the partition function is
\begin{eqnarray}
Z_\xi^{(-)}[j]&=&\int{\cal D}[\phi]\exp\left\{iS^{(-)}[\phi]+i\int dtdz~j\phi\right\}\nonumber\\
&=&\int{\cal D}[\psi]\exp\left\{iS^{(+)}[\psi]+i\int dtdz~g\psi\right\}\nonumber\\
&=&Z_\xi^{(+)}[g].
\end{eqnarray}
The classical field corresponding to the background $-\Phi_{bg}$ is
\begin{eqnarray}
\phi_c^{(-)}(t,z)&=&\frac{\delta W_\xi^{(-)}}{\delta j(t,z)}
=\int dsdy\frac{\delta W_\xi^{(+)}}{\delta g(s,y)}\frac{\delta g(s,y)}{\delta j(t,z)}\nonumber\\
&=&\int dsdy\frac{\delta W_\xi^{(+)}}{\delta g(s,y)}\delta(y+z)\delta(s-t)
=\frac{\delta W_\xi^{(+)}}{\delta g(t,-z)}
=\frac{\delta W_\xi^{(+)}}{\delta j(t,z)}\nonumber\\
&=&\phi_c^{(+)}(t,z).\nonumber
\end{eqnarray}
and is therefore independent of the sign of the background: $\phi_c^{(-)}=\phi_c^{(+)}$.
When defining the Legendre transform $\Gamma_\xi$, we invert the relation $j\to\phi_c$, such that the source
is now a functional of the classical field, and therefore $j^{(-)}=j^{(+)}$. The effective action is then
\begin{eqnarray}
\Gamma_\xi^{(-)}[\phi_c]&=&W_\xi[j^{(-)}]-\int dtdz~j^{(-)}\phi_c\nonumber\\
&=&W_\xi[j^{(+)}]-\int dt dz~j^{(+)}\phi_c\nonumber\\
&=&\Gamma_\xi^{(+)}[\phi_c].
\end{eqnarray}
As a consequence, the effective action does not depend on the sign of the background, such that the
dressed term $\tanh(\zeta)Y_\xi(\phi_c)$ in the effective action (\ref{parametrization})
must vanish, as it should satisfy $-Y_\xi=Y_\xi$.
The corresponding term in the bare action does not survive quantization.
It is interesting to note that the non-perturbative method presented here allows us to see the vanishing
of the dressed potential $Y_\xi$, using eqs.(\ref{evolU},\ref{evolV}), which
means that quantum fluctuations are strong enough to
cancel the corresponding term present in the bare action. This could not be obtained
within a perturbative approach, but only within a method using a self-consistent equation such as eq.(\ref{evolG}).
A next step in this study consists in including fermions coupled to the scalar field fluctuating
over the background kink. From the results obtained here, we can expect a usual Yukawa coupling $\phi\overline\psi\psi$ to
be relevant to the problem, or more generally a coupling of the form $f(\phi)\overline\psi\psi$,
without an explicit $z$-dependence. As explained in Appendix B, the
evolution equation for the effective action $\Gamma_\xi$ with $\xi$ is then obtained in the same way,
with more involved calculations though, as the second derivative $\delta^{(2)}\Gamma$ is then a
$3\times 3$ matrix, with rows $\phi,\overline\psi,\psi$, and the computation of the trace in eq.(\ref{evolG})
involves the inverse of this matrix.
In higher dimensions, the method presented here is of course valid, and can include any other matter field.
Also, it can be extended to higher symmetries and deal with gauge fields. As far as
supersymmetry is concerned,
the use of superfield formalism necessitates a modification of the evolution equation (\ref{evolG}),
which takes into account the chirality constraint of the superfields.
Finally, we emphasize the advantage of the present approach, in 1+1 dimensions, compared to the
Wilsonian approach: we were able to generate non-perturbative flows without referring to a running cut-off,
as no cut-off appears in the evolution equations for the dressed potentials, and the integration
of the corresponding flows led us to cut-off-free relations between bare and dressed parameters.
\vspace{1cm}
\noindent{\bf Acknowledgments:}
This work is supported by the EPEAEK program ``Pythagoras'', and co-funded by
the European Union (75\%) and the Hellenic State (25\%).
\section*{Appendix A: Computation of the trace}
For the step-like configuration $\phi=\mbox{sign}(z)\phi_0$, the second derivative of the effective action is
\begin{eqnarray}
\frac{\delta^2\Gamma_\xi}{\delta\phi_1\delta\phi_2}&=&\Big\{-\partial_\mu\partial^\mu-U_\xi^{''}(\phi_0)
+\left[1-\tanh^2(\zeta)\right]V_\xi^{''}(\phi_0)\nonumber\\
&&~~-|\tanh(\zeta)|Y_\xi^{''}(\phi_0)\Big\}\delta(t_1-t_2)\delta(z_1-z_2),
\end{eqnarray}
and therefore we need the Fourier transform of $1-\tanh^2(\zeta)$ and $|\tanh(\zeta)|$.
For this, we approximate the function $\tanh(\zeta)$ by $\zeta$ in the interval $[-1;1]$ and by
$\mbox{sign}(\zeta)$ outside this interval. This approximation captures the essential features of the kink,
and leads, for the Fourier transform of $1-\tanh^2(\zeta)$, to
\begin{eqnarray}
&&\int_{-\infty}^\infty dz~e^{-ikz}\left[1-\tanh^2(\zeta)\right]\nonumber\\
&\simeq&\int_{-1/m_0}^{1/m_0} dz~e^{-ikz}\left(1-\zeta^2\right)\\
&=&4\frac{m_0^2}{k^3}\sin\left(\frac{k}{m_0}\right)
-4\frac{m_0}{k^2}\cos\left(\frac{k}{m_0}\right)\nonumber.
\end{eqnarray}
The same approximation leads, for the Fourier transform of $|\tanh(\zeta)|$, to
\begin{eqnarray}
&&\int_{-\infty}^\infty dz~e^{-ikz}|\tanh(\zeta)|\\
&\simeq&\int_{-\infty}^\infty dz~e^{-ikz}+\int_{-1/m_0}^{1/m_0} dz e^{-ikz}\left(|\zeta|-1\right)\nonumber\\
&=&2\pi\delta(k)+2\frac{m_0}{k^2}\left[\cos\left(\frac{k}{m_0}\right)-1\right].\nonumber
\end{eqnarray}
Since we are interested in the limit of a highly localized topological defect, we
consider the situation where $m_0\gg|k|$, which gives
\begin{eqnarray}
\int_{-\infty}^\infty dz~e^{-ikz}\left[1-\tanh^2(\zeta)\right]
&\simeq&\frac{4}{3m_0}+{\cal O}\left(\frac{k^2}{m_0^3}\right)\nonumber\\
\int_{-\infty}^\infty dz~e^{-ikz}|\tanh(\zeta)|
&\simeq&2\pi\delta(k)-\frac{1}{m_0}+{\cal O}\left(\frac{k^2}{m_0^3}\right).\nonumber
\end{eqnarray}
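These expansions can be verified numerically; a minimal sketch (Python), using the approximated profile and illustrative values satisfying $m_0\gg|k|$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m0, k = 5.0, 0.1                 # illustrative values with m0 >> |k|
f = lambda z: np.cos(k*z)*(1.0 - (m0*z)**2)   # even integrand
num, _ = quad(f, -1.0/m0, 1.0/m0)
closed = 4.0*m0**2/k**3*np.sin(k/m0) - 4.0*m0/k**2*np.cos(k/m0)
print(num, closed, 4.0/(3.0*m0))  # all agree up to O(k^2/m0^3)
\end{verbatim}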
The inverse $(\delta^2\Gamma)^{-1}$ is taken as
\begin{equation}\label{exp}
(A+B)^{-1}=A^{-1}-A^{-1}BA^{-1}+A^{-1}BA^{-1}BA^{-1}-\cdots,
\end{equation}
where $A$ is proportional to $\delta(\omega_1+\omega_2)\delta(k_1+k_2)$ and thus is diagonal,
and $B$ is proportional to $\delta(\omega_1+\omega_2)$ only and thus is off-diagonal in the
space dimension.
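As an illustration of the expansion (\ref{exp}), one can check on a small random matrix that the truncated series reproduces the exact inverse up to terms of third order in $B$ (a toy numerical check, not tied to the physical kernel):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1.0, 2.0, 4))    # diagonal part
B = 0.05*rng.standard_normal((4, 4))     # small perturbation
Ai = np.linalg.inv(A)
series = Ai - Ai @ B @ Ai + Ai @ B @ Ai @ B @ Ai   # eq.(exp), 3 terms
print(np.max(np.abs(series - np.linalg.inv(A + B))))  # O(B^3), tiny
\end{verbatim}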
We then obtain, keeping the first order in $B$ in the expansion (\ref{exp}),
\begin{eqnarray}
\left(\frac{\delta^2\Gamma_\xi}{\delta\phi_1\delta\phi_2}\right)^{-1}&\simeq&
\frac{2\pi\delta(\omega_1+\omega_2)2\pi\delta(k_1+k_2)}{\omega_1^2-k_1^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)}\\
&&-\frac{(3m_0)^{-1}\left[4V_\xi^{''}(\phi_0)+3Y_\xi^{''}(\phi_0)\right]2\pi\delta(\omega_1+\omega_2)}
{\left[\omega_1^2-k_1^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)\right]
\left[\omega_2^2-k_2^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)\right]}.\nonumber
\end{eqnarray}
The term proportional to $LT$ in the trace of eq.(\ref{evolG}) is
\begin{eqnarray}\label{LT}
&&LT\int\frac{d\omega}{2\pi}\frac{dk}{2\pi}\frac{1}{\omega^2-k^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)}\nonumber\\
&=&-iLT\frac{\Omega_2}{(2\pi)^2}\int_0^\Lambda \frac{qdq}{q^2+U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}\nonumber\\
&=&-LT\frac{i}{4\pi}\ln\left( 1+\frac{\Lambda^2}{U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}\right),
\end{eqnarray}
where $q$ is the Euclidean 2-momentum and $\Omega_2=2\pi$ is the solid angle in dimension 2.\\
The term proportional to $T$ only in the trace of eq.(\ref{evolG}) is
\begin{eqnarray}\label{Tonly}
&&-\frac{4V^{''}_\xi(\phi_0)+3Y^{''}_\xi(\phi_0)}{3m_0}T
\int\frac{d\omega}{2\pi}\frac{dk}{2\pi}\frac{1}{\left(\omega^2-k^2-U_\xi^{''}(\phi_0)-Y_\xi^{''}(\phi_0)\right)^2}\nonumber\\
&=&-\frac{4V^{''}_\xi(\phi_0)+3Y^{''}_\xi(\phi_0)}{3m_0}T
\frac{\Omega_2}{(2\pi)^2}\int_0^\infty\frac{qdq}{\left(q^2+U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)\right)^2}\nonumber\\
&=&\frac{-iT}{12\pi m_0}
\frac{4V^{''}_\xi(\phi_0)+3Y_\xi^{''}(\phi_0)}{U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}.\nonumber
\end{eqnarray}
The left-hand side of eq.(\ref{evolG}) is, for the step-like configuration,
\begin{equation}\label{lefthand}
LT\left[m_0^2\phi_0^2-\dot U_\xi(\phi_0)-\dot Y_\xi(\phi_0)\right]
+\frac{T}{m_0}\left[2\dot V_\xi(\phi_0)+\ln 2 \dot Y_\xi(\phi_0)\right],
\end{equation}
and, together with eqs.(\ref{LT},\ref{Tonly}), we obtain for the evolution of the dressed potentials
\begin{eqnarray}
\dot U_\xi(\phi_0)+\dot Y_\xi(\phi_0)&=&m_0^2\phi_0^2
+\frac{m_0^2}{4\pi}\ln\left( 1+\frac{\Lambda^2}{U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}\right)\\
\dot V_\xi(\phi_0)+\frac{\ln 2}{2}~\dot Y_\xi(\phi_0)
&=&-\frac{m_0^2}{24\pi}\frac{4V_\xi^{''}(\phi_0)+3Y_\xi^{''}(\phi_0)}{U_\xi^{''}(\phi_0)+Y_\xi^{''}(\phi_0)}
.\nonumber
\end{eqnarray}
\section*{Appendix B: Extension to a Yukawa interaction}
We give here the main steps of the extension of the previous method to a Yukawa interaction
in $d+1$ dimensions, where the kink extends along the extra dimension, with coordinate $z$. For $d\ge 4$,
the theory is not renormalizable, and we consider it an effective theory, valid up to
an energy scale $\Lambda$, which is our cut-off.\\
The bare action is, for massless fermions,
\begin{equation}
S_0=\int d^dx dz\left\{i\overline\Psi\hskip .25cm/\hskip -.25cm\partial\Psi+\frac{1}{2}\partial_\mu\Phi\partial^\mu\Phi
-\eta_0\Phi\overline\Psi\Psi-U_B(\Phi)\right\},
\end{equation}
where the scalar potential $U_B(\Phi)$ is given in eq.(\ref{barepot}). The fermion
field having no expectation value, the kink configuration is the same as in eq.(\ref{kink}), and the action
to quantize is
\begin{eqnarray}
S_\xi&=&\int d^dx dz\Bigg\{i\overline\Psi\hskip .25cm/\hskip -.25cm\partial\Psi
-\eta_0\Phi_{bg}(\zeta)\overline\Psi\Psi -\eta_0\tilde\Phi\overline\Psi\Psi\nonumber\\
&&~~~~~~~~~~+\frac{1}{2}\partial_\mu\tilde\Phi\partial^\mu\tilde\Phi
-\xi m_0^2\tilde\Phi^2-\frac{\lambda_0}{24}\tilde\Phi^4\nonumber\\
&&~~~~~~~~~~~+\frac{3}{2}m_0^2\left[1-\tanh^2(\zeta)\right]\tilde\Phi^2
-\frac{g_0}{6}\tanh(\zeta)\tilde\Phi^3\Bigg\},
\end{eqnarray}
where $\tilde\Phi$ represents the fluctuations above the classical kink. In the previous
expression, the $z$-dependent mass term $\eta_0\Phi_{bg}(\zeta)\overline\Psi\Psi$ for the fermion is responsible for the fermion
localization on the brane $z=0$, as discussed in \cite{local}. \\
The partition function, functional of the sources $j,\eta,\overline\eta$, is
\begin{eqnarray}
Z_\xi&=&\int{\cal D}[\tilde\Phi,\Psi,\overline\Psi]\exp\left\{ iS_\xi[\tilde\Phi,\Psi,\overline\Psi]
+i\int d^dxdz\left( j\tilde\Phi+\overline\eta\Psi+\overline\Psi\eta\right)\right\} \nonumber\\
&=&\exp\left( iW_\xi[j,\eta,\overline\eta]\right) ,
\end{eqnarray}
from which the classical fields $(\phi,\psi,\overline\psi)$ are defined:
\begin{eqnarray}
\frac{\delta W_\xi}{\delta j}&=&\phi\nonumber\\
\frac{\delta W_\xi}{\delta\overline\eta}&=&\psi\nonumber\\
\frac{\delta W_\xi}{\delta\eta}&=&-\overline\psi.
\end{eqnarray}
The generating functional of proper graphs, a functional of the classical fields $\phi,\overline\psi,\psi$,
is defined as the Legendre transform of $W_\xi$, after inverting the relations
$(j,\eta,\overline\eta)\to(\phi_\xi,\psi_\xi,\overline\psi_\xi)$ into $(\phi,\psi,\overline\psi)\to(j_\xi,\eta_\xi,\overline\eta_\xi)$:
\begin{equation}
\Gamma_\xi[\phi,\psi,\overline\psi]=W_\xi[j,\eta,\overline\eta]-\int d^dx dz \left(j_\xi\phi+\overline\eta_\xi\psi+\overline\psi\eta_\xi\right),
\end{equation}
and its functional derivatives are
\begin{eqnarray}
\frac{\delta\Gamma_\xi}{\delta\phi}&=&-j_\xi\nonumber\\
\frac{\delta\Gamma_\xi}{\delta\overline\psi}&=&-\eta_\xi\nonumber\\
\frac{\delta\Gamma_\xi}{\delta\psi}&=&\overline\eta_\xi.
\end{eqnarray}
The evolution equation for $\Gamma_\xi$ with $\xi$ is derived in the same way as was done in 1+1 dimensions and reads:
\begin{equation}\label{evolGbis}
\dot\Gamma_\xi+m_0^2\int d^dx dz~\phi^2
=-im_0^2~\mbox{Tr}\left\{\left(\delta^2\Gamma_\xi\right)_{\phi\phi}^{-1}\right\},
\end{equation}
but this time $(\delta^2\Gamma_\xi)^{-1}_{\phi\phi}$ is the $\phi\phi$ component of the inverse of the matrix
\begin{equation}
\delta^2\Gamma_\xi=\left( \begin{array}{ccc}
\frac{\delta^2\Gamma}{\delta\overline\psi\delta\psi}
&\frac{\delta^2\Gamma}{\delta\overline\psi\delta\overline\psi}
&\frac{\delta^2\Gamma}{\delta\overline\psi\delta\phi}\\
\frac{\delta^2\Gamma}{\delta\psi\delta\psi}
&\frac{\delta^2\Gamma}{\delta\psi\delta\overline\psi}
&\frac{\delta^2\Gamma}{\delta\psi\delta\phi}\\
\frac{\delta^2\Gamma}{\delta\phi\delta\psi}
&\frac{\delta^2\Gamma}{\delta\phi\delta\overline\psi}
&\frac{\delta^2\Gamma}{\delta\phi\delta\phi}\\
\end{array}\right) .
\end{equation}
Note that the components $\delta^2\Gamma_{\overline\psi\overline\psi}$ and $\delta^2\Gamma_{\psi\psi}$ do not
vanish in general, as quantum fluctuations generate four-fermion interactions and higher powers of $\overline\psi\psi$.
Also, compared to the 1+1 dimensional model, the trace in eq.(\ref{evolGbis}) contains divergences, such
that the cut off $\Lambda$ will appear in the final equations.\\
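The $\phi\phi$ component of the inverse can be isolated without inverting the full matrix, via the Schur complement of the fermionic block. A symbolic toy check (generic scalar entries standing in for the operator blocks):
\begin{verbatim}
import sympy as sp

M = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'm{i}{j}'))
F, G = M[:2, :2], M[:2, 2]    # fermionic block, mixed column
H, S = M[2, :2], M[2, 2]      # mixed row, phi-phi entry
schur = (S - (H*F.inv()*G)[0, 0])**-1   # Schur-complement inverse
print(sp.simplify(M.inv()[2, 2] - schur))   # 0
\end{verbatim}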
In order to take into account fermion localization, and the symmetries of the system,
we propose here the following
gradient expansion, where we disregard higher order fermion interactions and wave function renormalization,
\begin{eqnarray}
\Gamma_\xi&=&\int d^dx dz\Big\lbrace
\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-U^1_\xi(\phi)\overline\psi\psi-U^2_\xi(\phi)\nonumber\\
&&~~~~~+[1-\tanh^2(\zeta)]\left[i\overline\psi\hskip .25cm/\hskip -.25cm\partial\psi+V^1_\xi(\phi)\overline\psi\psi+V^2_\xi(\phi)\right] \Big\}.
\end{eqnarray}
In the previous expression, fermion localization is implemented via the $\zeta$-dependent fermion kinetic term:
away from the brane, for $\zeta\ne0$, the fermion propagation is exponentially damped.
The consistency of this ansatz for the functional dependence of $\Gamma_\xi$ has to be checked when
computing the trace in eq.(\ref{evolGbis}) and following the
evolution of $\Gamma_\xi$ with $\xi$, which leads to the evolution of the scalar potentials $U^{1,2}_\xi(\phi)$
and $V^{1,2}_\xi(\phi)$.\\
An interesting study is then to look for a mass possibly generated dynamically, on the brane, for the would-be
massless fermion. This can be investigated using the present method, since it is based on a self consistent
equation, as was already done in the Kaluza-Klein framework \cite{kaluzaklein}. In the present context,
the fermion dynamical mass, if there is one, is $m_{dyn}=U^1_\xi(0)-V^1_\xi(0)$.
\section{Introduction}
Many problems plague big cities, including transportation, pollution, security, and public safety. With the proliferation of mobile devices and availability of more datasets, there is great potential to revisit these problems and introduce more efficient solutions. As a start, in this study, we shall focus on shared transportation as a solution to congestion problems in big cities.
Recent advances in mobile technology enabled the creation and rise of transportation network companies, like Uber and Lyft. Although shared transportation is not new, these companies were able to realize them at scale. Unfortunately, these companies, in order to maintain their competitive edge, do not disclose much of their data or analysis hence we investigate the matter with open datasets.
The approach to the study of human mobility depends on the spatial scale of the target varying from individual mobility to higher levels (buildings, city blocks, etc.). As vehicular transportation is the quintessence of urban mobility we focus on the study of trips happening in a city.
A common representation of trips is through their origins and destinations. In this paper, we use the origin-destination (OD) representation to study the trips and investigate various scenarios of shared transportation. A comparison of similarity/distance scores is then presented using a more detailed representation of trips based on a sample of 50 waypoints (spatio-temporal triples of $x,y,t$) for each trip (instead of the OD endpoints only).
Although similarity of trajectories has long been a subject of interest, the concept of time and its effects are not well-studied with respect to the trajectories. We approach this problem using spatio-temporal similarity of waypoints of the trips.
Public transportation is a big part of large cities around the world, but fine-grained shared mobility has received less attention. Recently, it has gained attention for its potential to reduce travel times and traffic, leading to increased productivity, less fuel consumption, and less pollution. In this paper, we use our proposed similarity measure to formulate and investigate three scenarios of shared transportation.
This study provides the following main contributions:
\begin{enumerate*}
\item Analysis of trips data from a spatio-temporal perspective,
\item Proposing a measure of spatio-temporal similarity of trajectories (trips) with flexibility across spatial or temporal elements. A comparison is conducted of various measures available in the literature,
\item Study of various shared transportation scenarios of Catch-a-Ride and Car Pooling. In addition, a novel graph formulation of a free float Car-Sharing scenario is presented.
\end{enumerate*}
The study finds that trip distance and duration follow lognormal and gamma distributions (with gamma providing a better fit), and demonstrates the utility of the score and spectral clustering in finding spatio-temporal clusters. Similarity measures available in the literature have quadratic computational complexity (in the number of waypoints), while our score has linear complexity with the flexibility to prefer time over space or vice versa. Shared transportation achieves roughly a 25\% decrease in distances traveled by cars. The Car-Sharing scenario is optimal in the number of cars (minimum) and also in the consecutive pick-ups in terms of combined temporal and spatial distances, with the algorithm being bounded by the complexity of matching in bipartite graphs ($O(n^3)$ in the number of nodes, in our case the number of trips).
The rest of the paper is structured as follows: Related work in Sec. \ref{sec:relatedwork} followed by exploratory data analysis in Sec. \ref{sec:analysis}, and definition of similarity score and its potential in finding spatio-temporal clusters in Sec. \ref{sec:similarity} and \ref{sec:clustering}. The score is used for dynamic matching of trips for Catch-a-Ride and Car Pooling scenarios of shared transportation in Sec. \ref{sec:matching}. Sec. \ref{sec:compare} presents a comparison of various measures of similarity. Formulation of the Car Sharing problem and the algorithm is presented in Sec. \ref{sec:carsharing}. Lastly, Sec. \ref{sec:conclusion} concludes the study.
\vspace{-5pt}
\section{Related Work}
\label{sec:relatedwork}
We explore the literature from three perspectives:
\begin{enumerate*}
\item The study of trips for their characteristics in time and space.
\item The similarity of trips using the trajectory of movements.
\item The concept of shared transportation and its variations.
\end{enumerate*}
Several studies use publicly available taxicab datasets to characterize trips.
\citet{Ferreira2013} analyzed a large dataset of taxicab trips (170M trips a year, for 2 years) of NYC for visualization of multiple use cases such as activity in different regions.
Users' trips have been extracted from geo-tagged images and studied for patterns of visitation \cite{arase2010mining}. A recent study focused on the travel patterns arising from Internet-based ride-sharing platforms using data from DiDi, China's biggest ride-hailing company \cite{Dong2018}. They found two types of individual behavior: commuters and taxi-like roaming.
In a study of spatial patterns based on the origin-destination representation of trips, \citet{Guo2012} proposed a distance measure based on the number of shared nearest neighbors and used it to find spatial clusters of the trips. We follow a similar approach but we define the similarity score and clustering based on temporal aspect in addition to space.
\citet{Veloso2011} studied a taxi dataset for Lisbon, Portugal. They visualized and explored various spatio-temporal features including distributions of trip durations, distances, pickups, and drop-offs.
Uber and Lyft trips in San Francisco were studied and visualized by SF County Transportation Authority with respect to location and time \cite{sfTNC}.
Trip trajectories have been studied for storage and performance optimizations, as well as traffic insights.
The similarity of trips bounded by traffic network was studied in \cite{Abraham2012}, using spatial and temporal points of interest and the distance of the trip to them.
\citet{tiakas2006trajectory} defined spatial similarity between two trajectories as the average point-wise distance of points in a trajectory and temporal similarity as the sum of differences of consecutive points relative to the maximum of both.
\citet{Toohey2015} performed a comparative study of various similarity measures. They consider measures of longest common subsequence (LCSS), Frechet distance, dynamic time warping (DTW) and edit distance for trajectories of delivery drivers in the UK. These metrics are detailed in section \ref{sec:compare} for comparison.
In a similar study, \citet{Wang2013} investigated the effectiveness of various similarity scores on multiple transformations of trajectories, with parameters controlling the amplitude of change in the trips.
\citet{VanKreveld2007} studied the similarity of trajectories and subtrajectories considering time shifts, to find most similar trajectories. They propose various polynomial time algorithms; $O(n^4)$ for the most sophisticated case with time shifts and subtrajectories.
\citet{yan2017itdl} proposed a method of generating embeddings of Points of Interest (POI) with a spatial context that can be used to measure the similarity of place types. This has potential applications in the matching of rides and profiling trajectories.
Concerning rapidly expanding ride-hailing networks, \citet{Tian2013} presented a system of dynamic ride-sharing called Noah where three algorithms of matching are considered.
Minimizing traveled distances or times along with maximizing the number of matches are common non-commercial objectives of dynamic ride-sharing systems \cite{AGATZ2012295,DiFebbraro2013}.
\citet{Gidofalvi2008} studied the social barriers of shared mobility and proposed a system of ride sharing that incorporates social relationships in the matching criteria.
\citet{Masoud2017} studied another variant of ride-sharing, assuming a p2p (similar to carpooling) and flexible (multi-hop with transfers) scenario and provides an optimal solution.
\citet{Ma2013} propose a framework of large-scale dynamic ride-sharing for taxis and evaluate it based on Beijing trips.
\citet{Zhang2014carpool} present a scalable framework of ride sharing with a focus for carpool capabilities (i.e. a taxicab with a passenger and a route, picking up another passenger for a similar trip) and evaluate it on trips in the city of Shenzhen. They show that their system, \textit{CallCab}, can reduce the total traveled distances by 60\% and wait times by 41\%.
\citet{khan2017ride} posit that ride-sharing is more effective when users can go out of their way slightly and also agree on common destinations.
A less studied scenario of shared transportation is car sharing.
\citet{Katzev2003} studied effects of car sharing on mobility and environmental impacts.
Several similar, recent studies have investigated mobility and impacts of car sharing \cite{Baptista2014,Nijland2017,Musso2012}.
\citet{HANNA2016254} formulated the car sharing problem as a bipartite graph and solved the matching for three criteria: min cost, min makespan (limit of distance for matches), and strategic manipulability (no user has the incentive to misreport its location and hence distance).
In \cite{agatz2016autonomous}, authors discuss several challenges of realizing car and ride sharing, including optimization of multiple trade-offs (number of vehicles, convenience, total travel time, operating costs).
This new concept of car sharing and ownership, coupled with the rise of autonomous and automated transportation systems (e.g. self-driving cars) might change the face of mobility in urban environments. In this paper, we present a novel formulation for scheduling a car sharing/automated fleet of vehicles with the minimum number of cars.
\begin{figure}[!t]
\subcaptionbox{8-9am distance}{%
\includegraphics[width=0.49\columnwidth]{images/trip_distance_dist_8}%
}
\subcaptionbox{4-5pm distance}{%
\includegraphics[width=0.49\columnwidth]{images/trip_distance_dist_16}%
}
\subcaptionbox{8-9am duration}{%
\includegraphics[width=0.49\columnwidth]{images/trip_duration_dist_8}%
}
\subcaptionbox{4-5pm duration}{%
\includegraphics[width=0.49\columnwidth]{images/trip_duration_dist_16}%
}%
\caption{Distribution of duration and traveled distance of trips in different times of day.}
\label{fig:distr_dist_dur}
\end{figure}
\section{Dataset and Analysis}\label{sec:analysis}
The dataset used for this study is mobility traces for the city of Cologne \cite{kolntmc}.
The data was collected as part of an initiative to realistically reproduce vehicular mobility in the city\footnote{http://kolntrace.project.citi-lab.fr/}. We choose this dataset as it provides us with fine-grained traces of trips in a long timespan of 24hours in a major city.
This dataset contains traces consisting of location, time and speed of vehicles on a trip (700K trips, 350M records in a 400 $km^2$ area). To analyze different times of the day and understand temporal dynamics, we select four 1-hour time windows throughout the day (Morning 8-9am, Noon 12-1pm, Evening 4-5pm, Night 8-9pm). For the OD representation of the trips, the first and last appearances of each ID are extracted.
\subsection{Exploratory Analysis of Spatial and Temporal Characteristics}
We start with an exploratory analysis of the dataset in order to better understand the vehicular mobility through the characteristics that pertain to the trips. We comparatively investigate (different times of the day) the trip duration and distance distributions in \ref{sssec:distr} (more accurately the distance between origin and destination represents the displacement and we use them interchangeably), number of trip waypoints (records in data) in \ref{sssec:wp}, spatial spread of the trips in \ref{sssec:spatial}, temporal distributions (in \ref{sssec:temporal}) and the relation of trip duration to trip start location in \ref{sssec:relation}.
\subsubsection{Duration and Distance Distribution}\label{sssec:distr}
Figure \ref{fig:distr_dist_dur} shows the distribution of trip duration and distance in each hour of the day. As observable, the distribution in all cases is right-skewed. A common distribution used to model trip duration is the log-normal \cite{enroute,dandy1984variability,rakha2010trip,arroyo2004modeling}. The lognormal fit is plotted on the histograms and compared to the Gamma distribution, which provides a better fit (visually) as it is a more flexible (general) distribution; the duration has a lighter tail compared to the lognormal. Figure \ref{fig:cdf_dist_dur} compares duration and traveled distance in different times of day. Trips that happen around the evening rush hour are longer than in the rest of the day (e.g., $\approx90\%$ of trips are at most $\approx 1000$ seconds vs $\approx 700$ seconds for noon). As for the distance, a similar trend holds, except that the probability of shorter (3 KM or less) trips is almost the same. Also, morning trips tend to be lengthier than at other times of the day.
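The fits can be reproduced with standard maximum-likelihood routines; a minimal sketch (the input file name is hypothetical):
\begin{verbatim}
import numpy as np
from scipy import stats

durations = np.loadtxt('trip_durations_8am.txt')  # hypothetical input
for dist in (stats.lognorm, stats.gamma):
    params = dist.fit(durations, floc=0)          # location fixed at 0
    ll = np.sum(dist.logpdf(durations, *params))
    print(dist.name, ll)    # higher log-likelihood = better fit
\end{verbatim}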
\begin{figure}
\setlength{\belowcaptionskip}{-13pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/trip_distance_cdf_comparison}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/trip_duration_cdf_comparison}}
\caption{Comparison of trip (a) distance (b) duration through out the day.}
\label{fig:cdf_dist_dur}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.66\columnwidth,height=0.4\columnwidth]{images/trip_waypoint_count_cdf_comparison}
\caption{CDF of number of waypoints per trip in different times of day.}
\label{fig:cdf_waypoint}
\end{figure}
\begin{figure*}[!ht]
\subcaptionbox{8-9am\label{fig:distance_distr:a}}{%
\includegraphics[width=0.49\columnwidth]{images/reappcount_vs_distance_jointplot_8}%
}
\subcaptionbox{12-13pm\label{fig:distance_distr:b}}{%
\includegraphics[width=0.49\columnwidth]{images/reappcount_vs_distance_jointplot_12}%
}
\subcaptionbox{4-5pm\label{fig:distance_distr:c}}{%
\includegraphics[width=0.49\columnwidth]{images/reappcount_vs_distance_jointplot_16}%
}
\subcaptionbox{8-9pm\label{fig:distance_distr:d}}{%
\includegraphics[width=0.49\columnwidth]{images/reappcount_vs_distance_jointplot_20}%
}
\caption{Joint distribution of count of waypoints for a trip vs distance in different times of day.}
\label{fig:joint_plot_waypoint_distance}
\end{figure*}
\begin{figure*}[!ht]
\subcaptionbox{8-9am}{%
\includegraphics[width=0.48\columnwidth]{images/heatmap_uniq_trip_appearance_8}%
}
\subcaptionbox{12-13pm}{%
\includegraphics[width=0.48\columnwidth]{images/heatmap_uniq_trip_appearance_12}%
}
\subcaptionbox{4-5pm}{%
\includegraphics[width=0.48\columnwidth]{images/heatmap_uniq_trip_appearance_16}%
}
\subcaptionbox{8-9pm}{%
\includegraphics[width=0.48\columnwidth]{images/heatmap_uniq_trip_appearance_20}%
}
\caption{Heatmap of unique number of trips spanning the city (on a 300x300 grid). It resembles the city map with its traffic load in different times of the day.}
\label{fig:heatmaps_uniq}
\end{figure*}
\subsubsection{Number of waypoints}\label{sssec:wp}
Waypoints are, simply put, reappearances of the same trip\_id in the traces. Although this measure is not an intrinsic characteristic of the trips but rather the result of the data generation method, it can provide us with insights for understanding the dataset and, later, for feature engineering and representations of the trips.
Figure \ref{fig:cdf_waypoint} shows the cumulative distribution of the number of waypoints per trip throughout the day. The waypoint count is higher for 4 pm trips ($\approx90\%$ of trips have roughly 1000 waypoints or fewer, while at noon the number becomes $\approx$750). Morning and night periods (8 am and 8 pm) follow similar patterns. To further address the implications of such data, we compare the waypoint count to the distance. Figure \ref{fig:joint_plot_waypoint_distance} presents joint-distribution plots of waypoint counts and traveled distances. The Pearson correlation coefficient is 0.59, 0.89, 0.71 and 0.81 for the respective hours of the day in the study, which suggests the existence of a relatively strong positive linear correlation between the two, peaking at noon. This, in turn, suggests that the number of waypoints can be a good representative of the distance traveled in this data (possibly due to the equidistant time intervals of trip logging in the data generation).
\subsubsection{Spatial Spread}\label{sssec:spatial}
Figure \ref{fig:heatmaps_uniq} shows 300x300 heatmaps, depicting the structure of the city based on the volume of traffic (unique count of trips in a grid cell in the hour).
At noon, the traffic seems to be more focused on the middle and south-east side of the city with less load on the highways/beltways.
There is a significant drop in traffic towards the night. The spatial spread of the trips reproduces the map of the city roads; this suggests the existence of a subspace within the spatial space of the trips (so a more compact representation might be more useful compared to the lat.-lon. coordinate space).
\begin{figure}[t]
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/trip_start_time_cdf_comparison}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/trip_end_time_cdf_comparison}}
\caption{Comparison of trip (a) start time (b) end time through out the day.}
\label{fig:cdf_temporal}
\end{figure}
\subsubsection{Temporal Distribution}\label{sssec:temporal}
CDFs of start and end times of the trips are shown in figure \ref{fig:cdf_temporal}. More morning and night trips start in the first 10 minutes of the hour than the first 10 minutes of noon or evening rush hour although in the case of 4 pm, towards the end of the hour, more trips are started (there is a jump in the number of trips that start after 2200 seconds). At half hour marker, almost 10\% more trips have already finished in the morning and nighttime comparing to noon or 4 pm (close to 50\% vs 40\%).
\subsubsection{Relation of space and time}\label{sssec:relation}
We observe the relationship between trip start location and duration using box plots in Fig. \ref{fig:box_plots} throughout the day. The repeating patterns correspond to the same longitude grid cells (left to right, going from west to east), and the overall trend moving from left to right in the figures corresponds to higher latitudes. The patterns suggest that trips happening in the east of the city are longer in general, and trips happening on the north side are slightly longer as well. This may be explained by the highways (German Autobahn) around the city to the east and north-west.
\begin{figure}[t]
\setlength{\belowcaptionskip}{-13pt}
\includegraphics[width=0.99\columnwidth]{images/8am_boxplot_10by10grid_vs_duration}
\caption{Box plot of duration of 8am trips versus their starting cell in a 10x10 grid of the city area. Repeating patterns are observed on west to east traversal of the grid. Overall trend shows a slight increase in duration towards the north.}
\label{fig:box_plots}
\end{figure}
\subsection{Similarity}\label{sec:similarity}
In order to better understand the trips, we propose a similarity measure defined as the arithmetic mean of point-wise (e.g., origin vs origin, destination vs destination) similarities, each obtained through the weighted geometric mean of the Euclidean (spatial) similarity and the temporal similarity. Both the location and time features are scaled to the range $[0,1]$.
The intuition behind this choice is that the geometric mean adjusts the spatial component of two points based on their temporal similarity, and the arithmetic mean provides a measure based on all the point-wise similarities. Mathematically:
$$ psim(p_1,p_2) = e^{\frac{w_1 \ln(\frac{1}{1+dist(p_1,p_2)}) + w_2 \ln(\frac{1}{1+time(p_1,p_2)})}{w_1+w_2}}$$
$$ sim(t_1,t_2) = \frac{\sum_{i=1}^{n} psim(t_1^i,t_2^i)}{n} \label{math:sim_measure}$$
where $dist(p_1,p_2)$ is the spatial distance of two points (Euclidean in this case), $time(p_1,p_2)$ is the absolute difference of the points in time, and $w_1$ and $w_2$ are the weights given to them, respectively. We use $1/(1+dist)$ to convert a distance into a similarity. The similarity of trips $sim(t_1,t_2)$ is the average over all $n$ points (waypoints, or just endpoints in the case of the OD representation). In this study, for clustering and matching of trips, a higher weight ($w_1 = [0.6, 0.7]$ vs $w_2 = [0.3, 0.4]$) is given to the spatial component, so that the time difference acts as the adjusting factor in determining the similarity measure for a pair of trips. The effect of the choice of weights on the results of the matching is discussed in Sec. \ref{sec:compare}.
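A direct implementation of the score is straightforward; the sketch below assumes trips given as lists of scaled $(x,y,t)$ triples (the OD representation is shown, but any number of matched waypoints works):
\begin{verbatim}
import numpy as np

def psim(p1, p2, w1=0.6, w2=0.4):
    # weighted geometric mean of spatial and temporal similarities
    d = np.hypot(p1[0]-p2[0], p1[1]-p2[1])   # Euclidean distance
    t = abs(p1[2]-p2[2])                     # absolute time difference
    return np.exp((w1*np.log(1/(1+d)) + w2*np.log(1/(1+t)))/(w1+w2))

def sim(t1, t2, **kw):
    # trip similarity: mean of point-wise similarities
    return np.mean([psim(p, q, **kw) for p, q in zip(t1, t2)])

tripA = [(0.10, 0.20, 0.00), (0.60, 0.70, 0.30)]  # [origin, destination]
tripB = [(0.12, 0.18, 0.05), (0.58, 0.72, 0.35)]
print(sim(tripA, tripB))
\end{verbatim}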
\begin{figure}[t]
\setlength{\belowcaptionskip}{-13pt}
\includegraphics[width=0.55\columnwidth]{images/8am_sim_pdf_both}
\caption{Distribution of the values of similarity. A Laplacian kernel is applied to create more distinction between values.}
\label{fig:similarity measure distribution}
\end{figure}
\subsection{Clustering and Visualization}\label{sec:clustering}
With the help of the similarity defined above in section \ref{sec:similarity}, we are able to generate spatio-temporal clusters by using Spectral Clustering (SC). Because of the spatial constraints enforced by the map of the city, the spatial features do not form a convex space, and hence we need clustering algorithms that work with connectivity instead of shape. We tried the DBSCAN and SC algorithms; only SC was able to find meaningful clusters. SC makes use of the spectrum (eigenvalues) of the similarity matrix of the data to project the data into fewer dimensions before clustering.
In order to visualize the data, Multi-Dimensional Scaling (MDS) and Principal Component Analysis (PCA) are used. Figure \ref{fig:clustering_mds_pca} shows the resulting clusters for morning trips on a 2D scaling and a 2D projection of the OD representation onto the space of the first two principal components. The first two PCs capture over 82\% of the variance of the data.
To test the quality of this clustering, we attempt to interpret the resulting clusters.
Clusters are shown on a scaled lat. lon. plane with mean and median of start and end points in Figure \ref{fig:clustering_spatial}. The ellipsoid axes correspond to the std. of lat. and lon. of the cluster. These clusters cover the area of the city with some of them spanning same areas especially near the city's center (downtown). Interestingly, as Figure \ref{fig:clustering_time} shows, there is a clear separation in time for those clusters that are spatially overlapping. This suggests the clustering scheme is successful in finding clusters that show different spatio-temporal traits.
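A minimal sketch of the clustering step, assuming the pairwise similarity matrix has been precomputed with the score above (the file name, kernel width, and cluster count are illustrative assumptions):
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

S = np.load('similarity_8am.npy')      # hypothetical precomputed matrix
A = np.exp(-5.0*(1.0 - S))             # Laplacian-style contrast kernel
labels = SpectralClustering(n_clusters=8, affinity='precomputed',
                            random_state=0).fit_predict(A)
print(np.bincount(labels))             # cluster sizes
\end{verbatim}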
\begin{figure}[t]
\setlength{\belowcaptionskip}{-5pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_2mds_projection_spectral_clusters}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_2pc_projection_spectral_clusters}}
\caption{Results of Spectral Clustering for 8-9am visualized (each color represents a cluster) on (a) 2D MDS plot (b) First two Principal Components. Other times of the day exhibit similar patterns.}
\label{fig:clustering_mds_pca}
\end{figure}
\begin{figure}[t]
\setlength{\belowcaptionskip}{-5pt}
\includegraphics[width=0.55\columnwidth]{images/cluster_distribution_in_space}
\caption{Spread of clusters in space}
\label{fig:clustering_spatial}
\end{figure}
\begin{figure}[!h]
\setlength{\belowcaptionskip}{-13pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_kde_trip_startime}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_kde_trip_endtime}}
\caption{Kernel Density Estimates of 8am trips (a) start times (b) end times. There is a clear separation in time for the clusters that are spatially intermixed.}
\label{fig:clustering_time}
\end{figure}
\section{Matching Trips with OD Representation}\label{sec:matching}
The existence of clusters and a spectrum of similarity suggest the availability of nearby trips.
Knowing this, we can use variations of the similarity score for applications such as shared transportation.
Thus, we refine the similarity to reflect each scenario and perform a dynamic matching of trips for each case.
We focus on the morning time window (8 am) and, for all cases, the trips are divided into two categories: Riders and Rides (2000 riders vs 10000 rides). The goal is to find matches for the riders to ride with. Trip end times are used as the expected (latest possible) time of arrival.
\subsection{Catch-a-Ride}
In this scenario, riders have a time margin to reach the pickup point (the ride's origin) and a margin to arrive at their destination after drop-off (the ride's destination). We redefine the similarity score as the Catch-a-Ride (CaR) score between $t_1$ and $t_2$, measuring whether $t_1$ can move to $t_2$ and ride with it. Therefore, $t_2$ has to start after $t_1$ and finish before it (and be close by) to have a high score. We achieve this by modifying the similarity score so that the time component is the time of $t_2 - t_1$ for the origin points and the time of $t_1 - t_2$ for the destinations, rather than the absolute values. This creates an asymmetric affinity matrix $A$ where $A_{i,j}$ is $CaR\_score(t_i,t_j)$.
\subsubsection{Clustering and Visualization}
We decompose the matrix into a symmetric component (the average of the matrix and its transpose) and an anti-symmetric (skew-symmetric) component (half the difference of the matrix and its transpose), and then use the symmetric part for clustering and visualization. Fortunately, the ratio of the sum of squares of the symmetric matrix to that of the original is 98\%, which suggests the matrix is almost symmetric in nature and the symmetric component is a good representative of it. This is expected, as the CaR\_score is a subset of the similarity score where the time difference is not taken in absolute value. Clusters (using spectral clustering) are visualized in Figure \ref{fig:clustering_CaR} on MDS and 2PC plots.
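The decomposition and the quoted ratio can be computed directly (the input file is hypothetical):
\begin{verbatim}
import numpy as np

A = np.load('car_score_8am.npy')    # hypothetical asymmetric CaR matrix
S = 0.5*(A + A.T)                   # symmetric component
K = 0.5*(A - A.T)                   # anti-symmetric (skew) component
print(np.sum(S**2)/np.sum(A**2))    # fraction of squared "energy" in S
\end{verbatim}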
\begin{figure}[!t]
\setlength{\belowcaptionskip}{-13pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_2d_mds_WtR_2k_sample}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/8am_2pc_pca_WtR_2k_sample}}
\caption[Visualization of Catch-a-ride clusters]{Visualization of the clusters on (a) 2D DMS (b) First 2 PCs. CaR score results in nested ellipses in MDS. By definition, $CaR\_score$ and $carpool\_score$ are complements of each other with respect to original similarity score and hence the clusters (and MDS and PCA) would be the same (with some rotation).}
\label{fig:clustering_CaR}
\end{figure}
\subsubsection{Dynamic Matching}
For each rider, the potential matches are filtered using distance and time thresholds. The trend of the number of matches per threshold is presented in Figure \ref{fig:CaR_count_per_dist_and_time} (the number of riders that have at least L matches). As observable, the number of matches is more sensitive to the distance thresholds compared to time. After 250 seconds, the number of riders that get a match at any value of L scarcely changes. At any distance threshold, the number of riders with at least L matches drops significantly as L increases ($\approx$900 riders get at least 5 matches while $\approx$1500 get at least one match). Next, we choose the thresholds of 1800 meters and 900 seconds (which enables us to provide a match for almost 3/4 of the riders) and use the CaR\_score as the matching criterion. Note that each choice is independent of the others, and hence the greedy approach is optimal.
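A minimal sketch of this matching step; the trip tuples and the score function below are simplified stand-ins for the OD data and the CaR\_score:
\begin{verbatim}
import math

def dist(p, q):                      # planar Euclidean distance (m)
    return math.hypot(p[0]-q[0], p[1]-q[1])

def match_greedy(riders, rides, score, d_max=1800.0, t_max=900.0):
    # each trip: (origin_xy, dest_xy, t_start, t_end); for every rider,
    # keep rides within the thresholds and take the highest-scoring one
    out = {}
    for i, (ro, rd, rt0, rt1) in enumerate(riders):
        pool = [j for j, (so, sd, st0, st1) in enumerate(rides)
                if dist(ro, so) <= d_max and dist(rd, sd) <= d_max
                and 0.0 <= st0 - rt0 <= t_max
                and 0.0 <= rt1 - st1 <= t_max]
        if pool:
            out[i] = max(pool, key=lambda j: score(riders[i], rides[j]))
    return out

riders = [((0.0, 0.0), (5000.0, 0.0), 0.0, 1200.0)]
rides  = [((400.0, 0.0), (5200.0, 0.0), 100.0, 1000.0),
          ((1500.0, 0.0), (5100.0, 0.0), 200.0, 900.0)]
print(match_greedy(riders, rides, lambda r, s: -dist(r[0], s[0])))
\end{verbatim}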
The results of such a matching scheme are presented in Table \ref{tab:dynamic_match_car_carpool}. The total travel required by both the requests (riders) and the matches (rides) is $12990.199$ KM, of which 33.5\% is traveled by the matches.
Since roughly 75\% of the requests have a match (1496 out of 2000), the ratio of matched to total travel for those is 45.4\%, which means that in cases where at least a potential match exists, \textbf{$\approx$55\%} of the traveled kilometers can be saved.
Since 5235.3 KM of the 8633.8 are traveled by requests that have matches, the total travel done by unmatched requests is 3398.5 KM. The traveled distances with ride-sharing then amount to $3398.5 + 4356.3$ KM, whose ratio to the total (12990.2) is 59.7\%, which yields \textbf{a saving of 40.3\%} compared to having no shared transportation, at the cost of commuting to pick-up and drop-off locations ($\approx$1000 KM each, an average of 0.66 KM per rider).
\begin{figure}[t]
\centering
\setlength{\belowcaptionskip}{-10pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/CaR_match_per_dist_thresh}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/CaR_match_per_time_thresh}}
\caption{Number of potential matches for CaR (similarly for CarPool) (a) per distance (time threshold fixed at 900 sec) (b) per time (distance threshold fixed at 1800 meters).}
\label{fig:CaR_count_per_dist_and_time}
\end{figure}
\newcolumntype{b}{>{\footnotesize}X}
\newcolumntype{s}{>{\hsize=.33\hsize}X}
\begin{table*}[ht]
\centering
\caption{Results of matching trips for ride sharing Catch-a-Ride (CaR) and carpool (CP) scenarios.}
\label{tab:dynamic_match_car_carpool}
\begin{tabularx}{\textwidth}{bss}
\hline
& CaR & CarPool \\ \hline
match travels (km) & 4356.368 & 5486.785 \\
req travels (km) & 8633.831 & 8633.831 \\
match to total travel ratio & 33.5\% & 38.8\%\\
origin-origin distance (km) & 1017.665 & 948.438 \\
dest-dest distance (km) & 1045.912 & 1019.560\\
origin-origin times (sec) & 70185 & 83171\\
dest-dest times (sec) & 88873 & 88064 \\
\# req with at least a match & 1496 & 1458\\
req travels for least a match (km) & 5235.319 & 4938.073 \\
match to total travel ratio (at least a match) & 45.4\% & 52.6\% \\
\hline
\end{tabularx}
\end{table*}
\subsection{Car Pooling}
This scenario is similar to the previous one, but instead the ride is willing to go out of its way (within a threshold) to pick the rider up and drop them off before arriving at its own destination on time. The clustering is omitted, as the constructed affinity matrix is the transpose of that of the CaR scenario and hence would result in the same symmetric component. The dynamic matching is discussed next. The threshold filters are applied to find potential matches.
The trends of the number of potential matches over the time and distance thresholds follow a very similar curve to CaR (Fig. \ref{fig:CaR_count_per_dist_and_time}). There is a slightly lower number of requests with a match at each threshold, but the overall trends are alike, with more sensitivity to the distance thresholds. Distance and time thresholds of 1800 meters and 900 seconds are again applied, and the CP\_score is used as the matching criterion.
Results are presented in Table \ref{tab:dynamic_match_car_carpool}. The total KMs traveled would be the request travels plus the pick-up and drop-off ($8633.8 + 948.4 + 1019.5 = 10601.7$). Without any carpooling, the total travel by matches and requests would be $8633.8 + 5486.8 = 14120.6$ KM, and hence with ride-sharing there is a 25\% decrease in the total KMs traveled, at the cost of drivers going a total of 948.4 KMs out of their way (an average of 0.65 KM per driver) for pick-up and 1019.5 KMs for drop-off (0.69 KM on average). CDFs of the resulting pick-up and drop-off distances and times are presented in Figure \ref{fig:CP_result_dist_time}.
\begin{figure}
\centering
\subcaptionbox{}{\includegraphics[width=0.49\columnwidth]{images/CP_result_distance}}
\subcaptionbox{}{\includegraphics[width=0.49\columnwidth]{images/CP_result_time}}
\caption{CDF of CP results of pick-up and drop-off (a) distance (b) time.}
\label{fig:CP_result_dist_time}
\vspace{-10pt}
\end{figure}
\section{Comparison with Other Similarity Scores}
\label{sec:compare}
The scenarios above utilize only the OD representation of the trips (and hence only the endpoints), but the similarity score is not limited to the OD representation. As mentioned in Sec. \ref{sec:relatedwork}, various measures of similarity or distance have been proposed for the comparison of trajectories. Longest Common Sub-Sequence (LCSS) is a measure of closeness based on the number of points shared by two trajectories (points within thresholds of each other). The Frechet distance is the maximum distance between two curves over all parameterizations of the curves. This can be imagined as the minimum length of the leash required to take a dog for a walk when the human and the dog each have their own path to travel, over all possible combinations of speeds for each (no backward movement allowed). Dynamic Time Warping (DTW) works by finding the minimum-cost warp path, where a warp path is the sequence of changes that warps one trajectory into the other.

In this section, we investigate the performance of the Catch-a-Ride scenario using DTW, the Frechet distance, LCSS, and our score (referred to as WGM for Weighted Geometric Mean, with weights of 0.6 and 0.4 for distance and time, respectively). In all cases, we use a 50-waypoint (sampled) representation of the trips instead of only OD, and the same filtering is applied (hence the measures related to the request set are kept constant). The results are presented in Table \ref{tab:comparison}. In the case of WGM with time, the weights of the geometric mean are 0.1 for the spatial and 0.9 for the temporal part. In DTW with time, the cost is the product of the distance and the time difference instead of the distance only. As the table shows, the Frechet distance (which is an upper bound on the distance of the trajectories by definition) provides the closest matches in distance (but not in time). On the other hand, DTW with time results in the least orig-orig and dest-dest time but \textbf{not} distance. WGM achieves its results with both time and space in mind, so at a tiny cost in orig-orig and dest-dest distance it beats the others (original versions) in the temporal aspect. Also, with the flexibility of the weights, we can tune it for more focus on time or space. From a runtime point of view, our score is linear in the number of waypoints of the trips, whereas LCSS and DTW have runtime and space complexity of $O(N^2)$, and Frechet takes even longer with $O(N^2 \log(N))$. To better understand the sensitivity of the results to the choice of the weight of the temporal component in WGM, the trends of change in pick-up and drop-off distance and time are plotted against the temporal weight (w\_t) in Fig. \ref{fig:oodd_dist_time_per_wt}. With an increase in w\_t, more preference is given to the time element of the similarity, and hence the matches are closer in time (the total pick-up and drop-off time decreases) while they are not necessarily closer in space. This can be used to tune the score for different applications with varying levels of dependence on time versus location.
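For reference, a compact implementation of the DTW variants used in the comparison; the cost of ``DTW with time'' follows the product form described above, and the sample trajectories are illustrative:
\begin{verbatim}
import numpy as np

def dtw(a, b, cost):
    # classic O(len(a)*len(b)) dynamic-time-warping distance
    D = np.full((len(a)+1, len(b)+1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a)+1):
        for j in range(1, len(b)+1):
            D[i, j] = cost(a[i-1], b[j-1]) + min(D[i-1, j],
                                                 D[i, j-1], D[i-1, j-1])
    return D[-1, -1]

space = lambda p, q: np.hypot(p[0]-q[0], p[1]-q[1])      # plain DTW
with_time = lambda p, q: space(p, q)*abs(p[2]-q[2])      # DTW with time

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (2.0, 0.0, 2.0)]
b = [(0.1, 0.0, 0.5), (1.1, 0.0, 1.5), (2.1, 0.0, 2.5)]
print(dtw(a, b, space), dtw(a, b, with_time))
\end{verbatim}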
\begin{figure}
\centering
\setlength{\belowcaptionskip}{-10pt}
\subcaptionbox{}{\includegraphics[width=0.49\columnwidth]{images/oodd_dist_per_wt}}
\subcaptionbox{}{\includegraphics[width=0.49\columnwidth]{images/oodd_time_per_wt}}
\caption{Pick-up and drop-off (a) distance (b) time as functions of the temporal weight w\_t.}
\label{fig:oodd_dist_time_per_wt}
\end{figure}
\begin{table*}[ht]
\centering
\caption{Comparison on different similarity/distance scores on Catch-a-Ride scenario.}
\label{tab:comparison}
\begin{tabularx}{\textwidth}{bssssss}
\hline
& WGM & LCSS & Frechet & DTW & DTW w/ time & WGM w/ time\\ \hline
match travels (km) & 5655.997 & 5393.861 & 5671.631 & 5652.613 & 6038.372 & 5945.291 \\
req travels (km) & 11706.334 & 11706.334 & 11706.334 & 11706.334 & 11706.334 & 11706.334 \\
match to total travel ratio & 32.57\% & 31.54\% & 32.63\% & 32.56\% & 34.02\% & 33.68\% \\
origin-origin distance (km) & 1149.131 & 1442.782 & 1104.907 & 1127.623 & 1202.733 & 1203.724 \\
dest-dest distance (km) & 1188.705 & 1421.046 & 1144.947 & 1165.916 & 1222.000 & 1255.597 \\
origin-origin times (sec) & 71365 & 86071 & 74544 & 72567 & 52866 & 57896\\
dest-dest times (sec) & 80860 & 101652 & 89570 & 86655 & 57546 & 62424 \\
\# req with at least a match & 1357 & 1357 & 1357 & 1357 & 1357 & 1357\\
req travels for least a match (km) & 7244.179 & 7244.179 & 7244.179 & 7244.179 & 7244.179 & 7244.179 \\
match to total travel ratio (at least a match) & 43.84\% & 42.67\% & 43.91\% & 43.82\% & 45.46\% & 45.07\% \\
\hline
\end{tabularx}
\end{table*}
\section{Car Sharing}\label{sec:carsharing}
The similarity score, in addition to clustering or dynamic matching of trips, can be used to find a schedule for car-sharing applications. This is particularly useful in the presence of autonomous driving systems. Our aim here is to service all the requests (trips) with the minimum number of shared cars while, among all possible assignments, focusing on the ones that minimize the distance and time difference (based on the similarity score) between consecutive trips assigned to the same car. The same set of 2000 trips is used as the request set. The problem is formulated as a graph:
\begin{itemize}
\item Each node represents a trip (N nodes). A directed edge between two given nodes represents the possibility for the destination node to use the car/get serviced after the source node of that edge (i.e., an edge $(a,b)$ exists only if the end point of $a$ is within the time and distance thresholds of the start point of $b$ and $b$ starts after $a$ ends). The similarity score is used as the edge weight.
\item The problem as stated above (i.e., finding the minimum number of cars) translates into the min-path partitioning of a graph (a path in our graph represents the chain of trips serviced by the same car). This problem is NP-hard in general graphs (proof by reduction from Hamiltonian Path (HP): an HP exists if and only if the graph can be covered with 1 path, hence finding the min number of paths decides HP).
\item Since an edge only exists if the trip corresponding to the destination node happens after the source node's trip, cycles are not possible and a partial ordering is maintained (i.e., the graph is a DAG, a directed acyclic graph). Fortunately, there exists a polynomial-time algorithm that solves the path-partitioning problem for DAGs.
\end{itemize}
The algorithm works by converting the DAG G into a bipartite graph B, where a matching in B translates into a path partitioning of G:
\begin{itemize}
\item Given G, construct B by creating nodes $i$ in the left part and $i'$ in the right part of B for every node $i$ in G. An \textit{undirected} edge exists between $i$ and $j'$ in B if the directed edge $(i,j)$ exists in G.
\item Running a maximum-cardinality maximum-weight matching on B yields the minimum number of cars and their chains of trips (see the sketch after this list). This can be achieved by adding N times the maximum score to all edge weights (so that no matching of smaller cardinality can have a larger total weight) and then running a maximum-weight matching algorithm (e.g., the Hungarian algorithm or the equivalent max-flow formulation). Let $T = \max(score)$; after the shift, every edge weight lies in $[NT,(N+1)T]$, so a matching of size $K$ has total weight at most $K(N+1)T = KNT+KT$, while a matching of size $K+1$ has total weight at least $(K+1)NT = KNT+NT$. Since $K\le N-1$ implies $KT<NT$, any matching of size $K+1$ outweighs every matching of size $K$, and the algorithm therefore always prefers the larger cardinality.
\item Each chain of trips (a path in the DAG) has exactly one start and one end (they can coincide), hence the number of chains equals the number of start (or end) nodes. Any right-part node of B not in the matching corresponds to a trip with no predecessor, i.e., the start of a chain, so the number of such nodes is the required number of cars ($N-$ matching cardinality). To retrieve the chains, for each unmatched $i'$ in the right part, find $i$ in the left part and follow its matches recursively (i.e., continue with the $j'$ that $i$ is matched to).
\end{itemize}
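A compact sketch of the whole construction, relying on networkx's maximum-cardinality maximum-weight matching (which plays the role of the $N\,T$ weight shift described above); the \texttt{feasible} and \texttt{score} arguments are stand-ins for the threshold test and the WGM score:
\begin{verbatim}
import networkx as nx

def car_chains(trips, feasible, score):
    # trips: list of ids; feasible(i, j): trip j can be served right
    # after trip i; score(i, j): similarity used as the edge weight
    G = nx.Graph()
    G.add_nodes_from([('L', i) for i in trips] + [('R', i) for i in trips])
    for i in trips:
        for j in trips:
            if i != j and feasible(i, j):
                G.add_edge(('L', i), ('R', j), weight=score(i, j))
    M = dict(nx.max_weight_matching(G, maxcardinality=True))
    M.update({v: k for k, v in list(M.items())})
    nxt = {u[1]: v[1] for u, v in M.items() if u[0] == 'L'}
    chains = []
    for s in (j for j in trips if ('R', j) not in M):  # chain starts
        chain = [s]
        while chain[-1] in nxt:
            chain.append(nxt[chain[-1]])
        chains.append(chain)
    return chains   # number of chains = minimum number of cars

# toy check: trip 1 can follow trip 0 -> one car serving chain [0, 1]
print(car_chains([0, 1], lambda i, j: (i, j) == (0, 1), lambda i, j: 1.0))
\end{verbatim}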
The graph constructed with thresholds of 900 seconds and 1.8 KM contains 38730 edges (2K nodes). The matching cardinality is 1370, which means 630 cars are needed to service the 2000 trips, with 149 of them not belonging to any chain (singleton trips), leaving 481 chains with an average length of $\approx$3.88. The distributions of chain lengths, chain travels, and consecutive pick-up travels and times are presented in Figure \ref{fig:CSH_results} (CDFs are omitted for brevity). The majority of non-singleton trips belong to chains of size $\leq$4. Approximately 90\% of chains travel $<$30 KMs, and their total consecutive pick-up distance and time are at most $\approx3.5$ KMs and 1300 seconds.
\begin{figure}[]
\centering
\setlength{\belowcaptionskip}{-1pt}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/chain_len_hist}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/chain_travel_hist}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/pickup_travels_hist}}
\subcaptionbox{} {\includegraphics[width=0.49\columnwidth]{images/pickup_time_hist}}
\caption{Carsharing results (a) chain length (b) total km traveled (c) total km traveled to pickup next rider (d) total time spent to pickup next rider.}
\label{fig:CSH_results}
\vspace{-6pt}
\end{figure}
\section{Concluding Remarks}\label{sec:conclusion}
In this paper, driven by data, we explored various spatio-temporal characteristics of the trips (of the city of Cologne).
The systematic analysis can be applied to other datasets for understanding such characteristics and for comparison purposes.
Next, we have proposed a similarity score, \textbf{W}eighted \textbf{G}eometric \textbf{M}ean (\textbf{WGM}), that reflects both spatial and temporal similarities, with knobs that can be used to favor the spatial or the temporal similarity (through weight adjustments).
It is also efficient to calculate.
This score is used in clustering of trips, successfully identifying clusters separated by distance or time.
Then, scenarios of shared transportation including Catch-a-Ride and CarPool are discussed, and WGM is used as the criterion for the dynamic matching of trips.
A comparison of the performance of various similarity scores/distances on the Catch-a-Ride scenario is also given.
WGM is able to capture similar trips in both distance and time dimensions. The weights (of the geometric mean) can be used to capture more similarity in one dimension or the other which gives our score more flexibility while it remains computationally simple compared to other metrics.
Lastly, we formulated the problem of car sharing (autonomous fleet of vehicles) using graph theory and algorithms (with our score as the weights for the graph) which achieves the optimal number of cars required with the minimum spatio-temporal difference between consecutive pick-ups.
To follow up, we plan to confirm the reproducibility of the results on different datasets. A potential dataset is the simulator-generated traces of the trips based on traffic cameras around the globe \cite{enroute,Thakur2012hotplanet}.
Additionally, we plan to find embedding spaces that reflect the similarity of trips. In general, trips are represented through thousands (and not necessarily the same number) of waypoints. This makes detailed comparisons computationally intensive.
Encompassing trips in an embedding space (with a fixed number of dimensions) can ease the problem. Another potential application is in a real-time matching of trips where the details of paths are not shared (e.g. for privacy concerns, or compression). The notion of similarity and its variations can be used for the design of systems with pollution reduction and public safety in mind potentially through participatory sensing and crowdsourcing.
These steps would lead to the concept of profiles of vehicular mobility as a way to describe human mobility behavior.
\begin{acks}
This project was partially supported by NSF grant 1320694 and UF Informatics Institute.
\end{acks}
\section{Introduction}
The existence of a 'negative' viscosity in dispersions of ferromagnetic particles in a nonpolar solvent is a curious phenomenon which has recently been discovered. It was predicted by Shliomis and Morozov$^{\ref{bib:shliomis}}$ and corroborated experimentally by Bacri $\it{et\; al.}$$^{\ref{bib:bacri}}$. Its interest lies in the fact that, contrary to what one would expect, the viscosity of the dispersion as a whole diminishes when an applied magnetic field oscillates at an optimum frequency. As has been known for a long time, the presence of a constant magnetic field prevents free rotation of the dipoles, which creates a resistance to the flow of the fluid and increases dissipation. On the other hand, Bacri $\it{et\; al.}$$^{\ref{bib:bacri}}$ have shown that the increment in the viscosity due to the rotational degrees of freedom is proportional to the difference between the vorticity and the angular velocity of the particles in suspension. Thus, a magnetic field oscillating at a high enough frequency can impart to the particles an angular velocity greater than the vorticity, leading to a 'negative' viscosity. One sees that the particles gain kinetic energy at the expense of the oscillating field.
This striking phenomenon is the motivation for undertaking this investigation. We have found that the effect described is more general than was thought, since it applies to any ferrofluid possessing a characteristic frequency different from the inverse of the Brownian relaxation time.
The key point in our results comes from the fact that we work with axisymmetric particles immersed in an elongational flow. Thus, at a low enough temperature, the particles, due to thermal agitation, jump back and forth along the axis of the
flow, which thereby introduces an internal frequency in the system. This internal
frequency plays exactly the same role as the frequency of the alternating magnetic
field in the systems studied by Shliomis and Morozov$^{\ref{bib:shliomis}}$ and Bacri $\it{et\; al.}$$^{\ref{bib:bacri}}$.
There are examples of systems other than ferrofluids that can potentially show this phenomenology: suspensions of gravity dipoles$^{\ref{bib:brenner},\ref{bib:brenner2}}$ or suspensions of magnetotactic bacteria$^{\ref{bib:blakemore}}$. These bacteria, some of them rod shaped, undergo the phenomenon of magnetotaxis. They contain iron particles which impart a magnetic moment to them.
The paper is organized as follows; in section II we describe the system and
perform an analysis of the stability of the equilibrium orientations of the dipoles. Section III is
devoted to the derivation of the relaxation equation for the magnetization. In
section IV we introduce fluctuations in our analysis and, by applying
fluctuating hydrodynamics in the space of orientations, we derive the
Langevin equation for the magnetic moment. In section V we compute
the stress tensor and the viscosity tensor. Finally, in section
VI, we discuss our main results.
\section{The Magnetic Rotor}
The system we want to study consists of a dilute colloidal suspension
of ferromagnetic rodlike particles immersed in a nonpolar fluid phase. This suspension flows in an
elongational flow and under the influence of a constant magnetic field $
\vec{H}$ oriented in the direction of the symmetry axis of the flow, which
will be taken parallel to the z-axis. All of the particles
are supposed to have the same magnetic moment
\begin{equation} \label{eq:a1}
\vec{m}=m_s\hat{R},
\end{equation}
\noindent where $\hat{R}$ is the unit vector along the direction of the axis
of the particle and $m_s$ is the magnetic moment strength. The velocity
field of the flow is given by
\begin{equation} \label{eq:a2}
\vec{v}=\vec{\vec{\beta}}\cdot\vec{r},
\end{equation}
\noindent where $\vec{\vec{\beta}}$ is the velocity gradient of the flow
\begin{equation} \label{eq:a3}
\vec{\vec{\beta}}=\beta \left(
\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 2
\end{array}
\right)
\end{equation}
\noindent with $\beta$ being the rate of elongation.
A suspended particle in the carrier fluid experiences the hydrodynamic
torque given by
\begin{equation} \label{eq:a4}
\vec{T}^{h} = - \vec{\vec{\xi}}\cdot(\vec{\omega}-\vec{\Omega}),
\end{equation}
\noindent where the friction tensor $\vec{\vec{\xi}}$ is given in terms of its components $\xi_0$ and $\xi_1$
\begin{equation} \label{eq:a5}
\vec{\vec{\xi}} = \xi_{1} \hat{R}\hat{R} + \xi_{0} (\vec{\vec{1}} - \hat{R}
\hat{R}).
\end{equation}
\noindent Here $\vec{\omega}$, $\vec{\Omega}$ and $\vec{\vec{1}}$ are the
angular velocity of the particle, the drag angular velocity due to the
motion of the fluid and the unit tensor, respectively. Due to the fact that
the only possible relative motion between the ends of the rod is a rigid
rotation, one has
\begin{equation} \label{eq:a6}
\vec{v}_+-\vec{v}_- = \vec{\Omega}\times \hat{R} L,
\end{equation}
\noindent where $\vec{v}_{\pm}$ are the velocities of the end points of
the rod and $L$ is its length. This expression can alternatively be written as
\begin{equation} \label{eq:a7}
\vec{v}_+-\vec{v}_- = \vec{\vec{\beta}}\cdot\hat{R} L.
\end{equation}
\noindent Thus, through a comparison of eqs. (\ref{eq:a6}) and (\ref{eq:a7})
we obtain
\begin{equation} \label{eq:a8}
\vec{\Omega}
= \vec{{\cal {R}}}(1/2 \vec{\vec{\beta}} : \hat{R}\hat{R}),
\end{equation}
\noindent with $\vec{{\cal {R}}} \equiv \hat{R}\times\frac{\partial}{
\partial \hat{R}}$ being the rotational operator.
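As a quick numerical sanity check of eq. (\ref{eq:a8}) (an illustrative Python sketch, not part of the derivation), note that since $\vec{\vec{\beta}}$ is symmetric, the rotational operator produces $\vec{\Omega}=\hat{R}\times(\vec{\vec{\beta}}\cdot\hat{R})$; this reproduces the component of the end-to-end velocity (\ref{eq:a7}) perpendicular to the rod axis, the parallel component being excluded by rigidity.
\begin{verbatim}
# Numerical sanity check (illustrative): Omega = R x (beta.R)
# reproduces the part of the prescribed end-to-end velocity (a7)
# perpendicular to the rod; the parallel part would stretch the rod
# and is excluded by rigidity.
import numpy as np

beta = 1.0 * np.diag([-1.0, -1.0, 2.0])   # velocity gradient, eq. (a3)
theta, phi = 0.7, 1.3                     # an arbitrary orientation
R = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

Omega = np.cross(R, beta @ R)             # eq. (a8)
v_rigid = np.cross(Omega, R)              # rigid rotation of the rod
v_perp = beta @ R - (R @ beta @ R) * R    # transverse part of eq. (a7)
print(np.allclose(v_rigid, v_perp))       # -> True
\end{verbatim}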
In addition to eq. (\ref{eq:a4}) the dipole is acted upon by a magnetic
torque
\begin{equation} \label{eq:a9}
\vec{T}^m = \vec{m}\times\vec{H}.
\end{equation}
\noindent Therefore, by taking into account angular momentum conservation, in view of
eqs. (\ref{eq:a4}), (\ref{eq:a5}), (\ref{eq:a8}), and (\ref{eq:a9}) we can write in the high friction limit
\begin{equation} \label{eq:a10}
\vec{\omega} = \frac{1}{\xi_0}(-\vec{{\cal {R}}}U),
\end{equation}
\noindent where the potential U, defined through
\begin{equation} \label{eq:a11}
U = -\vec{m}\cdot\vec{H} - \frac{1}{2}\xi_0\vec{\vec{\beta}}: \hat{R}\hat{R},
\end{equation}
\noindent accounts for the different mechanisms for the rotation of the
particle: the magnetic field and the external flow. In polar spherical
coordinates this energy can be rewritten as
\begin{equation} \label{eq:a12}
U=-m_sH\cos{\theta}+\frac{\xi_0\beta}{2}(1-3\cos^2{\theta}),
\end{equation}
\noindent $\theta$ being the angle between the axis of the particle and the
z-axis.
The equilibrium states of the system can be identified through the condition $
\frac{dU}{d\theta}=0$, i.e.
\begin{equation} \label{eq:a13}
m_{s}H\sin\theta+3\xi_0\beta\sin\theta\cos\theta=0
\end{equation}
\noindent whose solutions, $\theta_-$, $\theta_+$ and $\theta_0$, are
\begin{equation} \label{eq:a14}
\theta_{-}=0,
\end{equation}
\begin{equation} \label{eq:a15}
\theta_{+}=\pi,
\end{equation}
\begin{equation} \label{eq:a16}
\theta_0= \arccos \left (-\frac{m_sH}{3\xi_0\beta}\right ).
\end{equation}
\noindent The stability of these orientations follows from the second derivative of the potential evaluated at each of the solutions
\begin{equation} \label{eq:a18}
\frac{d^{2}U}{d\theta^2}\vert_{\theta_-}=m_{s}H+3\xi_0\beta=m_{s}H(1+
\frac{H_{c}}{H}),
\end{equation}
\begin{equation} \label{eq:a19}
\frac{d^{2}U}{d\theta^2}\vert_{\theta_+}=-m_{s}H+3\xi_0\beta=m_{s}H(
\frac{H_{c}}{H}-1),
\end{equation}
\begin{equation} \label{eq:a20}
\frac{d^{2}U}{d\theta^2}\vert_{\theta_0}=\frac{(m_{s}H)^2}{3\xi_0\beta}
-3\xi_0\beta.
\end{equation}
\noindent Since $m_s$, $H$, $\xi_0$ and $\beta$ are always positive
quantities, after examining eqs. (\ref{eq:a18})-(\ref{eq:a20}) we
conclude that provided
\begin{equation} \label{eq:a21}
H\leq H_c,
\end{equation}
\noindent $H_c \equiv 3\xi_0\beta/m_s$ being a critical field, $\theta_-$ and $
\theta_+$ are both stable, whereas $\theta_0$ is unstable. We then conclude that under these conditions the potential of the magnetic rotor is bistable. For magnetic fields larger than $H_c$ the system becomes stable.
Equivalently, we could have considered the presence of a critical value for
the elongational rate, $\beta_{c}=\frac{Hm_{s}}{3\xi_0}$, such that for a
fixed value of the magnetic field the system is bistable whenever the actual
value of the elongational rate $\beta$ exceeds $\beta_c$. In the rest of the paper we will assume that $H/H_c < 1$, i.e.\ we will
remain in the bistable region, or in the range of large elongational rate.
To conclude this section, we give some estimates of the critical field corresponding to situations of experimental accessibility. For particles of magnetite having a volume $V_p = 5\times 10^{-19}\ cm^3$ and an aspect ratio $\epsilon = 0.1$ one has $m_s = 2.4\times 10^{-16}\ G\, cm^3$, and consequently $H_c = 58.3\ Oe$. On the other hand, for particles of cobalt with volume $V_p = 2.7\times 10^{-19}\ cm^3$ and $\epsilon = 0.1$ we obtain $m_s = 3.8\times 10^{-16}\ G\, cm^3$, and $H_c = 17.1\ Oe$. In both cases we have assumed $\beta = 10^3\ s^{-1}$. At a smaller elongational rate, $\beta = 10\ s^{-1}$, one has $H_c = 5.8\ Oe$ for magnetite and $H_c = 1.7\ Oe$ for cobalt.
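The bistability condition can also be checked numerically. In the following illustrative sketch, eq. (\ref{eq:a12}) is written in units of $\xi_0\beta$ with $h = H/H_c$, so that $U/(\xi_0\beta) = -3h\cos\theta + (1-3\cos^2\theta)/2$, and the number of local minima is counted on a grid.
\begin{verbatim}
# Illustrative check of the bistability of eq. (a12). In units of
# xi0*beta and with h = H/H_c, the potential reads
#   u(theta) = -3*h*cos(theta) + (1 - 3*cos(theta)**2)/2,
# which has two minima (theta = 0 and theta = pi) for h < 1 and a
# single one (theta = 0) for h > 1.
import numpy as np

def number_of_minima(h, n=20001):
    theta = np.linspace(0.0, np.pi, n)
    u = -3.0 * h * np.cos(theta) + 0.5 * (1.0 - 3.0 * np.cos(theta)**2)
    interior = (u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])
    ends = int(u[0] < u[1]) + int(u[-1] < u[-2])
    return int(interior.sum()) + ends

print(number_of_minima(0.5))  # -> 2  (bistable, H < H_c)
print(number_of_minima(1.5))  # -> 1  (monostable, H > H_c)
\end{verbatim}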
\section{Relaxation equation}
The relaxation of the magnetic rods can be interpreted as a diffusion process through
a potential barrier in orientation space. In this scenario we can apply the
formalism of non-equilibrium thermodynamics$^{\ref{bib:reacciones}}$. Let $
\rho(\hat{R},t)$ be the angular distribution function that may be viewed as a
density in the space of orientations. Thus, we can define the chemical
potential as
\begin{equation} \label{eq:b1}
\mu=K_{B}T\ln{\rho}+U.
\end{equation}
\noindent The diffusion equation for $\rho$ is written in the form
\begin{equation} \label{eq:b2}
\frac{\partial\rho}{\partial t}=\frac{D}{\sin\theta}\frac{\partial}{
\partial\theta}\{\sin\theta[\frac{\rho}{K_{B}T}\frac{\partial U}{
\partial\theta}+\frac{\partial\rho}{\partial\theta}]\},
\end{equation}
\noindent with $D = k_B T/\xi_0$ and where the axial symmetry of the
potential (\ref{eq:a12}) has been taken into account. Equation (\ref{eq:b2})
can be rewritten as a continuity equation
\begin{equation} \label{eq:b3}
\frac{\partial\rho}{\partial t}=-\frac{1}{\sin\theta}\frac{\partial}{
\partial \theta}(J_{\theta}\sin\theta ),
\end{equation}
\noindent which defines the diffusion current
\begin{equation} \label{eq:b4}
J_{\theta}=-De^{(-U/K_{B}T)}\frac{\partial}{\partial\theta}e^{\mu /K_{B}T}.
\end{equation}
If the height of the potential barrier is large enough compared to the
thermal energy $K_{B}T$, we can suppose that equilibrium is reached
independently on each side of the barrier. Thus, the chemical potential can
be approximated as
\begin{equation} \label{eq:b5}
\mu(\hat{R},t)=\mu(\theta_{-})\Theta (\theta_{0}-\theta)+\mu(\theta_{+})\Theta (\theta-
\theta_{0}),
\end{equation}
\noindent where $\Theta(\theta)$ is the unit step function. The system is
allowed to reach global equilibrium due to the presence of a
quasi-stationary current $J(t)$, assumed uniform:
\begin{equation} \label{eq:b6}
J_{\theta}\sin\theta =J(t)\{\Theta(\theta-\theta_{-})+\Theta(\theta-\theta_{+})\}.
\end{equation}
\noindent This adiabatic hypothesis is one of the essential points in the
present development. By using (\ref{eq:b1}) and (\ref{eq:b5}), one has
\begin{equation} \label{eq:b7}
\rho(\hat{R},t)=\rho_{-}(t)e^{[-(U-U_{-})/K_{B}T)]}\Theta(\theta_{0}-\theta)+
\rho_{+}(t)e^{[-(U-U_{+})/K_{B}T)]}\Theta(\theta-\theta_{0}),
\end{equation}
\noindent where $\rho_{\pm}\equiv\rho( \theta_{\pm},t)$, and $U_{\pm}\equiv
U(\theta_{\pm})$, are the densities and potential energies in the two stable
states. In order to obtain an expression for $J(t)$, we substitute (\ref{eq:b4})
into (\ref{eq:b6}) to arrive at
\begin{equation} \label{eq:b8}
J(t)\frac{e^{U/K_{B}T}}{\sin\theta}\{\Theta(\theta-\theta_{-})+\Theta(\theta-
\theta_{+} )\}=-D\frac{\partial}{\partial\theta}e^{{\mu}/K_{B}T}.
\end{equation}
\noindent Integrating now over $\theta$ and taking into account that, due to
the height of the barrier, the main contribution to these integrals is
around the maximum of the potential $\theta_0$, one has the law of mass action$^{\ref{bib:mazur}}$
\begin{equation} \label{eq:b9}
J(t)=K_{B}\,{\it l}\,(1-e^{A/K_{B}T}),
\end{equation}
\noindent which is a nonlinear phenomenological relationship between the
quasi-stationary current $J(t)$ and the affinity $A\equiv\mu_{+}-\mu_{-}$, where $\mu_{\pm}\equiv\mu (\theta_{\pm})$. The phenomenological coefficient ${\it {l}}$ is given by
\begin{equation} \label{eq:b10}
{\it l}=\frac{D\sin\theta_0}{K_B}\rho_{-}(\frac{\vert
U_{0}^{^{\prime\prime}}\vert}{2\pi K_{B}T})^{1/2}e^{-(U_{0}-U_{-})/K_{B}T},
\end{equation}
\noindent where $U_{0}^{^{\prime\prime}}\equiv\frac{d^{2}U}{d\theta^2}
\vert_{\theta_0}$ and $U_{0}\equiv U(\theta_{0})$.
We are now prepared to proceed to the deduction of the relaxation
equations. To this end, we define the following populations:
\begin{equation} \label{eq:b12}
N_{+}\equiv\int_{\phi=0}^{2\pi}\int_{\theta=\theta_0}^{\pi}\rho(\hat{R},t)d
\hat{R}=\int_{\theta_0}^ {\pi}2\pi\sin \theta\rho (\hat{R} ,t)d\theta,
\end{equation}
\noindent and
\begin{equation} \label{eq:b13}
N_{-}\equiv\int_{\phi=0}^{2\pi}\int_{\theta=0}^{\theta_0}\rho(\hat{R},t)d
\hat{R}=\int_{0}^{\theta_0}2\pi\sin \theta\rho (\hat{R} ,t)d\theta,
\end{equation}
\noindent related by the normalization condition
\begin{equation} \label{eq:l1}
N = N_+ + N_{-}.
\end{equation}
\noindent The kinetic equations for $N_+$ and $N_-$ follow by differentiating
eqs. (\ref{eq:b12}) and (\ref{eq:b13}) and employing eq. (\ref{eq:b3}).
Thus, we have
\begin{equation} \label{eq:b14}
\frac{dN_+}{dt}=-\frac{dN_-}{dt}=2\pi J(t),
\end{equation}
\noindent where, consistently with the adiabatic approximation
\begin{equation} \label{eq:b15}
N_{-}=\frac{2\pi K_{B}T}{U_{-}^{^{\prime\prime}}}\rho_-,
\end{equation}
\begin{equation} \label{eq:b16}
N_{+}=\frac{2\pi K_{B}T}{U_{+}^{^{\prime\prime}}}\rho_+,
\end{equation}
\noindent in which $U_{\pm}^{^{\prime\prime}}\equiv\frac{d^2U}{d\theta^2}
\vert_{\theta_{\pm}}$. Using equations (\ref{eq:b9}) and (\ref{eq:b14})-(\ref
{eq:b16}) we can derive the rate equations for the two populations
\begin{equation} \label{eq:b17}
\frac{dN_+}{dt}=-\frac{dN_-}{dt}=K_{+-}N_{-}-K_{-+}N_+,
\end{equation}
\noindent where the rate constants are given by
\begin{equation} \label{eq:b18}
K_{+-}=\frac{D\sin\theta_0}{2\pi K_{B}T}(\frac{\vert
U_{0}^{^{\prime\prime}}\vert}{2\pi K_{B}T})^{1/2}
U_{-}^{^{\prime\prime}}e^{-(U_{0}-U_{-})/K_BT},
\end{equation}
\noindent and
\begin{equation} \label{eq:b19}
K_{-+}=\frac{D\sin\theta_0}{2\pi K_BT}(\frac{\vert
U_{0}^{^{\prime\prime}}\vert}{2\pi K_{B}T})^{1/2}U_+^{^{\prime
\prime}}e^{-(U_0-U_+)/K_BT}.
\end{equation}
We can now define the magnetic moment parallel to the direction of the
applied magnetic field as
\begin{equation} \label{eq:b20}
m\equiv m_{s}\frac{N_{+}-N_-}{N}.
\end{equation}
\noindent By differentiating equation (\ref{eq:b20}) and employing equation
(\ref{eq:b17}) we obtain the relaxation equation for ${\it {m}}$
\begin{equation} \label{eq:b21}
\frac{dm}{dt}=-\frac{1}{\tau}m+\frac{1}{\alpha}m_s,
\end{equation}
\noindent where
\begin{equation} \label{eq:b22}
\tau=\frac{1}{K_{+-}+K_{-+}}
\end{equation}
\noindent is the relaxation time, and
\begin{equation} \label{eq:b24}
\alpha=\frac{1}{K_{+-}-K_{-+}}
\end{equation}
\noindent accounts for the asymmetry of the potential. An equation similar to (\ref{eq:b21}) was postulated by Shliomis in the context of ferrohydrodynamics$^{\ref{bib:shliomis2}}$. Our approach to deduce eq. (\ref{eq:b21}) has been based upon mesoscopic arguments and, as we will see in the next section, it provides the natural way of introducing fluctuations into the scheme.
It is useful
for our purposes to introduce the nondimensional
variables $x\equiv\frac{H_c
}{H}$ and $\mu\equiv\frac{m_sH}{K_BT}$, in terms of which the relaxation
time is written as
\begin{equation}\label{eq:b25}
\tau=\frac{(2\pi)^{3/2}}{D}(x\mu)^{-1/2}\left(1-\frac{1}{x^2}\right)^{-1}\exp\left\{\frac{\mu}{2}\left(\frac{1}{x}+\frac{x}{3}\right)\right\}\left[\mu(x-1)\exp\left\{\mu\left(1-\frac{x}{3}\right)\right\}+\mu(x+1)\exp\left\{-\mu\left(1+\frac{x}{3}\right)\right\}\right]^{-1},
\end{equation}
\noindent where $x$ must be greater than unity. The inverse of $\tau$
gives us a characteristic frequency proper to the system, the jump frequency
between the two stable states of the potential energy. This frequency is
given by
\begin{equation} \label{eq:b26}
\omega=D(\frac{x\mu}{2\pi})^{3/2}(1-\frac{1}{x^2})e^{-\frac{x\mu}{2}}e^
{-\mu/(2x)}[e^{\mu}+e^{-\mu}+\frac{1}{x}(e^{-\mu}-e^{\mu})].
\end{equation}
In Figure 1 we can observe the behavior of $\omega$ when one varies the
elongational rate. The way in which it depends on the elongational rate and
the existence of two time scales in our problem, Brownian $D^{-1}$ and $
\tau$, are the key points in understanding the dynamical mechanism which
leads to the results we will obtain in section 5.
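The expressions (\ref{eq:b25}) and (\ref{eq:b26}) are straightforward to evaluate numerically; the following illustrative sketch (in units of $D$, with arbitrary parameter values) also checks the consistency $\omega\tau = 1$.
\begin{verbatim}
# Sketch evaluating eqs. (b25)-(b26) in units of D (illustrative
# parameter values); it also checks the consistency omega*tau = 1.
import numpy as np

def omega(x, mu):                          # eq. (b26)
    pref = (x * mu / (2.0 * np.pi)) ** 1.5 * (1.0 - 1.0 / x**2)
    expo = np.exp(-x * mu / 2.0 - mu / (2.0 * x))
    brack = np.exp(mu) + np.exp(-mu) + (np.exp(-mu) - np.exp(mu)) / x
    return pref * expo * brack

def tau(x, mu):                            # eq. (b25)
    pref = (2.0 * np.pi) ** 1.5 / ((x * mu) ** 0.5 * (1.0 - 1.0 / x**2))
    expo = np.exp(0.5 * mu * (1.0 / x + x / 3.0))
    brack = (mu * (x - 1.0) * np.exp(mu * (1.0 - x / 3.0))
             + mu * (x + 1.0) * np.exp(-mu * (1.0 + x / 3.0)))
    return pref * expo / brack

x, mu = 3.0, 2.0               # x = H_c/H > 1, mu = m_s H / (k_B T)
print(np.isclose(omega(x, mu) * tau(x, mu), 1.0))   # -> True
\end{verbatim}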
\section{Dynamics of the fluctuations of the magnetic moment}
Thermal motion of the dipoles inside the carrier fluid produces fluctuations
in the population of the minima of the potential. This fact manifests itself
on a mesoscopic level through fluctuations of the magnetic moment. These can
be taken into account by adding a random current$^{\ref{bib:reacciones}}$ to equation (\ref{eq:b9}),
\begin{equation} \label{eq:c1}
J(t)=K_B\,{\it l}\,(1-e^{A/K_BT})+\frac{\sqrt{\bar{D}}}{m_s}\xi(t),
\end{equation}
\noindent where $\bar{D}$ is given by
\begin{equation} \label{eq:c2}
\bar{D}=2m_{s}^{2}k_B\bar{{\it {l}}},
\end{equation}
\noindent $\bar{l}$ is the equilibrium value of the phenomenological
coefficient ${\it {l}}$, which according to (\ref{eq:b10}) should be
proportional to $\rho_-$ computed at equilibrium, $\rho_{-}^{eq}$. By
applying the detailed balance principle to eq. (\ref{eq:b17}) and using the
normalization relation eq. (\ref{eq:l1}), one readily obtains
\begin{equation} \label{eq:l2}
\rho_{-}^{eq} =\left ( \frac{N}{2\pi k_BT}\right )\;
\frac{U_{-}^{^{\prime\prime}}U_{+}^{^{
\prime\prime}}}{U_{+}^{^{\prime\prime}}
+ U_{-}^{^{\prime\prime}}\exp\{\frac{
U_- - U_+}{k_BT}\}}.
\end{equation}
\noindent Additionally, in eq. (\ref{eq:c1}) $\xi (t)$ is a Gaussian white
noise stochastic process with zero mean and correlation function
\begin{equation} \label{eq:c3}
\langle\xi(t)\xi (t^{\prime})\rangle=\delta (t-t^{\prime}).
\end{equation}
From equations (\ref{eq:b14}), (\ref{eq:b20}) and (\ref{eq:c1}) we obtain
the Langevin equation for m
\begin{equation} \label{eq:c4}
\frac{dm}{dt}=-\frac{1}{\tau}m+\frac{1}{\alpha}m_s
+\sqrt{\bar{D}}\xi(t).
\end{equation}
\noindent Notice that coefficient $\bar{D}$ is the input noise strength
corresponding to the stochastic process $m$. This coefficient can be
explicitly written as
\begin{equation}\label{eq:c8}
\bar{D}=m_{s}^2D\sin\theta_0(\frac{\vert U''_{0}\vert}{2\pi K_BT})^{1/2}
(1+\frac{\tau}{\alpha})\frac{U''_+}{2\pi K_BT}\exp\{-\frac{(U_0-U_+)}{K_BT}\}
\end{equation}
\noindent and in terms of the nondimensional parameters $x$ and $\mu$
\begin{equation}\label{eq:c9}
\bar{D}=\frac{m_{s}^2D}{(2\pi)^{3/2}}(x\mu)^{1/2}(1-\frac{1}{x^2})\mu(x-1)\frac{2(1+x)e^{-\mu}}{(1+x)e^{-\mu}+(x-1)e^{\mu}}
\exp\{-\frac{\mu}{2}(\frac{1}{x}+\frac{x}{3})\}\exp\{\mu(1-\frac{x}{3})\}.
\end{equation}
\noindent Finally, in reference to eq. (\ref{eq:c4}), by performing the change of variables
\begin{equation} \label{eq:c5}
\tilde{m}=m-\frac{\tau}{\alpha}m_s,
\end{equation}
\noindent this equation becomes
\begin{equation} \label{eq:c6}
\frac{d\tilde{m}}{dt}=-\frac{1}{\tau}\tilde{m}
+\sqrt{\bar{D}}\xi (t),
\end{equation}
\noindent i.e., $\tilde{m}$ is an Ornstein-Uhlenbeck process$^{\ref{bib:Gardiner}}$. As is well known, the stationary distribution for such a
process is
\begin{equation} \label{eq:c7}
p(\tilde{m})=\frac{1}{\sqrt{\pi\tau\bar{D}}}
\exp\{-\frac{\tilde{m}^2}{\tau
\bar{D}}\},
\end{equation}
\noindent which will be used in the next section to compute the viscosity.
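The Ornstein-Uhlenbeck structure of eq. (\ref{eq:c6}) is easy to verify by direct simulation. In the following Euler-Maruyama sketch the parameter values are arbitrary, and the empirical stationary variance is compared with $\tau\bar{D}/2$, the variance of the Gaussian distribution (\ref{eq:c7}).
\begin{verbatim}
# Euler-Maruyama sketch of eq. (c6) with arbitrary parameter values;
# the empirical stationary variance approaches tau*Dbar/2, the
# variance of the Gaussian distribution (c7).
import numpy as np

rng = np.random.default_rng(0)
tau, Dbar, dt, nsteps = 1.0, 0.4, 1.0e-3, 500_000

m = 0.0
samples = np.empty(nsteps)
for k in range(nsteps):
    m += -m / tau * dt + np.sqrt(Dbar * dt) * rng.standard_normal()
    samples[k] = m

print(samples[nsteps // 10:].var())   # ~ tau*Dbar/2 = 0.2
\end{verbatim}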
\section{The Viscosity Tensor}
In order to calculate the viscosity tensor, we first have to compute the stress
tensor. This quantity has two contributions, one that comes from the solvent, and the other due to the presence of the particles. The latter is written$^{\ref{bib:Doi}}$ as
\begin{equation} \label{eq:d1}
\vec{\vec{\sigma}}=nK_BT(3\langle\hat{R}\hat{R}\rangle-
\vec{\vec{1}})+ n
\xi_1\vec{\vec{\beta}}
:\langle{\hat{R}\hat{R}\hat{R}
\hat{R}}\rangle-nK_BT\mu\langle\hat{R}\hat{H}
\cdot(\vec{\vec{1}}-\hat{R}\hat{
R})\rangle,
\end{equation}
\noindent where $n$ is the concentration of suspended particles and $L$ is
their length. Thus, eq. (\ref{eq:d1}) gives us the first-order contributions
to the viscosity tensor. The moments in eq. (\ref{eq:d1}) will be calculated
by using the stationary distribution (\ref{eq:c7}). It is important to keep
in mind that although we are using a stationary distribution, the dynamical
effects are taken into account through their dependence on the relaxation
time $\tau$.
\noindent We will illustrate the behavior of the viscosity by explicitly computing the parallel viscosity defined through
\begin{equation}\label{eq:d3}
\eta_{\parallel\parallel} = \frac{\sigma_{\parallel\parallel}}{\beta}
\end{equation}
\noindent with $\sigma_{\parallel\parallel} = \hat{H}\cdot\vec{\vec{\sigma}}
\cdot\hat{H}$, that from eq. (\ref{eq:d1}) is written as
\begin{equation} \label{eq:d4}
\sigma_{\parallel\parallel}=
nK_BT(3\langle \hat{R}_{\parallel}\hat{R}_{\parallel}\rangle
-1)+ n\xi_1
\beta(\langle \hat{R}_{\parallel}\hat{R}_{\parallel}
\rangle-\langle \hat{R}_{\parallel}\hat{R}_{\parallel}
\hat{R}_{\parallel}\hat{R}_{\parallel}\rangle
) -nK_BT\mu(\langle \hat{R}_{\parallel}\rangle-\langle
\hat{R}_{\parallel}\hat{R}_{\parallel}\hat{R}_{\parallel}\rangle).
\end{equation}
\noindent Here we have taken into account that $\vec{\vec{\beta}}:\hat{R}\hat{R}=\beta(1-3\hat{R}
_{\parallel}^2)$, and $\hat{R}_{\parallel}= \hat{R}\cdot\hat{H}$. In the appendix we summarize the result of the computation of all the
moments that appear in the expression for
$\sigma_{\parallel\parallel}$. Making use of these results, we find that
\begin{equation}\label{eq:d5}
\sigma_{\parallel\parallel}=nK_BT\left[\frac{\tau\bar{D}}{2m_{s}^2}+
\left(\frac{\tau}{\alpha}\right)^2-1\right]+
n\xi_1\beta\left[\left(\frac{\tau}{\alpha}\right)^2+
\frac{\tau\bar{D}}{2m_{s}^2}-
9\left(\frac{\tau\bar{D}}{2m_{s}^2}\right)^2-
18\frac{\tau}{\alpha}\frac{\tau\bar{D}}{2m_{s}^2}-
3\left(\frac{\tau}{\alpha}\right)^4\right] - nK_BT\mu\left[\frac{\tau}{\alpha}-
\left(\frac{\tau}{\alpha}\right)^3-
3\frac{\tau}{\alpha}\frac{\tau\bar{D}}{2m_{s}^2}\right].
\end{equation}
\noindent Thus, we finally obtain
\begin{equation}\label{eq:d6}
\frac{\eta_{\parallel\parallel}}{3n\xi_0}=
\frac{1}{x\mu}\left[(1+\mu)\frac{\tau}{\alpha}+\frac{\tau\bar{D}}{2m_{s}^2}+
3\mu\frac{\tau}{\alpha}\frac{\tau\bar{D}}{2m_{s}^2}+
\mu\left(\frac{\tau}{\alpha}\right)^3-1\right] +
\frac{1}{3}\frac{\xi_1}{\xi_0}\left[\frac{\tau\bar{D}}{2m_{s}^2}-18\frac{\tau}{\alpha}\frac{\tau\bar{D}}{2m_{s}^2}-9\left(\frac{\tau\bar{D}}{2m_{s}^2}\right)^2+\left(\frac{\tau}{\alpha}\right)^2-3\left(\frac{\tau}{\alpha}\right)^4\right].
\end{equation}
\noindent In figures 2 and 3 we have plotted the nondimensional
quantity $\eta_{par}\equiv\frac{\eta_{\parallel\parallel}}{3n\xi_0}$ in terms of $x$ and $
\omega$, respectively, for particles with an aspect ratio $\epsilon=0.1$,
for which one has $\frac{1}{3}\frac{\xi_1}{\xi_0} = 0.1805$$^{\ref{bib:viscositat}}$.
One can observe that this viscosity becomes negative for
large elongational rates and eventually saturates at a positive value. Similar results have been obtained by Shliomis and Morozov$^{\ref{bib:shliomis}}$ and Bacri ${\it {et\;al.}}$$^{\ref{bib:bacri}}$ with
ferrofluids under an oscillating magnetic field. Negative viscosities in
these cases were due to the existence of two
characteristic time scales in the system that enter in competition. One of them is related to the frequency of the alternating magnetic field and the other is related to the vorticity of the fluid. In our case, the dipoles relax to the equilibrium orientations in a time scale $D^{-1}$, which is shorter than the relaxation time $\tau$ associated with the diffusion through the potential barrier. The rods are then constantly jumping with a frequency $\omega =\tau^{-1}$ acquiring a net angular velocity different from zero, whose expression is given in the appendix. The existence of this angular velocity then shows the conversion of thermal energy into kinetic energy for rotation. This explains why, in our case, the viscosity diminishes.
Looking at fig. 3, one first observes a hysteresis cycle described by $\eta_{par}$ when
$\omega$ varies. During the first stage, when the actual value of $\omega
$ is between zero and its maximum value, $\eta_{par}$,
decreases from a positive value to a minimum negative value, as occurs in
the case analyzed by Shliomis and Morozov$^{\ref{bib:shliomis}}$. These results are in good
agreement with those found by them. The difference between our approach
and the one by Shliomis and Morozov$^{\ref{bib:shliomis}}$ is that there the externally imposed
frequency can be made arbitrarily large. In our case the frequency of
relaxation, the result of an internal dynamical mechanism, does not grow
freely. In the second stage, $\omega$ is a decreasing function of the
elongational rate. This fact is responsible for the particular behavior of $
\eta_{par}$ observed. Note that, as can be seen in Figure 2, $
\eta_{par}$ does not saturate to a negative value when $
\beta\rightarrow\infty$. It tends very slowly to a positive value, as
slowly as $\omega$ tends to zero. Our results also agree with the analysis
of the phenomenon carried out by Rosensweig$^{\ref{bib:rosensweig}}$.
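For reference, eq. (\ref{eq:d6}) can be evaluated along the following lines. In the sketch below we use the compact dimensionless combinations $\tau/\alpha=(a-b)/(a+b)$ and $\tau\bar{D}/(2m_s^2)=ab/(a+b)^2$, with $a=(1+x)e^{-\mu}$ and $b=(x-1)e^{\mu}$, which follow from eqs. (\ref{eq:b18})-(\ref{eq:b19}), (\ref{eq:b22}), (\ref{eq:b24}) and (\ref{eq:c9}); this rewriting is ours and should be checked against those definitions.
\begin{verbatim}
# Sketch evaluating the reduced parallel viscosity, eq. (d6), with
# r = tau/alpha and s = tau*Dbar/(2 m_s^2) written through the rate
# constants; c = (1/3)(xi_1/xi_0) = 0.1805 for aspect ratio 0.1.
import numpy as np

def eta_par(x, mu, c=0.1805):
    a = (1.0 + x) * np.exp(-mu)   # proportional to K_{+-}
    b = (x - 1.0) * np.exp(mu)    # proportional to K_{-+}
    r = (a - b) / (a + b)         # tau/alpha
    s = a * b / (a + b) ** 2      # tau*Dbar/(2 m_s^2)
    first = ((1.0 + mu) * r + s + 3.0 * mu * r * s
             + mu * r**3 - 1.0) / (x * mu)
    second = c * (s - 18.0 * r * s - 9.0 * s**2 + r**2 - 3.0 * r**4)
    return first + second

for x in (1.05, 2.0, 10.0, 1000.0):   # increasing elongational rate
    print(x, eta_par(x, mu=1.0))      # positive, then negative, then
                                      # slowly back to a positive value
\end{verbatim}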
\section{Conclusions}
In this paper we have shown that in a ferrofluid made from rodlike particles under a constant magnetic field and in an elongational flow, rotations of the particles lead to
non-monotonic behavior of the viscosity. This fact allows us to generalize the phenomenon discovered by Bacri $\it{et al.}$$^{\ref{bib:bacri}}$.
As a consequence of the orienting effect of the flow, the particles rotate
with a drag angular velocity $\vec{\Omega}$ that eliminates the degeneracy
of the direction of their actual angular velocity in a volume element of the
ferrofluid (see eq. (\ref{eq:a10})). Thus, all the contributions to
dissipation in a volume element add up constructively.
Moreover, because the flow is elongational, we can write a potential energy
of orientation related to it. This potential is bistable and the addition of a magnetic
field breaks up its symmetry, thus magnetizing the system. On the other hand, thermal motion causes jumps
between the two stable states of the potential with a certain frequency. Consequently, the particles acquire a kinetic energy for rotation at the expense of thermal energy. This
fact eliminates the rigidity that the system would have if no jumping motion were
present, i.e.\ if the particles simply rested in the stable states of the time-independent potential. In fact,
this is the way that we interpret the 'negative viscosity' effect predicted
for our system.
We have derived the kinetic equations for the population of the minima of the
potential and from it the relaxation equation for the magnetic moment. Likewise,
by assuming fluctuations of the density of states in the orientation space, we
have formulated the Langevin equation for the magnetic moment. This equation
describes an Ornstein-Uhlenbeck process whose moments are well known. The
computation of the first four moments allows us to derive the viscosity.
It should be emphasized that the viscosity is computed in the limit of a high energy barrier, i.e.\ for a strong rate of elongation. This is the opposite situation to the case studied in the two previous papers$^{\ref{bib:viscositat},\ref{bib:dumbbell}}$, where we covered the weak flow regime, showing an increase of the viscosity. Analogously, by modifying the magnetic field without altering the rate of elongation we achieve the same effect, that is, we vary the viscosity. Thus, a possible application of our results is in adaptive dampers.
Through this behavior of the viscosity, the macroscopic consequences of the dynamical bifurcation with exchange of stability that our system experiences become apparent.
Likewise, the effect that we are studying is only possible in dilute solutions, where each rod can rotate freely without interference from the others. Of course, in neglecting hydrodynamic interactions among the particles we assume an extremely dilute ferrofluid. In any case, this is the first step in our study of nonlinear and hysteresis effects in the rheology of ferrofluid suspensions.
The following step will be to consider higher concentrations. Nonetheless, including hydrodynamic and excluded-volume interactions in the dynamics of an assembly of rod particles is difficult. Therefore, we will model the elongated dipoles by means of rigid dumbbells; such models are well studied in the field of polymeric liquids.
In order to increase the modulation effect of the magnetic field or of the elongation rate on the viscosity, another possibility we are considering is to add nonmagnetic spheres at high concentration to the ferrofluid.
\acknowledgments
This work has been supported by DGICYT of the Spanish Government under grant
PB95-0881, and also by the INCO-COPERNICUS program of the European Commission under contract IC15-CT96-0719. One of us (T. Alarc\'on) wishes to thank DGICYT of the Spanish Government for financial support.
|
1,116,691,497,635 | arxiv | \section{Introduction}
Let $f \colon [0,1] \rar \mathbb{R}^{+}$ be an arbitrary function and consider the relation $\mathbf{E}_{f}$ on $[0,1]^{\omega}$ defined by setting, for every $(x_{n})_{n < \omega}, (y_{n})_{n < \omega} \in [0,1]^{\omega}$, \Keq\label{def} (x_{n}) \mathbf{E}_{f} (y_{n}) \Leftrightarrow \sum_{n < \omega} f(|y_{n} - x_{n}|) < \infty.\Zeq Several natural questions arise, e.g.\
\bit
\item[(i)] when is $\mathbf{E}_{f}$ an equivalence relation?
\item[(ii)] which equivalence relations can be obtained in the form $\mathbf{E}_{f}$?
\item[(iii)] for what $f,g \colon [0,1] \rar [0,1]$ is $\mathbf{E}_{f}$ Borel reducible to $\mathbf{E}_{g}$?
\eit In the present paper we answer (i), we initiate a study of (ii) and we obtain various conditions for (iii).
The prototypes of equivalence relations of the form $\mathbf{E}_{f}$ are induced by the Banach spaces $\ell^{p}$ $(1 \leq p < \infty)$, i.e.\ they are defined by the functions $f = \mathrm{Id}^{p}$ for $1 \leq p < \infty$, where $\mathrm{Id} \colon [0,1] \rar [0,1]$ is the identity function. The Borel reducibility among these equivalence relations is fully described by a classical result of R.\ Dougherty and G.\ Hjorth \cite[Theorem 1.1 p.\ 1836 and Theorem 2.2 p.\ 1840]{DH} stating that for every $1 \leq p, q < \infty$, \Keq\label{dh}\mathbf{E}_{\mathrm{Id}^{p}} \leq _{B} \mathbf{E}_{\mathrm{Id}^{q}} \Leftrightarrow p \leq q.\Zeq We note, however, that e.g.\ for the function $f(0) = 0$, $f(x) = 1$ $(0 < x \leq 1)$ we have $\mathbf{E}_{f}$ is the equivalence relation of eventual equality on $[0,1]^{\omega}$, also denoted by $E_{1}$ in the literature; that is, the investigation of equivalence relations of the form $\mathbf{E}_{f}$ concerns equivalence relations which are not necessarily reducible to $\bE_{\mathrm{Id}^{p}}$ for some $1 \leq p < \infty$.
Our investigations were motivated by a question of S.\ Gao in \cite{G} p.\ 74, asking whether for $1 \leq p < \infty$, $\mathbf{E}_{\mathrm{Id}^{p}}$ is the greatest lower bound of $\{\mathbf{E}_{\mathrm{Id}^{q}} \colon p < q < \infty\}$; we note that formally the question in \cite{G} p.\ 74 refers to equivalence relations on $\mathbb{R}^{\omega}$, but as we will see later in Lemma \ref{mege}, the two formulations are equivalent.
We answer this question in the negative by showing, for fixed $1 \leq p < \infty$, that $\mathbf{E}_{\mathrm{Id}^{p}} < _{B} \bE_{f} <_{B} \mathbf{E}_{\mathrm{Id}^{q}}$ for every $q > p$ whenever \Keq\label{nov}\lim_{x \rar +0}\frac{f(x)}{x^{p}} = 0 \textrm{ and } \lim_{x \rar +0}\frac{f(x)}{x^{q}} = \infty ~(p < q < \infty),\Zeq and $f$ satisfies some additional technical assumptions (see e.g.\ Corollary \ref{megm}). However, toward this result we aim to carry out a general study of the relations $\mathbf{E}_{f}$ and their Borel reducibility. To this end, in Section \ref{gen} we characterize the functions for which $\bE_{f}$ is an equivalence relation and, roughly speaking, we show that $f$ is continuous if and only if $E_{1} \not \leq _{B} \bE_{f}$. In Section \ref{red} and in Section \ref{nonred} we prove general reducibility and nonreducibility results for equivalence relations of the form $\mathbf{E}_{f}$. The results of these sections heavily build on techniques developed in \cite{DH}. Finally, in Section \ref{con} we conclude our investigations by applying the technical results of the previous sections to concrete functions; in particular, we answer the above mentioned question of S.\ Gao, and we show that for $1 \leq p < q < \infty$, every linear order which embeds into $(\mc{P}(\omega)/\mathrm{fin},\subset)$ also embeds into the set of equivalence relations $\{\bE_{f} \colon \bE_{\mathrm{Id}^{p}} \leq_{B} \bE_{f} \leq _{B} \bE_{\mathrm{Id}^{q}}\}$ ordered by $<_{B}$.
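To fix ideas, for $1 \leq p < \infty$ one function satisfying (\ref{nov}) is $f(0) = 0$, $f(x) = x^{p}/\log(e/x)$ $(0 < x \leq 1)$: here $f(x)/x^{p} = 1/\log(e/x) \rar 0$ as $x \rar +0$, while for every $q > p$ we have $f(x)/x^{q} = x^{p-q}/\log(e/x) \rar \infty$, since the power $x^{p-q}$ dominates the logarithm.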
Our results provide only examples. We are far from giving a full description of the Borel equivalence relations of the form $\bE_{f}$ or a complete picture of the Borel reducibility relation among the $\bE_{f}$s. In particular, it remains open whether there are two functions $f$ and $g$ such that $\bE_{f}$ and $\bE_{g}$ are incomparable under $\leq_{B}$. Nevertheless, we have one qualitative observation. Conditions (\ref{dh}) and (\ref{nov}) may suggest that reducibility among the $\bE_{f}$s is essentially governed by the growth order of the $f$s. However, this is far from being true. As we will see in Section \ref{red}, under mild additional assumptions on $f$ we have e.g.\ $ \bE_{ \mathrm{Id}^{p}} \leq _{B} \bE_{f}$ whenever $\lim_{x \rar +0} f(x)/x^{p-\ve} = 0$ for every $\ve > 0$ (for the precise statement, see Theorem \ref{ala}); this is in contrast with (\ref{nov}).
For basic terminology in descriptive set theory we refer to \cite{K}. As above, if $X$ and $Y$ are Polish spaces, $E$ and $F$ are equivalence relations on $X$ and $Y$, then we say \emph{$E$ is Borel reducible to $F$}, $E \leq _{B} F$ in notation, if there exists a Borel function $\vt \colon X \rar Y$ satisfying $$x E x' \Leftrightarrow \vt(x) F \vt(x').$$ We say \emph{$E$ and $F$ are Borel equivalent} if $E \leq _{B} F$ and $F \leq _{B} E$, while we write $E < _{B} F$ if $E \leq _{B} F$ but $F \not \leq _{B} E$.
Depending on the context, $| \cdot |$ denotes the absolute value of a real number, the length of a sequence or the cardinality of a set; $\lfloor \cdot \rfloor$ and $\{\cdot\}$ stand for lower integer part and fractional part. We denote by $\egesz$ and $\mathbb{R}^{+}$ the set of integers and nonnegative reals.
\section{Basic properties} \label{gen}
\begin{definition}\rm\label{Csop}Let $(G,+)$ be an Abelian group and let $H \ss G$ satisfy
\bit
\item[($H_{1}$)] $0 \in H$;
\item[($H_{2}$)] for every $x,y \in H$, $x-y \in H$ or $y-x \in H$;
\item[($H_{3}$)] for every $x,y,z \in H $, $x-y \in H$ and $y-z \in H$ implies $x-z \in H$.
\eit
For every $x \in H \cup -H$, let $x^{+} = x$ if $x \in H$ and $x^{+} = -x$ if $x \in -H \sm H$.
For every function $f \colon H \rar \mathbb{R}^{+}$, we define the relation $\mathbf{E}_{f}$ on $H^{\omega}$ by setting, for every $(x_{n})_{n < \omega}, (y_{n})_{n < \omega} \in H^{\omega}$, \Keq\label{defG} (x_{n}) \mathbf{E}_{f} (y_{n}) \Leftrightarrow \sum_{n < \omega} f((y_{n} - x_{n})^{+}) < \infty;\Zeq
the definition is valid by $(H_{2})$.
We say $f \colon H \rar \mathbb{R}^{+}$ is \emph{even} if for every $x \in H \cap -H$, $f(x) = f(-x)$.
\end{definition}
Observe that for $\tilde f \colon H \rar [0,1]$, $\tilde f(x) = \min\{f(x), 1\}$ $(x \in H)$ we have $\mathbf{E}_{\tilde f} = \mathbf{E}_{f}$. So in the sequel we only consider bounded functions.
We start this section by characterizing the bounded functions $ f \colon H \rar \mathbb{R}^{+}$ for which $\bE_{f}$ is an equivalence relation. To avoid a meticulous bookkeeping of non-relevant constants, we will use the terminology ``by ($\star$), $A \lesssim B$" to abbreviate that ``by property $(\star)$, there is a constant $C>0$ depending on the parameters of $(\star)$ such that $A \leq CB$". The relations $\gtrsim$ and $\approx$ are defined analogously.
\begin{clm}\label{ekv} Let $f \colon H \rar \mathbb{R}^{+}$ be a bounded even function. Let $\bE_{f}$ be the relation on $H^{\omega}$ defined by (\ref{defG}). Then $\bE_{f}$ is an equivalence relation if and only if the following conditions hold:
\bit
\item[($R_{1}$)] $f(0)=0$;
\item[($R_{2}$)] there is a $C \geq 1$ such that for every $x,y \in H$ with $x+y \in H$,
$$\begin{array}{ll} (a) & f(x+y) \leq C(f(x)+f(y)), \\ & \\ (b) & f(x) \leq C(f(x+y) + f(y)).\end{array}$$
\eit
\end{clm}
\textbf{Proof. } Since $f$ is even, $\bE_{f}$ is symmetric. It is obvious that ($R_{1}$) is equivalent to $\bE_{f}$ being reflexive, so it remains to show that ($R_{2}$) is equivalent to transitivity.
Suppose first ($R_{2}$) holds and let $(x_{n})_{n < \omega}$, $(y_{n})_{n < \omega}$, $(z_{n})_{n < \omega} \in H^{\omega}$ such that $(x_{n}) \bE_{f}(y_{n})$ and $(y_{n}) \bE_{f}(z_{n})$. Let $n < \omega$ be fixed. Since the role of $x_{n}$ and $z_{n}$ is symmetric, by $(H_{2})$ we can assume $z_{n} - x_{n} \in H$. We distinguish several cases.
If $x_{n}-y_{n} \in H$ then by $(H_{3})$, $z_{n}-y_{n} \in H$ so by $(R_{2} b)$ using $(z_{n} - y_{n}) = (z_{n} - x_{n}) + (x_{n} - y_{n})$, $$ f(z_{n} - x_{n}) \lesssim f(z_{n} - y_{n})+f((y_{n} - x_{n})^{+}).$$
If $y_{n}-x_{n} \in H$ then either $z_{n}-y_{n} \in H$, hence by $(R_{2} a)$, using $(z_{n} - x_{n}) = (z_{n}-y_{n} ) + (y_{n}-x_{n} )$, $$f (z_{n} - x_{n}) \lesssim f(z_{n} - y_{n})+f(y_{n} - x_{n});$$ or $y_{n}-z_{n} \in H$ hence by $(R_{2} b)$ using $(y_{n} - z_{n}) + (z_{n}-x_{n} ) = (y_{n}-x_{n} )$, $$f (z_{n} - x_{n}) \lesssim f((z_{n} - y_{n})^{+})+f(y_{n} - x_{n}).$$
Thus $$\sum_{n < \omega} f((z_{n} - x_{n})^{+}) \lesssim \sum_{n < \omega} f((z_{n} - y_{n})^{+}) + \sum_{n < \omega} f((y_{n} - x_{n})^{+}) < \infty,$$ which gives $(x_{n}) \bE_{f}(z_{n})$; i.e.\ $(R_{2})$ implies transitivity.
To see the other direction, suppose first there is no $C\geq 1$ for which $(R_{2} a)$ holds, i.e.\ for every $n < \omega$ there are $\xi_{n}, \eta_{n} \in H$ such that $\xi_{n} + \eta_{n} \in H$ and \Keq \notag f(\xi_{n}+\eta_{n}) > 2^{n} (f(\xi_{n})+f(\eta_{n})).\Zeq Set $k_{n} = \max\{1, \lfloor 1/f(\xi_{n}+\eta_{n}) \rfloor\}$; if $B \geq 1$ is an upper bound of $f$, we have \Keq\label{1}B \geq k_{n}f(\xi_{n}+\eta_{n})\geq \frac{1}{2} \textrm{ and } B > 2^{n} k_{n} (f(\xi_{n})+f(\eta_{n})).\Zeq Let $ (x_{m})_{m < \omega} \in H^{\omega}$ be the sequence which, for every $n < \omega$, admits the value $\xi_{n}$ with multiplicity $k_{n}$; and define the sequence $ (y_{m})_{m < \omega} \in H^{\omega}$ to admit $\eta_{n}$ exactly there where $ (x_{m})_{m < \omega}$ admits $\xi_{n}$ $(n < \omega)$. Then by (\ref{1}), $$\sum_{m < \omega} f(x_{m}) < 2B \textrm{ and } \sum_{m < \omega} f(y_{m}) < 2B,$$ i.e. if $\underline{0}$ denotes the constant zero sequence we have $\underline{0} \bE_{f} (x_{m})$ and $(x_{m}) \bE_{f} (x_{m}+y_{m})$. Also by (\ref{1}), $$\sum _{m < \omega} f(x_{m} + y_{m}) = \infty,$$ i.e.\ $\underline{0} \not \!\!\bE_{f} (x_{m}+y_{m})$, which shows transitivity fails.
Finally suppose there is no $C\geq 1$ for which $(R_{2} b)$ holds, i.e.\ for every $n < \omega$ there are $\xi_{n}, \eta_{n} \in H$ such that \Keq \notag f(\xi_{n}) > 2^{n} (f(\xi_{n}+\eta_{n})+f(\eta_{n})).\Zeq Set $k_{n} = \max\{1, \lfloor 1/f(\xi_{n}) \rfloor\}$ and let $ (x_{m})_{m < \omega}$, $ (y_{m})_{m < \omega}$ be as above. Then $(y_{m}) \bE_{f} \underline{0}$ and $\underline{0} \bE_{f} (x_{m}+y_{m})$ but $(y_{m})\not \!\!\bE_{f} (x_{m}+y_{m})$.$\bs$
\medskip
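As a simple illustration of these conditions, for $1 \leq p < \infty$ the prototype function $f = \mathrm{Id}^{p}$ on $H = [0,1]$ satisfies $(R_{1})$ and $(R_{2})$ with $C = 2^{p-1}$: by the convexity of $t \mapsto t^{p}$ we have $(x+y)^{p} \leq 2^{p-1}(x^{p}+y^{p})$, which gives $(R_{2}a)$; while $x \leq x+y$ yields $x^{p} \leq (x+y)^{p} \leq (x+y)^{p} + y^{p}$, which gives $(R_{2}b)$ even with $C = 1$.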
If $f$ is an arbitrary function, so that $\bE_{f}$ is not necessarily an equivalence relation, then one could consider the equivalence relation generated by $\bE_{f}$. However, it is very hard to control the properties of this generated equivalence relation by the properties of $f$; in particular, we do not know how to ensure that it is Borel. Therefore, from now on, we restrict our attention to such functions $f$ for which $\bE_{f}$ is an equivalence relation.
Despite the general setting of Definition \ref{Csop} and Proposition \ref{gen}, in the present paper we will work only with two special cases. At some point, we will set $G= H$ to be the circle group $\CG=[0,1)$ with mod 1 addition. Then $(H_{1})$-$(H_{3})$ obviously hold, $x^{+} = x$ $(x \in \CG)$, moreover $(R_{2}a)$ and $(R_{2}b)$ are equivalent. But mainly we will work with $G = \mathbb{R}$ and $H = [0,1]$; then $(H_{1})$-$(H_{3})$ hold and $x^{+} = |x|$. Our reason for working with functions $f$ defined on $[0,1]$ instead of $\mathbb{R}$ is that on a smaller domain it is easier to define $f$ such that it satisfies ($R_{1}$) and $(R_{2})$. Next we show that for $\mathbf{E}_{\mathrm{Id}^{p}}$, this change of domain makes no difference.
\begin{lemma}\label{mege} For $1 \leq p < \infty$, let $\ell^{p}$ denote the equivalence relation defined by (\ref{defG}) with $f \colon \mathbb{R} \rar \mathbb{R}^{+}$, $f(x) = |x|^{p}$ $(x \in \mathbb{R})$. Then $\ell^{p}$ and $\ell^{p}|_{[0,1]^{\omega} \times [0,1]^{\omega}}$ are Borel equivalent.
\end{lemma}
\textbf{Proof. } It is obvious that $\ell^{p}|_{[0,1]^{\omega} \times [0,1]^{\omega}} \leq _{B} \ell^{p}$. To see the other direction, for every $k \in \egesz$ let $\rho_{k} \colon \mathbb{R} \rar [0,1]$, $$\rho_{k}(x) = \b\{ \begin{array}{ll} 1, \textrm{ if } k < \lfloor x \rfloor; \\ \{x\}, \textrm{ if } k = \lfloor x \rfloor; \\ 0, \textrm{ if } k > \lfloor x \rfloor; \end{array}\j.$$ and set $\vt \colon \mathbb{R}^{\omega} \rar [0,1]^{\egesz \times \omega }$, $$\vt((x_{n})_{n < \omega}) = (\rho_{k}(x_{n}))_{k \in \egesz,n < \omega}.$$ For every $x,y \in \mathbb{R}$ with $| y-x |\leq 1$, we have $\rho_{k}(x) \neq \rho_{k}(y)$ only if $k =\lfloor x \rfloor$ or $k=\lfloor y \rfloor$; moreover $$|y-x| = \sum_{k \in \egesz} |\rho_{k}(y) - \rho_{k}(x) | ~(x,y \in \mathbb{R}).$$ Thus $$\sum_{k \in \egesz} |\rho_{k}(y) - \rho_{k}(x)|^{p} \leq |y-x|^{p}~(x,y \in \mathbb{R});$$ and for $x,y \in \mathbb{R}$ with $| y-x |\leq 1$, $$ |y-x|^{p} \leq 2^{p}\sum_{k \in \egesz} |\rho_{k}(y) - \rho_{k}(x)|^{p}.$$ Since $(x_{n}) \ell^{p} (y_{n})$ implies $\lim_{n < \omega} |y_{n} -x_{n}|=0$, after reindexing the coordinates of its range, $\vt$ reduces $\ell^{p}$ to $\ell^{p}|_{[0,1]^{\omega} \times [0,1]^{\omega}}$, as required.$\bs$
\medskip
As we have seen already in the introduction, $\mathbf{E}_{f}$ may be an equivalence relation for a discontinuous $f$, e.g.,
for the function $f(0) = 0$, $f(x) = 1$ $(0 < x \leq 1)$ we have $\mathbf{E}_{f}$ is the equivalence relation of eventual equality on $[0,1]^{\omega}$. Following the literature, we denote this equivalence relation by $E_{1}$. In the remaining part of this section we show that $f$ is continuous in zero if and only if $E_{1} \not \leq _{B} \bE_{f}$.
\begin{theorem}\label{folyt} Let $ f \colon [0,1] \rar \mathbb{R}^{+}$ be a bounded Borel function such that $\bE_{f}$ is an equivalence relation. Then $f$ is continuous in zero if and only if $E_{1} \not \leq _{B} \bE_{f}$.
\end{theorem}
Before proving Theorem \ref{folyt} we show that up to Borel reducibility, requiring continuity in zero or continuity on the whole $[0,1]$ is the same condition for $\bE_{f}$.
\begin{clm}\label{f_e} Let $ f \colon [0,1] \rar \mathbb{R}^{+}$ be a bounded function such that $\bE_{f}$ is an equivalence relation. If $f$ is continuous in zero then there exists a continuous function $\tilde{f} \colon [0,1] \rar \mathbb{R}^{+}$ such that $\bE_{f} = \bE_{\tilde{f}}$.
\end{clm}
As a corollary of Theorem \ref{folyt} and Proposition \ref{f_e}, we obtain the following surprising result.
\begin{corollary}\label{fkov}
Let $ f,g \colon [0,1] \rar \mathbb{R}^{+}$ be bounded Borel functions such that $\bE_{f}$ and $\bE_{g}$ are equivalence relations. If $g$ is continuous and $\bE_{f} \leq _{B} \bE_{g}$ then $f$ is continuous in zero hence there is a continuous function $\tilde{f} \colon [0,1] \rar \mathbb{R}^{+}$ such that $\bE_{f}=\bE_{\tilde{f}}$.
\end{corollary}
We start with the proof of Proposition \ref{f_e}.
\medskip
\textbf{Proof of Proposition \ref{f_e}.} Let $C \geq 1$ be the constant of $(R_{2})$. First we show that there exists an increasing function $\ve \colon [0,1] \rar [0,1]$ such that $\ve(a) > 0$ for $a > 0$ and for every $x,y \in [0,1]$, \Keq\label{34} |y-x| \leq \ve(f(x)) \Rightarrow \frac{f(x)}{2C} \leq f(y) \leq 2Cf(x).\Zeq Set $$\ve(a)= \frac{1}{2}\sup\b\{y \in [0,1] \colon f(d) \leq \frac{a}{2C} \textrm{ for } 0 \leq d \leq y\j\};$$ then $\ve$ is increasing and since $f(0) = 0$ and $f$ is continuous in zero, $\ve(a) >0$ for $a > 0$. We show (\ref{34}). By $(R_{2}a)$, $$f(y) \leq C(f(x)+f(y-x)) \leq 2Cf(x)~(0 \leq y-x \leq \ve(f(x)))$$ and $$\frac{f(x)}{2C} \leq \frac{f(x)}{C} - f(x-y) \leq f(y)~(0 \leq x-y \leq \ve(f(x)));$$ and by $(R_{2}b)$, $$\frac{f(x)}{2C} \leq \frac{f(x)}{C} - f(y-x) \leq f(y)~(0 \leq y-x \leq \ve(f(x)))$$ and $$f(y) \leq C(f(x)+f(x-y)) \leq 2Cf(x)~(0 \leq x-y \leq \ve(f(x))),$$ as required.
As a corollary of (\ref{34}), we get $U=\{x \in [0,1] \colon f(x) >0\}$ is an open set. Moreover, for every $a > 0$ the $\ve(a)$-neighborhood of $\{x \in [0,1] \colon f(x) > a\}$ is contained in $U$, i.e.\ $f$ is continuous at every point of $[0,1] \sm U = \{x \in [0,1] \colon f(x) = 0\}$. For every $x \in U$, set $$I_{x} = (x-\ve(f(x)), x+\ve(f(x))) \cap [0,1].$$ Then $I_{x} \ss U$ and $\{I_{x} \colon x \in U\}$ is an open cover of $U$. Since the covering dimension of $U$ is one, there is an open refinement $J_{x} \ss I_{x}$ $(x \in U)$ such that $\{J_{x} \colon x \in U\}$ is an open cover of $U$ of order at most two, i.e.\ for every $x \in U$, $|\{y \in U \colon x \in J_{y}\}| \leq 2$. So the function $\vp \colon U \rar 2^{\mathbb{R}}$, \Keq \notag \vp(x) = \bigcup_{\footnotesize \begin{array}{c}y \in U \\ x \in J_{y}\end{array}} \b[ \frac{f(y)}{2C},2Cf(y)\j] \Zeq is closed convex valued and lower semicontinuous, hence Michael's Selection Theorem \cite[Theorem 3.2 p.\ 364]{M} can be applied to have a continuous function $\tilde f \colon U \rar \mathbb{R}$ satisfying $\tilde f (x) \in \vp(x)$ $(x \in U)$. Since $f$ is continuous at every point of $[0,1] \sm U$, $\tilde f$ extends continuously to $[0,1]$ with $\tilde f (x) = 0$ for $x \in [0,1] \sm U$.
For fixed $x \in [0,1]$, $x \in J_{y}$ implies $x \in I_{y}$. So by (\ref{34}), $$f(x) \in \b[ \frac{f(y)}{2C},2Cf(y)\j]\textrm{ hence } f(y) \in \b[ \frac{f(x)}{2C},2Cf(x)\j].$$ Thus $$\bigcup_{\footnotesize \begin{array}{c}y \in U \\ x \in J_{y}\end{array}} \b[ \frac{f(y)}{2C},2Cf(y)\j] \ss \b[ \frac{f(x)}{4C^2}, 4C^2f(x) \j] $$ and so $$\frac{f(x)}{4C^2} \leq \tilde f(x) \leq 4C^2f(x) ~(x \in [0,1]).$$ Therefore $\bE_{f}=\bE_{\tilde f}$, as required.$\bs$
\medskip
We close this section with the proof of Theorem \ref{folyt}. We obtain the nonreducibility of $E_{1}$ to $\bE_{f}$ for a continuous $f$ via \cite[Theorem 4.1 p.\ 238]{KL}, which says that $E_{1}$ is not reducible to any equivalence relation induced by a Polish group action. To this end, first we show that for continuous $f$, $\bE_{f}$ is essentially induced by a Polish group action. Recall that $\CG$ denotes the circle group $[0,1)$ with mod 1 addition.
\begin{lemma} \label{cont} Let $f \colon [0,1] \rar \mathbb{R}^{+}$ be a continuous function such that $\bE_{f}$ is an equivalence relation. Then either $f$ is identically zero or there is a continuous even function $\tilde f \colon \CG \rar \mathbb{R}^{+}$ such that $\tilde f(x) > 0$ for $x \neq 0$, $\bE_{\tilde f}$ is an equivalence relation and $\bE_{f} \leq _{B} \bE_{\tilde f}$.
\end{lemma}
\textbf{Proof. } Suppose $f$ is not identically zero. We distinguish two cases. Suppose first $f(x) > 0$ for $x > 0$. Then set $$\tilde f(x) = \b\{ \begin{array}{ll} f(2x), & \textrm{ if } 0 \leq x < 1/2; \\ f(2-2x), & \textrm{ if } 1/2 \leq x < 1. \end{array} \j.$$ It is obvious that $\tilde f$ is continuous, even and $\tilde f(x) > 0$ for $x \neq 0$. We show that $\bE_{\tilde f}$ is an equivalence relation by verifying the conditions of Proposition \ref{ekv}. We have $(R_{1})$; since $(R_{2}a)$ implies $(R_{2}b)$, we prove only $(R_{2}a)$. Let $C$ be the constant of $(R_{2})$ for $f$. If $x \in [1/4,3/4]$ or $y \in [1/4,3/4]$ then $$\tilde f (x+y) \leq \frac{\max \tilde f}{ \min \tilde f|_{[1/4,3/4]}} (\tilde f(x) + \tilde f(y)).$$ If $x,y \in[0,1/4]$ or $x,y \in[3/4,1)$ then by $(R_{2}a)$ for $f$, $(R_{2}a)$ holds for $\tilde f$ with $C$. Finally if exactly one of $x$ and $y$ is in $[0,1/4]$ and $[3/4,1)$ then by $(R_{2}b)$ for $f$, $(R_{2}a)$ holds for $\tilde f$ with $C$.
Also, $\vt \colon [0,1]^{\omega} \rar \CG^{\omega}$, $\vt((x_{n})_{n < \omega}) = (x_{n}/2)_{n < \omega}$ is a reduction of $\bE_{f}$ to $\bE_{\tilde f}$, so the proof of first case is complete.
In the second case, suppose $f(x) = 0$ for some $x \in (0,1]$. By $(R_{2})$, the nonempty set $\{x \in [0,1] \colon f(x) = 0\}$ is closed under
additions that are in $[0,1]$. Hence by the continuity of $f$, $x^{\star} = \inf\{x \in (0,1] \colon f(x) = 0\}$ satisfies $x^{\star} > 0$ and $f(x^{\star}) = 0$.
Set $$\tilde f(x) = \b\{ \begin{array}{ll} f(xx^{\star}), & \textrm{ if } 0 \leq x < 1/2; \\ f((1-x)x^{\star}), & \textrm{ if } 1/2 \leq x < 1. \end{array} \j.$$ It is obvious that $\tilde f$ is continuous, even and $\tilde f(x) > 0$ for $x \neq 0$. Similarly to the previous case, we get $\bE_{\tilde f}$ is an equivalence relation by distinguishing several cases. If $x \in [1/4,3/4]$ or $y \in [1/4,3/4]$ then $$\tilde f (x+y) \leq \frac{\max \tilde f}{ \min \tilde f|_{[1/4,3/4]}} (\tilde f(x) + \tilde f(y)).$$ If $x,y \in[0,1/4]$ then by $(R_{2}a)$ for $f$, $(R_{2}a)$ holds for $\tilde f$ with $C$. If $x,y \in[3/4,1)$ then again by $(R_{2}a)$ for $f$, $$\tilde f (x+y) = f(2 x^{\star} - (x+y) x^{\star}) \lesssim f( x^{\star} - x x^{\star}) + f( x^{\star} - y x^{\star}) = \tilde f (x) + \tilde f (y). $$ Finally if exactly one of $x$ and $y$ is in $[0,1/4]$ and $[3/4,1)$ then by $(R_{2}b)$ for $f$, $(R_{2}a)$ holds for $\tilde f$ with $C$.
For every $x \in [0,1]$, let $\langle x\rangle = x/x^{\star}-\lfloor x/x^{\star} \rfloor $. We show that
$\vt \colon [0,1]^{\omega} \rar \CG^{\omega}$, $\vt((x_{n})_{n < \omega}) = (\langle x_{n} \rangle)_{n < \omega}$ is a reduction of $\bE_{f}$ to $\bE_{\tilde f}$. For every $0\leq x \leq y \leq 1$, with $k = \b\lfloor y/x^{\star} \j\rfloor - \b\lfloor x/x^{\star} \j\rfloor$ we have $\langle y \rangle -\langle x \rangle = y/x^{\star} - x/x^{\star} - k$, so $$ \tilde f \b(\langle y \rangle -\langle x \rangle\j) = \b\{ \begin{array}{ll} f\b(y-x -k x^{\star} \j), & \textrm{ if } 0 \leq y-x -k x^{\star} < x^{\star}/2; \\ f(-y+x + k x^{\star}), & \textrm{ if } 0 \leq -y+x+k x^{\star} < x^{\star}/2; \\ f\b(x^{\star} -y + x +k x^{\star} \j), & \textrm{ if } x^{\star}/2 \leq y-x -k x^{\star} < x^{\star}; \\ f(x^{\star} +y-x-k x^{\star}), & \textrm{ if } x^{\star}/2 \leq -y+x+k x^{\star} < x^{\star}. \end{array} \j.$$ For $l = k$ or $l = k\pm 1$, in any of the cases where applicable, by $(R_{2})$ we have $$f(y-x) \lesssim f(y-x - lx^{\star}) + f(lx^{\star}), ~ f(y-x)\lesssim f(lx^{\star}) + f(lx^{\star} -y+x),$$ $$f(y-x - lx^{\star})\lesssim f(y-x) + f(lx^{\star}), ~f(lx^{\star} -y+x) \lesssim f(lx^{\star}) + f(y-x).$$ So $f(y-x) \approx \tilde f (\langle y \rangle - \langle x\rangle )$ follows from $f(lx^{\star}) = 0$. This implies that $\vt$ is a reduction, so the proof is complete.$\bs$
\medskip
In the next lemma, for an $\tilde f$ as in Lemma \ref{cont}, we find a Polish group action inducing $\bE_{\tilde f}$.
\begin{definition}\rm\label{No} Let $f \colon H \rar \mathbb{R}^{+}$ be an arbitrary function. For every $x = (x_{n})_{n < \omega} \in H^{\omega}$ and $I \ss \omega$ we set $$\|x \|_{f} = \sum_{n < \omega}f(x_{n}), ~\|x|_{I} \|_{f} = \sum_{n \in I}f(x_{n}).$$ We define $\mc{N}_{f}= \{x\in H^{\omega} \colon \|x\|_{f} < \infty\}$.
\end{definition}
\begin{lemma}\label{idea} Let $f \colon \CG \rar \mathbb{R}^{+}$ be a continuous even function such that $f(x) > 0$ for $x \neq 0$ and $\bE_{f}$ is an equivalence relation.
\ben
\item\label{idea1} There is a unique topology $\tau_{f}$ on $\mc{N}_{f}$ such that for every $x \in\mc{N}_{f}$, the sets $$B(x, \ve) = \{y \in \mc{N}_{f} \colon \|y-x\|_{f} < \ve\}~(\ve > 0)$$ form a neighborhood base at $x$. This topology is regular, second countable and refines the topology inherited from $\CG^{\omega}$.
\item\label{idea2} With $\tau_{f}$, $(\mc{N}_{f},+)$ is a Polish group. The natural action of $\mc{N}_{f}$ on $\CG^{\omega}$ is continuous, and the equivalence relation induced by this action is $\bE_{f}$.
\een
\end{lemma}
\textbf{Proof. } For \ref{idea1}, we show that for every $x \in \mc{N}_{f}$, $\ve > 0$ and $y \in B(x, \ve)$ there is a $\delta > 0$ such that $B(y,\delta) \ss B(x,\ve)$; once this is done, the first part of the statement follows from elementary topology (see e.g.\ \cite{C}). Let $C \geq 1$ be the constant of $(R_{2})$, fix $x \in \mc{N}_{f}$, $\ve > 0$ and $y \in B(x, \ve)$. Let $n < \omega$ be such that $$\|(y-x)|_{\omega\sm n} \|_{f} < \frac{\ve - \|y-x\|_{f}}{3C}.$$ Let $\delta > 0$ satisfy $\delta < (\ve-\|y-x\|_{f})/(3C)$, and such that for every $i < n$ and $z_{i} \in [0,1]$, $f(z_{i} - y_{i}) < \delta$ implies $$|f(z_{i}-x_{i}) - f(y_{i}-x_{i})| < \frac{\ve-\|y-x\|_{f}}{3n};$$ such a $\delta$ exists by the continuity of $f$ and by $f(x) > 0$ for $x \neq 0$.
Let $z \in B(y,\delta)$; then by $(R_{2})$, \begin{multline}\notag \|z-x\|_{f} = \|(z-x)|_{n}\|_{f} + \|(z-x)|_{\omega \sm n}\|_{f} < \\ \|(y-x)|_{n}\|_{f} + n \frac{\ve-\|y-x\|_{f}}{3n} + C(\|(z-y)|_{\omega \sm n}\|_{f} + \|(y-x)|_{\omega \sm n}\|_{f}) < \\ \|y-x\|_{f} + \frac{\ve-\|y-x\|_{f}}{3} + \frac{\ve-\|y-x\|_{f}}{3} + \frac{\ve-\|y-x\|_{f}}{3} = \ve,
\end{multline} as required.
Since $f(x) > 0$ for $x \neq 0$, $\tau_{f}$ refines the topology inherited from $\CG^{\omega}$. The countable set of eventually zero rational sequences shows separability and hence second countability.
To see regularity, let $F \ss (\mc{N}_{f}, \tau_{f})$ be a closed set and take $x \notin F$. Then $\gamma = \inf\{\|y-x\|_{f} \colon y \in F\} > 0$. By $(R_{2})$, $B(x,\gamma/(2C)) \cap B(F,\gamma/(2C)) = \es$, as required.
For \ref{idea2}, first we show $(\mc{N}_{f},+)$ is a topological group. Let $x,y \in \mc{N}_{f}$ and $\gamma > 0$. By $(R_{2})$, $B(x,\gamma/2C)+B(y,\gamma/2C) \ss B(x+y,\gamma)$, so addition is continuous. The continuity of the inverse operation is obvious, so the statement follows.
Next we show $(\mc{N}_{f}, \tau_{f})$ is strong Choquet (for the definition and notation see \cite[Section 8.D p.\ 44]{K}). The closed balls $\overline B(x,\ve) = \{y \in \mc{N}_{f} \colon \|y-x\|_{f} \leq \ve\}$ are closed in $\CG^{\omega}$, thus every $\| \cdot\|_{f}$-Cauchy sequence is convergent in $\mc{N}_{f}$. If player $I$ plays $(x_{n}, U_{n})_{n < \omega}$, a winning strategy for player $II$ is to choose $V_{n}=B(x_{n}, \gamma_{n})$ such that $\overline B(x_{n}, \gamma_{n}) \ss U_{n}$ and $\gamma_{n} \leq 1/2^{n}$ $(n < \omega)$. So $(\mc{N}_{f}, \tau_{f})$ is strong Choquet, hence Polish by Choquet's Theorem (see e.g.\ \cite[(8.18) Theorem p.\ 45]{K}).
The continuity of the action of $\mc{N}_{f}$ on $\CG^{\omega}$ follows from the fact that $\tau_{f}$ refines the topology inherited from $\CG^{\omega}$. It is obvious that the equivalence relation induced by this action is $\bE_{f}$, so the proof is complete.$\bs$
\medskip
\begin{definition}\rm\label{V} For a topological space $X$ and $G \ss X$, we set $$V(G)= \bigcup\{ U \ss X \colon U \textrm{ is open, } G \cap U \textrm{ is comeager in } U\}.$$
\end{definition}
\medskip
The next lemma is a folklore result on the existence of a perfect set with special distance set.
\begin{lemma} \label{dis} Let $G \ss [0,1]$ be a Borel set such that zero is adherent to $V(G)$. Then there exists a nonempty perfect set $P \ss [0,1]$ such that \Keq\label{DI}\{|y-x| \colon x,y \in P\} \ss G \cup \{0\}.\Zeq
\end{lemma}
\textbf{Proof. } By passing to a subset, we can assume that $G$ is a comeager $G_{\delta}$ subset of $V(G)$. Set $\tilde G = G \cup (-G)\cup \{0\}$ and let $d_{\tilde G}$ be the metric on $\tilde G$ for which $(\tilde G,d_{\tilde G})$ is a Polish space with the topology inherited from $[-1,1]$ (see e.g.\ \cite[(3.11) Theorem p.\ 17]{K}). We construct inductively a sequence $(x_{n})_{n < \omega} \ss [0,1]$ with the following properties:
\ben
\item\label{en0} for every $n < \omega$, $x_{n+1} < x_{n}/2$;
\item\label{en1} for every $ s \in \{-1,0,+1\}^{<\omega}$, $\sum_{i <|s|} s(i)x_{i} \in \tilde G$;
\item\label{en2} for every $ s \in \{-1,0,+1\}^{<\omega}\sm \{\es\}$, $d_{\tilde G}(\sum_{i <|s|-1} s(i)x_{i}, \sum_{i <|s|} s(i)x_{i}) \leq 1/2^{|s|}$.
\een
Let $x_{0} \in G$ be arbitrary. Let $0 < n < \omega$ and suppose $x_{i}$ $(i < n)$ are defined such that \ref{en1} and \ref{en2} hold for every $s \in \{-1,0,+1\}^{\leq n}$. By \ref{en1}, if $s \in \{-1,0,+1\}^{n}$ and $\sum_{i <n} s(i)x_{i} \neq 0$ then $\tilde G$ is comeager in a neighborhood of $\sum_{i <n} s(i)x_{i}$. Since zero is adherent to $V(G)$, by the Baire Category Theorem we can pick $x_{n} \in G$ sufficiently close to zero such that \ref{en0} holds; and for every $s \in \{-1,0,+1\}^{n+1}$ with $\sum_{i <n} s(i)x_{i} \neq 0$ we have $\sum_{i <n+1} s(i)x_{i} \in \tilde G$, hence by $x_{n} \in G$, $\sum_{i <n+1} s(i)x_{i} \in \tilde G$ for every $s \in \{-1,0,+1\}^{n+1}$; and in addition \ref{en2} holds. This completes the inductive step.
We show $$P = \b\{\sum_{n < \omega}\sigma(n)x_{n} \colon \sigma \in 2^{\omega}\j\}$$ fulfills the requirements. By \ref{en2}, for every $\sigma \in \{-1,0,+1\}^{\omega}$, $(\sum_{i <n} \sigma(i)x_{i})_{n < \omega}$ is a Cauchy sequence in $\tilde G$, so $\sum_{n <\omega} \sigma(n)x_{n} \in \tilde G$. In particular $P \ss G\cup \{0\}$.
Let $x,x' \in P$, $x=\sum_{n < \omega}\sigma(n)x_{n}$ and $x' = \sum_{n < \omega}\sigma'(n)x_{n}$ with $\sigma, \sigma' \in 2^{\omega}$, $\sigma \neq \sigma'$; say for the first $n < \omega$ with $\sigma(n) \neq \sigma'(n)$ we have $\sigma(n) = 0$, $\sigma'(n) = 1$. Then for $\delta \in \{-1,0,+1\}^{\omega}$, $\delta(n) = \sigma'(n) - \sigma(n)$ $(n < \omega)$ we have $$|x' - x| = x' - x = \sum_{ n < \omega} \delta (n) x_{n} \in \tilde G, $$ moreover by \ref{en0}, $x' - x > 0$ i.e.\ $x' - x \in G$. Thus $\sigma \mapsto \sum_{n < \omega}\sigma(n)x_{n}$ is a continuous injection of $2^{\omega}$ into $[0,1]$, so $P$ is a nonempty perfect set and satisfies (\ref{DI}), which completes the proof.$\bs$
\medskip
The last lemma points out a property of functions $f$ that are discontinuous at zero.
\begin{lemma} \label{dis1} Let $ f \colon [0,1] \rar \mathbb{R}^{+}$ be a bounded Borel function such that $\bE_{f}$ is an equivalence relation. If $f$ is not continuous at zero then there exists an $a > 0$ such that $G = \{x \in [0,1] \colon f(x) > a\}$ satisfies the condition of Lemma \ref{dis}, i.e.\ zero is adherent to $V(G)$.
\end{lemma}
\textbf{Proof. } Let $C$ be the constant of $(R_{2})$. Since $f$ is not continuous at zero, there exists an $a > 0$ such that zero is adherent to $\{x \in [0,1] \colon f(x) > 2Ca\}$. If for every $x \in [0,1]$ with $f(x) > 2Ca$, $x$ is adherent to $$\bigcup \{U \ss (x,1) \colon U \textrm{ is open, } \{y \in U \colon f(y) > a\} \textrm{ is comeager in } U\}$$ then the statement follows. If not, then since $f$ is Borel, there is an $x \in [0,1]$ with $f(x) > 2Ca$ and a $\delta > 0$ such that $$Y = \{y \in (x, x+\delta) \colon f(y) \leq a\}$$ is comeager in $(x, x+\delta)$. Since $f(x) > 2Ca$, by $(R_{2}b)$ we have $f(y-x) >a$ whenever $y \in Y$. Hence $\{x \in [0,1] \colon f(x) > a\}$ is comeager in $(0,\delta)$, which finishes the proof.$\bs$
\medskip
\textbf{Proof of Theorem \ref{folyt}.} Suppose first $f$ is not continuous at zero. By Lemma \ref{dis} and Lemma \ref{dis1}, there is an $a > 0$ and a nonempty perfect set $P \ss [0,1]$ such that $f(|y - x|) > a$ for every $x,y \in P$, $x \neq y$. Thus $\bE_{f}$ restricted to $P^{\omega}$ is $E_{1}$; in particular, $E_{1} \leq _{B} \bE_{f}$.
Suppose now that $f$ is continuous at zero. By Proposition \ref{f_e}, we can assume $f$ is continuous on $[0,1]$. If $f \equiv 0$ then $E_{1} \not \leq _{B} \bE_{f}$ is obvious. Otherwise, by Lemma \ref{cont} and Lemma \ref{idea}, $\bE_{f} \leq _{B}\bE_{\tilde f}$ where $\bE_{\tilde f}$ is induced by a Polish group action. Hence $E_{1} \not \leq _{B} \bE_{\tilde f}$ by \cite[Theorem 4.1 p.\ 238]{KL} and \cite{Ke}; in particular $E_{1} \not \leq _{B} \bE_{f}$. This completes the proof.$\bs$
\section{Reducibility results} \label{red}
In the remaining part of the paper, in most cases, we restrict our attention to equivalence relations $\bE_{f}$ where $ f \colon [0,1] \rar \mathbb{R}^{+}$ is a \emph{continuous} function. As we have seen in Proposition \ref{f_e}, requiring continuity on $[0,1]$ and continuity at zero for $f$ are equivalent, and by Theorem \ref{folyt}, for Borel $f$ this is a necessary and sufficient condition for $E_{1} \not \leq_{B} \bE_{f}$. This assumption is acceptable to us since we aim to study equivalence relations $\bE_{f}$ for which $\bE_{f} \leq _{B} \bE_{\mathrm{Id}^{q}}$ for some $1 \leq q < \infty$.
The main restriction we impose on the function $f$ in the sequel, in addition to $(R_{1})+(R_{2})$, is formulated in the following definition.
\begin{definition}\label{essi}\rm Let $(R ,\leq)$ be an ordered set and $f \colon R \rar \mathbb{R}^{+}$ be a function. We say $f$ is \emph{essentially increasing} if for some $C\geq 1$, $\forall x, y \in R$ $(x \leq y\Rightarrow f(x) \leq Cf(y))$. Similarly, $f$ is \emph{essentially decreasing} if for some $C\geq 1$, $\forall x, y \in R$ $(x \leq y \Rightarrow C f(x) \geq f(y))$.
\end{definition}
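We note that an essentially increasing function need not be monotone near zero: e.g.\ $f \colon (0,1] \rar \mathbb{R}^{+}$, $f(x) = x(2+\sin(1/x))$ satisfies $x \leq f(x) \leq 3x$, so $x \leq y$ implies $f(x) \leq 3x \leq 3y \leq 3f(y)$, i.e.\ $f$ is essentially increasing with $C=3$; but $f$ is not monotone in any neighborhood of zero.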
\begin{lemma}\label{EINC} With the notation of Definition \ref{essi}, $f$ is essentially increasing (resp.\ essentially decreasing) if and only if there is an increasing (resp.\ decreasing) function $\tilde f$ such that $\tilde f \approx f$.
\end{lemma}
\textbf{Proof. } If $f$ is essentially increasing, set $\tilde f \colon R \rar \mathbb{R}^{+}$, $\tilde f (x) = \sup \{f(y) \colon y \leq x\}$. Then $\tilde f$ is increasing and $f(x) \leq \tilde f(x) \leq C f(x)$ $(x \in R)$. If $f$ is essentially decreasing, let $ \tilde f (x) = \inf \{f(y) \colon y \leq x\}$; then, as above, $\tilde f \approx f$. The other directions are obvious, so the proof is complete.$\bs$
\medskip
We remark that for $R = [0,1]$ or $R = (0,1]$, the function $\tilde f$ defined above is continuous whenever $f$ is. Since $\tilde f \approx f$, the functions $\tilde f$ and $f$ have the same asymptotic behavior at zero.
In this section we prove the following two theorems.
\begin{theorem}\label{ala} Let $1 \leq \alpha < \infty$ and let $\psi \colon (0,1] \rar (0,+\infty)$ be an essentially decreasing continuous function such that $\mathrm{Id}^{\alpha} \psi$ is bounded and for every $\delta > 0$, $\liminf_{x \rar +0} x^{\delta} \psi(x) = 0$. Set $g(x) = x^{\alpha}\psi(x)$ for $0 < x \leq 1$ and $g(0)=0$. Suppose $\bE_{g}$ is an equivalence relation. Then $\bE_{\mathrm{Id}^{\alpha}} \leq _{B} \bE_{g}$.
\end{theorem}
\begin{theorem}\label{fole} Let $f,g \colon [0,1] \rar \mathbb{R}^{+}$ be continuous essentially increasing functions such that $\bE_{f}$ and $\bE_{g}$ are equivalence relations. Suppose there exists a function $\kappa \colon \{1/2^i \colon i < \omega\} \rar [0,1]$ satisfying the recursion \Keq\label{U2}f(1)=g(\kappa(1)),~f(1/2^{n})=\sum_{i=0}^{n} g(\kappa(1/2^{i})/2^{n-i})~(0 < n < \omega)\Zeq such that for some $L \geq 1$, \Keq\label{U3}\sum_{i=n}^{\infty} g(\kappa(1/2^{i})) \leq L \sum_{i=0}^{n} g(\kappa(1/2^{i})/2^{n-i}) ~(n < \omega)\Zeq and \Keq\label{star} \kappa(1/2^{n}) \leq L \cdot \max \{\kappa(1/2^{i})/2^{n-i}\colon i < n\}~(n < \omega).\Zeq Then $\bE_{f} \leq _{B} \bE_{g}$.
\end{theorem}
Theorem \ref{ala} illustrates, e.g.\ by choosing $\psi(x) = 1-\log(x)$ $(0 < x \leq 1)$, that reducibility among the $\bE_{f}$s is \emph{not} characterized by the growth order of the $f$s. Theorem \ref{fole} is a stronger version of \cite[Theorem 1.1 p.\ 1836]{DH}, but we admit that our improvement is of a technical nature. However, in Section \ref{con} it will allow us to establish reducibility among the $\bE_{f}$s for new families of $f$s.
These results neither give a complete description of the reducibility between the equivalence relations $\bE_{f}$ nor are they optimal. Nevertheless, we note that in Theorem \ref{ala}, $\mathrm{Id}^{\alpha}$ cannot be replaced by an arbitrary ``nice'' function: as we will see, e.g.\ $\bE_{\mathrm{Id}^{\alpha}} < _{B} \bE_{\mathrm{Id}^{\alpha}/(1-\log)}$. Also, the condition that $\psi$ is decreasing cannot be left out: e.g.\ we need the techniques of Theorem \ref{fole} in order to treat the $\psi(x) = x$ case, i.e.\ to show $\bE_{\mathrm{Id}^{\alpha}} \leq _{B} \bE_{\mathrm{Id}^{\alpha+1}}$. We comment on the optimality of Theorem \ref{fole} after its proof.
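As a sanity check for Theorem \ref{fole}, if $f = g$ then $\kappa(1) = 1$, $\kappa(1/2^{i}) = 0$ $(0 < i < \omega)$ solves the recursion (\ref{U2}), using that $g(0) = 0$ since $\bE_{g}$ is reflexive; and (\ref{U3}), (\ref{star}) hold with $L = 1$. The reduction provided by the theorem then recovers $\bE_{f} \leq _{B} \bE_{f}$.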
We start with a technical lemma.
\begin{lemma} \label{disz} Let $f,g \colon [0,1] \rar \mathbb{R}^{+}$ be continuous functions such that $\bE_{f}$, $\bE_{g}$ are equivalence relations. Suppose there exist $K > 0$ and $I \in [\omega]^{\omega}$ such that for every $n \in I$ there is a mapping $\vt_{n} \colon \{i/n \colon 0 \leq i \leq n\} \rar [0,1]^{\omega}$ satisfying \Keq\label{U1}\frac{1}{K} f((j-i)/n) \leq \|\vt_{n}(j/n)-\vt_{n}(i/n)\|_{g} \leq K f((j-i)/n)~(0 \leq i < j \leq n).\Zeq Then $\bE_{f} \leq _{B} \bE_{g}$.
\end{lemma}
\textbf{Proof. } For $x \in [0,1]$ and $0 < n < \omega$ set $[x]_{n} = \max\{i/n\colon i/n \leq x, ~0 \leq i \leq n\}$. Since $f$ is uniformly continuous on $[0,1]$, for every $k < \omega$ there is an $n_{k} \in I$ such that $|f(x) - f([x]_{n_{k}})| \leq 1/2^{k}$ $(x \in [0,1])$. We show that $\vt \colon [0,1]^{\omega} \rar [0,1]^{\omega \cdot \omega}$, $$\vt((x_{k})_{k < \omega}) = (\vt_{n_{k}}([x_{k}]_{n_{k}}))_{k < \omega},$$ after reindexing the coordinates of the range, is a Borel reduction of $\bE_{f}$ to $\bE_{g}$.
Let $(x_{k})_{k < \omega}, (y_{k})_{k < \omega} \in [0,1]^{\omega}$. We have $$[|y_{k} - x_{k}|]_{n_{k}} \leq |[y_{k}]_{n_{k}} - [x_{k}]_{n_{k}}| \leq [|y_{k} - x_{k}|]_{n_{k}} + 1/n_{k}~(k < \omega).$$ So by the choice of $n_{k}$, $|f(|y_{k} - x_{k}|) -f([|y_{k} - x_{k}|]_{n_{k}} )| \leq 1/2^{k}$ and $|f([|y_{k} - x_{k}|]_{n_{k}} )- f(|[y_{k}]_{n_{k}} - [x_{k}]_{n_{k}}|)| \leq 1/2^{k}$, thus $$|f(|y_{k} - x_{k}|) - f(|[y_{k}]_{n_{k}} - [x_{k}]_{n_{k}}| ) |\leq 2/2^{k}~(k < \omega).$$ By (\ref{U1}), $\|\vt_{n_{k}}([y_{k}]_{n_{k}}) - \vt_{n_{k}}([x_{k}]_{n_{k}})\|_{g} \approx f(|[y_{k}]_{n_{k}} - [x_{k}]_{n_{k}}| )$ $(k < \omega)$, so the statement follows.$\bs$
\medskip
\textbf{Proof of Theorem \ref{ala}.} For some $B \geq 1$, let $x^{\alpha}\psi(x) \leq B$ $(0 <x \leq 1)$. We find a $K > 0$ such that for every $0<n < \omega$ there exist $M < \omega$ and $0 < \mu \leq 1$ such that for every $0 \leq i < j \leq n$, \Keq\label{A1} \frac{1}{K}\b(\frac{j-i}{n}\j)^{\alpha} \leq M \b(\frac{j-i}{n}\mu\j)^{\alpha} \psi\b(\frac{j-i}{n}\mu\j)\leq K\b(\frac{j-i}{n}\j)^{\alpha}.\Zeq Once this is done, the conditions of Lemma \ref{disz} are satisfied by the mapping $\vt_{n} \colon \{i/n \colon 0 \leq i \leq n\} \rar [0,1]^{\omega}$, $$\vt_{n}(i/n) = (\underbrace{i\mu/n, \dots, i\mu/n}_{M},0, \dots).$$ Observe that (\ref{A1}) is equivalent to $$1/K\leq M \mu^{\alpha} \psi((j-i)\mu/n) \leq K~(0 \leq i < j \leq n).$$ Since $\psi$ is essentially decreasing, it is enough to have $1/2 \leq M\mu^{\alpha}\psi(\mu)$ and $M\mu^{\alpha}\psi(\mu/n) \leq 2B$. We will find a $0 < \mu \leq 1$ satisfying $ \psi(\mu/n) \leq 2\psi(\mu)$. Then by choosing $M$ to be minimal such that $1/2 \leq M\mu^{\alpha}\psi(\mu)$, by $\mu^{\alpha}\psi(\mu) \leq B$ and $B \geq 1$ we have $M\mu^{\alpha}\psi(\mu/n) \leq 2 M\mu^{\alpha}\psi(\mu) \leq 2B$, so the requirements are fulfilled.
Suppose such a $\mu$ does not exist, i.e.\ $\psi(\mu/n) > 2 \psi(\mu)$ $(0 < \mu \leq 1)$. Then for every $k < \omega$ and $\mu \in [1/n,1]$, $\psi(n^{-k}\mu) \geq 2^k \psi(\mu)$. Every $x \in (0,1]$ can be written as $x = n^{-k}\mu$ with $(k,\mu) \in \omega \times [1/n,1]$. So since $\psi$ is essentially decreasing, with $\delta = \log(2)/\log(n)$ we have $x^{\delta}\psi(x) \gtrsim \psi(1)/n^{\delta} >0$ $(0 < x \leq 1)$. This contradicts $\liminf_{x \rar +0} x^{\delta} \psi(x) = 0$, so the proof is complete.$\bs$
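For instance, for $\psi(x) = 1-\log(x)$ and $0 < n < \omega$, the choice $\mu = 1/n$ works: $\psi(\mu/n) = 1+2\log(n) \leq 2(1+\log(n)) = 2\psi(\mu)$, and then $M$ is approximately $n^{\alpha}/(2(1+\log(n)))$.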
\medskip
\textbf{Proof of Theorem \ref{fole}.} Let $n < \omega$ be fixed. For $0 < l \leq 2^{n}$ let $r(l) \leq n$, $s(l) < \omega$ be such that $l/2^{n} = s(l)/2^{r(l)}$ and $s(l)$ is odd. With $\PR_{l}x$ standing for the $l^{\textrm{th}}$ coordinate of $x \in [0,1]^{2^{n}}$, for every $0 \leq i \leq 2^{n}$ we define $\vt(i/2^{n})$ by \Kem\label{DD1}\PR_{l}\vt(i/2^{n}) = \DS \b(1-2^{r(l)}\b|\frac{i}{2^{n}}-\frac{l}{2^{n}}\j|\j)\kappa(1/2^{r(l)}) \textrm{ if } \\ l > 0 \textrm{ and } \b|\frac{i}{2^{n}}-\frac{l}{2^{n}}\j| \leq 1/2^{r(l)},\end{multline} else let $\PR_{l}\vt(i/2^{n}) =0$. We show (\ref{U1}) holds for $\vt_{2^{n}} = \vt$.
Let $0 \leq i < j \leq 2^{n}$ be arbitrary. Let $m \leq n$ be minimal such that for some $e < 2^{m}$ we have $$\frac{i}{2^{n}} \leq \frac{e}{2^{m}} < \frac{(e+1)}{2^{m}} \leq \frac{j}{2^{n}}.$$
We distinguish several cases.
Suppose first $i/2^{n} =e/2^{m}$ and $j/2^{n} =(e+1)/2^{m}$. For every $k \leq m$ there is exactly one $l$ with $r(l)=k$ such that $$|e/2^{m}-l/2^{n}| \leq 1/2^{k} \textrm{ and } |(e+1)/2^{m}-l/2^{n}| \leq 1/2^{k};$$ and for this $l$, by (\ref{DD1}), $$\b|\PR_{l}(\vt((e+1)/2^{m}) - \vt(e/2^{m}))\j| = \kappa(1/2^{k})/2^{m-k}.$$ All the other coordinates of $\vt((e+1)/2^{m}) $ and $\vt(e/2^{m})$ are zero so by (\ref{U2}), \Keq\label{ezis}\|\vt((e+1)/2^{m}) - \vt(e/2^{m})\|_{g} = \sum_{k=0}^{m} g(\kappa(1/2^{k})/2^{m-k}) = f(1/2^{m}),\Zeq i.e.\ (\ref{U1}) holds with $K=1$.
Next suppose $e$ is even, $i/2^{n} =e/2^{m}$ and $(e+1)/2^{m}<j/2^{n}$; then we have $m \geq 1$. Observe that by the choice of $m$ we have $j/2^{n}< (e+2)/2^{m}$. For every $k < m$ there is exactly one $l$ with $r(l)=k$ such that $$|e/2^{m}-l/2^{n}| \leq 1/2^{k} \textrm{ and } |j/2^{n}-l/2^{n}| \leq 1/2^{k};$$ and for this $l$, $l/2^{n} \notin (e/2^{m},j/2^{n})$. So by (\ref{DD1}), $$\DS \frac{\kappa(1/2^{k})}{2^{m-k}} \leq \b| \PR_{l} (\vt(j/2^{n}) - \vt(e/2^{m}))\j| \leq \DS 2\frac{\kappa(1/2^{k})}{2^{m-k}} ~(0 \leq k < m).$$ Since $e$ is even, $\vt(e/2^{m})$ has no other nonzero coordinates. For every $m \leq k \leq n$ there is exactly one $l$ with $r(l)=k$ such that $ |j/2^{n}-l/2^{n}| \leq 1/2^{k},$ and for this $l$, $\PR_{l}(\vt(j/2^{n})) \leq \kappa(1/2^{k})$. Since $g$ is essentially increasing, we have \begin{multline}\label{U6} \sum_{k=0}^{m-1} g\b(\frac{\kappa(1/2^{k})}{2^{m-k}}\j) \lesssim \|\vt(j/2^{n}) - \vt(e/2^{m})\|_{g} \lesssim \\ \sum_{k=0}^{m-1} g\b(2\frac{\kappa(1/2^{k})}{2^{m-k}}\j)+ \sum_{k=m}^{n} g(\kappa(1/2^{k})).\end{multline}
By $(R_{2})$, \Keq\label{U5}g\b(2\frac{\kappa(1/2^{k})}{2^{m-k}}\j) \lesssim g\b(\frac{\kappa(1/2^{k})}{2^{m-k}}\j) ~(0 \leq k < m).\Zeq By (\ref{U2}) and since $f$ is essentially increasing, $$f\b(\frac{j}{2^n} - \frac{e}{2^m}\j) \lesssim f\b(\frac{1}{2^{m-1}}\j) = \sum_{k=0}^{m-1} g\b(\frac{\kappa(1/2^{k})}{2^{m-1-k}}\j),$$ so by (\ref{U6}), \Keq\label{U11}f\b(\frac{j}{2^n} - \frac{e}{2^m}\j) \lesssim \|\vt(j/2^{n}) - \vt(e/2^{m})\|_{g}.\Zeq
By (\ref{U3}) and (\ref{U5}), the right hand side of (\ref{U6}) is $\lesssim \sum_{k=0}^{m} g\b(\kappa(1/2^{k})/2^{m-k}\j)$, so since $f$ is essentially increasing, \begin{multline}\label{U10}\|\vt(j/2^{n}) - \vt(e/2^{m})\|_{g} \lesssim \\ \sum_{k=0}^{m} g\b(\frac{\kappa(1/2^{k})}{2^{m-k}}\j) = f(1/2^{m}) \lesssim f\b(\frac{j}{2^n} - \frac{e}{2^m}\j) .\end{multline} The case where $e+1$ is even, $i/2^{n} < e/2^{m}$ and $j/2^{n} = (e+1)/2^{m}$ can be treated by an analogous argument.
Suppose now $e$ is even, $i/2^{n} < e/2^{m}$ and $(e+1)/2^{m} \leq j/2^{n}$; then we have $m \geq 2$. By $(R_{2})$, (\ref{U10}), and also by (\ref{ezis}) if $j/2^{n}=(e+1)/2^{m} $, \begin{multline}\notag\|\vt(j/2^{n}) - \vt(i/2^{n})\|_{g} \lesssim \\ \|\vt(j/2^{n}) - \vt(e/2^{m})\|_{g} +\|\vt(e/2^{m}) - \vt(i/2^{n})\|_{g} \lesssim \\ \b(f\b(\frac{j}{2^n} - \frac{e}{2^m}\j)+ f\b(\frac{e}{2^m} - \frac{i}{2^n}\j)\j) \lesssim f\b(\frac{j-i}{2^n} \j).\end{multline} To have a lower bound, observe that for every $k < m-1$ there is exactly one $l$ with $r(l)=k$ such that $$|i/2^{n}-l/2^{n}| \leq 1/2^{k} \textrm{ and } |j/2^{n}-l/2^{n}| \leq 1/2^{k}.$$ For this $l$, $l/2^{n} \notin (i/2^{n},j/2^{n})$ if $l/2^{n} \neq e/2^{m}$, i.e.\ if $k \neq k_{0} = r(2^{n-m}e)$. So by (\ref{DD1}), $$\DS \frac{\kappa(1/2^{k})}{2^{m-k}} \leq \b|\PR_{l} (\vt(j/2^{n}) - \vt(i/2^{n}))\j|~(k < m-1, ~k \neq k_{0}).$$ By (\ref{star}), $(R_{2})$ and since $g$ is essentially increasing, $$g\b(\frac{\kappa(1/2^{k_{0}})}{2^{m-k_{0}}}\j) \lesssim \max_{i < k_{0}}g\b(\frac{\kappa(1/2^{i})}{2^{m-i}}\j).$$ So by $(R_{2})$ and since $g$ is essentially increasing, \begin{multline}\notag f(1/2^{m-2}) = \sum_{k=0}^{m-2} g\b(\frac{\kappa(1/2^{k})}{2^{m-2-k}}\j) \lesssim\sum_{k=0}^{m-2} g\b(\frac{\kappa(1/2^{k})}{2^{m-k}}\j) \lesssim \\ \sum\b\{g\b(\frac{\kappa(1/2^{k})}{2^{m-k}}\j) \colon k < m-1,~k \neq k_{0} \j\} \lesssim \|\vt(j/2^{n}) - \vt(i/2^{n})\|_{g}.\end{multline} By the choice of $m$ we have $(j-i)/2^{n}< 4/2^{m}$. So since $f$ is essentially increasing, $f((j-i)/2^n) \lesssim f(4/2^m) = f(1/2^{m-2})$; thus $f\b((j-i)/2^n \j) \lesssim \|\vt(j/2^{n}) - \vt(i/2^{n})\|_{g}$.
The case where $e+1$ is even, $i/2^{n} \leq e/2^{m}$ and $(e+1)/2^{m}<j/2^{n}$ follows similarly, so the proof is complete.$\bs$
\medskip
The assumptions of Theorem \ref{fole} are not necessary; they merely make it possible to imitate the construction in the proof of \cite[Theorem 1.1 p.\ 1836]{DH}. We note, however, that the problem of characterizing whether $\{i/2^n \colon 0 \leq i \leq 2^n\}$ endowed with the $\|\cdot\|_{f}$-distance Lipschitz embeds into $[0,1]^{\omega}$ endowed with the $\|\cdot\|_{g}$-distance is very hard even if the distances $\|\cdot\|_{f}$ and $\|\cdot\|_{g}$ can be related to norms (see e.g.\ \cite{Ma} and the references therein). So it is unlikely that there is a simple characterization of reducibility among $\bE_{f}$s using the approach of Lemma \ref{disz}.
\section{Nonreducibility results} \label{nonred}
In this section we improve \cite[Theorem 2.2 p.\ 1840]{DH} in order to obtain nonreducibility results for a wider class of $\bE_{f}$s, as follows.
\begin{theorem}\label{Nr} Let $1 \leq \alpha < \infty$ and let $\vp, \psi \colon [0,1] \rar [0,+\infty)$ be continuous functions. Set $f=\mathrm{Id}^{\alpha}\vp$, $g = \mathrm{Id}^{\alpha}\psi$ and suppose that $f,g$ are bounded and $\bE_{f}$ and $\bE_{g}$ are equivalence relations. Suppose $\psi(x) > 0$ $(x > 0)$, and
\ben
\item[$(A_{1})$] there exist $\ve > 0$, $M < \omega$ such that for every $n > M$ and $x,y \in [0,1]$, $$\vp(x) \leq \ve \vp(y) \vp(1/2^n) \Rightarrow x \leq \frac{y}{2^{n+1}};$$
\item[$(A_{2})$] $\lim _{n \rar \infty} \psi(1/2^{n})/\vp(1/2^{n}) = 0$.
\een Then $\bE_{g} \not \leq _{B} \bE_{f}$.
\end{theorem}
Observe that $\vp\equiv 1$, $\psi = \mathrm{Id}^{\beta}$ $(0 < \beta < \infty)$ satisfy the assumptions of Theorem \ref{Nr}, so it generalizes \cite[Theorem 2.2 p.\ 1840]{DH}.
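Indeed, for $\vp \equiv 1$ the premise of $(A_{1})$ reads $1 \leq \ve$, so for $\ve = 1/2$ the implication in $(A_{1})$ holds vacuously, while $(A_{2})$ becomes $\lim_{n \rar \infty} 2^{-n\beta} = 0$; the conclusion in this case is $\bE_{\mathrm{Id}^{\alpha+\beta}} \not \leq _{B} \bE_{\mathrm{Id}^{\alpha}}$.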
The proof of \cite[Theorem 2.2 p.\ 1840]{DH} has two fundamental constituents. The first idea is to pass to a subspace $X \ss [0,1]^{\omega}$ where a hypothetic Borel reduction $\vt$ of $\bE_{g}$ to $\bE_{f}$ is modular, i.e.\ for $x \in X$, $\vt(x)$ consists of finite blocks, each of which depends only on a single coordinate of $x$. This technique can be adopted without any difficulty. The second tool is an excessive use of the fact that for $f = \mathrm{Id}^{p}$, $f^{-1}(\|\cdot\|_{f})$ is a norm, which does not follow from the assumptions of Theorem \ref{Nr}. We get around this difficulty by exploiting that $\vp$ is a perturbation when compared to $\mathrm{Id}^{\alpha}$.
\medskip
\textbf{Proof of Theorem \ref{Nr}.} Suppose $\vt \colon [0,1]^{\omega} \rar [0,1]^{\omega}$ is a Borel reduction of $\bE_{g}$ to $ \bE_{f}$. With $Z_{k} = \{i/2^{k}\colon 0 \leq i \leq 2^{k}\}$, set $Z = \prod_{k < \omega} Z_{k}$; then $\vt$ is a Borel reduction of $\bE_{g}|_{Z \times Z}$ to $\bE_{f}$. For every finite sequence $t \in \prod_{i < |t|} Z_{i}$, let $N_{t} = \{z \in Z \colon z(i) = t(i)~(i < |t|)\}$. We import several lemmas from \cite{DH}.
\begin{lemma}\label{DH1}{\rm (\cite[Claim (i) p.\ 1840]{DH})} For any $j, k < \omega$ there exist $l < \omega$, a finite sequence $s^{\star} \in \prod_{i < |s^{\star}|} Z_{k+i}$, and a comeager set $D \ss Z$ such that for all $x, \hat x \in D$, if we have $x = r^{\frown}s^{\star \frown} y$
and $\hat x = \hat r ^{\frown} s^{\star \frown} y$ for some $r, \hat r \in [0,1]^{k}$ and $y \in [0,1]^{\omega}$, then $\|(\vt (x)- \vt(\hat x))|_{\omega \sm l}\|_{f} < 2^{-j}.$
\end{lemma}
\textbf{Proof. } For every $l < \omega$, we define $F_{l} \colon Z \rar \mathbb{R}$ by \Kem\notag F_{l}(x) = \max\{ \|(\vt (z)- \vt(\hat z))|_{\omega \sm l}\|_{f} \colon \\ z,\hat z \in Z,~z(i) = \hat z(i) = x(i)~(k \leq i < \omega)\}.\end{multline}
For fixed $x \in Z$, there are only finitely many $z, \hat z \in Z$ satisfying $z(i) = \hat z(i) = x(i)$ $(k \leq i < \omega)$. For each such pair we have $\|z - \hat z\|_{g} < \infty$, hence $\|\vt(z) - \vt( \hat z)\|_{f} < \infty$, in particular $\lim_{l \rar \infty} \|(\vt(z) - \vt(\hat z))|_{\omega \sm l}\|_{f} = 0$. So $F_{l}(x) < \infty$ for all $l < \omega$ and $\lim_{l \rar \infty} F_{l}(x) = 0$ $(x \in Z)$. Therefore, by the Baire Category
Theorem, there exists an $l < \omega$ such that $\{x \in Z \colon F_{l}(x) < 2^{-j}\}$ is not meager. Since $\vt$ is Borel, so is $F_{l}$, hence this set has
the property of Baire, so there is a nonempty open set $O$ on which it is relatively
comeager.
We can assume $O = N_{t}$ for some finite sequence $t \in \prod_{i < |t|} Z_{i}$, and we can also
assume $|t| \geq k$. Let $t = r^{\star \frown} s^{\star}$ where $|r^{\star}| = k$. But $F_{l}(x)$ does not depend
on the first $k$ coordinates of $x$, so $\{x \in Z \colon F_{l}(x) < 2^{-j}\}$ is also relatively comeager
in $N_{r^{\frown}s^{\star}}$ for all $r \in \prod_{i < k } Z_{i}$. Let $D$ be a comeager set such that $F_{l}(x) < 2^{-j}$
whenever $x \in D \cap N_{r^{\frown}s^{\star}}$ for any $r$ of length $k$. Now the conclusion of the lemma
follows from the definition of $F_{l}$.$\bs$
\medskip
By \cite[(8.38) Theorem p.\ 52]{K} there is a dense $G_{\delta}$ set $C \ss Z$ such that $\vt|_{C}$ is continuous.
\begin{lemma}\label{DH2}{\rm (\cite[Claim (ii) p.\ 1841]{DH})} For any $j, k,l < \omega$ there is a finite sequence $s^{\star \star} \in \prod_{i < |s^{\star \star}|} Z_{k+i}$ such that for all $x, \hat x \in C$, if we have $x = r^{\frown}s^{\star \star \frown} y$
and $\hat x = r^{\frown} s^{\star\star \frown} \hat y$ for some $r \in [0,1]^{k}$ and $y , \hat y \in [0,1]^{\omega}$, then $\|(\vt (x)- \vt(\hat x))|_{l}\|_{f} < 2^{-j}.$
Furthermore, if $G$ is a given dense open subset of $Z$, then $s^{ \star \star}$ can be
chosen such that $N_{r ^{\frown}s^{\star \star}} \ss G$ for all $r \in \prod_{i < k} Z_{i}$.
\end{lemma}
\textbf{Proof. } There are only finitely many $r \in\prod_{i < k } Z_{i}$; enumerate them as $r_{0}, r_{1}, \dots, r_{M-1}$. We construct $s^{\star \star}$ by successive extensions.
Let $t_{0} = \es$. Let $m < M$ and suppose that we have the finite
sequence $t_{m} \in \prod_{i < |t_{m}|} Z_{k+i}$. The basic open
set $N_{r_{m}^{\frown}t_{m}}$ meets the comeager set $C$, so we can pick $w \in C \cap N_{r_{m}^{\frown}t_{m}}$. Since $\vt$ is continuous on $C$ and $f$ is continuous, we can pass to a smaller open neighborhood $O$ of $w$ such
that for all $x, \hat x \in C \cap O$, $\|(\vt(x) - \vt(\hat x))|_{l}\|_{f} < 2^{-j}$. We can assume $O = N_{r_{m}^{\frown}t_{m}'}$ for some extension $t_{m}'$ of $t_{m}$. Since $G$ is dense open, we can further extend $t_{m}'$ to get
$t_{m+1}$ such that $N_{r_{m}^{\frown}t_{m+1}} \ss G$.
Once the sequences $t_{m}$ $(m \leq M)$ are constructed, $s ^{\star \star} = t_{M}$ fulfills the requirements.$\bs$
\begin{lemma}\label{DH3}{\rm (\cite[Claim (iii) p.\ 1842]{DH})} There exist strictly increasing sequences $(b_{i})_{i < \omega}, (l_{i})_{i < \omega} \ss \omega$ and functions $f_{i} \colon Z_{b_{i}} \rar [0,1]^{l_{i+1}-l_{i}}$ such that $b_{0} = l_{0} = 0$, for $Z' = \prod_{i < \omega} Z_{b_{i}}$ and $\vt' \colon Z' \rar [0,1]^{\omega}$, $\vt'(x) = f_{0}(x_{0}) ^{\frown} \dots ^{\frown} f_{i}(x_{i}) ^{\frown} \dots$ we have \Keq\label{DH3_5}\|x-\hat x\|_{g} < \infty \Leftrightarrow \|\vt'(x)-\vt'(\hat x)\|_{f} < \infty.\Zeq
\end{lemma}
\textbf{Proof. } We construct the sequences $(b_{i})_{i < \omega}, (l_{i})_{i < \omega} \ss \omega$, finite sequences $s_{i} $ $(i < \omega)$ and dense open sets $D^{j}_{i}$ $(i,j < \omega)$ by induction, as follows.
We have $b_{0} = l_{0} = 0$. Let $j < \omega$ and suppose that we have $b_{j}$, $l_{j}$ and $D_{i}^{j'}$ for every $i < \omega$ and $j' < j$. We apply Lemma \ref{DH1} for $j$ and $k = b_{j}+1$ to get $l_{j+1} = l < \omega$, a finite sequence $s_{j}^{\star} \in \prod_{i < |s_{j}^{\star}|} Z_{b_{j}+1+i}$ and a comeager set $D^{j} \ss Z$ satisfying the conclusions of Lemma \ref{DH1}. We can assume $l_{j+1} > l_{j}$ and $D^{j} \ss C$. Let $(D^{j}_{i})_{i < \omega}$ be a decreasing sequence of dense open subsets of $Z$ such that $ \bigcap_{i < \omega}D^{j}_{i} \ss D^{j}$. We apply Lemma \ref{DH2} for $j$, $k = b_{j} + 1 + |s_{j}^{\star}|$, $l = l_{j+1}$, and $G = \bigcap_{j' < j} D^{j'}_{j}$ to get $s_{j}^{\star \star}$ as in Lemma \ref{DH2}. We set $s_{j} = s_{j}^{ \star \frown} s_{j}^{\star \star}$ and $b_{j+1} = b_{j} + 1 + |s_{j}|$.
Let $Z' = \prod_{i < \omega} Z_{b_{i}}$ and set $h \colon Z' \rar Z$, $$h(x) = x_{0} ^{\frown}s_{0}^{\frown}x_{1} ^{\frown}s_{1}^{\frown} \dots ^{\frown}x_{i} ^{\frown}s_{i}^{\frown} \dots . $$ For every $i < \omega$, we define $f_{i} \colon Z_{b_{i}} \rar [0,1]^{l_{i+1}-l_{i}}$ by \Keq\label{FFE}f_{i}(a) = \vt(h(\underbrace{0 ^{\frown} \dots ^{\frown} 0}_{i}~\!\! ^{\frown} a ^{\frown} 0 ^{\frown} 0 ^{\frown} \dots))|_{l_{i+1} \sm l_{i}};\Zeq and we set $\vt' \colon Z' \rar [0,1]^{\omega}$, $\vt'(x) = f_{0}(x_{0}) ^{\frown} \dots ^{\frown} f_{i}(x_{i}) ^{\frown} \dots$
It remains to prove (\ref{DH3_5}). To see this, it is enough to prove $\|\vt'(x) - \vt (h(x))\|_{f} < \infty$ for every $x \in Z'$ since then for every $x, \hat x \in Z'$, by $(R_{2})$, \begin{multline} \notag \|\vt'(x) - \vt ' (\hat x)\|_{f} < \infty \iff \|\vt(h(x)) - \vt (h(\hat x))\|_{f} < \infty \iff \\ \|h(x) - h(\hat x)\|_{g} < \infty \iff \|x - \hat x \|_{g} < \infty .\end{multline}
Let $x \in Z'$ be arbitrary; for every $j < \omega$ we define $e_{j}, e'_{j} \in Z'$ by setting $$\PR_{i} e_{j} = \b\{ \begin{array}{ll} x_{i}, & \textrm{ if } i = j; \\ 0, & \textrm{ if } i \in \omega \sm \{ j\}; \end{array} \j.,~\PR_{i} e'_{j} = \b\{ \begin{array}{ll} x_{i}, & \textrm{ if } i \leq j; \\ 0, & \textrm{ if } j < i < \omega. \end{array} \j.$$ Since $h(x)$ and $h(e'_{j})$ agree on all coordinates below $b_{j+1}$, by the definition of $s_{j}^{\star \star}$, $$\|(\vt(h(x)) - \vt(h(e'_{j})))|_{l_{j+1}}\|_{f} < 2^{-j} ~(j < \omega).$$ On the other hand, for $j > 0$, $h(e'_{j})$ and $h(e_{j})$ agree on all coordinates above $b_{j-1}$, so by the definition of $s_{j-1}^{\star}$, \Keq\label{potya}\|(\vt(h(e'_{j})) - \vt(h(e_{j})))|_{\omega \sm l_{j}}\|_{f} < 2^{-j+1} ~(0 <j < \omega).\Zeq Moreover, (\ref{potya}) holds for $j=0$, as well. Then by $(R_{2})$, \begin{multline}\notag \|(\vt'(x) - \vt (h(x)))|_{l_{j+1} \sm l_{j}}\|_{f} = \|(\vt(h(e_{j})) - \vt (h(x)))|_{l_{j+1} \sm l_{j}}\|_{f} \lesssim \\ \|(\vt(h(e_{j})) - \vt (h(e'_{j})))|_{\omega\sm l_{j}}\|_{f}+\|(\vt(h(e'_{j})) - \vt (h(x)))|_{l_{j+1}}\|_{f} \leq 3 \cdot 2^{-j}. \end{multline} Therefore $$\|(\vt'(x) - \vt (h(x)))\|_{f} = \sum _{j <\omega} \|(\vt'(x) - \vt (h(x)))|_{l_{j+1} \sm l_{j}}\|_{f} \leq \sum _{j <\omega} 3 \cdot 2^{-j}< \infty, $$ as required.$\bs$
\begin{lemma}\label{DH4}{\rm (\cite[Claim (iv) p.\ 1843]{DH})} There exist $ c >0$ and $N < \omega$ such that with the notation of (\ref{FFE}), for every $i >N$,
$\| f_{i}(1) - f_{i}(0)\|_{f} > c$.
\end{lemma}
\textbf{Proof. } If not, then we can find a strictly increasing sequence $(j_{m})_{m < \omega} \ss \omega$ such that
$\|f_{j_{m}}(1) - f_{j_{m}}(0)\|_{f} \leq 2^{-m}$ $(m < \omega)$. Let $\hat x$ be the constant 0 sequence, and let $x$ be the sequence which is 1 at each coordinate $j_{m}$ $(m < \omega)$ and 0 at all other
coordinates. Then $\|x - \hat x\|_{g} = \infty$ but
\begin{multline} \notag \| \vt ' (x) - \vt ' (\hat x)\|_{f} = \sum_{j < \omega} \| f_{j}(x(j)) - f_{j}( \hat x(j))\|_{f} = \\ \sum_{m < \omega} \|f_{j_{m}}(1) - f_{j_{m}}(0)\|_{f} \leq \sum_{m < \omega} 2^{-m} < \infty,\end{multline}
contradicting (\ref{DH3_5}).$\bs$
\begin{lemma}\label{DH5} Let $c > 0$, $N < \omega$ be as in Lemma \ref{DH4}. For every $0< D < \omega$ there exists $ N_{D} > \max\{N,D\}$ such that for every $i \geq N_{D}$ there is a $0 \leq k< 2^{b_{N_{D}}}$ with \Keq\label{KE3}\|f_{i}((k+1)/2^{b_{N_{D}}}) - f_{i}(k/2^{b_{N_{D}}}) \|_{f} \geq Dg(1/2^{b_{N_{D}}}).\Zeq
\end{lemma}
\textbf{Proof. } Let $\ve > 0$ and $M < \omega$ be as in the assumptions of Theorem \ref{Nr}. Fix $0<D < \omega$; by $(A_{2})$ there exists $N_{D} > \max\{M,N,D\}$ such that with $n = 2^{b_{N_{D}}}$, $2D/c < \ve \vp(1/n) / \psi(1/n)$. Fix $i \geq N_{D}$, set $l = l_{i+1} - l_{i}$ and $$\gamma_{j} = |\PR_{j}(f_{i}(1) - f_{i}(0))|~(j < l).$$ For every $x= (x_{j})_{j < l} \in [-1,1]^{l}$ set $$\|x\|_{\Delta} = \b(\sum_{j < l} |x_{j}|^{\alpha}\vp(\gamma_{j})\j)^{1/\alpha};$$ then $\|\cdot\|_{\Delta}$ satisfies the triangle inequality on $[0,1]^{l}$. Since $\|f_{i}(1) - f_{i}(0)\|_{f} =\|f_{i}(1) - f_{i}(0)\|_{\Delta} ^{\alpha} = \sum_{j < l}\gamma_{j}^{\alpha}\vp(\gamma_{j})$, by the triangle inequality there is a $0 \leq k < n $ such that $$\|f_{i}((k+1)/n) - f_{i}(k/n) \| _{\Delta} \geq \frac{1}{n}\|f_{i}(1) - f_{i}(0)\|_{f}^{1/\alpha}.$$ With such a $k$, set $$\delta_{j} = |\PR_{j}(f_{i}((k+1)/n) - f_{i}(k/n))|~ (j < l);$$ i.e.\ we have \Keq\label{KE1}\sum_{j < l} \delta_{j}^{\alpha}\vp(\gamma_{j})\geq \frac{1}{n^{\alpha}}\|f_{i}(1) - f_{i}(0)\|_{f}.\Zeq
Set $J = \{j < l \colon \vp(\gamma_{j}) \leq \vp(\delta_{j})c/(2D\psi(1/n)) \}$. Then \Keq\label{KE2}\sum_{j < l} \delta_{j}^{\alpha}\vp(\gamma_{j}) \leq \sum_{j \in J} \delta_{j}^{\alpha}\vp(\delta_{j})\frac{c}{2D\psi(1/n)} +\sum_{j \notin J} \delta_{j}^{\alpha}\vp(\gamma_{j}).\Zeq By the choice of $N_{D}$, $2D/c < \ve \vp(1/n) / \psi(1/n)$. So for $j \notin J$, $\vp(\delta_{j}) < \ve\vp(\gamma_{j})\vp(1/n)$. This, by $(A_{1})$ and by $b_{N_{D}} \geq N_{D} >M$, implies $\delta_{j} \leq \gamma_{j}/(2n)$ $(j \notin J)$. Hence $$\sum_{j \notin J} \delta_{j}^{\alpha}\vp(\gamma_{j}) \leq \frac{1}{(2n)^{\alpha}} \sum_{j \notin J} \gamma_{j}^{\alpha}\vp(\gamma_{j}) = 2^{-\alpha}\frac{\|f_{i}(1) - f_{i}(0)\|_{f}}{n^{\alpha}} .$$ So by (\ref{KE1}) and (\ref{KE2}), $$\sum_{j \in J} \delta_{j}^{\alpha}\vp(\delta_{j})\frac{c}{2D\psi(1/n)} \geq (1-2^{-\alpha})\frac{\|f_{i}(1) - f_{i}(0)\|_{f}}{n^{\alpha}} $$ which implies $$\|f_{i}((k+1)/n) - f_{i}(k/n) \| _{f} = \sum_{j < l} \delta_{j}^{\alpha}\vp(\delta_{j})\geq D\frac{\psi(1/n)}{n^{\alpha}} = Dg(1/n),$$ as required.$\bs$
\medskip
For every $0<D < \omega$ let $N_{D}$ be as in Lemma \ref{DH5}. Since $g(0) = 0$ and $g$ is continuous, by reassigning $N_{D}$ we can assume $g(1/2^{b_{N_{D}}}) \leq 1/D^{2}$ $(0<D < \omega)$. Let $I_{D} \ss \omega \sm N_{D}$ $(0<D < \omega)$ be pairwise disjoint sets such that $ 1/D \leq |I_{D}| D g(1/2^{b_{N_{D}}}) < 2/D$. For every $0<D < \omega$ and $i \in I_{D}$ pick a $0 \leq k_{i,D}< 2^{b_{N_{D}}}$ satisfying (\ref{KE3}). Define $x , \hat x \in Z'$ by $x(i)= k_{i,D}/2^{b_{N_{D}}}$, $\hat x(i)= (k_{i,D}+1)/2^{b_{N_{D}}}$ $(i \in I_{D}, ~0 < D < \omega)$, else $x(i) = \hat x (i) = 0$. Then \Kem\notag \|\vt'(\hat x)- \vt'(x)\|_{f} = \\ \sum_{0 < D < \omega} \sum_{i \in I_{D}} \|f_{i}((k_{i,D}+1)/2^{b_{N_{D}}}) - f_{i}(k_{i,D}/2^{b_{N_{D}}}) \|_{f} \geq \\ \sum_{0 < D < \omega}D|I_{D}|g(1/2^{b_{N_{D}}})= \infty\end{multline} while $$\|\hat x- x \|_{g} = \sum _{0 < D < \omega} |I_{D}| g(1/2^{b_{N_{D}}}) < \sum_{0 < D < \omega} \frac{2}{D^{2}} < \infty; $$ i.e.\ $x \bE_{g} \hat x$ but $\vt'(x) \not\!\!\bE_{f} \vt'(\hat x)$. This contradiction completes the proof.$\bs$
\section{Applications} \label{con}
In this section we construct several families of functions for which our reducibility and nonreducibility results can be applied. Let $1 \leq \alpha \leq \beta< \infty$, let $\vp \colon (0,1] \rar \mathbb{R}$, $\psi \colon [0,1] \rar \mathbb{R}$ be continuous functions and set $f = \mathrm{Id}^{\alpha}\vp$, $f(0) = 0$ and $g = \mathrm{Id}^{\beta}\psi$.
\subsection{Definition of $\vp$ from $\psi$ and $\kappa$}
In order to facilitate the checking of the conditions of Theorem \ref{fole}, we may use the following approach.
Instead of defining $\kappa$ from $\vp$ and $\psi$, we may define $\vp$ from $\psi$ and $\kappa$. To this end we set $\kappa(1/2^{n}) = \mu(n)/2^{n\alpha/\beta}$ $(n < \omega)$ where $\mu$ will be specified later. We assume $\mu(0) = \vp(1) = \psi(1)=1$. Then (\ref{U2}), (\ref{U3}) and (\ref{star}) read as \Keq\label{Z1}\vp\b(\frac{1}{2^{n}}\j) =\sum_{i=0}^{n} 2^{(\alpha-\beta)(n-i)}\mu(i)^{\beta} \psi\b(\frac{2^{(1-\alpha/\beta)i}\mu(i)}{2^{n}}\j) ~(n < \omega),\Zeq \Keq\label{Z2}\sum_{i=n}^{\infty}\frac{1}{2^{i\alpha}} \mu(i)^{\beta} \psi\b(\frac{\mu(i)}{2^{i\alpha/\beta}}\j) \leq L\sum_{i=0}^{n}2^{i(\beta - \alpha)}\frac{\mu(i)^{\beta} }{2^{n\beta}} \psi\b(\frac{2^{(1-\alpha/\beta)i}\mu(i)}{2^{n}}\j),\Zeq \Keq\label{Zstar} \mu(n) \leq L\cdot \max_{i < n} \mu(i)\frac{2^{(n-i)\alpha/\beta}}{2^{n-i}} ~(n < \omega).\Zeq Given $\mu$ and $\psi$, we can define $\vp(1/2^{n})$ $(n < \omega)$ by (\ref{Z1}) and then extend $\vp$ to $(0,1]$ to be a continuous function which is affine on $[1/2^{n+1}, 1/2^{n}]$ $(n < \omega)$. When we say below ``we define $\vp$ from $\mu$, $\alpha$, $\beta$ and $\psi$", we mean this definition.
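For a first illustration, take $\alpha = \beta$, $\psi \equiv 1$ and $\mu \equiv 1$. Then (\ref{Z1}) gives $\vp(1/2^{n}) = n+1$, so $\vp \approx 1-\log$; and (\ref{Z2}) and (\ref{Zstar}) hold with $L = 1/(1-1/2^{\alpha})$. Hence, via Theorem \ref{fole}, this choice of $\mu$ witnesses $\bE_{\mathrm{Id}^{\alpha}(1-\log)} \leq _{B} \bE_{\mathrm{Id}^{\alpha}}$, a first instance of the counterintuitive phenomenon discussed below.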
We show that for a $\vp$ defined this way, if there exist $\ve>0$, $M < \omega$ such that for $n > M$, \Keq\label{Z14}\vp(1/2^{i}) \leq \ve \vp(1/2^{j})\vp(1/2^{n}) \Rightarrow i \geq j+n+3 ~(i,j < \omega)\Zeq then $(A_{1})$ of Theorem \ref{Nr} holds. Let $x,y \in (0,1]$, say $1/2^{i+1} < x \leq 1/2^{i}$ and $1/2^{j+1} < y \leq 1/2^{j}$. We have $$\vp(x) \in [\vp(1/2^{i+1}), \vp(1/2^{i})],~\vp(y) \in [\vp(1/2^{j+1}), \vp(1/2^{j})],$$ thus $\vp(x) \leq \ve \vp(y)\vp(1/2^{n})$ implies $$ \min\{\vp(1/2^{i+1}), \vp(1/2^{i})\} \leq \ve \max \{\vp(1/2^{j+1}), \vp(1/2^{j})\}\vp(1/2^{n}).$$ So by (\ref{Z14}), for $n > M$ we have $i \geq n+j+2$, which implies $x \leq y/2^{n+1}$, as required.
\subsection{Explicit examples}
We introduce a family of functions for which our theorems can be applied and whose growth order is easy to calibrate. For $n < \omega$, let $t_{n} \colon (0,1] \rar \mathbb{R}$, $$t_{n}(x) = \underbrace{1+\log(1+ \dots \log}_{n}(1-\log(x))\dots)~(0 < x \leq 1).$$ For $\eta \in [0,1)^{< \omega}$ we define $l_{\eta} \colon (0,1] \rar \mathbb{R}$, $l_{\eta}(x)= \prod_{i < |\eta|} t_{i}^{\eta_{i}}~(0 < x \leq 1);$ e.g., \begin{multline}\notag l_{\es}(x) = 1, ~l_{(\eta_{0})}(x) =(1-\log(x))^{\eta_{0}}, \\ l_{(\eta_{0}\eta_{1})}(x) =(1-\log(x))^{\eta_{0}}(1+\log(1-\log(x)))^{\eta_{1}},~\textrm{etc.}\end{multline} Let $<_{\textrm{lex}}$ denote the lexicographic order. We summarize some elementary properties of the functions $l_{\eta}$, which will be used in the sequel.
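At the dyadic points these iterated logarithms are easy to evaluate: $t_{0}(1/2^{n}) = 1+n\log(2)$ and $t_{1}(1/2^{n}) = 1+\log(1+n\log(2))$ $(n < \omega)$. Thus each $t_{m+1}$ grows like the logarithm of $t_{m}$, and the lexicographic order on the exponent sequences $\eta$ orders the functions $l_{\eta}$ by their rate of divergence at zero.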
\begin{lemma}\label{y} For every $\eta, \eta' \in [0,1)^{< \omega}$ with $\eta <_{\textrm{lex}} \eta'$, $1 \leq \alpha < \infty$ and $\delta > 0$,
\ben[(a)]
\item \label{bEc} $1 \leq l_{\eta}(xy) \leq l_{\eta}(x)l_{\eta}(y) ~(0 < x,y \leq 1)$;
\item \label{y05} $l_{\eta} \circ \mathrm{Id}^{\delta} \approx l_{\eta} $ and $ l_{\eta} \lesssim \mathrm{Id}^{-\delta}$;
\item \label{y07} $l_{\eta} (1/2^{n+1}) -l_{\eta} (1/2^{n}) \leq 1$ for every $n < \omega$ sufficiently large;
\item\label{y1} $l_{\eta}$ is continuous and strictly decreasing, moreover if $\eta < _{\mathrm{lex}} \eta'$ then $l_{\eta}/l_{\eta'}$ is strictly increasing in a neighborhood of 0, so by $l_{\eta}(x)/l_{\eta'}(x) > 0$ $(x > 0)$, $l_{\eta}/l_{\eta'}$ is essentially increasing and $\lim _{x \rar +0} l_{\eta}(x)/l_{\eta'}(x)=0$;
\item\label{y2} $\mathrm{Id}^{\delta}l_{\eta}$ is bounded and $\lim _{x \rar +0} x^{\delta}l_{\eta}(x) = 0$;
\item\label{y3} $f(x)=x^{\delta}l_{\eta}(x)$ $(0 < x \leq 1)$, $f(0)=0$ is continuous, strictly increasing in a neighborhood of 0, so by $f(x) > 0$ $(x > 0)$, $f$ is essentially increasing;
\item\label{y4} $f(x)=x^{\alpha}l_{\eta}(x)$ $(0 < x \leq 1)$, $f(0)=0$ is continuous, satisfies $(R_{1})$ and $(R_{2})$ hence $\bE_{f}$ is an equivalence relation;
\item\label{y5} $f(x)=x^{\alpha}/l_{\eta}(x)$ $(0 < x \leq 1)$, $f(0)=0$ is continuous and strictly increasing, satisfies $(R_{1})$ and $(R_{2})$ hence $\bE_{f}$ is an equivalence relation;
\item\label{y6} $\vp = 1/ l_{\eta}$ satisfies $(A_{1})$ of Theorem \ref{Nr}.
\een
\end{lemma}
\textbf{Proof. }
It is enough to prove (\ref{bEc}) for $t_{n}$ $(n < \omega)$. We do this by induction on $n$. For $n = 0$, the statement follows from \begin{multline}\notag 1 \leq 1-\log(xy) = 1 - \log(x) - \log(y) \leq \\ 1 - \log(x) - \log(y) + \log(x)\log(y) = (1 - \log(x))(1 - \log(y)).\end{multline} Let now $0 < n < \omega$; then $t_{n} = 1+\log t_{n-1}$, hence $ 1 \leq t_{n}$. By the inductive hypothesis, \begin{multline}\notag t_{n}(xy) = 1+\log t_{n-1}(xy) \leq 1+\log t_{n-1}(x) + \log t_{n-1}(y) \leq \\ (1+\log t_{n-1}(x)) (1+\log t_{n-1}(y)) = t_{n}(x)t_{n}(y), \end{multline} as required.
Similarly, it is enough to show (\ref{y05}) for $t_{n}$ $(n < \omega)$; we use induction on $n$. For $n=0$, the first statement follows from $1-\log(x^{\delta}) = 1-\delta \log(x)$ $(0 < x \leq 1)$, while $ t_{0} \lesssim \mathrm{Id}^{-\delta}$ is elementary analysis. Let now $0 < n < \omega$; we have $t_{n} = 1+\log t_{n-1}$. By the inductive hypothesis and $t_{n-1} \geq 1$, $1+\log (t_{n-1} \circ \mathrm{Id}^{\delta}) \approx 1+\log t_{n-1}$, so the first statement follows. Also by the inductive hypothesis, $1+\log t_{n-1} \lesssim 1-\delta \log \lesssim \mathrm{Id}^{-\delta}$, so the proof is complete.
We show $(l_{\eta} (1/2^{n+1}) -l_{\eta} (1/2^{n}))_{n < \omega}$ is a null sequence; then (\ref{y07}) follows. By elementary analysis, for every $\delta \in [0,1)$ and $m < \omega$, $(t_{m}^{\delta} (1/2^{n+1}) -t_{m}^{\delta}(1/2^{n}))_{n < \omega}$ is a null sequence. Since $l_{\eta}$ is a finite product of $t_{m}^{\delta} $s, the statement follows.
Statements (\ref{y1}), (\ref{y2}) and (\ref{y3}) are elementary analysis. For (\ref{y4}), $(R_{1})$ is immediate; $(R_{2}a)$ follows from $(x+y)^{\alpha} \lesssim x^{\alpha} +y^{\alpha}$ $(0 \leq x,y \leq 1)$ and $l_{\eta}$ being decreasing; while $(R_{2}b)$ follows from $\mathrm{Id}^{\alpha}l_{\eta}$ being essentially increasing.
Consider now (\ref{y5}). Since $l_{\eta}$ is strictly decreasing, $\mathrm{Id}^{\alpha}/l_{\eta}$ is strictly increasing. So $(R_{1})$ is immediate and $(R_{2}b)$ holds. To see $(R_{2}a)$, observe that by (\ref{bEc}), for $0<v/2 \leq u \leq v \leq 1$ we have $$l_{\eta}(u) \leq l_{\eta}(v/2) \leq l_{\eta}(1/2) l_{\eta}(v).$$ So for $0 < x , y \leq 1$, $$(x+y)^{\alpha}/ l_{\eta}(x+y) \lesssim l_{\eta}(1/2) (x^{\alpha} / l_{\eta}(x) + y^{\alpha} / l_{\eta}(y)),$$ as required.
It remains to prove (\ref{y6}). It is enough to show that for every $n < \omega$, $$l_{\eta}(x) \geq l_{\eta}(1/2) l_{\eta}(y) l_{\eta}(1/2^{n}) \Rightarrow x \leq \frac{y}{2^{n+1}}~(0 < x,y \leq 1).$$ By (\ref{bEc}), $l_{\eta}(y/2^{n+1}) \leq l_{\eta}(1/2) l_{\eta}(y) l_{\eta}(1/2^{n})$, so since $l_{\eta}$ is decreasing, the statement follows.$\bs$
\begin{corollary}\label{ET} Let $1 \leq \alpha < \infty$ and let $\eta, \eta' \in [0,1)^{< \omega}$ satisfy $\eta <_{\textrm{lex}} \eta'$.
\ben
\item\label{ET1} The functions $\psi = l_{\eta}$, $g(x) = x^{\alpha}l_{\eta}(x)$ $(0 < x \leq 1)$, $g(0) = 0$ satisfy the conditions of Theorem \ref{ala}.
\item\label{ET3} The functions $\vp(x) = 1/l_{\eta}(x)$, $\psi(x) = 1/l_{\eta'}(x)$ $(0 < x \leq 1)$, $\vp(0) = \psi(0)=0$ and $f = \mathrm{Id}^{\alpha} / l_{\eta}$, $g = \mathrm{Id}^{\alpha} / l_{\eta'}$ satisfy the conditions of Theorem \ref{Nr}.
\een
\end{corollary}
\textbf{Proof. } Statement \ref{ET1} follows from (\ref{y1}), (\ref{y2}) and (\ref{y4}) of Lemma \ref{y}. For \ref{ET3}, $\bE_{f}$ and $\bE_{g}$ are equivalence relations by (\ref{y5}) of Lemma \ref{y}; while $(A_{1})$ and $(A_{2})$ follow from (\ref{y6}) and (\ref{y1}) of Lemma \ref{y}. This completes the proof.$\bs$
\subsection{The counterintuitive case}
In this section we present an example illustrating that the comparison of the growth order of functions does not decide Borel reducibility. Let $\alpha=\beta$ and $\psi \equiv 1$. Then (\ref{Z1}) becomes $\vp(1/2^{n}) =\sum_{i=0}^{n}\mu(i)^{\alpha}$, i.e.\ \Keq\label{Mumu}\mu(n)^{\alpha} = \vp(1/2^{n}) - \vp(1/2^{n-1})~(0 < n < \omega);\Zeq (\ref{Z2}) reads as \Keq\label{G1}\sum_{i=0}^{\infty}\frac{1}{2^{i\alpha}} \mu(n+i)^{\alpha} \leq L \vp(1/2^{n});\Zeq and (\ref{Zstar}) means \Keq\label{Mustar} \mu(n) \leq L \cdot \max_{i < n} \mu(i).\Zeq Since $\mu(n)^{\alpha} \leq \vp(1/2^{n})$, (\ref{G1}) holds if \Keq\label{G2}\sum_{i=0}^{\infty}1/2^{i\alpha} \vp(1/2^{n+i}) \leq L \vp(1/2^{n}).\Zeq
\begin{corollary}\label{HGF}
Let $\vp \colon (0,1] \rar (0,+\infty)$ be an essentially decreasing continuous function such that $\mathrm{Id}^{\alpha}\vp$ is essentially increasing, $ \bE_{\mathrm{Id}^{\alpha}\vp}$ is an equivalence relation, for every $\delta > 0$, $\liminf_{x \rar +0} x^{\delta} \vp(x) = 0$ and (\ref{G2}) holds. Define $\mu$ by (\ref{Mumu}) and suppose (\ref{Mustar}) holds. Then $\bE_{\mathrm{Id}^{\alpha}}$ and $\bE_{\mathrm{Id}^{\alpha}\vp}$ are Borel equivalent.
\end{corollary}
\textbf{Proof. } By Theorem \ref{ala}, $\bE_{\mathrm{Id}^{\alpha}} \leq _{B} \bE_{\mathrm{Id}^{\alpha}\vp}$. By Lemma \ref{EINC}, we can assume in addition that $\vp$ is decreasing. Then the definition of $\mu$ in (\ref{Mumu}) is valid. So by Theorem \ref{fole}, $\bE_{\mathrm{Id}^{\alpha}\vp} \leq _{B} \bE_{\mathrm{Id}^{\alpha}}$.$\bs$
\medskip
We show that (\ref{G2}) holds if for some $\ve > 0$, $\mathrm{Id}^{\alpha-\ve} \vp$ is essentially increasing. Then $$1/2^{(n+i)(\alpha - \ve)} \vp(1/2^{n+i}) \lesssim 1/2^{n(\alpha - \ve)} \vp(1/2^{n})~ (i < \omega),$$ i.e.\ $1/2^{i\alpha} \vp(1/2^{n+i}) \lesssim 1/2^{i\ve} \vp(1/2^{n})$ $(i < \omega)$, so the statement follows. In particular, by Corollary \ref{ET}.\ref{ET1} and by (\ref{y1}), (\ref{y3}) and (\ref{y4}) of Lemma \ref{y}, $\vp = l_{\eta}$ fulfills these requirements for every $\eta \in [0,1)^{< \omega}$. By (\ref{y07}) of Lemma \ref{y} and by $\mu(0)=1$, (\ref{Mustar}) also holds for $\vp = l_{\eta}$ $(\eta \in [0,1)^{< \omega})$. That is, $\bE_{\mathrm{Id}^{\alpha}l_{\eta}}$ and $\bE_{\mathrm{Id}^{\alpha}}$ are Borel equivalent. We will see below in (\ref{totref}) that for every $\eta \in [0,1)^{< \omega}$, $\bE_{\mathrm{Id}^{\alpha}} <_{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta}}$. So the comparison of the growth order of functions does not decide Borel reducibility.
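The function $1-\log$, which lies just outside the family of the $l_{\eta}$s, also satisfies the assumptions of Corollary \ref{HGF}: here (\ref{Mumu}) gives $\mu(n)^{\alpha} = \log(2)$ $(0 < n < \omega)$, (\ref{G2}) follows from $\sum_{i < \omega} (1+(n+i)\log(2))/2^{i\alpha} \lesssim 1+n\log(2)$, and (\ref{Mustar}) holds by $\mu(n) \leq 1 = \mu(0)$. Hence $\bE_{\mathrm{Id}^{\alpha}(1-\log)}$ and $\bE_{\mathrm{Id}^{\alpha}}$ are also Borel equivalent.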
\subsection{The $\alpha < \beta$ case}
Since the previous and following subsections contain the analysis of the reducibility of $\bE_{\mathrm{Id}^{\beta}}$ to $\bE_{\mathrm{Id}^{\beta}\psi}$, in the $\alpha < \beta$ case we assume $\psi \equiv 1$. Then (\ref{Z1}) and (\ref{Z2}) become \Keq\label{G3}\vp\b(\frac{1}{2^{n}}\j) =\sum_{i=0}^{n} 2^{(\alpha-\beta)(n-i)}\mu(i)^{\beta} ~(n < \omega),\Zeq \Keq\label{G4}\sum_{i=0}^{\infty}\frac{1}{2^{i\alpha}} \mu(n+i)^{\beta} \leq L\sum_{i=0}^{n}2^{(\alpha-\beta)(n-i)}\mu(i)^{\beta} .\Zeq To satisfy (\ref{G3}), we have to define \Keq \label{MUMM} \mu(n)^{\beta} = \vp(1/2^{n}) - \vp(1/2^{n-1})/2^{\beta-\alpha} ~(0 < n < \omega),\Zeq and then (\ref{G4}) follows from (\ref{G2}).
\begin{corollary}\label{megm} Let $1 \leq \alpha < \beta < \infty$. Suppose $\vp \colon [0,1] \rar \mathbb{R}^{+}$ is continuous, essentially increasing, $\vp/\mathrm{Id}^{\beta-\alpha}$ is essentially decreasing, $ \bE_{\mathrm{Id}^{\alpha}\vp}$ is an equivalence relation and for the $\mu$ defined by (\ref{MUMM}), (\ref{Zstar}) holds. Then $\bE_{\mathrm{Id}^{\alpha}\vp} \leq _{B} \bE_{\mathrm{Id}^{\beta}}$.
\end{corollary}
\textbf{Proof. } By Lemma \ref{EINC}, we can assume $\vp/\mathrm{Id}^{\beta-\alpha}$ is decreasing, so that (\ref{MUMM}) is valid; while $\vp$ being essentially increasing implies (\ref{G2}). So $\bE_{\mathrm{Id}^{\alpha}\vp} \leq _{B} \bE_{\mathrm{Id}^{\beta}}$ follows from Theorem \ref{fole}.$\bs$
\medskip
The assumptions of Corollary \ref{megm} are affordable:
\bit
\item[-] if $\vp$ is essentially decreasing, Corollary \ref{HGF} gives the Borel equivalence of $\bE_{\mathrm{Id}^{\alpha}\vp}$ and $\bE_{\mathrm{Id}^{\alpha}}$ under suitable assumptions;
\item[-] in order not to be in the counterintuitive case, we may assume that $\vp/\mathrm{Id}^{\beta-\alpha-\delta}$ is decreasing for some $\delta > 0$, so by Corollary \ref{megm}, $\bE_{\mathrm{Id}^{\alpha}\vp} \leq _{B} \bE_{\mathrm{Id}^{\beta-\delta}} < _{B} \bE_{\mathrm{Id}^{\beta}}$;
\eit
So Corollary \ref{megm} indicates that in the $\alpha < \beta$ case growth order decides Borel reducibility. Moreover, in the next section we will see that in order to guarantee $\bE_{\mathrm{Id}^{\alpha}\vp} \leq _{B} \bE_{\mathrm{Id}^{\beta}}$ by growth order estimates, we need $\mathrm{Id}^{\beta}/(\mathrm{Id}^{\alpha} \vp)$ to be bounded; the assumptions of Corollary \ref{megm} reflect this constraint.
Finally we check that for every $\eta \in [0,1)^{< \omega}$, the function $\vp(0) = 0$, $\vp(x) = 1/l_{\eta}(x)$ $(0 < x \leq 1)$ satisfies the assumptions of Corollary \ref{megm}. By
$\lim_{n \rar \infty} l_{\eta}(1/2^{n+1})/l_{\eta}(1/2^{n})=1$ we have $$\mu(n+1) \lesssim 1/l_{\eta}^{1/\beta}(1/2^{n+1}) \lesssim 1/l_{\eta}^{1/\beta}(1/2^{n})\lesssim \mu(n) ~(n < \omega),$$ i.e.\ (\ref{Zstar}) holds. The other assumptions follow from (\ref{y1}), (\ref{y3}) and (\ref{y5}) of Lemma \ref{y}. So \Keq\label{totref}\bE_{\mathrm{Id}^{\alpha}/l_{\eta}} \leq_{B} \bE_{\mathrm{Id}^{\gamma}} < _{B} \bE_{\mathrm{Id}^{\beta}}~ (\eta \in [0,1)^{< \omega},~ 1 \leq \alpha < \gamma < \beta < \infty).\Zeq
\subsection{The $\alpha = \beta$ case}
This is the most interesting case for us. Now (\ref{Z1}), (\ref{Z2}) and (\ref{Zstar}) become \Keq\label{G5}\vp\b(\frac{1}{2^{n}}\j) =\sum_{i=0}^{n} \mu(i)^{\alpha} \psi\b(\frac{\mu(i)}{2^{n}}\j) ~(n < \omega),\Zeq \Keq\label{G6}\sum_{i=0}^{\infty}\frac{1}{2^{i\alpha}} \mu(n+i)^{\alpha} \psi\b(\frac{\mu(n+i)}{2^{n+i}}\j) \leq L\sum_{i=0}^{n}\mu(i)^{\alpha} \psi\b(\frac{\mu(i)}{2^{n}}\j) ~(n < \omega),\Zeq \Keq\label{Gstar} \mu(n) \leq L \cdot \max_{i < n} \mu(i) ~(n < \omega).\Zeq
We obtain a sufficient condition for (\ref{G6}) and (\ref{Gstar}).
\begin{lemma}\label{2eset} Assume $\psi$ is essentially increasing, $\psi(x) > 0$ for $x > 0$ and $\mu(n) \leq 1$ for every $n < \omega$ sufficiently large. Then (\ref{G6}) and (\ref{Gstar}) hold.
\end{lemma}
\textbf{Proof. } Since $\psi$ is essentially increasing, for every $n$ sufficiently large we have $\psi(\mu(n+i)/2^{n+i}) \lesssim \psi(1/2^{n}) $ $(0 \leq i < \omega)$. Hence \Keq\label{Z23}\sum_{i=0}^{\infty}\frac{1}{2^{i\alpha}} \mu(n+i)^{\alpha} \psi\b(\frac{\mu(n+i)}{2^{n+i}}\j) \lesssim \frac{1}{(1-1/2^{\alpha})} \psi(1/2^{n})~(n < \omega),\Zeq thus by $\mu(0) = 1$, (\ref{G6}) follows. Also by $\mu(0)=1$ we have (\ref{Gstar}), so the proof is complete.$\bs$
\subsubsection{The question of S.\ Gao}
In this section, in the spirit of (\ref{nov}), we give the negative answer to the question of S.\ Gao mentioned in the introduction.
\begin{corollary}\label{Rc} Let $1 \leq \alpha < \infty$ be arbitrary. Let $\mu \colon \omega \rar [0, \infty)$ be such that $\mu(0)=1$. Let $\psi \colon [0,1] \rar [0,\infty)$ be a continuous essentially increasing function such that $\psi(1)=1$, (\ref{G6}) holds and there is a $K > 0$ for which \Keq\label{Z12}\frac{1}{K}\psi(1/2^{n}) \leq \psi\b(\frac{\mu(i)}{2^{n}}\j) \leq K \psi(1/2^{n})~( 0 \leq i \leq n < \omega).\Zeq
Set $\sigma_{\mu^{\alpha}}(n) = \sum_{i=1}^{n}\mu^{\alpha}(i)$ $(n < \omega)$. Suppose $(\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(1/2^{n}\j))_{n < \omega}$ is essentially decreasing and (\ref{Gstar}) holds.
Define $\vp$ from $\mu$, $\alpha$ and $\psi$. Set $f(x) = x^{\alpha}\vp(x)$ $(0 < x \leq 1)$, $f(0)=0$ and $g= \mathrm{Id}^{\alpha}\psi$ and suppose $\bE_{f}$ and $\bE_{g}$ are equivalence relations. Then $\bE_{f} \leq _{B} \bE_{g}$.
If, in addition, $\vp$ satisfies $(A_{1})$ of Theorem \ref{Nr} (which follows e.g.\ if $\vp$ satisfies (\ref{Z14})) and $\lim_{n \rar \infty} \sigma_{\mu^{\alpha}}(n) = \infty$, then $\bE_{g} \not \leq _{B} \bE_{f}$.
\end{corollary}
\textbf{Proof. } By (\ref{Z12}), from (\ref{G5}) we get \Keq\label{Z113}\frac{1}{K}\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(\frac{1}{2^{n}}\j) \leq \vp\b(\frac{1}{2^{n}}\j) \leq K\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(\frac{1}{2^{n}}\j).\Zeq Since $((1+ \sigma_{\mu^{\alpha}}(n)) \psi(1/2^{n}))_{n < \omega}$ is essentially decreasing, $\vp$ is essentially increasing. So by Theorem \ref{fole}, $\bE_{f} \leq _{B} \bE_{g}$.
Moreover, suppose $\vp$ satisfies $(A_{1})$ of Theorem \ref{Nr}, which follows e.g.\ if $\vp$ satisfies (\ref{Z14}). Since $\lim_{n \rar \infty} \sigma_{\mu^{\alpha}}(n) = \infty$ implies $(A_{2})$ of Theorem \ref{Nr}, we get $\bE_{g} \not \leq _{B} \bE_{f}$. This completes the proof.$\bs$
\medskip
Many natural functions satisfy the conditions of Corollary \ref{Rc} for both $\vp$ and $\psi$, in particular the functions $1/l_{\eta}$. By Lemma \ref{mege}, the following result gives the negative answer to the question of S.\ Gao.
\begin{corollary}\label{logos} For every $1 \leq \alpha < \beta < \infty$ and $\eta, \eta' \in [0,1)^{< \omega}$ with $\eta <_{\textrm{lex}} \eta'$, $$\bE_{\mathrm{Id}^{\alpha}} < _{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta}} < _{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta'}} <_{B} \bE_{\mathrm{Id}^{\beta}}.$$
\end{corollary}
\textbf{Proof. } By Lemma \ref{y} (\ref{y5}), $\bE_{\mathrm{Id}^{\alpha}/l_{\eta}}$ is an equivalence relation, and in (\ref{totref}) we obtained $\bE_{\mathrm{Id}^{\alpha}/l_{\eta}}<_{B} \bE_{\mathrm{Id}^{\beta}}.$ By Lemma \ref{y} (\ref{y1}), $(l_{\eta'}(1/2^{n})/l_{\eta}(1/2^{n}))_{n < \omega}$ is strictly increasing for $n$ sufficiently large. Thus there is a function $\mu \colon \omega \rar \mathbb{R}^{+}$ such that $\mu(0)=1$, and for every $n < \omega$ sufficiently large,
$$\mu^{\alpha}(n) = l_{\eta'}(1/2^{n})/l_{\eta}(1/2^{n}) - l_{\eta'}(1/2^{n-1})/l_{\eta}(1/2^{n-1}).$$
Let $\psi(x)=1/l_{\eta'}(x)$ $(0 < x \leq 1)$, $\psi(0)=0$ and define $\vp$ from $\mu$, $\alpha$ and $\psi$. We check the conditions of Corollary \ref{Rc}.
First we show that for every $\ve > 0$, \Keq\label{UTS}2^{-n\ve} \leq \mu^{\alpha}(n) \leq 1 \Zeq holds for $n$ sufficiently large. By Lemma \ref{y} (\ref{bEc}), (\ref{y07}) and (\ref{y1}), for every $n$ sufficiently large, \begin{multline}\notag \mu^{\alpha}(n) = \frac{l_{\eta'}(1/2^{n})}{l_{\eta}(1/2^{n})}-\frac{l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n-1})} = \\ \frac{l_{\eta'}(1/2^{n})-l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n})} + \frac{l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n})} - \frac{l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n-1})} \leq \\ \frac{l_{\eta'}(1/2^{n})-l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n})} \leq \frac{1}{l_{\eta}(1/2^{n})} \leq 1. \end{multline} For the lower bound, take an $m > |\eta'|$ and consider $t_{m}$. By Lemma \ref{y} (\ref{y1}), $l_{\eta'}(1/2^{n})/(l_{\eta}(1/2^{n})t_{m}(1/2^{n}))$ is still strictly increasing for $n$ sufficiently large. So for $n$ sufficiently large, $$\mu^{\alpha}(n) = \frac{l_{\eta'}(1/2^{n})}{l_{\eta}(1/2^{n})}-\frac{l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n-1})} \geq \frac{l_{\eta'}(1/2^{n-1})}{l_{\eta}(1/2^{n-1})}\frac{t_{m}(1/2^{n}) - t_{m}(1/2^{n-1})}{t_{m}(1/2^{n-1})}.$$ It is elementary analysis that $t_{m}(1/2^{n})-t_{m}(1/2^{n-1}) \geq 1/n^{2}$ for $n$ sufficiently large, so the statement follows.
By Lemma \ref{y} (\ref{y1}), $\psi$ is continuous, essentially increasing and $\psi(1) = 1$. Lemma \ref{2eset} gives (\ref{G6}) and (\ref{Gstar}). Also, (\ref{Z12}) follows from Lemma \ref{y} (\ref{y05}) using that $2^{-n/2} \leq \mu^{\alpha}(n) \leq 2^{n/2}$ holds for $n$ sufficiently large.
We have $$\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(1/2^{n}\j) \approx 1/l_{\eta}(1/2^{n})~(n < \omega),$$ so $(\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(1/2^{n}\j))_{n < \omega}$ is essentially decreasing.
By (\ref{Z113}), $$\vp\b(\frac{1}{2^{n}}\j) \approx (1+\sigma_{\mu^{\alpha}}(n))\psi\b(\frac{1}{2^{n}}\j) \approx l_{\eta'}(1/2^{n})/l_{\eta}(1/2^{n}) \psi\b(\frac{1}{2^{n}}\j) \approx 1/l_{\eta}(1/2^{n}),$$ so by Corollary \ref{Rc}, $\bE_{\mathrm{Id}^{\alpha}/l_{\eta}} \leq _{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta'}}$.
By Lemma \ref{y} (\ref{y1}), $\lim_{x \rar +0} l_{\eta'}(x)/l_{\eta}(x) = \infty$, i.e.\ $\lim_{n \rar \infty} \sigma_{\mu^{\alpha}}(n) = \infty$. By Lemma \ref{y} (\ref{y6}), $1/l_{\eta}$ satisfies $(A_{1})$ of Theorem \ref{Nr}, so again by Corollary \ref{Rc}, $\bE_{\mathrm{Id}^{\alpha}/l_{\eta'}} \not \leq _{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta}} $. The $\eta = \es$ special case gives $\bE_{\mathrm{Id}^{\alpha}}< _{B} \bE_{\mathrm{Id}^{\alpha}/l_{\eta}} $, so the proof is complete.$\bs$
\subsubsection{Embedding long linear orders}
In this section we show that every linear order which can be embedded into $(\mc{P}(\omega)/\mathrm{fin}, \subset)$ also embeds into the set of Borel equivalence relations $\bE_{f}$ satisfying $\bE_{\mathrm{Id}^{\alpha}} \leq _{B} \bE_{f} \leq _{B} \bE_{\mathrm{Id}^{\alpha}/(1-\log)}$ ordered by $<_{B}$. We refer to \cite{B-U} for results on embedding ordered sets into $(\mc{P}(\omega)/\mathrm{fin}, \subset)$, here we only remark that it is consistent with ZFC, e.g.\ under the Continuum Hypothesis, that every ordered set of size continuum embeds into $(\mc{P}(\omega)/\mathrm{fin}, \subset)$.
\begin{corollary}\label{om} Let $1 \leq \alpha < \infty$ be fixed. There is a mapping $\mc{F} \colon \mc{P}(\omega)/\mathrm{fin} \rar C[0,1]$ such that for every $U, V \in \mc{P}(\omega)/\mathrm{fin}$, $\bE_{\mc{F}(U)}$ is an equivalence relation satisfying $\bE_{\mathrm{Id}^{\alpha}/(1-\log)^{1-\log(17/16)}} \leq _{B} \bE_{\mc{F}(U)} \leq _{B} \bE_{\mathrm{Id}^{\alpha}/(1-\log)}$ and $U \subset V \Rightarrow \bE_{\mc{F}(V)} <_{B} \bE_{\mc{F}(U)}$.
\end{corollary}
\textbf{Proof. } Let $ \gamma= 17/16$. For every $U \in \mc{P}(\omega)$ set $$\mu_{U}(0) =1, ~\mu^{\alpha}_{U}(n) = \gamma^{|U \cap \lfloor 1+ \log (n)\rfloor |} ~(0 < n < \omega).$$ Let $\psi_{0}(x) = 1/(1-\log(x))^2$ $(0< x \leq 1)$, $\psi_{0}(0) = 0$. For every $U \in \mc{P}(\omega)$ we define $\vp_{U}$ from $\mu_{U}$, $\alpha$ and $\psi_{0}$, and we set $\mc{F}(U) = \mathrm{Id}^{\alpha} \vp_{U}$.
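For orientation: if $U = \es$ then $\mu_{U} \equiv 1$ and $\sigma_{\mu^{\alpha}_{U}}(n) = n$, so by (\ref{Z113}), $\vp_{\es}(1/2^{n}) \approx (1+n)\psi_{0}(1/2^{n}) = (1+n)/(1+n\log(2))^{2} \approx 1/(1+n\log(2))$, i.e.\ $\mc{F}(\es) \approx \mathrm{Id}^{\alpha}/(1-\log)$; the case of a general $U$ interpolates between this and the $U = \omega$ case, as computed at the end of the proof.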
First we show that for every $U \in \mc{P}(\omega)$, $\mu_{U}$ and $\psi_{0}$ satisfy (\ref{Z12}). By definition, $1 \leq \mu^{\alpha}_{U}(n) \leq \gamma^{1+\log(n)} \leq \gamma n$ $(0< n < \omega)$, so (\ref{Z12}) follows.
Next we show that for every $U \in \mc{P}(\omega)$, $\vp_{U}$ is essentially increasing. Since (\ref{Z12}) holds, by (\ref{Z113}) it is enough to show that $((1+\sigma_{\mu^{\alpha}_{U}}(n))\psi_{0}(1/2^{n}))_{n < \omega}$ is essentially decreasing. We have $\psi_{0}(1/2^{n}) \approx 1/n^{2}$ $(0 < n < \omega)$. Let $0 < n < m < \omega$ be fixed, say $m = \rho n$ for some $\rho > 1$. If $\mu_{U}^{\alpha}(n) = \gamma^{k}$, then $\sigma_{\mu^{\alpha}_{U}}(n) \geq n\gamma^{k-1}/2$, and $$\sigma_{\mu^{\alpha}_{U}}(m) \leq \sigma_{\mu^{\alpha}_{U}}(n) + (m-n) \gamma^{k+1+\log(m) - \log(n) } \leq \sigma_{\mu^{\alpha}_{U}}(n) + (\rho-1)n \gamma^{k+1+\log(\rho)}.$$ Hence \begin{multline} \notag \frac{\sigma_{\mu^{\alpha}_{U}}(m)}{m^{2}} \leq \frac{\sigma_{\mu^{\alpha}_{U}}(n) + (\rho-1) n \gamma^{k+1+\log(\rho) }}{(\rho n )^{2}} \leq \\ \frac{\sigma_{\mu^{\alpha}_{U}}(n)}{n^{2}} + \frac{\gamma^{k+1+\log(\rho) }}{\rho n } \leq \frac{\sigma_{\mu^{\alpha}_{U}}(n)}{n^{2}} \b(1+ \frac{2 \gamma^{2+\log(\rho) }}{\rho} \j) \leq 9 \frac{\sigma_{\mu^{\alpha}_{U}}(n)}{n^{2}}. \end{multline} This shows $((1+\sigma_{\mu^{\alpha}_{U}}(n))\psi_{0}(1/2^{n}))_{n < \omega}$ is essentially decreasing.
Next we check that for every $U \in \mc{P}(\omega)$, $\bE_{\mc{F}(U)}$ is an equivalence relation. By definition, $(R_{1})$ holds; $(R_{2}a)$ holds for $\mathrm{Id}^{\alpha}\psi_{0}$ with $C=8\alpha$, so since $\vp_{U}/\psi_{0}$ is decreasing, $(R_{2}a)$ holds for $\mathrm{Id}^{\alpha}\vp_{U}$, as well. Finally $(R_{2}b)$ follows from $\mathrm{Id}^{\alpha}\vp_{U}$ is essentially increasing.
Our task is to prove that if $U,V \in \mc{P}(\omega)$ satisfy $U \ss ^{\star} V$, $|V \sm U| = \infty$ then $\bE_{\mc{F}(V)} <_{B} \bE_{\mc{F}(U)}$. Observe that if $U, U' \in \mc{P}(\omega)$ differ only by a finite set then $\mc{F}(U) \approx \mc{F}(U')$ hence $\bE_{\mc{F}(U)}=\bE_{\mc{F}(U')}$. So we can assume $U \ss V$, $0 \in V \sm U$.
Our strategy is to show that $\vp=\vp_{V}$ can be obtained from $\psi = \vp_{U}$ as in (\ref{G5}) with a $\mu$ satisfying the assumptions of Corollary \ref{Rc}. Set $\mu(0)=1$, $$\mu^{\alpha}(n+1) = \frac{1+\sigma_{\mu^{\alpha}_{V}}(n+1)}{1+\sigma_{\mu^{\alpha}_{U}}(n+1)} - \frac{1+\sigma_{\mu^{\alpha}_{V}}(n)}{1+\sigma_{\mu^{\alpha}_{U}}(n)} ~(n < \omega).$$ Later on we will prove \Keq\label{F1}\frac{\gamma-1}{(n+2)^3} \leq \mu^{\alpha}(n) \leq 1 ~(n < \omega);\Zeq now we assume (\ref{F1}) and verify the conditions of Corollary \ref{Rc}.
The function $\vp_{U}$ is continuous and $\vp_{U}(1) = 1$. As we have seen above, $\vp_{U}$ is essentially increasing. By $\mu \leq 1$, Lemma \ref{2eset} gives (\ref{G6}) and (\ref{Gstar}). By (\ref{F1}), $$\vp_{U}\b(\frac{1}{2^{n+3\lfloor \log(n+2)\rfloor +7}}\j) \lesssim \vp_{U}\b(\frac{\mu(i)}{2^{n}}\j) \lesssim \vp_{U}\b(\frac{1}{2^{n}}\j) ~(0 \leq i \leq n < \omega),$$ so (\ref{Z12}) follows from \begin{multline} \notag\vp_{U}\b(1/2^{n}\j) \approx (1+\sigma_{\mu^{\alpha}_{U}}(n))\psi_{0}(1/2^{n})\approx \\ (1+\sigma_{\mu^{\alpha}_{U}}(n+3\lfloor \log(n+2)\rfloor +7))\psi_{0}(1/2^{n+3\lfloor \log(n+2)\rfloor +7}) \approx \\ \vp_{U}\b(1/2^{n+3\lfloor \log(n+2)\rfloor +7}\j).\end{multline}
Let $\vp$ be defined from $\mu$, $\alpha$ and $\vp_{U}$. We have $$1+ \sigma_{\mu^{\alpha}}(n) = (1+\sigma_{\mu^{\alpha}_{V}}(n))/(1+\sigma_{\mu^{\alpha}_{U}}(n))~(n < \omega),$$ so by (\ref{Z113}), \begin{multline} \notag \vp\b(\frac{1}{2^{n}}\j) \approx \frac{1+\sigma_{\mu^{\alpha}_{V}}(n)}{1+\sigma_{\mu^{\alpha}_{U}}(n)} \vp_{U}\b(\frac{1}{2^{n}}\j) \approx \\ \frac{1+\sigma_{\mu^{\alpha}_{V}}(n)}{1+\sigma_{\mu^{\alpha}_{U}}(n)} (1+\sigma_{\mu^{\alpha}_{U}}(n)) \psi_{0}\b(\frac{1}{2^{n}}\j) = \vp_{V}\b(\frac{1}{2^{n}}\j) .\end{multline} Thus $\bE_{\mc{F}(V)}= \bE_{\mathrm{Id}^{\alpha}\vp}$; and $(\b(1+ \sigma_{\mu^{\alpha}}(n) \j) \psi\b(1/2^{n}\j))_{n < \omega}$ is essentially decreasing. So by Corollary \ref{Rc}, $\bE_{\mc{F}(V)} \leq_{B} \bE_{\mc{F}(U)}$.
Observe that $\psi_{0}$ satisfies (\ref{Z14}) with $M=0$ and $\ve = 1/8$. Since $(1+\sigma_{\mu^{\alpha}_{U}}(n))_{n < \omega}$ is increasing, $\vp_{U}(1/2^{n}) \approx (1+\sigma_{\mu^{\alpha}_{U}}(n))\psi_{0}(1/2^{n})$ $(n < \omega)$ also satisfies (\ref{Z14}) with the same $M$ and a smaller $\ve$. Thus $\vp_{U}$ satisfies $A_{1}$ of Theorem \ref{Nr}.
Since $ |U\cap \log(n)|+k \leq |V \cap \log(n)|$ implies $\gamma^{k} \mu^{\alpha}_{U}(n) \leq \mu^{\alpha}_{V}(n)$, we have $$\lim_{n \rar \infty} (1+\sigma_{\mu^{\alpha}_{U}}(n))/(1+\sigma_{\mu^{\alpha}_{V}}(n)) = 0$$ hence $\lim_{n \rar \infty} \sigma_{\mu^{\alpha}}(n) = \infty$. So again by Corollary \ref{Rc}, $\bE_{\mc{F}(U)} \not \leq_{B} \bE_{\mc{F}(V)}$. For $U = \es$ and $V = \omega$, $\vp_{U} \approx 1/(1-\log)$ and $\vp_{V} \approx 1/(1-\log)^{1-\log(\gamma)}$, so $\bE_{\mathrm{Id}^{\alpha}/(1-\log)^{1-\log(\gamma)}} \leq _{B} \bE_{\mc{F}(U)} \leq _{B} \bE_{\mathrm{Id}^{\alpha}/(1-\log)}$ $(U \in \mc{P}(\omega))$.
It remains to prove (\ref{F1}). For $n=1$, $\mu^{\alpha}(1) = (1+\gamma)/2 - 1 = (\gamma-1)/2$; for $n=2$, $\mu^{\alpha}(2) = (1+2\gamma)/3 - (1+\gamma)/2 = (\gamma-1)/6$. Both values lie between $(\gamma-1)/(n+2)^{3}$ and $1$ for our $\gamma$, so (\ref{F1}) holds for $n=1,2$. Let $n \geq 2$; then $$1+\sigma_{\mu^{\alpha}_{V}}(n) =1+\gamma + a_{n}, ~1+\sigma_{\mu^{\alpha}_{U}}(n) =2 + b_{n}$$ and $$1+\sigma_{\mu^{\alpha}_{V}}(n+1) =1+\gamma + a_{n}+\gamma^{c}, ~1+\sigma_{\mu^{\alpha}_{U}}(n+1) =2 + b_{n}+\gamma^{d},$$ where $c \geq d+1$ and $\gamma \leq a_{n}/b_{n}\leq \gamma^{c-d}$ $(1 < n < \omega)$. Then for every $2 \leq n < \omega$, \begin{multline}\label{F2}\frac{1+\sigma_{\mu^{\alpha}_{V}}(n+1)}{1+\sigma_{\mu^{\alpha}_{U}}(n+1)} - \frac{1+\sigma_{\mu^{\alpha}_{V}}(n)}{1+\sigma_{\mu^{\alpha}_{U}}(n)} = \frac{1+\gamma + a_{n}+\gamma^{c}}{2 + b_{n}+\gamma^{d}} - \frac{1+\gamma + a_{n}}{2 + b_{n}} = \\ \frac{\gamma^{c} - \gamma^{d}\frac{1+\gamma+a_{n}}{2+b_{n}}}{2+b_{n}+\gamma^{d}} = \frac{\gamma^{c} - \gamma^{d}\b(\frac{a_{n}}{b_{n}} - \frac{\frac{2a_{n}}{b_{n}} - (1 + \gamma)}{2+b_{n}}\j)}{2+b_{n}+\gamma^{d}} \geq \\\frac{\gamma^{c} - \gamma^{d}\b(\gamma^{c-d} - \frac{2\gamma - (1 + \gamma)}{2+b_{n}}\j)}{2+b_{n}+\gamma^{d}} =\gamma^{d} \frac{\gamma-1}{(2+b_{n})(2+b_{n}+\gamma^{d})} .\end{multline} We have $d \leq \lfloor \log(n)\rfloor$, so $b_{n} \leq n\gamma^{d} \leq n^2$ $(2 \leq n < \omega)$. So (\ref{F2}) can be estimated from below by $$\frac{\gamma-1}{(2/\gamma^{d}+b_{n}/\gamma^{d})(2+b_{n}+\gamma^{d})} \geq \frac{\gamma-1}{(2/\gamma^{d}+n)(2+n^{2}+n)} \geq \frac{\gamma-1}{(n+2)^{3}},$$ as stated.
For the upper bound, as we have seen in (\ref{F2}), it is enough to show $$\gamma^{c} - \gamma^{d}\frac{1+\gamma+a_{n}}{2+b_{n}} \leq 2+b_{n}+\gamma^{d}~(2 \leq n < \omega).$$ Since $c \leq 1+\log(n+1)$ and $n-2 \leq b_{n}$, $$\gamma^{c} \leq \gamma (n+1)^{\log(\gamma)} \leq n \leq 2+b_{n}~(2 \leq n < \omega)$$ for our $\gamma$. This completes the proof.$\bs$
\section{Pulsar Timing Arrays for Gravitational Wave Detection}
Exactly a hundred years after Einstein's formulation of general relativity, we are on the verge of finding the ``holy grail" that would open a new window on the universe: gravitational waves (GWs). In addition to Earth-based interferometers such as LIGO and VIRGO, and the future space-based eLISA, pulsar timing arrays (PTAs) are actively searching for GWs using millisecond pulsars (MSPs) and Earth as test masses. GWs affect the space-time between Earth and pulsars, and introduce offsets in the times-of-arrival (TOAs) of radio pulses emitted by pulsars. Due to their long baselines (Earth-pulsar distances), PTAs are sensitive to GWs in the nanohertz frequency range, making them a complementary tool to LIGO and the planned eLISA. This frequency range includes the predicted GW emission from supermassive black hole binaries and cosmic strings. For the purpose of GW detection, PTAs monitor the timing residuals of MSPs (defined as the difference between expected and observed TOAs) over a long period ranging from 10 to 30 years. A detection can be achieved by studying the timing residuals of a number of different pulsars and finding specific correlations in the timing residuals between pulsar pairs.
In order to estimate any ``disturbance" in the arrival times of radio pulses, we need to account for every rotation of the pulsar. In a typical PTA observation, we observe a pulsar for minutes to hours; correct for interstellar dispersion and fold on the known pulsar period; determine TOAs by correlating the obtained profile with a template profile; improve the pulsar timing model; study timing residuals. GW detection pipelines can then be applied to achieve a detection or set an upper limit to a background of GWs.\cite{EPTA}
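To make the folding and template-matching steps concrete, the following minimal Python sketch (with illustrative function and variable names; it is not part of any production PTA pipeline) folds an already-dedispersed intensity time series on a known period and estimates the phase offset of the resulting profile against a template, from which a TOA follows:
\begin{verbatim}
import numpy as np

def fold_profile(intensity, dt, period, nbins=256):
    # Fold a dedispersed time series on the (assumed known) pulse period.
    t = np.arange(intensity.size) * dt
    phase_bins = ((t / period) % 1.0 * nbins).astype(int)
    profile = np.bincount(phase_bins, weights=intensity, minlength=nbins)
    hits = np.bincount(phase_bins, minlength=nbins)
    return profile / np.maximum(hits, 1)   # mean intensity per phase bin

def phase_offset(profile, template):
    # Circular cross-correlation of profile and template (same nbins);
    # the peak gives the fractional phase shift, and the TOA correction
    # is that shift times the pulse period.
    cc = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)))
    shift = int(np.argmax(cc))
    if shift > profile.size // 2:          # map to a signed offset
        shift -= profile.size
    return shift / profile.size
\end{verbatim}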
Current PTAs include the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), the European Pulsar Timing Array (EPTA) and the Parkes Pulsar Timing Array (PPTA), which collaborate to form the International Pulsar Timing Array (IPTA). Here we will focus on the EPTA's efforts to detect GWs using high-precision pulsar timing at five large radio telescopes: the 100-meter Effelsberg telescope in Germany; the 94m-equivalent Nan{\c c}ay Radio Telescope in France; the 94m-equivalent Westerbork Synthesis Radio Telescope (WSRT) in the Netherlands; the 76-m Lovell Telescope in the UK, and the 64-m newly-commissioned Sardinia Radio Telescope (SRT) in Italy.
\section{LEAP Project Overview}
As part of the EPTA's efforts to detect GWs, the Large European Array for Pulsars (LEAP) project conducts simultaneous observations of MSPs with the five 100-m class European telescopes, which allows us to add the pulsar data from each telescope coherently and improve TOA precision. By conducting simultaneous observations, the telescopes are combined into a tied array, effectively creating a single, fully-steerable telescope with the equivalent size of a 195-m dish, comparable to the illuminated aperture of the Arecibo dish but with a much larger range of declinations (-30 to 90 degrees). The large aperture improves TOA precision and enables the timing of weaker pulsars. In a tied-array telescope, the signals from different telescopes are corrected for differences in time delay, then added in phase. These time delays can be due to differences in geometry, observatory clocks, instruments or atmospheric conditions. Therefore, in addition to improving TOA precision, the LEAP project also helps to calibrate the instrument delays between telescopes and spot any anomalous offsets (``jumps") at one of the telescopes.
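The tied-array principle itself fits in a few lines. The schematic Python below (illustrative names; not the LEAP correlator) applies the residual delays and phases found by fringe fitting as phase ramps in the frequency domain and then sums the corrected voltages; summing voltages (coherent) rather than intensities (incoherent) is what makes the S/N grow linearly with the number of identical telescopes. The sign of the correction depends on how the delay is defined:
\begin{verbatim}
import numpy as np

def tied_array(voltages, delays, phases, dt):
    # voltages: list of complex baseband arrays, one per telescope
    # delays/phases: residual offsets from fringe fitting, per telescope
    n = min(v.size for v in voltages)
    f = np.fft.fftfreq(n, d=dt)
    total = np.zeros(n, dtype=complex)
    for v, tau, phi in zip(voltages, delays, phases):
        spec = np.fft.fft(v[:n])
        # a time delay is a linear phase ramp across frequency
        spec *= np.exp(2j * np.pi * f * tau + 1j * phi)
        total += np.fft.ifft(spec)
    return total   # coherent sum: S/N ~ N; summing |v|^2 gives ~ sqrt(N)
\end{verbatim}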
The LEAP project officially started in 2009, and routine, monthly observations started in early 2012 with three telescopes (Effelsberg, Lovell and WSRT). The Nan{\c c}ay telescope was added in the middle of 2012, while SRT joined with one 16-MHz sub-band in July 2013 and with the full bandwidth in March 2014 during the telescope's scientific validation phase. Each monthly session is 25 hours in length and typically monitors 22-23 MSPs simultaneously at all five telescopes; both pulsars and phase calibrators are observed. The observing is done at L-band, with a center frequency of 1396 MHz and a bandwidth of 128 MHz. Baseband data (corresponding to raw voltages) are recorded at each telescope and saved to disk. Disks are later shipped to Jodrell Bank Observatory for correlations. The LEAP pipeline designed to correlate the data was developed from scratch for the purpose of coherent addition. Calibrator and pulsar data are correlated to obtain time and phase offsets between telescopes (with the process of ``fringe finding"). Once these offsets are found, they are applied to the pulsar data, which are then added coherently to maximize S/N. Added LEAP baseband files are obtained, from which we generate archives and TOAs using standard pulsar software. This pipeline has many applications. It can for example be used to add other telescopes, such as in the case of the J1713+0747 global campaign (GiantLEAP). The LEAP project is effectively a precursor to SKA science and provides a TOA precision comparable to SKA Phase 1. A detailed overview of the LEAP project is found in Ref.~\refcite{LEAP}.
\section{LEAP Project Implementation at the Sardinia Radio Telescope}
The Sardinia Radio Telescope was completed in 2011 and is a fully-steerable 64-meter dish with three main focal positions, a wide planned frequency range (300 MHz to 100 GHz) and an active surface. It has been undergoing its scientific validation phase\cite{AV}. Its active participation in the VLBI and LEAP international collaborations is described in Ref.~\refcite{EVN}. SRT is the latest addition to the LEAP project. The addition of one telescope leads to a huge increase in sensitivity since the LEAP data are coherently added: in fact, the S/N increases linearly with the number of telescopes (for identical telescopes), as opposed to the square root of the number of telescopes in the case of incoherent addition.
The dual L/P receiver was installed at the primary focus of the telescope in the spring of 2013. A ROACH backend \footnote{https://casper.berkeley.edu/wiki/ROACH} was installed at the site in July 2013 and, using the PSRDADA software \footnote{http://psrdada.sourceforge.net}, was set up to record baseband data to disk. SRT joined LEAP for the first time in July 2013 for a single 16-MHz sub-band (1428-1444 MHz), and observations were repeated monthly. In February 2014, an 8-node CPU cluster was installed to allow the simultaneous recording of baseband data in 8 16-MHz channels (16 MHz per node), therefore covering the full LEAP band. Starting in March 2014, SRT joined monthly LEAP runs with the full bandwidth for 20 pulsars. Data acquisition during LEAP runs was subsequently fully automated.
The LEAP storage cluster, with 96 TB of storage, was installed in April 2014. It is able to store the 40 TB of LEAP data taken at SRT each month. After each run, data are copied from the CPU cluster to the storage cluster and disks are shipped to Jodrell Bank Observatory. Standard pulsar software (PSRCHIVE and TEMPO2) was upgraded to enable data analysis with SRT data (PSRCHIVE was upgraded as of August 2014 \footnote{thanks to Willem van Straten} and TEMPO2 as of May 2015 \footnote{thanks to George Hobbs}).
The first fringes using the LEAP correlating pipeline were found during a test between SRT and WSRT in May 2014, as seen in Fig. 1. The 5-telescope data are now correlated on a monthly basis at Jodrell Bank. The SRT data, however, lag in quality behind those of the other telescopes because of the presence of strong radio frequency interference (RFI). In particular, a malfunctioning military radar is severely impacting the quality of the entire LEAP bandwidth.
\vspace{0.4cm}
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{fringe}
\end{center}
\caption{First fringe found between SRT and WSRT using the LEAP correlation pipeline: 5 seconds of quasar 3C454 at 1420 MHz (May 2014)}
\end{figure}
\section {LEAP First Results}
The data acquisition hardware and software are now in place at all telescopes, including SRT, and all five telescopes are fully participating in monthly LEAP sessions. The LEAP reduction pipeline is now up-to-date and the data reduction is done on a month-to-month basis, while a backlog of existing data is being reduced in parallel.
For 100$\%$ coherence, we expect the S/N of the added LEAP data to be equal to the sum of the S/N of the individual telescopes. Coherence is generally achieved in the strongest pulsars such as J1022+1001, as shown in Fig. 2.
\begin{figure}
\begin{center}
\includegraphics[width=1.5in,angle=-90]{prof_pgplot}
\includegraphics[width=1.5in,angle=-90]{J1022+1001}
\vspace{0.2cm}
\caption{Coherence of added LEAP data. On the left, pulsar profiles of PSR J1022+1001 at MJD 56500 for individual telescope data and for the 4-telescope addition (Effelsberg, Lovell, Nan{\c c}ay, WSRT). On the right, S/N for individual telescopes and coherently-added data for different epochs from February 2012 until February 2015. The pulsar shows near perfect coherence, in that the LEAP S/N is roughly equal to the sum of the S/N of the individual telescopes.}
\end{center}
\end{figure}
A higher S/N in the pulsar profiles leads to an improvement in timing precision, i.e. we are better able to constrain pulsar models and ultimately, find gravitational waves. We show an example of increased timing precision in Fig. 3.
\begin{figure}
\begin{center}
\includegraphics[width=2in,angle=-90]{J1713_residuals}
\end{center}
\caption{Timing residuals of PSR J1713+0747 for single telescope data as well as coherently-added LEAP data over a five-year timespan. The data from the individual telescopes (including the LEAP data) have a residual rms of 0.25 $\mu s$, while the LEAP data alone has an rms of 0.18 $\mu s$.}
\end{figure}
By improving the S/N of pulsar profiles, LEAP improves pulsar timing precision (i.e. lowers the rms of timing residuals), which is necessary for extracting a gravitational wave signal. However, LEAP can also be used to study phase jitter (Liu et al, in preparation) or perform pulsar searching for weak pulsars with known positions. Details are found in Ref.~\refcite{LEAP}.
\section{Tests of Strong Gravity in Double Neutron Star Systems}
Double neutron star (DNS) systems provide great tests of strong gravity. In particular, DNS systems help us constrain post-Keplerian parameters, such as the rate of advance of periastron, the Einstein delay, the Shapiro delay, and the orbital period decay. The constraints on these parameters can be significantly improved by observing DNS systems with LEAP. Indeed, the coherent addition of pulsar data with LEAP significantly increases the S/N of pulsar observations, leading to a lower residual rms and better constraints on the pulsar model. We are planning a campaign to monitor DNS sources on a monthly basis during LEAP runs for the purpose of testing strong gravity (Perrodin et al, in preparation).
In known DNS systems, one can also search for pulsations from the companion neutron star, potentially discovering new double-pulsar systems, which would provide excellent tests of strong gravity. In fact, one of the many applications of LEAP is pulsar searching. Thanks to the increased sensitivity of the LEAP tied-array beam, LEAP could search for new double-pulsar systems. As a test, we performed a blind search on 5 minutes of coherently-added LEAP data of the known double neutron star PSR J1518+4904 with Effelsberg and WSRT, searching for the pulsation of the neutron star companion. 33 candidates with the same DM as J1518+4904 were found, but all turned out to be harmonics of the pulsar or RFI. J1518+4904 is observed monthly with LEAP, and we can continue to search for a pulsar companion in the added data.
\section{Conclusion}
PTAs aim to detect nanohertz GWs from supermassive black hole binaries. For this purpose, LEAP achieves higher sensitivity, leading to tighter constraints on a GW background from supermassive black hole binaries. The recent addition of SRT to LEAP increases the sensitivity of the ``LEAP telescope". LEAP can also be used to constrain the strong gravity environment of pulsars in double neutron star systems.
\section*{Acknowledgments}
The authors acknowledge the support of colleagues in the EPTA collaboration. The presented work has been funded by the ERC Advanced Grant ``LEAP", Grant Agreement Number 227947 (PI M. Kramer). The work at the Sardinia Radio Telescope, which is operated by the Istituto Nazionale di Astrofisica (INAF), was done during the scientific validation phase of the telescope. We thank the SRT Astrophysical Validation Team \footnote{http://www.srt.inaf.it/astronomers/astrophysical-validation-team/}.
\section{Introduction}
Twisted bilayer graphene (TBG) with flat bands has opened an avenue to explore abundant phenomena, for instance, localized and correlated states\cite{pnas2011moire,cao2018correlated,nature2019fulltb}, unconventional superconductivity\cite{cao2018unconventional,science2019tuning}, and electronic collective excitations\cite{plasexp2019cao,pnasplas2019intrinsically,Kuang2021}. Collective excited modes arising from quasi-localized states of flat bands, named flat-band plasmons, feature intrinsically undamped behaviors and constant energy dispersion\cite{pnasplas2019intrinsically,Kuang2021}, giving insight into the unconventional superconductivity\cite{prs2020superconductivity,lewandowski2020pairing,tbg2021plassuper} and linear resistivity experimentally observed in TBG\cite{marginal}. Recently, ultraflat bands have been detected in twisted bilayer transition metal dichalcogenides (tb-TMDs) over a wide range of angles\cite{tb-mos2018banddft, Zhan2020,flatband2019visualization,flatband2021tb-tmd,Zhang2020,flat2021wsesoc}, making tb-TMDs ideal platforms to extensively investigate many-body states\cite{tb-wse2020correlat,tb-wse2021manybody,tb-mos2021kpmodel, tb-wse2021manybody,collec2020tbmos2,exptb-wse2019corrlatesuper,exptb-wse2020correlate,exptb-wse2020flatband,exptb-wse2021flatband2} and optical excitons\cite{tb-tmd2021exciton,exptb-mos2021optical,tb-tmd2019excition}. For example, zero-resistance pockets are observed on doping away from half filling of the flat band in twisted bilayer WSe$_2$, which indicates a possible transition to a superconducting state\cite{exptb-wse2019corrlatesuper}. Theoretical studies establish that heterobilayer transition metal dichalcogenides are unique platforms to realize chiral superconductivity\cite{fuliang2021supertmd,chiral2021super}. Potential superconducting pairings arising from magnon and spin-valley fluctuations have been proposed in tb-TMDs\cite{magnon2022super, fuliang2021supertmd}.
Previous studies show that plasmon properties play a role in the pairing interaction responsible for superconductivity in TBG\cite{prs2020superconductivity, lewandowski2020pairing,tbg2021plassuper}. The plasmon-mediated superconductivity is determined by the ratio of \textcolor{black}{the} plasmon energy to \textcolor{black}{the} flat-band bandwidth. That is, with the flat-band plasmon energy scale comparable to the flat-band bandwidth\cite{pnasplas2019intrinsically, Kuang2021}, a superconducting state can be realized in TBG\cite{prs2020superconductivity, lewandowski2020pairing}. We therefore wonder whether plasmons in flat-band tb-TMDs possess properties similar to those in TBG and could contribute to pairings in tb-TMDs.
Up to now, the plasmonic properties of flat-band tb-TMDs are still not clear, which hinders us from further studying the plasmon-mediated superconductivity. The presence of flat bands in tb-{MoS$_2$} may result in plasmon properties different from those surveyed in monolayer\cite{1Lmos2013plasstauber,1Lmosacoustic2014,1lmos2016vallyplas,1Lmos2017plas_acoustic,1lmos2017nonlocalplas,review2019plas2D}, two-layer\cite{2Lmos2017plasacoustic}, few-layer\cite{few-Lmos2017plasexciton,1L-mos2020plasdft}, and one-sheet {MoS$_2$} systems\cite{sheetmos2014plas,1lmos2021plas-optic}, since the unique flat-band plasmons detected in TBG are distinct from those discovered in monolayer and bilayer graphene\cite{monogra2007dielectric,tonylow2014novel}. In practice, such \textcolor{black}{a} unique flat-band plasmon with undamped and quasi-flat characteristics can also lead to special applications such as a photon-based quantum information processing toolbox and a perfect lens\cite{pnasplas2019intrinsically,nano2016plas}.
All in all, the properties of plasmons in flat-band tb-TMDs deserve further investigation.
In this paper, we mainly focus on flat-band plasmons in twisted bilayer {MoS$_2$} (tb-{MoS$_2$}). Previous studies show that the tb-{MoS$_2$} systems are semiconductors with ultra-flat bands in the valence band maximum (VBM). The flat bands are discovered in tb-{MoS$_2$} with a wide range of twist angles and have narrower bandwidth at a smaller angle\cite{tb-mos2018banddft,flatband2021tb-tmd,isoband2020tbmos2}. After introducing hole doping in the VBM, we employ a full tight-binding (TB) model to investigate low-energy plasmons in tb-{MoS$_2$}. It is known that the bandwidth of \textcolor{black}{a} flat band obviously modulates plasmon properties in TBG\cite{nano2016plas,Kuang2021}. By changing the twist angle of tb-{MoS$_2$}, we can also study how the flatness of the flat band modifies the collective excitations. Moreover, the lattice relaxation significantly changes the electronic properties of tb-TMDs \textcolor{black}{with} small twist angles\cite{Zhan2020}. With lattice relaxation considered in tb-{MoS$_2$}, the band gap between the flat \textcolor{black}{VBM} and other valence bands disappears at large twist angles\cite{tb-mos2018banddft,flatband2021tb-tmd}. Will the absence of the band gap affect the flat-band plasmon? In principle, the polarization function can be calculated via the Lindhard function\cite{linear2005quantum}. With this method, we can investigate the effect of band cutoff\textcolor{black}{s} on the flat-band plasmon. For example, we can perform a one-band calculation where only flat-band intraband transitions contribute to the flat-band plasmon. Meanwhile, a full-band calculation can also be realized via a combination of the Kubo formula and the tight-binding propagation method (TBPM)\cite{yuan2010tipsi,yuan2011kubo}. In the full-band calculation, both intraband and interband polarizations are taken into account. Therefore, interband transition effects on plasmons \textcolor{black}{in tb-{MoS$_2$}} are investigated by comparing the modes obtained from the one-band and full-band calculations. Our work could serve as a template for studying flat-band plasmons in other twisted 2D semiconductors.
This paper is organized as follows. In Sec. II, the tight-binding model and computational methods are introduced. In Sec. III and Sec. IV, flat-band plasmons are explicitly studied in both relaxed (consider the atomic relaxation) and rigid (without \textcolor{black}{the} atomic relaxation) tb-{MoS$_2$}, respectively. In Sec. V, we pay attention to the effects of band cutoff\textcolor{black}{s} and chemical potentials on plasmons. Finally, we give a summary and discussion of our work.
\section{Numerical methods}
\label{sec2}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{structure2.pdf}
\caption{(a) Top view of the atomic structure of $3.5^\circ$ tb-{MoS$_2$}. (b) The first BZ with high symmetry points for the hexagonal lattice of tb-{MoS$_2$}.}
\label{fig:structre}
\end{figure}
\subsection{Tight-binding model}
We construct atomic structures of tb-{MoS$_2$} with a commensurate approach used in building TBG structures\cite{structure2016universal,strustrain2018prl}. The twisted structures are generated by starting from a 2H stacking ($\theta = 0^\circ$), which has the Mo (S) atom in the top layer directly above the S (Mo) atom in the bottom layer, and then rotating the layers about an origin located at an atom site\cite{Zhang2020}. The atomic structure of a moir\'e pattern of tb-{MoS$_2$} with $\theta=3.5^\circ$ is shown in Fig. \ref{fig:structre}(a), which contains 1626 atoms. In this paper, we mainly focus on plasmonic properties of tb-{MoS$_2$} with $\theta=3.5^\circ$ and $\theta=5.1^\circ$. The full atomic relaxations are simulated via \textcolor{black}{the} Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)\cite{lammps1995computational} with the intralayer Stillinger-Weber potential\cite{Jiang_2015} and the interlayer Lennard-Jones potential\cite{lammps-lj}. The relaxation effects on flat bands of tb-{MoS$_2$} have been investigated in previous works\cite{tb-mos2018banddft,Zhan2020,flatband2021tb-tmd}.
Here, we employ an accurate multi-orbital TB model to investigate the plasmons of tb-{MoS$_2$}. In this TB model, one unit cell of monolayer transition metal dichalcogenides comprises 11 orbitals: 5 d orbitals from one Mo atom and 6 p orbitals from two S atoms\cite{Fang2015}. The total Hamiltonian of twisted bilayer {MoS$_2$} can be written as
\begin{equation}
\hat H = \hat H_1^{(1L)}+\hat H_2^{(1L)}+\hat H_{int}^{(2L)},
\label{hal}
\end{equation}
where $ \hat H_{1(2)}^{(1L)}$ is the eleven-orbital single layer Hamiltonian, which contains the on-site energy, the hopping terms between orbitals of the same type at first-neighbor positions, and the hopping terms between orbitals of different type at first- and second-neighbor positions. The term $\hat H_{int}^{(2L)}$ is the interlayer interaction expressed as
\begin{eqnarray}
\hat H_{int}^{(2L)} = \displaystyle\sum_{p_i',\mathbf r_2,p_j,\mathbf r_1}\hat \phi_{2,p_i'}^\dagger(\mathbf r_2)t_{p_i',p_j}^{(LL)}(\mathbf r_2-\mathbf r_1)\hat \phi_{1,p_j}(\mathbf r_1) + \mathrm{H. c.},\nonumber\\
\end{eqnarray}
where $\hat \phi_{i,p_j}$ is the $p_j$ orbital basis of the $i$-th monolayer. The interlayer hoppings are expressed in the Slater-Koster (SK) form in terms of interatomic distance and direction as\cite{SK1954simplified}
\begin{equation}
t_{p_i',p_j}^{(LL)}(\mathbf r) = (V_{pp,\sigma}(r)-V_{pp,\pi}(r))\frac{r_ir_j}{r^2}+V_{pp,\pi}(r)\delta_{i,j},
\end{equation}
where $r=|\mathbf r|$ and the distance-dependent SK parameter is
\begin{equation}
V_{pp,b}=\nu_be^{[-(r/R_b)^{\eta_b}]},
\label{inter}
\end{equation}
where $b=\sigma,\pi$, and $\nu_b$, $R_b$, and $\eta_b$ are constants taken from Ref. \onlinecite{Fang2015}. In this paper, the interlayer interactions in twisted bilayer {MoS$_2$} are included in the TB Hamiltonian by adding hoppings between p orbitals of S atoms in the top and bottom layers with a distance smaller than 5 \AA. A recent study shows that such a first-neighbor interlayer hopping approximation is sufficiently accurate\cite{isoband2020tbmos2}. When we relax the system, atoms move away from their equilibrium positions in both in-plane and out-of-plane \textcolor{black}{directions}. As a consequence, we also need to modify the intralayer hoppings in Eq. (\ref{hal}). The intralayer hoppings in relaxed samples take the form\cite{model2015intrahopping}
\begin{equation}
t_{ij,\mu\nu}^{intra}(\mathbf r_{ij})=t_{ij,\mu\nu}^{intra}(\mathbf r_{ij}^0)\bigg(1-\Lambda_{ij,\mu\nu}\frac{|\mathbf r_{ij}-\mathbf r_{ij}^0|}{|\mathbf r_{ij}^0|}\bigg)
\end{equation}
where $t_{ij,\mu\nu}^{intra}$ is the intralayer hopping between the $\mu$ orbital of the $i$ atom and the $\nu$ orbital of the $j$ atom, $\mathbf r_{ij}^0$ and $\mathbf r_{ij}$ are the distances between the $i$ and $j$ atoms in the equilibrium and relaxed cases, and $\Lambda_{ij,\mu\nu}$ is the dimensionless bond-resolved local electron-phonon coupling. We assume that $\Lambda_{ij,\mu\nu}=3,4,5$ for the S-S $pp$, S-Mo $pd$, and Mo-Mo $dd$ hybridizations, respectively\cite{model2015intrahopping}. Note that a large Hamiltonian matrix is generated for a rigid or relaxed tb-{MoS$_2$} supercell; for example, the Hamiltonian matrix of $3.5^\circ$ tb-{MoS$_2$} has a dimension of more than five thousand. Consequently, it is tough to diagonalize such a large matrix directly. Next, we introduce the numerical methods used to explore plasmon properties in \textcolor{black}{the} hole-doped tb-{MoS$_2$}.
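For concreteness, the interlayer SK hopping $t^{(LL)}_{p_i',p_j}$ with the distance dependence of Eq. (\ref{inter}) can be sketched in Python as follows; the parameter values below are placeholders, the actual $\nu_b$, $R_b$, and $\eta_b$ being tabulated in Ref. \onlinecite{Fang2015}:
\begin{verbatim}
import numpy as np

# Placeholder SK parameters (nu, R, eta); the real values are
# taken from Fang et al., Ref. [Fang2015].
SK = {"sigma": (1.0, 3.0, 4.0), "pi": (-0.3, 3.0, 4.0)}

def V_pp(bond, r):
    nu, R, eta = SK[bond]
    return nu * np.exp(-(r / R) ** eta)          # Eq. (4)

def t_pp_interlayer(rvec, i, j, cutoff=5.0):
    # Hopping between p_i and p_j orbitals of S atoms in opposite
    # layers, Eq. (3); rvec connects the two sites, and i, j in
    # {0, 1, 2} label the Cartesian components x, y, z.
    r = np.linalg.norm(rvec)
    if r > cutoff:                               # 5 Angstrom truncation
        return 0.0
    Vs, Vp = V_pp("sigma", r), V_pp("pi", r)
    return (Vs - Vp) * rvec[i] * rvec[j] / r**2 + Vp * (i == j)
\end{verbatim}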
\subsection{Plasmon}
Polarization functions can be obtained from the Kubo formula\cite{kubo1957statistical}
\begin{equation}\label{kubo}
\begin{aligned}
\Pi_K(\mathbf{q},\omega)=& -\frac{2}{S}\int_{0}^{\infty}\mathrm dt\; e^{i\omega t}\mathrm{Im}\langle \varphi| n_F(H)e^{iHt}\\
&\times\rho(\mathbf{q})e^{-iHt}[1-n_F(H)]\rho(-\mathbf{q})|\varphi\rangle,
\end{aligned}
\end{equation}
where $n_F(H)=\frac{1}{e^{\beta (H-\mu)}+1}$ is the Fermi-Dirac distribution operator, $\beta = \frac{1}{k_BT}$ with $T$ the temperature, $k_B$ the Boltzmann constant, and $\mu$ the chemical potential. $\rho(\mathbf{q})=\sum_{i}c_i^{\dagger}c_i$exp$(i\mathbf{q}\cdot\mathbf{r}_i)$ is the density operator, $\mathbf{r}_i$ is the position of the $i$-th orbital, and $S$ is the area of a unit cell. As we mentioned before, each unit cell of tb-TMDs contains thousands of orbitals, which makes the diagonalization of the Hamiltonian very challenging. In this paper, we calculate the polarization function by combining the Kubo formula with a TBPM method. The TBPM is based on the numerical solution of the time-dependent Schr\"{o}dinger equation and requires no diagonalization\cite{yuan2010tipsi}. By using the TBPM method, it is possible to obtain the electronic properties of large-scale systems, for instance, the \textcolor{black}{density of states (DOS)} of TBG with rotation angle $\theta$ down to $0.48^\circ$\cite{zhan2020large} and of the dodecagonal graphene quasicrystal\cite{yu2019dodecagonal,dos_method}. The key idea in TBPM is to perform \textcolor{black}{an} average over initial states $|\varphi\rangle$, each a random superposition of all basis states\cite{yuan2010tipsi, TBPM2000fast}
\begin{equation}\label{random}
|\varphi\rangle = \sum_{i}a_i|i\rangle,
\end{equation}
where ${|i\rangle}$ are all basis states in real space and $a_i$ are random complex numbers normalized as $\sum_{i}|a_i|^2 = 1$. We introduce the time evolution of two wave functions
\begin{eqnarray}
|\varphi_1(\mathbf{q},t)\rangle&&=e^{-iHt}[1-n_F(H)]\rho(-\mathbf{q})|\varphi\rangle,\\\nonumber
|\varphi_2(t)\rangle&&=e^{-iHt}n_F(H)|\varphi\rangle.
\end{eqnarray}
Then the real and imaginary parts of the dynamical polarization are
\begin{eqnarray}\label{tbpm}
\mathrm{Re}\Pi(\textbf{q}, \omega)&&=-\frac{2}{S}\int_{0}^{\infty}\mathrm dt\cos(\omega t)\mathrm{Im}\langle\varphi_2(t)|\rho(\mathbf{q})|\varphi_1(t)\rangle,\nonumber\\\\
\mathrm{Im}\Pi(\textbf{q}, \omega)&&=-\frac{2}{S}\int_{0}^{\infty}\mathrm dt\sin(\omega t)\mathrm{Im}\langle\varphi_2(t)|\rho(\mathbf{q})|\varphi_1(t)\rangle.\nonumber
\end{eqnarray}
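A compact illustration of Eqs. (\ref{kubo})-(\ref{tbpm}) in Python is given below. For readability, the Fermi-Dirac and time-evolution operators are evaluated here by dense diagonalization of a small Hamiltonian, whereas actual TBPM codes apply them to the random state through Chebyshev polynomial expansions precisely to avoid this diagonalization; the averaging over several random states and the $1/S$ area factor are omitted, and all names are illustrative:
\begin{verbatim}
import numpy as np

def polarization_kubo(H, pos, q, omegas, mu, beta, tmax, nt):
    E, U = np.linalg.eigh(H)                  # small-H stand-in for TBPM
    nF = 1.0 / (np.exp(beta * (E - mu)) + 1.0)
    rho = U.conj().T @ np.diag(np.exp(1j * pos @ q)) @ U  # rho(q)
    phi = np.random.randn(len(E)) + 1j * np.random.randn(len(E))
    c = U.conj().T @ (phi / np.linalg.norm(phi))  # random state, Eq. (7)
    ts = np.linspace(0.0, tmax, nt)
    corr = np.empty(nt)
    for k, t in enumerate(ts):
        ph = np.exp(-1j * E * t)
        psi1 = ph * ((1 - nF) * (rho.conj().T @ c))  # |phi_1(q,t)>
        psi2 = ph * (nF * c)                         # |phi_2(t)>
        corr[k] = (psi2.conj() @ (rho @ psi1)).imag
    # Eq. (9): cosine and sine transforms combined into exp(i w t)
    return np.array([-2.0 * np.trapz(np.exp(1j * w * ts) * corr, ts)
                     for w in omegas])
\end{verbatim}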
The dynamical polarization function can be obtained from the Lindhard function as well\cite{linear2005quantum}
\begin{equation}\label{Lindhard}
\begin{aligned}
\Pi(\textbf{q}, \omega) = &\frac{g_s}{(2\pi)^2}\int_\mathrm{BZ}d^2\textbf{k}\sum_{l,l'}\frac{n_\mathrm{F}(E_{\mathbf{k'} l'}) - n_\mathrm{F}(E_{\mathbf{k}l})}
{E_{\mathbf{k'} l'} - E_{\mathbf{k}l}-\omega-\mathrm{i}\delta}\\
& \times |\langle \mathbf{k'} l'|\mathrm e^{\mathrm{i}\mathbf{q\cdot r}}|\mathbf{k}l \rangle |^2,
\end{aligned}
\end{equation}
where $|\mathbf{k}l \rangle$ and $E_{\mathbf{k}l}$ are eigenstates and eigenvalues of the TB Hamiltonian in Eq. (\ref{hal}), respectively, with $\mathit{l}$ and $\mathit{l}'$ being band indices, $\mathbf{k'}$=$\mathbf{k}$+$\mathbf{q}$, $\delta \rightarrow 0^+$. Generally, the integral is taken over the whole first Brillouin zone (BZ) shown in Fig. \ref{fig:structre} (b). It is convenient to analyze the contribution of band transitions to the polarization function as Eq. (\ref{Lindhard}) can be written as the sum of two parts
\begin{equation}\label{intrainter}
\Pi(\textbf{q}, \omega) = \Pi_{intra}(\textbf{q}, \omega) + \Pi_{inter}(\textbf{q}, \omega),
\end{equation}
where $\Pi_{intra}(\textbf{q}, \omega)$ and $\Pi_{inter}(\textbf{q}, \omega)$ denote intraband and interband contributions corresponding to $\mathit{l} = \mathit{l}'$ and $\mathit{l} \neq \mathit{l}'$ in Eq. (\ref{Lindhard}), respectively. It is hard to sum over all bands obtained by diagonalizing the TB Hamiltonian in Eq. (\ref{hal}) of a supercell that contains thousands of atoms. Therefore, we use Eq. (\ref{tbpm}) to perform full-band calculations. The validity of Eq. (\ref{tbpm}) has been verified by comparing the polarization function obtained from Eq. (\ref{tbpm}) with that from a full-band calculation via Eq. (\ref{Lindhard})\cite{yuan2011kubo, Kuang2021}.
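For small systems the Lindhard sum of Eq. (\ref{Lindhard}) can also be evaluated directly. The Python sketch below (with schematic prefactors, an illustrative Bloch-Hamiltonian callable \texttt{Hk}, and orbital positions \texttt{tau} entering the tight-binding matrix element) makes explicit where a band cutoff enters, namely in the restriction of the band indices $l, l'$ that are kept:
\begin{verbatim}
import numpy as np

def lindhard(Hk, tau, kgrid, q, omegas, mu, beta, delta=1e-3, bands=None):
    # Hk(k): Bloch Hamiltonian at momentum k; tau: orbital positions
    # within the cell, giving <k+q,l'|e^{iq.r}|k,l> in tight binding.
    Pi = np.zeros(len(omegas), dtype=complex)
    form = np.exp(1j * tau @ q)
    for k in kgrid:
        Ek, Uk = np.linalg.eigh(Hk(k))
        Eq_, Uq = np.linalg.eigh(Hk(k + q))
        if bands is not None:               # band cutoff (e.g. 1b, 20b)
            Ek, Uk = Ek[bands], Uk[:, bands]
            Eq_, Uq = Eq_[bands], Uq[:, bands]
        M = np.abs(Uq.conj().T @ (form[:, None] * Uk)) ** 2
        nk = 1.0 / (np.exp(beta * (Ek - mu)) + 1.0)
        nq = 1.0 / (np.exp(beta * (Eq_ - mu)) + 1.0)
        dE = Eq_[:, None] - Ek[None, :]     # E_{k+q,l'} - E_{k,l}
        dn = nq[:, None] - nk[None, :]
        for i, w in enumerate(omegas):
            Pi[i] += np.sum(dn * M / (dE - w - 1j * delta))
    return 2.0 * Pi / len(kgrid)            # g_s = 2; BZ measure schematic
\end{verbatim}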
With the polarization function acquired from either the Kubo formula in Eq. (\ref{tbpm}) or the Lindhard function in Eq. (\ref{Lindhard}), the dielectric function, which describes the electronic response to an external electric perturbation, can be written within \textcolor{black}{the} random phase approximation (RPA) as
\begin{equation}
\varepsilon(\textbf{q}, \omega) = 1- V(q)\Pi(\textbf{q}, \omega),
\label{dielectric}
\end{equation}
in which $V(q)=2\pi e^2/(\varepsilon_\mathrm{B} q)$ is the Fourier component of the two-dimensional Coulomb interaction, with $\varepsilon_\mathrm{B}$ being the background dielectric constant. In our calculations, we set $\varepsilon_\mathrm{B}=3.03$ to represent the bulk dielectric constant of hexagonal boron nitride (hBN)\cite{hbn2018dielectric}. \textcolor{black}{The} electron energy loss (EL) function can be expressed as
\begin{equation}
S(\mathbf q, \omega)=-\mathrm {Im}(1/\varepsilon(\mathbf q, \omega)),
\label{loss}
\end{equation}
which is an experimentally observable quantity to reflect the electronic response intensity. We can obtain \textcolor{black}{the} intraband EL function ($S_{intra}(\textbf{q}, \omega)$) or interband EL function ($S_{inter}(\textbf{q}, \omega)$) by only taking $\Pi_{intra}(\textbf{q}, \omega)$ or $\Pi_{inter}(\textbf{q}, \omega)$ into account in Eq. (\ref{intrainter}). In this way, we can analyze intraband and interband transition contributions to \textcolor{black}{the} EL function by comparing $S_{intra}(\textbf{q}, \omega)$ and $S_{inter}(\textbf{q}, \omega)$ to $S(\textbf{q}, \omega)$, respectively. A plasmon mode with frequency $\omega_p$ and wave vector $\textbf{q}$ is well defined when a peak exists in \textcolor{black}{the} EL loss function at $\omega_p$.
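Given a polarization function from either route, the RPA dielectric function of Eq. (\ref{dielectric}) and the EL function of Eq. (\ref{loss}) follow in a few lines. In the sketch below, q is assumed to be in nm$^{-1}$ and $\Pi$ in eV$^{-1}$nm$^{-2}$, with $e^2/(4\pi\epsilon_0) \approx 1.44$ eV$\,$nm:
\begin{verbatim}
import numpy as np

E2 = 1.44  # e^2/(4 pi eps_0) in eV nm

def loss_function(Pi, q, eps_b=3.03):
    Vq = 2.0 * np.pi * E2 / (eps_b * q)   # 2D Coulomb interaction V(q)
    eps = 1.0 - Vq * Pi                   # RPA dielectric, Eq. (11)
    return -np.imag(1.0 / eps)            # EL function, Eq. (12)
\end{verbatim}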
\subsection{Density of states}
The density of states is calculated with TBPM as\cite{yuan2010tipsi, TBPM2000fast}
\begin{equation}\label{dos}
D(E)=\lim\limits_{N \to \infty}\frac{1}{2\pi N}\displaystyle\sum_{p=1}^{N}\int_{-\infty}^{\infty}e^{iEt}\langle\varphi_p|e^{-iHt}|\varphi_p\rangle dt,
\end{equation}
where \textit N is the total number of initial states. In our calculations, the convergence of electronic properties can be guaranteed by utilizing a large enough system with more than 10 million atoms\cite{yuan2010tipsi}.
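Equation (\ref{dos}) translates into code as a Fourier transform of the correlation function $\langle\varphi|e^{-iHt}|\varphi\rangle$. The sketch below (Python/SciPy, illustrative parameters) uses exact sparse time evolution in place of the Chebyshev expansion employed in production TBPM runs, and it omits the window function that suppresses artifacts from the finite-time cutoff:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import expm_multiply

def dos_tbpm(H, energies, tmax, nt, n_random=4):
    dim = H.shape[0]
    ts = np.linspace(0.0, tmax, nt)
    corr = np.zeros(nt, dtype=complex)
    for _ in range(n_random):              # average over random states
        phi = np.random.randn(dim) + 1j * np.random.randn(dim)
        phi /= np.linalg.norm(phi)
        psi = expm_multiply(-1j * H, phi, start=0.0, stop=tmax,
                            num=nt, endpoint=True)  # exp(-iHt)|phi>
        corr += psi @ phi.conj()           # <phi|exp(-iHt)|phi>
    corr /= n_random
    # full-line integral of Eq. (13) written as 2 Re of the half-line one
    return np.array([np.trapz(np.exp(1j * E * ts) * corr, ts).real / np.pi
                     for E in energies])
\end{verbatim}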
\section{flat-band plasmons in relaxed tb-{MoS$_2$} with different twist angles}\label{relax3.5}
\begin{figure*}[htp]
\includegraphics[width=1\textwidth]{relaxplas.pdf}
\caption{EL function $S(\textbf{q},\omega)$ intensity plots of relaxed tb-{MoS$_2$} with (a)-(c) $\theta=5.1^\circ$ and (e)-(g) $\theta=3.5^\circ$ under different band cutoff calculations. The particle-hole (p-h) continuum region is marked with ``p-h continuum" and its boundaries with green solid lines. Band structures and DOS for (d) $5.1^\circ$ and (h) $3.5^\circ$ are shown over a small energy range. The dashed lines crossing the flat bands in (d) and (h) denote the chemical potentials, $\mu$= -10.0 meV and $\mu$= -2.4 meV, respectively. Here, the Fermi energy zero is set as the valence band maximum (VBM). In (a) and (e), the ``1b-cut" calculation means only the single doped flat band (blue line) is included, with $l=l'=1$ in Eq. (\ref{Lindhard}), while the ``20b-cut" calculation sums over 40 bands near zero energy (20 conduction bands and 20 valence bands) in (b) and (f). ``Full-band" calculations are performed via Eq. (\ref{tbpm}) to consider all the bands in (c) and (g). The momentum q is given in units of $|\Gamma M|$, the length from $\Gamma$ to the M point denoted in Fig. \ref{fig:structre}; the maximum value q = 1 thus corresponds to $|\Gamma M|$, and the minimum value is 0.0625$|\Gamma M|$. The temperature is set to 1 K.}
\label{fig: relaxplas}
\end{figure*}
In this section, we focus on flat-band plasmons in \textcolor{black}{the} relaxed hole-doped tb-{MoS$_2$}. A flat band (blue line) appears in the VBM at both $5.1^\circ$ and $3.5^\circ$, as shown in Figs. \ref{fig: relaxplas}(d) and \ref{fig: relaxplas}(h). The bandwidth $W$ of the flat band (\textcolor{black}{the} energy difference between the $\Gamma$ and K points of the BZ) in $3.5^\circ$ ($W = 5.9$ meV) is much smaller than the one in $5.1^\circ$ ($W = 16.2$ meV). The density of states shows high peaks, the van Hove singularities (VHS), at the flat-band energies. The doping levels with $\mu=-10.0$ meV and -2.4 meV in Figs. \ref{fig: relaxplas}(d) and (h) correspond to near half filling of the flat bands, respectively. In the EL function ($S(\textbf{q},\omega)$) spectra, particle-hole continuum ($\mathrm{Im}\Pi(\textbf{q}, \omega)<0$) regions are labeled by ``p-h continuum'' with boundaries ($\mathrm{Im}\Pi(\textbf{q}, \omega)=0$) illustrated by green solid lines (details in Appendix \ref{app-dyn}). The first and second rows in Fig. \ref{fig: relaxplas} show the results of tb-{MoS$_2$} with $5.1^\circ$ and $3.5^\circ$, respectively. The results in Figs. \ref{fig: relaxplas}(a)-(b) and \ref{fig: relaxplas}(e)-(f) are obtained from the Lindhard function in Eq. (\ref{Lindhard}). The full-band calculation results in Figs. \ref{fig: relaxplas}(c) and \ref{fig: relaxplas}(g) are obtained via the Kubo formula in Eq. (\ref{tbpm}). The spectra with notation ``1b-cut" are calculated by only considering the single doped flat band, and the spectra with notation ``20b-cut'' are obtained by summing over 40 bands near zero energy (20 conduction bands (CBs) and 20 valence bands (VBs)) in Eq. (\ref{Lindhard}).
In the 1b-cut calculation, only intraband transitions with possible transition energies $\omega$ ($0< \omega < W$) are taken into account, whereas interband transitions between the doped flat band and other bands are neglected in Eq. (\ref{intrainter}). In this case, as shown in Figs. \ref{fig: relaxplas}(a) and \ref{fig: relaxplas}(e), the plasmons show quasi-flat dispersions and are free from damping into electron-hole pairs, as they lie above \textcolor{black}{the} p-h continuum region. Such a unique dispersion can be well understood via a finite-bandwidth two-dimensional electron gas (FBW-2DEG) model (details in Appendix~\ref{app-Pi}). In the long wavelength limit, $q < 0.25|\Gamma M|$ and $q < 0.45|\Gamma M|$ in Figs. \ref{fig: relaxplas}(a) and \ref{fig: relaxplas}(e), respectively, the plasmon dispersion can be well fitted with an ideal 2DEG model\cite{plas2020isolate}
\begin{equation}\label{2DEG}
\omega_{pl} = \sqrt{\frac{2\pi n e^2q}{m\varepsilon_\mathrm{B}}},
\end{equation}
where $n$ is the charge density corresponding to the chemical potential $\mu$. The effective mass $m$ of the flat band at $\mu$ is obtained by fitting the band from $\Gamma$ to $M$ as a parabolic band. Then we obtain $m/m_e \approx -3.24$ at $\mu = -10.0$ meV and $m/m_e \approx -4.17$ at $\mu = -2.4$ meV. The dashed curves ($\omega_{pl} = a\sqrt{q}$) with \textcolor{black}{the} coefficients $a_{5.1}^{1b} = 92.1$ meV in Figs. \ref{fig: relaxplas}(a)-(b) and $a_{3.5}^{1b} = 33.4$ meV in Figs. \ref{fig: relaxplas}(e)-(f) are obtained via Eq. (\ref{2DEG}).
When $q > 0.25|\Gamma M|$ and $q > 0.45|\Gamma M|$ in Figs. \ref{fig: relaxplas}(a) and \ref{fig: relaxplas}(e), respectively, plasmons deviate from the $\sqrt{q}$ relation and show slightly negative dispersions. The reason is that the flat bands in Fig. \ref{fig: relaxplas} are not infinite parabolic bands but have finite bandwidths. The slightly negative dispersion can be well fitted by an analytical plasmon energy expression in the FBW-2DEG model\cite{plas2020isolate}
\begin{equation}\label{neg}
\overline{\omega}_p = \sqrt{\frac{\mu(2E_c -\mu)}{\text{exp}(q/q_{TF}) -1} + E_c^2},
\end{equation}
with $|E_c|$ an effective finite bandwidth of the flat band and $q_{TF}$ the two-dimensional Thomas-Fermi vector, given as
\begin{equation}\label{TFvector}
q_{TF} = \frac{2\pi e^2}{\varepsilon_\mathrm{B}} D(\mu),
\end{equation}
where $D(\mu)$ is the DOS value at $\mu$. The calculated $q_{TF}= 14.71$ nm$^{-1}$ with $\mu = -10.0$ meV at $5.1^\circ$ is very close to the value 14.77 nm$^{-1}$ with $\mu = -2.4$ meV at $3.5^\circ$. The two curves (dot-dashed lines) in Figs. \ref{fig: relaxplas}(a) and \ref{fig: relaxplas}(e) are obtained by setting $E_c = \mu = -10.0$ meV at $5.1^\circ$ and $E_c = E_M = -3.9$ meV at $3.5^\circ$($E_M$ is \textcolor{black}{the} flat-band energy at M point), respectively. Here, plasmon modes in \textcolor{black}{the} 1b-cut calculations are governed simply by intraband transitions inside the flat band, verifying that the single flat band guarantees undamped quasi-flat plasmons in the highly simplified one-band model. Moreover, the quasi-flat plasmon energy in Fig. \ref{fig: relaxplas}(e) is lower than that in Fig. \ref{fig: relaxplas}(a) due to the decrease of the flat-band width with reduced twist angles.
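For reference, the three analytic expressions used in the fits above, Eqs. (\ref{2DEG})-(\ref{TFvector}), can be evaluated as follows (units as in the sketch after Eq. (\ref{loss}), with the electron mass entering as $m_e/\hbar^2 \approx 13.1$ eV$^{-1}$nm$^{-2}$):
\begin{verbatim}
import numpy as np

E2, ME = 1.44, 13.1   # e^2 in eV nm; m_e/hbar^2 in 1/(eV nm^2)

def omega_2deg(q, n, m_ratio, eps_b=3.03):
    # classical 2DEG plasmon, Eq. (14); n in nm^-2, q in nm^-1
    return np.sqrt(2.0 * np.pi * n * E2 * q / (abs(m_ratio) * ME * eps_b))

def q_tf(dos_mu, eps_b=3.03):
    # Thomas-Fermi vector, Eq. (16); dos_mu = D(mu) in 1/(eV nm^2)
    return 2.0 * np.pi * E2 * dos_mu / eps_b

def omega_fbw(q, mu, Ec, qtf):
    # finite-bandwidth 2DEG plasmon, Eq. (15); flattens to |Ec| at large q
    return np.sqrt(mu * (2.0 * Ec - mu) / np.expm1(q / qtf) + Ec**2)
\end{verbatim}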
As seen from the band structures in Figs. \ref{fig: relaxplas}(d) and \ref{fig: relaxplas}(h), it is obvious that the doped flat bands are not completely separated from other VBs. In principle, both the transitions within flat bands and the effects of interband transitions on the flat-band plasmons should be considered. When 40 bands are considered in the polarization function (20b-cut calculation), plasmons with energy $\omega_p^{20b}$ exhibit an $a^{20b}\sqrt{q}$ dispersion (black solid lines) in Figs. \ref{fig: relaxplas}(b) and \ref{fig: relaxplas}(f), and are away from the Landau damping regions. The coefficients $a_{5.1}^{20b}=98.3$ meV and $a_{3.5}^{20b}=38.4$ meV slightly exceed those in the 2DEG model \textcolor{black}{(dashed lines)}. Comparing the plasmon modes in Figs. \ref{fig: relaxplas}(b) and \ref{fig: relaxplas}(f) to those in Figs. \ref{fig: relaxplas}(a) and \ref{fig: relaxplas}(e), respectively, the effect of interband transitions on the doped flat-band plasmons is significant. That is, the inclusion of the interband transitions changes the quasi-flat dispersion of plasmon modes into a $\sqrt{q}$ dispersion and dramatically enhances the energy of plasmons at \textcolor{black}{a} larger q.
Previous works show that screening of high-energy interband transitions will decrease plasmon energies in monolayer and bilayer TMDs\cite{Louies2020universal,Nbse2012bandlfeffect,1l2lNbSe2013dftplas}. In order to figure out how \textcolor{black}{interband transitions will} modulate flat-band plasmons, in the 20b-cut calculation we compare EL functions $S$ (red lines) with intraband EL functions $S_{intra}$ (blue lines), and interband EL functions $S_{inter}$ (black lines) at sampled momenta $q$ for relaxed $3.5^\circ$ tb-{MoS$_2$} in Fig. \ref{fig:relaxloss}(a). The plasmon modes extracted from $S$, $S_{intra}$, and $S_{inter}$ are named as $p$, $p_{intra}$, and $p_{inter}$, respectively.
For a small q = 0.0625$|\Gamma M|$, the plasmon mode $p$ is overlapped with the intraband plasmon mode $p_{intra}$, which means that \textcolor{black}{the EL function $S$} is \textcolor{black}{solely dominated} by intraband transitions. For a large q =1.0$|\Gamma M|$, \textcolor{black}{the} EL function $S$ has a similar shape with $S_{inter}$, implying that $p$ is mainly contributed by the interband plasmon mode $p_{inter}$. For q from 0.25$|\Gamma M|$ to 0.75$|\Gamma M|$, plasmon modes $p$ originate from both intraband and interband transitions and are affected by the interplay between $p_{intra}$ and $p_{inter}$. For example, when q = 0.5$|\Gamma M|$, the non-zero parts of $S_{intra}$ and $S_{inter}$ are overlapped in an energy range ($0 < \omega < 50$ meV). The interplay of $p_{inter}$ and $p_{intra}$ yields \textcolor{black}{a mode with larger energy} (blue and black arrows) in Fig. \ref{fig:relaxloss}(a). This kind of interplay in relaxed tb-{MoS$_2$} is due to the fact that the flat band is not separated from other VBs, so \textcolor{black}{the} intraband transition energy can be overlapped with \textcolor{black}{the} interband transition energy in an energy range $0<\omega<W-|\mu|$.
\begin{figure}[btp]
\includegraphics[width=0.5\textwidth]{5qloss.pdf}
\caption{EL functions $S$ (red solid lines), intraband EL functions $S_{intra}$ (blue solid lines) and interband EL functions $S_{inter}$ (black solid lines) at five sampled momenta q for (a) relaxed and (b) rigid tb-{MoS$_2$} with $\theta=3.5^\circ$ under 20b-cut calculation. $S$ are contributed by both intraband and interband transitions, while $S_{intra}$ and $S_{inter}$ are calculated by only taking intraband and interband transitions into account, respectively. Intraband and interband plasmon modes are marked by $p_{intra}$ (blue dashed lines) and $p_{inter}$ (black dashed lines), respectively. The notations $p$, $p_1$ and $p_2$ (red dashed line) represent the plasmon mode extracted from EL functions $S$. EL loss functions are shifted vertically for clarity and their zeros are denoted by gray dashed lines. }
\label{fig:relaxloss}
\end{figure}
\begin{figure*}[htp!]
\includegraphics[width=1\textwidth]{35rigidplas.pdf}
\caption{EL function intensity plots for rigid $3.5^\circ$ tb-{MoS$_2$} with chemical potential $\mu$= -2.0 meV (dashed line in (d)) and temperature T = 1 K, under (a) 1b-cut, (b) 20b-cut, and (c) full-band calculations. The p-h continuum regions with boundaries (green lines) are illustrated as well. (d) Band structure and DOS for rigid $3.5^\circ$ MoS$_2$. A band gap (shaded region) emerges between the doped flat VB (blue line) and other remote VBs.}
\label{fig: rigidplas}
\end{figure*}
We further investigate how interband transitions from much higher-energy bands to the doped flat band affect the plasmonic properties. \textcolor{black}{As seen in Figs. \ref{fig: relaxplas}(c) and \ref{fig: relaxplas}(g),} \textcolor{black}{the} plasmon modes have lower energies with fitted $\sqrt{q}$ relations (solid lines) and tend to decay into p-h pairs at large momenta. The plasmon modes marked by black lines tend toward a linear dispersion at \textcolor{black}{a} larger q. Such a tendency is caused by the screening effect of high-energy interband transitions on plasmons, as discussed in previous works\cite{Louies2020universal,Nbse2012bandlfeffect,1l2lNbSe2013dftplas,pnasplas2019intrinsically}. Next, we qualitatively explain these phenomena via an expression for the plasmon energy\cite{pnasplas2019intrinsically}
\begin{equation}\label{BqAq}
\omega_p ^2 \approx \frac{B(\textbf{q})}{1 + A(\textbf{q})},
\end{equation}
where $B(\textbf{q})$ contains the contribution of band transitions with \textcolor{black}{the} transition energy satisfying $|E_{\mathbf{k'} l'} - E_{\mathbf{k}l}| < \omega_p$, while $A(\textbf{q})$ is contributed by band transitions with relatively higher energies $|E_{\mathbf{k'} l'} - E_{\mathbf{k}l}| > \omega_p$ (detailed $A(\textbf{q})$ and $B(\textbf{q})$ in Appendix~\ref{app-Pi}). As shown in Fig. \ref{fig: relaxplas}, the plasmon modes in 20b-cut calculations have higher energies than those in the 1b-cut calculation. In the 20b-cut case, apart from the contribution of intraband transitions, the term $B(\textbf{q})$ has an extra contribution from the interband transitions with energies smaller than $\omega_p$, which results in an increase of the plasmon mode energy. Then, in the full-band calculation, the plasmon mode energy becomes smaller again because states from higher-energy bands (beyond the 40 bands) satisfy the condition $|E_{\mathbf{k'} l'} - E_{\mathbf{k}l}| > \omega_p$ and contribute to the term $A(\textbf{q})$. In full-band calculations, the plasmon energy becomes smaller when the twist angle decreases from $5.1^\circ$ \textcolor{black}{to} $3.5^\circ$. \textcolor{black}{The} twist-angle dependences of the plasmon energy and the flat-band bandwidth are similar (see Fig. \ref{fig:twistangle} in Appendix~\ref{app-angle}). Therefore, the flat-band plasmon can serve as a clue to detect the flat band.
In this part, we have analyzed the intraband and interband contributions to the plasmonic properties via \textcolor{black}{the three kinds of calculations} with different band cutoffs. The quasi-flat plasmon only appears in the one-band calculation, induced only by intraband transitions in the doped flat band. Once more bands are considered, the plasmonic features are notably affected by \textcolor{black}{the} interband transitions. The effects of multi-band transitions on the flat-band plasmons in tb-{MoS$_2$} are different from those in TBG\cite{pnasplas2019intrinsically}. Here, \textcolor{black}{the} lower-energy quasi-flat plasmon dispersion in the simplified one-band calculation changes to a higher-energy $\sqrt{q}$ \textcolor{black}{relation} in \textcolor{black}{both} multi-band and full-band calculations. However, for magic-angle TBG, the plasmon dispersion changes in a contrary way after considering more bands. That is, \textcolor{black}{the} classical plasmons with $\sqrt{q}$ relation in a simplified toy model (only including two flat bands) alter to \textcolor{black}{the} lower-energy quasi-flat plasmons obtained from a multi-band continuum model or full-band TB model\cite{pnasplas2019intrinsically,Kuang2021}. The different band cutoff effects on flat-band plasmons of TBG and tb-{MoS$_2$} could originate from the different features of the flat bands in the two twisted systems. The flat bands in relaxed TBG are entirely separated from other bands with gaps at least two times larger than the bandwidth \cite{Kuang2021}, while the flat band in relaxed tb-{MoS$_2$} only detaches from conduction bands above zero energy but touches its adjacent VB at the K point, as shown in Figs. \ref{fig: relaxplas}(d) and \ref{fig: relaxplas}(h). So extra interband transitions in \textcolor{black}{the} multi-band calculation contribute to $A(\textbf{q})$ in magic-angle TBG\cite{pnasplas2019intrinsically} but to the $B(\textbf{q})$ term in relaxed flat-band tb-{MoS$_2$}.
\section{flat-band plasmons in rigid tb-{MoS$_2$} with $\theta=3.5^\circ$}\label{rigid3.5}
We further study the effect of the lattice relaxation on plasmons in tb-{MoS$_2$}. For tb-{MoS$_2$} with $\theta = 3.5^\circ$ without relaxation (rigid tb-{MoS$_2$}), the flat band (blue line) with bandwidth $W=4.3$ meV is completely separated from other bands, as shown in Fig. \ref{fig: rigidplas}(d). The band gap $\Delta$ between the flat band and other VBs (\textcolor{black}{the} shaded region in Fig. \ref{fig: rigidplas} (d)) is 15.8 meV, three times larger than the bandwidth $W$. The plasmon spectra obtained via 1b-cut, 20b-cut, and full-band calculations are shown in Figs. \ref{fig: rigidplas}(a)-(c) with chemical potential $\mu$ = -2.0 meV (dashed line) near half filling of the flat band. In this case, \textcolor{black}{a} quasi-flat plasmon dispersion with an energy of around 20 meV appears in the 1b-cut calculation. Interestingly, such \textcolor{black}{a plasmon dispersion with nearly constant energy} also emerges in both 20b-cut and full-band calculations with low energies, even though it tends to vanish at \textcolor{black}{a} larger q. Besides, higher-energy interband plasmons also appear when $q> 0.25|\Gamma M|$ in \textcolor{black}{both} 20b-cut and full-band calculations. When q is near 0.5$|\Gamma M|$, the two plasmon dispersions (one from the intraband transitions and the other from interband transitions) are separated and can coexist in the spectra. In 20b-cut and full-band calculations, the p-h continuum regions are separated into two parts due to the presence of the band gap $\Delta$ (see Fig. \ref{fig:phc}(b) in Appendix~\ref{app-dyn}). The quasi-flat plasmon modes are low-damped as they reside in the gap between the two p-h continuum regions.
To gain insights into the distinct plasmon features in the relaxed and rigid cases, we compare the contributions of band transitions to the EL functions under the 20b-cut calculation. In Fig. \ref{fig:relaxloss}(b), a significant difference is the existence of two plasmon branches \textcolor{black}{compared to Fig. \ref{fig:relaxloss} (a)}. The plasmon modes $p_1$ and $p_2$ correspond to the lower-energy quasi-flat and higher-energy plasmons in Fig. \ref{fig: rigidplas}(b), respectively, and $p_2$ is enhanced while $p_1$ is weakened at larger momenta. The two peaks $p_1$ and $p_2$ are contributed by the intraband plasmon $p_{intra}$ and the interband plasmon $p_{inter}$ (arrows in Fig. \ref{fig:relaxloss}(b)), respectively. For $q = 0.5|\Gamma M|$, unlike the relaxed case where intraband and interband transitions \textcolor{black}{can be} superimposed in an energy range, the plasmons $p_{inter}$ and $p_{intra}$ always separate in the rigid case.
The underlying explanation for the two plasmon modes is that, due to the band gap $\Delta$ emerging in \textcolor{black}{the} rigid $3.5^\circ$ tb-{MoS$_2$}, interband transition energies $\omega> \Delta + W - |\mu|$ no longer overlap with intraband transition energies $W > \omega > 0$. As a result, $p_{intra}$ ($p_{inter}$) are softened (hardened) to $p_1$ ($p_2$) by the extra higher-energy interband transitions (lower-energy intraband transitions) contributing to $A(\textbf{q})$ ($B(\textbf{q})$), as shown with arrows in Fig. \ref{fig:relaxloss}(b). We can also see from Fig. \ref{fig:relaxloss} that \textcolor{black}{the} interband transitions play an important role in generating the different plasmon features of relaxed and rigid tb-{MoS$_2$}, and that \textcolor{black}{the} interband plasmons $p_{inter}$ gradually dominate the plasmons at larger momenta, which can be attributed to the enhancement of \textcolor{black}{the} interband coherence factor in Eq. (\ref{Lindhard}) (see Fig. \ref{fig:corre} in Appendix~\ref{app-corre}).
In brief, the flat band can lead to quasi-flat and low-damped plasmons in the rigid sample. The intraband plasmons can coexist with the higher-energy interband plasmons at some momenta in \textcolor{black}{both} multi-band and full-band calculations.
The presence of the band gap $\Delta$ ensures that \textcolor{black}{the} two plasmon branches appear \textcolor{black}{simultaneously} in \textcolor{black}{the} EL spectra. On the contrary, in the relaxed sample, due to the absence of the band gap between the flat band and its adjacent VBs, interband transitions start to contribute at very small energies. As a result, the single plasmon mode has contributions from both the interband and intraband transitions. However, we detect a quasi-flat plasmon in relaxed tb-{MoS$_2$} with \textcolor{black}{an} angle smaller than $3.5^\circ$ (see Fig. \ref{fig:1.6degree}(a) in Appendix~\ref{app-angle}), at which a band gap $\Delta$ also appears. In conclusion, the separation of the flat band from other bands in tb-{MoS$_2$} plays a crucial role in the exploration of quasi-flat and low-damped plasmons, since the band gap affects the interband contribution to plasmons.
\section{Band cutoff and doping effects}
\label{sec-bandcutmu}
\begin{figure}[htp!]
\includegraphics[width=0.5\textwidth]{mucutcom.pdf}
\caption{Band cutoff effect on plasmon energy versus q for relaxed tb-{MoS$_2$} (a) with $\theta=5.1^\circ$ and $\mu$ = -10 meV and (c) $\theta=3.5^\circ$ and $\mu$ = -2.4 meV. The doping effect on plasmon energy versus q for relaxed tb-{MoS$_2$} with (b) $5.1^\circ$ and (d) $3.5^\circ$ under the 20b-cut calculation.
1b-cut (black squares), 20b-cut (red circles), 40b-cut (blue triangles), and 80b-cut (green triangles) in (a) and (c) stand for cutoff calculations via Eq. (\ref{Lindhard}) with $l$ and $l'$ summing over 1 band (the doped flat VB), 40 bands (20 CBs above and 20 VBs below zero energy), 80 bands (40 CBs above and 40 VBs below zero energy), and 160 bands (80 CBs above and 80 VBs below zero energy), respectively. Plasmon energies at three different chemical potentials are shown in (b) and (d), obtained from 20b-cut calculations via Eq. (\ref{Lindhard}).}
\label{fig:mucutcom}
\end{figure}
In this part, we move forward to investigating the band cutoff and doping effects on plasmons in \textcolor{black}{the} relaxed tb-{MoS$_2$} with $5.1^\circ$ and $3.5^\circ$ in Fig. \ref{fig:mucutcom}. First, we compare the plasmons calculated with different band cutoffs in Fig. \ref{fig:mucutcom}(a) for $5.1^\circ$ and Fig. \ref{fig:mucutcom}(c) for $3.5^\circ$. The plasmon energy significantly increases from the 1b-cut calculation (black squares) to the 20b-cut calculation (red dots) as the momentum q grows. Then, after taking more bands (blue and green triangles) into account, the plasmon energy decreases when $q > 0.25|\Gamma M|$ and $q > 0.56|\Gamma M|$ in Figs. \ref{fig:mucutcom}(a) and \ref{fig:mucutcom}(c), respectively. In the 40b-cut and 80b-cut calculations, other higher-energy interband transitions contribute to $A(\textbf{q})$ and decrease the plasmon energy.
For $q<0.25|\Gamma M|$ in $5.1^\circ$ and $q < 0.56|\Gamma M|$ in $3.5^\circ$, the plasmon energy converges even in the 20b-cut calculation. Therefore, only for sufficiently small wave numbers is it accurate enough to model the flat-band plasmon with an appropriate band cutoff calculation. This also implies that if plasmons in relaxed tb-{MoS$_2$} are studied via a low-energy continuum model \cite{tb-mos2021kpmodel,tb-mos2021contituummodel,tb-tmd2021conti128band}, the plasmon energy will be overestimated at larger twist angles and momenta. The low-energy continuum model only accurately describes a finite number of bands near \textcolor{black}{the} Fermi energy, thereby neglecting the effects of the interband polarization of higher-energy bands. Such \textcolor{black}{an} overestimation of \textcolor{black}{the} plasmon energy could affect the prediction of plasmon-mediated superconductivity\cite{prs2020superconductivity, lewandowski2020pairing}.
Next, we show that modulating the chemical potential $\mu$ is another way to change the interband contribution to plasmons dramatically. For relaxed tb-{MoS$_2$} with $5.1^\circ$ in Fig. \ref{fig:mucutcom}(b) and with $3.5^\circ$ in Fig. \ref{fig:mucutcom}(d), plasmon energies $\omega_p^{20b}$ (circles) at different $\mu$ are obtained via 20b-cut calculations. The results are also fitted with $\sqrt{q}$ curves (solid black lines).
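The $\sqrt{q}$ fits amount to a one-parameter least-squares estimate; a minimal sketch follows (the $(q, \omega_p)$ values are placeholders, not the data of Fig. \ref{fig:mucutcom}):
\begin{verbatim}
import numpy as np

# placeholder (q, omega_p) points read off an EL intensity map
q = np.array([0.1, 0.2, 0.3, 0.4, 0.5])    # in units of |Gamma M|
wp = np.array([22., 31., 38., 44., 49.])   # plasmon energies (meV)

# omega_p = c * sqrt(q): closed-form least-squares coefficient
c = np.sum(wp * np.sqrt(q)) / np.sum(q)
print(f"c = {c:.1f} meV, residuals:", wp - c * np.sqrt(q))
\end{verbatim}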
On the one hand, decreasing the magnitude of the chemical potential $\mu$ leads to a smaller plasmon energy. The plasmon energy tends to a constant at large momenta when the doping level approaches 0, as shown in Fig. \ref{fig:mucutcom}(d) with blue dots. When a larger hole doping is introduced, interband transitions are enhanced via the Fermi-Dirac factor in Eq. (\ref{Lindhard}) (see Fig. \ref{fig:fermi} in Appendix~\ref{app-corre}), which results in larger plasmon energies at higher hole-doping levels (black and red circles) in Figs. \ref{fig:mucutcom}(b) and \ref{fig:mucutcom}(d). This can be further verified by investigating how the intraband plasmon $p_{intra}$ and the interband plasmon $p_{inter}$ contribute to the plasmon $p$ in the EL functions at different doping levels for a sampled $q = 0.5|\Gamma M|$, as seen in Fig. \ref{fig:muloss}(a). The interband plasmon modes $p_{inter}$ increase monotonically with $|\mu|$, while the intraband plasmon mode $p_{intra}$ reaches its maximum energy near half filling of the flat band. Furthermore, as shown in Fig. \ref{fig:muloss}(a), the plasmon mode $p$ is almost entirely due to $p_{intra}$ at $\mu$ = -0.4 meV, is generated by both $p_{intra}$ and $p_{inter}$ at $\mu$ = -2.4 meV, and is dominated by $p_{inter}$ at $\mu$ = -4.4 meV, where the flat band is almost fully hole-doped. In fact, the low-energy intraband plasmons $p_{intra}$ dominate the plasmons $p$ for most momenta (except for q near $|\Gamma M|$) at $\mu$ = -0.4 meV, whereas the plasmons $p$ stem mainly from the higher-energy interband plasmons $p_{inter}$ for all q (even for q near 0) at $\mu$ = -4.4 meV (see Fig. \ref{fig:relaxmu5q} in Appendix~\ref{app-lossqu}). We therefore conclude that the stronger and higher-energy interband plasmons at larger hole-doping levels play the key role in enhancing the plasmon energies in Figs. \ref{fig:mucutcom}(b) and \ref{fig:mucutcom}(d). Only one plasmon peak appears in the EL functions $S$ at all hole-doping levels, so the quasi-flat plasmon mode is observed neither at high nor at low hole doping in relaxed tb-{MoS$_2$} with $3.5^\circ$. Thus, the separation of the flat band from the other bands in tb-{MoS$_2$} remains the key to exploring quasi-flat and low-damped plasmon modes.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{3muloss.pdf}
\caption{EL functions $S$ (red solid lines), intraband EL functions $S_{intra}$ (blue solid lines), and interband EL functions $S_{inter}$ (black solid lines) at different chemical potentials $\mu$, for (a) relaxed and (b) rigid tb-{MoS$_2$} with $3.5^\circ$ in 20b-cut calculations. Intraband and interband plasmon modes are marked by $p_{intra}$ (blue dashed lines) and $p_{inter}$ (black dashed lines), respectively. The notations $p$, $p_1$ and $p_2$ (red dashed lines) represent the plasmon modes extracted from the EL functions $S$. The EL functions are shifted vertically for clarity, and their zeros are denoted by gray dashed lines.}
\label{fig:muloss}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{rigidmuplas.pdf}
\caption{EL function intensity spectra for rigid $3.5^\circ$ tb-{MoS$_2$} with chemical potential (a) $\mu$ = -4.0 meV and (b) $\mu$ = -0.5 meV in 20b-cut calculations. The p-h continuum region with its boundaries (green lines) is also illustrated.}
\label{fig:rigidmuplas}
\end{figure}
Given that the plasmon dispersions are markedly altered by $\mu$ in the relaxed cases, we now study how the quasi-flat plasmons appearing in the rigid case are influenced by the chemical potential. When the isolated flat band is slightly doped, with $\mu$ = -0.5 meV, both quasi-flat plasmons and higher-energy interband plasmons persist in Fig. \ref{fig:rigidmuplas}(b). The quasi-flat plasmons remain low-damped, whereas the interband plasmons are overdamped. Once $\mu$ is tuned to -4.0 meV, with the flat band nearly fully filled, only one undamped plasmon dispersion appears in Fig. \ref{fig:rigidmuplas}(a). To unveil how quasi-flat plasmons are affected by the doping level, we study the intraband and interband contributions to plasmons at the three hole-doping levels with a fixed q = 0.5$|\Gamma M|$ in Fig. \ref{fig:muloss}(b). First, the interband plasmons $p_{inter}$ are enhanced at larger hole-doping levels. Going from $\mu$ = -0.5 to -2.0 meV, the enhancement of $p_{inter}$ at $\mu$ = -2.0 meV weakens $p_1$, despite the stronger $p_{intra}$ with larger energy compared to $\mu$ = -0.5 meV. The quasi-flat plasmon spectral weights in Fig. \ref{fig: rigidplas}(b) thus become weaker compared to Fig. \ref{fig:rigidmuplas}(b). Moreover, the enhanced $p_{inter}$ with higher energy also causes the disappearance of the quasi-flat plasmons $p_1$, leaving only the higher-energy plasmons at $\mu$ = -4.0 meV in Fig. \ref{fig:rigidmuplas}(a). In fact, the plasmon mode $p_1$ arising from $p_{intra}$ is then visible only for q = 0.0625$|\Gamma M|$ (see Fig. \ref{fig:rigidmu5q}(b) in Appendix~\ref{app-lossqu}). The mode $p_1$ emerges there because the much weaker higher-energy interband transitions at the smallest q do not completely quench the intraband plasmon $p_{intra}$ via the term $A(\textbf{q})$ in Eq. (\ref{BqAq}), as discussed for Fig. \ref{fig:relaxloss}(b). We can also see that the single plasmon dispersion at $\mu$ = -4.0 meV is almost entirely due to the interband plasmons $p_{inter}$, except for q = 0.0625$|\Gamma M|$ (shown in Fig. \ref{fig:rigidmu5q}(b) in Appendix~\ref{app-lossqu}). Furthermore, the relatively weaker and higher-energy interband plasmons $p_{inter}$ at $\mu$ = -0.5 meV (see Fig. \ref{fig:rigidmu5q}(a) in Appendix~\ref{app-lossqu}) ensure that the intraband and interband plasmon modes coexist for $q > 0.5|\Gamma M|$ in Fig. \ref{fig:rigidmuplas}(b). The chemical potential is thus essential for observing the coexistence of the two plasmon modes in the rigid case.
All in all, the plasmonic properties of relaxed and rigid tb-{MoS$_2$} can be notably affected by interband transitions at different hole-doping levels. The quasi-flat plasmons can be destroyed at large hole-doping levels, since the intraband plasmons are strongly weakened by the enhanced higher-energy interband transitions at larger $|\mu|$, which simultaneously produce the stronger and higher-energy interband plasmons. This also implies that a lighter hole doping is more conducive to observing the quasi-flat plasmons in rigid tb-{MoS$_2$}. Moreover, an isolated flat band cannot always guarantee the existence of quasi-flat plasmons, because of the doping dependence of the interband contributions.
\section{Summary and discussion}
In summary, we investigated flat-band plasmons in hole-doped tb-{MoS$_2$} with and without lattice relaxation, and analyzed the intraband and interband contributions to the plasmons. Different band cutoffs were considered in the polarization function to tune the interband transitions between the single flat band and the other bands. In the relaxed cases, the flat band is not separated from the other valence bands, so the interband and intraband transitions can interfere with each other in the low-energy range. When interband transitions are introduced in the multi-band calculations, the quasi-flat plasmons emerging in the one-band calculation are transformed into classical 2DEG plasmons with $\sqrt{q}$ dispersion. The full-band calculation, which includes higher-energy interband transitions, lowers the energies of the classical plasmons observed in the multi-band calculation. We also compared plasmons in relaxed tb-{MoS$_2$} with $5.1^\circ$ to those with $3.5^\circ$. The plasmon energy becomes smaller as the twist angle decreases, since a smaller angle gives rise to a flatter band. In rigid tb-{MoS$_2$} with $3.5^\circ$, the flat band is separated from the valence bands by a gap three times larger than its bandwidth. As a consequence, the interband and intraband transitions occur in different energy ranges. We observe two plasmon branches in rigid tb-{MoS$_2$}: a lower-energy quasi-flat plasmon (intraband plasmon) and a higher-energy plasmon (interband plasmon). Moreover, the quasi-flat plasmons can be observed in the one-band, multi-band, and full-band calculations. For other tb-TMDs, for example twisted bilayer MoSe$_2$, twisted bilayer WS$_2$ and twisted bilayer WSe$_2$, such a band gap also disappears in the relaxed cases\cite{flatband2021tb-tmd}. Therefore, similar plasmon properties could be observed in these tb-TMDs.
Furthermore, different band cutoffs in multi-band calculations change the plasmon energy at large q in relaxed tb-{MoS$_2$}. Tuning the hole-doping level notably changes the plasmon energy in relaxed tb-{MoS$_2$} and affects the coexistence of the two plasmon branches in rigid tb-{MoS$_2$}, since the interband contributions to the plasmons are significantly modified by the doping of the flat band. Plasmons become progressively dominated by the enhanced interband transitions as more holes fill the flat band. When the flat band is almost completely filled with holes, only one interband plasmon dispersion is observed in both the rigid and relaxed cases, and the quasi-flat plasmons disappear in rigid tb-{MoS$_2$}. Looking ahead, flat-band systems remain promising platforms for exploring undamped, low-energy dispersionless plasmons and their applications, such as plasmonic superconductivity. Based on the band cutoff analysis, one should also reassess the validity of low-energy models for studying flat-band plasmons in twisted two-dimensional semiconductors, especially when interband transitions play a dominant role.
\begin{acknowledgments}
We thank Francisco Guinea for his valuable discussions. This work was supported by the National Natural Science Foundation of China (Grants No. 12174291 and No. 12047543). S.Y. acknowledges funding from the National Key R\&D Program of China (Grant No. 2018YFA0305800). X.K. acknowledges the financial support from China Scholarship Council (CSC). Numerical calculations presented in this paper have been performed on the supercomputing system in the Supercomputing Center of Wuhan University.
\end{acknowledgments}
\section{Results}
We start with a compact differentiable
manifold $M$ equipped with a
Riemannian metric $g$ resp. a
non-reversible Finsler metric $f.$
Then the corresponding \emph{norm}
$\|v\|$ of a tangent vector $v$ is defined
by $\|v\|^2=g(v,v)$ resp. $\|v\|=f(v).$
In the following we use as common notation
also for a Finsler metric the letter $g.$
For a non-negative integer
$k \in \mathbb{N}_0=\n \cup \{0\}$ let
$\conj(k) \in (0,\infty]$ be the infimum of all
$L>0$ such that any geodesic
$c$ of length at least $L$
has Morse index $
\ind_{\Omega}(c)$ at least $(k+1).$
Hence for any geodesic $c:[0,1]\longrightarrow M$
of length $>\conj (k)$ the Morse index
is at least $(k+1).$
By the \emph{Morse index theorem}
it follows that there are $(k+1)$ conjugate points $c(s)$
with $0<s<1,$ which are conjugate to $c(0)$ along
$c|[0,s].$ Here we count conjugate points with
multiplicity, cf.~\cite[Sec. 2.5]{Kl}.
And we conclude:
$\conj(k)\le (k+1)\,\conj(0).$
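For orientation we note the model case of the round sphere $S^n$
with constant curvature $K=1:$ along any geodesic the conjugate points
occur where the length reaches $j\pi,$ $j \in \n,$ each with
multiplicity $n-1,$ so that
\begin{equation*}
\conj(k)=\left\lceil \frac{k+1}{n-1}\right\rceil \pi\,.
\end{equation*}
For $n=2$ this gives $\conj(k)=(k+1)\,\conj(0),$
so the preceding inequality cannot be improved in general.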
Let $P([0,1],M)$ be the space of $H^1$-curves
$\gamma:[0,1]\longrightarrow M$ on the manifold
$M.$
Let $l, E, F: P=P([0,1],M)\longrightarrow \R$
denote the following functionals on this space.
The \emph{length} $l(c),$ resp. the \emph{energy}
$E(c)$ is defined as
\begin{equation*}
l(\gamma)=\int_0^1 \|\gamma'(t)\|\,dt\,;\,
E(\gamma)=\frac{1}{2}\int_0^1 \|\gamma'(t)\|^2\,dt\,.
\end{equation*}
We use instead of $E$ the
\emph{square root energy functional} $F: P([0,1],M)\longrightarrow\R$ with $F(\gamma)=\sqrt{2E(\gamma)},$
cf.~\cite[Sec. 1]{HR}.
For a curve parametrized proportional to arc length
we have $F(\gamma)=l(\gamma).$ We consider the following subspaces of $P.$
The \emph{free loop space} $\Lambda M$ is the subset of
loops $\gamma$ with $\gamma(0)=\gamma(1).$
For points $p,q\in M$ the space
$\Omega_{pq} M$ is the subspace of curves
$\gamma$ joining $p=\gamma(0)$ and
$q=\gamma(1).$ The \emph{(based) loop space}
$\Omega_p M$ equals $\Omega_{pp}M.$
As common notation
we use $X,$ i.e. $X$ denotes $\Lambda M,
\Omega_{pq}M,$ or $ \Omega_pM.$
It is well known that the critical points of
the
square root energy
functional $F:X \longrightarrow \R$ are
geodesics joining $p$ and $q$ for $X=\Omega_{pq}(M),$
the closed (periodic) geodesics for $X=\Lambda M,$
and the geodesic loops for $X=\Omega_p(M).$
The index form
$I_c$ can be identified with the hessian
$d^2 E(c)$ of the energy functional, for the two
cases $X=\Lambda M$ resp. $X=\Omega_{pq}M$
(allowing also $p=q$)
we obtain different indices
$\ind_{\Lambda} (c)$ resp.
$\ind_{\Omega} (c).$
If
$c\in \Lambda M$ is a closed geodesic
with index $\ind_{\Lambda}(c)$ then
for $p=c(0)$
it is at the same time a geodesic loop
$c \in \Omega_p M$
with index $\ind_{\Omega}(c).$
The difference
$\conc (c)=\ind_{\Lambda}c -\ind_{\Omega} c$
is called \emph{concavity}. It satisfies
$0 \le \conc (c) \le n-1,$
cf.~\cite[Thm. 2.5.12]{Kl} for the Riemannian case
and~\cite[Sec. 6]{Ra04} for the Finsler case.
We use the following notation for sublevel sets
of $F:$
$
X^{\le a}=\{\gamma \in X \,;\, F(\gamma)\le a\},\quad
X^a=\{\gamma \in X \,;\, F(\gamma)=a\}.
$
For a non-trivial homology class $h \in H_j(X,X^{\le b};R)$ we
denote by $\cri_X(h)$ the \emph{critical value}, i.e.
the minimal value $a\ge b$ such that
$h$ lies in the image of the homomorphism
$ H_j(X^{\le a},X^{\le b};R)
\longrightarrow
H_j(X,X^{\le b};R)$
induced by the inclusion, cf.~\cite[Sec.1]{HR}.
It follows that for a non-trivial homology
class $h \in H_j(X,X^{\le b};R)$
there exists a geodesic $c$ in $X$ with
length $l(c)=\cri_X(h).$
Its index satisfies $\ind_X(c)\le j.$
The Morse theory of the functional
$F: X \longrightarrow \R$ implies
\begin{theorem}
\label{thm:one}
Let $M$ be a compact manifold endowed with a Riemannian
metric resp. non-reversible Finsler metric $g.$
Let $h \in H_*(X,X^{\le b};R)$
be a non-trivial homology class
of degree $\deg(h)$
for some coefficient field $R.$ Then $\cri_X(h)\le \conj(\deg(h))\le
(1+\deg(h))\,\conj(0),$
and
the homomorphism
\begin{equation*}
H_j(X^{\le \conj(\deg(h))},X^{\le b};R)\longrightarrow
H_j(X,X^{\le b};R)
\end{equation*}
induced by the inclusion is surjective
for all $j\le \deg(h).$
\end{theorem}
For positive Ricci curvature
$\ric$ and
for positive sectional curvature $K$
(resp. positive flag curvature
$K$ in the case of
a Finsler metric) we obtain
in Lemma~\ref{lem:conj} upper bounds for
the sequence $\conj(k), k\in \n_0.$
As a consequence we
obtain:
\begin{theorem}
\label{thm:two}
Let $(M,g)$ be a compact
$n$-dimensional
Riemannian or Finsler
manifold.
\smallskip
(a) If $\ric \ge (n-1) \delta$
for $\delta >0$ then
$\cri_X (h) \le \pi (\deg (h)+1) /\sqrt{\delta}$
for a non-trivial homology class
$h \in H_{*}(X,X^{\le b};R)$
of degree $\deg(h).$
\smallskip
(b) If $K \ge \delta$ for $\delta >0$ then
$\cri_X(h) \le \pi \{1+\deg (h)/(n-1)\} /\sqrt{\delta}$
for a non-trivial homology class
$
h \in H_{*}(X, X^{\le b};R)$
of degree $\deg (h).$
\smallskip
(c) If $K \le 1$ then
$\cri_{\Omega}(h)\ge \left[\deg(h)/(n-1)\right]\pi$
for $h \in H_*(\Omega_{pq}M;R)$
and $\cri_{\Lambda}(h)\ge \left\{\left[\deg(h)/(n-1)\right]-1\right\}\pi$
for $h \in H_*(\Lambda M, \Lambda^{\le b}M;R).$
Here for a real number $x$ we denote
by $[x]$ the largest integer $\le x.$
\end{theorem}
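As a consistency check consider the round sphere $M=S^n$ with $\delta=K=1:$
parts (b) and (c) together bracket the critical values,
\begin{equation*}
\left[\deg(h)/(n-1)\right] \pi \;\le\; \cri_{\Omega}(h)
\;\le\; \left\{1+\deg(h)/(n-1)\right\} \pi\,,
\end{equation*}
in accordance with the fact that a geodesic joining $p$ and $q$
of length $l \in \left( j\pi, (j+1)\pi \right)$ passes $j$ conjugate points,
each of multiplicity $n-1,$ and hence has $\ind_{\Omega}=j(n-1).$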
As a consequence of
Theorem~\ref{thm:two}(a) we obtain
an upper bound for the length of
a shortest closed geodesic on a manifold of
positive Ricci curvature:
\begin{theorem}
\label{thm:three}
Let $(M,g)$ be a compact and simply-connected
Riemannian or
Finsler manifold
of dimension $n$
of positive Ricci curvature
$\ric\ge (n-1)\delta$ for some
$\delta>0.$
And let
$m$ be the smallest integer with
$1\le m\le n-1$ for which $M$ is
$m$-connected and $\pi_{m+1}(M)\not=0.$
We denote by $L=L(M,g)$ the length
of a (non-trivial) shortest closed geodesic.
Then
$L \le \pi (m+1)/\sqrt{\delta},$
in particular $L \le \pi n /\sqrt{\delta}.$
\end{theorem}
\begin{remark}
(a) This improves the estimate
$L \le 8\pi m\le 8 \pi (n-1)$ given
in \cite[Thm. 1.2]{Ro}.
\smallskip
(b) If $(M,g)$ is not simply-connected and
$\ric \ge (n-1)\delta$ for some positive
$\delta$ then there is a shortest closed curve
$c$
which is homotopically non-trivial. This closed
curve is a closed geodesic and
$\ind_{\Lambda}(c)=\ind_{\Omega}(c)=0.$ From Lemma~\ref{lem:conj}
we obtain $l(c)\le \pi /\sqrt{\delta}.$
On the other hand choose $k\in \n$ such
that $l(c^k)=kl(c)>\pi /\sqrt{\delta},$
here $c^k(t)=c(kt)$ denotes the $k$-th
iterate of the closed geodesic $c.$
Then we conclude from
Remark~\ref{rem:morse-schoenberg}(a)
that
$\ind_{\Lambda}(c^k)\ge
\ind_{\Omega}(c^k)\ge 1,$
hence the closed geodesic $c$
is not \emph{hyperbolic,}
cf. \cite[Thm. 3.3.9]{Kl}.
\smallskip
(c) For a compact
and simply-connected
Riemannian manifold $(M,g)$ of
positive sectional curvature $K\ge \delta$
it follows from the estimate
$\conj(n-1) \le 2\pi/\sqrt{\delta}$ that the length $L$ of a
shortest closed geodesic satisfies
$L\le 2\pi/\sqrt{\delta}.$ In the limiting case
$L=2\pi/\sqrt{\delta}$ the metric is of constant
sectional curvature,
cf.~\cite[Cor. 1]{Ra21}.
\end{remark}
\begin{theorem}
\label{thm:four}
Let $(M,g)$ be a compact Riemannian or
Finsler manifold
of dimension $n$ with $\ric \ge (n-1)\delta$
(resp. $K \ge \delta$) for some positive $\delta.$
For any pair $p,q\in M$ of points (also allowing
$p=q$) and $k \in \n$
there exist at least $k$ geodesics joining
$p$ and $q$ (i.e. geodesic loops for $p=q$) with length
$\le (2(n-1)k+1)\pi/\sqrt{\delta},$
(resp.
$\le (2k+1)\pi/\sqrt{\delta}$).
\end{theorem}
\begin{remark}
(a)
This result improves the bounds
$16\pi(n-1)k$ resp.
$(16(n-1)k+1)\pi$ given in
\cite[Thm. 1.3]{Ro} for $\delta=1.$
\smallskip
(b)
Here two geodesics $c_1,c_2 \in \Omega_{pq}M$
are called \emph{distinct} if
their lengths $l(c_1)\not=l(c_2)$ are distinct.
From a geometric point of view this is not
very satisfactory.
If we choose distinct points $p,q\in S^n$
on the sphere with the standard metric of
constant sectional curvature $K=1,$
which are not antipodal points,
then any geodesic joining $p$ and $q$ is part
of the unique great circle
$c:\R \longrightarrow S^n$ through $p$ and $q.$
So in this case the geodesics whose existence
is claimed in Theorem~\ref{thm:four} all
come from a single closed geodesic,
cf. \cite[p.181]{Kl}.
Closed geodesics are called
\emph{geometrically distinct} if they are
different as subsets of $M$ (or in the case
of a non-reversible Finsler metric if
their
orientations are different when they agree
as subsets of $M$).
If the metric $g$ is bumpy then there
are only finitely many geometrically distinct closed
geodesics below a fixed length. Hence for a bumpy metric for almost
all pairs of points $p,q$ on $M$ there is no
closed geodesic through these points.
Hence in this case the geodesics
constructed in Theorem~\ref{thm:four}
do not come from a single closed
geodesic.
\smallskip
(c) There are related
\emph{curvaturefree} estimates depending only on
the diameter due to
Nabutovsky and Rotman.
In ~\cite{NR} they show that for
any pair $p,q$ of points in a compact
$n$-dimensional Riemannian manifold with diameter
$d$ and for every $k \in \n$ there are
at least
$k$ distinct geodesics joining $p$ and $q$
of length $\le 4nk^2d.$
\end{remark}
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{thm:one}]
(a) We first give the proof for the case
$X=\Omega_{pq}M$ for points $p,q$
and for a homology class
$h \in H_k(\Omega_{pq}M, \Omega_{pq}^{\le b}M;R).$
Here we also allow
the case $p=q.$
We denote by $d: M \times M \longrightarrow \R$ the \emph{distance} induced by the metric $g.$
We choose a sequence
$(q_j)_{j\ge 1}\subset M$ such that $\lim_{j\to \infty} d(p,q_j)=0$
and such that along any geodesic joining $p$ and
$q_j$ the point $q_j$ is not a conjugate point
to $p.$
This is possible as a consequence of
Sard's theorem, cf.
\cite[Cor. 18.2]{Mi} for the Riemannian
case and
\cite[Cor. 8.3]{Ra04} for the Finsler case.
As a consequence the square root energy functional
$F_j=F:
\Omega_{pq_j}M \longrightarrow \R$ is a \emph{Morse function.}
There is a homotopy equivalence
$\zeta_{qq_j}:
\Omega_{pq}M \longrightarrow \Omega_{pq_j}M$
between loops spaces with
$F(\gamma)=\lim_{j\to \infty}F(\zeta_{qq_j}(\gamma))$
for all $\gamma \in \Omega_{pq}M,$
cf. \cite[Lem.1]{Ra21}.
Let $h\in H_k(\Omega_{pq}M,\Omega_{pq}^{\le b}M;R),$
then it follows from Morse theory for the
functional $F_j$ that there is a
geodesic $c_j$
joining $p$ and $q_j$
whose length $l(c_j)$ equals the
critical value $\cri (\zeta_{qq_j,*}(h))$
of the homology class
$\zeta_{qq_j,*}(h)\in H_k
(\Omega_{pq_j}M,\Omega_{pq_j}^{\le b}M;R).$
The Morse index
$\ind_{\Omega} (c_j)$ as critical point of $F_j$ equals
the degree of the homology class by the
Morse lemma,
cf. \cite[Sec. 8]{Ra04}. By definition of $\conj(k)$ we
obtain $l(c_j)\le \conj(k).$
Since $\cri_{\Omega} (h)
=\lim_{j\to \infty} l(c_j),$ we
finally arrive at the claim
$\cri_{\Omega}(h)\le \conj(k).$
\smallskip
(b) Now we assume $X=\Lambda M.$
Then we use a sequence $g_j$ of bumpy Riemannian
or Finsler
metrics
converging to the metric $g$
with respect to the strong $C^r$ topology for
$r \ge 2,$ resp. $r\ge 4$ in the Finsler case.
We can choose such a sequence by the
\emph{bumpy metrics theorem} for Riemannian metrics
due to Abraham~\cite{Ab} and Anosov~\cite{An}, and by the
generalization to the Finsler case,
cf. \cite{RT2020}.
The square root energy functional
$F_j: \Lambda M \longrightarrow \R$ is then a
\emph{Morse-Bott function,} the critical set equals the
set of closed geodesics which is the union
of disjoint
and non-degenerate critical
$S^1$-orbits. Hence all closed geodesics
are non-degenerate, i.e. there is no periodic Jacobi
field orthogonal to the geodesic. Then for any
$j$ there is a closed geodesic of $g_j$
such that the length $l(c_j)$ with respect to $g_j$
equals the critical value
$\cri_{\Lambda,j}(h)$ with respect to $g_j.$
Hence Morse theory implies that
the index $\ind_{\Lambda}(c_j)\in \{k,k-1\},$
since the critical submanifold is $1$-dimensional.
Then $\ind_{\Omega,j}(c_j)\le k$ which implies
that the length $l_j(c_j)$ of $c_j$
with respect to the metric $g_j$ satisfies
$\cri_{\Lambda,j}(h)=l_j(c_j)\le \conj_j(k).$
Here $\conj_j(k)$ is defined with respect
to the metric $g_j.$
Then $\cri_{\Lambda}(h)=
\lim_{j\to \infty} \cri_{\Lambda,j}(h)
\le \lim_{j\to \infty}\conj_j(k)
=\conj(k).$
\end{proof}
\begin{remark}
\label{rem:morse-schoenberg}
The Morse-Schoenberg
comparison result~\cite[Thm. 2.6.2]{Kl},
\cite[Lem.~3]{Ra04}
implies:
Let $c:[0,1]\longrightarrow M$ be a
geodesic of length $l(c),$ and
$k \in \n.$
\smallskip
(a) If $\ric \ge (n-1)\delta$
for $\delta >0$
and if
$l(c) > \pi k /\sqrt{\delta},$
then $\ind_{\Omega}(c)\ge k.$
\smallskip
(b) If $K \ge \delta$
for a positive $\delta$
and if
$l(c) > \pi k/\sqrt{\delta},$
then $\ind_{\Omega}(c)\ge k(n-1).$
\smallskip
(c) If $K\le 1$ and if $l(c)\le \pi k$
then $\ind_{\Omega}(c)\le (k-1)(n-1).$
\end{remark}
This implies
\begin{lemma}
\label{lem:conj}
Let $(M,g)$ be a manifold with Riemannian
metric resp. Finsler metric $g.$
\smallskip
(a) If $\ric \ge (n-1)\delta $
for $\delta >0$ then
$\conj(k)\le (k+1)\pi/\sqrt{\delta}$
for $k \in \n_0.$
\smallskip
(b) If $K \ge \delta$ for $\delta>0$
we have
$\conj(k(n-1))\le (k+1)\pi/\sqrt{\delta}$
for $k\in \n_0.$
\end{lemma}
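\begin{proof}
If $l(c)>(k+1)\pi / \sqrt{\delta}$ then
Remark~\ref{rem:morse-schoenberg}(a) gives
$\ind_{\Omega}(c)\ge k+1,$ hence $\conj(k)\le (k+1)\pi/\sqrt{\delta},$
which is (a).
If $K \ge \delta$ and $l(c)>(k+1)\pi / \sqrt{\delta}$ then
Remark~\ref{rem:morse-schoenberg}(b) gives
$\ind_{\Omega}(c)\ge (k+1)(n-1)\ge k(n-1)+1,$
hence $\conj(k(n-1)) \le (k+1)\pi/\sqrt{\delta},$
which is (b).
\end{proof}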
\begin{proof}[Proof of Theorem~\ref{thm:two}]
From Theorem~\ref{thm:one} and
Lemma~\ref{lem:conj} we immediately obtain
the statements (a) and (b).
Statement (c) follows analogously to the arguments
in the proof of Theorem~\ref{thm:one} together
with Remark~\ref{rem:morse-schoenberg}(c) and
the estimate $\conc(c)\le n-1.$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:three}]
By assumption there is a homotopically non-trivial
map $\phi: S^{m+1}\longrightarrow M,$
which also defines a
homotopically non-trivial map
$\tilde{\phi}: (D^{m},S^{m-1})\longrightarrow
(\Lambda M, \Lambda^0 M),$
cf. \cite[Thm.~2.4.20]{Kl}.
This defines a non-trivial
homology class $h \in H_{m}(\Lambda M, \Lambda^0 M;R)$
for some
coefficient field $R.$
Then there exists a closed geodesic
$c$ with length
$l(c)=\cri_{\Lambda}(h).$
We conclude from Theorem~\ref{thm:two}(a)
that $l(c)=
\cri_{\Lambda}(h)\le \pi (m+1)/\sqrt{\delta}.$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:four}]
Since $M$ is simply-connected
we conclude from a minimal model for the
rational homotopy type of $\Omega M:$
There exists
a non-trivial cohomology class
$\omega \in H^{2l}(\Omega_{pq}M;\q)$
of even degree $2l$
for some $1\le l\le n-1,$
which is not a torsion class with respect to
the cup product, i.e. $\omega^k\not=0$ for all
$k\ge 1.$
There is a sequence $h_k\in H_*(\Omega_{pq}M,\Omega_{pq}^{\le b}M;\q), k\ge 1$
of non-trivial homology classes
with $h_k=\omega \cap h_{k+1},
\deg(h_k)=2lk, k\ge 1.$
Here $\cap$ denotes the \emph{cap product.}
Then we use the principle of
\emph{subordinated homology classes,}
cf.~\cite[p.225--226]{BTZ} and conclude:
$\cri_{\Omega}(h_k)\le
\cri_{\Omega}(h_{k+1})$ for all
$k\ge 1.$
Here
equality only holds if there
are infinitely many distinct
geodesics in $\Omega_{pq}(M)$
of equal length $l(c)=\cri_{\Omega}(h_k)=
\cri_{\Omega}(h_{k+1}).$
Hence we can assume that
$\cri_{\Omega}(h_k)<
\cri_{\Omega}(h_{k+1})$
and obtain a sequence
$c_k \in \Omega_{pq}M$ of geodesics
with $l(c_k)=\cri_{\Omega}(h_k).$
Since $\deg(h_k)=2lk\le 2(n-1)k$
we obtain the claim from
Theorem~\ref{thm:two}.
\end{proof}
\begin{remark}
(a) If $M$ is simply-connected and compact then
it was shown by Gromov~\cite[Thm. 7.3]{Gr} that there exist
positive constants $C_1=C_1(g),C_2=C_2(g)$ depending on
the metric $g$ such that for all
homology classes $h \in H_*(\Lambda M;R)$
the following inequalities hold:
\begin{equation*}
C_1 \, \cri_{\Lambda}(h)
<
\deg (h)
<
C_2 \, \cri_{\Lambda}(h)\,.
\end{equation*}
\smallskip
(b)
If $M=S^n$ is a sphere of dimension $n\ge 3$
it is shown in \cite[Thm. 1.1]{HR} that
there are positive numbers $\overline{\alpha}
=\overline{\alpha}(g),
\beta=\beta(g),$ depending on $g$ such that
\begin{equation*}
\overline{\alpha}\, \cri_{\Lambda}(h)-\beta
<
\deg (h)
<
\overline{\alpha} \,\cri_{\Lambda}(h)+\beta
\end{equation*}
holds for all $h \in H_*(\Lambda S^n).$
The number $\overline{\alpha}$ is called
\emph{global mean frequency.}
In case of positive Ricci curvature
$\ric\ge (n-1)\delta$ we conclude
from Theorem~\ref{thm:two}(a):
$\sqrt{\delta}/\pi \le
\overline{\alpha}.$
If $K\le 1$ then $\overline{\alpha}\le
(n-1)/\pi.$
\end{remark}
\section{Introduction}
There has recently been renewed interest in the possibility of `texture' zeros
in the fermion mass matrices~\cite{Ludl:2014axa,Ferreira:2014vna,Ludl:2015lta}.
Texture zeros are well grounded in renormalizable field theories,
since they can always be implemented
through models with suitable Abelian symmetries and
(possibly many)
scalar fields with vacuum expectation values~\cite{Grimus:2004hf}.
The revival of interest was partially motivated
by the realization of the fact that a popular alternative approach,
where lepton mixing
is \emph{completely determined}\/ by a non-Abelian symmetry,
seems to have been fully explored~\cite{Fonseca:2014koa}.
With three Majorana neutrinos,
the mass Lagrangian is
\begin{equation}
\mathcal{L}_\mathrm{mass} =
- \bar \ell_L M_\ell \ell_R
- \bar \ell_R M_\ell^\dagger \ell_L
+ \frac{1}{2} \left( \nu^T C^{-1} M \nu
- \bar \nu M^\ast C \bar \nu^T \right),
\end{equation}
where $C$ is the charge-conjugation matrix in Dirac space.
The column-vector $\nu$ contains the three left-handed light-neutrino fields.
The neutrino Majorana mass matrix $M$ acts in flavour space and is symmetric.
In ref.~\cite{Frampton:2002yf},
$M_\ell$ was assumed to be diagonal
while $M$ had two zero matrix elements.\footnote{The matrix $M$
is $3 \times 3$ symmetric and therefore it has,
in general,
six independent matrix elements.
We say that ``$M$ has $n$ zero matrix elements''
if $n$ out of those six \emph{independent}\/ matrix elements vanish.
The actual total number of zero entries in $M$ will be larger than $n$
if some of the vanishing entries are off-diagonal.}
This was later generalized to the situation wherein $M_\ell$ is diagonal
and $M^{-1}$ has two zero matrix elements~\cite{Lavoura:2004tu};
mixed situations in which $M$ and $M^{-1}$ have one zero matrix element each,
while $M_\ell$ remains diagonal,
were studied in ref.~\cite{Dev:2010if}.
Recently,
all the cases in which both $M_\ell$ and $M$ sport texture zeros
were mapped~\cite{Ludl:2014axa,Ferreira:2014vna}.
It is known that the three light neutrinos are \emph{exceedingly}\/ light;
one of them may actually be massless.
Possibly the most popular theory for explaining that extreme lightness
is the (type~I) see-saw mechanism.
In that theory,
$M$ is not really a fundamental mass matrix,
rather
\begin{equation}
M = - M_D M_R^{-1} M_D^T
\end{equation}
is just the effective (approximate) mass matrix for the light neutrinos
arising out of the Dirac mass matrix $M_D$
connecting the standard neutrinos to some gauge-singlet
(``right-handed'') neutrino fields
and of the Majorana mass matrix $M_R$ of the latter.\footnote{We shall assume
in this paper that the number of right-handed neutrinos is three.}
In this context,
assuming the presence of texture zeros in $M$ seems unwarranted;
one should rather consider texture zeros in $M_D$ and $M_R$.
Indeed,
that was the rationale for ref.~\cite{Lavoura:2004tu},
where $M_D$ was assumed to be diagonal
(which means that it has six texture zeros)
and two texture zeros were enforced in $M_R$.
In this paper I want to map \emph{all}\/ the cases
in which there are texture zeros in $M_D$ and $M_R$
while $M_\ell$ remains diagonal
(which in itself means that $M_\ell$ has six texture zeros).
I will look for \emph{predictive}\/ cases,
\textit{i.e.}\ for cases which lead to non-trivial fits
for the lepton mixing (PMNS) matrix and/or for the neutrino mass ratios.
The case in which $M_D$ has six texture zeros
and $M_R$ has two texture zeros was considered in ref.~\cite{Lavoura:2004tu};
here I consider additional cases in which
$M_D$ has either five or four texture zeros and,
correspondingly,
$M_R$ has either three or four texture zeros,
respectively.
In my search I have recovered the cases studied
in refs.~\cite{Dev:2010if} and~\cite{Kageyama:2002zw};
additionally,
I have uncovered some extra cases which had not been,
to my knowledge,
studied before.
Since in my search $M_\ell$ is kept diagonal,
the light neutrinos in the column vector
$\nu = \left( \nu_e,\ \nu_\mu,\ \nu_\tau \right)^T$
may be labelled through their flavour.
Then,
$M= \left[ M_{\alpha \beta} \right]$,
where $\alpha$ and $\beta$ may be either $e$,
$\mu$,
or $\tau$.
In some of the new cases that I present in this paper
the constraints on $M$ may most conveniently be written in terms of the matrix
$A = \left[ A_{\alpha \beta} \right]$ defined by
\begin{equation}
A_{\alpha \beta} \equiv M_{\alpha \beta} \left( M^{-1} \right)_{\beta \alpha}.
\end{equation}
(I do not use the summation convention in this paper.)
The matrix $A$ was first used in the context of lepton mixing
in refs.~\cite{Ferreira:2013zqa,Ferreira:2013oga}.
It has the properties that the sum of its matrix elements
over any of its rows or columns is equal to one
and that it is invariant either under a rephasing of $M$,
\begin{equation}
\label{obpdsk}
M_{\alpha \beta} \to e^{i \left( \xi_\alpha + \chi_\beta \right)} M_{\alpha \beta},
\end{equation}
or under the multiplication of $M$
by any number.\footnote{Since $M$ is symmetric,
in its specific case one must set $\chi_\beta = \xi_\beta$ in eq.~(\ref{obpdsk}).}
In our case,
since $M$ is symmetric,
$A$ is symmetric too.
I have found that some texture-zero models predict one diagonal matrix
element of $A$ to be one and another diagonal matrix element of $A$ to be zero;
thus,
but for a permutation of its rows and columns,
\begin{equation}
\label{a}
A = \left( \begin{array}{ccccc}
0 & & t & & 1-t \\ t & & 1 & & -t \\ 1-t & & -t & & 2t
\end{array} \right),
\end{equation}
where $t$ is some complex number.
In this case,
$A$ has only two degress of freedom
(the real and the imaginary parts of $t$),
instead of the six degrees of freedom of the general case.
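These properties are easy to check symbolically; the following minimal sketch (in Python with SymPy; the symbols stand for generic entries of $M$, not fitted values) verifies that every row of $A$ sums to one:
\begin{verbatim}
import sympy as sp

m11, m12, m13, m22, m23, m33 = sp.symbols('m11 m12 m13 m22 m23 m33')
M = sp.Matrix([[m11, m12, m13],
               [m12, m22, m23],
               [m13, m23, m33]])   # generic symmetric Majorana matrix
Minv = M.inv()

# A_{ab} = M_{ab} (M^{-1})_{ba}
A = sp.Matrix(3, 3, lambda a, b: M[a, b] * Minv[b, a])
print([sp.simplify(sum(A.row(a))) for a in range(3)])  # -> [1, 1, 1]
\end{verbatim}
The script confirms the identity
$\sum_\beta A_{\alpha \beta} = \left( M M^{-1} \right)_{\alpha \alpha} = 1$.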
\section{Possibilities for $M_D$}
I shall consider
the possibility of a permutation of the rows and columns of $M$
only at the end.\footnote{Both $M$ and $M_R$ are Majorana mass matrices
and therefore they are symmetric.
Hence,
a permutation of their rows
necessarily entails a permutation of their columns too.}
Such a permutation corresponds to a transformation
\begin{equation}
\label{MZM}
M \to Z M Z^T,
\end{equation}
where
\begin{equation}
Z \in S_3 \equiv \left\{ P^2,\ P,\ Q,\ P Q P,\ P Q,\ Q P \right\},
\end{equation}
where $P^2$ is the unit matrix and
(I use the letters $P$ and $Q$ in order to avoid a clash
with the matrix $A$ defined in the introduction)
\begin{equation}
P \equiv \left( \begin{array}{ccc}
0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1
\end{array} \right),
\quad
Q \equiv \left( \begin{array}{ccc}
0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0
\end{array} \right).
\end{equation}
The transformation~(\ref{MZM}) is equivalent to
\begin{equation}
\label{MDZMD}
M_D \to Z M_D,
\end{equation}
which is a permutation of the rows of $M_D$.
Since I am going to consider
the possibility of a transformation~(\ref{MZM}) at the end,
I do not need to consider the possibility of a transformation~(\ref{MDZMD})
now at the beginning.
So,
\emph{I shall take two matrices $M_D$ which only differ
through a permutation of their rows to be equivalent}.
Under this proviso,
a matrix $M_D$ with four texture zeros must be of one of the following forms:
\begin{equation}
\label{6}
\left( \begin{array}{ccc}
0 & 0 & 0 \\ 0 & \times & \times \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & 0 & 0 \\ \times & 0 & \times \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & 0 & 0 \\ \times & \times & 0 \\ \times & \times & \times
\end{array} \right),
\end{equation}
\begin{equation}
\label{7}
\left( \begin{array}{ccc}
0 & 0 & \times \\ 0 & 0 & \times \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ 0 & \times & 0 \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ \times & 0 & 0 \\ \times & \times & \times
\end{array} \right),
\end{equation}
\begin{equation}
\label{8}
\left( \begin{array}{ccc}
0 & 0 & \times \\ 0 & \times & 0 \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & 0 & \times \\ \times & 0 & 0 \\ \times & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ \times & 0 & 0 \\ \times & \times & \times
\end{array} \right),
\end{equation}
\begin{equation}
\label{9}
\begin{array}{c}
{\displaystyle
\left( \begin{array}{ccc}
0 & 0 & \times \\ 0 & \times & \times \\ 0 & \times & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & 0 & \times \\ \times & 0 & \times \\ \times & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ 0 & \times & \times \\ 0 & \times & \times
\end{array} \right),
}
\\*[8mm]
{\displaystyle
\left( \begin{array}{ccc}
0 & \times & 0 \\ \times & \times & 0 \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ \times & 0 & \times \\ \times & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ \times & \times & 0 \\ \times & \times & 0
\end{array} \right),
}
\end{array}
\end{equation}
\begin{equation}
\label{10}
\left( \begin{array}{ccc}
0 & 0 & \times \\ \times & \times & 0 \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ \times & 0 & \times \\ \times & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ 0 & \times & \times \\ 0 & \times & \times
\end{array} \right),
\end{equation}
\begin{equation}
\label{11}
\left( \begin{array}{ccc}
0 & 0 & \times \\ 0 & \times & \times \\ \times & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ 0 & \times & \times \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ \times & 0 & \times \\ \times & \times & 0
\end{array} \right),
\end{equation}
\begin{equation}
\label{11n}
\begin{array}{c}
{\displaystyle
\left( \begin{array}{ccc}
0 & 0 & \times \\ 0 & \times & \times \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & 0 & \times \\ \times & 0 & \times \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & 0 \\ 0 & \times & \times \\ \times & 0 & \times
\end{array} \right),
}
\\*[8mm]
{\displaystyle
\left( \begin{array}{ccc}
0 & \times & 0 \\ \times & 0 & \times \\ \times & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ 0 & \times & \times \\ \times & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & 0 \\ 0 & \times & \times \\ \times & \times & 0
\end{array} \right).
}
\end{array}
\end{equation}
In matrices~(\ref{6}--\ref{11n})
the symbol $\times$ represents a non-zero entry.
If we consider the possibility of transformations~(\ref{MDZMD}),
then there are three matrices of each of the types~(\ref{7}),
(\ref{9}),
and~(\ref{10}),
and six matrices of each of the types~(\ref{6}),
(\ref{8}),
(\ref{11}),
and~(\ref{11n}).
So,
altogether there are $3 \times 12 + 6 \times 15 = 126$
possible matrices $M_D$ with four texture zeros;
this is as it should be,
since $M_D$ has nine independent matrix elements and
$\left( 9 \times 8 \times 7 \times 6 \right) \left/ 4! \right. = 126$.
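Both counts are quickly confirmed combinatorially; a sketch using Python's \texttt{itertools} (counting the four-zero patterns and their orbits under row permutations):
\begin{verbatim}
from itertools import combinations, permutations

patterns, orbits = set(), set()
for zeros in combinations(range(9), 4):        # place four zeros
    grid = tuple(tuple(int(3*r + c in zeros) for c in range(3))
                 for r in range(3))            # 1 marks a zero entry
    patterns.add(grid)
    orbit = frozenset(tuple(grid[p[r]] for r in range(3))
                      for p in permutations(range(3)))
    orbits.add(orbit)
print(len(patterns), len(orbits))              # -> 126 27
\end{verbatim}
The 27 orbits correspond precisely to the 27 matrices displayed
in~(\ref{6}--\ref{11n}).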
Matrices $M_D$ with a row of zeros are uninteresting
since they yield one massless,
decoupled neutrino.
Therefore,
the matrices~(\ref{6}) may be neglected.
The matrices~(\ref{7}) may also be neglected because they lead to `scaling',
\textit{i.e.}\ the matrix $M$ has a right-eigenvector with one zero entry
corresponding to the eigenvalue zero~\cite{Mohapatra:2006xy};
this implies that the PMNS matrix has one zero matrix element,
which contradicts experiment.
In order to write down all the possible forms of matrices $M_D$
with five textures zeros,
one simply has to interchange the zeros with the $\times$ symbols
in the matrices~(\ref{6}--\ref{11n}).
One obtains
\begin{equation}
\label{13}
\left( \begin{array}{ccc}
\times & \times & \times \\ \times & 0 & 0 \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & \times & \times \\ 0 & \times & 0 \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & \times & \times \\ 0 & 0 & \times \\ 0 & 0 & 0
\end{array} \right),
\end{equation}
\begin{equation}
\label{14}
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & \times & 0 \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ \times & 0 & \times \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ 0 & \times & \times \\ 0 & 0 & 0
\end{array} \right),
\end{equation}
\begin{equation}
\label{15}
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & \times \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & \times & 0 \\ 0 & \times & \times \\ 0 & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ 0 & \times & \times \\ 0 & 0 & 0
\end{array} \right),
\end{equation}
\begin{equation}
\label{16}
\begin{array}{c}
{\displaystyle
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & 0 \\ \times & 0 & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & \times & 0 \\ 0 & \times & 0 \\ 0 & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ \times & 0 & 0 \\ \times & 0 & 0
\end{array} \right),
}
\\*[8mm]
{\displaystyle
\left( \begin{array}{ccc}
\times & 0 & \times \\ 0 & 0 & \times \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ 0 & \times & 0 \\ 0 & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ 0 & 0 & \times \\ 0 & 0 & \times
\end{array} \right),
}
\end{array}
\end{equation}
\begin{equation}
\label{17}
\left( \begin{array}{ccc}
\times & \times & 0 \\ 0 & 0 & \times \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ 0 & \times & 0 \\ 0 & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ \times & 0 & 0 \\ \times & 0 & 0
\end{array} \right),
\end{equation}
\begin{equation}
\label{18}
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & 0 \\ 0 & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ \times & 0 & 0 \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ 0 & \times & 0 \\ 0 & 0 & \times
\end{array} \right),
\end{equation}
\begin{equation}
\label{18n}
\begin{array}{c}
{\displaystyle
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & 0 \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & \times & 0 \\ 0 & \times & 0 \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
\times & 0 & \times \\ \times & 0 & 0 \\ 0 & \times & 0
\end{array} \right),
}
\\*[8mm]
{\displaystyle
\left( \begin{array}{ccc}
\times & 0 & \times \\ 0 & \times & 0 \\ 0 & 0 & \times
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ \times & 0 & 0 \\ 0 & \times & 0
\end{array} \right),
\quad
\left( \begin{array}{ccc}
0 & \times & \times \\ \times & 0 & 0 \\ 0 & 0 & \times
\end{array} \right).
}
\end{array}
\end{equation}
Matrices $M_D$ with a row of zeros are uninteresting.
Therefore,
forms~(\ref{13}),
(\ref{14}),
and~(\ref{15}) may be neglected.
Forms~(\ref{16}) and~(\ref{17}) may also be neglected
because they lead to scaling.
Therefore,
only the nine forms~(\ref{18}) and~(\ref{18n}) should be considered.
\section{Possibilities for $M_R$}
The matrix $M$ is invariant under
\begin{equation}
M_D \to M_D Z,
\quad
M_R \to Z^T\! M_R Z
\end{equation}
because $Z^T = Z^{-1},\ \forall Z \in S_3$.
Therefore,
a permutation of the rows and columns of $M_R$
is equivalent to a permutation of the columns of $M_D$.
Since in the preceding section
I have not chosen any particular order for the columns of $M_D$,
I am free in this section
to restrict the order of the rows and columns of $M_R$.
Moreover,
if one particular form of $M_R$ is invariant under some $S_2$ subgroup of $S_3$,
then one may disregard the action of that $S_2$ on the columns of $M_D$.
If one wants to obtain a model as predictive
as those in the literature\footnote{The models in ref.~\cite{Lavoura:2004tu}
have six zeros in $M_D$ and two zeros in $M_R$.
Models as predictive should have either
five zeros in $M_D$ and three zeros in $M_R$ or
four zeros in $M_D$ and four zeros in $M_R$.}
and if $M_D$ has four texture zeros,
then $M_R$ should also have four texture zeros.
Since $\det{M_R}$ must be nonzero,\footnote{If $M_R$ is a singular matrix
then one right-handed neutrino is massless and the see-saw mechanism
is not fully operative; see ref.~\cite{Branco:1988ex}.}
there is only one possible form for $M_R$,
\begin{equation}
\label{MR}
\left( \begin{array}{ccc}
\times & 0 & 0 \\ 0 & 0 & \times \\ 0 & \times & 0
\end{array} \right),
\end{equation}
but for permutations of the rows and columns---which are equivalent to
permutations of the columns of $M_D$.
Equation~(\ref{MR}) leads to
\begin{equation}
\label{igopr}
M_R^{-1} = \left( \begin{array}{ccc}
x & 0 & 0 \\ 0 & 0 & y \\ 0 & y & 0
\end{array} \right).
\end{equation}
The form~(\ref{MR}) of $M_R$ is invariant under the interchange
of the second and third rows and columns.
Therefore,
when $M_R$ is of that form,
one may disregard
the possibility of a permutation of the second and third columns of $M_D$.
Thus,
out of the 21 forms~(\ref{8}--\ref{11n}) of $M_D$
only the following 13 must be considered:
\begin{subequations}
\label{24}
\begin{eqnarray}
\label{24a}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ 0 & b & 0 \\ c & d & e
\end{array} \right),
\\
\label{24b}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ b & 0 & 0 \\ c & d & e
\end{array} \right);
\end{eqnarray}
\end{subequations}
\begin{subequations}
\label{25}
\begin{eqnarray}
\label{25a}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ 0 & b & c \\ 0 & d & e
\end{array} \right),
\\
\label{25b}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ b & 0 & c \\ d & 0 & e
\end{array} \right),
\\
\label{25c}
M_D &=& \left( \begin{array}{ccc}
a & 0 & 0 \\ b & 0 & c \\ d & 0 & e
\end{array} \right);
\end{eqnarray}
\end{subequations}
\begin{subequations}
\label{26}
\begin{eqnarray}
\label{26a}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ b & c & 0 \\ d & e & 0
\end{array} \right),
\\
\label{26b}
M_D &=& \left( \begin{array}{ccc}
a & 0 & 0 \\ 0 & b & c \\ 0 & d & e
\end{array} \right);
\end{eqnarray}
\end{subequations}
\begin{subequations}
\label{27}
\begin{eqnarray}
\label{27a}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ 0 & b & c \\ d & 0 & e
\end{array} \right),
\\
\label{27b}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ 0 & b & c \\ d & e & 0
\end{array} \right),
\\
\label{27c}
M_D &=& \left( \begin{array}{ccc}
0 & 0 & a \\ b & 0 & c \\ d & e & 0
\end{array} \right),
\\
\label{27d}
M_D &=& \left( \begin{array}{ccc}
0 & a & 0 \\ b & 0 & c \\ d & e & 0
\end{array} \right),
\\
\label{27e}
M_D &=& \left( \begin{array}{ccc}
a & 0 & 0 \\ 0 & b & c \\ d & 0 & e
\end{array} \right),
\\
\label{27f}
M_D &=& \left( \begin{array}{ccc}
a & 0 & 0 \\ b & 0 & c \\ d & e & 0
\end{array} \right).
\end{eqnarray}
\end{subequations}
If $M_D$ has five texture zeros then $M_R$ should have three texture zeros.
Since $\det{M_R}$ must be nonzero,
the following are the only possible forms for $M_R$:
\begin{subequations}
\begin{eqnarray}
\label{u2}
\left( \begin{array}{ccc}
\times & 0 & 0 \\ 0 & \times & 0 \\ 0 & 0 & \times
\end{array} \right),
& &
\left( \begin{array}{ccc}
0 & \times & \times \\ \times & 0 & \times \\ \times & \times & 0
\end{array} \right),
\\
\label{u3}
\left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & 0 \\ 0 & 0 & \times
\end{array} \right),
& & \left( \begin{array}{ccc}
\times & \times & 0 \\ \times & 0 & \times \\ 0 & \times & 0
\end{array} \right).
\end{eqnarray}
\end{subequations}
They correspond to
\begin{subequations}
\begin{eqnarray}
\label{f1}
M_R^{-1} &=& \left( \begin{array}{ccc}
x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z
\end{array} \right),
\\
\label{f4}
M_R^{-1} &=& \left( \begin{array}{ccc}
- x & \sqrt{x y} & \sqrt{x z} \\
\sqrt{x y} & - y & \sqrt{y z} \\
\sqrt{x z} & \sqrt{y z} & - z
\end{array} \right),
\\
\label{f2}
M_R^{-1} &=& \left( \begin{array}{ccc}
0 & x & 0 \\ x & y & 0 \\ 0 & 0 & z
\end{array} \right),
\\
\label{f3}
M_R^{-1} &=& \left( \begin{array}{ccc}
x & 0 & \sqrt{x z} \\ 0 & 0 & y \\ \sqrt{x z} & y & z
\end{array} \right),
\end{eqnarray}
\end{subequations}
respectively.
The forms of the matrices~(\ref{u2}) are invariant under $S_3$
while the forms of the matrices~(\ref{u3}) are \emph{not}\/ invariant
under any non-trivial permutation of their rows and columns.
Therefore,
with either eq.~(\ref{f1}) or eq.~(\ref{f4}) one may take,
instead of the nine possibilities~(\ref{18}--\ref{18n}) for $M_D$,
just the following two possibilities:
\begin{subequations}
\label{ibvuty}
\begin{eqnarray}
\label{ibvuty3}
M_D &=& \left( \begin{array}{ccc}
a & b & 0 \\ c & 0 & 0 \\ 0 & d & 0
\end{array} \right),
\\
\label{ibvuty4}
M_D &=& \left( \begin{array}{ccc}
a & b & 0 \\ c & 0 & 0 \\ 0 & 0 & d
\end{array} \right).
\end{eqnarray}
\end{subequations}
With eqs.~(\ref{f2}) and~(\ref{f3}),
on the other hand,
one must use the full set of nine possibilities for $M_D$:
\begin{subequations}
\label{iby}
\begin{eqnarray}
\label{iby1}
M_D &=& \left( \begin{array}{ccc}
a & b & 0 \\ c & 0 & 0 \\ 0 & d & 0
\end{array} \right),
\\
\label{iby2}
M_D &=& \left( \begin{array}{ccc}
a & 0 & b \\ c & 0 & 0 \\ 0 & 0 & d
\end{array} \right),
\\
\label{iby3}
M_D &=& \left( \begin{array}{ccc}
0 & a & b \\ 0 & c & 0 \\ 0 & 0 & d
\end{array} \right),
\end{eqnarray}
\end{subequations}
\begin{subequations}
\label{ibi}
\begin{eqnarray}
\label{ibi1}
M_D &=& \left( \begin{array}{ccc}
a & b & 0 \\ c & 0 & 0 \\ 0 & 0 & d
\end{array} \right),
\\
\label{ibi2}
M_D &=& \left( \begin{array}{ccc}
a & b & 0 \\ 0 & c & 0 \\ 0 & 0 & d
\end{array} \right),
\\
\label{ibi3}
M_D &=& \left( \begin{array}{ccc}
a & 0 & b \\ c & 0 & 0 \\ 0 & d & 0
\end{array} \right),
\\
\label{ibi4}
M_D &=& \left( \begin{array}{ccc}
a & 0 & b \\ 0 & c & 0 \\ 0 & 0 & d
\end{array} \right),
\\
\label{ibi5}
M_D &=& \left( \begin{array}{ccc}
0 & a & b \\ c & 0 & 0 \\ 0 & d & 0
\end{array} \right),
\\
\label{ibi6}
M_D &=& \left( \begin{array}{ccc}
0 & a & b \\ c & 0 & 0 \\ 0 & 0 & d
\end{array} \right).
\end{eqnarray}
\end{subequations}
\section{Constraints on $M$}
\subsection{Possibilities with eq.~(\ref{f1})}
With this form of $M_R^{-1}$ one should use the two
options~(\ref{ibvuty}) for $M_D$.
With eq.~(\ref{ibvuty4}) one neutrino decouples;
this is incompatible with experiment.
With eq.~(\ref{ibvuty3}) one obtains\footnote{To be sure,
from eqs.~(\ref{f1}) and~(\ref{ibvuty3})
one obtains $\det{M} = 0$ and $M_{23} = 0$.
But now I generalize and consider other options for $M_D$
that differ from eq.~(\ref{ibvuty3}) through a permutation of the rows;
then I obtain,
in general,
the conditions~(\ref{uit}).
In the same fashion,
throughout this section I shall consider,
for each particular case,
the results that follow after considering all possible permutations
of the rows of $M_D$.}
\begin{equation}
\label{uit}
\det{M} = 0, \quad M_{\alpha \beta} = 0\ (\alpha \neq \beta),
\end{equation}
which are constraints on $M$ which have not yet
been considered in the literature.
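These constraints follow from a short symbolic computation; a minimal sketch (in Python with SymPy, with generic symbols for the non-zero entries):
\begin{verbatim}
import sympy as sp

a, b, c, d, x, y, z = sp.symbols('a b c d x y z')
MD  = sp.Matrix([[a, b, 0], [c, 0, 0], [0, d, 0]])  # five-zero M_D texture
MRi = sp.diag(x, y, z)                              # diagonal M_R^{-1}
M   = -MD * MRi * MD.T                              # see-saw formula

print(sp.simplify(M.det()))  # -> 0
print(M[1, 2])               # -> 0, i.e. M_{23} = 0
\end{verbatim}
Repeating the computation over the six row permutations of $M_D$
yields the general conditions~(\ref{uit}).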
\subsection{Possibilities with eq.~(\ref{f4})}
With this form of $M_R^{-1}$ one should once again use
the two options~(\ref{ibvuty}) for $M_D$.
With eq.~(\ref{ibvuty4}) one obtains
\begin{equation}
\label{obpt}
\left( M^{-1} \right)_{\alpha \alpha} = \left( M^{-1} \right)_{\beta \beta} = 0
\quad (\alpha \neq \beta),
\end{equation}
which has already been considered in ref.~\cite{Lavoura:2004tu}.
With eq.~(\ref{ibvuty3}) one obtains
\begin{equation}
\det{M} = 0, \quad
M_{\alpha \alpha} M_{\beta \beta} - \left( M_{\alpha \beta} \right)^2 = 0
\quad (\alpha \neq \beta),
\end{equation}
which is new and potentially interesting.
\subsection{Possibilities with eq.~(\ref{f2})}
With this form of $M_R^{-1}$ one should use
either one of the nine options~(\ref{iby}--\ref{ibi}) for $M_D$.
With eq.~(\ref{iby2}) one of the neutrinos decouples.
With eq.~(\ref{iby3}) one recovers the conditions~(\ref{uit}).
With eq.~(\ref{iby1}) one obtains
\begin{equation}
\label{uit2}
\det{M} = 0, \quad M_{\alpha \alpha} = 0,
\end{equation}
which must be studied.
Both eq.~(\ref{ibi1}) and eq.~(\ref{ibi2})
lead to the decoupling of one neutrino.
With either eq.~(\ref{ibi3}) or eq.~(\ref{ibi6}) one gets
\begin{equation}
\label{svort}
M_{\alpha \alpha} = M_{\alpha \beta} = 0
\quad (\alpha \neq \beta),
\end{equation}
which has been studied in ref.~\cite{Frampton:2002yf}.
With eq.~(\ref{ibi4}) one has
\begin{equation}
\label{nutr1}
\left( M^{-1} \right)_{\alpha \alpha} = 0, \quad
M_{\alpha \beta} = 0 \quad (\alpha \neq \beta),
\end{equation}
while with eq.~(\ref{ibi5}) one has
\begin{equation}
\label{nutr2}
M_{\alpha \alpha} = 0, \quad
\left( M^{-1} \right)_{\alpha \beta} = 0 \quad (\alpha \neq \beta).
\end{equation}
Both constraints~(\ref{nutr1}) and~(\ref{nutr2})
have already been studied in ref.~\cite{Dev:2010if}.
\subsection{Possibilities with eq.~(\ref{f3})}
With this form of $M_R^{-1}$ one should use
either one of the nine options~(\ref{iby}--\ref{ibi}) for $M_D$.
Equation~(\ref{iby1}) leads to one neutrino decoupling
and eq.~(\ref{iby2}) leads to scaling.
Equation~(\ref{iby3}) reproduces the conditions~(\ref{uit2}).
Equation~(\ref{ibi1}) leads once again to the conditions~(\ref{obpt}).
Both eq.~(\ref{ibi2}) and eq.~(\ref{ibi5})
reproduce the conditions~(\ref{svort}).
Equation~(\ref{ibi4}) leads to
\begin{equation}
\label{biuft}
M_{\alpha \alpha} = 0, \quad \left( M^{-1} \right)_{\alpha \alpha} = 0.
\end{equation}
These conditions have also been studied in ref.~\cite{Dev:2010if}.
Equation~(\ref{ibi3}) leads to
\begin{equation}
M_{\alpha \alpha} = M_{\alpha \beta} = 0, \quad
\left( M^{-1} \right)_{\alpha \alpha} = 0 \quad (\alpha \neq \beta).
\end{equation}
This is one constraint too many,
but I shall consider it later.
Equation~(\ref{ibi6}) gives
\begin{equation}
\label{blpde}
\left( M^{-1} \right)_{\alpha \alpha} = 0,
\quad
A_{\beta \beta} = 1,
\quad
M_{\gamma \gamma} \neq 0
\quad
(\alpha \neq \beta \neq \gamma \neq \alpha),
\end{equation}
which is new.
I have explicitly written down
the condition $M_{\gamma \gamma} \neq 0$ in conditions~(\ref{blpde})
in order to distinguish this model from cases $A_{1,2}$ and $B_{3,4}$
of ref.~\cite{Frampton:2002yf}.
For instance,
$M_{11} = M_{12} = 0$ in case $A_1$;
this leads to $\left( M^{-1} \right)_{33} = 0$ and $A_{22} = 1$,
corresponding to $t = 0$ in eq.~(\ref{a}).
\subsection{Possibilities with eq.~(\ref{igopr})}
With eq.~(\ref{igopr}) one must use for $M_D$ the 13 options
in eqs.~(\ref{24}--\ref{27}).
Equation~(\ref{24a}) yields
\begin{equation}
M_{\alpha \alpha} = M_{\beta \beta} = 0
\quad (\alpha \neq \beta).
\end{equation}
These conditions have been studied in ref.~\cite{Frampton:2002yf}
(see also ref.~\cite{Grimus:2004az}).
Equation~(\ref{24b}) reproduces the conditions~(\ref{svort}).
Equation~(\ref{25a}) yields once again the conditions~(\ref{uit2}).
Equations~(\ref{25b}) and~(\ref{25c}) lead to two massless neutrinos.
Equation~(\ref{26a}) reproduces the conditions~(\ref{biuft}).
Equation~(\ref{26b}) makes one neutrino decouple.
Equations~(\ref{27a}),
(\ref{27c}),
and~(\ref{27d}) reproduce the conditions~(\ref{svort}).
Equation~(\ref{27f}) reproduces the conditions~(\ref{obpt}).
Equation~(\ref{27e}) reproduces the conditions~(\ref{nutr1}).
Finally,
eq.~(\ref{27b}) yields
\begin{equation}
\label{kpft}
M_{\alpha \alpha} = 0, \quad A_{\beta \beta} = 1,
\quad
\left( M^{-1} \right)_{\gamma \gamma} \neq 0
\quad
(\alpha \neq \beta \neq \gamma \neq \alpha),
\end{equation}
which is new.
\subsection{Summary}
Most matrices $M$ that I have found embody conditions
that have already been treated in the literature.
A few matrices $M$,
though,
present features which,
to my knowledge,
have not yet been studied.
These are
\begin{eqnarray}
\label{o1}
& & \det{M} = 0 \quad \mbox{and} \quad M_{\alpha \alpha} = 0;
\\
\label{o2}
& & \det{M} = 0 \quad \mbox{and} \quad M_{\alpha \beta} = 0;
\\
\label{o3}
& & \det{M} = 0 \quad \mbox{and} \quad
M_{\alpha \alpha} M_{\beta \beta} - \left( M_{\alpha \beta} \right)^2 = 0;
\\
\label{extra}
& & M_{\alpha \alpha} = M_{\alpha \beta} = 0 \quad \mbox{and} \quad
\left( M^{-1} \right)_{\alpha \alpha} = 0,
\\
\label{o4}
& & M_{\alpha \alpha} = 0, \quad A_{\beta \beta} = 1, \quad \mbox{and} \quad
\left( M^{-1} \right)_{\gamma \gamma} \neq 0;
\\
\label{o5}
& & \left( M^{-1} \right)_{\alpha \alpha} = 0, \quad A_{\beta \beta} = 1,
\quad \mbox{and} \quad M_{\gamma \gamma} \neq 0.
\end{eqnarray}
In eqs.~(\ref{o2}--\ref{o5}) it should be understood that
$\alpha \neq \beta \neq \gamma \neq \alpha$.
According to ref.~\cite{Dev:2010if},
the possibility~(\ref{extra}) should be excluded because
$M_{\alpha \alpha} = 0$ together with $\left( M^{-1} \right)_{\alpha \alpha} = 0$
is experimentally excluded for any value of $\alpha = e, \mu, \tau$.
So in the next section I shall only consider conditions~(\ref{o1}--\ref{o3}),
(\ref{o4}),
and~(\ref{o5}).
\section{Comparison with the data}
\subsection{Introduction}
\paragraph{PMNS matrix:}
Since $M_\ell$ is diagonal,
the unitary matrix that diagonalizes $M$ is the lepton mixing (PMNS) matrix $U$:
\begin{equation}
M = U^\ast\, \mbox{diag} \left( \mu_1,\ \mu_2,\ \mu_3 \right) U^\dagger,
\end{equation}
where the $\mu_j$ are complex;
the neutrino masses are $ m_j = \left| \mu_j \right|$
($j = 1, 2, 3$).
The matrix $U$ is written
\begin{equation}
\label{U}
U = \left( \begin{array}{ccc}
c_{12} c_{13} & s_{12} c_{13} & \epsilon^\ast \\
- s_{12} c_{23} - \epsilon c_{12} s_{23} &
c_{12} c_{23} - \epsilon s_{12} s_{23} &
s_{23} c_{13} \\
s_{12} s_{23} - \epsilon c_{12} c_{23} &
- c_{12} s_{23} - \epsilon s_{12} c_{23} &
c_{23} c_{13}
\end{array} \right),
\end{equation}
where $\epsilon \equiv s_{13} \exp{\left( i \delta \right)}$.
In eq.~(\ref{U}),
$s_{jj^\prime} \equiv \sin{\theta_{jj^\prime}}$
and $c_{jj^\prime} \equiv \cos{\theta_{jj^\prime}}$.
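As a quick sanity check of this parametrization,
one may construct $U$ numerically.
The Python sketch below is mine
(the angle values and the value of $\delta$ are illustrative assumptions,
not fit results);
it verifies that eq.~(\ref{U}) is exactly unitary:
\begin{verbatim}
import numpy as np

# illustrative mixing parameters (assumed values, not fit results)
s12, s13, s23 = np.sqrt(0.32), np.sqrt(0.022), np.sqrt(0.55)
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
delta = 1.0                          # Dirac phase in radians (assumed)
eps = s13 * np.exp(1j * delta)       # epsilon = s13 exp(i delta)

U = np.array([
    [c12*c13,                 s12*c13,                 np.conj(eps)],
    [-s12*c23 - eps*c12*s23,  c12*c23 - eps*s12*s23,   s23*c13],
    [s12*s23 - eps*c12*c23,  -c12*s23 - eps*s12*c23,   c23*c13]])

# unitarity check: U U^dagger must equal the identity matrix
print(np.allclose(U @ U.conj().T, np.eye(3)))    # True
\end{verbatim}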
\paragraph{The data:}
I define
\begin{equation}
r_\mathrm{solar} \equiv \sqrt{\frac{m_2^2 - m_1^2}{\left| m_3^2 - m_1^2 \right|}}.
\end{equation}
I use the $3 \sigma$ ranges~\cite{Forero:2014bxa}\footnote{There are other
phenomenological fits to the data---see
refs.~\cite{Fogli:2012ua,Gonzalez-Garcia:2014bfa}.}
\begin{equation}
\label{normalranges}
\begin{array}{rcccl}
0.278 & \le & s_{12}^2 & \le & 0.375, \\*[1mm]
0.0177 & \le & s_{13}^2 & \le & 0.0294, \\*[1mm]
0.392 & \le & s_{23}^2 & \le & 0.643, \\*[1mm]
0.0268 & \le & r_\mathrm{solar}^2 & \le & 0.0356
\end{array}
\end{equation}
for `normal' ordering of the neutrino masses ($m_3 > m_2 > m_1$),
and
\begin{equation}
\label{invertedranges}
\begin{array}{rcccl}
0.278 & \le & s_{12}^2 & \le & 0.375, \\*[1mm]
0.0183 & \le & s_{13}^2 & \le & 0.0297, \\*[1mm]
0.403 & \le & s_{23}^2 & \le & 0.640, \\*[1mm]
0.0280 & \le & r_\mathrm{solar}^2 & \le & 0.0372
\end{array}
\end{equation}
for `inverted' ordering ($m_2 > m_1 > m_3$).
\paragraph{Neutrino mass observables:}
The models in this paper
cannot predict the absolute value of the neutrino masses,
since all the predictions in eqs.~(\ref{o1})--(\ref{o5})
are invariant under $M \to c M$,
where $c$ is an arbitrary complex number.
They may,
though,
predict the \emph{relative}\/ value of any two neutrino mass observables.
Most conveniently,
one of those observables should be chosen to be
the square root of the atmospheric squared-mass difference,
\begin{equation}
m_\mathrm{atmospheric} \equiv \sqrt{\left| m_3^2 - m_1^2 \right|}
\approx 0.05\, \mathrm{eV}.
\end{equation}
The other relevant mass observables---besides
$m_\mathrm{solar} \equiv \sqrt{m_2^2 - m_1^2}$---are
\begin{subequations}
\begin{eqnarray}
m_\mathrm{cosmological} &\equiv& m_1 + m_2 + m_3, \\
m_{\beta \beta} &\equiv& \left| M_{ee} \right|
= \left| \sum_{j=1}^3 \mu_j^\ast \left( U_{ej} \right)^2 \right|, \\
m_{\nu_e} &\equiv& \sum_{j=1}^3 m_j \left| U_{ej} \right|^2.
\end{eqnarray}
\end{subequations}
Indeed,
$m_\mathrm{cosmological}$ may be derived from various cosmological observations;
$m_{\beta \beta}$ may be derived
from the rates of neutrinoless double-$\beta$ decay of various nuclides;
and $m_{\nu_e}$
is the average mass of the electron neutrino
to be measured in experiments on the electron energy end-point
of tritium $\beta$ decay.
I define
\begin{equation}
r_\mathrm{cosmological} \equiv \frac{m_\mathrm{cosmological}}
{m_\mathrm{atmospheric}},
\quad
r_{\beta \beta} \equiv \frac{m_{\beta \beta}}{m_\mathrm{atmospheric}},
\quad
r_{\nu_e} \equiv \frac{m_{\nu_e}}{m_\mathrm{atmospheric}}.
\end{equation}
\subsection{The conditions~(\ref{o3})}
When $\det{M} = 0$ either $\mu_1 = 0$ (normal ordering)
or $\mu_3 = 0$ (inverted ordering).
With normal ordering one has
\begin{eqnarray}
0 &=& M^\ast_{\alpha \alpha} M^\ast_{\beta \beta} - \left( M^\ast_{\alpha \beta} \right)^2
\nonumber\\ &=&
\left[ \mu_2^\ast \left( U_{\alpha 2} \right)^2
+ \mu_3^\ast \left( U_{\alpha 3} \right)^2 \right]
\left[ \mu_2^\ast \left( U_{\beta 2} \right)^2
+ \mu_3^\ast \left( U_{\beta 3} \right)^2 \right]
\nonumber\\ & &
- \left( \mu_2^\ast U_{\alpha 2} U_{\beta 2}
+ \mu_3^\ast U_{\alpha 3} U_{\beta 3} \right)^2.
\end{eqnarray}
This gives
\begin{equation}
0 = \mu_2^\ast \mu_3^\ast
\left( U_{\alpha 2} U_{\beta 3} - U_{\alpha 3} U_{\beta 2} \right)^2,
\end{equation}
hence $U_{\alpha 2} U_{\beta 3} - U_{\alpha 3} U_{\beta 2} = 0$.
But $U$ is a unitary matrix,
therefore
\begin{equation}
\left| U_{\alpha 2} U_{\beta 3} - U_{\alpha 3} U_{\beta 2} \right|
= \left| U_{\gamma 1} \right|,
\end{equation}
where $\gamma \neq \alpha, \beta$.
One concludes that the conditions~(\ref{o3}) predict,
in the case of normal ordering,
one matrix element of the first column of $U$ to vanish.
This contradicts experiment.
In the case of inverted ordering,
conditions~(\ref{o3}) predict a matrix element of the third column of $U$
to vanish.
This also contradicts experiment.
Thus,
conditions~(\ref{o3}) are experimentally excluded.
\subsection{The conditions~(\ref{o1})}
With normal ordering one has $\mu_1 = 0$ and
\begin{equation}
0 = M^\ast_{\alpha \alpha} = \mu_2^\ast \left( U_{\alpha 2} \right)^2
+ \mu_3^\ast \left( U_{\alpha 3} \right)^2.
\end{equation}
Therefore,
\begin{equation}
\label{cond1}
\left| \frac{U_{\alpha 3}}{U_{\alpha 2}} \right|^2
= \frac{m_2}{m_3} = r_\mathrm{solar}.
\end{equation}
Equation~(\ref{cond1}) is incompatible with experiment
for any $\alpha = e, \mu, \tau$.
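Indeed,
for $\alpha = e$ the left-hand side of eq.~(\ref{cond1})
is $s_{13}^2 / \left( s_{12}^2 c_{13}^2 \right)$;
a brute-force scan over the $3 \sigma$ ranges~(\ref{normalranges})
(a numerical sketch of mine,
not part of the fit of ref.~\cite{Forero:2014bxa})
confirms that it never reaches $r_\mathrm{solar}$:
\begin{verbatim}
import numpy as np

# 3-sigma ranges for normal ordering, eq. (normalranges)
s12sq = np.linspace(0.278, 0.375, 50)
s13sq = np.linspace(0.0177, 0.0294, 50)
r_sol = np.sqrt(np.linspace(0.0268, 0.0356, 50))

# |U_e3 / U_e2|^2 = s13^2 / (s12^2 c13^2)
lhs = s13sq[:, None] / (s12sq[None, :] * (1.0 - s13sq[:, None]))
print(lhs.max(), r_sol.min())   # about 0.11 versus about 0.16: no overlap
\end{verbatim}
Similar scans exclude the cases $\alpha = \mu, \tau$ as well.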
With inverted ordering one has instead $\mu_3 = 0$ and
\begin{equation}
0 = M^\ast_{\alpha \alpha} = \mu_1^\ast \left( U_{\alpha 1} \right)^2
+ \mu_2^\ast \left( U_{\alpha 2} \right)^2.
\end{equation}
Therefore,
\begin{equation}
\label{cond2}
\left| \frac{U_{\alpha 1}}{U_{\alpha 2}} \right|^2 = \frac{m_2}{m_1}
= \sqrt{1 + r_\mathrm{solar}^2}.
\end{equation}
Equation~(\ref{cond2}) can fit the phenomenology
in the cases $\alpha = \mu$ and $\alpha = \tau$.
If $\alpha = \mu$,
then $s_{12}$ should not be too low;
$s_{23}$ and (to a lesser extent) $s_{13}$ are also preferably
above their central values;
moreover,
$\cos{\delta} \gtrsim 0.5$ is predicted.
If $\alpha = \tau$,
then $s_{12}$ and $s_{13}$ should be at or above their best-fit values
while $\theta_{23}$ lies preferably in the first octant;
$\cos{\delta} \lesssim -0.5$ in this case.
In both cases,
the predictions for the neutrino mass ratios are
\begin{equation}
\label{pred23}
2.014 \le r_\mathrm{cosmological} \le 2.018, \quad
0.24 \le r_{\beta \beta} \le 0.42, \quad
0.974 \le r_{\nu_e} \le 0.988.
\end{equation}
To summarize,
conditions~(\ref{o1}) can only hold with an inverted neutrino mass spectrum,
with either $\alpha = \mu$ or $\alpha = \tau$,
and with large $\left| \cos{\delta} \right|$.
\subsection{The conditions~(\ref{o2})}
With normal ordering the conditions~(\ref{o2}) produce
\begin{equation}
\label{cond3}
\left| \frac{U_{\alpha 3} U_{\beta 3}}{U_{\alpha 2} U_{\beta 2}} \right|
= r_\mathrm{solar}
\end{equation}
while with inverted ordering one obtains instead
\begin{equation}
\label{cond4}
\left| \frac{U_{\alpha 1} U_{\beta 1}}{U_{\alpha 2} U_{\beta 2}} \right|
= \sqrt{1 + r_\mathrm{solar}^2}.
\end{equation}
Equation~(\ref{cond3}) is incompatible with experiment.
Condition~(\ref{cond4}) may agree with the phenomenology
in the cases $\alpha = e$ and either $\beta = \mu$ or $\beta = \tau$;
the mixing angles are free but there is a stringent prediction
$\left| \cos{\delta} \right| < 0.1$.
The predictions for $r_\mathrm{cosmological}$ and $r_{\nu_e}$
are the same as in inequalities~(\ref{pred23}),
while the prediction for $r_{\beta \beta}$ is higher in this case:
$0.948 \le r_{\beta \beta} \le 0.983$.
To summarize,
conditions~(\ref{o2}) can only hold if the neutrino mass spectrum is inverted
and if $\left( \alpha, \beta \right)$
is either $\left( e, \mu \right)$ or $\left( e, \tau \right)$.
A tiny $\left| \cos{\delta} \right|$ is predicted.
\subsection{The conditions~(\ref{o4})}
I first define
\begin{equation}
\label{xy}
V_{\beta j} \equiv \left( U_{\beta j}^\ast \right)^2,
\quad
x \equiv \frac{\mu_1}{\mu_3},
\quad
y \equiv \frac{\mu_2}{\mu_3}.
\end{equation}
I then define
\begin{subequations}
\begin{eqnarray}
c_1 &\equiv& V_{\alpha 1}, \\
c_2 &\equiv& V_{\alpha 2}, \\
c_3 &\equiv& V_{\alpha 3}, \\
c_4 &\equiv& V_{\beta 1} V_{\beta 2}^\ast, \quad c_5 \equiv c_4^\ast, \\
c_6 &\equiv& V_{\beta 1} V_{\beta 3}^\ast, \quad c_7 \equiv c_6^\ast, \\
c_8 &\equiv& V_{\beta 2} V_{\beta 3}^\ast, \quad c_9 \equiv c_8^\ast,\\
c_{10} &\equiv& \sum_{j=1}^3 \left| V_{\beta j} \right|^2 - 1.
\end{eqnarray}
\end{subequations}
Then,
from the first condition~(\ref{o4}),
\begin{equation}
0 = M_{\alpha \alpha} = \mu_1 V_{\alpha 1} + \mu_2 V_{\alpha 2} + \mu_3 V_{\alpha 3}.
\end{equation}
Therefore,
\begin{equation}
\label{iorex}
0 = c_1 x + c_2 y + c_3.
\end{equation}
The second condition~(\ref{o4}) is
\begin{eqnarray}
1 &=& A_{\beta \beta}
\nonumber\\ &=& M_{\beta \beta} \left( M^{-1} \right)_{\beta \beta}
\nonumber\\ &=& \left(
\mu_1 V_{\beta 1} + \mu_2 V_{\beta 2} + \mu_3 V_{\beta 3}
\right) \left(
\frac{V_{\beta 1}^\ast}{\mu_1} + \frac{V_{\beta 2}^\ast}{\mu_2}
+ \frac{V_{\beta 3}^\ast}{\mu_3}
\right)
\nonumber\\ &=& \left| V_{\beta 1} \right|^2 + \left| V_{\beta 2} \right|^2
+ \left| V_{\beta 3} \right|^2
+ \frac{y}{x}\, V_{\beta 1}^\ast V_{\beta 2}
+ \frac{x}{y}\, V_{\beta 1} V_{\beta 2}^\ast
\nonumber\\ & &
+ \frac{1}{x}\, V_{\beta 1}^\ast V_{\beta 3}
+ x V_{\beta 1} V_{\beta 3}^\ast
+ \frac{1}{y}\, V_{\beta 2}^\ast V_{\beta 3}
+ y V_{\beta 2} V_{\beta 3}^\ast.
\end{eqnarray}
Therefore,
\begin{equation}
0 = c_4 x^2 + c_5 y^2 + c_6 x^2 y + c_7 y + c_8 x y^2 + c_9 x + c_{10} x y.
\label{uviop}
\end{equation}
Equations~(\ref{iorex}) and~(\ref{uviop})
determine $x$ and $y$ through
\begin{subequations}
\label{jo}
\begin{eqnarray}
0 &=& c_2 \left( c_2 c_6 - c_1 c_8 \right) y^3
\nonumber\\ & &
+ \left( 2 c_2 c_3 c_6 - c_1 c_2 c_{10} - c_1 c_3 c_8 + c_1^2 c_5
+ c_2^2 c_4 \right) y^2
\nonumber\\ & &
+ \left( 2 c_2 c_3 c_4 - c_1 c_3 c_{10} - c_1 c_2 c_9 + c_1^2 c_7
+ c_3^2 c_6 \right) y
\nonumber\\ & &
+ c_3 \left( c_3 c_4 - c_1 c_9 \right),
\\*[1mm]
x &=& \frac{- c_2 y - c_3}{c_1}.
\end{eqnarray}
\end{subequations}
In this way,
the first two conditions~(\ref{o4}) allow one,
by using the PMNS matrix as input,
to determine exactly both the Majorana phases
and the ratios among the neutrino masses.
One must still impose the third condition~(\ref{o4}),
\textit{viz.}
\begin{equation}
V_{\gamma 1}^\ast y + V_{\gamma 2}^\ast x + V_{\gamma 3}^\ast x y \neq 0,
\end{equation}
on the values of $x$ and $y$ that have been determined.
One must choose the input,
\textit{viz.}\ the PMNS matrix,
in such a way that the resulting $x$ and $y$ satisfy
$\left| x \right| < \left| y \right|$,
\textit{i.e.}\ $m_\mathrm{solar} > 0$,
and that
\begin{equation}
r_\mathrm{solar} = \sqrt{\frac{\left| y \right|^2 - \left| x \right|^2}
{\left| 1 - \left| x \right|^2 \right|}}
\end{equation}
is in its experimentally allowed range.
If $1 > \left| x \right|$ the neutrino mass spectrum is normal;
it is inverted if $\left| x \right| > 1$.
For the neutrino mass ratios one has
\begin{subequations}
\begin{eqnarray}
r_\mathrm{cosmological} &=&
\frac{\left| x \right| + \left| y \right| + 1}
{\sqrt{\left| 1 - \left| x \right|^2 \right|}},
\\
r_{\beta \beta} &=&
\frac{\left| x V_{e1} + y V_{e2} + V_{e3} \right|}
{\sqrt{\left| 1 - \left| x \right|^2 \right|}},
\\
r_{\nu_e} &=&
\frac{\left| x V_{e1} \right| + \left| y V_{e2} \right|
+ \left| V_{e3} \right|}
{\sqrt{\left| 1 - \left| x \right|^2 \right|}}.
\end{eqnarray}
\end{subequations}
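The whole chain (solving eqs.~(\ref{jo}),
screening the roots,
and forming the ratios)
is straightforward to code;
the Python sketch below is mine
(indices $0, 1, 2$ stand for $e, \mu, \tau$,
and the choice $\alpha = \mu$, $\beta = e$ in the example call is illustrative):
\begin{verbatim}
import numpy as np

def solve_jo(U, alpha, beta):
    # candidate pairs (x, y) = (mu1/mu3, mu2/mu3) from eqs. (jo)
    V = np.conj(U)**2                       # V_{beta j} = (U_{beta j}^*)^2
    c1, c2, c3 = V[alpha]
    c4 = V[beta, 0] * np.conj(V[beta, 1]); c5 = np.conj(c4)
    c6 = V[beta, 0] * np.conj(V[beta, 2]); c7 = np.conj(c6)
    c8 = V[beta, 1] * np.conj(V[beta, 2]); c9 = np.conj(c8)
    c10 = np.sum(np.abs(V[beta])**2) - 1.0
    ys = np.roots([c2 * (c2*c6 - c1*c8),    # cubic in y, highest power first
                   2*c2*c3*c6 - c1*c2*c10 - c1*c3*c8 + c1**2*c5 + c2**2*c4,
                   2*c2*c3*c4 - c1*c3*c10 - c1*c2*c9 + c1**2*c7 + c3**2*c6,
                   c3 * (c3*c4 - c1*c9)])
    xs = (-c2 * ys - c3) / c1               # linear relation for x
    return list(zip(xs, ys))

# with U from the earlier sketch; alpha = mu, beta = e
for x, y in solve_jo(U, alpha=1, beta=0):
    if abs(x) < abs(y):                     # keep m_solar^2 > 0 only
        r_solar = np.sqrt((abs(y)**2 - abs(x)**2) / abs(1.0 - abs(x)**2))
\end{verbatim}
Each surviving root must then also satisfy the third condition~(\ref{o4})
and reproduce $r_\mathrm{solar}$ within its allowed range.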
Numerically,
I have found that there are two types of cases
in which the conditions~(\ref{o4}) are able to fit the experimental values.
In the first type of cases,
$\left| \cos{\delta} \right| \gtrsim 0.5$\footnote{In this paper,
the expression ``$a \gtrsim b$'' means the following:
the quantity $a$ has a lower bound that is approximately equal to $b$,
but $a$ may as well be much larger than $b$.}
and the neutrino masses are of order $m_\mathrm{atmospheric}$.
This type of cases occurs for an inverted neutrino mass spectrum
when $\beta = e$ and either $\alpha = \mu$ or $\alpha = \tau$;
in the first case $\cos{\delta} \gtrsim 0.5$
and in the second one $\cos{\delta} \lesssim -0.5$.
One obtains for these cases
\begin{equation}
\begin{array}{rcccl}
2.019 &<& r_\mathrm{cosmological} &<& 2.035, \\
0.24 &<& r_{\beta \beta} &<& 0.44, \\
0.974 &<& r_{\nu_e} &<& 0.989.
\end{array}
\end{equation}
In the second type of cases the neutrino masses are quasi-degenerate,
$\theta_{23}$ is in a well-defined octant,
and $\cos{\delta}$ is extremely close to zero.
Moreover,
when $\theta_{23} \to \pi/4$,
the neutrino masses grow towards infinity
and $\cos{\delta} \to 0$.\footnote{In ref.~\cite{Grimus:2011sf}
it had already been noted that models $B_{3,4}$ of ref.~\cite{Frampton:2002yf}
display the property that $\theta_{23} \to \pi/4$ and $\cos{\delta} \to 0$
when the neutrino masses become quasi-degenerate.
The models of this paper,
though,
do \emph{not}\/ coincide with models $B_{3,4}$.
In those models $A_{ee} = 1$ and $A_{\mu \mu} = A_{\tau \tau} = 0$;
in the models of this paper $A_{ee} = 1$ and either $A_{\mu \mu}$ vanishes
or $A_{\tau \tau}$ vanishes,
but they do not \emph{both}\/ vanish.}
These cases occur when $\beta = e$ and
either $\alpha = \mu$ for a normal neutrino mass spectrum
or $\alpha = \tau$ for an inverted neutrino mass spectrum;
in both cases $s_{23}^2 < 0.5$.
If one wants to have $s_{23}^2 > 0.5$ instead,
then one must interchange $\alpha = \mu$ with $\alpha = \tau$.
In all these cases $\left| \cos{\delta} \right| < 0.1$
(approaching zero when $s_{23}^2$ approaches $0.5$)
and $r_\mathrm{cosmological} \gtrsim 2.5$,
$r_{\beta \beta} \approx r_{\nu_e} \gtrsim 0.5$
(approaching infinity when $s_{23}^2$ approaches $0.5$).
\subsection{The conditions~(\ref{o5})}
Equations~(\ref{o5}) are the same as eqs.~(\ref{o4})
with $M \leftrightarrow M^{-1}$,
\textit{i.e.}\ with $\mu_j \to \mu_j^{-1}$ and $V \to V^\ast$.
In practice,
this means that,
for each input PMNS matrix,
one may use eqs.~(\ref{jo}) in the previous subsection,
but now they will yield $1 \! \left/ x^\ast \right.$
and $1 \! \left/ y^\ast \right.$ instead of $x$ and $y$,
respectively.
Thus,
one should use
\begin{equation}
V_{\beta j} \equiv \left( U_{\beta j}^\ast \right)^2,
\quad
x \equiv \frac{\mu_3^\ast}{\mu_1^\ast},
\quad
y \equiv \frac{\mu_3^\ast}{\mu_2^\ast}
\end{equation}
instead of eqs.~(\ref{xy}).
The condition $m_\mathrm{solar} > 0$
now requires $\left| x \right| > \left| y \right|$
and
\begin{equation}
r_\mathrm{solar} = \sqrt{\frac{\left| x \right|^2 - \left| y \right|^2}
{\left| y \right|^2 \left| \left| x \right|^2 - 1 \right|}}
\end{equation}
must be in its experimentally allowed range.
If $1 > \left| x \right|$ then the neutrino mass spectrum is inverted;
it is normal if $\left| x \right| > 1$.
For the neutrino mass ratios one has in this case
\begin{subequations}
\begin{eqnarray}
r_\mathrm{cosmological} &=&
\frac{\left| x \right| + \left| y \right| + \left| x y \right|}
{\left| y \right| \sqrt{\left| 1 - \left| x \right|^2 \right|}},
\\
r_{\beta \beta} &=&
\frac{\left| y^\ast V_{e1} + x^\ast V_{e2} + x^\ast y^\ast V_{e3} \right|}
{\left| y \right| \sqrt{\left| 1 - \left| x \right|^2 \right|}},
\\
r_{\nu_e} &=&
\frac{\left| y^\ast V_{e1} \right| + \left| x^\ast V_{e2} \right|
+ \left| x^\ast y^\ast V_{e3} \right|}
{\left| y \right| \sqrt{\left| 1 - \left| x \right|^2 \right|}}.
\end{eqnarray}
\end{subequations}
Numerically,
I have found that the conditions~(\ref{o5}) fit experiment
in the same two types of cases as the conditions~(\ref{o4}).
Thus,
for an inverted neutrino mass spectrum and $\alpha = e$,
one has either $\cos{\delta} \gtrsim 0.4$ for $\beta = \mu$
or $\cos{\delta} \lesssim -0.4$ for $\beta = \tau$.
In both cases
\begin{equation}
\begin{array}{rcccl}
2.058 &<& r_\mathrm{cosmological} &<& 2.087, \\
0.35 &<& r_{\beta \beta} &<& 0.54, \\
0.977 &<& r_{\nu_e} &<& 0.990.
\end{array}
\end{equation}
On the other hand,
for $\beta = e$ there is another set of cases,
with $\left| \cos{\delta} \right| \lesssim 0.15$,
$r_\mathrm{cosmological} \gtrsim 2$,
and $r_{\beta \beta} \approx r_{\nu_e} \gtrsim 0.5$.
These cases may have $\theta_{23}$ either in the first octant---for a
normal neutrino mass spectrum when $\alpha = \tau$ and for an
inverted neutrino mass spectrum when $\alpha = \mu$---or
in the second octant---interchanging $\alpha = \mu$ with $\alpha = \tau$.
The remarkable feature of this second set of cases
is that the neutrino masses become almost degenerate
when $\theta_{23}$ approaches $\pi/4$.\footnote{In ref.~\cite{Grimus:2006wy}
a model was discussed in which the neutrino masses approach degeneracy
when $s_{12}^2 \to 1/3$,
with $s_{12}^2 < 1/3$ for a normal neutrino mass spectrum
and $s_{12}^2 > 1/3$ for an inverted spectrum.
The present cases display analogous features,
with $s_{12}$ replaced by $s_{23}$ and $1/3$ replaced by $1/2$
just as in ref.~\cite{Grimus:2011sf}.}
\section{Conclusions}
In this paper I have exhaustively classified all the predictive cases where
a type-I see-saw mechanism based on three right-handed neutrinos
has a diagonal charged-lepton mass matrix $M_\ell$
and both the neutrino Dirac mass matrix $M_D$
and the right-handed-neutrino Majorana mass matrix $M_R$ have texture zeros.
Most of the cases with predictive power
had already been studied in the literature,
but I have discovered a few new ones.
The new cases predict either $\left| \cos{\delta} \right| \gtrsim 0.5$
or $\left| \cos{\delta} \right| \lesssim 0.1$;
some of the latter cases feature quasi-degenerate neutrinos
when $\theta_{23}$ is very close to $\pi/4$.
\vspace*{6mm}
\noindent I thank Walter Grimus for reading the manuscript and commenting on it.
This work was supported through the projects PEst-OE-FIS-UI0777-2013,
PTDC/FIS-NUC/0548-2012,
and CERN-FP-123580-2011
of \textit{Funda\c c\~ao para a Ci\^encia e a Tecnologia};
those projects are partially funded through POCTI (FEDER),
COMPETE,
QREN,
and the European Union.
Solar flares, prominence/filament eruptions and coronal mass ejections (CME) are the
three most intense solar activities. The high-speed magnetized plasma contained in
magnetic flux ropes (MFRs) released during these solar activities may interact with the
magnetosphere and ionosphere, and seriously affect the safety of human high-tech
infrastructures. The MFR, a set of helical magnetic field lines wrapping around a
common axis, is generally considered to be the fundamental structure in the CME/flare
dynamical process \citep{Shibata1995,Zhang2012,Cheng2016,Liu2020}.
The formation of the MFR is still under debate \citep{Patsourakos2020}. Some
studies suggest that the MFR is present before the eruption
\citep{Kopp1976,Forbes1995,Titov1999}. This pre-existent MFR may be formed in the
convection zone, but it is forced by magnetic buoyancy to emerge into the corona
\citep{Rust1994,Fan2003,Fan2009,Aulanier2012}. Or it is formed due to the slow magnetic
reconnection between sheared arcades in the low corona
\citep{Green2009,Liu2010,Green2011,Patsourakos2013}, that is, the shear motion in the
photosphere leads to the evolution of the potential field into sheared magnetic field,
and as the magnetic field shear increases, the sheared arcades reconnect and the MFR is
formed after multiple reconnections \citep{vanBallegooijen1989}. Other studies suggest
that the MFR is formed during the eruption process. In this case, only sheared arcades
exist before eruption. During the eruption, an MFR is formed due to magnetic
reconnection between sheared arcades, which then continuously increases the magnetic
flux of the MFR \citep{Antiochos1999,Moore2001,Karpen2012}. Recent studies
demonstrate that there may exist a ``hybrid'' scenario, in which a seed MFR forms within
the sheared arcades via tether-cutting reconnection before eruption, and builds up
via flare reconnection during the eruption \citep{Gou2019,Liu2020}.
\citet{Patsourakos2020} also suggest the pre-eruptive configuration as a
``hybrid" state consisting of both sheared arcades and MFRs, with different
configurations at different evolutionary stages.
According to whether the magnetic reconnection is involved or not, the triggering
mechanism of MFR eruption can be divided into two types. One is
magnetohydrodynamic (MHD) instability, including torus instability (TI,
\citealt{Kliem&Torok2006}) and kink instability (KI,
\citealt{Fan2003,Fan2004,Torok2004}). In addition, the double arc instability
(DAI) is considered to be able to drive the early stage of the eruption
\citep{Ishiguro2017,Kusano2020}. The other type is based on magnetic reconnection.
Magnetic reconnections occurring above and below the MFR correspond to the break-out
model \citep{Antiochos1999,Lynch2008} and the tether-cutting model
\citep{Moore2001,Jiang2021}, respectively. The formation time of the MFR also differs
in these mechanisms. In the MHD model, the MFR already exists before the
eruption. In the tether-cutting model, the MFR is formed
prior to the slow rise phase and grows in the acceleration phase
\citep{Moore2001}, whereas in the break-out model the MFR is not formed until the
acceleration phase \citep{Karpen2012,Cheng2016}.
There are many observational structures in the solar atmosphere related to MFRs,
including the sigmoids \citep{Green&Kliem2014}, filaments and filament channels
\citep{vanBallegooijen1998,Mackay2010,Su2012}, hot channels
\citep{Cheng2011,Zhang2012} and coronal cavities \citep{Low1995}. As a new
piece of observational evidence for MFRs, hot channels usually appear as EUV blobs in the high
temperature bands and as dark cavities in the low temperature bands
\citep{Cheng2016,Cheng2017}. The height evolution of the hot channel can be fitted with
a function consisting of linear and exponential components, which corresponds to the
slow rise and the impulsive acceleration of the hot channel, respectively.
The hot channel shows a remarkable morphological evolution in the early stage of the
eruption. It often appears as a twisted and/or writhed sigmoidal structure, and then
transforms into a semi-circular shape in the slow rise phase, after which the impulsive
accelerate begins \citep{Zhang2012}. Writhe is related to the rotation of erupting MFR,
and the mechanism governing this rotation has received extensive attention. When the KI
occurs, the axis of the MFR rotates rapidly, transforming part of the twist into writhe
\citep{Baty2001,Torok2014}. Recent MHD simulations, however, find that writhe can
be transformed into twist along with the rotation of the MFR \citep{Zhou2022}. In
addition, the external sheared field can also contribute to the rotation of the MFR
\citep{Isenberg2007,Lynch2009}. And the rotations caused by these two mechanisms point
in the same direction. The rotation by the external sheared field tends to distribute
across a larger height range, and if the sources of the external stabilizing field have
a smaller distance than the footpoints of the erupting flux, the external sheared field
yields the major contribution to the rotation. The rotation due to twist relaxation
tends to work mainly in the low corona with a height range up to several times of the
distance between the footpoint of the MFR \citep{Kliem2012}. In addition, magnetic
reconnection with the surrounding magnetic field \citep{Shiota2010}, the straightening
from the initial S-shape \citep{Torok2010}, and the asymmetric deflection of the rising
flux during propagation through the overlying field can all contribute to the rotation
of the MFR \citep{Yurchyshyn2009,Panasenco2011,Kliem2012}.
The formation, eruption and rotation of the MFR are accompanied by the motion of
footpoint brightenings, which has been studied comprehensively over the past few decades.
\citet{Su2006} shows that the footpoint brightenings observed by TRACE widely separate
along the PIL in the initial stage, and then move away from the PIL gradually during the
impulsive phase. A statistical study of footpoint motion of 50 X and M class two-ribbon
flares indicates that both shear motion of conjugate footpoints and ribbon separation
are common features in two-ribbon flares \citep{Su2007}. Following the eruption,
post-flare loops and their footpoints propagate along the PIL are also
observed, and the separation of the flare ribbons perpendicular to the PIL occurs at the
same time or immediately after that \citep{Tripathi2006,Li2009}. In addition,
hard X-ray (HXR) kernel motions parallel and perpendicular to the PIL have also been
reported in previous studies \citep{Krucker2005,Liu2006,Yang2009}. \citet{Qiu2009} terms
the two distinct stages of the flare ribbon evolution as stages of ``parallel
elongation'' and ``perpendicular expansion'', which can be well explained by the two
stages of three-dimensional (3D) reconnection of the erupted flux ropes (3D ``zipper
reconnection" and quasi-2D ``main phase" reconnection) proposed by \citet{Priest2017}.
The M6.5 class flare that occurred on 2015 June 22 in active region NOAA 12371 is a
complex flare which involves many physical processes, including the tether-cutting
reconnection, DAI and TI as suggested by \citet{Kang2019}. It
has been the subject of numerous studies. The main advances include the sudden
flare-induced rotation of a sunspot and the association with the back reaction of the
flare-related restructuring of coronal magnetic field \citep{Liu2016a}, the rotational
motions of the photospheric magnetic flux and shear flows \citep{Bi2017,Wang2018a}, the
evidence of a large-scale, long-duration, slipping-type reconnection \citep{Jing2017},
flare-ribbon-related photospheric magnetic field changes and the first evidence of the
HXR coronal channel \citep{Liu2018a,Sahu2020}. \citet{Wang2017} studies the two
precursors of this flare, and finds the low-atmospheric precursor emissions are
closely related to the onset of the main flare.
In our previous paper (\citealt{Liu2022}, hereafter Paper 1), we have studied
the footpoint rotation and writhe of the two hot channels in the M6.5 class
flare on 2015 June 22. However, the formation mechanism of the hot channels and their
relationship with the two flare precursors are still unclear. The causes of the
footpoint rotation and writhe of the hot channels also need further investigation.
Therefore, these questions will be addressed in this study. Data set is introduced in
Section 2. In Section 3, we present the observations results. We carry out magnetic
topology analysis in Section 4. We summarize major findings and discuss the results in
Section 5.
\section{Data Set}
The data used in this study are mainly from the Solar Dynamics
Observatory (SDO, \citealt{Pesnell2012}). The Atmospheric Imaging Assembly (AIA,
\citealt{Lemen2012}) onboard SDO can simultaneously provide full-disk observations in
EUV and UV passbands with temporal cadence of 12 and 24 seconds respectively, and the
pixel size is 0.$^{\prime\prime}$6. AIA observations in 131~\AA, 304~\AA, 1600~\AA~are
used to understand the hot channels, filaments, bright kernels and flare ribbons
in this region \citep{O'Dwyer2010}. The magnetograms are provided by the Helioseismic
Magnetic Imager (HMI, \citealt{Schou2012}) aboard SDO with a spatial resolution of
0.$^{\prime\prime}$5/pixel, and a cadence of 45 seconds for line of sight (LOS)
magnetograms (hmi.M$\_$45s series) and 720 seconds for vector magnetograms
(hmi.sharp$\_$cea$\_$720s series). During $\sim$16:25--22:50 UT on June 22, 2015, the
1.6-meter Goode Solar Telescope (GST, \citealt{Cao2010}) at the Big Bear Solar
Observatory (BBSO) took observations of the NOAA AR 12371 under excellent
seeing conditions. The Visible Imaging Spectrometer (VIS) observations in H$\alpha$
(6563~\AA) line center with the time cadence of 28 seconds and pixel size of
0.$^{\prime\prime}$03 are used to study the fine-scale structures at the chromosphere in
unprecedented detail \citep{Jing2016}. The soft X-ray (SXR) emission of the flare has
been recorded by GOES.
\section{Observations}
\subsection{Event Overview}
A C1.1 class flare, which is a confined flare without a CME \citep{Awasthi2018},
occurs in AR 12371 and peaks at 16:45 UT. Then two flare precursors are
observed prior to the main phase of the M6.5 class eruptive flare
\citep{Wang2017}. The peak times of the two flare precursors
(17:27 UT and 17:45 UT) are marked by the green and blue vertical dashed lines in the
GOES light curve shown in Figure \ref{fig:lightcurve}(a). During the first precursor,
seed hot channels build up and rise slowly, being accelerated at the peak of the second
precursor, as shown in Figures \ref{fig:lightcurve}(b)-(d).
The observed height-time plot of the hot channel along the erupting direction
(marked as black dash-dotted line in Figure \ref{fig:lightcurve}(d)) is fitted by an
analytic approximation with the combination of linear (slow rise) and exponential (fast
rise) functions developed by \citet{Cheng2020}, which is shown as the black dotted line
in Figure \ref{fig:lightcurve}(b). Because this active region is close to the solar
disk center, the estimation of the heights, velocities, and accelerations of the hot channels
may be significantly influenced by the projection effect, but the characteristics of the
temporal profile of the hot channels are not affected \citep{Cheng2020}. The fitting
results show that the onset time of the impulsive acceleration phase is 17:45:21 UT ($\pm
1$ minute, marked by the red arrows in Figures \ref{fig:lightcurve}(a)-(b)). A comparison
of Figures \ref{fig:lightcurve}(a) and \ref{fig:lightcurve}(b) shows that the onset of
the impulsive acceleration of the seed hot channels is earlier than the flare onset
(marked by the orange arrows in Figures \ref{fig:lightcurve}(a)-(b)).
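A schematic version of such a fit,
with synthetic data standing in for the measured heights,
is given below;
this is a sketch following the functional form of \citet{Cheng2020},
not the actual fitting code,
and the onset is taken as the time at which the velocity of the exponential
component overtakes that of the linear one:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def height(t, c0, c1, c2, tau):
    # linear (slow rise) plus exponential (impulsive rise) components
    return c0 * np.exp(t / tau) + c1 * t + c2

# synthetic height-time points (minutes, arbitrary height units)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 40.0, 80)
h = height(t, 2.0, 1.5, 60.0, 8.0) + rng.normal(0.0, 1.0, t.size)

(c0, c1, c2, tau), _ = curve_fit(height, t, h, p0=[1.0, 1.0, 50.0, 5.0])
# onset of the impulsive acceleration: (c0/tau) exp(t/tau) = c1
t_onset = tau * np.log(c1 * tau / c0)
\end{verbatim}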
After that, the seed hot channels rise rapidly while the filaments in AIA 304~\AA~ and
VIS H$\alpha$ images stay behind. After the flare onset, two hot channels form
one after another (marked by the yellow arrows in Figures
\ref{fig:lightcurve}(e)-(f)), and they both exhibit a kinking structure with negative
crossing. The eruption of these two hot channels produce two peaks (marked by the gray
and magenta vertical dashed lines in Figure \ref{fig:lightcurve}(a)) on the flare's GOES
light curve. The morphological evolution and footpoint motion of these two hot channels
are studied in Paper 1.
\subsection{Formation and Buildup of the Seed Hot Channels}
After a high-resolution investigation of the two flare precursors before the M6.5
class flare, \citet{Wang2017} concluded that the eruption of the main flare
resulted from the successive reconnection between the sheared loops. In this study we
focus on the relationship between the flare precursors and the hot channels. The
morphological evolution of the two successive episodes of precursors observed by AIA/SDO
and VIS/GST is presented in Figures \ref{fig:precursor1}-\ref{fig:precursor2}. At the
beginning of these two precursors, brightenings appear on the two sides of the PIL at
around 17:24 UT (marked by the green boxes in Figure \ref{fig:precursor1}(d))
and 17:42 UT (marked by the green boxes in Figure \ref{fig:precursor2}(d))
respectively. Starting from 17:24 UT, a few hot loops are observed in 131~\AA~
by AIA (Figures \ref{fig:precursor1}(a)-(c)), which are the early morphology of the hot
channel, thus termed seed hot channels. These seed hot channels are manifested as a
group of brightened branches in high-temperature passbands, and the magenta dashed lines
in Figures \ref{fig:precursor1}(a)-(c) mark their outer edge. As more and more hot
branches brighten, the shape of the seed hot channels becomes clear. Two filaments can be
identified in the corresponding AIA 304~\AA~images (pointed by the blue arrows in Figure
\ref{fig:precursor1}(d)) and the VIS/GST H$\alpha$ line center images (Figures
\ref{fig:precursor1}(g)-(i)), the field of view (FOV) of which is shown in the white
box in Figure \ref{fig:precursor1}(d). The shape and location of these two filaments
remain unchanged during the first flare precursor.
A comparison of Figure \ref{fig:precursor1} and Figure \ref{fig:precursor2} shows that
the brightenings in AIA 131~\AA, 304~\AA, and VIS H$\alpha$ images (Figure
\ref{fig:precursor2}) during the second flare precursor are brighter than those in the
first precursor. During the second precursor, the outer edge of the seed hot channels
(marked by the magenta dashed lines in Figures \ref{fig:precursor2}(a)-(c))
becomes higher and longer, and the two end points almost connect the two distant ends of
the two filaments. As the surrounding brightenings increase, the filaments cannot be
clearly recognized in 304~\AA~ by AIA. As shown in the corresponding H$\alpha$
images in Figures \ref{fig:precursor2}(g)-(i), brightenings and motions of
filament materials are identified and the filament becomes wider, which is similar to
the seed hot channels.
The seed hot channels appear during the first flare precursor, and become
significantly larger after the second flare precursor. Their footpoint
brightenings are observed at both sides of the PIL during the precursors which is
different from the flare ribbons observed during the flare main phase. As we mentioned
before, the outer edge of the seed hot channels does not change noticeably during
the first precursor, and the two footpoints only extend for about 2$^{\prime\prime}$
(Figures \ref{fig:precursor1}(a)-(c)). Before the onset of the second precursor, the
northern footpoints of the seed hot channels continue to move northward for
2$^{\prime\prime}$, and the southern footpoints expand southward by about
38$^{\prime\prime}$, then the seed hot channels are clearly larger. The
locations of the two footpoints hardly change during the second precursor, but the width
of the seed hot channels increases significantly. In other words, the
observations indicate a buildup process of the seed hot channels from the
first precursor to the second precursor.
\subsection{Propagation of the Footpoint Brightenings }
In this subsection, we focus on the propagation direction of the brightenings during the
two precursors and the flare main phase. The bright kernels in AIA 1600~\AA~are shown
in Figure \ref{fig:bright}. At the onset of the first precursor, the bright kernels
first appear at the east side of the PIL, and they are tracked by the magenta
dashed arrow in Figure \ref{fig:bright}(a), and the bright kernels gradually move
northward (see Figures \ref{fig:bright}(a)-(b)). Subsequently, a bright kernel
appears on the west side of the PIL, which also moves northward parallel to the PIL (see
Figures \ref{fig:bright}(b)-(d)). At the west side of the PIL another bright kernel
appears at the south end and gradually extends southward, then evolves into a southward
jet (marked by orange arrows in Figures \ref{fig:bright}(b)-(d)), while the northward
propagating bright kernels gradually disappear at the end of the first flare precursor.
At the beginning of the second flare precursor, bright kernels reappear on both sides of
the PIL and are more widely distributed than those at the beginning of
the first flare precursor (see Figure \ref{fig:bright}(e)). Immediately, these bright
kernels expand and spread toward the north (see Figure \ref{fig:bright}(f)).
Next, the west bright ribbon almost stops moving while the east bright ribbon splits
into two parts (northern and southern parts, separated by the yellow dashed line
in Figure \ref{fig:bright}(g)) that move northward and southward, respectively
(see Figures \ref{fig:bright}(f)-(g)). The southern part of the east bright ribbon
almost stops moving after two minutes. After the peak of the second flare precursor, the
west bright ribbon and the southern part of the east bright ribbon start converging
along the PIL. The northern part of the east bright ribbon keeps moving northward
during this process (see Figures \ref{fig:bright}(f)-(h)). During the second flare
precursor, there is also a bright kernel at the southern end of the west bright ribbon,
which gradually expands and then evolves into a southward jet (marked by orange arrows
in Figures \ref{fig:bright}(f)-(h)).
After the beginning of the flare main phase, the southern part of the east bright
ribbon disappears, and the northern part begins to strengthen significantly, forming a
flare ribbon (labeled `ER' in Figures \ref{fig:bright}(i)-(l)) and gradually moving
southward. Another flare ribbon (labeled `WR' in Figures \ref{fig:bright}(i)-(l))
that evolved from the west bright ribbon moves clearly northward. The subsequent southward
jet from the western flare ribbon is more intense than those during the two flare
precursors (marked by orange arrow in Figure \ref{fig:bright}(j)). From about
17:56:40 UT, the anti-parallel motion of the two flare ribbons begins to accompany the
separation motion, which lasts about ten minutes. Then, with the appearance of flare
loops, the two flare ribbons only move away from each other in the direction
perpendicular to the PIL (see Figures \ref{fig:bright}(k)-(l)).
In general, the bright kernels on both sides of the PIL mainly show northward
parallel motion during the first precursor. The brightenings show prominent parallel
motion toward the north and converging motion along the PIL during the second
precursor. During the flare main phase, the brightenings first display converging motion
along the PIL, and then expand with converging along the PIL, finally move perpendicular
to the PIL.
\subsection{Formation and Evolution of the Two Hot Channels}
After the onset of the impulsive phase, the seed hot channels expand
rapidly. The AIA images in 131~\AA~show that the seed hot channels consist of multiple
branches, which can be seen more clearly from the running-difference images
(see Figure \ref{fig:reconnection}). Parallel flux tubes with the same twist can merge
into a single flux tube near the point of contact \citep{Linton2001}.
At 17:55:32 UT, two seed hot channels marked by the orange and yellow arrows (labeled
`SH1', `SH2') in Figure \ref{fig:reconnection}(a) are identified. These two seed hot
channels are close together, but there is a clear gap between them (see Figure
\ref{fig:reconnection}(a) and its inset). With the expansion of the hot channel, SH1
and SH2 gradually intersect (see Figures \ref{fig:reconnection}(b)-(c)),
accompanied by the appearance of footpoint brightenings of the seed hot channels (marked
by the black boxes in Figure \ref{fig:reconnection}(f)), which suggest the occurrence of
merge reconnection. In the process of impulsive acceleration, the different branches of
the seed hot channels continuously merge, and at 17:59:08 UT, a longer and more twisted
hot channel (i.e., the first kinking hot channel, labeled `KHC1' in Figure
\ref{fig:reconnection}(d)) forms, which subsequently appears as a kinking structure (see
Figure \ref{fig:reconnection}(d)). About three minutes later, the kinking structure
disappears and both footpoints of the first kinking hot channel display an apparent
clockwise rotation during the unwrithing of this hot channel, which has been studied in
detail in Paper 1.
During the unwrithing of the first kinking hot channel (KHC1), another hot channel
(KHC2) appears near the right leg of KHC1 with a kinking structure (marked by the purple
box in Figure \ref{fig:reconnection}(i)), which can only be barely
distinguished at this time due to the envelope of KHC1's leg. By 18:20:20 UT, the
footpoint rotation of KHC1 ends, the kinking structure of KHC2 can be clearly observed,
and obvious brightening can be seen at its right footpoint
(see Figure \ref{fig:reconnection}(j)). Immediately, the left and right footpoints of
KHC2 begin to move northward and westward, respectively. The movement of the right
footpoint is more obvious, and KHC2 gradually unwrithes as the right footpoint
slides to the west (see Figures \ref{fig:reconnection}(j)-(k)). By 18:38:20 UT, the
kinking structure almost disappears, and only the brightening at the right footpoint can
be seen (see Figure \ref{fig:reconnection}(l)).
\section{Magnetic Field Modeling}
To understand the 3D topology of the source regions of this event, we analyze the
magnetic field characteristics of this active region based on the nonlinear
force-free field (NLFFF) extrapolations by \citet{Awasthi2018}. The vector
magnetograms at different times are remapped at the original resolution by the Lambert
(cylindrical equal area; CEA) projection method. Then a “pre-processing" procedure is
used to remove the net force and torque of the photospheric field to best fit the
force-free condition \citep{Wiegelmann2006}. Finally, these “preprocessed” magnetograms
are input into the NLFFF code proposed by \citet{Wiegelmann2004} as boundary conditions,
and the “weighted optimization" method is applied to obtain the time series of the NLFFF
models. The MFR can be identified through mapping magnetic connectivities and computing
the twist number ($T_{w}$) for each individual field line \citep{Liu2016b}. $T_{w}$
measures the number of turns of a field line winding, and it is calculated by
integrating the local density of
$T_{w}$, $\nabla\times\boldsymbol{B}\cdot\boldsymbol{B}/4\pi B^{2}$, along each field
line \citep{Awasthi2018, Liu2018a}.
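A schematic implementation of this line integral is sketched below
(the field-line tracer and the interpolators for $\boldsymbol{B}$ and
$\nabla\times\boldsymbol{B}$ are assumed to be supplied;
the actual computation uses the code of \citet{Liu2016b}):
\begin{verbatim}
import numpy as np

def twist_number(line_xyz, B_at, curlB_at):
    # line_xyz : (N, 3) points along a traced field line
    # B_at, curlB_at : callables returning B and curl(B) at a point
    tw = 0.0
    for p0, p1 in zip(line_xyz[:-1], line_xyz[1:]):
        mid = 0.5 * (p0 + p1)
        B, J = B_at(mid), curlB_at(mid)
        dl = np.linalg.norm(p1 - p0)
        tw += np.dot(J, B) / (4.0 * np.pi * np.dot(B, B)) * dl
    return tw
\end{verbatim}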
\subsection{Twist Evolution of the Flux Ropes in the NLFFF Extrapolations}
Using the method proposed by \citet{Liu2016b}, we calculate the distribution of twist
number in this active region. The photospheric vector magnetogram at 17:12 UT is
presented in Figure \ref{fig:twist}(a). The cross sections of the $T_{w}$ maps in the
X-Z plane along the green line marked in Figure \ref{fig:twist}(a) are presented in
Figures \ref{fig:twist}(c)-(i), in which the contour with $T_{w} = -1.75$ is outlined in
magenta. At 17:10 UT, the absolute $T_{w}$ values in three regions exceed 1.75 (see
Figure \ref{fig:twist}(d)). With the beginning of flare precursors, the area of these
three regions increases, and the increase of the upper (marked as `A') and right (marked
as `B') regions are more obvious (see Figures \ref{fig:twist}(e)-(f)). At the later
stage of the second flare precursor (Figure \ref{fig:twist}(g)), the left region becomes
slender, the area of the upper and right regions increases. Next, the upper and right
regions gradually approach. At 18:10 UT, there is only one larger area with
$T_{w} \le -1.75$ on the left (see Figures \ref{fig:twist}(h)-(i)).
Based on the coronal magnetic field obtained by NLFFF extrapolation, we calculate the
variation of mean and maximum values of $T_{w}$ over time in the area
where $T_{w}$ is lower than $-1.75$ in Figures \ref{fig:twist}(d)-(i), and the
calculation results are shown in Figure \ref{fig:twist}(b). The calculated maximum and
mean $T_{w}$ increase during the flare precursors (except during 17:22 UT-17:34 UT),
reach the maximum at 17:58 UT, and then begin to decline.
We trace the magnetic field lines passing through these regions with high $T_{w}$ at
different times, and the results are presented in Figure \ref{fig:flux rope}. A 3D view
of selected magnetic field lines of the NLFFF at 17:34 UT is shown in Figure
\ref{fig:flux rope}(a), the $T_{w}$ map in the X-Z plane is also shown in this
panel. Yellow, pink and cyan lines represent the flux ropes passing through the strong
$T_{w}$ regions on the left, upper and right sides. We select the central area of the
active region (marked by the white box in Figure \ref{fig:twist}(a)) to study the
evolution of these twisted flux ropes. The flux ropes represented by the same color are
displayed with the same resolution at different times. From 17:22 UT to 17:58 UT, the
projections of pink and cyan flux ropes on the X-Y plane are getting closer and
closer, while the yellow flux rope disappears at 17:58 UT. The shape and relative
positions of the cyan and pink flux ropes at 17:22 UT and 17:58 UT are similar to the
seed hot channels H1 and H2 (marked by magenta arrows in
Figures \ref{fig:flux rope}(b)-(c)) observed in AIA 131~\AA~ at the
corresponding time. The FOV of images in panels (d)-(h) is
marked by the green boxes in panels (b) and (c). It can be seen that the height and
length of the extrapolated magnetic field lines are smaller than those of the observed
hot channels. This may be caused by uncertainties of the magnetic field measurements or
over-smoothing of the vector field by the extrapolation preprocessing. Because the
flux ropes during the eruption are subject to a net Lorentz force, they are difficult to
reproduce well with NLFFF models \citep{Cheng2016}. At 18:10 UT, only one new twisted flux rope
can be identified (purple line in Figure \ref{fig:flux rope}(h)), and its X-Z cross
section corresponds to the area surrounded by the magenta contour in
Figure \ref{fig:twist}(i). The right footpoint of this new flux rope is near the right
footpoint of the disappeared pink flux rope, but the left footpoint is far away from the
left footpoint of the original flux rope.
In order to study the footpoint motion of these flux ropes during the flare, we
select the area surrounded by the orange box in Figure \ref{fig:flux rope}(d) to show the
temporal evolution of the left footpoint of these flux ropes, and the results are
presented in Figure \ref{fig:left footpoint}. To better show the evolution, only the
twist distribution at the locations with $T_{w} \le -1$ is shown, and the twist values
at other locations are set to zero. The magenta contours represent the $T_{w}$ at
-1.75. The footpoints of the cyan, yellow and pink flux ropes are filled with
corresponding colors respectively. At 17:10:25 UT, these three flux ropes are
separated from each other (see Figure \ref{fig:left footpoint}(a)). The footpoint areas
(especially the cyan and pink areas) increase significantly with the beginning of the
first precursor (see Figure \ref{fig:left footpoint}(b)), so they appear to close to
each other. At the end of the first precursor (see Figure \ref{fig:left footpoint}(c)),
the footpoints seem to separate from each other again, which may be caused by the
decrease of the yellow areas and the deformation of the pink areas. During the second
precursor, with the significantly increase of the cyan and pink areas (see Figure
\ref{fig:left footpoint}(d)), the three footpoints gather together again. The yellow
footpoint is surrounded by the pink footpoint at 17:46:25 UT. By 17:58:25 UT, the yellow
footpoint area disappears, the cyan and pink footpoint areas connect together. At
18:10:25 UT, the footpoints of the original flux ropes disappear (see Figure
\ref{fig:left footpoint}(f)), and a flux bundle represented by the green lines (see
Figure \ref{fig:flux rope}(f)) appears with the same left footpoint position as the
original flux rope and smaller twist.
\subsection{Torus Instability of the Extrapolated Flux Ropes}
As we mentioned before, the onset of the fast rise of the seed hot channels
is earlier than the flare onset, which suggests that the initiation of
the impulsive acceleration of the seed hot channels is unlikely to be caused by
flare reconnection. The torus instability occurs when the inward
tension force generated by the background magnetic field decreases faster than
the outward hoop force \citep{Kliem&Torok2006,Fan2007}. TI is quantified by the decay
index $n$, which is defined by $n = -\frac{\mathrm{d}\ln{B}}{\mathrm{d}\ln{z}}$. Here, $B$
denotes the background magnetic field strength and $z$ denotes the height above
the solar surface \citep{Cheng2013b}. Previous studies suggest that the threshold value
of the decay index lies in the range 1.1--2, and is normally 1.5 for a toroidal flux rope
\citep{Torok&Kliem2005, Kliem&Torok2006, Aulanier2010}. To investigate the initiation
mechanism of the impulsive acceleration of the seed hot channels, we calculate the decay
index of the background magnetic field, which is obtained from the
potential field model. Since the vertical component does not contribute to the downward
constraining force applied to the flux rope, only the horizontal component of the
coronal magnetic field is considered in the calculation of the decay index
\citep{Cheng2013a}.
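Schematically, the decay index on a height grid can then be evaluated as follows
(a sketch; the horizontal field profile above the PIL is assumed to be
extracted from the potential-field model):
\begin{verbatim}
import numpy as np

def decay_index(Bh, z):
    # n(z) = -d ln Bh / d ln z for the horizontal field strength Bh(z)
    return -np.gradient(np.log(Bh), np.log(z))

# e.g. a field falling off as z**-1.8 gives n close to 1.8 everywhere
z = np.linspace(5.0, 150.0, 200)
print(decay_index(z**-1.8, z))
\end{verbatim}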
The relationship between the position of the magnetic flux ropes and the distribution of
decay index at different times is shown in Figure \ref{fig:decay index}. The
blue, green and white contours represent the values of the decay index at 1.1, 1.5 and 2,
respectively. Panels (d) and (e) show two different 3D views at 17:46:25
UT, where the orange plane represents the distribution of the decay index on the X-Y
plane above the highest flux rope. It can be seen that the flux ropes are very close to
the contour of the decay index 1.5. From the positions of the top of the flux rope
closest to the $n = 1.5$ contour at different times, we show the X-Z plane that cuts the
X-Y plane at S1 (Figure \ref{fig:decay index}(e)) for the time 17:46:25 UT and 17:58:25
UT, and the X-Z plane at S2 for 18:10:25 UT. The distribution of the decay index of the
three different planes and the position of the flux ropes are shown in Figures
\ref{fig:decay index}(a)-(c). The highest pink flux ropes do not intersect with the
contour of $n = 1.5$ until 18:10:25 UT, so the extrapolation results suggest that the
seed hot channels do not reach the threshold of torus instability before the impulsive
acceleration. Combined with the fact that the onset of the seed hot channels
acceleration starts earlier than the associated flare, the fast ``flare reconnection''
is unlikely to trigger the hot channel acceleration \citep{Cheng2020}. In summary, the
reconnection during the second flare precursor perhaps contributes to the
initiation of the impulsive rise of the hot channels.
\subsection{Driving Mechanisms of the Observed Writhing Motion}
When the twist of the MFR exceeds the critical value (approximately 3.5 $\pi$,
i.e.\ 1.75 turns, though it varies with the aspect ratio of the loops involved), the
kink instability will occur and part of the twist of the MFR will be transformed into
writhe \citep{Baty2001,Torok2014}. In a data-constrained MHD simulation,
\citet{Inoue2018} have found that a series of small flux ropes reconnect with each
other in the early stage of the eruption, forming a large and highly twisted flux rope,
which is similar to the first kinking hot channel in our event. Both hot channels in
this event show a writhing motion during the eruption. We calculate the
photospheric magnetic flux of the area surrounded by the green box in
Figure \ref{fig:flux rope}(d), which is large enough to include the whole left
footpoint area of the flux rope. The temporal evolution of magnetic flux in the region
with $T_{w} \le -1.0$ and $T_{w} \le -1.75$, and the evolution of the ratio of magnetic
flux in the region with $T_{w} \le -1.75$ to that in the region with $T_{w} \le -1.0$
are shown in Figure \ref{fig:decay index}(f). The results show that the magnetic flux in
the region with $T_{w} \le -1.75$ increases rapidly after 17:34:25 UT, while the
magnetic flux in the region with $T_{w} \le -1.0$ decreases
after 17:34:25 UT. The ratio of the magnetic flux in the region with $T_{w} \le -1.75$
to the magnetic flux in the region with $T_{w} \le -1.0$ increases obviously, but the
proportion in the region with $T_{w} \le -1.75$ does not exceed 30$\%$ either before or
after the writhing motion of the flux rope. Therefore, KI may not be the main driver of
the writhing motion of the first kinking hot channel in this event \citep{Inoue2018}.
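For reference, the masked integrals used in this subsection reduce to sums
over the pixels selected by the $T_{w}$ threshold; a sketch with hypothetical
array names is:
\begin{verbatim}
import numpy as np

def masked_flux_and_current(Bz, Jz, tw_map, dA, thresh=-1.75):
    # Bz, Jz, tw_map: co-aligned photospheric maps; dA: pixel area
    mask = tw_map <= thresh
    flux = np.sum(np.abs(Bz[mask])) * dA      # unsigned magnetic flux
    current = np.sum(Jz[mask]) * dA           # net vertical current
    return flux, current
\end{verbatim}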
Once the flux rope starts to rise, it moves out of its initial equilibrium, and the
Lorentz force becomes non-zero and can act on the MFR and rotate it \citep{Isenberg2007}.
This can be reflected by the increase of the total current in the cross section of the
flux rope during its rising \citep{Inoue2018}. Figure \ref{fig:decay index}(g) shows
the evolutions of the total current vertically crossing the footpoint of the MFR
(marked by the green box in Figure \ref{fig:flux rope}(d)) in the region with
$T_{w} \le -1.75$ and $T_{w} \le -1$, which are similar to the evolution of magnetic
flux. The total current in the region with $T_{w} \le -1.75$ begins to increase rapidly
at 17:34 UT and decreases rapidly after the writhing motion of the hot channel. This
suggests that the Lorentz force might contribute to the writhing motion of MFR. In
addition, during the rise of the flux rope, the magnetic tension of the twisted magnetic
fields will be released, and whether KI is triggered or not, the release of magnetic
tension can also contribute to the writhing of the flux rope axis \citep{Kliem2012}.
Therefore, the writhing motion of the first kinking hot channel may be driven by a combination of these two mechanisms.
\section{Summary and Discussions}
We investigate the formation and eruption of hot channels during the M6.5 class
flare occurring on 2015 June 22 using both observations and NLFFF extrapolations.
There are two precursors before the flare main phase. The seed hot channels appear
after the onset of the first precursor and grow gradually. During the second precursor,
the footpoints' position of the seed hot channels hardly changes, but the width of the
seed hot channels increases significantly. After the peak of the second
precursor, the impulsive acceleration of the hot channels begins. In the process of
acceleration, merge reconnection between different seed hot channels likely occurs,
forming a longer and more twisted flux rope. The newly formed hot channel soon evolves
into an obvious kinking structure. After the first kinking hot channel disappears, the
second hot channel appears with an existing kinking structure at an adjacent location.
The eruption of these two hot channels produces two peaks on the flare's GOES light
curve, similar to the event studied by \citet{Wang2018b}. In this study we focus on the
formation and writhing motion of the first hot channel, and those of the second hot
channel remain unclear due to the strong background emission.
With the appearance and buildup of the hot channels, footpoint brightening motions
parallel and perpendicular to the PIL are observed by SDO/AIA. \citet{Priest2017}
proposed that magnetic reconnection has two phases: the 3D ``zipper reconnection''
between sheared arcades related to the elongation of the flare ribbons along the PIL,
and the quasi-2D ``main phase reconnection'' of unsheared fields around the flux rope
related to the expansion of flare ribbons away from the PIL. In the current event, the
motion of brightenings along the PIL is observed during two flare precursors and the
beginning of the flare main phase, and the expansion of brightening perpendicular to the
PIL is observed after the onset of the flare main phase. During the first precursor of
this event, the brightenings on the two sides of the PIL mainly show parallel
motion toward the north, which may correspond to the ``simple zippettes'' reconnection
between sheared arcades, and leads to the formation of flux ropes \citep{Priest2017}.
The brightenings on the two sides of the PIL mainly show parallel motion toward the
north during the second precursor, and this observation may be due to the occurrence of
``helical zippettes" reconnection on the foundation of the flux rope formed during the
first precursor, which results in the further twist accumulation in the flux ropes.
From the late stage of the second flare precursor to the impulsive rise of the flare,
the observed simultaneous southward and northward converging motion of footpoint
brightenings may correspond to the occurrence of ``converging reconnection'' between the
flux ropes formed during the two flare precursors. That is, these flux ropes reconnect
into a new flux rope which is much longer and has a much stronger twist
\citep{Priest2017}. The converging reconnection is similar to the ``rr-fr'' reconnection
that involves two flux-rope field lines that reconnect into another multi-turn
flux-rope field line and a flare loop proposed by \citet{Aulanier2019}. In
addition to the footpoint motion, the three jets observed during the two flare
precursors and flare main phase accompanying the footpoint motion also support the
occurrence of magnetic reconnection.
We have constructed a series of NLFFF extrapolations based on single,
isolated vector magnetograms observed by SDO/HMI before and during the eruption. The
extrapolations incorporate no information about the prior evolution of the photospheric
and coronal magnetic field \citep{Cheung2012}. One important condition for using NLFFF
is that the magnetic field must evolve slowly compared to the Alfv\'{e}n crossing time
\citep{Savcheva2012}. Therefore, our NLFFF extrapolations can model the buildup phase of
a flux rope before the eruption, but they fail in correctly describing the intrinsic
dynamic evolution of the magnetic configuration and plasma properties, since these
magnetic fields are independent of each other \citep{Kliem2013,Jiang2014}. Our NLFFF
extrapolations show that the mean and maximum values of $T_{w}$ in the region with
$T_{w} \le -1.75$ increase significantly during both flare precursors and the main
phase, and reach a maximum after the onset of the flare main phase. These results
suggest continuous twist accumulation through the different phases, which is
consistent with the increase of twist of the flux rope in the envisaged zipper
reconnection process. The series of NLFFF extrapolations shows that two flux ropes
with negative twist gradually widen and hence approach each other, and only one
flux rope is identified after the eruption onset. In the meantime, the AIA 131~\AA\
observations show that seed hot channels gradually approach and merge into a much longer
and more twisted hot channel, which is consistent with the theory that parallel flux
tubes with the same twist can merge into a single flux tube \citep{Linton2001}. Thus both
observations and extrapolations imply the occurrence of zipper reconnection and
merging between seed hot channels.
The impulsive acceleration of the hot channels begins after the peak of the second flare
precursor, which suggests that magnetic reconnection may contribute to the initiation of
the impulsive acceleration of the hot channels. The relative position between the
selected flux rope field lines and the decay index distribution on the X-Z section
indicates that the height of the flux rope does not reach the critical height of TI until about
20 minutes after the beginning of impulsive acceleration. Therefore, the onset of
impulsive acceleration of the hot channels is unlikely driven by TI, which is consistent
with the conclusion of \citet{Kang2019}. Their study also suggests that the
system is unstable against the DAI caused by the additional upward Lorentz force on the
bend of the double-arc current loop, which can occur even if the system does not reach the
critical condition of TI \citep{Ishiguro2017,Kusano2020}. However, our study suggests that
magnetic reconnection during the flare precursors may also play an important role in the
onset of the impulsive acceleration of the hot channels.
Our results can be summarized as follows: (1) The seed hot channels appear and build up
during the two flare precursors, and at these stages the footpoint motion parallel to
the PIL suggests the occurrence of zipper reconnection. (2) ``Simple zippettes"
reconnection during the first flare precursor leads to the formation of the seed hot
channels and contributes to their continuous buildup. ``Helical zippettes"
reconnection during the second flare precursor plays an important role in the onset of
the impulsive acceleration of the hot channels. (3) The merging between the seed hot
channels leads to the substantial twist buildup of the first kinking hot channel, which
shows writhing motion during its fast rise. An increase of the Lorentz force is
identified in association with the writhing motion, which may be driven by the combined effect of the
Lorentz force induced by the external sheared field and the magnetic tension release of
the twisted field. (4) Unlikely driven by TI, the impulsive acceleration of the hot
channels may be attributed to magnetic reconnection during the second flare precursor.
\acknowledgments
The authors thank the referee for providing constructive suggestions to improve the
paper. We also thank the SDO teams for providing the valuable data. This work is
supported by the National Key R\&D Program of China 2021YFA1600500
(2021YFA1600502), the Chinese foundations NSFC (12173092, 41761134088, 11790302
(11790300), U1731241, 41774150, 11925302, and 42188101), and the Strategic Priority
Research Program on Space Science, CAS, Grant Nos. XDA15052200 and XDA15320301.
\section{Introduction}
The internet has evolved to become one of the main sources of textual information for many people. Through social media, reviews, and comment sections across the internet, people are continuously consuming information through text. With this, racially biased content has become more entrenched within the language of the internet. Racially biased content in this context refers to the attitudes or stereotypes expressed against marginalized races.
This is often a result of implicit bias escalating into hate speech. In this work, we attempt to automatically detect racially biased content in data collected from the web, including comments from online news outlets such as Fox News and from YouTube videos. We label this dataset with pointers to racial bias and use machine learning techniques to automate this task. Specifically, we implement BERT as a base model for the task. We also implement a browser extension as a tool to help people identify racially biased content in the information they are consuming. We will also be releasing our curated dataset, BiasCorp, to allow more research to be done in this direction.
\section{Related works}
One of the earliest papers to investigate machine learning approaches for the automatic detection of racially-biased online content is \cite{greevy_2004_princip}. The paper identified the potential use of bag-of-words, n-grams, and distributions of parts-of-speech tags as features for the task. Their bag-of-words features are informed by ideas from the field of information retrieval, and involve either word frequencies or counts of word occurrences. Using an SVM classifier with bag-of-words features, they found that the use of word frequencies, rather than word-occurrence counts, yielded greater classification accuracies. Results for the n-gram and parts-of-speech-tag techniques were unavailable at the time of their writing.
In \cite{warner}, authors followed the definition of \cite{nockleby} by defining hate speech as “any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristic.” Their work focused more on detecting anti-Semitic hate speech. For their work, they created a dataset containing hate speech obtained from Yahoo! and the American Jewish Congress. Following the work of \cite{yarowsky_1994}, they employed hand-crafted template-based features. Apart from the fact that these features are hand-engineered, a potential drawback is their sheer size: a total of 3,537 features, which is prone to the curse of dimensionality. A counter-intuitive result reported by the paper is that the uni-gram features contributed best to classification accuracies. They used linear-kernel SVMs for classification.
The work of \cite{yahoo_nobata} dealt with the broad category of abusive language. Authors of the work gave definitions for distinguishing between three categories of abusive language: hate speech which subsumes racial bias, derogatory remarks and profanity. Further, they described reasons why automatic detection of abusive language, which subsumes racial bias, is difficult. Reasons include: clever evasion of detection engines by users via the use of mischievous permutations of words (e.g. Niggah written as Ni99ah); evolution of ethnic slurs with time; role of cultural context in the perception and interpretation of slurs, as a phrase that is considered derogative in one culture might be perfectly neutral in another culture. Towards building their classification model, they employed four categories of features namely, n-grams, lexical features, syntactic/parser features, and word-level as well as comment-level embeddings. They found that character-level n-grams gave the highest contribution to the model’s accuracy.
The authors of \cite{us_and_them} also developed techniques for detecting multiple hate speech categories including the racially-based category. Towards creating their datasets, they harnessed hate speech event-triggers. For example, to create their racial bias dataset, they collected tweets in a two-week interval following the re-election of Barrack Obama as U.S president. They explored a number of potential features towards building their classification algorithm: bag of words, lexicon of hateful terms, and typed dependencies. In addition, they compared classification via SVMs with classification via random forests, and reported that the former yielded superior performance over the latter. Also, they compared the use of classifiers trained for each hate speech category against the use of a single classifier trained on data spanning all categories. As expected, the specialized classifiers outperformed their multi-category counterpart.
\cite{demo_embed} followed the definition of \cite{gelber_2007}, which states that hate speech is: ”speech or expression which is capable of instilling or inciting hatred of, or prejudice towards, a person or group of people on a specified ground, including race, nationality, ethnicity, country of origin, ethno-religious identity, religion, sexuality, gender identity or gender.” The main research thrust of their work was to apply demographic embeddings \cite{bamman} , \cite{hovy}, for the task of racial bias detection in tweets. Compared to other works such as \cite{us_and_them}, for instance, a particularly distinguishing result of \cite{demo_embed} is how their data extraction procedure is able to arrive at a better balanced ratio of racially-biased to non-racially-biased comments. For example, in the work, 40.58 percent of Canadian tweets were judged racially-biased by human annotators, whereas in \cite{us_and_them} only about 3.73 percent of the comments in the dataset are racially biased. Classification results using an SVM classifier revealed benefits of their proposed demographic embeddings over traditional features and embeddings.
In \cite{saleh_2020_white_sup}, the authors explored the detection of hate speech in White supremacist forums. They explored BiLSTM, logistic regression and BERT for their task. Also, they compared the use of domain-agnostic pre-trained word embedding (such as GloVe.6B.300d ) versus the use of a domain-aware 300-dimensional word2vec embedding trained on the specific dataset used in the work. Results showed that BERT yields better results than both logistic regression and BiLSTM. Further, results proved the domain-aware embeddings to be superior to the pre-trained embeddings.
\section{Method}
\subsection{Data curation and processing}
The datasets used for training were obtained from discussion channels of online news media by a programmed web crawler based on the Scrapy framework, with all crawled data stored in a PostgreSQL database. Since the comments of an online article are generally loaded through an asynchronous API, accessed by a specific key hidden in the article before the comments are presented on the website, the web crawler first compiled a list of URLs of all articles to be crawled, then parsed the key for each article and matched it with the corresponding API to retrieve the stored comments.
First, sentences containing neutral racial words from a curated list were selected. Second, the sentiment score of each comment was calculated according to two lookup tables: a combined and augmented \cite{jockers_2017} and Rinker's augmented Hu and Liu \cite{sentimentr} \cite{hu_liu_aug_sentimentr} positive/negative word list as sentiment lookup values, and a racial-related English lookup table from Hatebase\footnote{https://hatebase.org/}. To guarantee that these two tables influence the sentiment score consistently, the lookup values of the Hatebase table were adjusted by percentage. Then we extracted the data in the bottom 20 percent of the sentiment score and matched them with randomly selected comments appearing under the same articles or videos as random controls. Finally, an equal number of random controls was added into the data set, ensuring that approximately half of the data is potentially racially discriminatory.
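For concreteness, the selection step can be summarised by the following minimal Python sketch. It is illustrative rather than the exact crawler code: the lookup tables are assumed to be plain word-to-value dictionaries, and the tokenisation and the \texttt{select\_candidates} helper are our own simplifications.
\begin{verbatim}
import random

def sentiment_score(comment, sentiment_lookup, hatebase_lookup):
    # Sum per-word lookup values; the Hatebase values are assumed to be
    # pre-scaled (percentage-adjusted) to match the sentiment table.
    tokens = comment.lower().split()
    return (sum(sentiment_lookup.get(t, 0.0) for t in tokens)
            + sum(hatebase_lookup.get(t, 0.0) for t in tokens))

def select_candidates(comments, sentiment_lookup, hatebase_lookup,
                      frac=0.2):
    # Keep the bottom `frac` of comments by sentiment score and pair
    # them with an equal number of random controls from the same pool.
    scored = sorted(comments, key=lambda c: sentiment_score(
        c, sentiment_lookup, hatebase_lookup))
    k = int(frac * len(scored))
    return scored[:k] + random.sample(scored[k:], k)
\end{verbatim}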
\subsection{Model Architecture}
Attention-based Transformer networks \cite{vaswani2017attention} have been used widely across different natural language processing tasks. Based on the previous successes of the transformer network, we decided to use the BERT architecture \cite{devlin2019bert} as our base model. Unlike previous variants of attention-based language models such as \cite{gpt}, BERT learns to jointly condition on the right and left context of the input representation at all layers by randomly masking out segments of the input tokens. This is particularly useful for extracting contextual information from the input representation, and it is very applicable to our use case. We aim to build a variant of the model that can generalize \textit{sufficiently} well across different data distributions\footnote{distributions here implies different use cases or data environments/sources}. The notion of \textit{sufficiency} is evaluated by training, validating and testing our model on data across the different sources. We fine-tune the pretrained BERT model on our curated dataset rather than training from scratch (this choice was based on empirical results).
We are releasing a JavaScript library for developers to use our pretrained model in front-facing applications, such as chat apps, to flag racially biased comments. Consequently, we need to optimize the model complexity without sacrificing performance. BERT has a huge number of parameters and a large model size. Other methods have been employed to reduce the complexity without hurting the performance, such as knowledge distillation \cite{distillbert} and quantization \cite{bert_quant}. It has also been proven that pruning the weights of the pretrained model does not necessarily affect the model performance, within acceptable 'thresholds' \cite{gordon2020compressing}. In a similar fashion, we aim to reduce the complexity of BERT without sacrificing performance by replacing certain layers with the Hopfield layer \cite{ramsauer2020hopfield}.
The Hopfield layer can be used to replace the attention-based layer of the BERT model, as it has been shown to approximate the functionality of the attention mechanism with a new energy update rule (a modified version of the Hopfield network extended to continuous state representations). The learning dynamics of BERT, as shown in \cite{ramsauer2020hopfield}, indicate that the attention heads in the higher layers are mostly responsible for extracting task-specific features from the input representation. We replaced the self-attention mechanism in the last \( X \) layers of the pretrained BERT model with a Hopfield layer, where \( X \) is a hyperparameter. In an approach similar to that described in \cite{vaswani2017attention}, we use a residual connection around the Hopfield sub-layer, followed by layer normalization \cite{ba2016layer}. It has been shown that residual connections help propagate positional information across layers.
The replaced Hopfield layers drastically reduce the parameter size of our model. To further improve the performance of the model, we use a Hopfield pooling layer, which acts both as a permutation-equivariant layer and pools the embeddings generated by the modified BERT model. The Hopfield pooling layer also acts as a form of memory storing the hidden state of the last layer in the modified BERT model.
Finally, we add a classification layer on top of the pooling layer for the task in question.
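The sketch below shows, in PyTorch, how such an architecture can be assembled. It is a schematic under simplifying assumptions rather than our exact implementation: the one-step modern-Hopfield retrieval is written out explicitly, and all module names and dimensions are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class HopfieldSublayer(nn.Module):
    # One-step modern-Hopfield retrieval (Ramsauer et al., 2020):
    # xi_new = softmax(beta * Q(xi) K(X)^T) V(X); a simplified
    # stand-in for the Hopfield layer replacing self-attention.
    def __init__(self, dim, beta=1.0):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.beta = beta

    def forward(self, x):                      # x: (batch, seq, dim)
        scores = self.beta * self.q(x) @ self.k(x).transpose(-2, -1)
        attn = torch.softmax(scores / x.size(-1) ** 0.5, dim=-1)
        return attn @ self.v(x)

class HopfieldEncoderLayer(nn.Module):
    # Transformer block whose attention core is the Hopfield sublayer,
    # with residual connections and layer normalization as in the text.
    def __init__(self, dim):
        super().__init__()
        self.hopfield = HopfieldSublayer(dim)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                nn.Linear(4 * dim, dim))

    def forward(self, x):
        x = self.norm1(x + self.hopfield(x))   # residual around Hopfield
        return self.norm2(x + self.ff(x))

class HopfieldPoolingHead(nn.Module):
    # Pools the final hidden states with a learned state pattern (a
    # memory-like readout over the sequence), then classifies.
    def __init__(self, dim, num_classes=6):
        super().__init__()
        self.state = nn.Parameter(torch.randn(1, 1, dim))
        self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (batch, seq, dim)
        attn = torch.softmax(self.state @ self.k(x).transpose(-2, -1)
                             / x.size(-1) ** 0.5, dim=-1)
        return self.classifier((attn @ self.v(x)).squeeze(1))
\end{verbatim}
In the full model, only the last \(X\) encoder layers of the pretrained BERT take the \texttt{HopfieldEncoderLayer} form, while the lower layers keep their pretrained self-attention weights.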
\subsection{Model Training}
Given the disparity between the annotators for each sample in our dataset, averaging the labels with the confidence scores as weights might be noisy.
We computed the coefficient of variation \(CV\) among annotators for each sample in our dataset.
Using the recommended \cite{cov_article} \cite{DBLP:journals/corr/VeitACKGB17} \(CV\) threshold of \textit{0.2} for the bias scores would imply dropping \(90\%\) of the dataset, as seen in Fig.~\ref{fig:cov}. In order to fully utilize the dataset and effectively manage the disparity between the annotators, we formulate a loss function \(\mathcal{L}_{model}\) given by
\begin{equation}
\mathcal{L}_{model} = \frac{1}{N}\sum_{i=1}^{N} CE\bigg(p\big(x_i\big),q\big(x_i\big)\bigg)
\end{equation}
where \(CE\big(p\big(x_i\big),q\big(x_i\big)\big)\) is the cross entropy between \(p(x_i)\) and \(q(x_i)\) for the \(ith\) sample, and \(N\) is the size of the dataset.
\begin{equation}
CE(p,q) = -\sum_{i=1}^{c}p_c(x)\log(\epsilon + q_c(x))
\end{equation}
\(q_c(x)\) is the predicted probability of sample \(x\) in class \(c\), equivalently, the output probabilities from the model, and \(\epsilon\) is for numerical stability. \(p_c(x)\) is the probability of sample \(x\) in class \(c\); equivalently, \(p_c(x)\) is a \(c\)-length vector with entries such that \(\sum_{i=1}^{c}p_c(x)=1\). The entries of \(p_c(x)\) are the normalized confidence scores of the annotators, with indices given by the respective voted classes. As an example, following Algorithm~\ref{algo:form}, for the sample shown in Fig.~\ref{fig:sample} the bias scores of the \(3\) different annotators with their confidence levels are represented by an array of tuples \(X\), where each tuple \((b_i,s_i)\) is the bias score \(b_i\) with the associated confidence score \(s_i\) of annotator \(i\). To calculate \(p_c(x)\), we first normalize the confidence scores across the \(3\) annotators such that \(\sum_{i=1}^{3}s_i=1\). The resulting \(p_c(x)\) for the entry \(X\) shown in Fig.~\ref{fig:sample} is
\begin{align*}
X &= \bigg[ (4,4), (3,3), (2,5) \bigg] \\
X_{norm} &= \bigg[ (4,0.3333), (3,0.25), (2,0.4167) \bigg] \\
p_c(X) &= [ 0., 0., 0.4167, 0.25, 0.3333, 0. ]
\end{align*}
\normalsize
\begin{algorithm}
\SetAlgoLined\SetArgSty{2em}
\KwResult{\(p_c(x)\) }
\BlankLine
\textbf{\emph{Input:}} An array of target scores \textbf{\emph{t}}, and array of confidence scores
\textbf{\emph{s}} where \textbf{\emph{s[i]}} is the confidence score by \textbf{\emph{ annotator i}} for choosing target score \textbf{\emph{t[i]}} \\ Both arrays are of equal length N where N is the number of annotators. \textbf{\emph{C}} is the number of classes (equivalently the range/max of possible target scores if scores are integer.) \\
\BlankLine
\textbf{\emph{Step 1: Initialize \(p_c \leftarrow [\:\:.0 \quad \textbf{\emph{for}}\quad \textbf{\emph{\_}} \quad \textbf{\emph{in}} \quad C]\)}}\\
\BlankLine
\textbf{\emph{Step 2: Calculate normalizing constant K}}\\
\BlankLine
\textbf{ \(K \leftarrow \sum_{i=1}^{N} \textbf{\emph{$s_i$}} \) } \;
\BlankLine
\textbf{\emph{Step 3: Set the values of \(p_c\)}}\\
\BlankLine
\For{i in N}{
\( class\_index \leftarrow t[i] \)\;
\(p_c[ class\_index ] \xleftarrow{+} \frac{s[i]}{K}\) \;
}
\caption{Compute \(p_c(x)\) for a sample x}
\label{algo:form}
\end{algorithm}
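A direct NumPy transcription of Algorithm~\ref{algo:form}, together with the batch loss \(\mathcal{L}_{model}\) and the per-sample coefficient of variation used for Fig.~\ref{fig:cov}, reads as follows (function names are ours):
\begin{verbatim}
import numpy as np

def p_c(target_scores, confidence_scores, num_classes=6):
    # Algorithm 1: accumulate normalized annotator confidences at the
    # indices of the voted bias scores.
    p = np.zeros(num_classes)
    K = sum(confidence_scores)                 # normalizing constant
    for t, s in zip(target_scores, confidence_scores):
        p[t] += s / K
    return p

# Worked example from the text: bias scores (4, 3, 2) with confidence
# scores (4, 3, 5) give p_c = [0, 0, 0.4167, 0.25, 0.3333, 0].
print(p_c([4, 3, 2], [4, 3, 5]).round(4))

def coefficient_of_variation(scores):
    # CV among the annotators of a single sample (std over mean).
    scores = np.asarray(scores, dtype=float)
    return scores.std() / scores.mean()

def model_loss(pred_probs, target_probs, eps=1e-12):
    # Cross entropy between p(x) and the predicted q(x), averaged
    # over the batch; eps is for numerical stability.
    ce = -(target_probs * np.log(eps + pred_probs)).sum(axis=-1)
    return ce.mean()
\end{verbatim}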
\subsection{Evaluation Task and Metrics}
We evaluate the model performance across the validation and test sets, given that they are from different distributions or sources. The test set contains only comments from YouTube, while the validation set was randomly sampled from Fox News and BreitbartNews. This particular choice was made because the first batch of the dataset used for training contained relatively few samples from YouTube.
We evaluate our approach using two methods: multiclass classification and multiclass-multilabel classification. \\
\textbf{Using the multiclass approach,} for a given sample \( k \), using the method described previously for calculating the target class, the class with the maximum confidence score was used as the target. We calculate the average precision for each class, \( AP_c \), and the mean average precision \( MAP\), averaged over the entire dataset of size \( N \) along the class dimension \( d \), as described in \cite{DBLP:journals/corr/VeitACKGB17}
\begin{align} \label{eq:metrics}
AP_c &= \frac{\sum_{k=1}^{N} \mathrm{Precision}(k, c) \cdot \mathrm{rel}(k,c)}{\text{number of positives}} \\
MAP &= \frac{1}{d}\sum_{c=1}^{d} AP_c
\end{align}
\onecolumn
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{df_example.png}
\caption{Sample annotation}
\label{fig:sample}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfloat[Bias Score]{\includegraphics[width=0.5\textwidth]{bias_cv_prob.png} }
\subfloat[Confidence Score]{\includegraphics[width=0.5\textwidth]{confidence_cv_prob.png}}
\caption{Confidence of Variation}
\label{fig:cov}
\end{figure*}
\begin{figure*}[h!]
\subfloat[Train Vs Validation Loss]{\includegraphics[width=0.9\textwidth, height=0.3\textwidth]{trainvalloss.png} }
\label{fig:trainloss}
\end{figure*}
\begin{table*}[h!]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Model} &
\multicolumn{3}{c|}{TopK Accuracy} &
\multirow{1}{*}{mAP} &
\multicolumn{3}{c|}{F1 @ k} &
\multicolumn{3}{c|}{IoU @ k} \\
& 1 & 2 & 3 & & 1 & 2 & 3 & 1 & 2 & 3\\
\hline
Baseline & 0.6015625 & 0.703125 & 0.7890625 & 0.29355 & 0.5859 & 0.6953 & 0.7734 & 0.2102 & 0.2114 & 0.2102\\
\hline
hBert & \textbf{0.640625} & 0.703125 & 0.765625 & \textbf{0.3501} & \textbf{0.6562} & \textbf{0.7109} & \textbf{0.8125} & \textbf{0.2266} & \textbf{0.2165} & \textbf{0.2281} \\
\hline
\end{tabular}
}
\caption{Test Metrics for selected trial for each model configuration}
\label{tab:summary1}
\end{table*}
\begin{table*}[h!]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Model} &
\multicolumn{6}{c|}{AP} \\
& 0 & 1 & 2 & 3 & 4 & 5\\
\hline
Baseline & 0.2205 & 0.0967 & 0.1344 & 0.9564 & 0.1103 &0.2340 \\
\hline
hBert & 0.1195 & \textbf{0.1111} & \textbf{0.2132} & \textbf{0.9607} & \textbf{0.5049} & 0.1914 \\
\hline
\end{tabular}
}
\caption{The Average Precision (AP) for the different classes}
\label{tab:summary2}
\end{table*}
\begin{figure*}[tb!]
\centering
\includegraphics[width=1\textwidth, height=25em]{useHopfieldLayers.jpeg}
\captionsetup{justification=centering}
\caption{Parallel coordinate graph for multiple runs/trials across model configurations. \\ The model configuration is the Baseline when the target variable (\textbf{\emph{useHopfieldLayers}}) in the graph is False. \\ The \textbf{\emph{useHopfieldPool}} variable denotes whether the Hopfield pooling layer was used. \\
The \textbf{\emph{lr, pool\_num\_heads, num\_hf\_layers, val\_loss\_epoch}} variables in the graph are the learning rate, the number of heads in the Hopfield pooling layer (if used), the number of Hopfield layers, and the validation loss, respectively. With a reduced model complexity, the hBert performs about as well as the baseline.}
\label{fig:paralel}
\end{figure*}
\twocolumn
where \(\mathrm{Precision}(k,c)\) is the precision for class \( c\) for the \(k\)th sample and \(\mathrm{rel}(k,c)\) is an indicator function that is 1 if both the predicted and the target class for sample \(k\) are positive. We also report the \(topK\) accuracy for \(k=[1,3]\), since we had a maximum of 3 annotators for each sample. \\
\textbf{Using the multilabel approach,} for a given sample \( k \), using the method described previously for calculating the target class, we take the top \(k\) classes as the targets. We do the same for the predictions (obtained after passing the output logits through a softmax function). We compute the \(AP_c\) (for each class), \(mAP\), \(F1\) score, and \(IoU\), as sketched below.
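The following NumPy sketch illustrates Eq.~(\ref{eq:metrics}) and the \(IoU\) at \(k\); it is illustrative rather than our exact evaluation code:
\begin{verbatim}
import numpy as np

def average_precision(scores, labels):
    # AP_c: rank the N samples by score for class c; Precision(k,c) is
    # the precision among the top-k; rel(k,c) marks the positives.
    order = np.argsort(-scores)
    labels = labels[order]
    prec_at_k = np.cumsum(labels) / (np.arange(len(labels)) + 1)
    n_pos = labels.sum()
    return float((prec_at_k * labels).sum() / n_pos) if n_pos else 0.0

def mean_average_precision(score_matrix, label_matrix):
    # mAP: average of AP_c over the d class columns.
    d = score_matrix.shape[1]
    return float(np.mean([average_precision(score_matrix[:, c],
                                            label_matrix[:, c])
                          for c in range(d)]))

def iou_at_k(pred_probs, target_probs, k):
    # Jaccard index between top-k predicted and top-k target classes.
    pred = np.argsort(pred_probs, axis=-1)[:, -k:].tolist()
    tgt = np.argsort(target_probs, axis=-1)[:, -k:].tolist()
    scores = [len(set(p) & set(t)) / len(set(p) | set(t))
              for p, t in zip(pred, tgt)]
    return float(np.mean(scores))
\end{verbatim}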
\section{Experiments}
\subsection{Training Details \& Result}
We run a multi-objective hyperparameter search (using Optuna\cite{optuna_2019}) optimizing the following quantities: validation loss, FLOPs (indicative of the model complexity and ultimately the inference time), mAP on the validation and test sets, and the Intersection over Union (IoU) scores (also known as the Jaccard index) for the top-\(k\) transformations described above, with \(k=[1,3]\). We use 4 NVidia V100SXM2 (16G memory) GPUs on a single node, with a batch size of 32. We reduced the batch size (instead of, say, 64) because we had to run multiple trials and wanted to avoid out-of-memory errors. For each model configuration, we run 10 trials with 5 epochs each.
As seen in Fig.~\ref{fig:paralel}, the hBert performs relatively better with a reduced model complexity. In Table~\ref{tab:summary1}, the models' predictions are more accurate for increasing \(k\). The hBert performs better than the Baseline in Top1 accuracy. The F1 scores and Jaccard indices (IoU) of the hBert are relatively higher for \(k=[1,3]\). The \(mAP\), which is the average of the \(AP_c\) over the classes, is relatively low because of the low-performing classes, as seen in Table~\ref{tab:summary2}.
\subsection{Data statistics}
The data set contains 139,090 rows, and 67.70 percent of their sentiment scores are negative. Their average sentiment score is -0.1422, and the median value is -0.1203, ranging from -3.6206 to 2.1414. Of these, 66,998 are comments from Fox News, with an average sentiment score of -0.0997 and a median of -0.0884, ranging from -2.8591 to 2.1414; 63,948 are comments from Breitbart News, with an average sentiment score of -0.1760 and a median of -0.1721, ranging from -3.6206 to 1.3576; and 8,144 are comments from YouTube, with an average sentiment score of -0.2259 and a median of -0.2694, ranging from -3.3000 to 1.4673.
In this work, we used the first batch of the dataset; which have been manually annotated using Amazon Mechanical Turk. After pre-processing the input text (removing irrelevant tokens such as mentions), the maximum length was 478 (it was 623 before preprocessing).
\section{Discussion}
In this work we have shown a way to detect racial bias in text. We experimented with a BERT-based model as we aim to reduce model complexity without sacrificing much of the performance.
We also discussed BiasCorp, a manually labelled dataset containing racially biased comments from Fox News, BreitbartNews and YouTube. To enable developers to make use of our pretrained hBERT model, we are releasing a JavaScript library, optimized for inference on the edge. A Chrome extension will also be available to help users report and identify racially biased text on the web. We also plan to extend this work to other forms of biases, such as gender. In a future work, we plan to further reduce the model complexity by using a Gaussian kernel as described in \cite{ramsauer2020hopfield} and other quantization tricks.
\section*{Acknowledgments}
This research was enabled in part by support provided by Calcul Québec (www.calculquebec.ca) and Compute Canada (www.computecanada.ca)
\section{Introduction}
The interplay of the strong spin-orbit interaction (SOI) in Rashba materials~\cite{manchon2015new} and Coulomb electron-electron (e-e) repulsion leads to a variety of qualitatively new physical effects, including the emergence of new correlated states~\cite{PhysRevB.85.035116,PhysRevB.89.155103,PhysRevB.88.075115,PhysRevLett.115.026401}, unusual collective modes~\cite{PhysRevB.91.035106,D_Amico_2019,PhysRevB.102.195208}, and even bound electron pairs (BEPs)~\cite{PhysRevB.104.125103}.
The Rashba SOI is produced by the electric fields external to the crystal lattice. In quantum structures the common sources of this field include the confining potential, the charged impurities and structure defects. It is well known that the Coulomb fields of interacting electrons also produce the SOI, which manifests directly in the e-e interaction Hamiltonian~\cite{bethe2012quantum}. As a result, the interaction Hamiltonian gains a contribution that depends on the electron spins and momenta. Effects of this so-called pair spin-orbit interaction (PSOI) were until very recently considered only as a small perturbation in such problems as spin dynamics, spin-spin interaction, spin current generation, etc~\cite{BOGUSLAWSKI1980389,PhysRevB.72.161304,PhysRevB.79.195305,PhysRevB.84.033305}. However, similarly to the Rashba SOI, the PSOI is strongly enhanced in Rashba materials and therefore can produce strong changes in electronic states~\cite{2019arXiv190506340G}.
The strong PSOI can generate a plethora of non-trivial effects due to the effective attraction that this interaction creates. The attraction mechanism is quite clear~\cite{2019arXiv190506340G}. The PSOI created by the Coulomb electric field of a given electron decreases the energy of another electron possessing a particular spin orientation relative to its momentum. This effect increases with decreasing the distance between electrons, which exactly implies the attraction. The attraction can lead to the emergence of BEPs with highly unusual configuration of the charge and spin density~\cite{PhysRevB.98.115137,2018arXiv180410826G,2019arXiv190409510G}.
Of greatest interest is, of course, the collective behavior of a many-electron system with a strong PSOI, but this problem has been studied extremely poorly to date. In Ref.~\cite{PhysRevB.95.045138} we considered a specific situation of a gated one-dimensional quantum wire with the PSOI produced by means of the image charges on the gate. In this case the PSOI leads to the appearance of a correlated state with unusual collective excitations. One of the two collective modes strongly softens in the long-wavelength part of the spectrum with increasing the PSOI strength, and becomes unstable when the PSOI exceeds a critical value.
In this paper, we turn to a more general statement of the problem by considering a two-dimensional (2D) electron system with a richer configuration of Coulomb fields generating PSOI\@. The electron dynamics is described using the $k \cdot p$ method~\cite{voon2009kp}, assuming that the Coulomb electric fields are sufficiently smooth.
To begin with, we note that in the presence of the PSOI the effective strength of the e-e interaction is determined by two parameters. This is in stark contrast to the conventional case of an electron gas with Coulomb interaction only, where the interaction strength is characterized by the parameter $r_s$, which is the ratio of the inter-electron distance to the Bohr radius $a_B$. The e-e interaction Hamiltonian contains the PSOI component $H_{\mathrm{PSOI}}$ in addition to the usual Coulomb term $H_{\mathrm{Coul}}$, so that the system Hamiltonian is
\begin{equation}
\label{ham}
\begin{split}
H ={} & H_{\mathrm{kin}} + H_{\mathrm{Coul}} + H_{\mathrm{PSOI}}\\
={} & \sum_{i} \frac{\hat{\bm{p}}_i^2}{2m} + \frac12 \sum_{i \ne j} \mathcal{U} (\bm{r}_{i} - \bm{r}_{j}) \\
&{}+ \frac{\alpha}{\hbar} \sum_{i \ne j} \left( \hat{\bm{p}}_{i} \times \bm{\mathcal{E}} (\bm{r}_{i} - \bm{r}_{j}) \right) \cdot \bm{\sigma}_{i} \,.
\end{split}
\end{equation}
Here $\mathcal{U}(\bm{r}) = e^2/ \epsilon r$ is the Coulomb interaction potential, $\bm{\mathcal{E}}(\bm{r}) = \frac{1}{e} \nabla_{\! \bm{r}} \mathcal{U}(\bm{r})$ is the pair Coulomb field that produces PSOI, $\hat{\bm{p}}_i$ is the momentum operator of the $i$-th electron, $m$ is the effective mass, $\bm{\sigma} \equiv (\sigma_x,\sigma_y,\sigma_z)$ is the Pauli vector, and $\alpha$ stands for the Rashba constant of the material. It is convenient to introduce the dimensionless Rashba constant $\tilde{\alpha} = \alpha/e a_B^2$.
The e-e interaction strength is characterized by the ratio of the interaction energy to the Fermi energy. The parameter $r_s$ relates only to the Coulomb term. The contribution of the PSOI term is described by another parameter $\tilde{\alpha}/r_s$. It is remarkable that both parameters depend differently on the parameters of the electronic system. In particular, while the parameter $r_s$ decreases with increasing the electron density, the parameter $\tilde{\alpha}/r_s$, on the contrary, increases. Therefore the PSOI correlations can dominate when the density is high enough. While the effect of Coulomb correlations is largely understood~\cite{giuliani2005quantum}, the role of the PSOI-induced correlations and the conditions under which they lead to a radical rearrangement of the electronic system remain to be elucidated.
This paper aims to find out whether the PSOI creates characteristic correlations, under what conditions they become significant, and how this manifests itself in the spectrum of collective excitations. To this end, we study the collective excitations and charge correlations in a 2D electron system with the in-plane reflection symmetry, where the PSOI is produced by the in-plane pair Coulomb field. The calculations are carried out in the framework of the random phase approximation (RPA).
We have found that the static structure factor $S(q)$ as a function of the wave vector $q$ acquires a sharp peak around a certain value of $q = q_c$ when the PSOI parameter is large enough $\tilde{\alpha}/r_s \gtrsim 1/4 $, which indicates that the PSOI component of the e-e interaction is comparable in magnitude to the Fermi energy. The peak clearly shows the appearance of strong electron correlations on the $q_c$ scale, which are specific for the PSOI\@. They arise owing to the competition between the Coulomb repulsion of electrons and their attraction caused by the PSOI, which determines this characteristic spatial scale. Interestingly, the PSOI correlated state appears at rather high density of electrons and, correspondingly, at small $r_s$, when the usual Coulomb interaction is small.
When $\tilde{\alpha}/r_s$ exceeds a critical value, a new mode due to PSOI arises in the spectrum of collective excitations of the system in addition to common long-wave plasmons. The mode exists only in a finite band of wave vectors around $q_c$, the band width growing with $\tilde{\alpha}/r_s$. The mode frequency is purely imaginary. It is interesting that the electron density fluctuations growing with time are not polarized in spin. Thus the spatially uniform paramagnetic state of the electron system becomes unstable with respect to the charge density fluctuations on the $q_c$ scale. For realistic values of the SOI parameter $\tilde{\alpha} \ll 1$ in Rashba materials, the critical value $q_c \propto \tilde{\alpha}^{1/3} k_F$ lies in the long-wave part of the spectrum.
\section{Model and results}
In this section we consider the linear response of the 2D electron gas with PSOI to the external electric potential, the dynamic charge susceptibility, the static structure factor, and the spectrum of the collective modes.
The 2D electron system is assumed to be symmetric with respect to the inversion of the normal to the plane.
In this case the PSOI is produced by the in-plane pair Coulomb field in contrast to the gated one-dimensional quantum wire where only the normal component of the Coulomb field is important~\cite{PhysRevB.95.045138}. It is worth noting that the PSOI crucially depends on the geometry of the generating electric fields and momenta of interacting electrons. In the situation under consideration, both of these quantities are 2D vectors, the topology of which is determined self-consistently.
The results are obtained using the equation of motion for the quantum Wigner function, which we derive and solve in the RPA, following Ref.~\cite{PhysRevB.95.045138}. The details of the calculation are presented in Appendix~\ref{appa}.
\subsection{Charge susceptibility}
The density $n^{(s)}_{q\omega}$ of the electrons with the $z$-component of the spin equal to $s=\pm 1$, in units of $\tfrac{\hbar}{2}$, satisfies the following system of linear equations
\begin{align}
\label{linearsystem}
\chi_{0}^{-1} n^{(s)}_{q\omega} - V_{q\omega} \sum_{\varsigma=\pm} n^{(\varsigma)}_{q\omega} = \varphi_{q\omega}\,,
\end{align}
with the external potential $\varphi_{q\omega}$, and the interaction potential
\begin{equation}
V_{q \omega} = \mathcal{U}_q + 8 \frac{\alpha^2}{e^2} \mathcal{U}^2_q \chi_j\,.
\end{equation}
The first term of the interaction potential is due to the Coulomb e-e repulsion. For the 2D electron gas formed in a uniform system with a bulk dielectric constant $\epsilon$ the e-e repulsion is governed by the pure Coulomb potential $\mathcal{U}_{q} = 2 \pi e^2 /\epsilon q$. The second term of the interaction potential is exactly due to the PSOI\@. The dynamic susceptibilities $\chi_{0}$ and $\chi_j$ are given by Eq.~\eqref{chi0} and Eq.~\eqref{chij}.
Since $V_{q \omega}$ is spin-independent, the solutions of Eq.~\eqref{linearsystem} correspond to the equal response of up- and down-spin densities, $n^{(+)}_{q\omega} = n^{(-)}_{q\omega}$. The dynamic charge susceptibility is
\begin{equation}
\label{chin}
\chi_n(q,\omega) = \frac{1}{{(2 \chi_0)}^{-1} - V_{q \omega}}\,.
\end{equation}
\subsection{Static structure factor}
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{structure_factor.pdf}
\caption{The static structure factor $S(q)$ as a function of $q$ for three values of $r_s$. The PSOI magnitude is $\tilde{\alpha}=0.1$, which corresponds to $r_s^*=0.3$.\label{sfig}}
\end{figure}
Consider the static structure factor $S(q)$, which is related to the charge susceptibility of Eq.~\eqref{chin} via
\begin{equation}
S(q) = -\frac{\hbar}{\pi n} \int_0^{\infty} d\omega\, \im \chi_n(q,\omega)\,,
\end{equation}
$n$ being the mean electron density. It is of interest to study the structure factor as a function of $q$ for different values of the e-e interaction parameters. Since there are two such parameters, it is convenient to fix the value of the PSOI constant $\tilde{\alpha}$ and change the parameter $r_s$ in such a way that both interaction parameters, $r_s$ and $\tilde{\alpha}/r_s$, are varied. The result is plotted in Fig.~\ref{sfig}.
First of all, we found that the structure factor has a strong singularity at a certain value of the parameter $r_s=r_s^*$,
\begin{equation}
r_s^* = \frac{2^{\frac{13}{6}} \tilde{\alpha}}{\sqrt{2^{\frac13} +2 {\tilde{\alpha}}^{\frac23}3^{\frac23}}}\,.
\end{equation}
As $r_s$ lowers down to this critical value, the spectral weight is shifted towards the long-wave part of the spectrum, and eventually a sharp peak is formed in the structure factor at the critical value $q_c$ of the wave-vector, given by
\begin{equation}
\frac{q_c}{k_F} = \frac{2 {\tilde{\alpha}}^{\frac13} 3^{\frac13}}{\sqrt{2^{\frac13} +2 {\tilde{\alpha}}^{\frac23}3^{\frac23}}}\,,
\end{equation}
which indicates the appearance of strong electron correlations due to PSOI\@.
The characteristic spatial scale $q_c$ arises as a result of the competition between the Coulomb repulsion of electrons and their attraction caused by the PSOI\@. Its dependence on $\tilde{\alpha}$ is displayed in Fig.~\ref{qcfig}. When PSOI is extremely strong, $\tilde{\alpha} \gg 1$, the critical value tends to $q_c = \sqrt{2} k_F$. For small SOI parameter $\tilde{\alpha} \ll 1$, typically found in common Rashba materials, the critical value $q_c \propto \tilde{\alpha}^{1/3} k_F$ lies in the long-wave part of the spectrum. The dependence of the critical value of $r_s^*$ on the PSOI strength is plotted in Fig.~\ref{rsfig}.
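The closed-form expressions for \(r_s^*\) and \(q_c\) are straightforward to evaluate numerically; the short Python check below reproduces the values quoted in the figure captions and the limiting behaviour discussed above:
\begin{verbatim}
import numpy as np

def critical_params(alpha):
    # r_s^* and q_c/k_F from the closed-form expressions above.
    den = np.sqrt(2 ** (1 / 3) + 2 * alpha ** (2 / 3) * 3 ** (2 / 3))
    rs_star = 2 ** (13 / 6) * alpha / den
    qc_over_kF = 2 * alpha ** (1 / 3) * 3 ** (1 / 3) / den
    return rs_star, qc_over_kF

print(critical_params(0.1))     # ~(0.306, 0.912): r_s^* ~ 0.3 for
                                # alpha = 0.1, as in the figure captions
print(critical_params(1e4)[1])  # -> sqrt(2) ~ 1.414 for alpha >> 1
\end{verbatim}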
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{q_critical.pdf}
\caption{The critical value $q_c$ as a function of $\tilde{\alpha}$.\label{qcfig}}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{rs_critical.pdf}
\caption{The critical value $r_s^*$ as a function of $\tilde{\alpha}$.\label{rsfig}}
\end{figure}
\subsection{Collective modes}
The collective modes are given by zeroes of the denominator of Eq.~\eqref{chin}. This equation has two types of solutions defining two collective modes.
In the long-wave region the collective modes are common plasmons, the spectrum of which gets a correction from the PSOI\@. At $q \ll k_F$,
\begin{equation}
\omega_{pl} = \omega_{2D} \sqrt{1+ \frac{\alpha^2}{e^2} q^2 k_F^2}\,,
\end{equation}
with
\begin{equation}
\omega_{2D} = \sqrt{\frac{2 \pi e^2 n}{\epsilon m} q}
\end{equation}
being the standard plasmon dispersion in a 2D electron gas~\cite{PhysRevLett.18.546}. The correction, albeit small, can be accessible to experiment, since high-accuracy measurements of 2D plasmon spectra have become available, such as the detection of microwave absorption via the recombination photoluminescence spectrum~\cite{PhysRevB.102.081301,PhysRevB.105.L041403}. The plasmon spectrum is shown in Fig.~\ref{figpl}.
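In the dimensionless variables used throughout (\(\tilde{\alpha} = \alpha/e a_B^2\), so that \(\alpha/e = \tilde{\alpha} a_B^2\)), the PSOI correction factor can be evaluated as in the following short sketch:
\begin{verbatim}
def plasmon_correction(alpha_tilde, q_aB, kF_aB):
    # omega_pl / omega_2D = sqrt(1 + (alpha/e)^2 q^2 kF^2)
    #   = sqrt(1 + alpha_tilde^2 (q a_B)^2 (kF a_B)^2)
    return (1 + alpha_tilde ** 2 * q_aB ** 2 * kF_aB ** 2) ** 0.5
\end{verbatim}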
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{plasmons_psoi.pdf}
\caption{The plasmon frequency $\omega$ as a function of wave vector for three values of the PSOI constant, with $r_s=1$. The frequency is normalized at $\omega_0 = v_F k_F$. The dashed line shows the boundary of the particle-hole continuum.\label{figpl}}
\end{figure}
Most importantly, a new collective mode due to PSOI arises in addition to plasmons as soon as $r_s \le r_s^*$. At $r_s=r_s^*$ the mode appears to exist at a single critical value $q_c$ of the wave-vector, whereas at $r_s<r_s^*$ the region where the mode exists expands to a finite band of wave vectors $[q_1,q_2]$, the band width growing with lowering $r_s$. The spectrum of this mode is illustrated by Fig.~\ref{spect}.
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{spectrum_coulomb_rs.pdf}
\caption{The imaginary part of the frequency of a new collective mode due to PSOI as a function of wave vector. The mode dispersion is shown for three values of $r_s$ to trace how the instability develops in the system with increasing the PSOI interaction parameter of $\tilde{\alpha}/r_s$. The PSOI magnitude is fixed to be $\tilde{\alpha}=0.1$, which corresponds to $r_s^*=0.3$.\label{spect}}
\end{figure}
The mode frequency is purely imaginary. For every $q$ within the allowed band $q \in (q_1,q_2)$ there are two branches, one with $\omega''>0$ and the other with $\omega''<0$, which together form the petal-like shape. The frequencies of the two branches give, respectively, the increment and decrement of the time-dependent fluctuations in the system. The mode is characterized by the oscillations of up- and down-spin densities with equal amplitudes $n^{(+)}_{q\omega} = n^{(-)}_{q\omega}$, which corresponds to an excitation in the charge sector. Hence the electron density fluctuations growing with time as $\propto \exp(\omega'' t)$ are not polarized in spin. This indicates that an instability of the spatially uniform paramagnetic ground state of the 2D electron gas with PSOI develops with respect to the charge density fluctuations on the $q_c$ scale. However, at $r_s>r_s^*$ the system is stable.
The effects of PSOI are enhanced in 2D atomically thin layers, either freely suspended~\cite{doi:10.1063/1.5019906,ROSSLER2010861} or immersed in a weak dielectric. The e-e repulsion is governed there by the Rytova-Keldysh potential
\begin{equation}
\mathcal{U}_{q} = \frac{2 \pi e^2}{q (1 + q l)}\,,
\end{equation}
where $l$ is a characteristic length that can be roughly estimated as $\epsilon_{\parallel} d/2$, with $d$ being the layer thickness, $\epsilon_{\parallel}$ the in-plane dielectric constant of the layer material~\cite{RevModPhys.90.021001}.
The spectrum of the new collective mode for this case is displayed in Fig.~\ref{spect_rk}. Because of the decreased dielectric screening, the critical value $r_s^*$ increases as compared to the purely Coulomb case considered above.
\begin{figure}[htp]
\includegraphics[width=0.9\linewidth]{spectrum_RK_rs.pdf}
\caption{The imaginary part of the frequency of the collective mode due to PSOI as a function of wave vector for three values of $r_s$. Here a 2D layer in vacuum is considered, its thickness being $d=0.02a_B$, the in-plane dielectric constant $\epsilon_{\parallel} = 15$. The PSOI magnitude is $\tilde{\alpha}=0.1$, which corresponds to $r_s^* = 1.054$. \label{spect_rk}}
\end{figure}
\section{Concluding remarks}
We studied electron correlations and collective modes in a 2D electron system with PSOI produced by the in-plane Coulomb electric fields of interacting electrons. The peculiarity of this system is that the e-e interaction strength is described by two interaction parameters, $r_s$ and $\tilde{\alpha}/r_s$, related to the Coulomb interaction and the PSOI\@. They are characterized by the opposite dependence on the electron density. The attractive PSOI prevails over the Coulomb repulsion when the density is high enough.
This gives rise to strong electron correlations on a certain spatial scale that manifest themselves as a sharp peak in the static structure factor at $q=q_c$. Moreover, as soon as $\tilde{\alpha}/r_s$ exceeds a critical value, a new mode with purely imaginary frequency appears in a band of wave vectors around $q_c$. In other words, the spatially uniform paramagnetic ground state becomes unstable with respect to the charge density fluctuations on the $q_c$ scale.
This altogether indicates a tendency for an electron state to form a striped structure of some sort on this spatial scale. Within the linear analysis undertaken in the present paper, it is impossible to predict a specific type of the electron state that would correspond to the true energy minimum. Instead of the perturbative RPA, the self-consistent approach, like the Hartree-Fock~\cite{PhysRevB.85.035116,PhysRevB.54.1853}, is necessary to attack this problem.
The obtained results, of course, do not imply that a sufficiently dense 2D electron gas with arbitrarily small PSOI is always unstable towards the density fluctuations. For very large electron densities the considerations based on the one-band model and $k\cdot p$ approximation lose their validity.
On the other hand, large values of $\tilde{\alpha} \gtrsim 1$ are not attainable at least in classical semiconductors with $sp^3$ band hybridization, where the upper limit for the Rashba constant is of the order of $\tilde{\alpha} \approx \epsilon^{-2}$, with $\epsilon$ being the bulk dielectric constant~\footnote{We are grateful to Sergei Tarasenko for pointing this out (S.~Tarasenko, private communication)}.
Nonetheless, the recent rise of 2D systems with giant SOI~\cite{doi:10.1021/acs.jpclett.1c03662,varignon2018new} gives us some hope for the realization of the extremely strong Rashba SOI by means of other physical mechanisms. Thus, in oxide heterostructures and films~\cite{Pai_2018,Stemmer_2018} the strong indications were found for the electronic nematicity~\cite{Levy_2020}, and for the formation of BEPs~\cite{Levy_2015,doi:10.1021/acs.nanolett.8b01614,mikheev2021clean} well beyond the superconducting phase. While the exact mechanisms behind these effects still remain unclear, at least some of them are likely due to the interplay of the giant SOI and collective effects~\cite{PhysRevB.104.125103}. That being said, the quest for a particular system with a giant SOI where the effects predicted in the present paper could develop is still a challenge for the future.
\begin{acknowledgments}
This work was carried out in the framework of the state task and supported by the Russian Foundation for Basic Research, Project No. 20--02--00126.
\end{acknowledgments}
\section{Introduction}\label{sc:intro}
MoEDAL (Monopole and Exotics Detector at the LHC)~\cite{moedal-web,moedal-tdr}, the $7^{\rm th}$ experiment at the Large Hadron Collider (LHC)~\cite{LHC}, was approved by the CERN Research Board in 2010. It is designed to search for manifestations of new physics through highly-ionising particles in a manner complementary to ATLAS and CMS~\cite{DeRoeck:2011aa}. The most important motivation for the MoEDAL experiment is to pursue the quest for magnetic monopoles and dyons at LHC energies. Nonetheless the experiment is also designed to search for any massive, stable or long-lived, slow-moving particles~\cite{Fairbairn07} with single or multiple electric charges arising in many scenarios of physics beyond the Standard Model (SM). A selection of the physics goals and their relevance to the MoEDAL experiment are described here and elsewhere~\cite{creta2016}. For an extended and detailed account of the MoEDAL discovery potential, the reader is referred to the \emph{MoEDAL Physics Review}~\cite{Acharya:2014nyr}. Emphasis is given here on recent MoEDAL results, based on the exposure of magnetic monopole trapping volumes to 7-TeV and 8-TeV proton-proton collisions.
The structure of this paper is as follows. Section~\ref{sc:detector} provides a brief description of the MoEDAL detector. Magnetic monopoles and monopolia are briefly discussed in Section~\ref{sc:mm}, whilst Section~\ref{sc:lightsearch} presents the MoEDAL results on monopole searches. Section~\ref{sc:susy} is dedicated to supersymmetric models predicting massive (meta)stable states. Scenarios with doubly-charged Higgs bosons and their observability in MoEDAL are highlighted in Section~\ref{sc:lrsm}. Highly-ionising exotic structures in models with extra spatial dimensions, namely microscopic black holes and D-matter, relevant to MoEDAL are briefly mentioned in Sections~\ref{sc:bh} and~\ref{sc:dmatter}, respectively. The paper concludes with a summary and an outlook in Section~\ref{sc:summary}.
\section{The MoEDAL detector}\label{sc:detector}
The MoEDAL detector~\cite{moedal-tdr} is deployed around the intersection region at Point~8 of the LHC in the LHCb experiment Vertex Locator (VELO)~\cite{LHCb-detector} cavern. A three-dimensional depiction of the MoEDAL experiment is presented in Fig.~\ref{Fig:moedal-lhcb}. It is a unique and largely passive LHC detector comprised of four sub-detector systems.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.7\textwidth]{moedal-detector}
\caption{ A three-dimensional schematic view of the MoEDAL detector (on the right) around the LHCb VELO region at Point~8 of the LHC.}
\label{Fig:moedal-lhcb}
\end{center}
\end{figure}
\subsection{Low-threshold nuclear track detectors}\label{sc:ndt}
The main sub-detector system is made of a large array of CR39\textregistered, Makrofol\textregistered\ and Lexan\textregistered\ nuclear track detector (NTD) stacks surrounding the intersection area. The passage of a highly-ionising particle through the plastic detector is marked by an invisible damage zone along the trajectory. The damage zone is revealed as a cone-shaped etch-pit when the plastic detector is etched using a hot sodium hydroxide solution. Then the sheets of plastics are scanned looking for aligned etch pits in multiple sheets. The MoEDAL NTDs have a threshold of $Z/\beta\sim5$, where $Z$ is the charge and $\beta=v/c$ the velocity of the incident particle. In proton-proton collision running, the only source of known particles that are highly ionising enough to leave a track in MoEDAL NTDs are spallation products with a range that is typically much less than the thickness of one sheet of the NTD stack. In that case the ionising signature will be that of a very low-energy electrically-charged \emph{stopped} particle. This signature is distinct from that of a \emph{penetrating} electrically or magnetically charged particle that will usually traverse every sheet in a MoEDAL NTD stack, accurately demarcating a track that points back to the collision point with a resolution of $\sim 1~{\rm cm}$. The part of the Run-2 NTD deployment which rests on top of the LHCb VELO is visible in Fig.~\ref{fg:ntd}. This is the closest possible location to the interaction point and represents a novelty of this run with respect to earlier installations during Run-1.
\begin{figure}[htb]
\begin{minipage}[b]{0.59\textwidth}
\includegraphics[width=\textwidth]{ntd}
\caption{\label{fg:ntd}Part of the Run~2 NTD deployment on top of the LHCb VELO.}
\end{minipage}\hspace{2pc}%
\begin{minipage}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{vhcc}
\caption{\label{fg:vhcc}The VHCC between RICH1 and TT installed for Run~2.}
\end{minipage}
\end{figure}
\subsection{Very high-charge catcher}\label{sc:vhcc}
Another new feature of the Run-2 deployment is the installation of a high-threshold NTD array ($Z/\beta\sim50$): the Very High Charge Catcher (VHCC). The VHCC sub-detector, consisting of two flexible low-mass stacks of Makrofol\textregistered\ in an aluminium foil envelope, is deployed in the forward acceptance of the LHCb experiment between the LHCb RICH1 detector and the Trigger Tracker (TT), as shown in Fig.~\ref{fg:vhcc}. It is the only NTD (partly) covering the forward region, adding only $\sim0.5\%$ to the LHCb material budget while enhancing considerably the overall geometrical coverage of MoEDAL NTDs.
\subsection{Magnetic trappers}\label{sc:mmt}
A unique feature of the MoEDAL detector is the use of paramagnetic magnetic monopole trappers (MMTs) to capture electrically- and magnetically-charged highly-ionising particles. The volumes installed at IP8 for the 2015 proton-proton collisions are shown in Fig.~\ref{fg:mmt}. The aluminium absorbers of the MMTs are subject to an analysis looking for magnetically-charged particles at a remote SQUID magnetometer facility~\cite{Joergensen:2012gy,DeRoeck:2012wua}. The search for the decays of long-lived electrically charged particles that are stopped in the trapping detectors will subsequently be carried out at a remote underground facility.
A trapping detector prototype was exposed to 8~TeV proton-proton collisions for an integrated luminosity of 0.75~fb$^{-1}$ in 2012. It comprised an aluminium volume consisting of 11~boxes each containing 18~cylindrical rods of 60~cm length and 2.5~cm diameter. For the 2015 run at 13~TeV, the MMT was upgraded to an array consisting of 672 square aluminium rods with dimension $19\times2.5\times2.5~{\rm cm}^3$ for a total mass of 222~kg in 14~stacked boxes that were placed 1.62~m from the IP8 LHC interaction point under the beam pipe on the side opposite to the LHCb detector. The results for both aforementioned configurations and energies, interpreted in terms of monopole mass and magnetic charge, are presented in Section~\ref{sc:lightsearch}.
\begin{figure}[htb]
\begin{minipage}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{mmt}
\caption{\label{fg:mmt}Deployment of the MMT for the LHC Run~2.}
\end{minipage}\hspace{2pc}%
\begin{minipage}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{timpix}
\caption{\label{fg:timpix}Run~2 deployment of TimePix chips in MoEDAL.}
\end{minipage}
\end{figure}
\subsection{TimePix radiation monitors}\label{sc:timepix}
The only non-passive MoEDAL sub-detector system comprises an array of TimePix pixel device arrays ($256\times256$ square pixels with a pitch of $55~{\rm \mu m}$) distributed throughout the MoEDAL cavern at IP8, forming a real-time radiation monitoring system of highly-ionising beam-related backgrounds. A photo of its readout setup for the 2015 installations is shown in Fig.~\ref{fg:timpix}. Each pixel of the innovative TimePix chip comprises a preamplifier, a discriminator with threshold adjustment, synchronisation logic and a 14-bit counter. The operation of TimePix in time-over-threshold mode allows a 3D mapping of the charge spreading effect in the whole volume of the silicon sensor, thus differentiating between different particle species in mixed radiation fields and measuring their energy deposition~\cite{timepix}.
\section{Magnetic monopoles}\label{sc:mm}
The MoEDAL detector is designed to fully exploit the energy-loss mechanisms of magnetically charged particles~\cite{Dirac1931kp,Diracs_idea,tHooft-Polyakov,mm} in order to optimise its potential to discover these messengers of new physics. There are various theoretical scenarios in which magnetic charge would be produced at the LHC~\cite{Acharya:2014nyr}: (light) 't Hooft-Polyakov monopoles~\cite{tHooft-Polyakov,Vento2013jua}, electroweak monopoles~\cite{Cho1996qd,cho2,You}, global monopoles~\cite{vilenkin,debate,nussinov,papav,sarkar} and monopolium~\cite{Diracs_idea,khlopov,Monopolium,Monopolium1}. Magnetic monopoles that carry a non-zero magnetic charge and dyons possessing both magnetic and electric charge are among the most fascinating hypothetical particles. Even though there is no generally acknowledged empirical evidence for their existence, there are strong theoretical reasons to believe that they do exist, and they are predicted by many theories including grand unified theories and superstring theory~\cite{Rajantie:2012xh,rajantiept}.
The theoretical motivation behind the introduction of magnetic monopoles is the symmetrisation of Maxwell's equations and the explanation of charge quantisation~\cite{Dirac1931kp}. Dirac showed that the mere existence of a monopole in the universe could offer an explanation of the discrete nature of the electric charge, leading to the Dirac Quantisation Condition (DQC),
\begin{equation} \alpha~ g = \frac{N}{2} e , \quad N = 1, 2, ... ,
\label{eq:dqc}\end{equation}
\noindent where $e$ is the electron charge, $\alpha = \frac{e^2}{4\pi \hbar\, c \varepsilon_0 } = \frac{1}{137}$ is the fine structure constant (at zero energy, as appropriate to the fact that the quantisation condition of Dirac pertains to long (infrared) distances from the centre of the monopole), $\varepsilon_0$ is the vacuum permittivity, and $g$ is the monopole magnetic charge. In Dirac's formulation, magnetic monopoles are assumed to exist as point-like particles and quantum mechanical consistency conditions lead to Eq.~(\ref{eq:dqc}), establishing the value of their magnetic charge. Although monopoles symmetrise Maxwell's equations in form, there is a numerical asymmetry arising from the DQC, namely that the basic magnetic charge is much larger than the smallest electric charge. A magnetic monopole with a single Dirac charge ($g_{\rm D}$) has an equivalent electric charge of $\beta(137e/2)$. Thus for a relativistic monopole the energy loss is around $4,\!700$ times ($68.5^2$) that of a minimum-ionising electrically-charged particle. The monopole mass remains a free parameter of the theory.
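The numerical asymmetry implied by Eq.~(\ref{eq:dqc}) is easy to make explicit; the following few lines of Python reproduce the figures quoted above:
\begin{verbatim}
# Dirac quantisation condition with N = 1: g_D = e / (2 * alpha).
alpha = 1 / 137.0                # fine structure constant at zero energy
g_D_in_units_of_e = 1 / (2 * alpha)
print(g_D_in_units_of_e)         # 68.5: one Dirac charge ~ 68.5 e
print(g_D_in_units_of_e ** 2)    # ~4692: the ~4,700-fold energy loss of a
                                 # relativistic (beta ~ 1) monopole relative
                                 # to a minimum-ionising unit charge
\end{verbatim}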
A possible explanation for the lack of experimental confirmation of monopoles is Dirac's proposal~\cite{Dirac1931kp,Diracs_idea,khlopov} that monopoles are not seen freely because they form a bound state called \emph{monopolium}~\cite{Monopolium,Monopolium1,Epele0}, in which they are confined by strong magnetic forces. Monopolium is a neutral state, hence it is difficult to detect directly at a collider detector, although its decay into two photons would give a rather clear signal for the ATLAS and CMS detectors~\cite{Epele1,Epele2}, which however would not be visible in the MoEDAL detector. Nevertheless, according to a novel proposal~\cite{risto}, the LHC radiation detector systems can be used to turn the LHC itself into a new-physics search machine by detecting final-state protons $pp\to pXp$ exiting the LHC beam vacuum chamber at locations determined by their fractional momentum losses. Such a technique would be appealing for detecting monopolia. Furthermore, the monopolium might break up in the medium of MoEDAL into highly-ionising dyons, which subsequently can be detected in MoEDAL~\cite{Acharya:2014nyr}. Moreover, its decay via photon emission would produce a peculiar trajectory in the medium, should the decaying states also be magnetic multipoles~\cite{Acharya:2014nyr}.
\section{Searches for monopoles in MoEDAL}\label{sc:lightsearch}
The high ionisation of slow-moving magnetic monopoles and dyons implies quite characteristic trajectories when such particles interact with the MoEDAL NTDs, which can be revealed during the etching process~\cite{moedal-tdr,Acharya:2014nyr}. In addition, the high magnetic charge of a monopole (expected to be at least one Dirac charge, $g_{\rm D} = 68.5e$; \emph{cf.} Eq.~(\ref{eq:dqc})) implies a strong magnetic dipole moment, which in turn may result in a strong binding of the monopole with the $^{27}_{13}{\rm Al}$ nuclei of the aluminium MoEDAL MMTs. In such a case, the presence of a monopole trapped in an aluminium bar of an MMT would be detected through the existence of a persistent current, defined as the difference between the currents in the SQUID of a magnetometer before and after the passage of the bar through the sensing coil.
In the context of MoEDAL searches, two configurations of MMTs have been used at two different LHC c.m.\ energies, as described in Section~\ref{sc:mmt}. No magnetic charge exceeding $0.5g_{\rm D}$ was detected in any of the exposed samples passed through the ETH Zurich SQUID facility, allowing limits to be placed on monopole production. Model-independent cross-section limits have been obtained in fiducial regions of monopole energy and direction for $1g_{\rm D}\leq|g|\leq 6g_{\rm D}$ with the 8-TeV analysis~\cite{MMT8TeV}. Model-dependent cross-section limits are obtained for Drell-Yan (DY) pair production of spin-1/2 and spin-0 monopoles for $1g_{\rm D}\leq|g|\leq 5g_{\rm D}$ at 13~TeV~\cite{MMT13TeV}, as shown in Fig.~\ref{fig:cross_section_limits}. Caution, however, should be exercised here: the non-perturbative nature of the large magnetic Dirac charge of the monopole invalidates any perturbative treatment based on Drell-Yan calculations of the pertinent cross sections, and hence any result based on the latter is only indicative, in the absence of any other concrete theoretical treatment.
The weaker limits for $|g|= g_{\rm D}$ displayed in Fig.~\ref{fig:cross_section_limits}, when compared to higher charges, are mostly due to the loss of acceptance from monopoles punching through the trapping volume. For higher charges, monopoles ranging out before reaching the trapping volume decrease the acceptance for DY monopoles with increasing charge, and the acceptance falls below 0.1\% for a charge of $6g_{\rm D}$. The spin dependence is solely due to the different event kinematics: more central and more energetic monopoles for spin 0.
\begin{figure}[ht]
\includegraphics[width=0.505\textwidth]{massPlot_fermion}
\includegraphics[width=0.505\textwidth]{massPlot_boson}
\caption{Cross-section upper limits at 95\% confidence level for DY monopole production as a function of mass for spin-1/2 (left) and for spin-0 monopoles (right). The various line styles correspond to different monopole charges. The solid lines represent DY cross-section calculations at leading order. From Ref.~\cite{MMT13TeV}.}
\label{fig:cross_section_limits}
\end{figure}
Under the assumption of Drell-Yan cross sections, mass limits are derived for $g_{\rm D}\leq|g|\leq4g_{\rm D}$ at the LHC, complementing previous results from the ATLAS Collaboration~\cite{atlas7tev,atlas8tev}, which placed limits for monopoles with magnetic charge $|g|\leq1.5 g_{\rm D}$ (cf.\ Fig.~\ref{fig:exclusion3}). The ATLAS bounds are better than the MoEDAL ones for $|g|=1 g_{\rm D}$, due to the higher luminosity delivered to ATLAS and the loss of acceptance in MoEDAL for small magnetic charges. On the other hand, higher charges are difficult to probe in ATLAS due to the limitations of the electromagnetic-calorimeter-based level-1 trigger deployed for such searches. A comparison of the limits on monopole production cross sections set by other colliders with those set by MoEDAL is presented in Ref.~\cite{rajantiept}, while general limits including searches in cosmic radiation are reviewed in Ref.~\cite{patrizii}.
\begin{figure}[ht]
\includegraphics[width=0.66\linewidth]{exclusion3}
\caption{Excluded monopole masses for DY production for spin-$1/2$ (top) and spin-$0$ (bottom) monopoles. The MoEDAL results obtained at 8~TeV (yellow, light grey)~\cite{MMT8TeV} and 13~TeV (red, dark grey)~\cite{MMT13TeV} are superimposed on the ATLAS 8-TeV limits (hatched area)~\cite{atlas8tev}.}
\label{fig:exclusion3}
\end{figure}
\section{Beyond magnetic monopoles}\label{sc:physics}
\subsection{Electrically-charged long-lived particles in supersymmetry}\label{sc:susy}
Supersymmetry (SUSY) is an extension of the Standard Model which assigns to each SM field a superpartner field with a spin differing by half a unit. SUSY provides elegant solutions to several open issues in the SM, such as the hierarchy problem, the identity of dark matter, and grand unification. SUSY scenarios predict a number of massive, slowly-moving, electrically-charged particles. If these are sufficiently long-lived to travel a distance of at least ${\cal O}(1~{\rm m})$ before decaying, and if their $Z/\beta\gtrsim 0.5$, they will be detected in the MoEDAL NTDs. No highly-charged particles are expected in such a theory, but there are several scenarios in which supersymmetry may yield massive, long-lived particles with electric charges $\pm 1$, potentially detectable in MoEDAL if they are produced with low velocities ($\beta \lesssim 0.2$).
The lightest supersymmetric particle (LSP) is stable in models where $R$~parity is conserved. The LSP should have no strong or electromagnetic interactions, for otherwise it would bind to conventional matter and be detectable in anomalous heavy nuclei~\cite{EHNOS}. Possible weakly-interacting neutral candidates in the Minimal Supersymmetric Standard Model (MSSM) include the sneutrino, which has been excluded by LEP and direct searches, the lightest neutralino $\tilde{\chi}_1^0$ (a mixture of spartners of the $Z, H$ and $\gamma$) and the gravitino $\tilde{G}$.
\subsubsection{Supersymmetric scenarios with $R$-parity violation}
Several scenarios featuring metastable charged sparticles might be detectable in MoEDAL. One such scenario is that $R$~parity {\it may not be exact}, since there is no exact local symmetry associated with either $L$ or $B$, and hence no fundamental reason why they should be conserved. One could consider various ways in which $L$ and/or $B$ could be violated in such a way that $R$ is violated, as represented by the following superpotential terms:
\begin{equation}
W_{RV} \; = \; \lambda^{\prime \prime}_{ijk} {\bar U}_i {\bar D}_j {\bar D}_k
+ \lambda^{\prime}_{ijk} {L}_i {Q}_j {\bar D}_k
+ \lambda_{ijk} {L}_i {L}_j {\bar E}_k
+ \mu_i L_i H,
\label{Rviolation}
\end{equation}
where ${Q}_i, {\bar U}_i, {\bar D}_i, L_i$ and ${\bar E}_i$ denote chiral superfields corresponding to quark doublets, up- and down-type antiquarks, lepton doublets and antileptons, respectively, with $i, j, k$ generation indices. The simultaneous presence of terms of the first and third type in Eq.~(\ref{Rviolation}), namely $\lambda^{\prime \prime}$ and $\lambda$, is severely restricted by lower limits on the proton lifetime, but other combinations are less restricted. The trilinear couplings in Eq.~(\ref{Rviolation}) generate sparticle decays such as ${\tilde q} \to {\bar q} {\bar q}$ or $q \ell$, or ${\tilde \ell} \to \ell \ell$, whereas the bilinear couplings in Eq.~(\ref{Rviolation}) generate Higgs-slepton mixing and thereby also ${\tilde q} \to q \ell$ and ${\tilde \ell} \to \ell \ell$ decays~\cite{Mitsou:2015kpa}. For a nominal sparticle mass $\sim 1$~TeV, the lifetime for such decays would exceed a few nanoseconds for $\lambda, \lambda^{\prime}, \lambda^{\prime \prime} < 10^{-8}$.
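The nanosecond scale quoted above can be recovered from a crude dimensional estimate, $\Gamma \sim \lambda^2 m /(16\pi)$; the sketch below (an order-of-magnitude illustration only, not a matrix-element calculation) evaluates it for an assumed 1~TeV sparticle.
\begin{verbatim}
# Crude order-of-magnitude estimate of an RPV sparticle lifetime,
# assuming a naive two-body width Gamma ~ lambda^2 * m / (16 pi).
import math

HBAR_GEV_S = 6.582e-25        # hbar in GeV s

def rpv_lifetime(coupling, mass_gev=1000.0):
    """Naive lifetime (seconds) for an RPV decay with the given
    trilinear coupling; sparticle mass in GeV."""
    gamma = coupling ** 2 * mass_gev / (16.0 * math.pi)   # GeV
    return HBAR_GEV_S / gamma

for lam in (1e-7, 1e-8, 1e-9):
    print(f"lambda = {lam:.0e}: tau ~ {rpv_lifetime(lam):.1e} s")
# couplings below ~1e-8 push the lifetime beyond the ns scale
\end{verbatim}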
If $R$~parity is broken, the LSP would be unstable, and might be charged and/or coloured. In the former case, it might be detectable directly at the LHC as a massive slowly-moving charged particle. In the latter case, the LSP would bind with light quarks and/or gluons to make colour-singlet states, the so-called \emph{R-hadrons}, and any charged state could again be detectable as a massive slowly-moving charged particle. If $\lambda \ne 0$, the prospective experimental signature would be similar to a stau next-to-lightest sparticle (NLSP) case, to be discussed later. On the other hand, if $\lambda^{\prime}$ or $\lambda^{\prime \prime} \ne 0$, the prospective experimental signature would be similar to a stop NLSP case, with the possibility of charge-changing interactions while passing through matter. This could yield a metastable charged particle, created whilst traversing the material surrounding the interaction point, that would be detected by MoEDAL.
\subsubsection{Metastable lepton NLSP in the CMSSM with a neutralino LSP}
However, even if $R$~parity {\it is} exact, the NLSP may be long lived. This would occur, for example, if the LSP is the gravitino, or if the mass difference between the NLSP and the neutralino LSP is small, offering more scenarios for long-lived charged sparticles. In {\it neutralino dark matter} scenarios based on the constrained MSSM (CMSSM), for instance, the most natural candidate for the NLSP is the lighter stau slepton ${\tilde \tau_1}$~\cite{stauNLSP}, which could be long lived if $m_{\tilde \tau_1} - m_{\tilde{\chi}_1^0}$ is small. There are several regions of the CMSSM parameter space that are compatible with the constraints imposed by unsuccessful searches for sparticles at the LHC, as well as the discovered Higgs boson mass. These include a strip in the focus-point region where the relic density of the LSP is brought down into the range allowed by cosmology because of its relatively large Higgsino component, a region where the relic density is controlled by rapid annihilation through direct-channel heavy Higgs resonances, and a strip where the relic LSP density is reduced by coannihilations with near-degenerate staus and other sleptons. It was found in a global analysis that the two latter possibilities are favoured~\cite{MC8}.
In the coannihilation region of the CMSSM, the lighter ${\tilde \tau_1}$ is expected to be the lightest slepton~\cite{stauNLSP}, and the $\tilde\tau_1-\tilde{\chi}_1^0$ mass difference may well be smaller than $m_\tau$: indeed, this is required at large LSP masses. In this case, the dominant stau decays for $m_{\tilde \tau_1} - m_{\tilde{\chi}_1^0} > 160$~MeV are expected to be three-body: $\tilde{\chi}_1^0 \nu \pi$ or $\tilde{\chi}_1^0 \nu \rho$. If $m_{\tilde \tau_1} - m_{\tilde{\chi}_1^0} < 1.2$~GeV, the ${\tilde \tau_1}$ lifetime is calculated to be so long, in excess of $\sim 100$~ns, that the stau is likely to escape the detector before decaying, and hence would be detectable as a massive, slowly-moving charged particle~\cite{Sato,oscar}. The relevance of such scenarios when cosmological constraints are taken into account is demonstrated in Fig.~\ref{fg:oscar}. Even if lepton-flavour-violating couplings are allowed, the long lifetime of the staus remains~\cite{oscar}.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{oscar}\hspace{2pc}%
\begin{minipage}[b]{0.43\textwidth}\caption{\label{fg:oscar}Allowed parameter regions in the $M_{1/2} - m_0$ plane for fixed $A_0=600$~GeV and $\tan\beta=30$. The red (dark) narrow band is the region consistent with the dark matter abundance and $\delta m < m_\tau$, while the yellow (light) narrow band is that with $\delta m > m_\tau$. The green regions are inconsistent with the dark matter abundance, and in the white (excluded) area the LSP is the stau. The regions favoured by the muon anomalous magnetic moment at the $1\sigma$, $2\sigma$ and $2.5\sigma$ confidence levels are indicated by solid lines. From Ref.~\cite{oscar}.}
\end{minipage}
\end{figure}
\subsubsection{Metastable sleptons in gravitino LSP scenarios}
On the other hand, in {\it gravitino dark matter} scenarios with more general patterns of supersymmetry breaking, other NLSP candidates appear quite naturally, including the lighter selectron or smuon, a sneutrino~\cite{sleptonNLSP}, or the lighter stop squark ${\tilde t_1}$~\cite{stopNLSP}. If the gravitino ${\tilde G}$ is the LSP, the decay rate of a slepton NLSP is given by
\begin{equation}
\Gamma ( {\tilde \ell} \to {\tilde G} \ell) = \dfrac{1}{48 \pi M_*^2} \dfrac{m_{\tilde \ell}^5}{M_{\tilde G}^2}
\left[ 1 - \dfrac{M_{\tilde G}^2}{m_{\tilde \ell}^2} \right]^{4},
\label{telldecay}
\end{equation}
where $M_*$ is the Planck scale. Since $M_*$ is much larger than the electroweak scale, the NLSP lifetime is naturally very long.
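An explicit evaluation of Eq.~(\ref{telldecay}) illustrates the point. In the sketch below, $M_*$ is taken to be the reduced Planck mass, $2.4\times10^{18}$~GeV, and the 1~TeV slepton and 100~GeV gravitino masses are arbitrary benchmarks; all three are assumptions made here only to set the scale of the result.
\begin{verbatim}
# Numeric evaluation of the slepton NLSP width of Eq. (3),
# assuming M_* = reduced Planck mass = 2.4e18 GeV.
import math

HBAR_GEV_S = 6.582e-25        # hbar in GeV s
M_STAR = 2.4e18               # assumed Planck scale in GeV

def slepton_width(m_slepton, m_gravitino):
    """Gamma(slepton -> gravitino + lepton) in GeV."""
    phase_space = (1.0 - (m_gravitino / m_slepton) ** 2) ** 4
    return (m_slepton ** 5 * phase_space /
            (48.0 * math.pi * M_STAR ** 2 * m_gravitino ** 2))

gamma = slepton_width(1000.0, 100.0)        # benchmark masses in GeV
print(f"tau ~ {HBAR_GEV_S / gamma:.1e} s")  # -> ~6e3 s: hours, not ns
\end{verbatim}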
A gravitino (or axino) LSP with a long-lived charged stau may arise in gauge-mediation and minimal supergravity models~\cite{Nojiri}. A large part of the parameter space potentially attractive for long-lived slepton searches with MoEDAL is compatible with cosmological constraints on the dark-matter abundance in superweakly-interacting massive particle scenarios~\cite{Feng}.
\subsubsection{Long-lived gluinos in split supersymmetry}
The above discussion has been in the context of the CMSSM and similar scenarios where all the supersymmetric partners of Standard Model particles have masses in the TeV range. Another scenario is ``split supersymmetry'', in which the supersymmetric partners of quarks and leptons are very heavy, of a scale $m_s$, whilst the supersymmetric partners of SM bosons are relatively light~\cite{splitSUSY}. In such a case, the gluino could have a mass in the TeV range and hence be accessible to the LHC, but would have a very long lifetime:
\begin{equation}
\tau \approx 8 \left( \dfrac{m_s}{10^9~{\rm GeV}} \right)^4 \left( \dfrac{1~{\rm TeV}}{m_{\tilde{g}}} \right)^5~{\rm s}.
\label{gluinotau}
\end{equation}
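The steep dependence of Eq.~(\ref{gluinotau}) on the squark mass scale is easily visualised numerically; the following sketch simply evaluates the formula for a few illustrative values of $m_s$.
\begin{verbatim}
def gluino_lifetime(m_s_gev, m_gluino_gev=1000.0):
    """Split-SUSY gluino lifetime in seconds, from Eq. (4)."""
    return 8.0 * (m_s_gev / 1e9) ** 4 * (1000.0 / m_gluino_gev) ** 5

for m_s in (1e6, 1e7, 1e8, 1e9):
    print(f"m_s = {m_s:.0e} GeV -> tau ~ {gluino_lifetime(m_s):.1e} s")
# tau grows as m_s^4: from ~8e-12 s at m_s = 1e6 GeV to ~8 s at
# m_s = 1e9 GeV, i.e. comfortably metastable on detector time scales.
\end{verbatim}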
Long-lived gluinos would form long-lived gluino R-hadrons including gluino-gluon \emph{(gluinoball)} combinations, gluino-$q{\bar q}$ \emph{(mesino)} combinations and gluino-$qqq$ \emph{(baryino)} combinations. The heavier gluino hadrons would be expected to decay into the lightest species, which would be metastable, with a lifetime given by Eq.~(\ref{gluinotau}), and it is possible that this metastable gluino hadron could be charged.
In the same way as stop hadrons, gluino hadrons may flip charge through conventional strong interactions as they pass through matter, and it is possible that one may pass through most of a conventional LHC tracking detector undetected in a neutral state before converting into a metastable charged state that could be detected by MoEDAL.
\subsubsection{Experimental considerations}
Several considerations support the complementarity of MoEDAL with respect to ATLAS and CMS for the observability of (meta-)stable massive electrically-charged particles. Most of them stem from MoEDAL being ``time-agnostic'' owing to the passive nature of its detectors. Therefore, signals from very slowly moving particles are not lost because they arrive over several consecutive bunch crossings. ATLAS and CMS, on the other hand, perform trigger-based analyses, relying either on triggering on accompanying ``objects'', e.g.\ missing transverse energy, or on developing and deploying specialised triggers. In both cases the efficiency may be lower, and in the former case the probed parameter space may be reduced. MoEDAL is mainly limited by the geometrical acceptance of the detectors, especially the MMTs, and by the requirement of passing the $Z/\beta$ threshold of the NTDs. In general, ATLAS and CMS have demonstrated coverage of high velocities, $\beta \gtrsim 0.2$, while MoEDAL is sensitive to lower ones, $\beta \lesssim 0.2$.
Different possibilities are also explored for detecting particles that stop \emph{(are trapped)} in material and decay later. CMS and ATLAS look in empty bunch crossings for decays of trapped particles into jets. The MoEDAL MMTs, instead, may be monitored in an underground/basement laboratory for tracks arising from such decays. The background in the latter case, coming from cosmic rays, should be easier to control and assess, and the probed lifetimes should be longer thanks to the practically unlimited monitoring time.
\subsection{Doubly-charged Higgs bosons}\label{sc:lrsm}
Doubly-charged particles appear in many theoretical scenarios beyond the SM. For example, doubly-charged scalar states, usually termed doubly-charged Higgs fields, appear in left-right symmetric models~\cite{Pati1974yy,LRSM,LRSMa} and in see-saw models for neutrino masses with Higgs triplets. A number of models encompass additional symmetries and extend the SM Higgs sector by introducing doubly-charged Higgs bosons. A representative example is the Left-Right Symmetric Model (LRSM)~\cite{Pati1974yy,LRSM,LRSMa}, which addresses the strictly left-handed nature of the weak-interaction couplings by extending the gauge group of the SM to include a right-handed sector. Its simplest realisation~\cite{Pati1974yy,LRSM} postulates a right-handed version of the weak interaction, whose gauge symmetry is spontaneously broken at a high mass scale, leading to the parity-violating SM. This model naturally accommodates recent data on neutrino oscillations and the existence of small neutrino masses. It generally requires Higgs triplets containing doubly-charged Higgs bosons ($H^{\pm\pm}$), $\Delta_{R}^{++}$ and $\Delta_{L}^{++}$, which could be light in the minimal supersymmetric left-right model~\cite{LRSUSY}.
Single production of a doubly-charged Higgs boson at the LHC proceeds via vector boson fusion, or through the fusion of a singly-charged Higgs boson with either a $W^\pm$ or another singly-charged Higgs boson. The amplitudes of the $W_{L} W_{L}$ and $W_{R} W_{R}$ vector boson fusion processes are proportional to $v_{L,R}$, the vacuum expectation values of the neutral members of the scalar triplets of the LRSM. For the case of $\Delta_{R}^{++}$ production, the vector boson fusion process dominates. Pair production of doubly-charged Higgs bosons is also possible via a Drell-Yan process, with $\gamma$, $Z$ or $Z_{R}$ exchanged in the $s$-channel, but at a high kinematic price, since substantial energy is required to produce two heavy particles. In the case of $\Delta_{L}^{++}$, pair production may nevertheless be the only possibility if $v_{L}$ is very small or vanishing.
The decay of a doubly-charged Higgs boson can proceed via several channels. The dilepton signature leads to the (experimentally clean) final state $q\bar{q} \rightarrow \Delta^{++}_L\Delta^{--}_L \rightarrow 4\ell$. However, as long as the triplet vacuum expectation value, $v_\Delta$, is much larger than $10^{-4}~{\rm GeV}$, the doubly-charged Higgs decays predominantly into a pair of same-sign $W$ bosons. For very small Yukawa couplings $H_{\ell\ell} \lesssim 10^{-8}$, the doubly-charged Higgs boson can be quasi-stable~\cite{Chiang:2012dk}. In Fig.~\ref{fig:width}, the partial decay width of the doubly-charged Higgs boson into a $W$ boson pair is shown as a function of its mass. For $v_\Delta \gg 10^{-4}~{\rm GeV}$, this partial width is roughly equal to the total width of the doubly-charged Higgs boson. In the case of long lifetimes, slowly moving pseudo-stable Higgs bosons could be detected in the MoEDAL NTDs. For example, with CR39 one could detect doubly-charged Higgs particles moving with speeds below $\beta \simeq 0.4$.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{width}\hspace{2pc}%
\begin{minipage}[b]{0.43\textwidth}\caption{\label{fig:width}Partial decay width of $H^{\pm \pm} \rightarrow W^{\pm} W^{\pm}$ as a function of $m_{H^{\pm\pm}}$ for $v_{\Delta} = 55~{\rm GeV}$. From Ref.~\cite{Chiang:2012dk}.}
\end{minipage}
\end{figure}
\subsection{Black hole remnants in large extra dimensions}\label{sc:bh}
Over the last decades, models based on compactified extra spatial dimensions (ED) have been proposed in order to explain the large gap between the electroweak (EW) and the Planck scale, $M_{\rm EW}/M_{\rm PL} \approx 10^{-17}$. The four main scenarios relevant for searches at the LHC are the Arkani-Hamed-Dimopoulos-Dvali (ADD) model of large extra dimensions~\cite{ADD}, the Randall-Sundrum (RS) model of warped extra dimensions~\cite{Randall}, TeV$^{-1}$-sized extra dimensions~\cite{TEV-1}, and the Universal Extra Dimensions (UED) model~\cite{UED}.
The existence of extra spatial dimensions~\cite{ADD,Randall} and a sufficiently small fundamental scale of gravity open up the possibility that microscopic black holes could be produced and detected~\cite{ED1, bhevaporation, fischler1, CHARYBDIS, ED2,ED3} at the LHC. Once produced, a black hole will undergo an evaporation process categorised in three stages~\cite{bhevaporation, fischler1}: the \emph{balding phase}, the actual \emph{evaporation phase}, and finally the Planck phase. It is generally assumed that the black hole decays completely to some last few SM particles. However, another intriguing possibility is that the remaining energy is carried away by a stable remnant.
The prospect of microscopic black hole production at the LHC within the framework of models with large extra dimensions has been studied in Ref.~\cite{ADD}. Black holes produced at the LHC are expected to decay into SM particles with an average multiplicity of $\sim10-25$, most of which will be charged, though the details of the multiplicity distribution depend on the number of extra dimensions~\cite{BHMULT}. After the black holes have evaporated off enough energy to reach the remnant mass, some will have accumulated a net electric charge. On purely statistical grounds, the probability of being left with a highly-charged black hole remnant drops fast with the deviation of the charge from the average. The largest fraction of the black holes should have charges $\pm1$ or zero, although a smaller but non-negligible fraction would be multiply charged.
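The statistical argument can be illustrated with a deliberately crude toy Monte Carlo, in which the remnant is left with minus the net charge radiated by the last few emitted quanta (three in this sketch, an arbitrary choice); it is in no way a substitute for the full simulation described below, but it yields charge fractions in the same ballpark.
\begin{verbatim}
import random
from collections import Counter

def remnant_charge(n_emitted=3):
    # charge conservation: the remnant keeps minus the net charge
    # radiated by the last n_emitted quanta, each assumed to carry
    # charge -1, 0 or +1 with equal probability
    return -sum(random.choice((-1, 0, 1)) for _ in range(n_emitted))

counts = Counter(remnant_charge() for _ in range(100_000))
for q in sorted(counts):
    print(f"Q = {q:+d}: {100.0 * counts[q] / 100_000:5.1f} %")
# ~26% neutral, ~44% singly charged and ~30% multiply charged in
# this toy -- the same ballpark as the full simulation quoted below.
\end{verbatim}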
The fraction of charged black-hole remnants has been estimated~\cite{BHMULT,Hossenfelder:2005ku} using the {\tt PYTHIA} event generator~\cite{PYTHIA} and the {\tt CHARYBDIS} program~\cite{CHARYBDIS}. It was assumed that the effective temperature of the black hole drops towards zero for a finite remnant mass, $M_{R}$. The value of $M_{R}$ does not noticeably affect the investigated charge distribution, as it results from the very general statistical distribution of the charge of the emitted particles.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{BHR-charge}\hspace{2pc}%
\begin{minipage}[b]{0.43\textwidth}\caption{\label{Fig:Qofremnants}The distribution of black-hole remnant charges in proton-proton interactions at $\sqrt{s} = 14~{\rm TeV}$ calculated with the
{\tt PYTHIA} event generator~\cite{PYTHIA} and the {\tt CHARYBDIS} program~\cite{CHARYBDIS}. From Ref.~\cite{Hossenfelder:2005ku}.}
\end{minipage}
\end{figure}
Thus, independently of the underlying quantum-gravitational assumption leading to remnant formation, it was found that about 30\% of the remnants are neutral, whereas $\sim$40\% would be singly-charged black holes, and the remaining $\sim$30\% of remnants would be multiply-charged. The distribution of the remnant charges obtained is shown in Fig.~\ref{Fig:Qofremnants}. The black hole remnants considered here are heavy, with masses of a TeV or more. A significant fraction of the black-hole remnants produced would have $Z/\beta$ greater than five, high enough to register in the CR39 NTDs forming the LT-NTD sub-detector of MoEDAL.
\subsection{D-matter}\label{sc:dmatter}
Some versions of string theory include higher-dimensional ``domain-wall''-like membrane \emph{(brane)} structures in space-time, called \emph{D-branes}. In some cases the bulk is ``punctured'' by lower-dimensional D-brane defects, which are either point-like or have their longitudinal dimensions compactified~\cite{westmuckett}. From a low-energy observer's perspective, such structures would effectively appear to be point-like \emph{D-particles}. The latter have dynamical degrees of freedom, thus they can be treated as quantum excitations above the vacuum~\cite{westmuckett,shiu}, collectively referred to as {\it D-matter}. D-matter states are non-perturbative stringy objects with masses of order $m_D \sim M_s/g_s$, where $M_s$ is the string scale and $g_s$ is the string coupling, typically of order one so that the observed gauge and gravitational couplings are reproduced. Hence, the D-matter states could be light enough to be phenomenologically relevant at the LHC.
Depending on their type, D-branes could carry integral or torsion (discrete) charges with the lightest D-particle (LDP) being stable. Therefore the LDPs are possible candidates for cold dark matter~\cite{shiu}. D-particles are solitonic non-perturbative objects in the string/brane theory. As discussed in the relevant literature~\cite{shiu}, there are similarities and differences between D-particles and magnetic monopoles with non-trivial cosmological implications~\cite{Witten2002wb,westmuckett,Mavromatos:2010jt,mitsou}. An important difference is that they could have {\it perturbative} couplings, with no magnetic charge in general. Nonetheless, in the context of brane-inspired gauge theories, brane states with magnetic charges can be constructed, which would manifest themselves in MoEDAL in a manner similar to magnetic monopoles.
Non-magnetically-charged D-matter, on the other hand, could be produced at colliders and also produce interesting signals of direct relevance to the MoEDAL experiment. For instance, excited states of D-matter (${\rm D}^\star$) can be electrically-charged. For typical string couplings of phenomenological relevance, the first few massive levels may be accessible to the LHC. Depending on the details of the microscopic model considered, and the way the SM is embedded, such massive charged states can be relatively long-lived, and could likewise be detectable with MoEDAL. D-matter/antimatter pairs can be produced~\cite{Mavromatos:2010jt,mitsou} by the decay of intermediate off-shell $Z$-bosons, as shown in Fig.~\ref{fig:dproduction}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{signal}
\caption{An example of parton-level diagrams for production of D-particles by $q\bar{q}$ collisions in a generic D-matter low-energy model~\cite{mitsou}. }
\label{fig:dproduction}
\end{figure}
\section{Summary and outlook}\label{sc:summary}
MoEDAL will considerably extend the LHC reach in the search for (meta)stable highly-ionising particles. The latter are predicted in a variety of theoretical models and include: magnetic monopoles, stable (or, rather, long-lived) SUSY partners, quirks, strangelets, Q-balls, fractionally-charged massive particles, etc.~\cite{Acharya:2014nyr}. Such particles can be light enough to be produced at LHC energies (see e.g.\ Q-balls in the context of some SUSY or brane models~\cite{kehagias}). In this paper we have described searches for only a subset of those particles, due to lack of space. Specifically, we discussed monopoles, sparticles in some SUSY models, doubly-charged Higgs bosons, black hole remnants in models with large extra spatial dimensions, as well as some (more exotic) scenarios involving D-matter, which characterises some brane models.
The MoEDAL design is optimised to probe precisely all such long-lived states, unlike the other LHC experiments~\cite{DeRoeck:2011aa}. Furthermore, it combines different detector technologies: plastic nuclear track detectors (NTDs), trapping volumes and pixel sensors~\cite{moedal-tdr}. The first physics results, pertaining to the magnetic monopole trapping detectors, were obtained with LHC Run~1 data~\cite{MMT8TeV}, and the corresponding analysis at 13~TeV has been published recently~\cite{MMT13TeV}. The MoEDAL Collaboration is preparing new analyses with more Run~2 data, with other detectors (NTDs) and with a large variety of interpretations involving not only magnetic but also electric charges.
\section*{Acknowledgments}
The author is grateful to the DISCRETE2016 Symposium organisers for the kind invitation to present this talk. She acknowledges support by the Spanish Ministry of Economy and Competitiveness (MINECO) under the project FPA2015-65652-C4-1-R, by the Generalitat Valenciana through the project PROMETEO~II/2013-017, by the Spanish National Research Council (CSIC) under the CT Incorporation Program 201650I002 and by the Severo Ochoa Excellence Centre Project SEV 2014-0398.
\section*{References}
Blazars are radio-loud active galactic nuclei (AGN) with relativistic
jets aligned close to our line of sight \citep[e.g.,][]{blandford78}.
With very few exceptions \citep[e.g., 4C+55.17;][]{4C55.17}
they exhibit variable emission at all wavelengths, from radio to
$\gamma$ rays, on time scales as short as hours or even minutes
\citep{pks2155_hess_07,pks2155_hess_09,4C21.35_magic}.
Their spectral energy distributions (SEDs) in the $\nu \mathrm{F}_\nu$ representation
display two broad bumps usually attributed to
synchrotron and inverse Compton processes:
the first bump is located at infrared-optical frequencies but in some sources it can extend to X-ray frequencies,
while the second bump is found at X-ray/$\gamma$-ray frequencies.
The frequency of the lower energy peak ($\nu_{sync}$) has been used to subdivide
blazars into three different classes: high, intermediate, and low synchrotron peaked
(HSP, ISP and LSP, respectively) depending on whether
$\nu_{sync}>10^{15}$\ Hz, $10^{14}\ \mathrm{Hz}<\nu_{sync}<10^{15}$\ Hz or $\nu_{sync}<10^{14}$\ Hz \citep{bsl}.
For LSP blazars,
especially flat spectrum radio quasars (FSRQs; blazars with strong,
broad emission lines) it is common for the optical variations to be
correlated with the $\sim$\ GeV $\gamma$\ rays
\citep[e.g.,][]{bonning09,chatterjee11} indicating that emissions in
these wavebands probably originate from the same electron population. This correlated variability reflects what one would
expect for a model where the low-energy bump is created by synchrotron
emission and the high-energy one is created by Compton scattering of
some soft seed photon source. This idea is also reflected in their SEDs, in which the optical emission is generally seen
on the decreasing side of the low-energy bump and the GeV $\gamma$
rays fall on the decreasing side of the high-energy bump.
Reality, however, can be more complicated; a wide variety of blazar
variability behaviours is observed, some of which
can be difficult to explain in this simple picture. In some instances
for LSPs, the optical and $\gamma$-ray components show correlated
variability, but the optical has smaller flux variations than the $\gamma$
rays. This has been interpreted as contamination in the optical band
by an underlying accretion disk \citep{bonning09}. Although often the
$\gamma$-ray and optical flares are simultaneous
\citep[e.g.,][]{marscher10}, at other times the optical flare occurs
before the $\gamma$-ray flare \citep{marscher10} or lags behind the
$\gamma$-ray flare \citep{bonning09,LAT3C279}.
In recent years, FSRQ PKS~1424$-$418\ (\object{J1427-4206}) has presented an excellent opportunity to use multiwavelength variability as a probe of blazar behaviour, for several reasons:
\begin{enumerate}
\item The Automatic Telescope for Optical Monitoring \citep[ATOM;][]{ATOM09}
provides dense optical light curve coverage for PKS~1424$-$418.
\item This optical coverage complements the survey mode of the Large Area Telescope (LAT)
on the {\em Fermi Gamma-Ray Space Telescope} \citep{LATinstrument}.
\item PKS~1424$-$418\ exhibited two major flares in 2009-2010, seen in both optical \citep{ATOMAtel09, ATOMATel10} and
$\gamma$ rays \citep{LATATel09, LATAtel10} and largely isolated from
other activity. The clear pattern displayed during the outbursts optimized correlation studies.
\item With mean flux densities over 3\,Jy and known variability at multiple radio frequencies
\citep{tingay}, PKS~1424$-$418\ is a strong and variable radio source.
High resolution Very Long Baseline Interferometry (VLBI) observations of this blazar are being carried out
as part of the TANAMI program \citep{Ojha2010}, and radio monitoring is being done as part of the F-GAMMA project \citep{Fuhrmann2007,Angelakis2008}, the Submillimeter Array \citep[SMA,][]{Gurwell07}, and other radio telescopes.
\item The instruments on the \emph{Swift} satellite \citep{Gehrels2004} have provided additional optical, UV, and X-ray coverage of PKS~1424$-$418.
\end{enumerate}
PKS~1424$-$418\ has a redshift of $z=1.522$ \citep{white88}
giving it a luminosity distance\footnote{In a flat
$\Lambda$CDM cosmology where $H_0=67.11$\ km s$^{-1}$ Mpc$^{-1}$,
$\Omega_m=0.3175$, and $\Omega_\Lambda=0.6825$ \citep{plankcosmo}.}
of $d_L=11.3$\ Gpc. Although $\gamma$ radiation from PKS~1424$-$418\ was reported in the
Third EGRET Catalog as 3EG J1429$-$4217 \citep{3EGcatalog}, it was not
included in the \emph{Fermi}~LAT~ Bright Source List, based on data from
the first three months of the mission \citep{LATBSL}.
The blazar was then reported in the
First \citep[1FGL,][]{LAT1FGL} and Second \citep[2FGL,][]{nolan12} LAT Catalogs as \object{1FGL J1428.2$-$4204} and \object{2FGL J1428.0$-$4206}, respectively.
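For reference, the quoted luminosity distance can be reproduced with standard tools; the minimal sketch below uses the cosmological parameters given in the footnote.
\begin{verbatim}
# Cross-check of the luminosity distance for z = 1.522
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.11 * u.km / u.s / u.Mpc, Om0=0.3175)
print(cosmo.luminosity_distance(1.522).to(u.Gpc))   # -> ~11.3 Gpc
\end{verbatim}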
In this work we report the results of the LAT and ATOM monitoring of PKS~1424$-$418\ accompanied
by multiwavelength observations across the electromagnetic spectrum.
The paper is structured as follows.
In section \ref{sec:multi} we present the collection of multiwavelength data used in this study.
Section \ref{sec:lc} describes the multiwavelength light curve, showing the isolated flares that are the focus of this work, while
Section \ref{sec:var} presents a detailed cross-correlation analysis of the optical and $\gamma$-ray data.
Section \ref{sec:sed} describes the modeling of the broadband SED,
and Section \ref{sec:conclu} presents the conclusions.
\section{Multiwavelength data}\label{sec:multi}
\subsection{ATOM optical observations}
The 75-cm telescope ATOM~\citep{Hauser2004}, located
in Namibia, monitors the flux of PKS~1424$-$418\ in two different
filters: B (440 nm) and R (640 nm)
according to \cite{Bessel1990}. ATOM is operated robotically by the High Energy Stereoscopic System (H.E.S.S.)
collaboration and obtains automatic observations of confirmed or potentially $\gamma$-ray-bright blazars. A 4$\arcsec$ radius aperture is used for both filter bands. Data analysis (debiasing, flat fielding, and photometry with Source-Extractor; Bertin \& Arnouts 1996) is conducted automatically using our own pipeline. The ATOM results are shown in the third panel of Fig. \ref{mwllc}, which summarizes the multiwavelength temporal flux behaviour of PKS~1424$-$418.
\begin{figure*}[!ht]
\centering
\leftskip-0.7cm
\includegraphics[width=1.1\textwidth]{mwlErr}
\vspace{-1.0cm}
\caption{Multiwavelength light curve of PKS~1424$-$418. From top to bottom:
first panel displays radio data from TANAMI in several bands between 4.8 and 40\,GHz;
second panel displays 230\,GHz (red symbols) and 345\,GHz (black symbols) data from SMA and APEX, respectively;
third panel displays optical B-band (red points) and R-band (green points) data from ATOM, \textit{Swift}-UVOT data (W1, M2, W2, and V, B, U filters);
fourth panel displays \textit{Swift}-XRT data;
fifth panel displays \emph{Fermi}~LAT~ flux (E$>$100 MeV).
Vertical lines denote the prominent outbursts
studied in this work: black, flare A; red, flare B1; blue, flare B2; and green, flare C.}
\label{mwllc}%
\end{figure*}
\subsection{Fermi LAT gamma-ray observations}
\emph{Fermi}~LAT~ is a pair-conversion telescope optimized for energies from 20 MeV to greater than 300 GeV \citep{LATinstrument}.
Taking advantage of the LAT's large field of view ($\sim$2.4 sr), the \emph{Fermi}~LAT~ observatory operated in
scanning mode provides coverage of the full sky every three hours and offers a good opportunity to follow PKS~1424$-$418\ at $\gamma$-ray energies.
We analysed the data sample, which covers observations from 2008 August 4 (MJD 54682)
to 2011 June 13 (MJD 55725), with the standard \textit{Fermi Science Tools} (version v9r32p5), the P7REP\_SOURCE\_V15 LAT Instrument Response Functions
(IRFs), and associated diffuse emission models\footnote{The P7REP data, IRFs, and diffuse models (gll\_iem\_v05.fit and iso\_source\_v05.txt) are available at http://fermi.gsfc.nasa.gov/ssc/.}.
The data selection was based on the following criteria.
Only good quality P7REP source class events \citep{bregeon}
were extracted from a circular region of interest (ROI) of $10^{\circ}$ radius centered at the location of
PKS~1424$-$418\ and considered in the analysis.
Time intervals when the LAT boresight was rocked with respect to the local zenith by more than $52^{\circ}$
(usually for calibration purposes or to point at specific sources) and events with a reconstructed angle with respect to the local zenith $>100^{\circ}$ were excluded.
The latter selection was necessary to limit the contamination from $\gamma$ rays produced by interactions of cosmic rays with the upper atmosphere of the Earth.
In addition, to correct the calculation of the exposure for the zenith cut, time intervals when any part of the ROI was observed at zenith angles $>100^{\circ}$ were excluded.
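Schematically, and leaving aside the actual \textit{Fermi Science Tools} interfaces that were used for the real analysis, this event-level selection amounts to a handful of simple cuts; the sketch below illustrates them with hypothetical column names.
\begin{verbatim}
# Schematic version of the event selection described above, written
# against a hypothetical event table; the real analysis is performed
# with the Fermi Science Tools, not with this code.
import numpy as np

ROI_RADIUS = 10.0    # deg, centred on PKS 1424-418
ZENITH_MAX = 100.0   # deg, suppresses Earth-limb gamma rays
E_MIN, E_MAX = 100.0, 300000.0   # MeV

def select_events(events, gti):
    """events/gti: hypothetical numpy structured arrays."""
    keep = ((events["angular_sep"] <= ROI_RADIUS)
            & (events["zenith_angle"] <= ZENITH_MAX)
            & (events["energy_mev"] >= E_MIN)
            & (events["energy_mev"] <= E_MAX))
    # good-time intervals already exclude periods with rocking
    # angle > 52 deg and with any part of the ROI at zenith > 100 deg
    in_gti = np.any((events["time"][:, None] >= gti["start"])
                    & (events["time"][:, None] < gti["stop"]), axis=1)
    return events[keep & in_gti]
\end{verbatim}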
To derive the spectral fluxes we applied an unbinned maximum likelihood technique to events in the energy range from 100\,MeV to 300\,GeV \citep{mattox96}.
Sources from the 2FGL catalog \citep{nolan12} located within $15^{\circ}$ of PKS~1424$-$418\
were included in the model of the ROI by setting the spectral shapes and the initial parameters for the modeling to those published in the 2FGL catalog.
In the fitting procedure the parameters of sources located within a $7^{\circ}$ ROI, as well as the normalization of
the isotropic background and the Galactic diffuse emission components,
were left free to vary.
Parameters of sources located between $7^{\circ}$ and $15^{\circ}$ from the source of interest were instead fixed at their catalog values.
Instrumental systematic errors are typically $\sim$10\% and negligible compared to the large flux variations observed \citep{LATperform}.
During the LAT observation period, PKS~1424$-$418\ was not always significantly detected. Consequently, we calculated flux upper limits at the 95\% confidence level for each time bin where the Test Statistic\footnote{The Test Statistic value quantifies the probability of having a point $\gamma$-ray source at the location specified and corresponds roughly to the square of the standard deviation assuming one degree of freedom \citep{mattox96}.
It is defined as TS = $-2\log (L_0 / L)$,
where $L_0$ is the maximum likelihood value for a model without an additional source (the 'null hypothesis') and $L$ is the maximum likelihood value for a model with the additional source at the specified location.
}
(TS) value for the source was TS$<$10 or the number of predicted photons was $N_{\mathrm{pred}}<3$. The LAT flux results are shown in the bottom panel of Fig. \ref{mwllc}.
The binning used for the LAT light curves provides a good compromise between sufficient photon statistics and sensitivity to source flux variability. We explored smaller time bins for the flares, but the poor statistics did not allow sufficient detections, even during bright flares.
\subsection{\emph{Swift} observations}
The \emph{Swift} satellite \citep{Gehrels2004} performed 6 target of opportunity
(ToO) observations on PKS~1424$-$418\ in 2010 May, triggered by high optical
activity of the source~\citep{ATOMATel10}, and 2 ToO observations in 2011 May,
triggered by the third $\gamma$-ray flare observed by \emph{Fermi}~LAT~~\citep{ATEL3329}.
To investigate the source behaviour
over the years we also analysed three \emph{Swift} observations carried out before the
launch of \emph{Fermi}~LAT~ (2005 April and 2006 May) and another three observations
in 2010 February and September.
The observations were made with all three
on-board instruments: the UV/Optical Telescope (UVOT; \citealt{Roming2005}, 170--600 nm), the X-ray Telescope (XRT; \citealt{Burrows2005}, 0.2--10.0 keV), and the Burst Alert Telescope (BAT; \citealt{Barthelmy2005}, 15--150 keV). The hard X-ray flux of this source is below the sensitivity of the BAT instrument, not appearing in the 70-month BAT catalog \citep{baumgartner}.
The XRT data were processed with standard procedures (xrtpipeline v0.12.6), filtering, and screening criteria by using the Heasoft package (v.6.11). The
source count rate was low during all the observations (count rate $<$ 0.5 counts s$^{-1}$ in the 0.3--10 keV energy range), thus we only considered
photon counting data for our analysis, and further selected XRT event grades 0--12. Pile-up correction was not required. Source events were
extracted from a circular region with a radius of 20 or 25 pixels (1
pixel $\sim$ 2.36$\arcsec$), depending on the source count rate, while
background events were extracted from a circular region with radius 50 pixels and located
away from background sources. Ancillary response files were generated with the
task {\tt xrtmkarf}. These account for different extraction regions, vignetting
and point spread function corrections. We used the latest version of the spectral redistribution
matrices in the calibration database maintained by HEASARC. The adopted energy range
for spectral fitting is 0.3--10 keV. We summed two observations performed on
2010 February 11 in order to achieve higher statistics. When the number of counts was lower
than 200, the Cash statistic \citep{Cash1979} on ungrouped data was used.
All the other spectra were rebinned with a minimum of 20 counts per energy bin to allow $\chi{^2}$ fitting within
{\sc XSPEC} (v12.6.0; \citealt{Arnaud1996}). We fit the individual spectra with a simple absorbed power law, with a
neutral hydrogen column density fixed to its Galactic value ($N_{\rm H} = 7.71 \times$ 10$^{20}$
cm$^{-2}$; \citealt{Kalberla2005}). The fit results are reported in Table \ref{1424_XRT}.
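For illustration, the essence of such a fit can be sketched outside {\sc XSPEC} as a power law attenuated by photoelectric absorption with a fixed column density. In the snippet below, the cross-section normalisation and its crude $E^{-8/3}$ energy dependence, as well as the synthetic data, are assumptions made purely for illustration; the actual analysis uses the full {\sc XSPEC} absorption model.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

N_H = 7.71e20            # cm^-2, Galactic column density (fixed)
SIGMA_0 = 2.0e-22        # cm^2 at 1 keV (rough normalisation)

def absorbed_powerlaw(e_kev, norm, photon_index):
    # crude photoelectric absorption: sigma ~ SIGMA_0 * E^(-8/3)
    tau = N_H * SIGMA_0 * e_kev ** (-8.0 / 3.0)
    return norm * e_kev ** (-photon_index) * np.exp(-tau)

# synthetic spectrum standing in for the grouped XRT data
energies = np.geomspace(0.3, 10.0, 15)                # keV
truth = absorbed_powerlaw(energies, 1e-2, 1.7)
rng = np.random.default_rng(1)
rates = truth * rng.normal(1.0, 0.1, energies.size)   # 10% scatter

popt, _ = curve_fit(absorbed_powerlaw, energies, rates,
                    sigma=0.1 * truth, p0=(1e-2, 1.5))
print(f"fitted photon index: {popt[1]:.2f}")          # ~1.7
\end{verbatim}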
\begin{table*}[th!]
\footnotesize
\caption{Fitting results of {\it Swift}/XRT observations of PKS~1424$-$418.
Columns report, from left to right: observation time, net exposure time,
observed photon index and flux.
The last column indicates the method used to perform the spectral analysis:
reduced $\chi^2$ and, in parentheses, the degrees of freedom,
or the Cash method when the statistics were low.
Results were obtained considering an absorbed power-law model with $N_{\rm H}$ fixed to Galactic
absorption in the direction of the source. $^{a}$ Observed flux.}
\centering
\begin{tabular}{llcccc}
\hline
\hline
\noalign{\smallskip}
\multicolumn{1}{c}{Time} &
\multicolumn{1}{l}{Time } &
\multicolumn{1}{c}{Net Exp. Time} &
\multicolumn{1}{c}{Photon Index} &
\multicolumn{1}{c}{Flux$^{a}$ 0.3$-$10.0 keV} &
\multicolumn{1}{c}{$ \chi^{2}_{\rm red}$ (d.o.f.) / Cash} \\
\multicolumn{1}{c}{(UT)} &
\multicolumn{1}{l}{(MJD)} &
\multicolumn{1}{c}{ (sec) } &
\multicolumn{1}{c}{}&
\multicolumn{1}{c}{(10$^{-12}$ erg cm$^{-2}$ s$^{-1}$)} &
\\
\multicolumn{1}{c}{} \\
\hline
\noalign{\smallskip}
2005-Apr-19& 53479 & 2249 & $1.35 \pm 0.22$ & $3.34 \pm 0.62$ & Cash \\
2005-Apr-23& 53483 & 1543 & $1.54 \pm 0.41$ & $1.73 \pm 0.47$ & Cash \\
2006-Jun-18& 53904 & 3784 & $1.37 \pm 0.17$ & $4.28 \pm 0.49$ & 0.727 (11) \\
2010-Feb-11& 55238 & 3646 & $1.49 \pm 0.24$ & $3.77 \pm 0.75$ & 0.762 (9) \\
2010-May-12& 55328 & 1958 & $1.70 \pm 0.21$ & $5.80 \pm 0.90$ & 0.796 (9) \\
2010-May-13& 55329 & 1279 & $1.75 \pm 0.24$ & $4.25 \pm 0.84$ & Cash \\
2010-May-14& 55330.5 & 1963 & $1.99 \pm 0.20$ & $5.25 \pm 0.63$ & 0.803 (10) \\
2010-May-14& 55330.6 & 951 & $1.78 \pm 0.20$ & $7.89 \pm 0.72$ & Cash \\
2010-May-15& 55331 & 1938 & $1.68 \pm 0.21$ & $4.45 \pm 0.71$ & Cash \\
2010-May-16& 55332 & 3833 & $1.82 \pm 0.18$ & $4.29 \pm 0.54$ & 0.868 (13) \\
2010-Sep-30& 55469 & 5152 & $1.47 \pm 0.15$ & $3.80 \pm 0.41$ & 1.008 (15) \\
2011-May-07& 55688 & 3913 & $1.80 \pm 0.24$ & $3.41 \pm 0.55$ & 0.920 (10) \\
2011-May-10& 55691 & 3932 & $1.61 \pm 0.23$ & $3.93 \pm 0.46$ & 1.080 (9) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\\
\label{1424_XRT}
\end{table*}
The average X-ray flux observed in 2010 mid-May was higher
than the values observed by {\it Swift}/XRT in 2005-2006 and
2010 February, indicating an increase in X-ray activity and implying that the
flaring mechanism also influences the X-ray band. In particular, an increase by a factor of $\sim$2.5 with respect to the 2005-2006 average was observed on 2010 May 14, at the peak of the X-ray emission, which coincides with flare B. We also noted a change of the photon index from $\sim$1.4 to $\sim$1.8
during 2010 May. On the other hand, no increase of the X-ray flux was observed after the 2011 May
$\gamma$-ray flare.
All pointings of UVOT in 2005 and 2006 as well as in 2010 February and September were performed with all 6 UVOT filters (V, B, U, UVW1, UVM2, and UVW2). The remaining observations in 2010 were taken using the ``filter of the day", i.e., either the U or one of the UV filters. In 2011, the source was observed with the V filter only.
We re-processed the UVOT data using the script uvotgrblc, available in version 6.10 of the HEASoft software, and version 20100930 of the UVOT calibration database. The orbits of each UVOT image were summed in order to increase the signal-to-noise ratio. The photometry was obtained by customizing the background region, selecting an annulus with inner/outer radius of 27\arcsec/35\arcsec, respectively. All field sources that contaminate the background region and appear in any filter have been masked out.
The 2005 April pointings had relatively short exposures, and for them the chosen source extraction region was a circle of 3\arcsec\ radius. For the remaining observations, where the longer exposures yield a higher measured source intensity, the radius was increased to 5\arcsec. The script estimates the photometry by calling the task uvotsource, and the output values are then corrected for aperture effects.
Flux values were
de-reddened using the values of E(B-V) taken from \citet{schlegel98}, with $\mathrm{A_\lambda}$/E(B-V) ratios calculated for the UVOT filters using the mean interstellar extinction curve from \citet{fitzpatrick}. The {\it Swift} UVOT and XRT results are shown in panels 4 and 5 of Fig. \ref{mwllc}.
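The de-reddening correction itself is a one-line operation; in the sketch below, the E(B-V) value and the $\mathrm{A_\lambda}$/E(B-V) ratios are placeholders, whereas in the analysis they are taken per filter from the references above.
\begin{verbatim}
# Deredden an observed flux: F_corr = F_obs * 10^(0.4 * A_lambda),
# with A_lambda = (A_lambda / E(B-V)) * E(B-V).
EBV = 0.1                                  # placeholder E(B-V)
RATIOS = {"V": 3.1, "B": 4.0, "U": 4.9}    # illustrative ratios

def deredden(flux_obs, band):
    a_lambda = RATIOS[band] * EBV
    return flux_obs * 10.0 ** (0.4 * a_lambda)

print(deredden(1.0e-12, "B"))   # erg cm^-2 s^-1, corrected
\end{verbatim}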
\subsection{Radio data}
High resolution Very Long Baseline Interferometry (VLBI) observations of this blazar carried out
as part of the TANAMI program show it to have a faint
low-surface-brightness jet with a wide opening angle \citep{Ojha2010}.
At the milliarcsecond scale, the 22.3\,GHz image of PKS~1424$-$418\ (Fig. \ref{fig:1424-418_tanami26mar2008};
observed on 2008 March 26) clearly indicates that this source is extremely core dominated at this
frequency. This is further confirmed by the 8.4\,GHz image in \citet{Ojha2010}
which shows a dominant, compact VLBI core and a very diffuse and resolved jet.
\begin{table}
\caption{Flux densities of the milliarcsecond core of PKS~1424$-$418.}
\centering
\begin{tabular}{cccc}
\hline \hline
Time & Time & Frequency & Core Flux Density \\
(UT) & (MJD) & (GHz) & (Jy) \\
\hline
2008-Aug-08 & 54686 & 8.4 & 1.5 \\
2009-Feb-23 & 54885 & 8.4 & 1.1\\
2010-Mar-12 & 55267 & 8.4 & 1.1\\
2010-Jul-24 & 55401 & 8.4 & 1.2\\
\hline
\end{tabular}
\label{tab:tanami}
\end{table}
The milliarcsecond core flux densities of PKS~1424$-$418\ at
8.4\,GHz at four epochs during 2008 through 2010 are listed in Table \ref{tab:tanami}.
The core flux density
declined after the 2008 August epoch and has remained steady during the
period covered by the flares. This general trend is seen in the lower
resolution radio data as well (see below). The errors in these flux densities
are conservatively estimated to be less than 20\%. These flux densities were
obtained by model fitting a circular Gaussian to the core of VLBI
images made by the TANAMI program. For details on the observations,
imaging and model fitting process refer to \cite{Ojha2010}.
\begin{figure}
\centering
\resizebox{6cm}{!}{\rotatebox[]{-90}{\includegraphics{1424_K_Mar_2008}}}
\vspace{-13pt}
\caption{22.3\,GHz VLBI image of PKS~1424$-$418\ confirming its core-dominated morphology. The image has a peak flux density of 0.8 Jy/beam. The hatched ellipse
on the bottom left represents the synthesized beam of the observing array.}
\label{fig:1424-418_tanami26mar2008}
\end{figure}
As part of the TANAMI program, the Australia Telescope Compact Array
(ATCA) was used
to make ``snapshot'' observations of PKS~1424$-$418\ at frequencies
between 4.8 and 40\,GHz. Data at all frequencies were calibrated
against the ATCA primary flux calibrator PKS\,1934$-$638 \citep[see ][]{Stevens2012}.
These flux densities have a $1\sigma$ uncertainty of 5, 10 and 15\% at 4.8/9.0\,GHz, 19\,GHz,
and 40\,GHz, respectively. Each frequency is the center of a 2\,GHz wide band.
PKS~1424$-$418\ was also monitored at a
frequency of 6.7\,GHz by the 30-meter Ceduna radio telescope in South
Australia. Each flux density has a $1\sigma$ uncertainty of $\pm 0.3$\,Jy \citep{McCulloch2005}. The lower-frequency radio results from the TANAMI program are shown in the top panel of Fig. \ref{mwllc}.
\subsection{SMA observations}
Observations at 230 GHz (1.3 mm) were obtained at the Submillimeter Array (SMA)
near the summit of Mauna Kea (Hawaii).
PKS~1424$-$418\ is included in an ongoing monitoring program at the SMA to
determine the fluxes of compact extragalactic
radio sources that can be used as calibrators at mm wavelengths \citep{Gurwell07}.
PKS~1424$-$418\ was also observed as part of a dedicated program to follow sources on the
\emph{Fermi}~LAT~ Monitored Source List (PI: A. Wehrle).
These potential calibrators are observed for 3 to 5 minutes,
and the measured source signal strength is calibrated against known standards,
typically solar system objects (Titan, Uranus, Neptune, or Callisto).
Data from this program are updated regularly and are available at the SMA website\footnote{http://sma1.sma.hawaii.edu/callist/callist.html}.
\subsection{APEX observations}
As part of the F-GAMMA program (e.g. \citealt{Fuhrmann2007}, \citealt{Angelakis2008}, \citealt{Angelakis2012}),
sub-mm observations of a large sample of \emph{Fermi}~LAT~ $\gamma$-ray blazars including PKS~1424$-$418\
have been performed with the APEX (The Atacama Pathfinder EXperiment) telescope in Chile since early 2008 (see also \citealt{Larsson2012}). The quasi-regular F-GAMMA
observations are obtained during several dedicated
time-blocks per year complemented by regular and frequent pointing observations within the framework of other projects and APEX technical time.
The multi-channel bolometer array facility instrument LABOCA
\citep[Large Apex BOlometer CAmera,][]{Siringo2008} used for these observations consists of 295 channels arranged in 9
concentric hexagons and operates at a wavelength of 0.87\,mm
(345\,GHz). The observations are typically performed in `spiral
observing mode' with a raster of four spiral maps each of 20 or 35
seconds integration, depending on the source brightness at
345\,GHz. During each run, Skydip\footnote{Skydips are measurements
of the sky temperature as a function of airmass and are used
to estimate the zenith sky opacity (and thus to correct astronomical data for atmospheric extinction).}
measurements for opacity correction and
frequent calibrator measurements are performed
(Fuhrmann et al. in prep.).
The SMA and APEX flux results are shown in the second panel of Fig. \ref{mwllc}.
\section{Multiwavelength behaviour of the source}\label{sec:lc}
The most striking feature of the multiwavelength light curve of Fig. \ref{mwllc} is the series of strong, isolated flares present in the optical and $\gamma$-ray bands, seen in the third and bottom panels respectively.
At these wavebands the source underwent two main flaring episodes,
the first occurring between 2009 June 22 to July 20 (MJD 55004 - 55032; flare A)
and the second between 2010 May 6 to 25 (MJD 55322 - 55341; flare B).
A third, lower-amplitude flare
occurred between 2011 April 19 - May 16 (MJD 55670 - 55697; flare C). In both bands, PKS~1424$-$418\ remained in a relatively low state at other times. A detailed analysis of these optical/$\gamma$-ray flares is presented in the following section.
Lower-frequency radio flux densities are plotted in the
first panel of Fig. \ref{mwllc}.
Although the radio data indicate that the source has been almost steady
during the period of the flares, we note that the
Ceduna values (gray points) have quite large error bars, while the
sparse sampling of ATCA data may be consistent with some variability,
at least on long time scales.
At mm and sub-mm wavelength, the overall sampling displayed in the
second panel of Fig. \ref{mwllc} (SMA and APEX data are represented by red and black symbols
respectively) is limited, but a strong flux density increase is evident over the observing
period of more than two years: the overall
flux density increased by a factor of about 3, whereas faster, under-sampled
variability is superimposed on the long-term increasing trend.
PKS~1424$-$418\ shows dramatic variability in the X-ray band and in all of the UVOT bands. The data are sparse
and do not allow firm conclusions about long term trends but they show that immediately
after all three of the $\gamma$-ray flares, the optical/UV emission increased as well,
when compared with periods of lower $\gamma$-ray activity. In particular, the source is $\sim 1-1.5$ and $\sim 0.5$ magnitude brighter in 2010 May and 2011 May, respectively, than when observed in 2010 February and September.
During the latter periods, the intensity was moderately higher ($\sim 0.5$
magnitudes) than that observed in 2006 June, and significantly higher ($\sim 2$ magnitudes) than the very low level recorded in 2005 April. In X-rays, an increase of a factor of 2 in the \textit{Swift}-XRT flux was seen during 2010 May,
in coincidence with the second $\gamma$-ray flare.
\section{Optical/gamma-ray correlation}\label{sec:var}
\begin{figure*}[!ht]
\centering
\resizebox{13cm}{!}{\rotatebox[]{0}{\includegraphics[width=117px,height=70px]{2-3lc}}}\\
\caption{\emph{Fermi}~LAT~ (2-day bin, top panel) and optical light curves (bottom panel) of PKS~1424$-$418.
The dashed horizontal line in the upper panel indicates the mean $\gamma$-ray flux over the whole observing period considered in this work.
Vertical lines denote the three prominent outbursts
studied in this work: black, flare A (2009 June 22 - July 20, MJD 55004-55032), red, flare B1 (2010 May 6-16, MJD 55322-55332), blue, flare B2 (2010 May 19-25, MJD 55335-55341), and green, flare C (2011 April 19 - May 16, MJD 55670-55697).}
\label{fermiAtomLc}%
\end{figure*}
The three main outbursts are clearly visible in the 2-day bin $\gamma$-ray light curve and the optical light curves, Fig. \ref{fermiAtomLc}.
Flare A and flare B are displayed with more detail
in the upper panels of Fig. \ref{fig:Flare1LC} and \ref{fig:Flare2LC},
in which the $\gamma$-ray light curves (black crosses) have been superimposed onto the optical
flux values (pink points).
Close examination of the $\gamma$-ray flux evolution during flare B
shows a single flaring event. However, in
the optical band a two-peak sub-structure is visible during
the same time period.
In our work, therefore, we study separately the sub-periods
and refer to them as flare B1 (2010 May 6 -- 16, MJD 55322 -- 55332) and flare B2 (2010 May 19 -- 25; MJD 55335 -- 55341).
\subsection{Flare A}\label{subsec:a}
The shape of flare A is very similar in the optical and $\gamma$-ray bands. As can be
seen in the upper panel of Fig. \ref{fig:Flare1LC}, not just the peak but also the rising and falling
branches seem to track each other fairly well. The main difference
between the two bands is a somewhat larger flare amplitude in
optical. This is apparent also in the flux ratio plot in
the lower panel of Fig. \ref{fig:Flare1LC}, although the comparison is
affected by the limited R band sampling during the first part
of the flare. The flux ratios
were calculated by interpolating the LAT light curve to the
times of R-band measurements.
To evaluate the ratio, given an optical flux value, we considered
two successive LAT flux values, before and after the optical one,
and performed a linear interpolation between the two.
This interpolated LAT flux is matched with the corresponding R-band measurement.
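For concreteness, this interpolation scheme can be sketched in a few lines of Python (a minimal illustration assuming time-sorted input arrays; the function name and the unit-mean rescaling applied for the figures are ours, not taken from an existing pipeline):
\begin{verbatim}
import numpy as np

def interpolated_flux_ratio(t_opt, f_opt, t_lat, f_lat):
    """R-band / LAT flux ratio, with the LAT light curve linearly
    interpolated to the times of the R-band measurements."""
    t_opt, f_opt = np.asarray(t_opt, float), np.asarray(f_opt, float)
    t_lat, f_lat = np.asarray(t_lat, float), np.asarray(f_lat, float)
    # rescale each light curve so its mean flux is approximately 1
    f_opt, f_lat = f_opt / f_opt.mean(), f_lat / f_lat.mean()
    # keep only optical epochs bracketed by two successive LAT points
    inside = (t_opt >= t_lat[0]) & (t_opt <= t_lat[-1])
    f_lat_interp = np.interp(t_opt[inside], t_lat, f_lat)
    return t_opt[inside], f_opt[inside] / f_lat_interp
\end{verbatim}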
\begin{figure}[t]
\centering
\resizebox{9 cm}{!}{\rotatebox[]{0}{\includegraphics{flare1lc2dv4}}}\\
\caption{\textit{Top panel}:
Comparison of LAT (crosses) and R-band (filled circles) light curves for flare A.
Fluxes are rescaled so the mean flux values over the full time range are approximately 1.
\textit{Bottom panel}: Flux
ratio for the two arbitrarily normalized light curves computed by interpolating
LAT fluxes to the times of the R-band measurements.}
\label{fig:Flare1LC}
\end{figure}
The fluxes of both light curves were rescaled so that the mean flux values over the full time range are approximately 1.
\begin{figure}[!ht]
\centering
\resizebox{9 cm}{!}{\rotatebox[]{0}{\includegraphics{DCCF}}}\\
\caption{Discrete Cross Correlation Functions for flare A using the ATOM R-band light curve (transformed to flux) and the 1-day bin LAT light curve ($>$ 100 MeV).}
\label{fig:DCCF1}
\end{figure}
The optical/$\gamma$-ray correlation
was also investigated by calculating the Discrete Cross Correlation
Functions \citep[DCCF, ][]{edelson} between the R band flux and a 1-day binned LAT
light curve;
this is shown in Fig. \ref{fig:DCCF1} for flare A.
Approximately 50 days of data were used for the correlation.
The DCCF is consistent with zero time lag. From a Gaussian fit we
obtain a time lag = $-1 \pm 2$ days, where negative lag means optical leading
$\gamma$ rays. The uncertainty is based on Monte Carlo simulations taking
flux errors and time sampling into account \citep[see][]{Peterson1998}.
The rms variations in DCCF values for these Monte Carlo simulations
are shown as error bars in the plot, although we note that
these errors are strongly correlated:
they will make nearby bins in the DCCF move up or down together.
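A bare-bones version of the DCCF estimator reads as follows (a sketch only: both series are reduced to zero mean and unit variance, the measurement-noise correction in the denominator of the original estimator is dropped, and the flux-randomization Monte Carlo used for the quoted uncertainty is not included):
\begin{verbatim}
import numpy as np

def dccf(t1, f1, t2, f2, lag_bins):
    """Discrete cross-correlation function of two unevenly sampled
    light curves, binned in lag; lag = t2 - t1, so positive lags
    mean the second series lags the first."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    f1 = (np.asarray(f1, float) - np.mean(f1)) / np.std(f1)
    f2 = (np.asarray(f2, float) - np.mean(f2)) / np.std(f2)
    lag_bins = np.asarray(lag_bins, float)
    lags = t2[None, :] - t1[:, None]    # all pairwise lags
    udcf = f1[:, None] * f2[None, :]    # unbinned correlation products
    centres = 0.5 * (lag_bins[:-1] + lag_bins[1:])
    out = np.full(centres.size, np.nan)
    for k in range(centres.size):
        sel = (lags >= lag_bins[k]) & (lags < lag_bins[k + 1])
        if sel.any():
            out[k] = udcf[sel].mean()
    return centres, out
\end{verbatim}
A Gaussian fit to the peak of the resulting function then yields a time lag of the kind quoted above.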
\subsection{Flare B}\label{subsec:b}
\begin{figure}[b]
\centering
\resizebox{9 cm}{!}{\rotatebox[]{0}{\includegraphics{flare2lc2dv4}}}\\
\caption{\textit{Top panel}:
Comparison of LAT (crosses) and R-band (filled circles) light curves for flare B.
Fluxes are rescaled so the mean flux values over the full time range are approximately 1.
\textit{Bottom panel}: Flux ratio computed by interpolating LAT fluxes to the times of the R-band
measurements, as for flare A. Symbols and normalization are the same as in Fig. \ref{fig:Flare1LC}.
\label{fig:Flare2LC}
\end{figure}
In contrast to the first flare, flare B shows large differences between the two bands.
Two separate flare
components are seen in optical but only one of these has a prominent
counterpart in the LAT light curve. The flare onset is also different
with a sharp, less than 1 day, increase in $\gamma$ rays and a much more
gradual brightening in optical. These differences are illustrated in
Fig. \ref{fig:Flare2LC}, which refers to the period of flare B: the upper
panel shows the superposition of the optical and $\gamma$-ray light curves,
while the lower panel shows their flux ratio, with the same normalization that was applied in Fig. \ref{fig:Flare1LC}.
\begin{figure}[!ht]
\centering
\resizebox{9 cm}{!}{\rotatebox[]{0}{\includegraphics{tracknewflare2}}}\\
\caption{
Gamma-ray/optical flux - flux evolution over flare B. LAT 2-day binned
fluxes are interpolated to the times of R-band measurements as explained
in section \ref{subsec:a} and plotted on the y-axis.
R-band fluxes are plotted on the x-axis. For clarity their errors
(typically 5-10\% or less) are not shown.
Approximate times in days (from MJD 55322) are indicated along the track.
}
\label{fig:Track}
\end{figure}
For the second flare, the relation between the two bands is more
complex than can be described by a single DCCF and cannot be explained by a single time lag.
We show in Fig. \ref{fig:Track} the flux-flux evolution for flare B.
The data points are the same values that were used to compute the
flux ratios in Fig. \ref{fig:Flare2LC}. In other words, $\gamma$-ray fluxes are
interpolated to the times of the R-band observations.
Approximate times in days (starting from MJD 55322) are indicated
along the track. The upper loop (days 0 - 12) in the plot corresponds to
the first of the two sub-flares (flare B1; 2010
May 6 -- 16, MJD 55322 -- 55332) and the lower part (from day 14)
to the second sub-flare (flare B2; 2010 May 19 -- 25, MJD 55335 -- 55341),
emitting predominantly in the R-band.
An interesting feature is that for the rise and decay of
the two optical sub-flares, the variation in $\gamma$-ray and
R-band fluxes can be described by a linear relation with similar
slope in all four cases (note, however, the data gap
in the R-band during the decay of the first sub-flare).
This behaviour can be interpreted as if there are two $\gamma$-ray components,
one that is directly correlated with optical flux and one that is essentially unrelated
(the rise and final drop of the first ``sub-flare component'').
The former component contributes
to both sub-flares, the latter is quiescent during sub-flare B2.
Further details are discussed with the SED modeling results in Section \ref{sec:sed}.
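The ``similar slope'' statement can be made quantitative by fitting each rise and decay of the flux--flux track separately, for example as follows (a sketch; the index lists selecting the four segments would be chosen by eye from Fig. \ref{fig:Track} and are hypothetical):
\begin{verbatim}
import numpy as np

def segment_slopes(f_opt, f_lat, segments):
    """Fit f_lat = a * f_opt + b to each rise/decay segment of the
    flux-flux track and return the slopes a for comparison."""
    slopes = []
    for idx in segments:
        a, b = np.polyfit(np.asarray(f_opt)[idx],
                          np.asarray(f_lat)[idx], deg=1)
        slopes.append(a)
    return slopes
\end{verbatim}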
\section{Broad-band spectral energy distribution}\label{sec:sed}
We have built four SEDs around the four flare intervals using all multiwavelength data available.
The LAT spectra for flares A, B1 and B2 show a peculiar concave upwards shape in the SED representation, although this feature is not statistically significant.
We modeled each flare with
a leptonic model that includes the synchrotron, synchrotron
self-Compton (SSC), and external Compton (EC) processes. Details of
the calculations can be found in \citet{finke08_SSC} and \citet{dermer09_EC}.
As is common with blazar SEDs \citep[e.g. ][]{dammando12}, our model fit did not
account for the bulk of the observed radio flux densities.
This emission, in the framework of the blazar scenario, must be produced
further down the jet, at relatively large distances from the blazar emission zone.
The SEDs are presented in Fig. \ref{SED_fig}
and the model parameters can be found in Table \ref{table_fit}. The
electron distribution was assumed to be a broken power-law with a
super-exponential cutoff at high electron Lorentz factor $\gamma^{\prime}$
(in the frame co-moving with the jet: $N_e \propto \gamma^{\prime -p_2}
\exp( -(\gamma^{\prime}/\gamma^{\prime}_{max})^4 )$ for $\gamma^{\prime} > \gamma^{\prime}_{brk}$).
This electron distribution was chosen to fit the SEDs, and does not necessarily reflect particular acceleration or cooling processes.
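Numerically, the assumed electron distribution can be transcribed directly (a sketch; the default parameter values are those of flare A in Table \ref{table_fit}, and the overall normalization is arbitrary here):
\begin{verbatim}
import numpy as np

def electron_spectrum(gamma, p1=2.0, p2=3.2, g_min=2.6e2,
                      g_brk=3.3e2, g_max=2.6e3, norm=1.0):
    """Broken power law with a super-exponential cutoff:
    N_e ~ gamma^-p1 below the break and
    N_e ~ gamma^-p2 * exp(-(gamma/g_max)^4) above it,
    zero below g_min. `gamma` is an array of comoving-frame
    electron Lorentz factors."""
    gamma = np.asarray(gamma, dtype=float)
    n = np.where(gamma < g_brk,
                 (gamma / g_brk) ** (-p1),
                 (gamma / g_brk) ** (-p2)
                 * np.exp(-(gamma / g_max) ** 4))
    n[gamma < g_min] = 0.0
    return norm * n
\end{verbatim}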
Due to the odd concave upwards shape of the LAT spectra in the SED representation, the source was modeled with
two external seed photon sources, one with parameters similar to what
one would expect from a broad line region (BLR, \#1), and one similar to what one would
expect from a dust torus (\#2). Both sources were modeled as
monochromatic sources that are isotropic in the host frame
(i.e., the frame of the host galaxy and black hole).
In the host frame, the sources have energy densities $u_{seed,1}$ and $u_{seed,2}$ and dimensionless monochromatic energies $\epsilon_{seed,1}$ and $\epsilon_{seed,2}$.
The four different electron spectra derived from the modeling for the flares
are shown in Fig. \ref{fig:elecDistr}.
The electron
spectra needed to be exceptionally narrow in order to explain the LAT
spectra.
In general, we attempted to fit as many of the flares as possible with the
same parameters while only varying the electron distribution.
We succeeded in doing this for the flares A, B1, and
B2; the only difference in the model for these flares is the electron
distribution. Flares B1 and B2 differ from flare A only by having a
harder spectrum above the break. Flares B1 and B2 differ from each
other by flare B2 having a higher $\gamma^{\prime}_{min}$ than flare B1.
In this model,
this explains why, although flares B1 and B2 have similar
optical behaviour and similar flux - flux evolution (see Fig. \ref{fig:Track}),
flare B2 has lower $\gamma$-ray flux than flare B1,
as seen in Fig. \ref{mwllc}.
Flare C does not show the
concave upwards feature. This flare was modeled with only one external
seed photon source, the one representing the dust torus, and so it
represents a flare taking place outside of the BLR. It also has a
lower $B$ and $\gamma^{\prime}_{min}$ relative to the models for the other flares.
Fitting with a common set of model parameters, varying only the electron distribution, was thus possible for flares A, B1 and B2, but not for flare C.
Note that \citet{finke10_3c454} suggested that the LAT spectra of
3C~454.3 can be modeled as a combination of EC from two seed photon
sources. In that case, they used the accretion disk and BLR as their
photon sources. The unusual shape of the LAT spectra in PKS~1424$-$418\
indicates that a similar combination of seed photon sources
can be used to model this source for flares A, B1, and B2, although
in this case we use the BLR and dust torus as the sources.
We have also included a model for the accretion disk and dust torus
(dashed violet curves in Fig. \ref{SED_fig}).
We used the black hole mass estimate of \citet{fan04} ($M_{BH} \approx 4.5\times10^9\
M_\odot$) in the accretion disk model, assumed to be a Shakura-Sunyaev disk \citep{shakura73}.
There is no evidence for a blue bump in the SED, so the disk was
modeled with enough luminosity that a fraction of this could explain
the EC seed photon sources. If seed photon source \#1 represents the
BLR, and the BLR luminosity is related to the BLR radius by the
relation $L_{BLR}/(10^{45}\ \mathrm{erg}\ \mathrm{s}^{-1}) = [R_{BLR}/(10^{17}\
\mathrm{cm})]^2$ \citep{ghisellini08}, then
$L_{BLR}/L_{disk}
\approx 3\times10^{-3}$. We also include an infrared bump representing the
dust torus, based on the parameters for seed photon source \#2. In
this case we assumed a dust temperature $T_{dust}\approx 800$\ K and a
torus luminosity and radius that follow the relation
$L_{dust}/(10^{45}\ \mathrm{erg}\ \mathrm{s}^{-1}) = [R_{dust}/(2.5\times10^{18}\
\mathrm{cm})]^2$. With our models this implies
$L_{dust}/L_{disk} \approx 0.4$.
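For reference, this arithmetic is easy to reproduce (a small sketch using the quoted relations and the Table \ref{table_fit} model values; cgs units throughout):
\begin{verbatim}
L_disk = 1.0e47                        # erg/s, accretion disk (Table 1)
L_BLR  = 3e-3 * L_disk                 # from L_BLR / L_disk ~ 3e-3
R_BLR  = 1e17 * (L_BLR / 1e45) ** 0.5  # cm, L_BLR/1e45 = (R_BLR/1e17)^2
L_dust = 4.0e46                        # erg/s, torus luminosity (Table 1)
print("L_BLR  ~ %.1e erg/s, R_BLR ~ %.1e cm" % (L_BLR, R_BLR))
print("L_dust / L_disk ~ %.1f" % (L_dust / L_disk))  # ~0.4, as quoted
\end{verbatim}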
We have computed the jet powers for the model fits as well
\citep[e.g.,][]{celotti93,finke08_SSC}, assuming a two-sided jet.
For flares A, B1, and B2, the result is quite far from equipartition.
The electrons have significantly less energy than the magnetic field,
due to the narrow electron spectra needed to fit the concave upwards
LAT spectra. Flare C, on the other hand, has no such LAT spectrum,
and thus it is possible to fit it with a much broader electron
spectrum, and therefore is closer to equipartition. A black hole mass
of $4.5\times10^9\ M_\odot$ implies an Eddington luminosity of
$L_{Edd} = 5.7\times10^{47}\ \mathrm{erg}\ \mathrm{s}^{-1}$, so $L_{disk} \approx 0.2
L_{Edd}$. The jet powers of all of the models are about $P_{j,tot} =
P_{j,B}+P_{j,e}\approx 0.3 L_{Edd}$ so it appears approximately the
same amount of energy is going into the disk and jet.
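This Eddington arithmetic can be checked directly (a sketch; the standard hydrogen Eddington luminosity $L_{Edd}\approx1.26\times10^{38}\,(M_{BH}/M_\odot)$ erg s$^{-1}$ is assumed):
\begin{verbatim}
M_BH    = 4.5e9                 # black hole mass, solar masses
L_edd   = 1.26e38 * M_BH        # erg/s -> ~5.7e47
L_disk  = 1.0e47                # erg/s (Table 1)
P_j_tot = 1.3e47 + 2.2e44       # flare A: P_j,B + P_j,e (Table 1)
print("L_Edd           ~ %.1e erg/s" % L_edd)
print("L_disk / L_Edd  ~ %.2f" % (L_disk / L_edd))    # ~0.2
print("P_j,tot / L_Edd ~ %.2f" % (P_j_tot / L_edd))   # of order 0.3
\end{verbatim}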
\begin{figure*}[!ht]
\centering
\includegraphics[width=.8\textwidth]{1424_SED_08}
\caption{SEDs and model fits for the four flares detected from PKS~1424$-$418.
Simultaneous data for flare A are represented by black symbols, for flare B1 by red symbols,
for flare B2 by blue symbols and for flare C by green symbols. The dashed violet lines are the modeled spectra of the dust torus
(peaked in the infrared) and accretion disk (peaked in the optical). The solid lines are models of the total emission.
The inset shows an enlargement of the LAT spectrum to point out the peculiar
concave upwards shape of flares A, B1 and B2.
} \label{SED_fig}
\end{figure*}
\begin{figure*}[t!]
\centering
\resizebox{12cm}{!}{\rotatebox[]{0}{\includegraphics{electron_dist_02}}}
\caption{Electron spectra (broken power laws) representing the four different flaring states considered for the source. Details on the parameter values are given in Table \ref{table_fit}.}
\label{fig:elecDistr}
\end{figure*}
\section{Summary and Conclusions}\label{sec:conclu}
We have presented multiwavelength observations of PKS~1424$-$418\ during a period of 33 months (2008 August -- 2011 May) including data from Ceduna, ATCA, SMA, APEX, ATOM, \textit{Swift} and \emph{Fermi} LAT.
Throughout the overall observing period significant variability is clearly present at optical and $\gamma$-ray frequencies,
whereas only moderate variability can be noted at radio and sub-mm frequencies.
Focusing on the optical and $\gamma$-ray behaviour of PKS~1424$-$418,
four main flaring phases have been identified and analyzed in detail.
Good correlation is found between these energy bands during all periods, with
the sole exception of one flare (flare B2).
The relative lack of variability in the VLBI, ATCA and Ceduna data
suggests that either the mechanism causing the optical-$\gamma$-ray flares is not
linked to the radio emission or that there is a delay before changes
in radio emission become evident. However, the relatively sparse radio monitoring data
means that we cannot rule out radio variability on timescales shorter than the monitoring
cadence as is seen in some other blazars \citep[e.g. ][]{richards}. The
cadence of the Ceduna data is sufficient to rule out significant radio flares on $\sim 100$ day
timescales.
We complemented the variability investigations building and modeling the spectral energy distribution
for flare A, flare B1, flare B2 and flare C.
These SEDs were fitted with a leptonic model which included the SSC and EC processes.
Based on the unusual LAT spectra for these flares,
it appears that the $\gamma$-ray emission originates from
the scattering of two external seed radiation fields: the dust torus and the BLR.
In contrast to all other flare SEDs presented here, the SED of flare C is adequately modeled with only one EC component,
the dust torus, and presents a slightly lower magnetic field value.
We find that in all outburst states the prevalent source of seed photons
is consistent with a dust torus origin (with only about $4\%$ being provided by the BLR).
As noted, flare B shows a remarkably complex behaviour
with a single evident flux increase at $\gamma$ rays
coincident in time with a double structured flare in the optical band.
Similar behaviour has already been reported for the blazars \object{4C\,+38.41} \citep{raiteri12} and
\object{PKS 0208-512}. For the latter source \citet{chatterjee13} suggested that changes in the magnetic field
or in the bulk Lorentz factor could explain the absence of a $\gamma$-ray counterpart to the optical outburst.
However, the same speculations cannot be applied in the case of PKS~1424$-$418.
Examination of the SEDs and the modeling results show that flares A, B1, and B2 can be explained by varying
only the electron distribution. In particular, flares B1 and B2, which displayed different behaviour in the
optical and $\gamma$-ray bands, have approximately the same optical brightening, but flare B1 is brighter in $\gamma$ rays.
Looking at the $\gamma$-ray SEDs, though, it can be seen that the LAT spectrum is at the same level for both flare B1 and flare B2, but that flare B2 has a lower flux contribution between about 0.4 and 2 GeV.
Results of the SED modeling indicate that the value of $\gamma^{\prime}_{min}$ is higher for flare B2 than for flare B1:
this is the only difference between the two outbursts
and the only change needed to explain the difference between these two flaring states.
In conclusion, our investigation of multiple flares of PKS~1424$-$418\ shows that, at least
in some objects, major variations in the overall blazar behaviour can be explained
by changes in the flux and energy spectrum of the particles in the jet that are radiating.
In this context, detailed studies of individual blazars like PKS~1424$-$418\
constitute an important opportunity to identify and unveil the fundamental mechanisms
at work in blazar physics.
\begin{table*}[t]
\footnotesize
\caption{Model parameters for the SED shown in Fig.~\ref{SED_fig}
coincident with the following flaring intervals: flare A (2009 June 22 - July 20, MJD 55004-55032), flare B1 (2010 May 6-16, MJD 55322-55332), flare B2 (2010 May 19-25, MJD 55335-55341) and flare C (2011 April 19 - May 16, MJD 55670-55697).
The first source of seed photons that has been taken into account is
the BLR, while the second seed source is the dust torus.
$^a$\,Seed photon energies are given in units of electron rest energy.}
\label{table_fit}
\centering
\begin{tabular}{lccccc}
\hline \hline
Parameter & Symbol & Flare A & Flare B1 & Flare B2 & Flare C \\
\hline
Redshift & $z$ & 1.522 & 1.522 & 1.522 & 1.522 \\
Bulk Lorentz Factor & $\Gamma$ & 37 & 37 & 37 & 37 \\
Doppler factor & $\delta_D$ & 37 & 37 & 37 & 37 \\
Magnetic Field (G) & $B$ & 2.5 & 2.5 & 2.5 & 2.1 \\
Variability Timescale (s)& $t_v$ & 1.0$\times10^5$ & 1.0$\times10^5$ & 1.0$\times10^5$ & 1.0$\times10^5$ \\
Comoving radius of blob (cm)& $R^{\prime}_b$ & 4.4$\times$10$^{16}$ & 4.4$\times$10$^{16}$ & 4.4$\times$10$^{16}$ & 4.4$\times$10$^{16}$ \\
\hline
Low-Energy Electron Spectral Index & $p_1$ & 2.0 & 2.0 & 2.0 & 2.0 \\
High-Energy Electron Spectral Index & $p_2$ & 3.2 & 4.0 & 4.0 & 3.2 \\
Minimum Electron Lorentz Factor & $\gamma^{\prime}_{min}$ & $2.6\times10^2$ & $1.3\times10^2$ & $2.6\times10^2$ & $1.0$ \\
Break Electron Lorentz Factor & $\gamma^{\prime}_{brk}$ & $3.3\times10^2$ & $3.3\times10^2$ & $3.3\times10^2$ & $3.3\times10^2$ \\
Maximum Electron Lorentz Factor & $\gamma^{\prime}_{max}$ & $2.6\times10^3$ & $5.0\times10^3$ & $5.0\times10^3$ & $2.6\times10^3$ \\
\hline
Black hole Mass ($M_\odot)$ & $M_{BH}$ & $4.5\times10^9$ & $4.5\times10^9$ & $4.5\times10^9$ & $4.5\times10^9$ \\
Disk luminosity ($\mathrm{erg}\ \mathrm{s}^{-1}$) & $L_{disk}$ & $1.0\times10^{47}$ & $1.0\times10^{47}$ & $1.0\times10^{47}$ & $1.0\times10^{47}$ \\
Inner disk radius ($R_g$) & $R_{in}$ & $6.0$ & $6.0$ & $6.0$ & $6.0$ \\
Seed ph. source \#1 energy density ($\mathrm{erg}\ \mathrm{cm}^{-3}$) & $u_{seed,1}$ & $2.2\times10^{-3}$ & $2.2\times10^{-3}$ & $2.2\times10^{-3}$ & 0.0 \\
Seed ph. source \#1 photon energy$^a$ & $\epsilon_{seed,1}$ & $4.0\times10^{-5}$ & $4.0\times10^{-5}$ & $4.0\times10^{-5}$ & 0.0 \\
Seed ph. source \#2 energy density ($\mathrm{erg}\ \mathrm{cm}^{-3}$) & $u_{seed,2}$ & $5.5\times10^{-4}$ & $5.5\times10^{-4}$ & $5.5\times10^{-4}$ & $5.5\times10^{-4}$ \\
Seed ph. source \#2 photon energy$^a$ & $\epsilon_{seed,2}$ & $4.0\times10^{-7}$ & $4.0\times10^{-7}$ & $4.0\times10^{-7}$ & $4.0\times10^{-7}$ \\
Dust Torus luminosity ($\mathrm{erg}\ \mathrm{s}^{-1}$) & $L_{dust}$ & $4.0\times10^{46}$ & $4.0\times10^{46}$ & $4.0\times10^{46}$ & $4.0\times10^{46}$ \\
Dust Torus radius (cm) & $R_{dust}$ & $4.1\times10^{18}$ & $4.1\times10^{18}$ & $4.1\times10^{18}$ & $4.1\times10^{18}$ \\
\hline
Jet Power in Magnetic Field ($\mathrm{erg}\ \mathrm{s}^{-1}$) & $P_{j,B}$ & $1.3\times10^{47}$ & $1.2\times10^{47}$ & $1.2\times10^{47}$ & $8.8\times10^{46}$ \\
Jet Power in Electrons ($\mathrm{erg}\ \mathrm{s}^{-1}$) & $P_{j,e}$ & $2.2\times10^{44}$ & $1.4\times10^{45}$ & $7.3\times10^{44}$ & $2.1\times10^{45}$ \\
\hline
\end{tabular}
\end{table*}
\begin{acknowledgements}
The \emph{Fermi}~LAT~ Collaboration acknowledges generous ongoing support from a number
of agencies and institutes that have supported both the development and the
operation of the LAT as well as scientific data analysis. These include the
National Aeronautics and Space Administration and the Department of Energy
in the United States, the Commissariat \`a l'Energie Atomique and the Centre
National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire
et de Physique des Particules in France, the Agenzia Spaziale Italiana and the
Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish
National Space Board in Sweden.
Additional support for science analysis during the operations phase from the
following agencies is also gratefully acknowledged: the Istituto Nazionale di
Astrofisica in Italy and the K.~A.~Wallenberg Foundation in Sweden for providing
a grant in support of a Royal Swedish Academy of Sciences Research fellowship for JC.
Part of this work was supported by the German
\emph{Deut\-sche For\-schungs\-ge\-mein\-schaft, DFG\/} project
number Ts~17/2--1.
The Australian Long Baseline Array and the Australia Telescope Compact
Array are part of the Australia Telescope National Facility which is
funded by the Commonwealth of Australia for operation as a National
Facility managed by CSIRO.
The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.
This research was funded in part by NASA through {\it Fermi} Guest
Investigator grants NNH09ZDA001N and NNH10ZDA001N. This research was
supported by an appointment to the NASA Postdoctoral Program at the
Goddard Space Flight Center, administered by Oak Ridge Associated
Universities through a contract with NASA.
We thank Neil Gehrels and the \emph{Swift} team for scheduling our Target of Opportunity
requests.
This research was enabled in part through \emph{Swift} Guest Investigator
grant 6090777.
We thank Silvia Rain\`{o} for useful comments and suggestions.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Outflows from the edges of active regions have become a focus of many studies since they were noted in early observations
from Hinode \citep{Kosugi2007}. These outflows are of great interest because of their likely contribution to the slow solar wind
\citep{Sakao2007,Harra2008,Doschek2008}. Upflows in active regions are most obvious in hot spectral lines formed around 1--2\,MK \citep{DelZanna2008,Warren2011},
and occur in dark areas where the line intensities are faint, especially at the active region edges \citep{Doschek2008}.
They show bulk plasma motions on the order of tens of km s$^{-1}$ \citep{DelZanna2008}. A higher speed component, reaching hundreds of km s$^{-1}$,
is also often present in the blue wing of EUV spectral lines \citep{Bryans2010,Peter2010,Tian2011,Brooks2012}. Plasma composition measurements
and simple mass flux estimates have strengthened the idea of a connection to the slow solar wind \citep{Brooks2011,Brooks2015,Brooks2021}. Indeed different
composition signatures in the outflows may help explain variability in the slow wind \citep{Brooks2020}, and the evolution of the upflows
has even been linked to radio noise storms detected close-in to the Sun by Parker Solar Probe \citep{Harra2021}.
Linking remote sensing
and in-situ observations is a key goal of both Parker Solar Probe \citep[PSP,][]{Fox2016} and Solar Orbiter \citep{Muller2020}. Detailed recent reviews focusing
on active region outflows are given by \cite{Hinode2019} and Tian et al. (2021), while an extensive review including their contribution to the
solar wind was presented by \cite{Abbo2016}.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{fig1.pdf}
\caption{ AIA 193\,\AA\, images showing the emergence and growth of AR 12737 from 00\,UT on March 31 to 06\,UT on April 1. The white box shows the FOV
of the IRIS observations in Figure \ref{fig4}. }
\label{fig1}%
\end{figure*}
There remain several outstanding issues with our understanding of the upflows/outflows, and these have significant implications for their contribution to the slow
solar wind. In particular, it is not clear how early they form in the emergence phase of an active region,
nor how long they persist during its lifetime. Do upflows become outflows and contribute to the slow wind all the time they exist, or just for a shorter fraction
of the active region lifetime? Clearly active regions are a more significant contributor to the slow wind if 1) the upflows exist longer, and 2)
even small regions have upflows.
One of the difficulties in studying the early formation phase is that the spectroscopic instruments used to measure plasma flows around active regions
have small fields-of-view (FOV) and slow slit scanning times. This makes it challenging to catch active regions as, or soon after, they emerge. \cite{Harra2010}
present observations of emerging flux within an already developed active region that was observed by the EUV Imaging Spectrometer \citep[EIS,][]{Culhane2007}
on Hinode. The interaction between newly emerging and pre-existing opposite polarity magnetic field formed a ring of strong upflows at the active region
edge. This formed quickly - within 12 hours - but we should note that the magnetic topological conditions were very favourable since large scale upflows were already
present in the overlying developed active region. These may already have opened the field surrounding the active region. It is unclear if upflows form
as quickly in an isolated active region.
These ideas are closely connected with the upflow formation mechanism itself, which is
another unresolved issue.
Perhaps the most popular picture is that closed field loops in the active region core are opened by interchange reconnection with the open fields in its surroundings.
This can occur at quasi-separatrix layers \citep{Baker2009,Mandrini2015}, where there are strong gradients in magnetic connectivity, and may be driven by the active region
emergence and expansion \citep{Murray2010,DelZanna2011}. This process provides a mechanism to transfer the closed (solar wind-like) composition of the hot
core loops onto open magnetic fields.
Again, a key question is when and where? If emergence is the driver, this could happen quickly and low down in the atmosphere.
Conversely, it could be that even after upflow formation, the parent active region needs to expand and interact
with high lying magnetic field before the upflow opens to the heliosphere.
Some studies have shown that the upflow magnetic field is not always open \citep{Edwards2016}, while others suggest that some fraction
of the upflow mass flux also flows through connections to distant active regions \citep{Boutry2012}.
Another possibility is the direct injection of mass and energy into the upflows by chromospheric jets \citep{DePontieu2009}.
Recently \cite{Polito2020} found signatures of the upflows in chromospheric spectral lines observed by the Interface Region Imaging Spectrometer
\citep[IRIS,][]{DePontieu2014}. Several chromospheric and transition region lines showed different behavior in the upflows
compared to the cores of two active regions.
It is still unclear, however, if these signatures are evidence of the driving mechanism of the upflows operating at low heights.
It is also possible that they are revealing a chromospheric response to the changed coronal environment of open-field regions.
Based on the multi-wavelength analysis of Hinode/EIS and IRIS data, Barczynski et al. (2021) argue that at least three parallel
mechanisms generate the plasma upflow, and that these mechanisms are localized in the chromosphere, transition region, and corona.
AR 12737 emerged on the Earth-facing solar disk on 2019 March 31, during the 2nd PSP encounter, and was targeted 1--2 days later by EIS and IRIS.
From a case study of AR 12737, we will show evidence that
1) the upflows in this region are formed low down, in the early emergence phase, when it is still relatively small, and 2) once formed,
the upflows exist for the entire observed lifetime of the region.
\section{Observations} \label{sec:obs}
In this work we use several datasets from the Atmospheric Imaging Assembly \citep[AIA,][]{Lemen2012}
on board the Solar Dynamics Observatory \citep[SDO,][]{Pesnell2012}. These were downloaded via the web-based interface to the
Joint Science Operations Center (JSOC) at Stanford, are calibrated and correspond to level 1.5.
We retrieved 193\,\AA\, data for two long duration time-periods at 156\,s and 600\,s cadence, and a high cadence (12\,s)
multi-wavelength (304\,\AA, 171\,\AA, 193\,\AA, and 211\,\AA) dataset around the time of the upflow onset.
For Doppler velocity maps of the active region corona we use EIS measurements.
EIS is a dual spectrograph that observes in the 171--211\,\AA\, and 245--291\,\AA\, wavelength ranges with a
spectral resolution of 22.3\,m\AA. To account for instrumental effects (defective pixels, dark current, cosmic ray hits) we reduced
the data using the eis\_prep SolarSoftware routine. The observations we analyze are 261$''\times512''$ field-of-view (FOV) rasters of AR 12737
using the 2$''$ slit with coarse scan steps of 3$''$. The exposure time was 40\,s.
To obtain the Doppler velocity maps we fit the strong Fe XII 195.119\,\AA\, spectral line. We used a double Gaussian function to take account of
the density sensitive Fe XII blend at 195.179\,\AA. Measured line centroids were converted to velocities after first correcting for spectral
motion across the CCD, due to instrument structure temperature variations around the Hinode orbit, using the artificial neural network
model of \cite{Kamio2010}. We then calibrated the velocities to a reference wavelength obtained by averaging the top part of the FOV, and
finally removed a residual orbital variation that was present following the strategy of \cite{Brooks2020}. This last step was necessary
since the neural network model uses data from early in the mission that are less applicable to recent observations.
The accuracy is on the order of 4--5\,km s$^{-1}$, though we do not use any actual values in this study.
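The centroid-to-velocity step can be illustrated with a simplified stand-in for the pipeline just described (a sketch, not the actual EIS reduction: the two Gaussian components share one width, the blend sits at a fixed offset, and the orbital and reference-wavelength corrections are assumed to have been applied beforehand):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5      # speed of light, km/s
REST  = 195.119           # Fe XII rest wavelength, Angstrom
BLEND = 195.179           # density-sensitive Fe XII blend

def double_gauss(w, a1, c1, s, a2, bg):
    # two Gaussians sharing one width; the blend is tied to the
    # main component at a fixed wavelength offset
    off = BLEND - REST
    return (a1 * np.exp(-0.5 * ((w - c1) / s) ** 2)
            + a2 * np.exp(-0.5 * ((w - c1 - off) / s) ** 2) + bg)

def doppler_velocity(wave, spec):
    """Fit the profile and convert the centroid of the main
    component to km/s (positive = redshift)."""
    p0 = [spec.max(), wave[np.argmax(spec)], 0.03,
          0.3 * spec.max(), np.median(spec)]
    popt, _ = curve_fit(double_gauss, wave, spec, p0=p0)
    return C_KMS * (popt[1] - REST) / REST
\end{verbatim}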
\begin{figure*}
\centering
\includegraphics[viewport = 70 20 1566 492,width=1.0\textwidth]{fig2.pdf}
\caption{ EIS intensity and Doppler velocity maps of AR 12737 between April 1 and 9. The maps were derived from the
Fe {\sc xii} 195.119\,\AA\, spectral line.
The colour bar shows the velocity range. }
\label{fig2}%
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[viewport = 0 190 360 550,clip=true,width=0.50\textwidth]{fig3.pdf}
\caption{ Magnetic field model for AR 12737 on April 1. The extrapolated field lines are overlaid in red on the EIS Doppler velocity map obtained at
15:51\,UT\, and already shown in Figure \ref{fig2}. The axes show the model scale in Mm with the origin set at the active region
center. The magenta and cyan contours show the line-of-sight magnetic field for positive and negative values of $\pm$50--500\,G.
Field lines ending with a circle leave the computational box and are potentially open. }
\label{fig3}
\end{figure}
For observations of chromospheric structure and velocities we used IRIS. The IRIS instrument observes two wavelength bands in the far and
near-ultraviolet (FUV \& NUV) covering 1332--1407\,\AA\, and 2783--2835\,\AA. IRIS has a slit-jaw imager and spectrograph. Here we focus on
observations in the Si IV 1393\,\AA, C II 1335\,\AA, and Mg II 2796\,\AA\, spectral lines. These cover the transition region, upper chromosphere,
and mid to upper chromosphere, respectively. We use level-2 data, and these are processed to account for instrumental effects (dark current,
geometric effects, flat-field, orbital wavelength variation). The observations we analyze are 129$''\times126''$ FOV rasters
constructed from coarse (2$''$ step scans) at a spatial resolution of 0.33--0.4$''$. The exposure time was 15\,s.
Doppler velocity maps of the upper chromosphere and transition region were derived from spectral fits to the C II 1335\,\AA\, and Si IV 1393\,\AA\,
lines. The Si IV 1393\,\AA\, velocities are derived from the line peak of a single Gaussian fit.
The C II 1335\,\AA\, line profiles are complex and can be singly or doubly peaked. The velocities here are derived from the peak if the profile
has a single peak, but are computed from the line reversal position if
the profile is double peaked. We used the algorithm of \cite{Rathore2015} to identify the profile peaks.
We also constructed a map of the Mg II 2796\,\AA\, $k2$ asymmetry i.e. the difference in intensity between the peaks
in the red and blue wings of the line profile. A positive asymmetry can imply upflows due to increased absorption in the blue wing, while a negative
asymmetry can imply downflows due to increased absorption in the red wing.
We analyzed the Mg II spectral profiles using the iris\_get\_mg\_features\_lev2 procedure available in the IRIS branch of SolarSoftware \citep{Freeland1998}.
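A minimal stand-in for these chromospheric diagnostics is sketched below (scipy's generic peak finder replaces the dedicated algorithms cited above; the C II rest wavelength of 1335.71\,\AA\, and the prominence threshold are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

C_KMS = 2.99792458e5

def cii_velocity(wave, spec, rest=1335.71):
    """C II velocity: line peak if the profile is singly peaked,
    central reversal position if it is doubly peaked."""
    spec = np.asarray(spec, float)
    peaks, _ = find_peaks(spec, prominence=0.05 * spec.max())
    if len(peaks) >= 2:
        lo, hi = peaks[0], peaks[-1]
        pos = wave[lo + np.argmin(spec[lo:hi + 1])]  # reversal
    elif len(peaks) == 1:
        pos = wave[peaks[0]]
    else:
        return np.nan
    return C_KMS * (pos - rest) / rest

def k2_asymmetry(i_k2r, i_k2v):
    # positive when the red peak is brighter than the blue peak,
    # i.e. blue-wing absorption, suggestive of upflows
    return i_k2r - i_k2v
\end{verbatim}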
We also compare our analysis of AIA images with type III radio noise data recorded by the FIELDS \citep{Bale2016} Radio Frequency Spectrometer
\citep[RFS,][]{Pulupa2017} on PSP. The RFS obtains full Stokes parameters using low- and high-frequency receivers covering a wide range
from 10.5\,kHz to 19.17\,MHz. The usual spectral cadence is 7\,s.
We only use a subset of the RFS encounter 2 data, described in detail by \cite{Harra2021}, for illustration. We focus only on the frequency
of maximum Stokes intensity.
The data were reduced as described in \cite{Harra2021}.
\begin{figure*}
\centering
\includegraphics[viewport = 25 30 708 170,clip=true,width=1.0\textwidth]{fig4.pdf}
\caption{ IRIS Si IV 1393\,\AA\, and C II 1335\,\AA\, intensity and velocity maps of AR 12737 on April 2. The last two panels show the
Mg II 2796\,\AA\, $k3$ intensity and the asymmetry between the $k2_r$ and $k2_v$ peaks. The FOV is shown by the white box in Figure \ref{fig1}.
}
\label{fig4}%
\end{figure*}
\section{Analysis results} \label{sec:results}
\subsection{Evolution of the upflows} \label{sec:results1}
AR 12737 was a quiescent active region. Despite its emergence the GOES X-ray flux remained below B-class while
it was the sole region on disk. Figure \ref{fig1} shows AIA 193\,\AA\, images to give an overview of its emergence phase.
It appears to have evolved from a cusp-shaped loop arcade around 00\,UT on March 31. By 10\,UT it is clear that a new
active region is forming, and this development phase lasts several hours. By 20\,UT the typical structure of an active region
has formed.
In Figure \ref{fig1} we can see
a hot core loop arcade, high lying million degree loops, and dark channels from the active region edge, which often show
propagating motions in imaging data. These dark areas at the active region edge typically show upflow signatures when observed by EIS \citep{Doschek2008}.
In section \ref{sec:results2} we attempt to estimate these time periods more quantitatively.
EIS moved to observe AR 12737 from 15:51\,UT on April 1, and tracked the region across the solar disk, making a final
slit scan at 06:39\,UT on April 9. Figure \ref{fig2} shows an overview of these observations. EIS spectroscopically confirms the existence
of upflows in the dark channels on both the east and west sides of AR 12737 as early as 16\,UT on April 1.
\cite{Harra2021} used linear force-free field models to establish the global coronal structure of AR 12737, and discussed the expansion and development of the eastern upflow after April 1 in detail. They show that the area of the blue-shifted
upflow expands by a factor of 10 between April 1 and April 4, that the region is associated with large scale
magnetic field lines in their model, and that the area associated with these large scale field lines increases as the AR expands.
This seems to be driven by the expansion of closed loops to the south east of the AR,
which appear as red-shifted in the EIS velocity map of April 1 (Figure \ref{fig2} left panel).
We show an example of their magnetic field model in Figure \ref{fig3}, overlaid on an EIS Doppler velocity map to highlight the associations
between the extrapolated field lines and the active region outflows. Large scale field lines are clearly seen rooted in the positive polarities at
the base of the blue-shifted outflow on the east side (around coordinates [-30,0]). The field lines are a composite from different models and the details
are described in \cite{Harra2021}.
Figure \ref{fig2} also shows that the eastern upflow is still clearly visible on April 9. Projection effects, however, mean that we can no longer
detect blue-shifted emission in the western upflow.
IRIS scanned AR 12737 from 05:17--05:36\,UT\, on April 2 ($\sim$13.5 hours after the first EIS scan). Figure \ref{fig4} shows intensity and
velocity diagnostics from the transition region (Si IV 1393\,\AA) down through the upper (C II 1335\,\AA) and mid- to upper chromosphere (Mg II 2796\,\AA).
\cite{Polito2020} previously
reported chromospheric signatures of coronal upflows in IRIS observations of two active regions. They found that the average redshift in the Si IV 1393\,\AA\,
line was reduced, the C II 1335\,\AA\, line was slightly blueshifted, and that Mg II k2 asymmetries were present,
in the upflows compared to the active region core. Figure \ref{fig4} indicates that these characteristics are present in the eastern upflow of AR 12737.
Note, for example, the strong redshifts in the core of the region in the Si IV 1393\,\AA\, velocity map, and the reduced redshift (occasional blueshift)
to the east (second panel of Figure \ref{fig4}).
These observations provide another example of chromospheric signatures of the upflows, but they
appear to be reduced in AR 12737 compared to the active regions analyzed by \cite{Polito2020}. Visual inspection of the locations compared to
the EIS data suggests that the spatial correlation is not as clear, especially as the morphology of the upflow has already evolved in the 13 hours
between the EIS and IRIS observations.
The weaker signatures are probably due to the fact that AR 12737 was not yet
fully developed on April 2, so the spectral lines were also weaker, and the upflow region had not yet expanded to the extent observed on April 4.
These results do indicate, however,
that chromospheric signatures of the upflows have already appeared by two days into the lifetime of the active region.
\subsection{Formation of the upflows} \label{sec:results2}
We have attempted to pinpoint the transition from an emerging flux region to a formed active region with upflows using
a combination of time-slice intensity tracking, cross-correlation analysis of images, and simple visual
inspection. Without spectroscopic data of the very early emergence phase this is, of course, difficult to confirm, but here we
discuss evidence that the upflows may have formed as early as 12--16\,UT on March 31.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{fig5.pdf}
\caption{ AIA 304\,\AA, 171\,\AA, 193\,\AA, and 211\,\AA\, images showing the small eruption at 10\,UT on March 31.
The blue arrow on the 304\,\AA\, image points out the eruption from the solar east side of AR 12737.
This image is linked to a 30\, min animation in the online version of the manuscript. }
\label{fig5}%
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[viewport = 100 20 560 512,clip=true,width=0.50\textwidth]{fig6.pdf}
\caption{ Space-time analysis showing the mini-eruption in the 304\,\AA\, data. The space-time plot corresponds to the thin
blue line in Figure \ref{fig5}. The data were taken at a high cadence of 12\,s between 10--12\,UT on March 31.
The sky blue arrow points out the start of the small eruption discussed in the text.
\label{fig6} }
\end{figure}
First, while viewing AIA movies of the emerging active region, we noticed a small eruption from the east side around 10\,UT. Figure \ref{fig5}
shows multi-wavelength AIA images of the mini-eruption. The figure is linked to a 30\,min animation in the same format.
The eruption is best seen in the 304\,\AA\, images, and in temperatures below $\sim$2\,MK (formation temperature of the 211\,\AA\, band).
Prior to the eruption the region is bipolar with a closed field cusp shaped arcade -- as was shown in Figure \ref{fig1} left panel.
After the eruption, the loop arcade appears to spread open and rapidly develop. Loops appear to draw back from the location where
the upflows are later observed, and start to interact with the closed field to the south east: a process that seems to be integral to
the expansion of the upflows after April 1. Even as early as shortly after the eruption on March 31 it appears that large scale, open,
or distantly connecting long loops develop. The field is predominantly positive in the core of the AR at this time, with only weak scattered
negative flux to the north so that field lines from the south east can only connect far from the region i.e. the negative polarities that
counterbalance the positive field are not nearby. This is the picture we also get from a magnetic field extrapolation we attempted,
though unfortunately the AR is too close to the east limb for the model to be convincing and reliable - so we do not include it here.
The features of the small eruption can be seen in the space-time intensity plot of Figure \ref{fig6},
which shows high cadence (12\,s) 304\,\AA\, data obtained between 10--12\,UT\, on March 31. The plot is made by stacking intensities extracted
along the thin blue line shown in Figure \ref{fig5}. Relatively stable structures, such as the active region itself, appear as broad
horizontal trails in the plot, whereas dynamic features that move rapidly along the line appear as streaks. The sky blue arrow $P0$ points
out the streak at the start of the small eruption just after 10\,UT\, (around 60$''$ along the slice-line). The fuzzier `W' shaped streaks following $P0$ are a result of the loops
spreading open and then retracting.
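The construction of such a plot is simple (a sketch with nearest-pixel sampling along the cut; real analyses often interpolate instead, and `frames' stands for the time-ordered 304\,\AA\, image cube):
\begin{verbatim}
import numpy as np

def space_time_plot(frames, x0, y0, x1, y1, n=200):
    """Stack intensities sampled along a fixed line in every frame
    into a (distance along line) x (time) array."""
    frames = np.asarray(frames)              # shape (n_t, ny, nx)
    xs = np.linspace(x0, x1, n)
    ys = np.linspace(y0, y1, n)
    ix = np.clip(np.round(xs).astype(int), 0, frames.shape[2] - 1)
    iy = np.clip(np.round(ys).astype(int), 0, frames.shape[1] - 1)
    return frames[:, iy, ix].T               # rows: distance, cols: time
\end{verbatim}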
To quantitatively show when the active region emerged and infer when the upflows developed, we examined space-time intensity cuts of the AIA 193\,\AA\, data.
Figure \ref{fig7} shows a context image with two slice-lines overlaid in sky blue and red. We examined several slice-lines but these two illustrate the
behavior of the eastern upflow. Note that they are not optimised for the upflows on the western side. The sky blue line runs across the eastern dark lane
where EIS later detects upflows. The idea here is to try to trace back, from when we know the dark emission is associated with upflows detected by EIS,
to as early as we can in the active region development. The red line runs across the central loop arcade to try to trace the dark emission farther back
in time when AR 12737 was small, and the upflows, if formed, would be closer to the core. As the lines run across the upflow and bright core we expect
that when they are stacked in time we should see a bright trail across the space-time plot that represents the core region, and a dark trail below it
that represents the area where the upflow is later detected.
We used 193\,\AA\, data at 156\,s cadence for this analysis.
Figure \ref{fig8} shows the results. The top left panel is the space-time plot for the sky blue line in Figure \ref{fig7}. We can see the emergence of the
active region from $\sim$10--11\,UT and the development of the bright loop arcade $\sim$14\,UT\, in the center of the space-time plot, as expected.
The loop arcade further brightens and expands thereafter, trailing left to right across the space-time plot (approximately between the 91$''$ and 182$''$ tick marks on the Y-axis).
We point out a dark trail below the bright trail with the sky blue arrows $P1$ and $P2$ (crossing left to right and centered approximately on the 91$''$ tick mark).
This is the trace of the dark areas at the eastern edge of the active region where we expect upflows should be observed. To reiterate, this is simply because of the usual correlation between
low intensity and blue-shifts observed previously in many active regions \citep{Doschek2008}.
For ease of identification we have also added a dotted line in the dark trail. As discussed, EIS does indeed later detect upflows in the eastern region.
It seems that the dark region can be traced back as far as the first $P1$ arrow $\sim$16\,UT.
The top right panel shows the space-time plot for the slice-line closer in to the active region core (red in Figure \ref{fig7}).
Being closer to the location of flux emergence, this slice-line shows the appearance of the active region earlier (before 10\,UT).
Since the region is emerging at this time, any upflows that are forming are also more likely to be obscured by the growing loop arcade.
We can see this in the figure. The dark trail, again crossing left to right and here approximately centered on the 97$''$ Y-axis tick mark, brightens and
fades due to the line-of-sight interference by the bright loop arcade. Nevertheless, the
sky blue arrow, $P3$, points out that the dark trail of the eastern upflow can be traced back to around $\sim$12--13\,UT. We again added a dotted line
for ease of identification.
The middle panel of Figure \ref{fig8} shows results from our image-to-image correlation analysis. The idea here is to establish when the active region
has formed its basic structure (likely with upflows). We compute the linear Pearson correlation coefficient, $r$, between successive images for two time
periods (08\,UT March 31--08\,UT April 1 and 00\,UT March 31--00\,UT April 2) at two different cadences (156\,s and 600\,s) inside the boxed region shown in Figure \ref{fig7}.
When there are large changes between successive images, $r$ will decrease, and when there
are minimal changes, $r$ will increase. We use the higher cadence data to ensure we capture any rapid changes. As we can see from the figure, the two datasets
show essentially the same result. The correlation coefficient between images is high, $r$ above 0.95 for most of the time-series,
as we would expect given that AR 12737 was a quiescent region. Both
time series also show that flux emerged rapidly between $\sim$10--13\,UT.
Dynamic activity during emergence caused $r$ to fall as low as $\sim$0.85, but changes began to reduce after this time and $r$ returned to its typical
value by 14--16\,UT. These timescales and periods are in line with what we discerned from the AIA movies and space-time plots. AR 12737 does not
appear to alter significantly from this time until EIS observes the upflows.
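The correlation measure itself amounts to a few lines (a sketch; `frames' holds the time-ordered 193\,\AA\, cutouts inside the white box of Fig. \ref{fig7}):
\begin{verbatim}
import numpy as np

def frame_correlations(frames):
    """Linear Pearson correlation coefficient r between each pair
    of successive images."""
    flat = np.asarray(frames, float).reshape(len(frames), -1)
    return np.array([np.corrcoef(flat[k], flat[k + 1])[0, 1]
                     for k in range(len(flat) - 1)])
\end{verbatim}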
\begin{figure}[h]
\centering
\includegraphics[viewport = 90 0 550 512,clip=true,width=0.50\textwidth]{fig7.pdf}
\caption{ AIA 193\,\AA\, context for the space-time and correlation analysis, taken at 07\,UT on April 1. The
sky blue line corresponds to the space-time plot shown in the left panel of Fig. \ref{fig8}. The red line corresponds
to the space-time plot shown in the right panel of Fig. \ref{fig8}. The white box shows the region used for the
correlation analysis in the lower panel of Fig. \ref{fig8}.
\label{fig7} }
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[viewport = 0 60 600 452,clip=true,width=0.49\textwidth]{fig8a.pdf}
\includegraphics[viewport = 100 15 540 482,clip=true,width=0.49\textwidth]{fig8b.pdf}
\includegraphics[viewport = 160 0 1446 360,width=1.1\textwidth]{fig8c.pdf}
\includegraphics[viewport = 165 0 1441 360,width=1.1\textwidth]{fig8d.pdf}
\caption{ Space-time and correlation analysis.
{\it Top left}: space-time plot corresponding to the sky blue line in Fig. \ref{fig7}.
{\it Top right}: space-time plot corresponding to the red line in Fig. \ref{fig7}.
{\it Middle panel}: image-to-image correlation at 156\,s (blue dots) and 600\,s (sky blue dots)
cadence for the region shown by the white box in Figure \ref{fig7}.
The sky blue arrows in the top panels indicate features discussed in the text.
{\it Bottom panel}: RFS data showing the onset of the type III radio storm detected by PSP.
The plot shows the frequency of the peak normalized intensity (red) during the same time-interval as the correlation analysis.
Data points with frequency above 6$\times$10$^{6}$\,Hz are increased in size to visually highlight the noise storm.
\label{fig8} }
\end{figure*}
The lower panel of Figure \ref{fig8} shows data from FIELDS/RFS for the same time-period as the image-to-image correlation analysis. The data are
derived from radio Stokes $I$ intensities detected by RFS in its two highest frequency bins (18.28--19.17\,MHz). The plot shows the frequencies of
the maximum of the mode normalized (over 40\,s windows) Stokes $I$ intensities. As discussed by \cite{Harra2021}, the radio noise storm detected
at PSP appears most likely associated with the eastern upflow as it expands from April 1 to April 4. The drift to lower frequencies with time also
suggests the emission height, or lateral expansion, is increasing. Looking here at the time period closer to
active region emergence, sporadic indications of the start of the high frequency ($>$10\,MHz) radio emission are seen as early as 8--10\,UT, while the
noise storm is clearly visible from 13\,UT\, onwards. If the radio noise storm is associated with the upflows, then the RFS data independently support
the conclusion that the upflows formed in the 10--13\,UT\, period as the active region developed following the small eruption. The radio emission drift
to lower frequencies is already visible on March 31 -- April 1.
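The plotted quantity can be reproduced along these lines (a sketch; a per-window median is used here as a simple stand-in for the mode normalization, and the 40\,s windowing follows the text):
\begin{verbatim}
import numpy as np

def peak_frequencies(times, freqs, stokes_i, window=40.0):
    """Frequency of the maximum window-normalized Stokes I in
    consecutive time windows. `stokes_i` has shape (n_t, n_freq)."""
    times = np.asarray(times, float)
    stokes_i = np.asarray(stokes_i, float)
    out_t, out_f = [], []
    t0 = times[0]
    while t0 < times[-1]:
        sel = (times >= t0) & (times < t0 + window)
        if sel.any():
            spec = stokes_i[sel] / np.median(stokes_i[sel], axis=0)
            i_t, i_f = np.unravel_index(np.argmax(spec), spec.shape)
            out_t.append(times[sel][i_t])
            out_f.append(freqs[i_f])
        t0 += window
    return np.array(out_t), np.array(out_f)
\end{verbatim}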
\section{Summary and Discussion} \label{sec:summary}
We have studied the formation and lifetime of the eastern upflow from AR 12737 using EUV spectroscopic observations from EIS, NUV \& FUV spectroscopic
data from IRIS, images from AIA, and
radio data from FIELDS/RFS. Our goal was to understand where and how quickly upflows form in an active region, and how
long they might contribute to the slow solar wind during the observed lifetime.
These observations establish the following timeline.
The active region emerged and developed from a cusp shaped loop arcade from 8\,UT\, on March 31. A small eruption occurred $\sim$10\,UT, and the typical
structure of an active region with bright core loops and peripheral dark features (usually associated with upflows) formed from 12--16\,UT. Space-time
intensity plots and image-to-image correlation analysis support this picture and show that the active region did not alter appreciably until EIS observed
it at 15:50\,UT\, on April 1. At this time, EIS confirmed the presence of blue-shifted upflows associated with
large-scale field lines connecting out of the AR, from both the east and west edges, in force-free
field models (Figure \ref{fig3}). IRIS observed the region $\sim$13.5\,hours later, and detected signatures of the upflow in the chromosphere and transition
region. The active region grew and the upflow area expanded between April 1--4. The magnetic field modeling is consistent with this expansion, and there is
an associated increase in the number of large-scale (potentially open) field lines \citep{Harra2021}. The eastern upflow
was still present when the region was last observed by EIS at 06:39\,UT\, on April 9. The region was observed for 9 days.
We conclude that the upflows in AR 12737 formed early in its lifetime (no later than 32\,hours after emergence)
and persisted for as long as EIS tracked it (85\% of the observed lifetime).
Any contribution to the slow solar wind is therefore not a short lived phenomenon.
The lack of spectroscopic data within the first 32\,hours makes it difficult to confirm the exact time of upflow formation, but our analysis also
suggests that it occurred earlier. Based on the space-time and correlation analysis, the eastern upflow can be traced back to when the typical
structure of the active region was formed between 12--16\,UT\, on March 31. That is, the upflow may have formed as little as 4--8\,hours after
emergence and persisted for 95\% of the observed lifetime. The small eruption could have opened the magnetic field on the eastern side before this,
and the onset of the radio noise storm detected by FIELDS/RFS occurred at the start of this period. We should add the caveat that the upflow
might not contribute to the slow wind all the time it is observed, but magnetic modeling of the region suggests it is associated with large-scale
expanding field lines the whole time that it was tracked, so in principle the plasma flows can become outflows and escape to the heliosphere.
The evidence also suggests that the upflow formation occurs low down in the atmosphere.
The mini-eruption ejected from the base of the cusp loop arcade soon after
emergence while the active region was still small. It was also best observed in the AIA filters associated with cooler temperatures, especially
304\,\AA. Even if we only consider the spectroscopic data,
possible signatures of the upflows were observed in the chromosphere and transition region by IRIS on April 2, and the EIS data show that
the active region and upflow did not grow to their full extent until April 4. This implies that the upflow formation was well underway before AR 12737
had expanded to interact with high lying magnetic fields. The radio noise storm observed by FIELDS/RFS also showed a frequency drift that can be
interpreted as the emission height forming at lower altitudes.
Future multi-mission observations, of the earliest stages of active region emergence, will hopefully pin down more accurately some of the suggestions put forward in this article.
\acknowledgments
This study benefited from discussions at the meeting of ISSI international team 463; project title `Exploring The Solar Wind In Regions Closer Than Ever Observed Before'.
The work of DHB and HPW was funded by the NASA Hinode program. VP acknowledges support by NASA contract NNG09FA40C (IRIS) and grant no. 80NSSC21K0623. CM acknowledges
financial support from the Argentine grants PICT 2016-0221 (ANPCyT) and
UBACyT 20020170100611BA (UBA). This work is supported by the Swiss National Science Foundation - SNF.
Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners.
It is operated by these agencies in co-operation with ESA and NSC (Norway). The SDO data are courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research center and major contributions to downlink
communications funded by ESA and the Norwegian Space Centre.
We acknowledge the NASA Parker Solar Probe Mission and FIELDS team led by S.D.Bale for use of data.
The PSP/FIELDS experiment was developed and is operated under NASA contract NNN06AA01C.
\section{Introduction}
In computer-aided ECG signal analysis, many of the ECG interpretation problems are solved, but there are still doubts whether those methods are really useful for practical purposes. Big advancements were made in noise removal, heartbeat detection (QRS detectors), heart rate variability (HRV) analysis and, recently, even in some classification of ECG shapes. As algorithms become more and more powerful and precise, gaps between recent algorithmic advancements and the available testing methodology are beginning to emerge. This paper presents a short analysis of the current state of the art. A comprehensive literature review is not the goal of this paper, and the reader is advised to follow the given references on recent work for deeper information on each mentioned subject. In this paper, more attention is put on the medical literature, where new challenges are identified that go beyond the current advancements. Based on that, gaps in the testing methodology are identified and new approaches are suggested.
The goal of this paper is not to serve as a cookbook for ECG wave interpretation, nor as a recipe for ECG algorithm heuristic development, but to indicate the clinical importance of the ECG morphology recognition problem and pave the way for future research through concrete and medically plausible cases.
\section{Medical Importance of ECG Analysis}
An electrocardiogram (ECG or EKG) is a record of the bio-electric potential variation recorded over time on the body surface that represents heartbeats \cite{EXSY:EXSY444}. Every heartbeat cycle is normally characterized by a sequence of waveforms known as the P wave, the QRS complex and the T wave. The time intervals between those waveforms, as well as their shapes and orientation, represent physiological processes occurring in the heart and the autonomous nervous system. Although advanced equipment and tools are used in medical centres today for detecting heartbeat arrhythmias and other cardiovascular abnormalities, visual inspection of the multi-channel (lead) ECG record is still the first step taken by cardiologists in the diagnosis process \cite{Ghaffari:2008}.
\begin{figure}[h]
\centerline{\psfig{figure=fig_1.png,width=68.7mm} }
\caption{Normal ECG signal with marked characteristics, according to \cite{Kabir2012481:2012}}
\label{fig:sample_graph}
\end{figure}
A detailed explanation of the physiological process behind the ECG signal shape is out of the scope of this paper, but for easier understanding of the main goals of this research we give a short explanation; the reader is advised to follow the given references for detailed information.
The human heart is divided into four main chambers called atria and ventricles, both with their left and right instances. Together, those chambers form a biological pump for propelling the blood throughout the body. Besides those four obvious sections, there are other parts of the heart with very specialized functions, like dividing the atria from the ventricles, slow impulse propagation, very fast impulse propagation etc., each performing a particular task and ensuring that blood flows properly and efficiently throughout the body. When an electrical impulse propagates through the heart and all these specialized cells, the ECG electrodes pick up that impulse in various directions and at various speeds. In this way the ECG waveforms are formed \cite{Clifford:2006}, \cite{camm2009esc}. With that in mind, one can logically assume that different problems in different kinds of cells or different parts of the heart will have corresponding effects on ECG wave direction and morphology \cite {de2008basic}, and this connection will be covered in the following chapters.
Efficient and fast ECG analysis algorithms are needed in clinical practice but also in pre-hospital use cases since clinical findings indicated that there was a significant improvement in patient outcome based on this early treatment \cite{Purvis1999604}. Pre-hospital ECG is a test that may potentially influence the management of patients with acute myocardial infarction through wider, faster in-hospital utilization of re-perfusion strategies and greater usage of invasive procedures, factors that may possibly reduce short term mortality \cite{Canto1997498}.
The medical literature suggests the clinical importance of the ECG not only in identifying heart problems themselves, but also other health issues that leave a trace on the ECG as symptomatic phenomena, like ECG patterns reflecting antidepressant treatment \cite{doi:10.3109/08039489309104126}.
\section{State of the Art}
\subsection{Noise Removal}
As is the case with any other signal, pre-processing of the ECG data is an important step in the analysis \cite{Ghosh_Raychaudhuri_2007}. Filtering of the data is important because of the noise that, in the case of ECG recordings, can have several causes, like interference with other devices and signals, infrastructure noise, muscle movements under the skin where the electrodes are placed, respiratory movements or even friction between the skin and the electrodes. Since the nature of most of the mentioned noise sources is known, and their features like frequency components and periodicity are understood, those features can be used for noise reduction or even complete removal. Due to that, we have today a wide spectrum of noise removal techniques like band-pass filters \cite{manikandan2012novel}, Fourier-based analysis and transforms \cite{Darrington2009}, wavelet transformations \cite{mishra2010real}, cubic splines \cite{kabir2012denoising} etc. These methods rely on techniques like averaging and smoothing of the signal, or transforming the signal from the time domain to the frequency domain and then removing the noise frequency components. The results of these techniques reported in the literature show that noise removal in ECG signal analysis is a well-handled problem.
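As a concrete illustration of the band-pass family of methods, the following minimal Python sketch applies a zero-phase Butterworth band-pass filter to a raw trace. The 0.5--40 Hz passband and the 360 Hz sampling rate are our illustrative assumptions (360 Hz matches the MIT-BIH records), not parameters prescribed by the cited works.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ecg(ecg, fs=360.0, low=0.5, high=40.0, order=4):
    # Zero-phase band-pass: suppresses baseline wander (< low Hz) and
    # high-frequency noise (> high Hz) without phase distortion.
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ecg)
\end{verbatim}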
\subsection{QRS Detection}
Biomedical signal processing and pattern recognition literature offers a wide spectrum of solutions and approaches to QRS detection. Even early algorithms, like AZTEC from the 1960s \cite{yanowitz1974accuracy} and, later, Pan-Tompkins, resulted in very high QRS detection rates.
The basic principle of the AZTEC algorithm was inclination measurement, i.e. amplitude analysis. The decision about arrhythmia occurrence was made based on the intervals between detected heart beats. At that time there was no standardised ECG testing database, and the algorithm evaluation was performed in a long-term study on six patients and a short-term study on thirty patients. The QRS detection rate was 99\%. An interesting result is that this algorithm was capable of detecting Premature Ventricular Beats (PVB) with a 90\% detection rate and 1\% false positive PVBs. Nevertheless, it is important to mention that AZTEC had one exception built in - when major noise was detected, classification was not performed \cite{Aztec_4502549}.
The Pan-Tompkins algorithm uses several mathematical transforms to find the width, amplitude and inclination of particular ECG signal artefacts. The algorithm works in three phases - implementation of a linear digital filter, squaring of the signal to ensure that all signal segments are in the set of positive numbers, and finally thresholds and logic for QRS segment detection. The algorithm needs a learning phase, and the overall accuracy measured on the MIT-BIH Arrhythmia Database \cite{mark1988bih} was 99.3\% \cite{pan1985real}.
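A minimal sketch of the Pan-Tompkins signal-conditioning chain is given below. The 5--15 Hz passband, the 150 ms integration window and the simple fixed threshold are our approximations of the published design; the original adaptive thresholding and search-back logic are intentionally omitted.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_sketch(ecg, fs=360.0):
    nyq = 0.5 * fs
    b, a = butter(2, [5 / nyq, 15 / nyq], btype="band")
    filtered = filtfilt(b, a, ecg)               # QRS-band filter
    deriv = np.ediff1d(filtered, to_begin=0.0)   # emphasize steep slopes
    squared = deriv ** 2                         # all samples non-negative
    win = int(0.150 * fs)                        # ~150 ms integration window
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    thr = 0.5 * mwi.max()                        # stand-in for adaptive rules
    core = mwi[1:-1]
    is_peak = (core > mwi[:-2]) & (core >= mwi[2:]) & (core > thr)
    return np.flatnonzero(is_peak) + 1           # candidate QRS locations
\end{verbatim}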
In recent years we can find fast and robust modern approaches based on various classifiers like Neural Networks \cite{NN_QRS_921381} or Support Vector Machines \cite{Mehta_2008}. These approaches pre-process the signal to remove the noise and find distinctive features that are later used as variables for classification algorithms.
Various transforms like the Wavelet, Hilbert \cite{madeiro2012innovative}, Empirical Mode Decomposition \cite{pal2012empirical} and similar are proposed. These transformations map the original signal from the time domain to the frequency domain. The frequency characteristics of particular signal sections are then used to build the decision logic that identifies particular QRS segments, i.e. heartbeats.
Results reported in recent works show very high detection rates in sensitivity and positive predictivity, both more than 99.5\% \cite{zidelmal2012qrs}.
This kind of transformation alters the original ECG morphology. E.g., the Hilbert transform can be applied in the frequency domain to simply shift all positive frequency components by -90\ensuremath{^\circ} and all negative frequency components by +90\ensuremath{^\circ}. The amplitude always remains constant throughout this transformation. Applying the first derivative to the signal followed by the Hilbert transform does not detect the QRS complex by itself; it only aims to enhance the peaks and in that way ensure a higher probability of finding the QRS region by threshold-detection rules \cite{rezk2011algebraic}.
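The derivative-plus-Hilbert enhancement described above can be sketched in a few lines; the envelope it returns is what a subsequent threshold detector would scan:
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def qrs_envelope(filtered_ecg):
    d = np.gradient(filtered_ecg)  # first derivative of the filtered trace
    analytic = hilbert(d)          # analytic signal d + i*H[d]
    return np.abs(analytic)        # envelope with enhanced R-peak regions
\end{verbatim}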
Since the ECG signal is actually a time series, conventional statistical and data mining procedures are not directly applicable due to the violation of independence between the observations. This dependence, or, to be more precise, the existence of particular state transition probabilities between the observations, makes the ECG analysis problem suitable for the application of the Hidden Markov Model (HMM) statistical method. In an HMM process, the result of the previous state influences the result of the following state. This is similar to the processes in the heart, which possess the property of successive stage transitions. This makes HMM models a promising approach for ECG analysis, and notable results were achieved in QRS detection with some advancements in pathology classification \cite{HMM_6409718}.
\subsection{HRV Analysis}
Heart rate variability (HRV) analysis implies advanced analytical models based on chaos theory, statistics, entropy and other features that describe changes in the heart rate \cite{Jovic_ITI_2009_5196051}. The main motivation for this approach is the fact that pathological states of the heart muscle have impacts on the heart rhythm. As reported in the literature, a healthy heart in fact has more chaotic behaviour than a pathologically affected heart \cite{bolis1999autonomic}. As this approach is based on heart rhythm analysis, QRS detection is assumed, and for development and testing purposes annotated databases are used to calculate linear and non-linear rhythm features. Those features are then used as inputs for different machine learning algorithms, i.e. classifiers. Recent work shows significant advancements and the potential of HRV in the diagnosis of various pathologies and arrhythmias, which implies that reliable and precise QRS detection is mandatory for this kind of analysis \cite{jovic2011electrocardiogram}.
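As an illustration of the linear rhythm features mentioned above, the sketch below computes two standard time-domain HRV measures from detected R-peak positions; the feature set of any particular cited study may of course differ.
\begin{verbatim}
import numpy as np

def hrv_time_domain(r_peaks, fs=360.0):
    rr = np.diff(np.asarray(r_peaks)) / fs * 1000.0  # RR intervals in ms
    sdnn = np.std(rr, ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))       # beat-to-beat variability
    return {"SDNN": sdnn, "RMSSD": rmssd}
\end{verbatim}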
\subsection{Testing Sources}
Standardization in the field of computer aided ECG analysis originates from 1980 under the patronage of the European Commission, with the project called „Common Standards for Quantitative Electrocardiography (CSE)“ \cite{willems1990common}. Algorithms that are presented to the scientific community are evaluated with parameters that represent their ability to successfully identify QRS complexes, and this is covered by ANSI \cite{american2002cardiac}. To enable comparison of the tested algorithms, common databases are used, e.g. the MIT-BIH Arrhythmia Database; a web site where those can be found is called PhysioNet \cite{goldberger2000physiobank}. ECG records in those databases are annotated, which means that there is information about the occurrences of various ECG artefacts like normal heart beats, PVCs (Premature Ventricular Contractions), changes in signal quality etc. These databases include many beats, many pathological states and various noise occurrences, and in that way represent a good testing ground.
\subsection{Performance Measures}
Although most papers report algorithm results based on classical measures like sensitivity (Se), specificity (Sp), positive predictivity (+P), detection error rate (DER), area under the Receiver Operating Characteristic (ROC) curve (AUC) and overall accuracy (ACC) \cite{manikandan2012novel}, there are still some doubts about which measures are really good indicators of an algorithm's performance. All of these metrics rely on calculations that show how good a classifier is at detecting true positives (TP - the class detected is actually the real class) and true negatives (TN - e.g. no QRS complex is reported when there is none in the testing signal), while penalizing false positive (FP) and false negative (FN) erroneous classifications. Researchers need to be careful in reporting the performance measures to ensure that results are not influenced by the sampling rate or the \emph{a priori} probability of the considered class \cite{Darrington2009}. The performance measures are defined as
\begin{equation}
Se(\%) = 1- \dfrac{FN}{TP+FN}=\dfrac{TP}{TP+FN} * 100\%
\end{equation}
\begin{equation}
Sp(\%) = \dfrac{TN}{N}=\dfrac{TN}{TN+FP} * 100\%
\end{equation}
\begin{equation}
+P(\%) = 1- \dfrac{FP}{TP+FP}=\dfrac{TP}{TP+FP} * 100\%
\end{equation}
\begin{equation}
DER(\%) = \dfrac{FP+FN}{TP+FN} * 100\%
\end{equation}
\begin{equation}
ACC(\%) = \dfrac{TP}{TP+FP+FN} * 100\%
\end{equation}
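The sketch below is a direct transcription of the five measures above into Python; note that TN is needed only for Sp, since in beat detection true negatives are usually not defined.
\begin{verbatim}
def detection_metrics(tp, fp, fn, tn=None):
    metrics = {
        "Se":  100.0 * tp / (tp + fn),         # sensitivity
        "+P":  100.0 * tp / (tp + fp),         # positive predictivity
        "DER": 100.0 * (fp + fn) / (tp + fn),  # detection error rate
        "ACC": 100.0 * tp / (tp + fp + fn),    # overall accuracy
    }
    if tn is not None:
        metrics["Sp"] = 100.0 * tn / (tn + fp) # specificity
    return metrics
\end{verbatim}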
In spite of the apparently clear testing methodology, there is still a problem with reporting results and comparing the performance of different algorithms. Some authors test their algorithms on some of the records and not on the whole databases, and often authors exclude some parts of the signal, e.g. ventricular flutter episodes or parts with high noise like in record 207 from the MIT-BIH Arrhythmia Database \cite{manikandan2012novel}.
\subsection{Recent Approaches}
Recent approaches to more robust and more realistic tests include class-based and subject-based testing. In class-based testing, all records are used for training of the classifier. The drawback of this approach is that it is not realistic: when it comes to prediction, the waveforms that should be predicted are from a patient (record) that was used for training of the classifier. A newer and more realistic evaluation method, proposed in only a few papers, is subject-based evaluation. In subject-based testing, whole records are excluded from the training set, and predictions for a particular record are made based on the other records from the database. This situation simulates the real-world scenario in which a software application that implements a classifier faces a new patient it has never "seen" before. Results in subject-based testing are always lower than in class-based testing \cite{ye2012heartbeat}.
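A subject-based protocol can be sketched as a leave-one-record-out loop over beat-level data; the mask-based interface below is only one possible way to organize it:
\begin{verbatim}
import numpy as np

def subject_based_splits(record_ids):
    # record_ids: one record label per beat in the dataset
    for held_out in np.unique(record_ids):
        test_mask = (record_ids == held_out)
        yield ~test_mask, test_mask  # train on all other records
\end{verbatim}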
A problem with these approaches is that researchers often use a limited number of classes, while the variety of ECG morphology shapes in real life is much higher. However, ANSI/AAMI defined a five-class division, so evaluations done in that way actually follow a standard \cite{american2002cardiac}.
\subsection{Bullseye Testing}
Since the ECG signal can have numerous varieties, here we suggest a new kind of test learned from the somewhat similar discipline of shape recognition - the bullseye test. In shape recognition, bullseye testing can be used to evaluate how good a classifier is at finding similar shapes in the testing database. The bullseye test can be used for supervised and for unsupervised learning procedures.
In supervised learning, when the testing database includes an annotation or a class for each shape, every shape is matched against all shapes in the database. For example, if the given dataset includes 20 instances of each class, we use the proposed matching algorithm to identify e.g. the top 40 matches (double the number of class instances) and discard the others. Among these 40 we count the correct matches for each class in question. The accuracy of shape retrieval is the ratio of the number of correct hits to the highest possible number of correct hits \cite{donoser2010efficient}, \cite{lin2008efficient}.
In unsupervised learning, e.g. when the database is very big and not all classes are identified in the training or testing dataset, the bullseye test can be performed to find the most similar shapes; visual inspection can then be made and the results reported as a graphic \cite{kontschieder2010beyond}.
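For the supervised variant described above, a minimal scoring routine can be written as follows; the NxN dissimilarity matrix dist is assumed to come from whatever shape matcher is under evaluation:
\begin{verbatim}
import numpy as np

def bullseye_score(dist, labels, retrieve_factor=2):
    labels = np.asarray(labels)
    hits, possible = 0, 0
    for i in range(len(labels)):
        class_size = int(np.sum(labels == labels[i]))
        top = np.argsort(dist[i])[: retrieve_factor * class_size]
        hits += int(np.sum(labels[top] == labels[i]))
        possible += class_size
    return 100.0 * hits / possible  # retrieval accuracy in percent
\end{verbatim}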
\begin{figure}[h]
\centerline{\psfig{figure=fig_2.png,width=50mm} }
\caption{Example of the ECG waves from record 207, MIT-BIH AD database where different wave morphologies share same annotation}
\label{fig:sample_graph}
\end{figure}
Since a similar problem exists in many ECG databases, i.e. not all ECG segments are annotated or the annotations are not detailed enough, the bullseye test could be performed and the results visually reported. In that way one could see how various morphologies are matched; e.g. PVCs of similar shape and orientation should be grouped together. If the algorithm under test groups visually similar PVC beats together, then that could mean that the algorithm really distinguishes morphologies correctly and that the decision about the class in question is not made only on morphology-invariant features like entropy, duration, frequency components etc. Figure 2 shows an excerpt of the MIT-BIH Arrhythmia Database where one annotation in fact covers two different morphologies.
The results of the proposed bullseye test of ECG wave morphology classification are shown in Figure 3.
\begin{figure}[h]
\centerline{\psfig{figure=fig_3.png,width=68.7mm} }
\caption{Exemplar results of the proposed bullseye test. Upper row shows the best three matches for one morphology and the second row for other morphology of the same annotation}
\label{fig:sample_graph}
\end{figure}
\section{Challenges and Gaps}
Despite very advanced and fast heartbeat detectors and even recent advances in ECG morphology analysis, there is limited use of those algorithms in clinical practice. To understand why this is so, besides all possible business-related issues and medical clearances that must be satisfied, here we try to identify the medicine-related issues.
\subsection{Morphology Considerations}
To understand what kind of algorithms are needed for efficient clinical practice in ECG analysis and diagnosis, one must first understand what is considered to be normal heart rhythm. Normal heart rhythm, i.e. behaviour, according to the medical literature, must satisfy all of the following four criteria \cite{Najeeb}:
1) Heart Rate: Heart rate should be between 60 and 100 beats per minute \cite{de2008basic}, \cite{taylor2008150}, \cite{Gacek2012}.
2) Origin: Origin of the particular beat i.e. electrical impulse must be in the SA node \cite{Clifford:2006}, \cite{taylor2008150}, \cite{Gacek2012}.
3) Pathway: Impulse must propagate throughout the normal conducting pathway \cite{Clifford:2006}.
4) Speed: Impulse must propagate at the normal speed (i.e. speeds) \cite{Clifford:2006}.
Here we can see that heart rate is just one of the four criteria for identifying normal heart behaviour. Of course, as explained earlier, significant advancements have been achieved in identifying heart work problems from heart rate variability analysis, but the other three criteria have more impact on the ECG wave morphology than on the heart rate itself. Considering the underlying ECG mechanism explained earlier, we can realize that if the second criterion is not met, then the QRS complex will not follow exactly one P wave, or P waves resulting from other myocardial cells, e.g. an ectopic myocardial cell, will not be normally periodic, or the P wave could even be reversed due to an abnormal impulse propagation direction [52]. If the electric impulse is not propagating throughout the heart along the normal conducting pathway, then the ECG signal can show a short P-R segment due to the lack of the AV node pause, or various morphology deviations can be symptoms of abnormal ventricle contraction due to the bypass or skip of the fast Purkinje system and propagation through the much slower myocardial cells \cite{Clifford:2006}. This is an example of how an abnormal conducting pathway can in turn lead to an abnormal speed of signal propagation. Often in heart work problems, one issue can cause cascading problems resulting from collateral damage that just multiplies the abnormal behaviour with each following beat. This is why it is very important to make a correct and early diagnosis of the problem.
\subsection{P Wave}
P wave detection and classification is a problem because of the wave's small amplitude and its attenuation due to filtering of the signal. However, the P wave is an important component of the clinical ECG diagnosis process, since it can indicate various atrial problems, not just by its frequency of occurrence \cite{hampton2003150}, [53] but also by its shape \cite{de2008basic}, \cite{hampton2003150}. Besides frequency and morphology, the P wave direction can also indicate pathological states like dextrocardia \cite{hampton2003150}.
\subsection{T Wave}
T wave morphology changes are also very important in the diagnosis of pathologic states. A different shape of the T wave can indicate a problem with beat origin or a re-polarization issue due to a branch block. A normal or abnormal T wave shape, depending on the visibility of the P wave and the narrowness of the QRS, can indicate heart block problems, and an inverted T wave can indicate, among other problems, e.g. right ventricular hypertrophy \cite{hampton2003150}.
\subsection{Multi-Lead Analysis}
Based on the morphological features of the ECG, and keeping in mind the underlying electro-mechanics in 3D space, an experienced cardiologist can identify not just the problem in the heart muscle but also the approximate region of the heart that is influenced by the problem, e.g. an ischemic region, and even locate the source of the problem, e.g. the approximate source of an ectopic beat. In that way, the ECG can be used as a low-cost and fast tool for beginning the diagnosis process.
Multi-lead analysis is crucial for a wide range of pathology identifications \cite{hampton2003150} and serious work in this direction is just appearing in the literature \cite{ye2012heartbeat}, \cite{Chang20123165}.
\subsection{ST Segment}
Acute coronary syndrome (ACS) is a significant health problem in industrialized countries and is becoming an increasingly significant problem in developing countries. ACS is a clinical syndrome defined by characteristic symptoms of myocardial ischemia in association with ECG ST-segment morphology changes (elevation or depression) indicative of the occlusion of a major epicardial coronary artery \cite{Kushner2013178}.
\subsection{Individual Adjustments}
Another challenge in computer aided ECG analysis is the fact that the ECG is considered a biometric characteristic, which means that every person has an individual ECG signature. Although most people have similar ECG manifestations, there is a part of the population with significant deviations in the normal ECG or in pathological states \cite{douglas2006temporal}. Algorithms that could cover those cases should be capable of learning what is in fact the normal wave morphology for each particular patient and then report identified misalignments if they occur.
\subsection{Annotations and Testing}
The problem with comparing different approaches and algorithms in ECG analysis derives from the fact that algorithms' capabilities have evolved beyond the QRS detection problem, and even more powerful and precise algorithms are expected to arise in order to address the above mentioned challenges. Although the group of researchers gathered around PhysioNet has made a lot of software components available for testing newly created algorithms \cite{goldberger2000physiobank}, recent work identifies gaps between annotation standards \cite{Clifford:2006}. Considering the various upcoming challenges described earlier in this paper, a new meta-model of annotations and e.g. online testing components that could unify various databases are welcome. Furthermore, the authors of this paper think that new testing standards should be developed to meet the upcoming challenges, so that researchers and their results can come closer to real clinical use and benefits for patients, which should be the ultimate goal of the work in this area.
\section{Future Work}
Based on the state-of-the-art literature review and the medical literature inspection, we can conclude that future work in this field should be focused on efforts to further standardize annotations and testing principles and methodologies. Considering algorithmic efforts, further advancements should be made towards algorithms that can correctly classify ECG wave morphologies and towards multi-lead analysis and decision modules, to get nearer to clinical use of the proposed systems. Supervised learning algorithms that can correlate a particular wave morphology to a pathological state are welcome, as are unsupervised learning algorithms that could adapt to individual patients and detect outlying waves even if the particular morphology is not associated with a pathology in the knowledge base. Considering the latest advancements in HRV analysis, which is based on QRS detection, one can imagine the potential benefits and research opportunities if more powerful morphology classification algorithms are developed. In that way, researchers could utilize their findings from chaos theory and non-linear systems to conduct ECG morphology variability (EMV) analysis. All these findings could lead to reliable mobile and tele-medical solutions whose prototypes are already developed but with very limited analysis capabilities \cite{vrcek2007integrated}, \cite{5503907}, \cite{Batistatos20121140}, \cite{Naikc_5546468}.
\section{Conclusion}
Despite growing technology in cardiology, the electrocardiogram (ECG) remains a cheap and quick tool for beginning the identification of potentially lethal cardiac pathology. ECG analysis involves, besides the recognition of the basic waves and complexes (P, QRS and T, sometimes U), also their duration, configuration and orientation (positive or negative) and the formation of the parts between complexes and waves (for example the ST segment, which is crucial in the identification of acute myocardial infarction). The complexity of the ECG curve challenges computer programmers in finding the best mathematical model to describe what exactly happens in the cardiac cycle, and that is the reason why we still do not have a software solution for ECG analysis that is comparable to clinical decision making. Any new model for describing the ECG curve is invaluable for the future development of better software for ECG analysis, which is used not only in ECG machines but in many cardiology devices (for example monitors and pacemakers), and for better understanding of the ECG in various clinical conditions. Also, new testing methods and records annotated in more detail, which will follow the algorithmic advancements, are welcome.
\bibliographystyle{IEEEtran}
\section{Introduction}
A fundamental postulate of quantum theory is that the Hamiltonian of an isolated system is Hermitian. This Hermiticity ensures real eigenvalues and a coherent, unitary time evolution for the system. This conventional wisdom was upended two decades ago by Carl Bender and co-workers, who showed that a non-Hermitian Hamiltonian with parity-time ($\mathcal{PT}$) symmetry can exhibit entirely real spectra~\cite{CDH02,A02,A10,CHS+15}. Over time, it has become clear that non-Hermitian Hamiltonians with $\mathcal{PT}$ symmetry can provide an effective description for systems
with balanced, spatially separated gain and loss~\cite{JTSV13}. This concept has been extensively, and fruitfully, explored in classical (wave) systems where the number of energy quanta is much larger than one~\cite{CKR+10,alois2012,feng2014,BSF+14,CJH+14,BLD+14,WKP+16,AYF17,LRL17,HAS+17}. A $\mathcal{PT}$-symmetric system is described by an effective, non-Hermitian Hamiltonian $H_\mathcal{PT}$ that is invariant under the combined parity and time-reversal operation~\cite{CS98}. As the gain-loss strength is increased, the spectrum of $H_\mathcal{PT}$ changes from real into complex conjugate pairs, and the corresponding eigenvectors cease to be eigenvectors of the $\mathcal{PT}$ operator. This $\mathcal{PT}$-symmetry-breaking transition occurs at an exceptional point (EP) of order $n$ (EP$n$), where $n$ eigenvalues, as well as their corresponding eigenvectors, coalesce~\cite{kato56,W12,ORN+19}. The $\mathcal{PT}$ transition and the non-unitary time evolution generated by $H_\mathcal{PT}$ have been observed in classical systems with EP$2$~\cite{CKR+10,alois2012,feng2014,BSF+14,CJH+14,BLD+14,WKP+16,AYF17,LRL17,POR+14,ZPO+18,MMC+19,MMC+20,JMC+19,HLL+19,MOB+15}, EP$3$~\cite{HAS+17}, and higher order EPs~\cite{XLKA19,JOL+17}.
Due to the quantum limit on noise in linear (gain) amplifiers~\cite{Caves1982}, creating a photonic system with balanced gain and loss in the quantum domain is not possible~\cite{Scheel2018}. However, the EP degeneracies also occur in dissipative systems with mode-selective losses. Such passive $\mathcal{PT}$-symmetric systems have been realized in the quantum domain with lossy single photons~\cite{LXZ+17,pxdqpt,pxchern,XWZ+19,BWZ+20,ZXB+17,XDW+20,Klauck2019}, ultracold atoms~\cite{Luo19}, and a superconducting transmon~\cite{NAJM19}. These realizations are limited to effective two-dimensional Hamiltonians with second-order EPs, and their quantum information studies are confined to global properties~\cite{XWZ+19}. Here we present an experimental quantum simulation of entropy dynamics in a four-dimensional, passive $\mathcal{PT}$-symmetric system with an EP$4$.
\section{Implementing $\mathcal{PT}$-symmetric qudit with an EP$4$}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure1}
\caption{Experimental setup. (a) Illustration of a $4$-mode $\mathcal{PT}$-symmetric qudit. (b) Schematic of the optical circuit used for simulating dynamics in the four-mode, passive $\mathcal{PT}$-symmetric system. Heralded single photons are generated via spontaneous parametric down-conversion and prepared in arbitrary qudit states using a polarizing beam splitter (PBS), wave plates with certain setting angles and a beam displacer (BD). The coherent, lossy, non-unitary time evolution is realized by BDs, HWPs, and sandwich-type QWP-HWP-QWP setups, along with single-photon loss. For detection, either projective measurements or quantum-state tomography is performed, depending on the purpose; both are realized by a PBS and (or) wave plates and a BD. Avalanche photodiodes (APDs) detect the signal and heralding photons.}
\label{fig:1}
\vspace{-3mm}
\end{figure*}
Let us consider an open, four-mode system described by a $4\times 4$ Hamiltonian
\begin{equation}
\label{eq:hpt}
H_\mathcal{PT}=-JS_x+i\gamma S_z,
\end{equation}
where $S_x$ and $S_z$ are spin-$3/2$ representations of the SU(2) group. It can be written in the matrix form as
\begin{equation}
\label{eq:hpt2}
H_\mathcal{PT}=\frac{1}{2}
\begin{pmatrix}
3i\gamma & -\sqrt{3}J & 0 & 0 \\
-\sqrt{3}J & i\gamma & -2J & 0 \\
0 & -2J & -i\gamma & -\sqrt{3}J\\
0 & 0 & -\sqrt{3}J & -3i\gamma
\end{pmatrix}
\end{equation}
in the computational basis $\{|1\rangle,|2\rangle,|3\rangle,|4\rangle\}$, and represents a $\mathcal{PT}$-symmetric qudit with $d=4$. The Hamiltonian $H_\mathcal{PT}$ commutes with the antilinear $\mathcal{PT}$ operator, where the parity operator is $\mathcal{P}=\mathrm{antidiag}(1, 1, 1, 1)$ and the time-reversal operator is given by complex conjugation, $\mathcal{T}=*$. It follows from Eq.~(\ref{eq:hpt2}) that the first two computational modes represent the ``gain sector'' and the last two represent the ``loss sector'' of the system. The four equally spaced eigenvalues of $H_\mathcal{PT}$ are given by $\lambda_k=\{-3/2,-1/2,+1/2,+3/2\}\sqrt{J^2-\gamma^2}$ ($k=1,2,3,4$), which give rise to an EP$4$ at the $\mathcal{PT}$-breaking threshold $\gamma=J$. The advantage of choosing Hamiltonian (\ref{eq:hpt}) is that it can be easily generalized to an arbitrary-dimensional system, where it still remains analytically solvable and has an EP with order equal to the system dimension~\cite{HAS+17,EUH+08,QJ2019}. Since $H_\mathcal{PT}$ has a single energy gap $\Delta=\sqrt{J^2-\gamma^2}$, it follows that the $\mathcal{PT}$-symmetric qudit exhibits sinusoidal dynamics in the $\mathcal{PT}$-symmetry unbroken region ($\gamma<J$), and monotonic, exponential growth in the $\mathcal{PT}$-broken region ($\gamma>J$).
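As a numerical cross-check (independent of the optical implementation), the following Python sketch builds $H_\mathcal{PT}$ from the spin-$3/2$ matrices in units of $J=1$ and verifies the eigenvalue structure, including the coalescence at $\gamma=J$:
\begin{verbatim}
import numpy as np

J = 1.0
s3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, s3, 0, 0], [s3, 0, 2, 0],
                     [0, 2, 0, s3], [0, 0, s3, 0]])
Sz = 0.5 * np.diag([3.0, 1.0, -1.0, -3.0])

def H_PT(gamma):
    return -J * Sx + 1j * gamma * Sz

for gamma in (0.0, 0.2, 1.0):
    evals = np.sort_complex(np.linalg.eigvals(H_PT(gamma)))
    # expected: {-3/2,-1/2,+1/2,+3/2} * sqrt(J**2 - gamma**2);
    # at gamma = J all four eigenvalues coalesce at zero (EP4)
    print(gamma, np.round(evals, 6))
\end{verbatim}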
The coherent, non-unitary time evolution operator for the system is given by $U(t)=\exp(-i H_\mathcal{PT}t)$, where we have set $\hbar=1$. For $\gamma=0$, the system is Hermitian and the fermionic nature of the spin-$3/2$ representation is manifest in the anti-periodicity of $U$, i.e., $U(T)=-\mathbb{I}_4$ where $T(0)=2\pi/J$ for $\gamma=0$. In this case, the mode occupations $P_k(t)=|\langle k |\psi(t)\rangle|^2$ of the four modes obey a shifted mirror symmetry with $P_k(t)=P_{5-k}(t+T/2)$, which indicates a perfect state transfer occurring from mode $k$ to mode $(5-k)$ at $T/2$. Here $|\psi(t)\rangle=U(t)|\psi(0)\rangle$ is the time-evolved state. For $\gamma < J$, the system is in the $\mathcal{PT}$-symmetry unbroken region and the dynamical evolution is anti-periodic with period $T(\gamma)=2\pi/\Delta$. At the EP$4$ ($\gamma= J$), $U(t)$ ceases to be periodic and has an operator norm that grows as $t^6$, reflecting the fourth order of the EP. In the $\mathcal{PT}$-symmetry broken region, the mode occupations grow exponentially with time. However, the quantum information metrics, such as the von Neumann entropy, are defined with respect to the instantaneously normalized state (indicating post-selection that eliminates the quantum jumps~\cite{NAJM19,Klauck2019,QJ2019}). Therefore, at the EP and in the $\mathcal{PT}$-broken region, these quantities reach a steady-state value. These results are applicable to all finite-dimensional representations of the $SU(2)$ group.
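The dynamics discussed above can be reproduced numerically with a few lines (a sketch, not the experimental optics), reusing H_PT from the previous snippet:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

psi0 = 0.5 * np.ones(4, dtype=complex)      # fully symmetric initial state

def occupations(gamma, times):
    H = H_PT(gamma)
    P = np.empty((len(times), 4))
    for n, t in enumerate(times):
        psi = expm(-1j * H * t) @ psi0      # coherent, non-unitary evolution
        P[n] = np.abs(psi) ** 2             # trace not preserved if gamma > 0
    return P

# one anti-period in the unbroken phase, gamma = 0.2 (J = 1)
ts = np.linspace(0.0, 2 * np.pi / np.sqrt(1.0 - 0.2**2), 200)
P_unbroken = occupations(0.2, ts)
\end{verbatim}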
The four-dimensional Hamiltonian $H_\mathcal{PT}$ is particularly interesting because it can be viewed as a system of two interacting, non-Hermitian qubits. This mapping is provided by the identities $2S_x=\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sqrt{3}\mathbb{I}_2\otimes\sigma_x$, $2S_z=2\sigma_z\otimes\mathbb{I}_2+\mathbb{I}_2\otimes\sigma_z$, and $\mathcal{P}=\sigma_x\otimes\sigma_x$, where $\sigma_k$ ($k=x,y,z$) are the standard Pauli matrices. Using this insight, we investigate the quantum information dynamics in the gain and loss subsystems of the $\mathcal{PT}$-symmetric qudit.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure2}
\caption{Scaled mode occupations of a $\mathcal{PT}$-symmetric system. (a) In the Hermitian limit with $\gamma=0$, mode occupation numbers are periodic with period $T=2\pi/J$, and perfect state transfer occurs from mode $k$ to mode $5-k$ at time $T/2$. (b) In the $\mathcal{PT}$-unbroken phase with $\gamma=0.2J$, the occupation dynamics have period $T(\gamma)$, and the total weights exceed unity indicating a non-trace-preserving time evolution. (c) At $\gamma=J$, the scaled mode occupation grows as $t^6$ with time indicating that the exceptional point is of order four. (d) In the $\mathcal{PT}$-broken phase with $\gamma=1.2J$, the scaled mode occupation grows exponentially with time. Time is measured in units of $1/J$. Symbols: data; lines: theory. Error bars are due to statistical uncertainty and are obtained assuming Poisson statistics. When not shown, error bars are smaller than the symbol size.}
\label{fig:2}
\vspace{-3mm}
\end{figure}
We encode the four modes of the qudit in the spatial and polarization degrees of freedom of a single photon, and label them as $|1\rangle=|UH\rangle,|2\rangle=|UV\rangle,|3\rangle=|DH\rangle,|4\rangle=|DV\rangle$. Here $\{|H\rangle,|V\rangle\}$ are the horizontal and vertical polarizations, and $\{|U\rangle,|D\rangle\}$ denote the upper and lower paths, which undergo gain and loss respectively (Fig.~\ref{fig:1}(a)). As illustrated in Fig.~\ref{fig:1}(b), pairs of single photons are generated via type-I spontaneous parametric down conversion (SPDC) using a non-linear $\beta$-Barium-Borate (BBO) crystal. One photon serves as a trigger and the other signal photon is prepared in an arbitrary qudit state using a polarizing beam splitter (PBS), wave plates with certain setting angles and a beam displacer (BD).
By mapping the $\mathcal{PT}$-symmetric Hamiltonian $H_{\mathcal{PT}}$ into a passive $\mathcal{PT}$-symmetric one with mode-selective losses $H_\mathcal{L}=H_\mathcal{PT}-3i\gamma\mathbb{I}_4/2$, we implement the $4\times4$ lossy, time-evolution operator
\begin{equation}
U_\mathcal{L}(t)=\exp(-iH_{\mathcal{L}}t)
\end{equation}
via a lossy linear optical circuit, which is related to $U(t)$ through $U(t)= U_\mathcal{L}(t)\exp(3\gamma t/2)$~\cite{LXZ+17}. The evolution operator $U_\mathcal{L}(t)$ is realized by BDs, half-wave plates (HWPs), and sandwich-type QWP-HWP-QWP setups, where QWP is an abbreviation for quarter-wave plate.
We experimentally obtain the scaled mode occupations $P_k(t)$ by projecting the time-evolved state $|\psi(t)\rangle$ onto $|k\rangle$. The initial state is chosen to be $|\psi(0)\rangle=(|1\rangle+|2\rangle+|3\rangle+|4\rangle)/2$. The projective measurement and the quantum state tomography on the qudit state are realized by BDs, wave plates and a PBS followed by avalanche photodiodes (APDs). Only coincidences between the heralded signal and trigger photons are registered. The perfect state transfer for $\gamma=0$ is confirmed by the transfer of occupation from the first mode to the fourth mode (Fig.~\ref{fig:2}(a)). In the $\mathcal{PT}$-unbroken phase with a finite $\gamma=0.2J$, there is no perfect state transfer at time $T(\gamma)/2$ due to the non-unitary dynamics (Fig.~\ref{fig:2}(b)). The measured occupations are, however, periodic in time with a period $T(\gamma)$.
At the EP$4$ with $\gamma=J$, the scaled mode occupation $P_k(t)$ grows algebraically with time as $t^6$ (Fig.~\ref{fig:2}(c)). Such a scaling is dictated by the order of the EP. At $\gamma=J$, the Hamiltonian obeys $H^{4}_\mathcal{PT}(\gamma=J)=0$ and the power-series expansion of $U(t)$ terminates at the third order, giving rise to the $t^6$ dependence of the occupation numbers. By projecting the time-evolved state onto $|k\rangle$, we obtain the occupation at the EP$4$ and its power-law behaviour (Fig.~\ref{fig:2}(c)). In the $\mathcal{PT}$-broken phase, the scaled mode occupation grows exponentially with time, as expected (Fig.~\ref{fig:2}(d)). We note that while the simulation time range covers two periods for $\gamma<J$, we restrict to $0\leq t\leq 4.5$ at the EP$4$ and in the broken $\mathcal{PT}$ region due to the rapid growth of the scaled mode occupation.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure3}
\caption{Eigenvalues of the perturbed Hamiltonian. The real (a) and imaginary (b) parts of the eigenvalues of the perturbed Hamiltonian $H_\delta=H_\mathcal{PT}(\gamma=J)+iJ\delta|1\rangle\langle 1|$, measured in units of $J$, scale as $\delta^{1/4}$, showing the enhanced sensitivity near the EP$4$. The initial states are eigenstates $|\psi_\delta\rangle$ of $H_\delta$. Experimental errors are calculated via Monte Carlo method; when not shown, error bars are smaller than the symbol size.}
\label{fig:S2}
\vspace{-3mm}
\end{figure}
When the $\mathcal{PT}$-symmetric Hamiltonian is perturbed from the EP$4$ by a small detuning $\delta$, the resulting complex eigenvalues in the vicinity of EP$n$ are given by a Puiseux series in $\delta^{1/n}$~\cite{EUH+08}, indicating enhanced classical sensitivity proportional to the order of the EP~\cite{HAS+17,WSG+17}. In addition to the behavior of the mode occupations at the EP, this serves as a complementary check of the order of the EP. To that end, we experimentally measure the complex eigenvalues of the perturbed Hamiltonian $H_\delta=H_\mathcal{PT}(\gamma=J)+iJ\delta|1\rangle\langle 1|$. Figure~\ref{fig:S2} shows that the real and imaginary parts of the eigenvalues of $H_\delta$ indeed scale as $\delta^{1/4}$, consistent with the EP$4$ that occurs at $\gamma=J$.
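The $\delta^{1/4}$ scaling can likewise be confirmed numerically; in the sketch below (again reusing H_PT from the earlier snippet, with $J=1$) the ratio $|\lambda|/\delta^{1/4}$ approaches a constant for small detunings:
\begin{verbatim}
import numpy as np

e11 = np.zeros((4, 4), dtype=complex)
e11[0, 0] = 1.0                              # |1><1| projector

for delta in (1e-4, 1e-3, 1e-2):
    H_delta = H_PT(1.0) + 1j * delta * e11   # perturbation away from the EP4
    lam = np.max(np.abs(np.linalg.eigvals(H_delta)))
    print(delta, lam, lam / delta**0.25)     # last column ~ constant
\end{verbatim}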
\section{Observing information dynamics}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure4}
\caption{Quantum information dynamics of the $\mathcal{PT}$-symmetric qudit. (a) The time-dependent entropy $S(t)$ for a qudit with pure initial state $|\psi(0)\rangle$ remains zero for any non-Hermiticity. (d) With a mixed initial state, $S(t)$ is constant when $\gamma=0$, and oscillates in the $\mathcal{PT}$-unbroken phase ($\gamma=0.2J$) with period $T(\gamma)$. At the EP$4$ ($\gamma=J$) and in the broken $\mathcal{PT}$ region, $S(t)$ reaches zero because the system approaches a pure state that is determined by the sole eigenstate at the EP or the mode with maximum amplification. In contrast, the subsystem entropies $S_\mathrm{Gain}(t)$ and $S_\mathrm{Loss}(t)$ show qualitatively similar behavior for pure (b)-(c) and mixed (e)-(f) initial qudit states. In each case, the entropies show oscillatory behavior for $\gamma<J$ and steady-state behavior for $\gamma\geq J$. We restrict to $0\leq t\leq 4.5$ due to the rapid growth of the scaled mode occupation at the EP$4$ and in the broken $\mathcal{PT}$ region. Experimental errors are estimated via the Monte Carlo method; when not shown, error bars are smaller than the symbol size.}
\label{fig:3}
\vspace{-3mm}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure5}
\caption{Reduced, instantaneously normalized density matrices $\tilde\rho_\mathrm{Gain}(t)$ (a)-(d) and $\tilde\rho_\mathrm{Loss}(t)$ (e)-(h) trace different paths inside the Bloch sphere. Experimental results are represented by colored symbols and their theoretical predictions are represented by dashed curves.}
\label{fig:4}
\vspace{-3mm}
\end{figure*}
A crucial aspect of the dynamics of a high-dimensional $\mathcal{PT}$-symmetric system is the flow of information among its different parts, and the information retrieval phenomenon between the whole system and its environment. To that end, we consider the qudit entropy
\begin{equation}
S(t)=-\text{Tr}\left[\tilde{\rho}(t)\log_2\tilde{\rho}(t)\right],
\end{equation}
where $\tilde{\rho}(t)=\rho(t)/\text{Tr}\left[\rho(t)\right]$ is the instantaneously normalized density matrix and $\rho(t)=U(t)\rho(0)U^\dagger(t)$ is the time-evolved density matrix of the system with a time-dependent trace. The gain- and loss-sector entropies are $S_\mathrm{Gain}(t)$ and $S_\mathrm{Loss}(t)$, respectively. These are obtained from the gain- and loss-sector reduced density matrices $\rho_\mathrm{Gain}(t)=\text{Tr}_{3,4}\left[\rho(t)\right]$ and $\rho_\mathrm{Loss}(t)=\text{Tr}_{1,2}\left[\rho(t)\right]$, respectively.
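Numerically, these quantities follow directly from the time-evolved density matrix. The sketch below computes the von Neumann entropy of the normalized state and of the gain and loss sectors, reading $\text{Tr}_{3,4}$ ($\text{Tr}_{1,2}$) as discarding the loss-sector (gain-sector) rows and columns and renormalizing the remaining $2\times2$ block; this reading of the sector reduction is our assumption.
\begin{verbatim}
import numpy as np

def vn_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                  # drop numerically zero weights
    return float(-np.sum(p * np.log2(p)))

def sector_entropies(rho):
    rho_t = rho / np.trace(rho).real  # instantaneous normalization
    gain = rho_t[:2, :2]
    loss = rho_t[2:, 2:]
    return (vn_entropy(rho_t),
            vn_entropy(gain / np.trace(gain).real),
            vn_entropy(loss / np.trace(loss).real))
\end{verbatim}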
A full knowledge of the time-dependent density matrix through quantum-state tomography allows us to experimentally explore the information flow. We focus on the quantum dynamics with the fully symmetric initial state $|\psi(0)\rangle$ (Figs.~\ref{fig:3}(a)-(c)) and a mixed initial state $\rho(0)=0.925|1\rangle\langle1|+0.025\left(|2\rangle\langle2|+|3\rangle\langle3|+|4\rangle\langle4|\right)$ (Figs.~\ref{fig:3}(d)-(f)) in the $\mathcal{PT}$-symmetry unbroken region. Since the qudit undergoes a coherent, non-unitary evolution for any gain-loss strength $\gamma$, a pure state remains a pure state and the entropy of the entire system $S(t)$ remains constant with time (Fig.~\ref{fig:3}(a)). For a mixed initial state, the entropy is constant only in the Hermitian limit, $\gamma=0$. In the $\mathcal{PT}$-symmetry unbroken region, the entropy $S(t)$ shows periodic oscillations. This demonstrates an exchange of quantum information between the $\mathcal{PT}$-symmetric qudit and its environment, and the oscillations observed here may be interpreted as evidence of information backflow from the environment and a signature of non-Markovianity in the $\mathcal{PT}$-unbroken phase~\cite{XWZ+19}.
At the EP$4$ or in the $\mathcal{PT}$-symmetry broken region, due to the diverging occupation, the normalized density matrix $\tilde\rho(t)$ approaches a pure state, and the total system entropy, therefore, approaches zero~\cite{XWZ+19,KAU17,WLG+19}. In all cases, the experimental simulation results agree well with the theoretical prediction. Importantly, this observed behavior of entropy does not depend on the details of the system, which signifies its universality. In this case, information flows unidirectionally and the dynamics is asymptotically Markovian~\cite{KAU17}.
In sharp contrast to the results for the entire system, the behavior of the subsystem entropies for pure and mixed initial states is qualitatively similar. In either case, the gain-sector entropy $S_\mathrm{Gain}(t)$ and the loss-sector entropy $S_\mathrm{Loss}(t)$ oscillate in the $\mathcal{PT}$-symmetry unbroken region including the Hermitian limit. On the other hand, they reach nonzero steady-state values at the EP$4$ and in the broken $\mathcal{PT}$-symmetry region. It is worthwhile to point out that although the gain and loss entropies show qualitatively similar behavior, the trajectories traced out by the instantaneously normalized, reduced density matrices $\tilde\rho_\mathrm{Gain}(t)$ and $\tilde\rho_\mathrm{Loss}(t)$ in the Bloch ball are distinctly different (Fig.~\ref{fig:4}). The trajectory of the gain-sector density matrix is weighted towards the northern hemisphere, representing the largest amplifying mode, whereas the loss-sector density matrix trajectory is less heavily weighted. These differences lead to the slightly different behaviors of $S_\text{Gain}$ and $S_\text{Loss}$.
In this paper, we realize four-level system dynamics under a non-Hermitian Hamiltonian, in the $\mathcal{PT}$-symmetry unbroken phase, the broken phase, or at the exceptional point, with single photons and a cascaded interferometric setup. We realize $4\times4$ non-unitary evolution operations with six BDs and use another one for state preparation. Two different measurements---projective measurement and quantum state tomography of a four-level system---are carried out at the output. In contrast, the setup in~\cite{XWZ+19} is much simpler; there, two-level system dynamics under a non-Hermitian Hamiltonian is realized with two BDs, and only a single-qubit state tomography is carried out to reconstruct the final state. Our experimental method to implement a non-unitary, lossy time evolution operator is scalable, and can therefore be used to simulate higher-dimensional $\mathcal{PT}$-symmetric systems in the future.
\section{Discussion}
In this section we briefly present the analytical derivation for the entropy of the $\mathcal{PT}$-symmetric system. If we start with a pure state, it remains pure under the coherent, non-unitary evolution that is generated by a $\mathcal{PT}$-symmetric, non-Hermitian Hamiltonian. Therefore, the entropy of such a state continues to remain zero. If the initial state is mixed, i.e., $\rho(0)=\sum_{i}\alpha_i|\upsilon_i\rangle\langle\upsilon_i|$, we can express the orthonormal vectors $|\upsilon_i\rangle=\sum_{k}\beta_{ik}|\zeta_k\rangle$ in terms of the non-orthogonal right eigenvectors $|\zeta_k\rangle$ of $H_\mathcal{PT}$. The initial state, thus, can be rewritten as
\begin{align*}
\rho(0)=\sum_{k,j,i}\alpha_i\beta_{ik}\beta^*_{ij}|\zeta_k\rangle\langle\zeta_j|.
\end{align*}
The final state is then given by
\begin{align*}
\rho(t)=&\sum_{\substack{k,i}}\alpha_i|\beta_{ik}|^2|\zeta_k\rangle\langle\zeta_k|\\
&+\sum_{\substack{k\neq j,i}}\alpha_i\beta_{ik}\beta^*_{ij}e^{-i(\lambda_k-\lambda_j)t}|\zeta_k\rangle\langle\zeta_j|.
\end{align*}
We further express the right eigenvectors of $H_\mathcal{PT}$ in terms of the orthonormal eigenvectors of the instantaneous density matrix $\rho(t)$ as $|\zeta_k\rangle=\sum_l \kappa_{kl}|\varphi_l\rangle$. This allows us to obtain the time-dependent occupation eigenvalues $p_l(t)=\langle\varphi_l|\rho(t)|\varphi_l\rangle$ as
\begin{align*}
p_l=\sum_{k,i}\alpha_i|\beta_{ik}|^2|\kappa_{kl}|^2+\sum_{\substack{k\neq j,i}}\alpha_i\beta_{ik}\beta^*_{ij}\kappa_{kl}\kappa^*_{jl}e^{-i(\lambda_k-\lambda_j)t}.
\end{align*}
In the Hermitian limit, the eigenvectors of $H_\mathcal{PT}$ are orthonormal, and the time evolution acts as the rotation of coordinates. Therefore the eigenstates $|\varphi_i\rangle$ are unchanged and the entropy remains a constant of motion. In the non-Hermitian case, $\{|\zeta_k\rangle\}$ are not orthonormal, and the time-dependent entropy is then given by
\begin{align*}
S(t)=-\sum_l \tilde{p}_l\log_2\tilde{p}_l,
\end{align*}
where the fractional occupations are given by $\tilde{p}_l(t)=p_l(t)/\sum_k p_k(t)$. The entropy of the time-evolved state oscillates periodically in the $\mathcal{PT}$-symmetry unbroken region. At the EP$4$, the $p_l(t)$ grow algebraically with time as $t^6$. By writing $p_l=a_l t^6+b_l$, where $a_l$ and $b_l$ are constants, it is straightforward to see that the entropy approaches a steady-state value polynomially in time. In contrast, in the $\mathcal{PT}$-broken region, the $p_l(t)$ grow exponentially with time, leading to a steady-state value for the entropy that is approached exponentially.
\section{Summary}
Higher-dimensional $\mathcal{PT}$ systems, which can be treated as composites of two or more minimal, non-Hermitian, quantum systems, provide a starting point for interacting quantum models with $\mathcal{PT}$-symmetry and EP degeneracies. In this work, we experimentally simulate and observe the quantum information dynamics in a four-dimensional system with EP$4$. We show that the subsystem-entropy behavior for gain or loss subsystems can be either qualitatively different from or similar to the dynamics for the total entropy of the four-dimensional system. Our work is the first experimental demonstration of critical phenomena in four-dimensional $\mathcal{PT}$-symmetric quantum dynamics, and shows the versatility of the single-photon interferometric network platform for simulating interacting, non-Hermitian, quantum systems.
\acknowledgements The authors acknowledge support from the National Natural
Science Foundation of China (Grant Nos. 11674056, 11974331 and U1930402), the Natural
Science Foundation of Jiangsu Province (Grant No. BK20190577), the Fundamental Research Funds for the Central Universities (JUSRP11947), the National Key R\&D Program (Grant Nos. 2016YFA0301700 and 2017YFA0304100) and NSF DMR-1054020.
\bibliographystyle{apsrev4-1}
\section{Introduction}
A fundamental postulate of quantum theory is that the Hamiltonian of an isolated system is Hermitian. This Hermiticity ensures real eigenvalues and a coherent, unitary time evolution for the system. This conventional wisdom was upended two decades ago by Carl Bender and co-workers, who showed that a non-Hermitian Hamiltonian with parity-time ($\mathcal{PT}$) symmetry can exhibit entirely real spectra~\cite{CDH02,A02,A10,CHS+15}. Over time, it has become clear that non-Hermitian Hamiltonians with $\mathcal{PT}$ symmetry can provide an effective description for systems
with balanced, spatially separated gain and loss~\cite{JTSV13}. This concept has been extensively, and fruitfully, explored in classical (wave) systems where the number of energy quanta is much larger than one~\cite{CKR+10,alois2012,feng2014,BSF+14,CJH+14,BLD+14,WKP+16,AYF17,LRL17,HAS+17}. A $\mathcal{PT}$-symmetric system is described by an effective, non-Hermitian Hamiltonian $H_\mathcal{PT}$ that is invariant under the combined parity and time-reversal operation~\cite{CS98}. As the gain-loss strength is increased, the spectrum of $H_\mathcal{PT}$ changes from real into complex conjugate pairs, and the corresponding eigenvectors cease to be eigenvectors of the $\mathcal{PT}$ operator. This $\mathcal{PT}$-symmetry-breaking transition occurs at an EP of order $n$ (EP$n$), where $n$ eigenvalues, as well as their corresponding eigenvectors, coalesce~\cite{kato56,W12,ORN+19}. The $\mathcal{PT}$ transition and the non-unitary time evolution generated by $H_\mathcal{PT}$ have been observed in classical systems with EP$2$~\cite{CKR+10,alois2012,feng2014,BSF+14,CJH+14,BLD+14,WKP+16,AYF17,LRL17,POR+14,ZPO+18,MMC+19,MMC+19,MMC+20,JMC+19,HLL+19,MOB+15}, EP$3$~\cite{HAS+17}, and higher order EPs~\cite{XLKA19,JOL+17}.
Due to the quantum limit on noise in linear (gain) amplifiers~\cite{Caves1982}, creating a photonic system with balanced gain and loss in the quantum domain is not possible~\cite{Scheel2018}. However, the EP degeneracies also occur in dissipative systems with mode-selective losses. Such passive $\mathcal{PT}$-symmetric systems have been realized in the quantum domain with lossy, single-photons~\cite{LXZ+17,pxdqpt,pxchern,XWZ+19,BWZ+20,ZXB+17,XDW+20,Klauck2019}, ultracold atoms~\cite{Luo19}, and a superconducting transmon~\cite{NAJM19}. These realizations are limited to effective two-dimensional Hamiltonians with second-order EPs, and their quantum information studies are confined to global properties~\cite{XWZ+19}. Here we present experimental quantum simulation of entropy dynamics in a four-dimensional, passive $\mathcal{PT}$-symmetric system with an EP$4$.
\section{Implementing $\mathcal{PT}$-symmetric qudit with an EP$4$}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure1}
\caption{Experimental setup. (a) Illustration of a $4$-mode $\mathcal{PT}$-symmetric qudit. (b) Schematic of the optical circuit used for simulating dynamics in the four-mode, passive $\mathcal{PT}$-symmetric system. Heralded single photons are generated via spontaneous parametric down-conversion and prepared in arbitrary qudit states using a polarizing beam splitter (PBS), wave plates with certain setting angles and a beam displacer (BD). The coherent, lossy, non-unitary time evolution is realized by BDs, HWPs, and sandwich-type QWP-HWP-QWP setups, along with single-photon loss. For detection, projective measurements and quantum-state tomography are selected depending on the purpose, either of which is performed, both of which are realized by a PBS and (or) wave plates and a BD. Avalanche photodiodes (APDs) detect the signal and heralding photons.}
\label{fig:1}
\vspace{-3mm}
\end{figure*}
Let us consider an open, four-mode system described by a $4\times 4$ Hamiltonian
\begin{equation}
\label{eq:hpt}
H_\mathcal{PT}=-JS_x+i\gamma S_z,
\end{equation}
where $S_x$ and $S_z$ are spin-$3/2$ representations of the SU(2) group. It can be written in the matrix form as
\begin{equation}
\label{eq:hpt2}
H_\mathcal{PT}=\frac{1}{2}
\begin{pmatrix}
3i\gamma & -\sqrt{3}J & 0 & 0 \\
-\sqrt{3}J & i\gamma & -2J & 0 \\
0 & -2J & -i\gamma & -\sqrt{3}J\\
0 & 0 & -\sqrt{3}J & -3i\gamma
\end{pmatrix}
\end{equation}
in the computational basis $\{|1\rangle,|2\rangle,|3\rangle,|4\rangle\}$, and represents a $\mathcal{PT}$-symmetric qudit with $d=4$. The Hamiltonian $H_\mathcal{PT}$ commutes with the antilinear $\mathcal{PT}$ operator where the parity operator is $\mathcal{P}=\mathrm{antidiag}(1, 1, 1, 1)$ and a time-reversal operator is given by complex conjugation, $\mathcal{T}=*$. It follows from Eq.~(\ref{eq:hpt2}) that the first two computational modes represent the ``gain sector'' and the last two represent the ``loss sector'' in the system. The four equally spaced eigenvalues of $H_\mathcal{PT}$ are given by $\lambda_k=\{-3/2,-1/2,+1/2,+3/2\}\sqrt{J^2-\gamma^2}$ ($k=1,2,3,4$), which give rise to an EP$4$ at the $\mathcal{PT}$-breaking threshold $\gamma=J$. The advantage of choosing Hamiltonian (\ref{eq:hpt}) is that it can be easily generalized to an arbitrary dimensional system where it still remains analytically solvable and has an EP with the order equal to the system dimension~\cite{HAS+17,EUH+08,QJ2019}. Since $H_\mathcal{PT}$ has a single energy gap $\Delta=\sqrt{J^2-\gamma^2}$, it follows that the $\mathcal{PT}$ symmetric qudit has a sinusoidal dynamics in the $\mathcal{PT}$-symmetry unbroken region ($\gamma<J$), and a monotonic, exponential growth behavior in the $\mathcal{PT}$-broken region ($\gamma>J$).
The coherent, non-unitary time evolution operator for the system is given by $U(t)=\exp(-i H_\mathcal{PT}t)$ where we have set $\hbar=1$. For $\gamma=0$, the system is Hermitian and the fermionic nature of spin-$3/2$ representation is manifest in the anti-periodicity of $U$, i.e., $U(T)=-\mathbb{I}_4$ where $T(0)=2\pi/J$ for $\gamma=0$. In this case, the mode-occupations $P_k(t)=|\langle k |\psi(t)\rangle|^2$ of the four modes obey a shifted mirror symmetry with $P_k(t)=P_{5-k}(t+T/2)$, which indicates a perfect state transfer occurring from mode $k$ to mode $(5-k)$ at $T/2$. Here $|\psi(t)\rangle=U(t)|\psi(0)\rangle$ is the time-evolved state. For $\gamma < J$, the system is in the $\mathcal{PT}$-symmetry unbroken region, the dynamical evolution is anti-periodical with period $T(\gamma)=2\pi/\Delta$. At the EP$4$ ($\gamma= J$), $U(t)$ ceases to be periodic and has an operator norm that grows as $t^6$, reflecting the fourth order of the EP. In the $\mathcal{PT}$-symmetry broken region, the mode occupations grow exponentially with time. However, the quantum information metrics, such as the von Neumann entropy, are defined with respect to the instantaneously normalized state (indicating post-selection that eliminates the quantum jumps~\cite{NAJM19,Klauck2019,QJ2019}). Therefore, at the EP and in the $\mathcal{PT}$-broken region, these quantities reach a steady-state value. These results are applicable to all finite-dimensional representation of the $SU(2)$ group.
The four-dimensional Hamiltonian $H_\mathcal{PT}$ is particularly interesting because it can be viewed as a system of two interacting, non-Hermitian qubits. This mapping is provided by the identities $2S_x=\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sqrt{3}\mathbb{I}_2\otimes\sigma_x$, $2S_z=\sigma_z\otimes\mathbb{I}_2+\mathbb{I}_2\otimes\sigma_z/2$, and $\mathcal{P}=\sigma_x\otimes\sigma_x$, where $\sigma_k$ ($k=x,y,z$) are the standard Pauli matrices. Using this insight, we investigate the quantum information dynamics in the gain and loss subsystems of the $\mathcal{PT}$-symmetric qudit.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure2}
\caption{Scaled mode occupations of a $\mathcal{PT}$-symmetric system. (a) In the Hermitian limit with $\gamma=0$, mode occupation numbers are periodic with period $T=2\pi/J$, and perfect state transfer occurs from mode $k$ to mode $5-k$ at time $T/2$. (b) In the $\mathcal{PT}$-unbroken phase with $\gamma=0.2J$, the occupation dynamics have period $T(\gamma)$, and the total weights exceed unity indicating a non-trace-preserving time evolution. (c) At $\gamma=J$, the scaled mode occupation grows as $t^6$ with time indicating that the exceptional point is of order four. (d) In the $\mathcal{PT}$-broken phase with $\gamma=1.2J$, the scaled mode occupation grows exponentially with time. Time is measured in units of $J$. Symbols: data; lines: theory. Error bars are due to the statistical uncertainty and obtained based on assuming Poisson statistics. When not shown, error bars are smaller than the symbol size.}
\label{fig:2}
\vspace{-3mm}
\end{figure}
We encode the four modes of the qudit in the spatial and polarization degrees of freedom of a single photon, and label them as $|1\rangle=|UH\rangle,|2\rangle=|UV\rangle,|3\rangle=|DH\rangle,|4\rangle=|DV\rangle$. Here $\{|H\rangle,|V\rangle\}$ are the horizontal and vertical polarizations, and $\{|U\rangle,|D\rangle\}$ denote the upper and lower paths, which undergo gain and loss, respectively (Fig.~\ref{fig:1}(a)). As illustrated in Fig.~\ref{fig:1}(b), pairs of single photons are generated via type-I spontaneous parametric down-conversion (SPDC) in a nonlinear $\beta$-barium-borate (BBO) crystal. One photon serves as a trigger, and the other (signal) photon is prepared in an arbitrary qudit state using a polarizing beam splitter (PBS), wave plates set at appropriate angles, and a beam displacer (BD).
By mapping the $\mathcal{PT}$-symmetric Hamiltonian $H_{\mathcal{PT}}$ into a passive $\mathcal{PT}$-symmetric one with mode-selective losses, $H_\mathcal{L}=H_\mathcal{PT}-3i\gamma\mathbb{I}_4/2$, we implement the $4\times4$ lossy time-evolution operator
\begin{equation}
U_\mathcal{L}(t)=\exp(-iH_{\mathcal{L}}t)
\end{equation}
via a lossy linear optical circuit, which is related to $U(t)$ through $U(t)= U_\mathcal{L}(t)\exp(3\gamma t/2)$~\cite{LXZ+17}. The evolution operator $U_\mathcal{L}(t)$ is realized by BDs, half-wave plates (HWPs), and sandwich-type QWP-HWP-QWP setups, where QWP is an abbreviation for quarter-wave plate.
We obtain the scaled mode occupations $P_k(t)$ by projecting the time-evolved state $|\psi(t)\rangle$ onto $|k\rangle$. The initial state is chosen to be $|\psi(0)\rangle=(|1\rangle+|2\rangle+|3\rangle+|4\rangle)/2$. The projective measurement and the quantum state tomography on the qudit state are realized by BDs, wave plates, and a PBS, followed by avalanche photodiodes (APDs). Only coincidences between the signal and trigger photons are registered. The perfect state transfer for $\gamma=0$ is confirmed by the transfer of occupation from the first mode to the fourth mode (Fig.~\ref{fig:2}(a)). In the $\mathcal{PT}$-unbroken phase with a finite $\gamma=0.2J$, there is no perfect state transfer at time $T(\gamma)/2$ due to the non-unitary dynamics (Fig.~\ref{fig:2}(b)). The measured occupations are, however, periodic in time with period $T(\gamma)$.
At the EP$4$ with $\gamma=J$, the scaled mode occupation $P_k(t)$ grows algebraically with time as $t^6$ (Fig.~\ref{fig:2}(c)). Such a scaling is dictated by the order of the EP: at $\gamma=J$, the Hamiltonian obeys $H^{4}_\mathcal{PT}(\gamma=J)=0$ and the power-series expansion of $U(t)$ terminates at third order, giving rise to the $t^6$ dependence of the occupation numbers. Projecting the time-evolved state onto $|k\rangle$ yields the occupation at the EP$4$ and confirms its power-law behavior (Fig.~\ref{fig:2}(c)). In the $\mathcal{PT}$-broken phase, the scaled mode occupation grows exponentially with time, as expected (Fig.~\ref{fig:2}(d)). We note that while the simulation time range covers two periods for $\gamma<J$, we restrict it to $0\leq t\leq 4.5$ at the EP$4$ and in the broken $\mathcal{PT}$ region due to the rapid growth of the scaled mode occupations.
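For completeness, the truncation argument can be written out explicitly: since $H_\mathcal{PT}^4(\gamma=J)=0$, the exponential series collapses to a cubic polynomial,
\begin{align*}
U(t)=e^{-iH_\mathcal{PT}t}=\mathbb{I}_4-iH_\mathcal{PT}t-\frac{1}{2}H_\mathcal{PT}^2t^2+\frac{i}{6}H_\mathcal{PT}^3t^3,
\end{align*}
so every amplitude $\langle k|\psi(t)\rangle$ is a polynomial of degree at most three in $t$, and the occupations $P_k(t)=|\langle k|\psi(t)\rangle|^2$ grow at most as $t^6$ at late times.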
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure3}
\caption{Eigenvalues of the perturbed Hamiltonian. The real (a) and imaginary (b) parts of the eigenvalues of the perturbed Hamiltonian $H_\delta=H_\mathcal{PT}(\gamma=J)+iJ\delta|1\rangle\langle 1|$, measured in units of $J$, scale as $\delta^{1/4}$, showing the enhanced sensitivity near the EP$4$. The initial states are eigenstates $|\psi_\delta\rangle$ of $H_\delta$. Experimental errors are estimated via a Monte Carlo method; when not shown, error bars are smaller than the symbol size.}
\label{fig:S2}
\vspace{-3mm}
\end{figure}
When the $\mathcal{PT}$-symmetric Hamiltonian is perturbed from the EP$4$ by a small detuning $\delta$, the resulting complex eigenvalues in the vicinity of an EP$n$ are given by a Puiseux series in $\delta^{1/n}$~\cite{EUH+08}, indicating an enhanced classical sensitivity that increases with the order of the EP~\cite{HAS+17,WSG+17}. In addition to the behavior of the mode occupations at the EP, this serves as a complementary check of the order of the EP. To that end, we experimentally measure the complex eigenvalues of the perturbed Hamiltonian $H_\delta=H_\mathcal{PT}(\gamma=J)+iJ\delta|1\rangle\langle 1|$. Figure~\ref{fig:S2} shows that the real and imaginary parts of the eigenvalues of $H_\delta$ indeed scale as $\delta^{1/4}$, consistent with the EP$4$ that occurs at $\gamma=J$.
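The $\delta^{1/4}$ scaling is straightforward to reproduce numerically. The sketch below again assumes the form $H_\mathcal{PT}=JS_x+i\gamma S_z$ used in the earlier snippet; only the perturbation $iJ\delta|1\rangle\langle1|$ is taken from the text.
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, s3, 0, 0], [s3, 0, 2, 0],
                     [0, 2, 0, s3], [0, 0, s3, 0]])
Sz = np.diag([1.5, 0.5, -0.5, -1.5])
J = 1.0
H_EP = J * Sx + 1j * J * Sz             # Hamiltonian at the EP4
P1 = np.zeros((4, 4)); P1[0, 0] = 1.0   # projector |1><1|

for delta in (1e-6, 1e-4, 1e-2):
    ev = np.linalg.eigvals(H_EP + 1j * J * delta * P1)
    # If lambda ~ delta^(1/4), this ratio is approximately delta-independent.
    print(delta, np.max(np.abs(ev)) / delta**0.25)
\end{verbatim}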
\section{Observing information dynamics}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure4}
\caption{Quantum information dynamics of the $\mathcal{PT}$-symmetric qudit. (a) The time-dependent entropy $S(t)$ for a qudit with pure initial state $|\psi(0)\rangle$ remains zero for any non-Hermiticity. (d) For a mixed initial state, $S(t)$ is constant when $\gamma=0$, and oscillates in the $\mathcal{PT}$-unbroken phase ($\gamma=0.2J$) with period $T(\gamma)$. At the EP$4$ ($\gamma=J$) and in the broken $\mathcal{PT}$ region, $S(t)$ reaches zero because the system approaches a pure state determined by the sole eigenstate at the EP or by the mode with maximum amplification. In contrast, the subsystem entropies $S_\mathrm{Gain}(t)$ and $S_\mathrm{Loss}(t)$ show qualitatively similar behavior for pure (b)-(c) and mixed (e)-(f) initial qudit states. In each case, the entropies show oscillatory behavior for $\gamma<J$ and steady-state behavior for $\gamma\geq J$. We restrict to $0\leq t\leq 4.5$ due to the rapid growth of the scaled mode occupations at the EP$4$ and in the broken $\mathcal{PT}$ region. Experimental errors are estimated via a Monte Carlo method; when not shown, error bars are smaller than the symbol size.}
\label{fig:3}
\vspace{-3mm}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure5}
\caption{Reduced, instantaneously normalized density matrices $\tilde\rho_\mathrm{Gain}(t)$ (a)-(d) and $\tilde\rho_\mathrm{Loss}(t)$ (e)-(h) trace different paths inside the Bloch sphere. Experimental results are represented by colored symbols and their theoretical predictions are represented by dashed curves.}
\label{fig:4}
\vspace{-3mm}
\end{figure*}
A crucial aspect of the dynamics of a high-dimensional $\mathcal{PT}$-symmetric system is the flow of information among its different parts, and the retrieval of information exchanged between the whole system and its environment. To that end, we consider the qudit entropy
\begin{equation}
S(t)=-\text{Tr}\left[\tilde{\rho}(t)\log_2\tilde{\rho}(t)\right],
\end{equation}
where $\tilde{\rho}(t)=\rho(t)/\text{Tr}\left[\rho(t)\right]$ is the instantaneously normalized density matrix and $\rho(t)=U(t)\rho(0)U^\dagger(t)$ is the time-evolved density matrix of the system with a time-dependent trace. The gain- and loss-sector entropies are $S_\mathrm{Gain}(t)$ and $S_\mathrm{Loss}(t)$, respectively. These are obtained from the gain- and loss-sector reduced density matrices $\rho_\mathrm{Gain}(t)=\text{Tr}_{3,4}\left[\rho(t)\right]$ and $\rho_\mathrm{Loss}(t)=\text{Tr}_{1,2}\left[\rho(t)\right]$, respectively.
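A minimal numerical sketch of these diagnostics is given below. It assumes the same form of $H_\mathcal{PT}$ as in the earlier snippets and reads $\text{Tr}_{3,4}$ (respectively $\text{Tr}_{1,2}$) as the compression of $\rho(t)$ onto the gain (respectively loss) subspace followed by renormalization; this reading is our assumption, made for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

s3 = np.sqrt(3.0)
Sx = 0.5 * np.array([[0, s3, 0, 0], [s3, 0, 2, 0],
                     [0, 2, 0, s3], [0, 0, s3, 0]])
Sz = np.diag([1.5, 0.5, -0.5, -1.5])

def entropy(rho):
    # von Neumann entropy of the instantaneously normalized state (base 2).
    rho = rho / np.trace(rho).real
    p = np.linalg.eigvalsh((rho + rho.conj().T) / 2)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

H = 1.0 * Sx + 1j * 0.2 * Sz            # PT-unbroken point, gamma = 0.2 J
rho0 = np.diag([0.925, 0.025, 0.025, 0.025]).astype(complex)
for t in np.linspace(0.0, 6.0, 7):
    U = expm(-1j * H * t)
    rho = U @ rho0 @ U.conj().T         # coherent, non-unitary evolution
    print(f"t={t:4.1f}  S={entropy(rho):.3f}"
          f"  S_gain={entropy(rho[:2, :2]):.3f}"
          f"  S_loss={entropy(rho[2:, 2:]):.3f}")
\end{verbatim}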
Full knowledge of the time-dependent density matrix, obtained through quantum state tomography, allows us to experimentally explore the information flow. We focus on the quantum dynamics with the fully symmetric initial state $|\psi(0)\rangle$ (Figs.~\ref{fig:3}(a)-(c)) and a mixed initial state $\rho(0)=0.925|1\rangle\langle1|+0.025\left(|2\rangle\langle2|+|3\rangle\langle3|+|4\rangle\langle4|\right)$ (Figs.~\ref{fig:3}(d)-(f)) in the $\mathcal{PT}$-symmetry unbroken region. Since the qudit undergoes a coherent, non-unitary evolution for any gain-loss strength $\gamma$, a pure state remains pure and the entropy of the entire system $S(t)$ remains constant in time (Fig.~\ref{fig:3}(a)). For a mixed initial state, the entropy is constant only in the Hermitian limit, $\gamma=0$. In the $\mathcal{PT}$-symmetry unbroken region, the entropy $S(t)$ shows periodic oscillations. This demonstrates an exchange of quantum information between the $\mathcal{PT}$-symmetric qudit and its environment, and the oscillations observed here may be interpreted as evidence of information backflow from the environment and a signature of non-Markovianity in the $\mathcal{PT}$-unbroken phase~\cite{XWZ+19}.
At the EP$4$ or in the $\mathcal{PT}$-symmetry broken region, due to the diverging occupation, the normalized density matrix $\tilde\rho(t)$ approaches a pure state, and the total system entropy therefore approaches zero~\cite{XWZ+19,KAU17,WLG+19}. In this case, information flows unidirectionally and the dynamics is asymptotically Markovian~\cite{KAU17}. Importantly, this observed behavior of the entropy does not depend on the details of the system, which signifies its universality. In all cases, the experimental simulation results agree well with the theoretical predictions.
In sharp contrast to the results for the entire system, the behavior of the subsystem entropies for pure and mixed initial states is qualitatively similar. In either case, the gain-sector entropy $S_\mathrm{Gain}(t)$ and the loss-sector entropy $S_\mathrm{Loss}(t)$ oscillate in the $\mathcal{PT}$-symmetry unbroken region, including the Hermitian limit. On the other hand, they reach nonzero steady-state values at the EP$4$ and in the broken $\mathcal{PT}$-symmetry region. It is worthwhile to point out that although the gain and loss entropies show qualitatively similar behavior, the trajectories traced out by the instantaneously normalized, reduced density matrices $\tilde\rho_\mathrm{Gain}(t)$ and $\tilde\rho_\mathrm{Loss}(t)$ in the Bloch ball are distinctly different (Fig.~\ref{fig:4}). The trajectory of the gain-sector density matrix is weighted towards the northern hemisphere, representing the most strongly amplified mode, whereas the loss-sector trajectory is less heavily weighted. These differences lead to the slightly different behaviors of $S_\text{Gain}$ and $S_\text{Loss}$.
In this paper, we realize the dynamics of a four-level system under a non-Hermitian Hamiltonian, in the $\mathcal{PT}$-unbroken and broken phases as well as at the exceptional point, with single photons and a cascaded interferometric setup. We realize $4\times4$ non-unitary evolution operations with six BDs and use another one for state preparation. Two different measurements, projective measurement and quantum state tomography of a four-level system, are carried out at the output. In contrast, the setup in~\cite{XWZ+19} is much simpler: a two-level system evolving under a non-Hermitian Hamiltonian is realized with two BDs, and only single-qubit state tomography is carried out to reconstruct the final state. Our experimental method for implementing a non-unitary, lossy time-evolution operator is scalable, and can therefore be used to simulate higher-dimensional $\mathcal{PT}$-symmetric systems in the future.
\section{Discussion}
In this section we briefly present the analytical derivation of the entropy of the $\mathcal{PT}$-symmetric system. If we start with a pure state, it remains pure under the coherent, non-unitary evolution generated by a $\mathcal{PT}$-symmetric, non-Hermitian Hamiltonian; therefore, the entropy of such a state remains zero. If the initial state is mixed, i.e., $\rho(0)=\sum_{i}\alpha_i|\upsilon_i\rangle\langle\upsilon_i|$, we can express the orthonormal vectors $|\upsilon_i\rangle=\sum_{k}\beta_{ik}|\zeta_k\rangle$ in terms of the non-orthogonal right eigenvectors $|\zeta_k\rangle$ of $H_\mathcal{PT}$. The initial state can thus be rewritten as
\begin{align*}
\rho(0)=\sum_{k,j,i}\alpha_i\beta_{ik}\beta^*_{ij}|\zeta_k\rangle\langle\zeta_j|.
\end{align*}
The final state is then given by
\begin{align*}
\rho(t)=&\sum_{\substack{k,i}}\alpha_i|\beta_{ik}|^2|\zeta_k\rangle\langle\zeta_k|\\
&+\sum_{\substack{k\neq j,i}}\alpha_i\beta_{ik}\beta^*_{ij}e^{-i(\lambda_k-\lambda_j)t}|\zeta_k\rangle\langle\zeta_j|.
\end{align*}
We further express the right eigenvectors of $H_\mathcal{PT}$ in terms of the orthonormal eigenvectors of the instantaneous density matrix $\rho(t)$ as $|\zeta_k\rangle=\sum_l \kappa_{kl}|\varphi_l\rangle$. This allows us to obtain the time-dependent occupation eigenvalues $p_l(t)=\langle\varphi_l|\rho(t)|\varphi_l\rangle$ as
\begin{align*}
p_l=\sum_{k,i}\alpha_i|\beta_{ik}|^2|\kappa_{kl}|^2+\sum_{\substack{k\neq j,i}}\alpha_i\beta_{ik}\beta^*_{ij}\kappa_{kl}\kappa^*_{jl}e^{-i(\lambda_k-\lambda_j)t}.
\end{align*}
In the Hermitian limit, the eigenvectors of $H_\mathcal{PT}$ are orthonormal and the time evolution acts as a rotation of coordinates. Therefore the occupation eigenvalues $p_l$ are unchanged and the entropy remains a constant of motion. In the non-Hermitian case, the $\{|\zeta_k\rangle\}$ are not orthonormal, and the time-dependent entropy is then given by
\begin{align*}
S(t)=-\sum_l \tilde{p}_l\log_2\tilde{p}_l,
\end{align*}
where the fractional occupations are given by $\tilde{p}_l(t)=p_l(t)/\sum_k p_k(t)$. The entropy of the time-evolved state oscillates periodically in the $\mathcal{PT}$-symmetric unbroken region. At the EP$4$, the $p_l(t)$ grow algebraically with time as $t^6$. By writing $p_l=a_l t^6+b_l$, where $a_l$ and $b_l$ are constants, it is straightforward to see that the entropy approaches a steady-state value polynomially in time. In contrast, in the $\mathcal{PT}$-broken region, the $p_l(t)$ grow exponentially with time, leading to a steady-state value for the entropy that is approached exponentially.
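Explicitly, in the EP$4$ case with $p_l=a_lt^6+b_l$ the fractional occupations become
\begin{align*}
\tilde{p}_l(t)=\frac{a_lt^6+b_l}{\sum_k\left(a_kt^6+b_k\right)}
\;\longrightarrow\;\frac{a_l}{\sum_k a_k}
\quad\text{as } t\to\infty,
\end{align*}
with corrections of order $t^{-6}$, so $S(t)$ settles to the value determined by the ratios $a_l/\sum_k a_k$.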
\section{Summary}
Higher-dimensional $\mathcal{PT}$-symmetric systems, which can be treated as composites of two or more minimal non-Hermitian quantum systems, provide a starting point for interacting quantum models with $\mathcal{PT}$ symmetry and EP degeneracies. In this work, we experimentally simulate and observe the quantum information dynamics in a four-dimensional system with an EP$4$. We show that the entropy dynamics of the gain and loss subsystems can be either qualitatively different from or similar to that of the total entropy of the four-dimensional system. Our work is the first experimental demonstration of critical phenomena in four-dimensional $\mathcal{PT}$-symmetric quantum dynamics, and shows the versatility of the single-photon interferometric network platform for simulating interacting, non-Hermitian quantum systems.
\acknowledgements The authors acknowledge support from the National Natural
Science Foundation of China (Grant Nos. 11674056, 11974331 and U1930402), the Natural
Science Foundation of Jiangsu Province (Grant No. BK20190577), the Fundamental Research Funds for the Central Universities (JUSRP11947), the National Key R\&D Program (Grant Nos. 2016YFA0301700 and 2017YFA0304100) and NSF DMR-1054020.
\bibliographystyle{apsrev4-1}
|
1,116,691,497,648 | arxiv | \section{Introduction}
The Isolation Lemma by Mulmuley, Vazirani, and Vazirani~\cite{MVV87}
states that for any given family of subsets of a ground set $E$, if we assign random
weights (bounded in magnitude by poly($\abs{E}$)) to the elements of $E$ then, with high probability, the minimum weight
set in the family is unique.
Such a weight assignment is called an \emph{isolating weight assignment}.
The lemma was introduced in the context of randomized parallel algorithms for the matching problem.
Since then it has found numerous other applications, in both algorithms and complexity: e.g., a reduction from CLIQUE to UNIQUE-CLIQUE~\cite{MVV87},
NL/poly $\subseteq \oplus$L/poly~\cite{Wig94},
NL/poly $=$ UL/poly~\cite{RA00}, an RNC-algorithm for linear matroid intersection~\cite{NSV94}, and
an RP-algorithm for disjoint paths~\cite{BH14}.
In all of these results, the Isolation Lemma is the only place where they need randomness.
Thus, if the Isolation Lemma can be derandomized, i.e., if a polynomially bounded isolating weight assignment can be
deterministically constructed, then the aforementioned results that rely on it can also be derandomized.
In particular, it will give a deterministic parallel algorithm for matching.
A simple counting argument shows that a single weight assignment with polynomially bounded weights
cannot be isolating for all possible families
of subsets of~$E$.
We can relax the question and ask if we
can construct a poly-size list of poly-bounded weight assignments such that for each family $\mathcal{B} \subseteq 2^E$,
one of the weight assignments in the list is isolating.
%
Unfortunately, even this can be shown to be impossible
via arguments involving the polynomial identity testing (PIT) problem.
The PIT problem asks if an implicitly given multivariate polynomial is identically zero.
%
Derandomization of PIT is another important consequence of derandomizing the Isolation Lemma.
Here, the Isolation Lemma is applied to the family of monomials present in the polynomial.
In essence, if we have a small list of weight assignments that works for all families,
then we will have a small hitting-set for all small degree polynomials, which is impossible (see~\cite{AM08}).
Once we know that a deterministic isolation is not possible for all families,
a natural question is to solve the isolation question for families~$\mathcal{B}$ that have a succinct representation, for example, the family of perfect matchings of a graph.
For the general setting of families with succinct representations, no deterministic isolation is known, other than the trivial construction with exponentially large weights.
In fact, derandomizing the isolation lemma in this setting will imply circuit lower bounds~\cite{AM08}.
Efficient deterministic isolation is known only for very special kinds of families,
for example, perfect matchings in some special classes of graphs~\cite{DKR10, DK98, GK87, AHT07},
$s$-$t$ paths in directed graphs~\cite{BTV09,KT16,MP17}. Recently, there has been significant progress on deterministic isolation for perfect matchings in bipartite graphs~\cite{FGT16} and subsequently, in general graphs~\cite{ST17},
and matroid intersection~\cite{GT17}, which implied quasi-NC algorithms for these problems.
Motivated by these recent works, we give a generic approach towards derandomizing the Isolation Lemma.
We show that the approach works for a large class of combinatorial polytopes
and conjecture that it works for a significantly larger class.
For a family of sets $\mathcal{B} \subseteq 2^E$, define the polytope~$P(\mathcal{B}) \subseteq \mathbb{R}^E$
to be the convex hull of the indicator vectors of the sets in~$\mathcal{B}$.
Our main result shows that for $m:= \abs{E}$, there exists an $m^{O(\log m)}$-sized family of weight assignments on $E$
with weights bounded by $m^{O(\log m)}$ that is isolating for any family~$\mathcal{B}$ whose corresponding polytope~$P(\mathcal{B})$ satisfies the following property:
\emph{the affine space spanned by any face of~$P(\mathcal{B})$ is parallel to
the null space of {\bf some}
totally unimodular (TU) matrix}; see \cref{thm:TUMisolation}.
This is a black-box weight construction in the sense that it does not need the description of the family or the polytope.
A large variety of polytopes satisfy this property and, as a consequence, have been extensively studied in combinatorial optimization.
The simplest such class is when the polytope $P(\mathcal{B})$ has a description $Ax \leq b$ with $A$ being a TU matrix.
Thus, a simple consequence of our main result is a resolution to the problem of derandomizing the isolation lemma for polytopes with TU constraints, as raised in a recent work~\cite{ST17}.
This generalizes
the isolation result
for perfect matchings in a bipartite graph~\cite{FGT16},
since the perfect matching polytope of a bipartite graph can be described by the incidence matrix of the graph, which is TU.
Other examples of families whose polytopes are defined by TU constraints
are
vertex covers of a bipartite graph, independent sets of a bipartite graph,
and, edge covers of a bipartite graph.
Note that these three problems are computationally equivalent to bipartite matching and thus, already have quasi-NC algorithms due to \cite{FGT16}.
However, the isolation results for these families are not directly implied by isolation for bipartite matchings.
Our work also generalizes the isolation result
for the family of common bases of two matroids~\cite{GT17}.
In the matroid intersection problem,
the constraints of the common base polytope are a rank bound on every subset of the ground set.
These constraints, in general, do not form a TU matrix.
However,
for every face of the polytope there exist two laminar families of subsets that form a basis for the tight constraints of the face.
The incidence matrix for the union of two laminar families is TU
(see~\cite[Theorem 41.11]{Sch03B}).
Since our condition on the polytope~$P(\mathcal{B})$ does not require the constraint matrix defining the polytope itself (or any of its faces) to be TU,
it is quite weak and is also well studied.
Schrijver~\cite[Theorem 5.35]{Sch03A} shows that this condition is sufficient to prove that the polytope is \emph{box-totally dual integral}.
The second volume of Schrijver's book~\cite{Sch03B}
gives an excellent overview of polytopes that satisfy the condition required in \cref{thm:TUMisolation} such as
\begin{itemize}
\item $R-S$ bibranching polytope \cite[Section 54.6]{Sch03B}
\smallskip
\item directed cut cover polytope \cite[Section 55.2]{Sch03B}
\smallskip
\item submodular flow polyhedron \cite[Theorem 60.1]{Sch03B}
\smallskip
\item lattice polyhedron \cite[Theorem 60.4]{Sch03B}
\smallskip
\item submodular base polytope \cite[Section 44.3]{Sch03B}
\smallskip
\item many other polytopes defined via submodular and supermodular set functions
\cite[Sections 46.1, 48.1, 48.23, 46.13, 46.28, 46.29, 49.3, 49.12, 49.33, 49.39, 49.53]{Sch03B}.
\end{itemize}
We would like to point out that it is not clear if our isolation results in the above settings lead to any new derandomization of algorithms.
Finding such algorithmic applications of our isolation result would be quite interesting.
To derandomize the Isolation Lemma, we
abstract out ideas from the bipartite matching and
matroid intersection isolation~\cite{FGT16,GT17},
and give a geometric approach in terms of certain {\em lattices} associated to polytopes.
For each face $F$ of $P(\mathcal{B})$, we consider the lattice $L_F$ of all integer vectors parallel to~$F$.
We show that, if for each face $F$ of $P(\mathcal{B})$, the number of near-shortest vectors in $L_F$ is polynomially bounded then
we can construct an isolating weight assignment for $\mathcal{B}$ with quasi-polynomially bounded weights; see \cref{thm:isolation}.
Our main technical contribution is to give a polynomial bound on the number of near-shortest vectors in~$L_F$
(whose $\ell_1$-norm is less than $ \nicefrac{3}{2} $ times the smallest $\ell_1$-norm of any vector in~$L_F$),
when this lattice is the set of integral vectors in the null space of a TU matrix; see \cref{thm:tum-LA}.
%
The above lattice result is in contrast to general lattices where the number of such near-shortest vectors could be exponential in the dimension.
%
Our result on lattices can be reformulated using the language of matroid theory:
the number of near-shortest circuits in a regular matroid is polynomially bounded; see \cref{thm:regular}.
In fact, we show how \cref{thm:tum-LA} can be deduced from \cref{thm:regular}.
One crucial ingredient in the proof of \cref{thm:regular} is Seymour's remarkable decomposition theorem for regular matroids~\cite{Sey80}.
\cref{thm:regular} answers a question raised by Subramanian~\cite{Sub95} and
is a generalization of (and builds on) known results in the case of graphic and cographic matroids,
that is, the number of near-minimum length cycles in a graph is polynomially bounded (see~\cite{TK92,Sub95})
and the result of Karger \cite{Kar93} that states that the number of near-mincuts in a graph is polynomially bounded.
Thus, not only do our results make progress towards derandomizing the isolation lemma for combinatorial polytopes, but they also draw interesting connections between lattices (which are geometric objects) and combinatorial polytopes.
Our structural results about the number of near-shortest vectors in lattices
and near-shortest circuits in matroids should be of independent interest and raise the question:
to what extent are they generalizable?
A natural conjecture would be that for any $(0,1)$-matrix, the lattice formed by its integral null vectors has a small number of near-shortest vectors.
In turn, this would give us the isolation result for any polytope which is defined by a $(0,1)$-constraint matrix.
Many combinatorial polytopes have this property.
One such interesting example is the perfect matchings polytope for general graphs.
The recent result of \cite{ST17}, which showed a quasi-NC algorithm for perfect matchings,
does not actually go via a bound on the number of near-shortest vectors in the associated lattice.
Obtaining a polynomial bound on this number would give a proof for their quasi-NC result in our unified framework
and with improved parameters.
Another possible generalization is to $(0,1)$-polytopes with the property that the integers occurring in the description of each supporting hyperplane are bounded by a polynomial in the dimension of the polytope.
Such polytopes generalize almost all combinatorial polytopes and yet seem to have enough structure -- they have been recently studied in the context of optimization \cite{SinghV14, SV17Entropy}.
\section{Our Results}
\subsection{Isolating a vertex in a polytope}
\label{sec:TUMisolation}
For a set~$E$ and a weight function $w \colon E \to \mathbb{Z}$,
we define the extension of~$w$ to any set $S \subseteq E$ by
$$w(S) := \sum_{e \in S} w(e). $$
Let $\mathcal{B} \subseteq 2^{E}$ be a family of subsets of $E$.
A weight function $w \colon E \to \mathbb{Z}$ is called \emph{isolating for}~$\mathcal{B}$
if the minimum weight set in~$\mathcal{B}$
is unique.
In other words, the set $ \arg \min_{S \in \mathcal{B}} w(S)$ is unique.
The Isolation Lemma of Mulmuley, Vazirani, and Vazirani~\cite{MVV87} asserts that a uniformly random weight function is isolating with a good probability for any~$\mathcal{B}$.
\begin{lemma}[Isolation Lemma]
Let $E$ be a set, $\abs{E} = m$,
and
let $w\colon E \to \{1,2,\dots, 2m \}$ be a random weight function,
where for each $e \in E$,
the weight~$w(e)$ is chosen uniformly and independently at random.
Then for any family $\mathcal{B} \subseteq 2^{E}$, ~$w$ is isolating with probability at least~$ \nicefrac{1}{2} $.
\end{lemma}
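The lemma is straightforward to verify empirically. The following Python sketch uses a hypothetical family (all subsets of size $m/2$ of a small ground set); it illustrates the statement itself rather than any particular application.
\begin{verbatim}
import random
from itertools import combinations

def is_isolating(w, family):
    # w: list of element weights; family: collection of index sets.
    weights = sorted(sum(w[e] for e in S) for S in family)
    return len(weights) == 1 or weights[0] < weights[1]

m = 8
family = list(combinations(range(m), m // 2))   # hypothetical family B

trials, hits = 10000, 0
for _ in range(trials):
    w = [random.randint(1, 2 * m) for _ in range(m)]
    hits += is_isolating(w, family)
print(hits / trials)  # empirically well above 1/2, as the lemma guarantees
\end{verbatim}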
\noindent The task of derandomizing the Isolation Lemma
requires the deterministic construction of an isolating weight function with weights polynomially bounded in $m = \abs{E}$.
Here, we view the isolation question for~$\mathcal{B}$
as an isolation over a corresponding polytope~$P(\mathcal{B})$, as follows.
For a set $S \subseteq E$,
its indicator vector
$x^S := (x^S_e)_{e \in E}$ is defined as
$$x^S_e := \begin{cases}
1, & \text{if } e \in S,\\
0, & \text{otherwise.}
\end{cases}
$$
For any family of sets $\mathcal{B} \subseteq \powerset{E}$,
the polytope $P(\mathcal{B}) \subseteq \mathbb{R}^m$ is defined as
the convex hull of the indicator vectors of the sets in $\mathcal{B}$,
i.e.,
$$P(\mathcal{B}) := \conv \set{x^S}{S \in \mathcal{B}}.$$
Note that $P(\mathcal{B})$ is contained in the $m$-dimensional unit hypercube.
The isolation question for a family $\mathcal{B}$ is equivalent to
constructing a weight vector $w \in \mathbb{Z}^E$ such that $\langle w, x \rangle$ has a unique minimum over $P(\mathcal{B})$.
The property we need for our isolation approach is in terms of total unimodularity of a matrix.
\begin{definition}[Totally unimodular matrix]
A matrix $A \in \mathbb{R}^{n \times m}$ is said to be \emph{totally unimodular (TU)}, if every square
submatrix has determinant~$0$ or~$\pm 1$.
\end{definition}
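For intuition, total unimodularity of a small matrix can be checked directly from the definition. The brute-force sketch below runs in exponential time and is meant purely as an illustration (a polynomial-time test exists via Seymour's decomposition, discussed later).
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    # Check every square submatrix for determinant in {-1, 0, 1}.
    n, m = A.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# The incidence matrix of a directed graph is a classical TU example:
# rows are vertices, columns are arcs, +1 at the tail, -1 at the head.
A = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]], dtype=float)
print(is_totally_unimodular(A))                      # True
print(is_totally_unimodular(np.array([[1., 1.],
                                      [1., -1.]])))  # False (det = -2)
\end{verbatim}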
\noindent Our main theorem gives an efficient quasi-polynomial isolation for a family $\mathcal{B}$
when each face of the polytope $P(\mathcal{B})$ lies in the affine space defined by a TU matrix.
\begin{theorem}[{\bf Main Result}]
\label{thm:TUMisolation}
Let $E$ be a set with $\abs{E}=m$.
Consider a class $\mathcal{C}$ of families $\mathcal{B} \subseteq 2^E$ that have the following property:
for any face~$F$ of the polytope~$P(\mathcal{B})$,
there exists a TU matrix~$A_F \in \mathbb{R}^{n \times m}$ such that the affine space spanned by~$F$ is given by $A_Fx = b_F$
for some $b_F \in \mathbb{R}^n$.
We can construct a set $W$ of $m^{O(\log m)}$ weight assignments on~$E$ with weights bounded by $m^{O(\log m)}$
such that for any family~$\mathcal{B}$ in the class $\mathcal{C}$, one of the weight assignments in~$W$ is isolating.
\end{theorem}
\subsection{Short vectors in lattices associated to polytopes}
\label{sec:polytopeLattice}
Our starting point towards proving \cref{thm:TUMisolation}
is a reformulation of the isolation approach for bipartite perfect matching and matroid intersection~\cite{FGT16,GT17}.
For a set~$E$ and a family $\mathcal{B} \subseteq 2^{E}$,
we define a lattice corresponding to each face of the polytope~$P(\mathcal{B})$.
The isolation approach works when this lattice has a small number of near-shortest vectors.
For any face $F$ of $P(\mathcal{B})$, consider the lattice of all integral vectors parallel to~$F$,
$$L_F := \set{ v \in \mathbb{Z}^E }{ v = \alpha (x_1-x_2) \text{ for some } x_1,x_2 \in F \text{ and } \alpha \in \mathbb{R} }.$$
The length of the shortest nonzero vector of a lattice $L$ is denoted by
$$\lambda(L) := \min \set{ \norm{v} }{ 0 \neq v \in L},$$
where $\norm{\cdot}$ denotes the $\ell_1$-norm.
We prove that if, for all faces~$F$ of~$P(\mathcal{B})$ the number of near-shortest vectors in~$L_F$
is small, then we can efficiently isolate a vertex in~$P(\mathcal{B})$.
\begin{theorem}[{\bf Isolation via Lattices}]
\label{thm:isolation}
Let $E$ be a set with $|E| = m$ and let $\mathcal{B} \subseteq 2^{E}$ be a family for which there exists a constant $c>1$
such that for any face~$F$ of the polytope~$P(\mathcal{B})$,
we have
$$\left\lvert \set{v \in L_F }{ \norm{v} < c \, \lambda(L_F)} \right\rvert \leq m^{O(1)}.$$
Then one can construct a set of $m^{O(\log m)}$ weight functions with weights bounded by $m^{O(\log m)}$
such that at least one of them is isolating for~$\mathcal{B}$.
\end{theorem}
\noindent
The main ingredient of the proof of \cref{thm:TUMisolation} is to show that
the hypothesis of \cref{thm:isolation}
is true when the lattice $L_F$ is the set of all integral vectors in the nullspace of a TU matrix.
For any $n \times m$ matrix~$A$ we define a lattice:
$$L(A) := \set{v \in \mathbb{Z}^m }{ A v =0 }.$$
\begin{theorem}[{Near-shortest vectors in TU lattices}]
\label{thm:tum-LA}
For an $n \times m$ TU matrix $A$, let $\lambda := \lambda(L(A))$. Then
$$\left\lvert \set{ v \in L(A) }{ \norm{v} < \nicefrac{3}{2}\,\lambda } \right\rvert ~=~ O(m^5).$$
\end{theorem}
\noindent A similar statement can also be shown with any $\ell_p$-norm for $p\geq 2$, but with an appropriate multiplicative constant.
\cref{thm:tum-LA} together with \cref{thm:isolation} implies \cref{thm:TUMisolation}.
\begin{proof}[Proof of \cref{thm:TUMisolation}]
Let $F$ be a face of the polytope~$P(\mathcal{B})$ and let~$A_F$ be the TU matrix associated with~$F$.
Thus $A_F x = b_F$ defines the affine span of~$F$.
In other words,
the set of vectors parallel to~$F$ is precisely the solution set of $A_F x =0 $
and the lattice~$L_F$ is given by~$L(A_F)$.
\cref{thm:tum-LA} implies the hypothesis of \cref{thm:isolation}
for any $L_F = L(A_F)$,
when the matrix~$A_F$ is TU.
\end{proof}
\subsection{Near-shortest circuits in regular matroids}
The proof of \cref{thm:tum-LA} is combinatorial and uses the language and results from matroid theory.
We refer the reader to Section~\ref{sec:matroids} for preliminaries on matroids;
here we just recall a few basic definitions.
A matroid is said to be \emph{represented by a matrix}~$A$,
if its ground set is the column set of~$A$ and its independent sets
are the sets of linearly independent columns of~$A$.
A matroid represented by a TU matrix is said to be a \emph{regular matroid}.
A \emph{circuit} of a matroid is a minimal dependent set.
The following is one of our main results which gives a bound on the number of near-shortest circuits in a regular matroid,
which, in turn, implies \cref{thm:tum-LA}.
Instead of the circuit size, we consider the weight of a circuit and present a more general result.
\begin{theorem}[{Near-shortest circuits in regular matroids}]
\label{thm:regular}
Let $M=(E,\mathcal{I})$ be a regular matroid with $m = \abs{E} \geq 2$
and let $w\colon E \to \mathbb{N}$ be a weight function.
Suppose~$M$ does not have any circuit~$C$ with $w(C)< r$ for some number $r$.
Then
\[
\abs{\set{C}{C \text{ circuit in $M$ and } w(C) < \nicefrac{3r}{2}}} ~\leq~ 240\, m^{5}.
\]
\end{theorem}
\begin{remark} An extension of this result would be to give a polynomial bound on
the number of circuits of weight at most $\alpha r$
for any constant $\alpha$. Our current proof technique does not extend to this setting.
\end{remark}
\section{Isolation via the Polytope Lattices: Proof of \cref{thm:isolation}}
\label{sec:isolation}
This section is dedicated to a proof of \cref{thm:isolation}.
That is, we give a construction of an isolating weight assignment for a family $\mathcal{B} \subseteq \powerset{E}$
assuming that for each face~$F$ of the corresponding polytope~$P(\mathcal{B})$,
the lattice~$L_F$ has a small number of near-shortest vectors.
First, let us see how the isolation question for a family~$\mathcal{B}$ translates in the polytope setting.
For any weight function $w \colon E \to \mathbb{Z}$,
we view~$w$ as a vector in $\mathbb{Z}^E$ and consider the function $\langle w,x \rangle$ over the points in~$P(\mathcal{B})$.
Note that $ \langle w , x^B \rangle = w(B)$, for any $B \subseteq E$.
Thus,
a weight function $w \colon E \to \mathbb{Z}$ is isolating for a family $\mathcal{B}$ if and only if
$\langle w,x \rangle$ has a unique minimum over the polytope~$P(\mathcal{B})$.
Observe that for any $w \colon E \to \mathbb{Z}$, the points that minimize $\langle w, x \rangle $ in $P(\mathcal{B})$
will form a face of the polytope~$P(\mathcal{B})$.
The idea is to build the isolating weight function in rounds.
In every round,
we slightly modify the current weight function to get a smaller minimizing face.
Our goal is to significantly reduce the dimension of the minimizing face in every round.
We stop when we reach a zero-dimensional face, i.e., we have a unique minimum weight point in~$P(\mathcal{B})$.
The following claim asserts that if we modify the current weight function on a small scale,
then the new minimizing face will be a subset of the current minimizing face.
In the following, we will denote the size of the set $E$ by~$m$.
\begin{claim}
\label{cla:subface}
Let $w\colon E \to \mathbb{Z}$ be a weight function and~$F$ be the face of~$P(\mathcal{B})$ that minimizes~$w$.
Let $w'\colon E \to \{0,1,\dots, N-1\}$ be another weight function
and let~$F'$ be the face that minimizes the combined weight function $mN \, w+ w'$.
Then $F'\subseteq F$.
\end{claim}
\begin{proof}
Consider any vertex $x \in F'$. We show that $x \in F$.
By definition of~$F'$, for any vertex $y \in P(\mathcal{B})$ we have
$$\langle mN \, w+ w' , x \rangle \leq \langle mN \, w+ w' , y \rangle.$$
In other words,
\begin{equation}
\label{eq:w'}
\langle mN \, w+ w', x -y \rangle \leq 0 .
\end{equation}
Since $x$ and $y$ are vertices of $P(\mathcal{B})$, we have $x, y \in \{0,1\}^m$.
Thus, $\abs{ \langle w' , x-y \rangle} < mN.$
On the other hand, if $\abs{\langle m N\, w , x-y \rangle }$ is nonzero then it is at least $mN$ and thus dominates $\abs{\langle w' , x-y \rangle}$.
Hence, for (\ref{eq:w'}) to hold, it must be that
$$\langle m N \, w , x-y \rangle \leq 0.$$
It follows that $\langle w,x \rangle \leq \langle w, y \rangle$,
and therefore $x \in F$.
\end{proof}
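The claim can be checked numerically on a small hypothetical instance; the sketch below verifies the subset relation between the two sets of minimizers over all of $\{0,1\}^m$, which contains the vertices of any $P(\mathcal{B})$.
\begin{verbatim}
import numpy as np
from itertools import product

m, N = 4, 5
rng = np.random.default_rng(0)
w  = rng.integers(-3, 4, size=m)        # current weight function
wp = rng.integers(0, N, size=m)         # refinement w' in {0,...,N-1}^E
pts = np.array(list(product([0, 1], repeat=m)))

def minimizers(c):
    vals = pts @ c
    return {tuple(p) for p, v in zip(pts, vals) if v == vals.min()}

F_old = minimizers(w)
F_new = minimizers(m * N * w + wp)
print(F_new <= F_old)   # True: the new minimizing face lies inside the old
\end{verbatim}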
\noindent
Thus, in each round, we will add a new weight function to the current function using a smaller scale
and try to get a sub-face with significantly smaller dimension.
Henceforth, $N$ will be a sufficiently large number bounded by $\mathrm{poly}(m)$.
The following claim gives a way to go to a smaller face.
\begin{claim}
\label{cla:parallel}
Let $F$ be the face of $P(\mathcal{B})$ minimizing~$\innerprod{w}{x}$ and
let $v \in L_F$.
Then $\innerprod{w}{v} =0$.
\end{claim}
\begin{proof}
Since $v \in L_F$,
we have
$v = \alpha(x_1 - x_2)$, for some $x_1,x_2 \in F$ and $\alpha \in \mathbb{R}$.
As
$x_1, x_2 \in F$,
we have $\langle w, x_1 \rangle = \langle w, x_2 \rangle$. The claim follows.
\end{proof}
\noindent
Now, let $F_0$ be the face that minimizes the current weight function $w_0$.
Let $v$ be in $L_{F_0}$.
Choose a new weight function $w' \in \{0,1,\dots, N-1\}^E$ such that $\innerprod{w'}{v} \neq 0.$
Let $w_1 := m N\, w_0 + w'$ and let $F_1$ be the face that minimizes~$w_1$.
Clearly, $\innerprod{w_1}{v} \neq 0$ and thus, by \cref{cla:parallel}, $v \not\in L_{F_1}$.
This implies that $F_1$ is strictly contained in $F_0$.
To ensure that $F_1$ is \emph{significantly} smaller than~$F_0$,
we choose many vectors in $L_{F_0}$, say $v_1,v_2,\dots, v_k$,
and construct a weight vector $w'$ such that for all $i \in [k]$, we have $\innerprod{w'}{v_i} \neq 0 $.
The following well-known lemma actually constructs a list of weight vectors
such that one of them has the desired property (see \cite[Lemma 2]{FKS84}).
\begin{lemma}
\label{lem:weights}
Given $m,k,t$, let $q = mk \log t$.
In time $\mathrm{poly}(q)$ one can construct a
set of weight vectors $w_1,w_2,\dots, w_q \in \{0,1,2, \dots, q\}^m$
such that for any set of nonzero vectors $v_1,v_2, \dots, v_k $ in $\{-(t-1), \dots, 0,1,\dots, t-1\}^m$
there exists a $j \in [q]$ such that for all $i \in [k]$ we have
$\langle w_j , v_i \rangle \neq 0$.
\end{lemma}
\begin{proof}
First define $w := (1,t, t^2, \dots, t^{m-1})$.
Clearly, $\langle w , v_i \rangle \neq 0$ for each $i$, because each coordinate of $v_i$ is less than~$t$ in absolute value.
To get a weight vector with small coordinates, we go modulo small numbers.
We consider the following weight vectors $w_j$ for $1 \leq j \leq q$:
$$ w_j := w \bmod j .$$
We claim that this set of weight vectors has the desired property.
We know that
$$W = \prod_{i=1}^k \langle w , v_i \rangle \neq 0.$$
Note that the product $W$ is bounded by $t^{mk}$.
On the other hand, it is known that
$\lcm(2,3,\dots, q) > 2^q = t^{mk}$ for all $q \geq 7$ \cite{Nai82}.
Thus, there must exist a $2 \leq j \leq q$ such that $j$ does not divide $W$.
In other words, for all $i \in [k]$
$$\langle w , v_i \rangle \not\equiv 0 \pmod{j}$$
which is the desired property.
\end{proof}
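A sketch of this construction on a tiny hypothetical instance: build $w=(1,t,t^2,\dots,t^{m-1})$ and scan $j=2,\dots,q$ for a modulus under which all inner products stay nonzero.
\begin{verbatim}
import math

def isolating_modulus(vs, t, q):
    m = len(vs[0])
    w = [t ** i for i in range(m)]      # w = (1, t, t^2, ..., t^(m-1))
    for j in range(2, q + 1):
        wj = [x % j for x in w]         # candidate weight vector w mod j
        if all(sum(a * b for a, b in zip(wj, v)) != 0 for v in vs):
            return j, wj
    return None

# Hypothetical nonzero vectors with entries in {-2,...,2}, so t = 3.
vs = [(1, -1, 0, 2), (0, 2, -1, -1), (2, 0, 0, -1)]
m, k, t = 4, len(vs), 3
q = m * k * math.ceil(math.log2(t))
print(isolating_modulus(vs, t, q))      # here j = 4 already works
\end{verbatim}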
\noindent
There are two things to note about this lemma: (i) It is black-box in the sense that
we do not need to know the set of vectors $\{v_1,v_2,\dots, v_k\}$. (ii)
We do not know a priori which function will work in the given set of functions. So, one has to
try all possibilities.
The lemma tells us that we can ensure that $\langle w' , v \rangle \neq 0$ for polynomially many vectors~$v$ whose
coordinates are polynomially bounded.
Below, we formally present the weight construction.
To prove \cref{thm:isolation},
let~$c$ be the constant in the assumption of the theorem.
Let $N = m^{O(1)}$ be a sufficiently large number and $p = \floor{\log_c (m+1)} $.
Let~$w_0\colon E \to \mathbb{Z}$ be a weight function such that $ \innerprod{w_0}{v} \neq 0$ for
all nonzero $v \in \mathbb{Z}^E$ with $\norm{v} < c$.
For $i = 1,2, \dots, p$, define
\begin{itemize}
\item[$F_{i-1}$:] the face of $P(\mathcal{B})$ minimizing $w_{i-1}$
\item[$w'_i$:] a weight vector in $\{0,1,\dots,N-1\}^E$ such that $\innerprod{w'_i}{v} \neq 0$ for
all nonzero $v \in L_{F_{i-1}}$ with $ \norm{v} < c^{i+1}$.
\item[$w_{i}$:] $m N w_{i-1} + w'_i$.
\end{itemize}
\noindent
Observe that $F_{i} \subseteq F_{i-1}$, for each~$i$ by \cref{cla:subface}.
Hence, also for the associated lattices we have $L_{F_{i}} \subseteq L_{F_{i-1}}$.
As we show in the next claim, the choice of $w'_i$ together with \cref{cla:parallel} ensures that there are no nonzero vectors in $L_{F_i}$ with norm less than $c^{i+1}$.
\begin{claim}
\label{cla:vci}
For $i =0,1,2, \dots, p$,
we have $\lambda(L_{F_i}) \geq c^{i+1}$.
\end{claim}
\begin{proof}
Consider a nonzero vector $v \in L_{F_i}$.
By \cref{cla:parallel}, we have
\begin{equation}
\innerprod{w_i}{v} = m N \innerprod{w_{i-1}}{v} + \innerprod{w'_i}{v} = 0.
\label{eq:wiv0}
\end{equation}
Since $v$ is in $L_{F_i}$, it is also in $L_{F_{i-1}}$ and
again by \cref{cla:parallel}, we have
$ \innerprod{w_{i-1}}{v} = 0$.
Together with~(\ref{eq:wiv0}) we conclude that
$
\innerprod{w'_i}{v} = 0.
$
By the definition of~$w'_{i}$,
this implies that
$\norm{v} \geq c^{i+1}$.
\end{proof}
\noindent
Finally we argue that $w_p$ is isolating.
\begin{claim}
\label{cla:Fp-point}
The face $F_p$ is a point.
\end{claim}
\begin{proof}
Let $y_1,y_2 \in F_p$ be vertices and thus belong to $\{0,1\}^m$.
Then $y_1-y_2 \in L_{F_p}$ and $\norm{y_1-y_2} \leq m < c^{p+1}$.
By \cref{cla:vci}, we have that $y_1-y_2$ must be zero, i.e., $y_1 = y_2$.
\end{proof}
\noindent
The following claim, which gives bounds on the number of weight vectors we need to try
and the weights involved, finishes the proof of \cref{thm:isolation}.
\begin{claim}
\label{cla:weight-bound}
The number of possible choices for $w_p$ such that one of them is isolating for~$\mathcal{B}$
is $m^{O(\log m)}$.
The weights in each such weight vector are bounded by $m^{O( \log m)}$.
\end{claim}
\begin{proof}
To bound the weights of~$w_p$, we bound $w'_i$ for each~$i$.
By \cref{cla:vci},
we have $\lambda(L_{F_{i-1}}) \geq c^{i}$, for each $1\leq i \leq p$.
The hypothesis of \cref{thm:isolation} implies
$$\left\lvert \set{v \in L_{F_{i-1}}}{\norm{v} < c^{i+1}} \right\rvert \leq m^{O(1)}.$$
Recall that we have to ensure $ \innerprod{w'_i}{v} \neq 0$ for all nonzero vectors~$v$ in the above set.
We apply \cref{lem:weights} with $k = m^{O(1)}$.
For parameter~$t$,
note that as $\norm{v} < c^{i+1} \leq c^{p+1} \leq c(m+1)$, each coordinate of $v$ is
less than $c(m+1)$
and therefore $t \leq c(m+1)$.
Thus,
we get $w'_i$ with weights bounded by~$m^{O(1)}$.
Therefore the weights in $w_p$ are bounded by $m^{O(p)} = m^{O( \log m)}$.
Recall that \cref{lem:weights} actually gives a set of $m^{O(1)}$ weight vectors for possible choices of $w'_i$
and one of them has the desired property. Thus, we try all possible combinations for each $w'_i$.
This gives us a set of $m^{O(\log m)}$ possible choices for $w_p$ such that one of them is isolating for~$\mathcal{B}$.
\end{proof}
\section{Number of Short Vectors in Lattices: Proof of \cref{thm:tum-LA}}
\label{sec:tum-regular}
In this section, we show that \cref{thm:tum-LA} follows from \cref{thm:regular}.
We define a circuit of a matrix and show that
to prove \cref{thm:tum-LA}, it is sufficient to upper bound the number of near-shortest circuits of a TU matrix.
We argue that this, in turn, is implied by a bound on the number of near-shortest circuits of a regular matroid.
Just as a circuit of a matroid is a minimal dependent set, a circuit of a matrix is
a minimal linear dependency among its columns.
Recall that for an $n \times m$ matrix~$A$,
the lattice~$L(A)$ is defined as the set of integer vectors in its kernel,
$$L(A) := \set{v \in \mathbb{Z}^m }{ A v =0 }.$$
\begin{definition}[Circuit]
\label{def:tum-circuit}
For an $n\times m$ matrix~$A$, a vector $u \in L(A)$ is a \emph{circuit of}~$A$ if
\begin{itemize}
\item
there is no nonzero $v \in L(A)$ with $\mathrm{supp}(v) \subsetneq \mathrm{supp}(u)$, and
\item
$\gcd(u_1,u_2, \dots, u_m) =1$.
\end{itemize}
\end{definition}
\noindent
Note that if $u$ is a circuit of~$A$, then so is $-u$.
The following property of the circuits of a TU matrix is well known (see~\cite[Lemma 3.18]{Onn10}).
\begin{fact}
\label{fac:matrix-circuit}
Let $A$ be a TU matrix.
Then every circuit of~$A$ has its coordinates in $\{-1,0,1\}$.
\end{fact}
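The fact can be observed concretely by enumerating the circuits of a small TU matrix by brute force. The sketch below (illustration only; exponential time) recovers the unique circuit of the directed-triangle incidence matrix and confirms that its entries lie in $\{-1,0,1\}$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def support_is_circuit(A, cols):
    # The chosen columns are minimally linearly dependent.
    B = A[:, cols]
    k = len(cols)
    return (np.linalg.matrix_rank(B) == k - 1 and
            all(np.linalg.matrix_rank(np.delete(B, i, axis=1)) == k - 1
                for i in range(k)))

def circuit_vector(A, cols):
    # Kernel vector supported on cols, scaled so the smallest |entry| is 1.
    v = np.linalg.svd(A[:, cols])[2][-1]
    v = np.round(v / np.min(np.abs(v)))
    u = np.zeros(A.shape[1]); u[list(cols)] = v
    return u.astype(int)

# Incidence matrix of a directed triangle: arcs (1,2), (1,3), (2,3).
A = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]], dtype=float)
for k in range(1, A.shape[1] + 1):
    for cols in combinations(range(A.shape[1]), k):
        if support_is_circuit(A, cols):
            print(cols, circuit_vector(A, cols))  # entries in {-1, 0, 1}
\end{verbatim}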
\noindent
Now, we define a notion of conformality between two vectors.
\begin{definition}[Conformal \cite{Onn10}]
Let $u,v \in \mathbb{R}^m$.
We say that~$u$ is \emph{conformal to}~$v$, denoted by
$u \sqsubseteq v $,
if $u_iv_i \geq 0$ and $\abs{u_i} \leq \abs{v_i}$, for each $1\leq i \leq m$.
\end{definition}
\begin{observation}
\label{obs:conformal}
For vectors~$u$ and~$v$ with $u \sqsubseteq v$, we have
$\norm{v-u} = \norm{v} - \norm{u}$.
\end{observation}
\noindent
The following lemma follows from \cite[Lemma 3.19]{Onn10}.
\begin{lemma}
\label{lem:conformal-circuit}
Let $A$ be a TU matrix.
Then for any nonzero vector $v \in L(A)$, there is
a circuit~$u$ of~$A$ that is conformal to~$v$.
\end{lemma}
\noindent
We use the lemma to argue that any small enough vector in $L(A)$ must be a circuit.
\begin{lemma}
\label{lem:tum-small-circuit}
Let $A$ be a TU matrix and let $\lambda := \lambda(L(A))$.
Then any nonzero vector $v \in L(A)$ with $\norm{v} < 2 \lambda$ is a circuit of $A$.
\end{lemma}
\begin{proof}
Suppose $v \in L(A)$ is not a circuit of~$A$.
We show that $\norm{v} \geq 2 \lambda$.
By \cref{lem:conformal-circuit},
there is a circuit $u$ of $A$ with $u \sqsubseteq v$.
Since $v$ is not a circuit, $v-u \neq 0$.
Since both~$u$ and~$v-u$ are nonzero vectors in~$L(A)$, we have
$\norm{u}, \norm{v-u} \geq \lambda $.
By \cref{obs:conformal}, we have
$\norm{v} =\norm{v-u} + \norm{u} $ and thus,
we get that $\norm{v} \geq 2 \lambda$.
\end{proof}
\noindent
Recall that a matroid represented by a TU matrix is a regular matroid (see~\cref{thm:reg}).
The following lemma shows that
the two definitions of circuits,
1) for TU matrices and
2) for regular matroids,
coincide.
\begin{lemma}
\label{lem:circuits}
Let $M=(E,\mathcal{I})$ be a regular matroid, represented by a TU matrix~$A$.
Then there is a one to one correspondence between the circuits of~$M$
and the circuits of~$A$ (up to change of sign).
\end{lemma}
\begin{proof}
If $u \in \mathbb{R}^E$ is a circuit of~$A$,
then the columns in~$A$ corresponding to the set~$\mathrm{supp}(u)$
are minimally dependent. Thus, the set~$\mathrm{supp}(u)$ is a circuit of matroid~$M$.
In the other direction, a circuit $C \subseteq E$ of matroid~$M$ is a minimal dependent set.
Thus, the set of columns of~$A$ corresponding to~$C$ is minimally linearly dependent.
Hence, there are precisely two circuits $u , -u \in L(A)$ with their support being~$C$.
\end{proof}
\noindent
To prove \cref{thm:tum-LA}, let~$A$ be a TU matrix.
By \cref{lem:tum-small-circuit},
it suffices to bound the number of near-shortest circuits of~$A$.
By \cref{lem:circuits}, the circuits of~$A$ and the circuits
of the regular matroid~$M$ represented by~$A$, coincide.
Moreover, the size of a circuit of~$M$ is the same as the $\ell_1$-norm of the corresponding circuit of~$A$,
as a circuit of $A$ has its coordinates in $\{-1,0,1\}$ by \cref{fac:matrix-circuit}.
Now \cref{thm:tum-LA} follows from \cref{thm:regular}
when we define the weight of each element being~$1$.
\section{Proof Overview of \cref{thm:regular}}
\label{sec:overview}
\cref{thm:regular} states that for a regular matroid, the number of near-shortest circuits
-- circuits whose size is less than $\nicefrac{3}{2}$ times the shortest circuit size -- is polynomially bounded.
The starting point of the proof of this theorem is a remarkable result of Seymour~\cite{Sey80} which showed that every regular matroid
can be decomposed into a set of much simpler matroids.
Each of these building blocks either
belongs to the classes of graphic and cographic matroids -- the simplest and best-known examples of regular matroids --
or is a special 10-element matroid $R_{10}$ (see Section~\ref{sec:matroids} for the definitions).
One important consequence of Seymour's result is a polynomial time algorithm, the only one known, for testing
the total unimodularity of a matrix; see \cite{Sch86} (recall that a TU matrix represents a regular matroid).
Our strategy is to leverage Seymour's decomposition theorem in order to bound the number of circuits in a regular matroid.
\subsubsection*{Seymour's Theorem and a simple inductive approach}
Seymour's decomposition involves a sequence of binary operations on matroids, each of which is either a $1$-sum, a $2$-sum or a $3$-sum.
Formally, it states that for every regular matroid $M$, we can build a decomposition tree -- which is a binary rooted tree -- in which the root node is the matroid $M$,
every internal node is a $k$-sum of its two children for $k=1,2$, or $3$, and at the bottom we have
graphic, cographic and the $R_{10}$ matroids as the leaf nodes.
%
Note that the tree, in general, is not necessarily balanced and can have large depth (linear in the ground set size).
This suggests that to bound the number of near-shortest circuits in a regular matroid,
perhaps one can use the tree structure of its decomposition, starting from the leaf nodes and arguing, inductively,
all the way up to the root.
It is known that the number of near-shortest circuits in graphic and cographic matroids
is polynomially bounded.
This follows from the polynomial bounds on the number of near-shortest cycles of a graph \cite{Sub95}
and on the number of near min-cuts in a graph \cite{Kar93} (\cref{thm:graphic-cographic}).
The challenge is to show how to combine the information at an internal node.
The $k$-sum $M$ of two matroids $M_1$ and $M_2$ is defined in a way such that
each circuit of $M$ can be built from a combination of two circuits, one from $M_1$ and another from $M_2$.
Thus, if we have upper bounds for the number of circuits in $M_1$ and $M_2$,
their product will give a naive upper bound for the number of circuits in $M$.
Since there can be many $k$-sum operations involved, the naive product bound can quickly explode.
Hence, to keep a polynomial bound
we need to take a closer look at the $k$-sum operations.
\subsection*{$k$-sum operations}
\paragraph{\textbf{$1$-sum.}} A $1$-sum $M$ of two matroids $M_1$ and $M_2$ is simply their direct sum.
That is, the ground set of $M$ is the disjoint union of the ground sets of $M_1$ and $M_2$, and any circuit of $M$
is either a circuit of $M_1$ or a circuit of $M_2$.
The $2$-sum and $3$-sum are a bit more intricate.
It is known that the set of circuits of a matroid completely characterizes the matroid.
The $2$-sum and $3$-sum operations are defined by describing the set of circuits of the matroid obtained by the sum.
To get an intuition for the $2$-sum operation, we first describe it on two graphic matroids.
A graphic matroid is defined with respect to a graph, where a circuit is a simple cycle in the graph.
%
\paragraph{\textbf{$2$-sum on graphs.}} For two graphs $G_1$ and $G_2$, their $2$-sum $G = G_1 \oplus_2 G_2$ is any graph obtained by
identifying an edge $(u_1,v_1)$ in $G_1$ with an edge $(u_2,v_2)$ in $G_2$, that is,
identifying $u_1$ with $u_2$ and $v_1$ with $v_2$, and then deleting the edge $(u_1,v_1) = (u_2,v_2)$.
It would be instructive to see how a cycle in $G$, i.e., a circuit of the associated graphic matroid, looks like.
A cycle in $G$ is either a cycle in $G_1$ or in $G_2$
that avoids the edge $(u_1,v_1)=(u_2,v_2)$, or it is a union of a path $u_1 \rightsquigarrow v_1$ in $G_1$ and a
path $v_2 \rightsquigarrow u_2$ in $G_2$.
This last possibility is equivalent to taking a symmetric difference $C_1\triangle C_2$ of two cycles $C_1$ in $G_1$ and $C_2$ in $G_2$ such that
$C_1$ passes through $(u_1,v_1)$ and $C_2$ passes through $(u_2,v_2)$.
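For a concrete example, let $G_1$ and $G_2$ both be triangles and form their $2$-sum by identifying one edge of $G_1$ with one edge of $G_2$. The resulting graph is a $4$-cycle: each triangle contributes its path of length two between the endpoints of the identified edge, and this $4$-cycle is precisely the symmetric difference $C_1\triangle C_2$ of the two triangles, both of which contain the identified edge.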
\paragraph{\textbf{$2$-sum on matroids.}}
The $2$-sum $M_1\oplus_2 M_2$ of two matroids $M_1$ and $M_2$ is defined analogously.
The ground sets of $M_1$ and $M_2$, say $E_1$ and $E_2$ respectively,
have an element in common, say $e$ (this can be achieved by identifying an element from $E_1$ with an element from $E_2$).
The sum $M_1 \oplus_2 M_2$ is defined on the ground set $E =E_1 \triangle E_2$, the symmetric difference of the two given ground sets.
Any circuit of the sum $M_1\oplus_2 M_2$ is either
a circuit in $M_1$ or in $M_2$ that avoids the common element $e$,
or it is the symmetric difference $C_1 \triangle C_2$ of two circuits $C_1$ and $C_2$ of $M_1$ and $M_2$, respectively,
such that both $C_1$ and $C_2$ contain the common element $e$.
\paragraph{\textbf{$3$-sum on matroids.}} A $3$-sum is defined similarly.
A matroid $M$ is a $3$-sum of two matroids $M_1$ and $M_2$
if their ground sets $E_1$ and $E_2$
have a set $S$ of three elements in
common such that $S$ is a circuit in both the matroids
and the ground set of $M$ is the symmetric difference $E_1\triangle E_2$.
Moreover, a circuit of $M$ is either
a circuit in $M_1$ or in $M_2$ that avoids the common elements $S$,
or it is the symmetric difference $C_1 \triangle C_2$ of two circuits $C_1$ and $C_2$ of $M_1$ and $M_2$, respectively,
such that both $C_1$ and $C_2$ contain a common element $e$ from $S$ and no other element from $S$.
\subsection*{The inductive bound on the number of circuits}
Our proof is by strong induction on the size of the ground set.
\paragraph{Base case:} For a graphic or cographic matroid with a ground set of size $m$,
if its shortest circuit has size $r$ then the number of its circuits of size less than $\alpha r$ is at most $m^{4\alpha}$.
For the $R_{10}$ matroid, we present a constant upper bound on the number of circuits.
\paragraph{Induction hypothesis:}
For any regular matroid with a ground set of size $m< m_0$,
if its shortest circuit has size $r$,
then the number of its circuits of size less than $\alpha r$ is bounded by ${m}^{c\alpha}$ for some sufficiently large constant $c$.
\paragraph{Induction step:}
We prove the induction hypothesis for a regular matroid $M$ with a ground set of size $m_0$.
Let the minimum size of a circuit in $M$ be $r$.
We want to show a bound of $m_0^{c\alpha}$ on the number of circuits in $M$ of size less than $\alpha r$.
The main strategy here is as follows:
by Seymour's Theorem, we can write $M$ as a $k$-sum of two smaller regular matroids $M_1$ and $M_2$, with ground sets of size $m_1 <m_0$ and $m_2 <m_0$ respectively.
As the circuits of $M$ can be written as a symmetric differences of circuits of $M_1$ and $M_2$,
we derive an upper bound on the number of circuits of $M$
from the corresponding bounds for $M_1$ and $M_2$, which we get from the induction hypothesis.
\vspace{1mm}
\noindent \textbf{The $1$-sum case.}
In this case, any circuit of $M$ is either a circuit of $M_1$ or a circuit of $M_2$.
Hence, the number of circuits in $M$ of size less than $\alpha r$ is simply the sum of the number of circuits in $M_1$ and $M_2$ of size less than $\alpha r$.
Using the induction hypothesis, this sum is bounded by $m_1^{c\alpha}+ m_2^{c\alpha}$,
which is less than $m_0^{c\alpha}$ since
$m_0 = m_1+m_2$.
\vspace{1mm}
\noindent\textbf{The $2$-sum and $3$-sum cases.}
Let the set of common elements in the ground sets of $M_1$ and $M_2$ be $S$.
Note that $m_0 = m_1 +m_2 -2\abs{S}$.
Recall from the definition of a $k$-sum that any circuit $C$ of $M$ is of the form $C_1 \triangle C_2$, where $C_1 $ and $C_2$ are circuits in $M_1$ and $M_2$ respectively, such that either \textbf{(i)} one of them, say $C_1$, has no element from $S$ and the other one $C_2$ is empty
or \textbf{(ii)} they both contain exactly one common element from $S$.
We will refer to $C_1$ and $C_2$ as projections of $C$.
Note that $\abs{C_1},\abs{C_2} \leq \abs{C}$.
In particular, if circuit $C$ is of size less than $\alpha r$, then so are its projections
$C_1$ and $C_2$.
\vspace{1mm}
\noindent\textbf{An obstacle.}
The first step would be to bound the number of circuits $C_1$ of $M_1$ and $C_2$ of $M_2$ using the induction hypothesis.
However, we do not have a lower bound on the minimum size of a circuit in $M_1$ or $M_2$, which is required to use the induction hypothesis.
What we do know is that any circuit in $M_1$ or $M_2$ that does not involve elements from $S$ is also a circuit of $M$,
and thus, must have size at least $r$.
However, a circuit that involves elements from $S$ could be arbitrarily small.
We give different solutions for this obstacle in case \textbf{(i)} and case \textbf{(ii)} mentioned above.
\vspace{1mm}
\noindent\textbf{Case (i): deleting elements in $S$.}
Let us first
consider the circuits $C_1$ of $M_1$ that do not involve elements from $S$.
These circuits can be viewed as circuits of a new regular matroid $M_1\setminus S$ obtained by deleting the elements in $S$ from $M_1$.
Since we know that the minimum size of a circuit in $M_1\setminus S$ is $r$, we can apply the induction hypothesis
to get a bound of $(m_1-\abs{S})^{c\alpha}$ for the number of circuits $C_1$ of $M_1\setminus S$ of size less than $\alpha r$.
Summing this with a corresponding bound for $M_2\setminus S$ gives us a bound less than $m_0^{c \alpha}$ for the number of circuits of $M$ in case \textbf{(i)}.
\vspace{1mm}
\noindent\textbf{Case (ii): stronger induction hypothesis.}
The case when circuits $C_1$ and $C_2$ contain an element from $S$ turns out to be much harder.
For this case, we actually need to strengthen our induction hypothesis.
Let us assume that for a regular matroid of ground set size $m <m_0$, if the minimum size of a circuit that avoids a given element $\widetilde{e}$ is $r$,
then the number of circuits containing $\widetilde{e}$ and of size less than $\alpha r$ is bounded by $m^{c\alpha}$.
This statement will also be proved by induction, but we will come to its proof later.
Since we know that any circuit in $M_1$ (or $M_2$) that avoids elements from $S$ has size at least $r$,
we can use the above stronger inductive hypothesis to get a bound of $m_1^{c \alpha}$
on the number of circuits $C_1$ in $M_1$ containing a given element from $S$ and of size less than $\alpha r$.
Similarly, we get an analogous bound of $m_2^{c \alpha}$ for circuits $C_2$ of $M_2$.
Since $C$ can be a symmetric difference of any $C_1$ and $C_2$,
the product of these two bounds, that is, $(m_1m_2)^{c\alpha}$
bounds the number of circuits $C$ of $M$ of size less than $\alpha r$.
Unfortunately, this product can be much larger than $m_0^{c\alpha}$.
Note that this product bound on the number of circuits $C$ is not really tight, since $C_1$ and $C_2$ cannot both have size close to $\alpha r$ simultaneously.
This is because $C = C_1 \triangle C_2$ and the common element cancels, and thus, $\abs{C} = \abs{C_1} + \abs{C_2}-2$.
Hence,
a better approach is to consider different cases based on the sizes of $C_1$ and $C_2$.
\vspace{2mm}
\noindent\textbf{Number of circuits $C$ when one of its projections is small.}
We first consider the case when the size of $C_1$ is very small, i.e., close to zero.
In this case, the size of $C_2$ will be close to $\alpha r$ and we have to take the bound of $m_2^{c \alpha}$ on the number of such circuits $C_2$.
Now, if the number of circuits $C_1$ of small size is $N$, then we get a bound of $N m_2^{c \alpha}$ on the number of circuits $C$ of $M$ in this case.
Note that $N m_2^{c\alpha}$ is dominated by $m_0^{c\alpha}$ only when $N\leq 1$, as $m_2$ can be comparable to $m_0$.
While $N\leq 1$ does not always hold, we show something weaker which is true.
{\emph{Uniqueness of $C_1$.}}
We can show that for any element $s$ in the set of common elements $S$, there is at most one circuit $C_1$ of size less than $r/2$
that contains $s$ and no other element from $S$.
To see this, assume that there are two such circuits $C_1$ and $C'_1$.
It is known that the symmetric difference of two circuits of a binary matroid is a disjoint union of circuits of the matroid.
Thus, $C_1 \triangle C'_1$ will be a disjoint union of circuits of $M_1$.
Since $C_1 \triangle C'_1$ does not contain any element from $S$, it is also a disjoint union of circuits of $M$.
This would lead us to a contradiction because the size of $C_1 \triangle C'_1$ is less than $r$ and $M$ does not
have circuits of size less than $r$. This proves the uniqueness of $C_1$.
Our problem is still not solved since the set $S$ can have three elements in case of a $3$-sum,
and thus, there can be three possibilities for $C_1$ (i.e., $N=3$).
{\emph{Assigning weights to the elements.}}
To get around this problem, we use a new idea of considering matroid elements with weights.
For each element $s$ in $S$,
consider the unique circuit $C_1$ of size less than $r/2$ that contains $s$ (if it exists).
In the matroid $M_2$, we assign a weight of $\abs{C_1} -1$ to the element $s$.
The elements outside $S$ get weight $1$.
The weight of element $s \in S$ signifies that if a circuit $C_2$ of $M_2$ contains $s$ then it has to be summed up with the unique
circuit $C_1$ containing $s$, which adds a weight of $\abs{C_1}-1$.
Essentially, the circuits of the weighted matroid $M_2$ that have weight $\gamma$ will have a one-to-one correspondence with circuits $C = C_1 \triangle C_2$ of $M$ that have size $\gamma$ and have $ \abs{C_1} < r/2 $.
Hence, we can assume there are no circuits in the weighted matroid $M_2$ of weight less than $r$.
Thus, we can apply the induction hypothesis on $M_2$, but we need to further strengthen the hypothesis to a weighted version.
By this new induction hypothesis, we will get a bound of $m_2^{c \alpha }$
on the number of circuits of $M_2$ with weight less than $\alpha r$.
As mentioned above, this will bound the number of circuits $C =C_1 \triangle C_2 $ of $M$ with size less than $\alpha r$
and $\abs{C_1} < r/2$.
Note that the bound $m_2^{c \alpha }$ is smaller than the desired bound $m_0^{c\alpha}$.
\vspace{2mm}
\noindent\textbf{Number of circuits $C$ when none of its projections is small.}
It is relatively easier to handle the other case when $C_1$ has size at least $r/2$ (and less than $\alpha r$).
In this case, $C_2$ has size less than $(\alpha- \nicefrac{1}{2} )r$.
The bounds we get by the induction hypothesis for the number of circuits $C_1$ and $C_2$
are $m_1^{c \alpha}$ and $m_2^{c(\alpha- \nicefrac{1}{2} )}$ respectively.
Their product $m_1^{c \alpha} m_2^{c(\alpha- \nicefrac{1}{2} )}$ bounds the number of circuits $C$ in this case.
However, this product is not bounded by $m_0^{c\alpha}$.
{\emph{Stronger version of Seymour's Theorem.}}
To get a better bound we need another key idea.
Instead of Seymour's Theorem, we work with a stronger variant given by Truemper~\cite{Tru98}.
It states that any regular matroid can be written as a $k$-sum of two smaller regular matroids $M_1$ and $M_2$ for $k=1,2$ or $3$
such that one of them, say $M_1$, is a graphic, cographic or $R_{10}$ matroid.
The advantage of this stronger statement is that we can take a relatively smaller bound on the number of circuits of $M_1$,
which gives us more room for the inductive argument.
Formally, we know from above that when $M_1$ is a graphic or cographic matroid, the number of its circuits of size less than $\alpha r$ is at most $m_1^{4 \alpha}$.
One can choose the constant $c$ in our induction hypothesis to be sufficiently large so that the product
$m_1^{4\alpha} m_2^{c(\alpha- \nicefrac{1}{2} )}$ is bounded by $m_0^{c\alpha}$.
\subsection*{A stronger induction hypothesis}
To summarize, we work with an inductive hypothesis as follows:
If a regular matroid (with weights) has no circuits of weight less than $r$ that avoid a given set $R$ of elements
then the number of circuits of weight less than $\alpha r$ that contain the set $R$ is bounded by $m^{c\alpha}$.
As the base case, \cref{lem:graphic-set} shows this statement for the graphic and cographic case.
When we rerun the whole inductive argument with weights and with a fixed set $R$, we run into another issue.
It turns out that in the case when the size of $C_1$ is very small, our arguments above do not go through if $C_1$
has some elements from $R$.
To avoid such a situation we use yet another
strengthened version of Seymour's Theorem.
It says that any regular matroid with a given element $\widetilde{e}$ can be written as a $k$-sum of two smaller regular matroids $M_1$ and $M_2$,
such that $M_1$ is a graphic, cographic or $R_{10}$ matroid and $M_2$ is a regular matroid containing $\widetilde{e}$ (\cref{thm:decomp}).
When our $R$ is a single element set, say $\{\widetilde{e}\}$, we use this theorem to ensure that $M_1$, and thus $C_1$,
has no elements from $R$.
This rectifies the problem when $R$ has size $1$.
However, as we go deeper inside the induction, the set $R$ can grow in size.
Essentially, whenever $\alpha$ decreases by $ \nicefrac{1}{2} $ in the induction, the size of $R$ grows by $1$.
Thus, we take $\alpha$ to be $ \nicefrac{3}{2} $, which means that to reach $\alpha =1$ we need only one step of decrement,
and thus, the size of $R$ at most becomes $1$.
This is the reason our main theorem only deals with circuits of size less than $ \nicefrac{3}{2} $ times the smallest size.
In order to generalize this result for an arbitrary constant $\alpha$, a different method is required. This will be the subject of a follow-up work.
\paragraph{\bf Organization of the rest of the paper.}
The remainder of the paper is dedicated to the formal proof of \cref{thm:regular}.
We first give some matroid preliminaries and Seymour's decomposition theorem for regular matroids
in Section~\ref{sec:matroids}.
Finally, in Section~\ref{sec:ShortCircuits}, we prove \cref{thm:regular}.
\section{Matroids}\label{sec:matroids}
In Section~\ref{sec:matroidprelim},
we recall some basic definitions and well-known facts about matroids
(see, for example, \cite{Oxl06,Sch03B}).
In Section~\ref{sec:seymour}, we describe Seymour's decomposition theorem for regular matroids.
\subsection{Matroids preliminaries}\label{sec:matroidprelim}
We start with some basic definitions.
\begin{definition}[Matroid]
\label{def:matroid}
A pair $M=(E,\mathcal{I})$ is a \emph{matroid} if~$E$ is a finite set
and~$\mathcal{I}$ is a nonempty collection of subsets of~$E$ satisfying
\begin{enumerate}
\item if $I \in \mathcal{I}$ and $J \subseteq I$, then $J \in \mathcal{I}$,
\item if $I,J \in \mathcal{I}$ and $|I| < |J|$, then $I \cup \{z\} \in \mathcal{I}$, for some $z \in J \setminus I$.
\end{enumerate}
A subset $I$ of~$E$ is said to be {\em independent}, if~$I$ belongs to~$\mathcal{I}$ and {\em dependent} otherwise.
An inclusionwise maximal independent subset of~$E$ is a {\em base} of~$M$.
An inclusionwise minimal dependent set is a \emph{circuit} of~$M$.
\end{definition}
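\noindent
To make the definition concrete, here is a small brute-force Python sketch (our illustration, not part of the formal development): it verifies the two axioms of \cref{def:matroid} for the graphic matroid of a triangle and extracts its unique circuit as the inclusionwise minimal dependent set.
\begin{verbatim}
from itertools import combinations

E = ['a', 'b', 'c']               # edges of a triangle
# Independent sets of M(triangle): all forests, i.e. every edge
# subset except the full cycle {a, b, c}.
I = [frozenset(s) for k in range(3) for s in combinations(E, k)]

def is_independent(s):
    return frozenset(s) in I

# Axiom 1: every subset of an independent set is independent.
assert all(is_independent(t)
           for s in I for k in range(len(s) + 1)
           for t in combinations(s, k))

# Axiom 2: the exchange property.
for s in I:
    for t in I:
        if len(s) < len(t):
            assert any(is_independent(s | {z}) for z in t - s)

# Circuits: inclusionwise minimal dependent sets.
all_sets = [frozenset(s) for k in range(4)
            for s in combinations(E, k)]
dependent = [s for s in all_sets if s not in I]
circuits = [c for c in dependent
            if not any(d < c for d in dependent)]
print(circuits)                   # the single circuit {a, b, c}
\end{verbatim}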
\noindent
We define some special classes of matroids.
\begin{definition}[Linear, binary, and regular matroid]
A matroid $M=(E,\mathcal{I})$ with $m = |E|$ is \emph{linear} or \emph{representable} over some field~$\mathbb{F}$,
if there is a matrix $A \in \mathbb{F}^{n \times m}$, for some~$n$,
such that the collection of subsets of the columns of $A$ that are linearly independent over $\mathbb{F}$ is identical to $\mathcal{I}$.
A matroid $M$ is \emph{binary}, if $M$ is representable over $\rm{GF}(2)$.
A matroid $M$ is \emph{regular}, if $M$ is representable over every field.
\end{definition}
It is well known that regular matroids can be characterized in terms of totally unimodular (TU) matrices.
\begin{theorem}[See~\cite{Oxl06,Sch03B}]
\label{thm:reg}
A matroid $M$ is regular if, and only if, $M$ can be represented by a TU matrix over~$\mathbb{R}$.
\end{theorem}
Two special classes of regular matroids are graphic matroids
and their duals, cographic matroids.
\begin{definition}[Graphic and cographic matroid]
A matroid $M=(E,\mathcal{I})$ is said to be \emph{graphic}, if there is an undirected graph $G=(V,E)$ whose edges correspond to the
ground set~$E$ of $M$, such that $I \in \mathcal{I}$ if and only if~$I$ forms a forest in~$G$.
By~$M(G)$ we denote the graphic matroid corresponding to~$G$.
The \emph{dual of}~$M$ is the matroid $M^*=(E,\mathcal{I}^*)$ over the same ground set
such that a set $I \subseteq E$ is independent in~$M^*$
if and only if $E\setminus I$ contains a base set of~$M$.
A \emph{cographic matroid} is the dual of a graphic matroid.
\end{definition}
\noindent
For $G=(V,E)$, we can represent~$M(G)$ by the vertex-edge incidence matrix $A_G \in \{0,1\}^{V \times E}$ (over $GF(2)$),
$$A_G(v,e) = \begin{cases}
1 & \text{if } e \text{ is incident on } v, \\
0 & \text{otherwise. }
\end{cases} $$
\begin{definition}[Graph cut and cut-set]
For a graph~$G=(V,E)$, a \emph{cut} is a partition $(V_1,V_2)$ of $V$ into two disjoint subsets.
Any cut $(V_1,V_2)$ uniquely determines a \emph{cut-set}, the set of edges that have one endpoint in $V_1$ and the other in~$V_2$.
The \emph{size of a cut} is the number of edges in the corresponding cut-set.
A \emph{minimum cut} is one of minimum size.
\end{definition}
\begin{fact}
\label{fac:cographic-circuits}
Let $G=(V,E)$ be a graph.
\begin{enumerate}
\item
The circuits of the graphic matroid~$M(G)$ are exactly the simple cycles of~$G$.
\item
The circuits of the cographic matroid~$M^*(G)$ are exactly the inclusionwise minimal cut-sets of~$G$.
\end{enumerate}
\end{fact}
\noindent
The symmetric difference of two cycles in a graph is a disjoint union of cycles.
The analogous statement is true for binary matroids.
\begin{fact}
\label{fac:binary}
Let $M$ be binary.
If~$C_1$ and~$C_2$ are circuits of~$M$,
then the symmetric difference $C_1 \triangle C_2$ is a disjoint union of circuits.
\end{fact}
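\noindent
The following brute-force Python sketch (ours) checks \cref{fac:binary} exhaustively for the graphic matroid of $K_4$: it enumerates the circuits as the minimal nonempty edge sets whose $GF(2)$ incidence columns sum to zero, and then greedily splits every pairwise symmetric difference into disjoint circuits.
\begin{verbatim}
from itertools import combinations

# K4 on vertices 0..3; the ground set is the edge set.
edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]

def even_subgraph(S):
    # Over GF(2) the incidence columns of S sum to zero iff
    # every vertex has even degree in S.
    deg = [0, 0, 0, 0]
    for (u, v) in S:
        deg[u] ^= 1
        deg[v] ^= 1
    return not any(deg)

nonempty = [frozenset(s) for k in range(1, 7)
            for s in combinations(edges, k)]
even = [s for s in nonempty if even_subgraph(s)]
circuits = [c for c in even if not any(d < c for d in even)]

# Check the fact: peel circuits off every pairwise symmetric
# difference until nothing remains.
for c1, c2 in combinations(circuits, 2):
    rest = c1 ^ c2
    while rest:
        inside = [c for c in circuits if c <= rest]
        assert inside, "not a disjoint union of circuits"
        rest = rest - inside[0]
print(len(circuits), "circuits; fact verified for all pairs")
\end{verbatim}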
\noindent
To prove \cref{thm:regular},
we have to bound the number of short circuits in regular matroids.
In \cref{lem:graphic-set},
we start by providing such a bound for graphic and cographic matroids.
The lemma is a variant of the following theorem
that bounds the number of near-shortest cycles~\cite{Sub95}
and the number of near-minimum cuts~\cite{Kar93} in a graph.
\begin{theorem}
\label{thm:graphic-cographic}
Let $G=(V,E)$ be a graph with $m \geq 1$ edges and $\alpha \geq 2$.
\begin{enumerate}
\item If $G$ has no cycles of length at most~$r$, then the number of cycles in $G$ of length at most~$\alpha r/2$ is bounded by $(2m)^{\alpha}$~{\rm\cite{Sub95}}.
\item If $G$ has no cuts of size at most~$r$, then the number of cuts in $G$ of size at most~$\alpha r/2$ is bounded by~$m^{\alpha}$~{\rm\cite{Kar93}}.
\end{enumerate}
\end{theorem}
\noindent
We define two operations on matroids.
\begin{definition}[Deletion, contraction, minor]
Let $M=(E,\mathcal{I})$ be a matroid and $e \in E$.
The \emph{matroid obtained from $M$ by deleting $e$} is denoted by $M\setminus e $.
Its independent sets are given by the collection $\set{I \in \mathcal{I}}{ e \not\in I}$.
The \emph{matroid obtained by contracting $e$} is denoted by~$M/e$.
Its independent sets are given by the collection $\set{I \subseteq E\setminus \{e\} }{ I \cup \{e\} \in \mathcal{I}}$.
A matroid obtained after a series of deletion and contraction operations on~$M$ is called a \emph{minor of~$M$}.
\end{definition}
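\noindent
For binary matroids given by a $GF(2)$ matrix, both operations have a simple matrix description: deletion drops a column, and contraction pivots the column away first. The following Python sketch (ours; the pivoting rule is the standard one for binary representations) illustrates this on the triangle.
\begin{verbatim}
def delete(A, j):
    # M \ e: drop column j from the GF(2) representation.
    return [row[:j] + row[j+1:] for row in A]

def contract(A, j):
    # M / e: if column j is zero (a loop), contraction equals
    # deletion; otherwise pivot on a row with a 1 in column j,
    # eliminate the column, then drop that row and the column.
    pivot = next((i for i, row in enumerate(A) if row[j]), None)
    if pivot is None:
        return delete(A, j)
    A = [row[:] for row in A]
    for i, row in enumerate(A):
        if i != pivot and row[j]:
            A[i] = [(x + y) % 2 for x, y in zip(row, A[pivot])]
    del A[pivot]
    return delete(A, j)

# Example: the triangle, represented over GF(2).
A = [[1, 0, 1],
     [0, 1, 1]]
print(delete(A, 2))    # a path with two edges
print(contract(A, 2))  # two parallel edges
\end{verbatim}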
\begin{fact}
\label{fac:closed}
Let $M=(E,\mathcal{I})$ be a matroid and $e \in E$.
\begin{enumerate}
\item
The circuits of $M \setminus e$ are those circuits of $M$ that do not contain $e$.
\item
The classes of regular matroids, graphic matroids, and cographic matroids are
minor closed.
\end{enumerate}
\end{fact}
\noindent
For a characterization of regular matroids, we will need a specific matroid $R_{10}$, first introduced in~\cite{Bix77}.
It is a matroid with 10 elements in the ground set, represented over $GF(2)$ by the following matrix.
\[
\begin{pmatrix}
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\]
\begin{fact}[\cite{Sey80}]
\label{fac:R10}
Any matroid obtained by deleting some elements from $R_{10}$ is a graphic matroid.
\end{fact}
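\noindent
From the representation above, the circuits of $R_{10}$ can be enumerated by brute force, since over $GF(2)$ a circuit is exactly a minimal nonempty set of columns summing to zero. A short Python sketch (ours):
\begin{verbatim}
from itertools import combinations

R10 = [[1,1,0,0,1, 1,0,0,0,0],
       [1,1,1,0,0, 0,1,0,0,0],
       [0,1,1,1,0, 0,0,1,0,0],
       [0,0,1,1,1, 0,0,0,1,0],
       [1,0,0,1,1, 0,0,0,0,1]]

def sums_to_zero(cols):
    return all(sum(R10[i][j] for j in cols) % 2 == 0
               for i in range(5))

zero_sets = [frozenset(c) for k in range(1, 11)
             for c in combinations(range(10), k)
             if sums_to_zero(c)]
circuits = [c for c in zero_sets
            if not any(d < c for d in zero_sets)]
print(len(circuits), "circuits, sizes",
      sorted({len(c) for c in circuits}))
\end{verbatim}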
\subsection{Seymour's Theorem and its variants}
\label{sec:seymour}
The main ingredient for the proof of \cref{thm:regular} is a theorem of
Seymour~\cite[Theorem 14.3]{Sey80} that shows that every regular matroid can be constructed from piecing together
three kinds of matroids -- graphic matroids, cographic matroids, and the matroid $R_{10}$.
This piecing together is done via matroid operations called $1$-sum, $2$-sum and $3$-sum.
These operations are defined for binary matroids.
\begin{definition}[Sum of two matroids \cite{Sey80}, see also \cite{Oxl06}]
\label{def:sum}
Let $M_1 = (E_1, \mathcal{I}_1)$ and $M_2 = (E_2, \mathcal{I}_2)$ be two binary matroids, and let $S = E_1 \cap E_2$.
The \emph{sum of $M_1$ and $M_2$} is a matroid denoted by $M_1 \triangle M_2$.
It is defined over the ground set $E_1 \triangle E_2$
such that the circuits of $M_1 \triangle M_2$ are the minimal non-empty subsets of $E_1 \triangle E_2$ that
are of the form $C_1 \triangle C_2$,
where $C_i$ is a (possibly empty) disjoint union of circuits of $M_i$, for $i=1,2$.
\end{definition}
\noindent
From the characterization of the circuits of a matroid~\cite[Theorem 1.1.4]{Oxl06},
it can be verified that the sum $M_1 \triangle M_2$ is indeed a matroid.
We are only interested in three special sums:
\begin{definition}[$1,2,3$-sums]
Let $M_1 = (E_1, \mathcal{I}_1)$ and $M_2 = (E_2, \mathcal{I}_2)$ be two binary matroids and $E_1 \cap E_2 = S$.
Let $m_1 = \abs{E_1}$, $m_2 = \abs{E_2}$, and $s = \abs{S}$.
Furthermore, let $m_1, m_2 < |E_1 \triangle E_2| = m_1 + m_2 -2s$.
The sum $M_1 \triangle M_2$ is called a
\begin{itemize}
\item $1$-sum, if $s=0$,
\item $2$-sum, if $s=1 $ and $S$ is not a circuit of $M_1, M_2, M^*_1$ or $M^*_2$,
\item $3$-sum, if $s=3 $ and $S$ is a circuit of $M_1$ and $M_2$ that
does not contain a circuit of $M^*_1$ or~$M^*_2$.
\end{itemize}
\end{definition}
\noindent
Note that the condition $m_1, m_2 < m_1 + m_2 -2s$ implies that
\begin{equation}
m_1,m_2 \geq 2s+1
\label{eq:2s+1}
\end{equation}
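\noindent
To see the definition in action, consider the smallest interesting example: two triangles glued along a common element form a valid $2$-sum, and the sum is the graphic matroid of a $4$-cycle. The brute-force Python sketch below (our illustration; the names are ours) enumerates the circuits of the sum directly from \cref{def:sum}.
\begin{verbatim}
from itertools import combinations

# Two triangles glued along the common element 'e' (S = {'e'}):
# the circuits of M1 and M2.
C1 = [frozenset({'a', 'b', 'e'})]
C2 = [frozenset({'c', 'd', 'e'})]
ground = {'a', 'b', 'e'} ^ {'c', 'd', 'e'}   # E1 triangle E2

def disjoint_unions(circuits):
    # All disjoint unions of circuits, including the empty union.
    out = {frozenset()}
    for k in range(1, len(circuits) + 1):
        for combo in combinations(circuits, k):
            union = frozenset().union(*combo)
            if len(union) == sum(map(len, combo)):
                out.add(union)
    return out

cands = {u1 ^ u2 for u1 in disjoint_unions(C1)
                 for u2 in disjoint_unions(C2)
         if u1 ^ u2 and (u1 ^ u2) <= ground}
circuits = [c for c in cands if not any(d < c for d in cands)]
print(circuits)       # the single circuit {a, b, c, d}: a 4-cycle
\end{verbatim}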
\noindent
From the definition of $M_1\triangle M_2$ the following fact follows easily.
\begin{fact}
\label{cla:disjointCircuits}
Let $C_i$ be a disjoint union of circuits of~$M_i$, for $i=1,2$.
If $C_1 \triangle C_2$ is a subset of $E_1 \triangle E_2$ then it is a disjoint union of circuits of $M_1 \triangle M_2$.
\end{fact}
\noindent
In particular, it follows that for $i=1,2$, any circuit~$C_i$ of~$M_i$ with $C_i \subseteq E_i \setminus S$
is a circuit of~$M_1 \triangle M_2$.
Further, for $1$-sums, circuits are easy to characterize.
\begin{fact}[Circuits in a $1$-sum]
\label{fac:1sum-circuits}
If $M$ is a $1$-sum of~$M_1$ and~$M_2$ then any circuit of~$M$ is either a circuit of~$M_1$
or a circuit of~$M_2$.
\end{fact}
\noindent Thus, if one is interested in the number of circuits, one can assume that the given matroid is not a $1$-sum
of two smaller matroids.
\begin{definition}[Connected matroid]
\label{def:connected}
A matroid $M$ is \emph{connected} if it cannot be written as a $1$-sum of two smaller matroids.
\end{definition}
\noindent A characterization of circuits in a 2-sum or 3-sum is not as easy.
Seymour~\cite[Lemma 2.7]{Sey80} provides a unique representation
of the circuits for these cases.
\begin{lemma}[Circuits in a $2$- or $3$-sum, \cite{Sey80}]
\label{lem:3sum-circuits}
Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be the sets of circuits of
$M_1$ and $M_2$, respectively.
Let $M$ be a $2$- or $3$-sum of $M_1$ and $M_2$.
For $S = E_1 \cap E_2$, we have $\abs{S} = 1$ or $\abs{S} = 3$, respectively.
Then for any circuit~$C$ of~$M$, one of the following holds:
\begin{enumerate}
\item $C \in \mathcal{C}_1$ and $S \cap C = \emptyset$, or
\item $C \in \mathcal{C}_2$ and $S \cap C = \emptyset$, or
\item there exist unique $e \in S$, $C_1 \in \mathcal{C}_1$ and $C_2 \in \mathcal{C}_2$
such that
$$S \cap C_1 = S \cap C_2 = \{e\} \mbox{ and } C = C_1 \triangle C_2.$$
\end{enumerate}
\end{lemma}
\noindent
Seymour proved the following decomposition theorem for regular matroids.
\begin{theorem}[Seymour's Theorem, \cite{Sey80}]
\label{thm:Seymour}
Every regular matroid can be obtained by means of $1$-sums, $2$-sums and $3$-sums,
starting from matroids that are graphic, cographic or $R_{10}$.
\end{theorem}
\noindent
However, to prove \cref{thm:regular}, we need a
refined version of Seymour's Theorem that was proved by Truemper~\cite{Tru98}.
Seymour's Theorem decomposes a regular matroid into a sum of two smaller regular matroids.
Truemper showed that one of the two smaller regular matroids can be chosen
to be graphic, cographic, or the $R_{10}$ matroid.
The theorem we write here slightly differs from the one by Truemper~\cite[Lemma 11.3.18]{Tru98}.
A proof of \cref{thm:decomp} is presented in Appendix~\ref{sec:appendix-k-sums}.
\begin{theorem}[Truemper's decomposition for regular matroids, \cite{Tru98}]
\label{thm:decomp}
Let $M$ be a connected regular matroid, that is not graphic or cographic and is not isomorphic to $R_{10}$.
Let $\widetilde{e}$ be a fixed element of the ground set of $M$.
Then $M$ is a $2$-sum or $3$-sum of $M_1$ and $M_2$,
where $M_1$ is a graphic or cographic matroid, or a matroid isomorphic to $R_{10}$
and $M_2$ is a regular matroid that contains~$\widetilde{e}$.
\end{theorem}
\section{A Bound on the Number of Near-Shortest Circuits in Regular Matroids: Proof of \cref{thm:regular}}
\label{sec:ShortCircuits}
In this section, we prove our main technical tool:
in a regular matroid,
the number of circuits that have size close to a shortest circuit is polynomially bounded (\cref{thm:regular}).
The proof
argues along the decomposition provided by \cref{thm:decomp}.
First, we need to show
a bound on the number of circuits for
the two base cases -- graphic and cographic matroids.
\subsection{Base Case: Graphic and cographic matroids}
\label{sec:co-graphic}
We actually prove a lemma for graphic and cographic matroids that does more --
it gives an
upper bound on the number of circuits that contain a fixed element of the ground set.
For a weight function $w\colon E \to \mathbb{N}$ on the ground set,
the weight of any subset $C \subseteq E$ is defined as $w(C) :=\sum_{e \in C} w(e)$.
\begin{lemma}\label{lem:graphic-set}
Let $M=(E,\mathcal{I})$ be a graphic or cographic matroid, where $\abs{E} = m \geq 2$,
and $w\colon E \to \mathbb{N}$ be a weight function.
Let $R \subseteq E$ with $\abs{R} \leq 1$ (possibly empty) and
$r$ be a positive integer.
If there is no circuit~$C$ in~$M$ such that $w(C)< r$ and $C \cap R = \emptyset$,
then, for any integer $\alpha \geq 2$, the number of circuits~$C$ such that $R \subseteq C$ and $w(C) < \alpha r/2$
is at most $(2(m-\abs{R}))^{\alpha}$.
\end{lemma}
\begin{proof}
\textbf{Part 1: $M$ graphic}.
(See \cite{TK92,Sub95} for a similar argument as in this case.)
Let $G=(V,E)$ be the graph corresponding to the graphic matroid $M$.
By the assumption of the lemma, any cycle~$C$ in~$G$ such that $C \cap R = \emptyset$
has weight $w(C) \geq r$.
Consider a cycle~$C$ in~$G$ with $R \subseteq C$ and $ w(C) < \alpha r/2$.
Let the edge sequence of the cycle $C$ be $(e_1,e_2,e_3, \ldots, e_{q})$ such that
if $R$ is nonempty then $R = \{e_1\}$.
We choose $\alpha$ edges of the cycle $C$ as follows:
Let ${i_1} = 1$ and for $j =2,3, \dots, \alpha$,
define $i_j$ to be the least index greater than $i_{j-1}$ (if one exists) such that
\begin{equation}
\sum_{a=i_{j-1}+1}^{i_j} w(e_{a}) \geq r/2.
\label{eq:ijchoice}
\end{equation}
If such an index does not exist, then define $i_j=q$.
Removing the edges $e_{i_1},e_{i_2},\dots,e_{i_{\alpha}}$ from $C$ gives us
$\alpha$ paths: for $j=1,2,\dots,\alpha-1$
$$p_j := (e_{i_{j}+1}, e_{i_{j}+2}, \dots, e_{i_{j+1}-1}),$$
and
$$p_\alpha := (e_{i_{\alpha}+1}, e_{i_{\alpha}+2}, \dots, e_q).$$
Note that some of these paths might be empty.
By the choice of $i_j$ we know that $w(p_j) < r/2$ for $j=1,2,\dots,\alpha-1$.
Combining \eqref{eq:ijchoice} with the fact that $w(C) < \alpha r/2$, we obtain that $w(p_\alpha) < r/2$.
We associate the ordered tuple of oriented edges $(e_{i_1},e_{i_2},\dots,e_{i_\alpha})$ with the cycle $C$.
\begin{claim}
For two distinct cycles $C,C'$ in $G$, such that both contain $R$ and $w(C),w(C') < \alpha r/2$, the two associated tuples (defined as above) are different.
\end{claim}
\begin{proof}
For the sake of contradiction, assume that the associated tuples are the same for both cycles.
Thus, $C$ and $C'$ pass through $(e_{i_1},e_{i_2},\dots, e_{i_\alpha})$ with the same orientation of
these edges.
Further, there are
$\alpha$ paths connecting them, say $p_1,p_2,\dots, p_\alpha$ from $C$ and $p'_1,p'_2,\dots,p'_\alpha$ from $C'$.
Since $C$ and $C'$ are distinct, for at least one $j$, it must be that $p_j\neq p'_j$.
However, since the starting points and the end points of $p_j$ and $p'_j$ are the same, $p_j \cup p'_j$
contains a cycle $C''$.
Moreover, since $w(p_j),w(p'_j) < r/2$, we can deduce that
$w(C'') < r$.
Finally, since neither of $p_j$ and $p'_j$ contain $e_1$, we get $C'' \cap R = \emptyset$.
This is a contradiction.
\end{proof}
\noindent
Since each cycle $C$ with $w(C) < \alpha r/2$ and $R \subseteq C$ is associated with a different tuple, the number of such tuples
upper bounds the number of such cycles.
We bound the number of tuples depending on whether $R$ is empty or not.
\begin{itemize}
\item When $R$ is empty, the number of tuples of $\alpha$ oriented edges is at most $(2m)^{\alpha}$.
\item When $R = \{e_1\}$, the number of choices for the remaining $\alpha -1$ edges and their orientations is
at most $(2(m-1))^{\alpha-1}$.
\end{itemize}
\noindent
\textbf{Part 2: $M$ cographic}.
Let $G=(V,E)$ be the graph corresponding to the cographic matroid~$M$ and let $n=\abs{V}$.
Recall from \cref{fac:cographic-circuits} that circuits in cographic matroids are inclusionwise minimal cut-sets in~$G$.
By the assumption of the lemma,
any cut-set~$C$ in~$G$ with $R \cap C = \emptyset$ has weight $w(C) \geq r$.
Note that this implies that~$G$ is connected, and therefore $m \geq n-1$.
We want to give a bound on the number of cut-sets~$C \subseteq E$
such that $w(C) < \alpha r/2$ and $R \subseteq C$.
We argue similarly to the probabilistic construction of a minimum cut of Karger~\cite{Kar93}.
The basic idea is to contract randomly chosen edges.
\emph{Contraction of an edge} $e = (u,v)$ means
that all edges between~$u$ and~$v$ are deleted and then~$u$ is identified with~$v$.
Note that we get a multi-graph that way:
if there were two edges $(u,w)$ and $(v,w)$ before the contraction,
they become two parallel edges after identifying~$u$ and~$v$.
The contracted graph is denoted by~$G/e$.
The intuition behind contraction is that randomly chosen edges
are likely to avoid the edges of a minimum cut.
The following algorithm implements the idea.
It does~$k \leq n$ contractions in the first phase, and in the second phase it chooses a random cut
within the remaining nodes of the contracted graph that contains the edges of~$R$.
Note that any cut-set of the contracted graph is also a cut-set of the original graph.
\newcommand{\leftarrow}{\leftarrow}
\begin{tabbing}
xxx\=xxx\=xxx\=xxx\=xxx\=xxx\= \kill
{\sc Small Cut} $(G = (V,E),R,\alpha)$ \\[0.1cm]
\emph{Contraction}\\
1 \> {\bf Repeat} $k = n-\alpha-\abs{R}$ times\\
2 \> \> {\bf randomly choose} $e \in E \setminus R$ with probability $w(e)/w(E \setminus R)$\\
3 \> \> $G \leftarrow G/e$\\
4 \> \> $R \leftarrow R \cup \{\text{new parallel edges to the edges in } R\}$\\[1ex]
\emph{Selection}\\
5 \> Among all possible cut-sets $C$ in the obtained graph $G$ with $R \subseteq C$, \\
\> choose one uniformly at random and return it.
\end{tabbing}
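\noindent
For concreteness, the following Python sketch (our transcription, with our own naming; union-find bookkeeping replaces explicit multigraph contraction, and a connected input graph is assumed) implements both phases.
\begin{verbatim}
import random

def small_cut(edges, R, alpha):
    # edges: list of (u, v, weight); R: set of edge indices
    # (|R| <= 1) that the returned cut-set must contain.
    nodes = sorted({x for (u, v, _) in edges for x in (u, v)})
    parent = {x: x for x in nodes}

    def find(x):                       # union-find, path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def in_R(i):                       # parallel to the R-edge?
        if not R:
            return False
        j = next(iter(R))
        return ({find(edges[i][0]), find(edges[i][1])} ==
                {find(edges[j][0]), find(edges[j][1])})

    for _ in range(len(nodes) - alpha - len(R)):  # contraction
        live = [i for i, (u, v, _) in enumerate(edges)
                if find(u) != find(v) and not in_R(i)]
        i = random.choices(live,
                           weights=[edges[i][2] for i in live])[0]
        parent[find(edges[i][0])] = find(edges[i][1])

    supers = sorted({find(x) for x in nodes})  # alpha+|R| nodes
    if R:                                      # selection phase
        j = next(iter(R))
        fixed = find(edges[j][0])
        rest = [s for s in supers
                if s not in (fixed, find(edges[j][1]))]
    else:
        fixed, rest = supers[0], supers[1:]
    side = {fixed} | {s for s in rest if random.random() < 0.5}
    return {i for i, (u, v, _) in enumerate(edges)
            if (find(u) in side) != (find(v) in side)}
\end{verbatim}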
\noindent
Let~$C \subseteq E$ be a cut-set with $w(C) < \alpha r/2$ and $R \subseteq C$.
We want to give a lower bound on the probability that {\sc Small Cut} outputs~$C$.
Let $G_0 = G$ and $G_i = (V_i,E_i)$ be the graph after the $i$-th contraction,
for $i = 1,2, \dots,k$.
Note that~$G_i$ has $n_i = n-i$ nodes
since each contraction decreases the number of nodes by~$1$.
Let~$R_i$ denote the set~$R$ after the $i$-th contraction.
That is,
if $R = \{e_1\}$,
then~$R_i$ contains all edges parallel to~$e_1$ in~$G_i$.
If $R = \emptyset$, then also $R_i = \emptyset$.
Note that in either case $R_i \subseteq C$, as long as no edge of $C$ has been contracted up to iteration $i$.
Conditioned on the event that no edge in~$C$ has been contracted in iterations~1 to~$i$,
the probability that an edge from~$C$ is contracted in the $(i+1)$-th iteration is at most
$$ w(C\setminus R_i)/w(E_i \setminus R_i).$$
We know that $ w(C\setminus R_i) \leq w(C) < \alpha r/2$.
For a lower bound on $w(E_i \setminus R_i)$,
consider the graph~$G'_i$ obtained from~$G_i$ by contracting the edges in~$R_i$.
The number of nodes in~$G'_i$ will be $n'_i = n -i - \abs{R}$ and
its set of edges will be $E_i \setminus R_i$.
For any node~$v$ in~$G'_i$, consider the set~$\delta(v)$ of edges incident on~$v$ in~$G'_i$.
The set~$\delta(v)$ forms a cut-set in~$G'_i$ and also in~$G$.
Note that $\delta(v) \cap R = \emptyset$, as the edge in~$R$ has been contracted in~$G'_i$.
Thus, we can deduce that $w(\delta(v)) \geq r$.
By summing this up over all nodes in~$G'_i$, and noting that each edge is counted twice in this sum, we obtain
$$w(E_i \setminus R_i) \geq r\, n'_i/2 = r\, (n-i-\abs{R})/2.$$
Therefore the probability that an edge from~$C$ is contracted in the $(i+1)$-th iteration is
$$\leq~ \frac{w(C \setminus R_i)}{w(E_i \setminus R_i)}
~\leq~ \frac{\alpha\, r/2}{r\, (n-i-\abs{R})/2}
~=~ \frac{\alpha}{n-i-\abs{R}}.$$
This bound becomes greater than~$1$, when $i > n-\alpha-\abs{R}$.
This is the reason why we stop the contraction process after
$k = n-\alpha-\abs{R}$ iterations.
The probability that no edge from~$C$ is contracted in any of the rounds is
\begin{eqnarray*}
&\geq& \prod_{i=0}^{k-1} \left( 1-\frac{\alpha}{n-i-\abs{R}} \right)\\
&=& \prod_{i=0}^{k-1} \left( 1-\frac{\alpha}{k + \alpha-i} \right)\\
&=& \prod_{i=0}^{k-1} \frac{k-i}{k + \alpha-i}\\
&=& \frac{1}{{{k+\alpha} \choose k}}
\\
&=& \frac{1}{{{n-\abs{R}} \choose \alpha}}.
\end{eqnarray*}
After $n-\alpha-\abs{R}$ contractions we are left with $\alpha+\abs{R}$ nodes.
We claim that the number of possible cut-sets on these nodes that contain~$R$ is~$2^{\alpha-1}$.
In the case $R = \emptyset$, the number of partitions of $\alpha$ nodes into two sets is clearly~$2^{\alpha-1}$.
When $R = \{e_1\}$, then the number of partitions of $\alpha+1$ nodes,
such that the endpoints of~$e_1$ are in different parts, is again~$2^{\alpha-1}$.
We choose one of these cuts randomly.
%
Thus, the probability that $C$ survives the \emph{contraction} process and is also chosen in the
\emph{selection} phase is
at least
$$ \frac{1}{2^{\alpha-1} {{n-\abs{R}} \choose \alpha} } \geq \frac{1}{({n-\abs{R}})^\alpha}.$$
Note that in the end we get exactly one cut-set.
Thus, the number of cut-sets $C$ of weight $< \alpha r/2$ and $R \subseteq C$
must be at most $(n-\abs{R})^{\alpha}$, which is bounded by $(2(m-\abs{R}))^\alpha$
because $m \geq n-1$.
\end{proof}
\subsection{General regular matroids}
\label{sec:proof-3rby2}
In this section, we prove our main result about regular matroids.
\begin{theorem*}[\cref{thm:regular}]
Let $M=(E,\mathcal{I})$ be a regular matroid with $m = \abs{E} \geq 2$
and $w\colon E \to \mathbb{N}$ be a weight function.
Suppose~$M$ does not have any circuit~$C$ such that $w(C)< r$, for some number~$r$.
Then
\[
\abs{\set{C}{C \text{ circuit in $M$ and } w(C) < 3r/2}} ~\leq~ 240\, m^{5}.
\]
\end{theorem*}
\begin{proof}
The proof is by induction on~$m$, the size of the ground set.
For the base case, let $m \leq 10$.
There are at most $2^{m}$ circuits in $M$.
This number is bounded by $240\, m^{5}$, for any $2 \leq m \leq 10$.
For the inductive step,
let~$M = (E,{\mathcal I})$ be a regular matroid with $\abs{E} = m > 10$
and assume that the theorem holds for all smaller regular matroids.
Note that~$M$ cannot be~$R_{10}$ since $m>10$.
We can also assume that matroid~$M$ is neither graphic nor cographic,
otherwise the bound follows from \cref{lem:graphic-set}.
By \cref{thm:Seymour},
matroid~$M$ can be written as a 1-, 2-, or 3-sum of
two regular matroids~$M_1 = (E_1,{\mathcal I}_1)$ and~$M_2 = (E_2,{\mathcal I}_2)$.
We define
\begin{eqnarray*}
S &:=& E_1 \cap E_2,\\
s &:=& \abs{S},\\
m_i &:=& \abs{E_i}, \text{ for } i =1,2,\\
\mathcal{C}_i &:=& \set{C}{C \text{ is a circuit of } M_i}.
\end{eqnarray*}
In case that $M$ is the 1-sum of~$M_1$ and~$M_2$,
we have $S = \emptyset$, and therefore $m = m_1 + m_2$.
By \cref{fac:1sum-circuits},
the set of circuits of~$M$ is the union of the sets of circuits of~$M_1$ and~$M_2$.
From the induction hypothesis, we have that $M_i$ has at most $240\, m_i^5$ circuits of weight less than $3r/2$, for $i=1,2$.
For the number of such circuits in~$M$ we get the bound of
\[
240\, m_1^5 + 240\, m_2^5 \leq 240\, m^5 .
\]
This proves the theorem in case of a 1-sum.
Hence, in the following it remains to consider the case that~$M$ cannot be written as a 1-sum.
In other words,
we may assume that~$M$ is connected (\cref{def:connected}).
Now we can apply \cref{thm:decomp} and assume
that~$M$ is a $2$- or $3$-sum of~$M_1$ and~$M_2$,
where~$M_1$ is a graphic, cographic or the~$R_{10}$ matroid,
and~$M_2$ is a regular matroid.
We define for $i=1,2$ and $e \in S$
\begin{eqnarray*}
\mathcal{C}_{i,e} &:=& \set{C}{C \in \mathcal{C}_i \text{ and } C \cap S = \{e\}},\\
M'_i &:=& M_i \setminus S,\\
\mathcal{C}'_i &:=& \set{C}{C \text{ is a circuit of } M'_i}.
\end{eqnarray*}
By \cref{fac:closed,fac:R10},
matroid~$M'_1$ is graphic or cographic, and $M'_2$ is regular.
Recall from \cref{lem:3sum-circuits} that
any circuit~$C$ of~$M$ can be uniquely written as $C_1 \triangle C_2$ such that one of
the following holds:
\begin{itemize}
\item $C_1 =\emptyset$ and $C_2 \in \mathcal{C}'_2$.
\item $C_2 =\emptyset$ and $C_1 \in \mathcal{C}'_1$.
\item $C_1 \in \mathcal{C}_{1,e}$, and $C_2 \in \mathcal{C}_{2,e}$, for some $e \in S$.
\end{itemize}
Thus, we will view each circuit~$C$ of~$M$ as $C_1 \triangle C_2$
and consider cases based on
how the weight of~$C$ is distributed among~$C_1$ and~$C_2$.
Recall that the weight function~$w$ is defined on $E = E_1 \triangle E_2$.
We extend~$w$ to a function on $E_1 \cup E_2$ by defining
\[
w(e) = 0, \text{ for } e \in S.
\]
Now, for the desired upper bound,
we will divide the set of circuits of~$M$ of weight less than~$3r/2$ into three cases.
\begin{description}
\item[{\bf Case 1.}] $C_1 \in \mathcal{C}'_{1}$.
\item[{\bf Case 2.}] $w(C_1) < r/2$. This includes the case that $C_1 = \emptyset$.
\item[{\bf Case 3.}] $w(C_1) \geq r/2$ and $C_2 \neq \emptyset$.
\end{description}
In the following,
we will derive an upper bound for the number of circuits in each of the three cases.
Then the sum of these bounds will be an upper bound on the number of circuits in~$M$.
We will show that the sum is less than $240\, m^5$.
\subsection*{Case 1: $C_1 \in \mathcal{C}'_{1}$}
We have $C_2 = \emptyset$ and $C = C_1 \in \mathcal{C}'_1$.
That is, we need to bound the number of circuits of $M'_1$.
Recall that any circuit of $M'_1$ is also a circuit of $M$.
Hence, we know there is no circuit $C_1$ in $M'_1$ with $w(C_1) < r$.
Since $M'_1$ is graphic or cographic, from \cref{lem:graphic-set},
the number of circuits $C_1$ of $M'_1$ with $w(C_1) < 3r/2$
is at most
$(2(m_1-s))^{3}.$
Recall from (\ref{eq:2s+1}) that $m_1 \geq 2s+1$.
For any $m_1 \geq 2s+2$, one can verify that
$$(2(m_1-s))^{3} \leq 240\,(m_1-2s)^{5} =: T_0.$$
On the other hand, when $m_1 = 2s+1$,
the number of circuits can be at most $2^{m_1-s} \leq 2^4$,
which is again bounded by~$ T_0 $.
\subsection*{Case 2: $w(C_1) < r/2$}
The main reason for distinguishing Case~2 is that here~$C_1$ is uniquely determined.
\begin{claim}
\label{cla:unique}
For any $e \in S$,
there is at most one circuit $C_1 \in \mathcal{C}_{1,e}$ with
$w(C_1) < r/2$.
\end{claim}
\begin{proof}
For the sake of contradiction, assume that there are two circuits~$C_1, C'_1 \in \mathcal{C}_{1,e}$,
with $w(C_1), w(C'_1) < r/2$.
By \cref{fac:binary},
we know that $C_1 \triangle C'_1$ is a disjoint union of circuits in~$M_1$.
Note that $C_1 \cap S = C'_1 \cap S = \{e\}$,
and hence $(C_1 \triangle C'_1) \cap S = \emptyset$.
Thus, $C_1 \triangle C'_1$ is in fact a disjoint union of circuits in~$M$.
Let $\widetilde{C}$ be a subset of $C_1 \triangle C'_1$ that is a circuit.
For the weight of~$\widetilde{C}$ we have
$$
w(\widetilde{C}) \leq w(C_1 \triangle C'_1)
\leq w(C_1) + w(C'_1) < r/2+r/2 =r.$$
This is a contradiction because~$M$ has no circuit of weight less than~$r$.
\end{proof}
\noindent
Thus, as we will see, it suffices to bound the number of circuits $C_2$ in $M_2$.
Let $C^*_e$ be the unique choice of a circuit provided by \cref{cla:unique}
(if one exists) for element~$e \in S$.
For ease of notation,
we assume in the following that there is a~$C^*_e$ for every $e \in S$.
Otherwise we would delete any element $e\in S$ from~$M_2$ for which no~$ C^*_e$ exists,
and then would consider the resulting smaller matroid.
It might actually be that we thereby delete all of~$S$ from~$M_2$.
We define a weight function~$w'$ on~$E_2$ as follows:
$$
w'(e) := \begin{cases}
w(C^*_e), & \text{ if } e \in S, \\
w(e), & \text{ otherwise}.
\end{cases}
$$
We now have that any circuit~$C$ of Case~2 can be written as
$C^*_e \triangle C_2$, for some $e \in S$, or $C = C_2$ when $C_1 = \emptyset$.
Because~$C^*_e$ is unique,
the mapping $C \mapsto C_2$ is injective for circuits~$C$ of Case~2.
Moreover,
we have $w(C) = w'(C_2)$.
This follows from the definition in case that $C = C_2$.
In the other case, we have
\begin{equation}
w(C) = w({C^*_e \triangle C_2}) = w(C^*_e) + w(C_2) = w'(C_2).
\label{eq:ww'}
\end{equation}
For the equalities, recall that $w(e) = 0$ for $e\in S$.
We conclude that the number of circuits~$C_2$ in $M_2$ with $w'(C_2) < 3r/2$
is an upper bound on the number of Case 2 circuits~$C$ of~$M$ with $w(C) < 3r/2$.
Now, to get an upper bound on the number of circuits in $M_2$, we want to apply the induction hypothesis.
For that, we need the following claim.
\begin{claim}
\label{cla:nocircuit}
There is no circuit $C_2$ in $M_2$ with $w'(C_2) < r$.
\end{claim}
\begin{proof}
For the sake of contradiction let $C_2$ be such a circuit.
We show that there exists a circuit~$C'$ in~$M$
with $w(C') < r$.
This would contradict the assumption of the lemma.
Case (i): $C_2 \cap S = \emptyset$.
Then $C_2 \in \mathcal{C}'_2$ itself yields the contradiction
because it is a circuit of~$M$ and $w(C_2) = w'(C_2) < r$.
Case (ii): $C_2 \cap S = \{e\}$.
By \cref{cla:disjointCircuits},
the set $C_2 \triangle C^*_e$ is a disjoint union of circuits of~$M$.
Let $C' \subseteq C_2 \triangle C^*_e$ be a circuit of $M$.
Then, because $w(e)=0$, we have
$$
w(C') \leq w({C^*_e \triangle C_2}) = w(C^*_e) + w(C_2) = w'(C_2) < r.
$$
Case (iii): $C_2 \cap S = \{e_1,e_2\}$.
By \cref{cla:disjointCircuits},
similar as in case~(ii),
there is a set $C' \subseteq C_2 \triangle C^*_{e_1} \triangle C^*_{e_2}$ that is a circuit of~$M$.
%
Then, because $w(e_1)=w(e_2)=0$, we have
$$
w(C') \leq w(C_2 \triangle C^*_{e_1} \triangle C^*_{e_2}) \leq w(C_2) + w(C^*_{e_1})+ w(C^*_{e_2}) = w'(C_2) < r.
$$
Case (iv): $C_2 \cap S = \{e_1,e_2,e_3\}$.
%
Since $S$ is a circuit of $M_2$, it must be the case that $C_2 =S$.
%
Since $C^*_{e_1},C^*_{e_2},C^*_{e_3}$ and $S$ are all circuits of~$M_1$,
the set $ C^*_{e_1} \triangle C^*_{e_2} \triangle C^*_{e_3} \triangle S$
contains a circuit~$C'$ of~$M_1$.
%
Since $\{e_i\} = C^*_{e_i} \cap S$, for $i=1,2,3$, we know that $S \cap C' = \emptyset$.
Thus, $C' \in \mathcal{C}'_1$ is a circuit of~$M$.
Since $w(e_1) = w(e_2) = w(e_3) = 0$, we obtain that
$$w(C') \leq w(C^*_{e_1}) + w(C^*_{e_2})+ w(C^*_{e_3}) = w'(S) =w'(C_2) < r.$$
This proves the claim.
\end{proof}
\noindent
By \cref{cla:nocircuit},
we can apply the induction hypothesis for $M_2$ with the weight function~$w'$.
We get that the number of circuits~$C_2$ in $M_2$
with $w'(C_2) < 3 r/2$ is bounded by
$$
T_1 := 240 \, m_2^{5}.
$$
As mentioned above,
this is an upper bound on the number of circuits~$C$ in~$M$ with $w(C) < 3r/2$ in Case~2.
\subsection*{Case 3: $w(C_1) \geq r/2$}
Since $w(C) = w(C_1) + w(C_2) < 3r/2$,
we have
$w(C_2) < r$ in this case.
We also assume that $C_2 \neq \emptyset$.
Hence,
there is an $e \in S$ such that $C_1 \in \mathcal{C}_{1,e}$ and $C_2 \in \mathcal{C}_{2,e}$.
Let $T_{2}$ be an upper bound on the number of circuits $C_1 \in \mathcal{C}_{1,e}$ with
$w(C_1) < 3r/2$, for each $e \in S$.
Let $T_{3}$ be an upper bound on the number of circuits $C_2 \in \mathcal{C}_{2,e}$ with
$w(C_2) < r$, for each $e \in S$.
Because there are $s$ choices for the element $e \in S$,
the number of circuits $C = C_1 \triangle C_2$ with $w(C) < 3r/2$ in Case~3
will be at most
\begin{equation}
\label{eq:type2}
s \, T_{2} \, T_{3}.
\end{equation}
To get an upper bound on the number of circuits in $\mathcal{C}_{1,e}$ and $\mathcal{C}_{2,e}$,
consider two matroids $M_{1,e}$ and $M_{2,e}$.
These are obtained from $M_1$ and $M_2$, respectively, by deleting the elements in $S\setminus \{e\}$.
The ground set cardinalities of these two matroids are $m_1-s+1$ and $m_2-s+1$.
We know that for $i=1,2$,
any circuit~$C_i$ of~$M_{i,e}$ with $e \not\in C_i$ is in~$\mathcal{C}'_{i}$ and
hence, is a circuit of~$M$.
Therefore, there is no circuit~$C_i$ of~$M_{i,e}$ with $e \not\in C_i$ and $w(C_i) < r$.
Using this fact, we want to bound the number of circuits~$C_i$ of~$M_{i,e}$ with $e \in C_i$.
We start with $M_{1,e}$.
\begin{claim}\label{cla:T1}
An upper bound on the number of circuits~$C_1$ in~$M_{1,e}$ with $e \in C_1$ and
$w(C_1) < 3r/2$ is
\begin{equation}
\label{eq:T1}
T_2 := \min \{8(m_1-s)^{3}, 2^{m_1-s} \}
\end{equation}
\end{claim}
\begin{proof}
Recall that the decomposition of~$M$ was such that~$M_1$ is graphic, cographic or the~$R_{10}$ matroid.
Case (i). When $M_1$ is graphic or cographic,
the matroid~$M_{1,e}$ falls into the same class by \cref{fac:closed}.
Recall that the ground set of~$M_{1,e}$ has cardinality $m_1-s+1$.
In this case, we apply \cref{lem:graphic-set} to~$M_{1,e}$
with $R = \{e\}$ and $\alpha = 3$ and get a bound of
$ 8(m_1-s)^{3}.$
The number of circuits containing~$e$ is also trivially bounded by the number of all subsets
that contain~$e$,
which is $2^{m_1-s}$.
Thus, we get Equation~(\ref{eq:T1}).
Case (ii). When $M_1$ is the $R_{10}$ matroid, then the cardinality of the ground set of $M_{1,e}$, that is $m_1-s+1$, is at most~10.
In this case again, we use the trivial upper bound of $2^{m_1-s}$. One can verify that when $m_1-s+1 \leq 10$ then
$2^{m_1-s} \leq 8(m_1-s)^{3}$. Thus, we get Equation~(\ref{eq:T1}).
\end{proof}
\noindent
Next,
we want to bound the number of circuits~$C_2$ in~$M_{2,e}$
with $e \in C_2$ and $w(C_2) < r$.
This is done in \cref{lem:circuitsR} below,
where we get a bound of $T_3 := 48(m_2-s)^2$.
To finish Case~3,
we now have
\begin{eqnarray*}
T_2 &=& \min \{8(m_1-s)^{3}, 2^{m_1-s} \},\\
T_3 &=& 48(m_2-s)^2.
\end{eqnarray*}
By Equation (\ref{eq:type2}),
the number of circuits in Case~3 is bounded by~$s \, T_2 \, T_3$.
\begin{claim}
\label{cla:sT1}
For $s=1,3$ and $m_1 \geq 2s+1$,
$$ s \, T_2 \, T_3 \leq 2400\, (m_1-2s)^{3} \, (m_2-s)^{2}.$$
\end{claim}
\begin{proof}
We consider $s \, T_2$.
For $m_1-2s \geq 12$, we have
$$s \cdot 8(m_1-s)^{3} \leq 50 (m_1-2s)^{3}.$$
On the other hand, when $m_1-2s \leq 11 $,
$$s \cdot 2^{m_1-s} \leq 50 (m_1-2s)^{3}.$$
This proves the claim.
\end{proof}
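\noindent
Both case distinctions in this proof are finite checks; the following small Python sketch (ours) confirms them numerically for $t = m_1 - 2s$ up to $2000$, which covers the finite range $t \leq 11$ and, as a sanity check, a comfortable stretch of the regime $t \geq 12$ (where the inequality in fact holds for all $t$, since the ratio $(m_1-s)^3/(m_1-2s)^3$ decreases in $t$).
\begin{verbatim}
# Numeric sanity check for the two inequalities in the claim
# (t stands for m1 - 2s, so t >= 1 since m1 >= 2s + 1).
for s in (1, 3):
    for t in range(1, 2000):
        m1 = 2 * s + t
        if t <= 11:
            assert s * 2 ** (m1 - s) <= 50 * t ** 3, (s, t)
        else:
            assert s * 8 * (m1 - s) ** 3 <= 50 * t ** 3, (s, t)
print("verified for s in {1, 3} and m1 - 2s < 2000")
\end{verbatim}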
\subsection*{Summing up Cases 1, 2 and 3}
Finally we add the bounds on the number of circuits of Case~1,~2 and~3.
The total upper bound we get is
\begin{eqnarray*}
T_0 + T_1 + s \, T_2 \, T_3
&\leq& 240\, (m_1-2s)^{5} + 240 \, m_2^{5} + 240\, {5 \choose 2} (m_1-2s)^{3} (m_2-s)^{2} \\
&\leq& 240\, (m_2 + m_1-2s)^{5} \\
&\leq& 240 \, m^{5},
\end{eqnarray*}
where the second inequality uses $(m_2-s)^2 \leq m_2^2$ together with the binomial theorem, namely $x^{5} + {5 \choose 2}\, x^{3} y^{2} + y^{5} \leq (x+y)^{5}$ for $x,y \geq 0$, and the last one uses $m = m_1 + m_2 - 2s$.
This completes the proof of \cref{thm:regular},
except for the bound on~$T_3$ that we show in \cref{lem:circuitsR}.
\end{proof}
\noindent
Now we move on to prove \cref{lem:circuitsR}, which completes the proof of \cref{thm:regular}.
The lemma is similar to \cref{thm:regular}, but differs in two aspects:
(i)
we want to count circuits up to a smaller weight bound, that is, $r$, and
(ii)
we have a weaker assumption that there is no circuit of weight less than~$r$
that does not contain a fixed element~$\widetilde{e}$.
\begin{lemma}
Let $M=(E,\mathcal{I})$ be a connected, regular matroid with ground set size $m \geq 2$
and $w\colon E \to \mathbb{N}$ be a weight function on $E$.
Let $r$ be a positive integer and let $\widetilde{e} \in E$ be any fixed element of the ground set.
Assume that there is no circuit $C$ in $M$ such that $\widetilde{e} \not\in C$ and $w(C)< r$.
Then,
the number of circuits $C$ in $M$ such that $\widetilde{e} \in C$ and $w(C) < r$
is bounded by $48(m-1)^2$.
\label{lem:circuitsR}
\end{lemma}
\begin{proof}
We closely follow the proof of \cref{thm:regular}.
We proceed again by induction on~$m$, the size of the ground set~$E$.
For the base case, let $m \leq 10$.
There are at most $2^{m-1}$ circuits that contain $\widetilde{e}$.
This number is bounded by $48(m-1)^2$, for any $2 \leq m \leq 10$.
For the inductive step,
let~$M = (E,{\mathcal I})$ be a regular matroid with $\abs{E} = m > 10$
and assume that the lemma holds for all smaller regular matroids.
Since $m>10$, matroid~$M$ cannot be~$R_{10}$.
If~$M$ is graphic or cographic,
then the bound of the lemma follows from \cref{lem:graphic-set}.
Thus, we may assume that~$M$ is neither graphic nor cographic.
By \cref{thm:Seymour},
matroid~$M$ can be written as a 1-, 2-, or 3-sum of
two regular matroids~$M_1 = (E_1,{\mathcal I}_1)$ and~$M_2 = (E_2,{\mathcal I}_2)$.
We use the same notation as in the proof of \cref{thm:regular}:
\begin{eqnarray*}
S &=& E_1 \cap E_2,\\
s &=& \abs{S},\\
m_i &=& \abs{E_i}, \text{ for } i =1,2,\\
\mathcal{C}_i &=& \set{C}{C \text{ is a circuit of } M_i}.
\end{eqnarray*}
The case that~$M$ is a 1-sum of $M_1$ and~$M_2$ is again trivial.
Hence, we may assume that~$M$ is connected.
By \cref{thm:decomp},
$M$ is a $2$-sum or a $3$-sum of~$M_1$ and~$M_2$,
where~$M_1$ is a graphic, cographic or the~$R_{10}$ matroid,
and~$M_2$ is a regular matroid containing~$\widetilde{e}$.
For $i=1,2$ and $e \in S$, define
\begin{eqnarray*}
\mathcal{C}_{i,e} &:=& \set{C}{C \in \mathcal{C}_i \text{ and } C \cap S = \{e\}}.
\end{eqnarray*}
Also, the weight function~$w$ is extended to~$S$ by $w(e) = 0$, for any $e \in S$.
We again view each circuit $C$ of $M$ as $C_1 \triangle C_2$ and consider cases based on
how the weight of~$C$ is distributed among~$C_1$ and~$C_2$.
Note that~$\widetilde{e}$ is in~$M_2$ and
we are only interested in circuits~$C$ that contain~$\widetilde{e}$.
Hence, we have $\widetilde{e} \in C_2$.
Therefore we do not have the case where $C_2 = \emptyset$.
We consider the following two cases.
\begin{description}
\item[{\bf Case (i).}] $w(C_1) < r/2$.
\item[{\bf Case (ii).}] $w(C_1) \geq r/2$.
\end{description}
\noindent
We will give an upper bound for the number of circuits in each of the two cases.
\subsubsection*{Case (i): $w(C_1) < r/2$}
Since~$\widetilde{e} \not\in C_1$,
we can literally follow the proof for Case~2 from \cref{thm:regular} for this case.
We have again \cref{cla:unique},
that~$C_1$ is uniquely determined as $C_1 = C_e^*$, for $e \in S$,
or $C_1= \emptyset$.
Therefore the mapping $C \mapsto C_2$ is injective.
The only point to notice now is that the mapping maintains that~$\widetilde{e} \in C$ if and only if $\widetilde{e} \in C_2$.
With the same definition of~$w'$, we also have $w(C) = w'(C_2)$.
Therefore it suffices to get an upper bound on
the number of circuits~$C_2$ in~$M_2$ with $w'(C_2) < r$ and
$\widetilde{e} \in C_2$.
To apply the induction hypothesis,
we need the following variant of \cref{cla:nocircuit}.
It has a similar proof.
\begin{claim}
\label{cla:nocircuit1}
There is no circuit~$C_2$ in~$M_2$ such that $w'(C_2) < r$ and $\widetilde{e} \not\in C_2$.
\end{claim}
\noindent
By the induction hypothesis applied to~$M_2$,
the number of circuits~$C_2$ in~$M_2$ with $w'(C_2) < r$ and
$\widetilde{e} \in C_2$ is bounded by
$$T_0:=48(m_2-1)^{2}.$$
\subsubsection*{Case (ii): $w(C_1) \geq r/2$}
Since $w(C) = w(C_1) + w(C_2) < r$,
we have $w(C_2) < r/2$ in this case.
This is the major difference to Case~3 from \cref{thm:regular}
where the weight of~$C_2$ was only bounded by~$r$.
Hence,
now we have again a uniqueness property similar to \cref{cla:unique},
but this time for~$C_2$.
A difference is the presence of~$\widetilde{e}$, but the proof remains the same.
\begin{claim}
\label{cla:unique2}
For any $e \in S$,
there is at most one circuit $C_2 \in \mathcal{C}_{2,e}$ with
$w(C_2) < r/2$ and $\widetilde{e} \in C_2$.
\end{claim}
\noindent
We conclude that any circuit~$C$ in case~(ii) can be written as
$C = C_1 \triangle C_e^*$, for some $e \in S$ and the unique circuit $C_e^* \in \mathcal{C}_{2,e}$.
Therefore the mapping $C \mapsto C_1$ is injective for the circuits~$C$ of case~(ii).
Thus, it suffices to count circuits $C_1 \in \mathcal{C}_{1,e}$ with $w(C_1) < r$,
for every $e \in S$.
Let $e \in S$ and consider the matroid~$M_{1,e}$
obtained from~$M_1$ by deleting the elements in $S\setminus \{e\}$.
It has $m_1-s+1$ elements.
Since~$M_1$ is a graphic, cographic or~$R_{10}$,
the matroid~$M_{1,e}$ is graphic or cographic by \cref{fac:closed,fac:R10}.
The circuits in~$\mathcal{C}_{1,e}$ are also circuits of~$M_{1,e}$.
Any circuit~$C_1$ of~$M_{1,e}$ with $e \not\in C_1$ is also a circuit of~$M$.
Thus, there is no circuit $C_1$ of $M_{1,e}$ with $e \not\in C_1$ and $w(C_1) < r$.
Therefore we can apply \cref{lem:graphic-set} to~$M_{1,e}$ with $R = \{e\}$.
We conclude that the number of circuits $C_1 \in \mathcal{C}_{1,e}$ with $w(C_1) < r$ is at most
$$T_1:= 4(m_1-s)^{2}.$$
Since there are~$s$ choices for $ e \in S$,
we obtain a bound of~$s \, T_1$.
There is also a trivial bound of $s \, 2^{m_1-s}$ on the number of such circuits.
We take the minimum of the two bounds.
Recall from the definition of $2$-sum and $3$-sum that $m_1 \geq 2s+1$.
\begin{claim}
For $s = 1$ or $3$ and $m_1 \geq 2s+1$,
$$\min\{ s \, 2^{m_1-s} , 4s \, (m_1-s)^{2}\} \leq 48(m_1-2s)^2.$$
\end{claim}
\begin{proof}
One can verify that when $m_1 -2s \leq 4 $ then
$$s \, 2^{m_1-s} \leq 48(m_1-2s)^2.$$
On the other hand, when $m_1 -2s \geq 5 $ then
$$4s \, (m_1-s)^{2} \leq 48(m_1-2s)^2.$$
This proves the claim.
\end{proof}
\noindent
Hence, we get a bound of $48(m_1-2s)^2$ on the number of circuits in Case~(ii).
Now we add the numbers of circuits of Cases~(i) and~(ii)
and get a total upper bound of
\begin{eqnarray*}
48(m_2-1)^{2} + 48(m_1-2s)^2
&\leq & 48(m_2-1+m_1-2s)^2 \\
&\leq & 48(m-1)^2.
\end{eqnarray*}
This gives us the desired bound and completes the proof of \cref{lem:circuitsR}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\subsubsection*{Code-Based Signature Schemes.}
It is a long-standing open problem to build an efficient and secure
digital signature scheme based on the hardness of decoding a linear
code which could compete with widespread schemes like DSA or
RSA. Those signature schemes are well known to be broken by quantum
computers, and code-based schemes could indeed provide a valid quantum-resistant
replacement. A first answer to this question was given by
the CFS scheme proposed in \cite{CFS01}. It consisted in finding
parity-check matrices ${\mathbf{H}}\in\mathbb{F}_2^{r \times n}$ such that the solution
${\mathbf{e}}$ of smallest weight of the equation
\begin{equation}
\label{eq:decoding}
{\mathbf{e}}\transpose{{\mathbf{H}}}={\mathbf{s}}
\end{equation}
could be found for a non-negligible proportion of all ${\mathbf{s}}$ in
$\mathbb{F}_{2}^{r}$. This task was achieved by using high-rate Goppa
codes. This signature scheme has however two drawbacks: (i) for high-rate
Goppa codes the indistinguishability assumption used in its
security proof has been invalidated in \cite{FGOPT11}; (ii) security scales
only weakly superpolynomially in the key size when the signing time is polynomial. A crude extrapolation of
parallel CFS \cite{F10} and its implementations \cite{LS12,BCS13}
yields for 128 bits of classical security a public key size of several
gigabytes and a signature time of several seconds. Those figures even
grow to terabytes and hours for quantum-safe security levels, making
the scheme impractical.
This scheme was followed by other proposals using other code families
such as for instance \cite{BBCRS13,GSJB14,LKLN17}. All of them were broken,
see for instance \cite{PT16,MP16}.
Other signature schemes based on codes were also given in the
literature such as for instance the KKS scheme \cite{KKS97,KKS05},
its variants \cite{BMS11,GS12} or the RaCoSS proposal \cite{FRXKMT17} to the NIST. But they can be considered at best to
be one-time signature schemes and great care has to be taken to choose the parameters
of these schemes in the light of the attacks given in
\cite{COV07,OT11,HBPL18}.
Finally,
another possibility is to use the Fiat-Shamir
heuristic, for instance by turning the Stern zero-knowledge authentication scheme
\cite{S93} into a signature scheme, but this leads to rather large
signature lengths (hundreds of kilobits).
There has been some recent progress in this area for another metric,
namely the rank metric. A hash and sign signature scheme was proposed, RankSign \cite{GRSZ14}, that enjoys remarkably small key sizes, but it got broken too in \cite{DT18b}. On the other hand, following the Schnorr-Lyubashevsky \cite{L09_sv} approach, a new scheme was recently proposed, namely Durandal \cite{ABGHZ18}. This scheme enjoys small key sizes and managed to meet the challenge of adapting the Lyubashevsky \cite{L09} approach for code-based cryptography. However, there is a lack of genericity in its security reduction, the security of Durandal is reduced to a rather convoluted problem, namely PSSI$^{+}$ (see \cite[\S 4.1]{ABGHZ18}), capturing the problem of using possibly information leakage in the signatures to break the secret key.
This is due to the fact that it is not proven in their scheme that their signatures do not leak information.
\paragraph{\bf One-Way Preimage Sampleable Trapdoor Functions.}
There is a very powerful tool for building a hash-and-sign signature scheme. It is based on the notion of
{\em one-way trapdoor preimage sampleable function} \cite[\S 5.3]{GPV08} (PSF in short).
Roughly speaking, this is a family of trapdoor
one-way functions $(f_a)_a$ such that with overwhelming probability
over the choice of $f_a$ (i) the distribution of the
images $f_a(x)$ is very close to the uniform distribution over its range
(ii) the distribution of the output of the
trapdoor algorithm inverting $f_a$
samples from all possible preimages in an appropriate way. This trapdoor inversion algorithm should namely
sample for any $x$ in the output domain of $f_a$ its outputs $e$ such that the distribution of $e$ is indistinguishable in a statistical sense from the
input distribution to $f_a$ conditioned on $f_a(e)=x$.
This notion and its lattice-based instantiation allowed in \cite{GPV08}
to give a full-domain hash (FDH) signature scheme
with a tight security reduction based on lattice assumptions, namely that the Short Integer Solution (SIS) problem is hard on average.
Furthermore, this approach also made it possible to build the first identity-based
encryption scheme that could be resistant to a quantum computer.
In this paper, we will call this approach for obtaining an FDH scheme the GPV strategy (after the authors of \cite{GPV08},
Gentry, Peikert and Vaikuntanathan). This
strategy has also been adopted in Falcon \cite{FHKLPPRSWZ}, a lattice
based signature submission to the NIST call for post-quantum
cryptographic primitives that was recently selected as a second round candidate.
This PSF primitive is notoriously difficult to obtain when the functions $f_a$ are not trapdoor permutations but many-to-one
functions. This is typically the case when one wishes quantum-resistant primitives based on lattice
assumptions.
The reason is the following. The hard problem on which this primitive relies is the
SIS problem where we want to find for a matrix ${\mathbf{A}}$ in $\mathbb{Z}_q^{n \times m}$ (with $m \geq n$) and an element
${\mathbf{s}} \in \mathbb{Z}_q^n$
a short enough (for the Euclidean norm) solution ${\mathbf{e}} \in \mathbb{Z}_q^m$ to the equation
\begin{equation}\label{eq:SIS}
{\mathbf{e}} \transpose{{\mathbf{A}}} = {\mathbf{s}} \mod{q}.
\end{equation}
Such a matrix defines a corresponding PSF function as $f_{{\mathbf{A}}}({\mathbf{e}}) = {\mathbf{e}} \transpose{{\mathbf{A}}}$ and the
input to this function is chosen according to a Gaussian distribution that outputs
${\mathbf{e}}$
of large enough Euclidean norm $W$ so that \eqref{eq:SIS} has a solution. Obtaining a nearly uniform distribution for the
$f_{{\mathbf{A}}}({\mathbf{e}})$'s over its range requires
to choose $W$ large enough so that there are actually {\em exponentially many} solutions to \eqref{eq:SIS}.
It is a highly non-trivial task to build in this case a trapdoor inversion algorithm that samples appropriately
among all possible preimages, i.e. that is oblivious of the trapdoor.
The situation is actually exactly the same if we want to use another candidate problem for building this PSF primitive
for being resistant to a quantum computer, namely the decoding problem in code-based cryptography. Here we rely on the difficulty of finding
a solution ${\mathbf{e}}$ of Hamming weight {\em exactly} $w$ with coordinates in a finite field $\F_q$ for the equation
\begin{equation}
\label{eq:ourdecoding}
{\mathbf{e}} \transpose{{\mathbf{H}}} = {\mathbf{s}},
\end{equation}
where ${\mathbf{H}}$ is a given matrix and ${\mathbf{s}}$ (usually called a syndrome) a given vector with entries in $\F_q$.
The weight $w$ has to be chosen large enough so that this equation always has exponentially many solutions
(in $n$, the length of ${\mathbf{e}}$). As in the lattice-based setting, it is non-trivial to build trapdoor candidates with
a trapdoor inversion algorithm for $f_{{\mathbf{H}}}$ (defined as $f_{{\mathbf{H}}}({\mathbf{e}})={\mathbf{e}}\transpose{{\mathbf{H}}}$) that is oblivious of the trapdoor.
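To fix ideas, here is a toy Python instantiation (all parameters and names are ours, not proposed ones) of the syndrome map $f_{{\mathbf{H}}}$ over $\F_3$; it also illustrates that for a large enough weight $w$ the map is heavily many-to-one.
\begin{verbatim}
import random

q, n, r, w = 3, 12, 6, 7       # toy parameters, q = 3

random.seed(1)
H = [[random.randrange(q) for _ in range(n)] for _ in range(r)]

def f_H(e):
    # The syndrome map e -> e * H^T over F_q.
    return tuple(sum(e[j] * H[i][j] for j in range(n)) % q
                 for i in range(r))

def random_weight_w_error():
    e = [0] * n
    for j in random.sample(range(n), w):
        e[j] = random.randrange(1, q)  # nonzero entries only
    return e

# With w large enough, f_H is many-to-one: random weight-w
# inputs collide frequently on the q^r possible syndromes.
seen = {}
for _ in range(10000):
    e = random_weight_w_error()
    seen.setdefault(f_H(e), []).append(tuple(e))
print(max(len(v) for v in seen.values()), "inputs on one syndrome")
\end{verbatim}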
\paragraph{\bf Our Contribution: a Code-Based PSF Family and an FDH Scheme.}
Our main contribution is to give here a code-based PSF family that relies on the difficulty of solving
\eqref{eq:ourdecoding}. We derive from it an FDH signature scheme which is shown to be
existentially unforgeable under a chosen-message attack
(EUF-CMA) with a tight reduction to solving two code-based problems:
one is a distinguishing problem related to the trapdoor used
in our scheme, the other one is a multiple target version of the
decoding problem \eqref{eq:ourdecoding}, the so called ``Decoding One Out
of Many'' problem (DOOM in short) \cite{S11}.
In \cite{GPV08} a signature scheme based on preimage sampleable
functions is given that is shown to be strongly existentially
unforgeable under a chosen-message attack if in addition the preimage
sampleable functions are also collision resistant. With our choice of
$w$ and $\F_q$, our preimage sampleable functions are not collision
resistant. However, as observed in \cite{GPV08}, collision resistance
allows a tight security reduction but is not necessary: a security
proof could also be given when the function is ``only'' preimage
sampleable. Moreover, contrary to the lattice setting where the alphabet size $q$ grows
with $n$, the alphabet size in our proposal is constant: it is fixed to $q=3$.
\paragraph{\bf Our Trapdoor: Generalized $(U,U+V)$-Codes.}
In \cite{GPV08} the trapdoor consists in a short basis of the lattice
considered in the construction. Our trapdoor is of a different
nature: it consists in choosing parity-check matrices of generalized
$(U,U+V)$-codes. In our construction, $U$ and $V$ are chosen as random codes.
The number of such generalized $(U,U+V)$-codes of dimension $k$ and length $n$ is of the same order
as the number of linear codes with the same parameters, namely $q^{\Th{n^2}}$ when $k=\Th{n}$. A
generalized $(U,U+V)$ code ${\mathcal C}$ of length $n$ over $\F_q$ is built from two codes $U$ and $V$ of length $n/2$ and four vectors ${\mathbf{a}}, {\mathbf{b}}, {\mathbf{c}}$ and ${\mathbf{d}}$ in $\F_q^{n/2}$ as the following ``mixture'' of $U$ and $V$:
$$
{\mathcal C} = \{({\mathbf{a}} \odot {\mathbf{u}} + {\mathbf{b}} \odot {\mathbf{v}},{\mathbf{c}} \odot {\mathbf{u}} + {\mathbf{d}} \odot {\mathbf{v}}): {\mathbf{u}} \in U,\;{\mathbf{v}} \in V\}
$$
where ${\mathbf{x}} \odot {\mathbf{y}}$ stands here for the component-wise product, also called the Hadamard or Schur product.
It is defined as:
$$
{\mathbf{x}} \odot {\mathbf{y}} \mathop{=}\limits^{\triangle} (x_{1}y_{1},\cdots,x_{n/2}y_{n/2}).
$$
Standard $(U,U+V)$-codes correspond to ${\mathbf{a}}={\mathbf{c}}={\mathbf{d}}={\mat{1}}_{n/2}$ and ${\mathbf{b}}={\mat{0}}_{n/2}$,
the all-one and the all-zero vectors respectively.
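For concreteness, here is a small numpy sketch of this mixture (our illustration, with toy parameters and arbitrary admissible mixing vectors; it is not code from an actual Wave implementation):
\begin{verbatim}
# Toy sketch of a generalized (U,U+V) codeword over F_q, q = 3.
import numpy as np

q, n2 = 3, 4                                  # n2 = n/2
rng = np.random.default_rng(0)
GU = rng.integers(0, q, size=(2, n2))         # toy generator matrix of U
GV = rng.integers(0, q, size=(2, n2))         # toy generator matrix of V
a = np.array([1, 2, 1, 2]); b = np.array([0, 1, 2, 0])
c = np.array([1, 1, 2, 1]); d = np.array([1, 0, 1, 2])

def mix(mu, mv):
    # Codeword (a.u + b.v, c.u + d.v), "." = component-wise (Hadamard) product.
    u, v = mu @ GU % q, mv @ GV % q
    return np.concatenate([(a * u + b * v) % q, (c * u + d * v) % q])

print(mix(np.array([1, 2]), np.array([0, 1])))
\end{verbatim}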
The point of introducing such codes is that they come with a natural decoding algorithm $\text{\tt{D}}_{UV}$
for the decoding problem \eqref{eq:ourdecoding}, built on top of
a generic decoding algorithm $\text{\tt {D}}_{\text{gen}}$ for linear codes:
$\text{\tt{D}}_{UV}$ decodes $V$ with $\text{\tt {D}}_{\text{gen}}$, then decodes $U$ with $\text{\tt {D}}_{\text{gen}}$,
and combines the two results.
The nice feature is that $\text{\tt{D}}_{UV}$ is more powerful than
$\text{\tt {D}}_{\text{gen}}$ applied directly to the generalized $(U,U+V)$-code:
the weight of the error produced by $\text{\tt{D}}_{UV}$ can lie well outside the range of weights attainable by $\text{\tt {D}}_{\text{gen}}$ applied directly to the generalized $(U,U+V)$-code. In our case,
$\text{\tt {D}}_{\text{gen}}$ is a very simple decoder,
namely a variation of the Prange decoder \cite{P62} which, for {\em any} parity-check matrix ${\mathbf{H}} \in \F_q^{r \times n}$,
can produce at will a solution of
\eqref{eq:ourdecoding} whenever $w$ is in the range
$\IInt{\frac{q-1}{q}r}{n-\frac{r}{q}}$. Note that this algorithm
works in polynomial time, and that outside this range the
complexity of the best known algorithms is exponential in $n$ for
weights of the form $w= \omega n$, where $\omega$ is a constant
lying outside the interval
$[\frac{q-1}{q}\rho, 1 - \frac{\rho}{q}]$ with
$\rho \mathop{=}\limits^{\triangle} \frac{r}{n}$.
The point of using $\text{\tt{D}}_{UV}$ is that it produces errors outside this interval.
This is in essence the
trapdoor of our signature scheme. A tweak of this decoder,
consisting in a small amount of rejection sampling (with our choice of parameters, about one rejection every $10$ to $12$ signatures),
yields solutions that are uniformly distributed over the
words of weight $w$. This is the key for obtaining a PSF family,
and from it a signature scheme.
Finally, a
variation of the proof technique of \cite{GPV08} allows to
give a tight security proof of our signature scheme that relies only
on the hardness of two problems, namely
\begin{description}
\item[Decoding Problem:] Solving at least one instance of the decoding
problem \eqref{eq:ourdecoding} out of multiple instances, for a certain
$w$ that is outside the range $\IInt{\frac{q-1}{q}r}{n-\frac{r}{q}}$;
\item[Distinguishing Problem:] Deciding whether a linear code is a
permuted generalized $(U,U+V)$ code or not.
\end{description}
Interestingly, some recent work \cite{CD17} has shown that these two
properties (namely statistical indistinguishability of the signatures and
the syndromes associated to the code family chosen in the scheme) are
also enough to obtain a tight security reduction in the Quantum Random
Oracle Model (QROM) for generic code-based signatures. The security reduction is made to a problem that is
called the Claw with Hash problem. It can be viewed as an adaptation of the DOOM problem to the quantum setting.
In this case, the adversary has access to a quantum oracle for producing the instances that it wants to decode. In other words, this can
be used to give a tight security proof in the QROM for our scheme based on generalized $(U,U+V)$-codes.
\paragraph{\bf Hardness of the Decoding Problem.}
All of code-based cryptography relies on this problem. Here we are in
a setting where \eqref{eq:ourdecoding} has multiple solutions and
the adversary may produce any number of instances of
\eqref{eq:ourdecoding} with the same matrix ${\mathbf{H}}$ and various syndromes
${\mathbf{s}}$, being interested in solving only one of them. This is
the so-called Decoding One Out of Many (DOOM) problem. This problem
was first considered in \cite{JJ02}. It was shown there how to adapt
the known algorithms for decoding a linear code in order to solve this
modified problem. This modification was later analyzed in
\cite{S11}. The parameters of the known algorithms for solving
\eqref{eq:ourdecoding} can be easily adapted to this scenario where we
have to decode simultaneously multiple instances which all have
multiple solutions.
\paragraph{\bf Hardness of the Distinguishing Problem.}
This problem might seem at first sight to be ad-hoc. However, even in
the very restricted case of
$(U,U+V)$-codes, deciding whether a code is a permuted $(U,U+V)$-code or not is an NP-complete problem.
Therefore the Distinguishing Problem is also
NP-complete for generalized $(U,U+V)$-codes. This result is proven in the
case of binary $(U,U+V)$-codes in \cite[\S 7.1, Thm 3]{DST17b}, and the
proof carries over to an arbitrary finite field $\F_q$. However, as
observed in \cite[p. 3]{DST17b}, these NP-completeness reductions hold
in the particular case where the dimensions $k_U$ and $k_V$ of the
codes $U$ and $V$ satisfy $k_U < k_V$. If we stick to the binary case,
i.e. $q=2$, then for our $(U,U+V)$ decoder to work outside the
integer interval $\IInt{\frac{r}{2}}{n-\frac{r}{2}}$ it is necessary
that $k_U > k_V$. Unfortunately, in this case there is an efficient
probabilistic algorithm solving the distinguishing problem, based on
the fact that the hull of the permuted
$(U,U+V)$-code is then typically of large dimension, namely $k_U - k_V$ (see
\cite[\S1 p.1-2]{DST17}). This problem cannot be settled in the
binary case by considering generalized $(U,U+V)$-codes instead of just
plain $(U,U+V)$-codes, since it is only for the restricted class of $(U,U+V)$-codes that the decoder considered in \cite{DST17} works
properly outside the critical interval
$\IInt{\frac{r}{2}}{n-\frac{r}{2}}$. This explains why Surf \cite{DST17},
the ancestor of the scheme proposed here relying on binary $(U,U+V)$-codes, cannot work.
This situation changes drastically when we move to larger finite
fields. In order to have a decoding algorithm $\text{\tt{D}}_{UV}$ with an advantage
over the generic decoder $\text{\tt {D}}_{\text{gen}}$, we do not need
${\mathbf{a}}={\mathbf{c}}={\mathbf{d}}={\mat{1}}_{n/2}$ and ${\mathbf{b}}={\mat{0}}_{n/2}$ (i.e. $(U,U+V)$-codes); we just need
${\mathbf{a}} \odot {\mathbf{c}}$ and ${\mathbf{a}} \odot {\mathbf{d}} - {\mathbf{b}} \odot {\mathbf{c}}$ to have only non-zero components.
This freedom of choice for ${\mathbf{a}},{\mathbf{b}},{\mathbf{c}}$ and ${\mathbf{d}}$ completely thwarts the attacks based
on hull considerations and completely changes the nature of the distinguishing problem.
In this case, it seems that the best
approach for solving the distinguishing problem is based on the
following observation. The generalized $(U,U+V)$-code has codewords of weight
slightly smaller than the minimum distance of a random code of the
same length and dimension. It is very tempting to conjecture that the
best algorithms for solving the Distinguishing Problem come from
detecting such codewords. This approach can be easily thwarted by
choosing the parameters of the scheme in such a way that the best
algorithms for solving this task are of prohibitive complexity. Notice
that the best algorithms that we have for detecting such codewords are
in essence precisely the generic algorithms for solving the Decoding
Problem. In some sense, it seems that we might rely on the very same
problem, namely solving the Decoding Problem, even if our proof
technique does not show this.
\paragraph{\bf $q=3$ and Large weights Decoding.}
In terms of simplicity of the decoding procedure used in the signing process, it
seems particularly attractive to define our codes over the finite field $\mathbb{F}_3$.
In this case, the biggest advantage of $\text{\tt{D}}_{UV}$ over $\text{\tt {D}}_{\text{gen}}$ is obtained for large weights
rather than for small weights (see the paragraph
{\em ``Why is the trapdoor more powerful for large weights than for small weights?''} in \S \ref{subsec:genUVcodes} for an explanation).
Relying on the difficulty of finding solutions of large weight to the decoding problem is somewhat unusual in code-based cryptography.
It also raises the question of whether it would be advantageous to base other
(non-binary) code-based primitives on the hardness of decoding at large weights rather than at small weights. The two problems are equivalent in the binary case, i.e. $q=2$, but this is no longer true for larger alphabets; still, everything seems to indicate that the large-weight problem is by no means easier
than its small-weight counterpart.
All in all, this gives the first practical signature scheme based on
ternary codes which comes with a security proof and which scales well
with the parameters: for a security level of $2^\lambda$, the signature
size is of order $O(\lambda)$, the public key size of order
$O(\lambda^2)$, signature generation of order $O(\lambda^3)$, and
signature verification of order $O(\lambda^2)$. It should be noted
that, contrary to the current trend in code-based and lattice-based
cryptography of relying on structured codes or lattices with a ring
structure in order to decrease the key sizes, we did not follow this
approach here. This allows us to rely on the NP-complete Decoding
Problem, which is generally believed to be hard on average, rather
than, say, on decoding quasi-cyclic codes, whose status is still
unclear for a constant number of circulant blocks. Despite forgoing
this standard key-size reduction, we obtain acceptable key sizes
(about 3.8 megabytes for 128 bits of security) which compare very
favorably to unstructured lattice-based signature schemes such as
TESLA \cite{ABBDEGKP17}. This is due in part to the tightness of our
security reduction.
\section{Notation}
\label{sec:nota}
We provide here some notation that will be used throughout the paper.
\newline
{\noindent \bf General Notation.}
The notation $x \mathop{=}\limits^{\triangle} y$ means
that $x$ is defined to be equal to $y$. We denote by $\mathbb{F}_{q}$
the finite field with $q$ elements and by $S_{w,n}$, or $S_w$ when $n$
is clear from the context, the subset of $\mathbb{F}_q^n$ of words of weight
$w$. For $a$ and $b$ integers with $a \leq b$, we denote by
$\IInt{a}{b}$ the set of integers $\{a,a+1,\dots,b\}$.
\newline
{\noindent \bf Vector and Matrix Notation.}
Vectors will be written with bold letters (such as ${\mathbf{e}}$) and uppercase bold letters are used to denote matrices (such as ${\mathbf{H}}$). Vectors are in row notation.
Let ${\mathbf{x}}$ and ${\mathbf{y}}$ be two vectors, we will write $({\mathbf{x}},{\mathbf{y}})$ to denote their concatenation.
We also denote by ${\mathbf{x}}_\mathcal{I}$ the vector whose coordinates are those of ${\mathbf{x}}=(x_i)_{1 \leq i \leq n}$ indexed by $\mathcal{I}$, i.e.
$
{\mathbf{x}}_\mathcal{I} = (x_i)_{i \in \mathcal{I}}
$. We denote by ${\mathbf{H}}_{\mathcal{I}}$ the matrix whose columns are those of ${\mathbf{H}}$ indexed by $\mathcal{I}$.
We sometimes denote by ${\mathbf{x}}(i)$ the $i$-th entry of a vector ${\mathbf{x}}$, and by ${\mathbf{A}}(i,j)$ the entry of a matrix ${\mathbf{A}}$ in row $i$ and column $j$. We define the support of ${\mathbf{x}} = (x_i)_{1 \leq i \leq n}$ as
$$
\supp({\mathbf{x}}) \mathop{=}\limits^{\triangle} \{ i \in \{1,\cdots,n \} \mbox{ such that } x_{i} \neq 0 \}.
$$
The Hamming weight of ${\mathbf{x}}$ is denoted by
$|{\mathbf{x}}|$.
By some abuse of notation, we will use the same notation
to denote the size of a finite set: $|S|$ stands for the size of the finite set $S$.
It will be clear from the context whether $|{\mathbf{x}}|$ means the Hamming weight or the size of a finite set.
Note that
$
|{\mathbf{x}}| = |\supp({\mathbf{x}})|.
$
For a vector ${\mathbf{a}} \in \F_q^n$, we denote by $\mathbf{Diag}({\mathbf{a}})$ the $n \times n$ diagonal matrix ${\mathbf{A}}$ with its entries
given by ${\mathbf{a}}$, i.e. ${\mathbf{A}}(i,i)=a_i$ for all $i \in \IInt{1}{n}$ and ${\mathbf{A}}(i,j) = 0$ for $i \neq j$.
\newline
{\noindent \bf Probabilistic Notation.} Let $S$ be a finite set; then $x \hookleftarrow S$ means
that $x$ is chosen uniformly at random in $S$. For two random variables $X,Y$, $X \sim Y$
means that $X$ and $Y$ are identically distributed. We also use this notation for a random variable
and a distribution ${\mathcal D}$: $X \sim {\mathcal D}$ means that $X$ is distributed according to ${\mathcal D}$.
We denote the uniform distribution on $S_{w}$ by $\mathcal{U}_{w}$.
The statistical distance between two discrete probability distributions over a same space $\mathcal{E}$ is defined as:
$
\rho(\mathcal{D}_0,\mathcal{D}_1) \mathop{=}\limits^{\triangle} \frac{1}{2} \sum_{x \in \mathcal{E}} |\mathcal{D}_0(x)-\mathcal{D}_1(x) |.
$
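For explicitly given finite distributions, this distance is straightforward to compute; the following helper (ours, for illustration only) takes two distributions represented as dictionaries over a common space:
\begin{verbatim}
def stat_dist(D0, D1):
    # rho(D0, D1) = 1/2 * sum_x |D0(x) - D1(x)| over the union of supports.
    support = set(D0) | set(D1)
    return 0.5 * sum(abs(D0.get(x, 0.0) - D1.get(x, 0.0)) for x in support)

print(stat_dist({x: 1/3 for x in range(3)}, {0: 0.5, 1: 0.3, 2: 0.2}))  # 1/6
\end{verbatim}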
Recall that a function $f(n)$ is said to be negligible, and we denote this by $f \in \textup{negl}(n)$, if for all polynomials $p(n)$, $|f(n)| < p(n)^{-1}$ for all sufficiently large $n$.
\newline
{\noindent \bf Coding Theory.}
For any matrix ${\mathbf{M}}$ we denote by $\vectspace{{\mathbf{M}}}$ the vector space
spanned by its rows. A $q$-ary linear code $\mathcal{C}$ of length $n$ and
dimension $k$ is a subspace of $\mathbb{F}_{q}^{n}$ of dimension $k$
and is often defined by a {\em parity-check matrix} ${\mathbf{H}}$ over $\mathbb{F}_q$
of size $r \times n$ as
$$
{\mathcal C} = \vectspace{{\mathbf{H}}}^\perp = \left\{ {\mathbf{x}} \in \mathbb{F}_{q}^{n}: {\mathbf{x}} \transpose{\mathbf{H}}=\mathbf{0}\right\}.
$$
When ${\mathbf{H}}$ is of full rank (which is usually the case) we have
$r = n-k$. A {\em generator matrix} of ${\mathcal C}$ is a $k \times n$ full
rank matrix ${\mathbf{G}}$ over $\mathbb{F}_q$ such that $\vectspace{{\mathbf{G}}}={\mathcal C}$. The
code rate, usually denoted by $R$, is defined as the ratio ${k}/{n}$.
An {\em information set} of a code ${\mathcal C}$ of length $n$ and dimension $k$ is a set of $k$
coordinate indices ${\mathcal I}\subset\llbracket 1,n \rrbracket$ which indexes $k$
linearly independent columns of any generator matrix. Its complement indexes
$n-k$ linearly independent columns of any parity check matrix. For any
${\mathbf{s}}\in\F_q^{n-k}$, ${\mathbf{H}}\in\F_q^{(n-k)\times n}$, and any information
set ${\mathcal I}$ of ${\mathcal C}=\vectspace{{\mathbf{H}}}^\perp$, for all ${\mathbf{x}}\in\F_q^{n}$
there exists a unique ${\mathbf{e}}\in\F_q^n$ such that ${\mathbf{e}} \transpose{\mathbf{H}}={\mathbf{s}}$
and ${\mathbf{x}}_{\mathcal I}={\mathbf{e}}_{\mathcal I}$.
\section{The Wave-family of Trapdoor One-Way Preimage Sampleable Functions}
\label{sec:genSig}
\subsection{One-way Preimage Sampleable Code-based Functions}\label{subsec:WPS}
In this work we will use the
FDH paradigm
\cite{BR96,C02} with the syndrome function as the underlying one-way function:
\begin{displaymath}
\begin{array}{lccc}
f_{w,{\mathbf{H}}} : &{\mathbf{e}} \in S_{w} & \longmapsto & {\mathbf{e}}\transpose{{\mathbf{H}}}\in \F_q^{n-k}\\
\end{array}
\end{displaymath}
The corresponding FDH signature uses a trapdoor to choose
${\mat{\sigma}} \in f_{w,{\mathbf{H}}}^{-1}({\mathbf{h}})$ where ${\mathbf{h}}$ is the digest of the message to be
signed. Here the signature domain is $S_w$
and the range is the set $\F_q^{n-k}$ of
syndromes with respect to ${\mathbf{H}}$, an $(n-k) \times n$ parity
check matrix of some $q$-ary linear $[n,k]$ code. The weight $w$ is
chosen such that the one-way function $f_{w,{\mathbf{H}}}$ is surjective but
not bijective. Building a secure FDH signature in this situation can
be achieved by imposing additional properties \cite{GPV08} on the
one-way function (we will speak of the GPV strategy). This is mostly
captured by the notion of Preimage Sampleable Functions (PSF), see
\cite[Definition 5.3.1]{GPV08}. We express below this notion in our
code-based context with a slightly weaker definition that drops the collision resistance
condition.
This will be sufficient for proving the security of our code-based FDH scheme.
The key feature is a trapdoor inversion of $f_{w,{\mathbf{H}}}$ which
achieves (close to) uniform distribution over the domain $S_w$.
\begin{definition}[One-way Preimage Sampleable Code-based
Functions] \label{def:WPS} It is a pair of probabilistic
polynomial-time algorithms $(\trap,\sampPre)$ together with a triple
of functions $(n(\lambda),k(\lambda),w(\lambda))$
growing polynomially with the security parameter $\lambda$
and giving the length and dimension of the codes and the weights we
consider for the syndrome decoding problem, such that
\begin{itemize}
\item $\trap$, when given $\lambda$, outputs $({\mathbf{H}},T)$ where ${\mathbf{H}}$ is
an $(n-k) \times n $ matrix over $\F_q$ and $T$ the trapdoor
corresponding to ${\mathbf{H}}$. Here and elsewhere we drop the dependence
in $\lambda$ of the functions $n,k$ and $w$.
\item $\sampPre$ is a probabilistic algorithm which takes as input $T$
and an element ${\mathbf{s}} \in \F_q^{n-k}$ and outputs an ${\mathbf{e}} \in
S_{w,n}$ such that ${\mathbf{e}}\tran{{\mathbf{H}}} = {{\mathbf{s}}}$.
\end{itemize}
The following properties have to hold for all but a negligible
fraction of ${\mathbf{H}}$ output by $\trap$.
\begin{enumerate}
\item \textup{Domain Sampling with uniform output:}
$$\rho({\mathbf{e}}\tran{{\mathbf{H}}},{{\mathbf{s}}}) \in \textup{negl}(\lambda)$$
where ${\mathbf{e}}$ and ${\mathbf{s}}$ are two random variables, with ${\mathbf{e}}$ being uniformly distributed over
$S_{w,n}$ and ${\mathbf{s}}$ being uniformly distributed over $\F_q^{n-k}$.
\item \textup{Preimage Sampling with trapdoor:} for every ${\mathbf{s}} \in \F_q^{n-k}$, we have
$$\rho\left( \sampPre({\mathbf{s}},T),{\mathbf{e}}_s \right) \in \textup{negl}(\lambda),$$
where ${\mathbf{e}}_s$ is uniformly distributed over the set $\{{\mathbf{e}} \in S_{w,n}:{\mathbf{e}} \transpose{{\mathbf{H}}}={\mathbf{s}}\}$.
\item \textup{One wayness without trapdoor:} for any probabilistic
poly-time algorithm $\mathcal{A}$
outputting an element ${\mathbf{e}}
\in S_{w,n}$ when given ${\mathbf{H}} \in \F_q^{(n-k) \times n }$ and ${\mathbf{s}}
\in \F_q^{n-k}$, the probability that ${\mathbf{e}}\tran{{\mathbf{H}}} =
{{\mathbf{s}}}$ is negligible, where the probability is taken over the
choice of ${\mathbf{H}}$,
the target value ${\mathbf{s}}$
chosen uniformly at random, and $\mathcal{A}$'s random coins.
\end{enumerate}
\end{definition}
Given a one-way preimage sampleable code-based function
$(\trap,\sampPre)$ we easily define a code-based FDH signature scheme
as follows. We generate the public/secret key as
$(\mathrm{pk},\mathrm{sk})=({\mathbf{H}},T) \leftarrow \trap(\lambda)$.
We also select a cryptographic hash function $\hash: \{0,1\}^{*}
\rightarrow \F_q^{n-k}$ and a salt size $\lambda_{0}$; a fresh salt ${\mathbf{r}} \in \{0,1\}^{\lambda_{0}}$ is drawn for each signature.
The algorithms \ensuremath{\mathtt{Sgn}^{\mathrm{sk}}}\ and \ensuremath{\mathtt{Vrfy}^{\mathrm{pk}}}\ are defined as follows
\begin{center}
\begin{tabular}{l@{\hspace{3mm}}|@{\hspace{3mm}}l}
$\ensuremath{\mathtt{Sgn}^{\mathrm{sk}}}({\mathbf{m}})\!\!: \qquad \qquad \qquad$ & $\ensuremath{\mathtt{Vrfy}^{\mathrm{pk}}}({\mathbf{m}},({\mathbf{e}}',{\mathbf{r}}))\!\!:$ \\
$\quad {\mathbf{r}} \hookleftarrow \{ 0,1 \}^{\lambda_{0}}$ &$\quad {\mathbf{s}} \leftarrow \hash({\mathbf{m}} ,{\mathbf{r}})$ \\
$\quad {\mathbf{s}} \leftarrow \hash({\mathbf{m}} ,{\mathbf{r}})$ & $\quad \texttt{if } {\mathbf{e}}'\transpose{{\mathbf{H}}} = {{\mathbf{s}}} \texttt{ and } |{\mathbf{e}}'| = w \texttt{ return } 1$\\
$\quad{\mathbf{e}} \leftarrow \sampPre({\mathbf{s}},T) $ &$\quad \texttt{else return } 0 $ \\
$\quad \texttt{return}({\mathbf{e}},{\mathbf{r}})$& \\
\end{tabular}
\end{center}
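For illustration, this generic scheme can be transcribed as follows. This is our sketch only: \texttt{samp\_pre} stands for $\sampPre$, the key material stands for the output of $\trap$, and the hash-to-syndrome map is a toy (slightly biased) stand-in for $\hash$.
\begin{verbatim}
import os, hashlib

LAMBDA0 = 16  # salt length in bytes (illustrative)

def hash_to_syndrome(m, r, n_minus_k, q=3):
    # Toy stand-in for H: {0,1}* -> F_q^{n-k} (byte mod q is slightly biased).
    h = hashlib.shake_256(m + r).digest(n_minus_k)
    return [x % q for x in h]

def sign(m, sk, samp_pre, n_minus_k):
    r = os.urandom(LAMBDA0)                    # fresh salt for every signature
    s = hash_to_syndrome(m, r, n_minus_k)
    return samp_pre(s, sk), r                  # e <- SampPre(s, T)

def verify(m, e, r, H, w, n_minus_k, q=3):
    s = hash_to_syndrome(m, r, n_minus_k, q)
    if sum(1 for x in e if x != 0) != w:       # check |e| = w
        return False
    # check e H^T = s, with H given as a list of n-k rows of length n
    return [sum(ei * hi for ei, hi in zip(e, row)) % q for row in H] == s
\end{verbatim}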
A tight security reduction in the random oracle model is given in
\cite{GPV08} for PSF signature schemes. It requires collision
resistance. Our construction uses a ternary alphabet, $q=3$, together with large
values of $w$, and collision resistance is not met. Still, we achieve a
tight security proof by considering in \S\ref{sec:securityProof} a reduction to
the multiple target decoding problem.
\subsection{The Wave Family of One-Way Trapdoor Preimage Sampleable Functions}
\label{subsec:waveTrap}
The trapdoor family of codes which gives an advantage for inverting $f_{w,{\mathbf{H}}}$ is built upon the following
transformation:
\begin{definition}
Let ${\mathbf{a}}$, ${\mathbf{b}}$, ${\mathbf{c}}$ and ${\mathbf{d}}$ be vectors of $\F_q^{n/2}$. We define
\begin{eqnarray*}
\varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}} :\F_q^{n/2} \times \F_q^{n/2} & \rightarrow & \F_q^{n/2} \times \F_q^{n/2}\\
({\mathbf{x}},{\mathbf{y}}) & \mapsto & ({\mathbf{a}} \odot {\mathbf{x}}+{\mathbf{b}} \odot {\mathbf{y}},{\mathbf{c}} \odot {\mathbf{x}} + {\mathbf{d}} \odot {\mathbf{y}}).
\end{eqnarray*}
We will say that $\varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}}$ is UV-normalized if
\begin{equation} \label{eq:cdtInv}
\forall i \in \llbracket 1,n/2 \rrbracket, \quad a_{i}d_{i} -
b_{i}c_{i} = 1, \mbox{ } a_{i}c_{i} \neq 0.
\end{equation}
For any two subspaces $U$ and $V$ of $\F_q^{n/2}$, we extend the notation
\begin{displaymath}
\varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}} (U,V) \mathop{=}\limits^{\triangle} \left\{ \varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}}({\mathbf{u}}, {\mathbf{v}}) : {\mathbf{u}} \in U, {\mathbf{v}} \in V\right\}
\end{displaymath}
\end{definition}
\begin{proposition}[Normalized Generalized $(U,U+V)$-code]\label{prop:genUV}
Let $n$ be an even integer and let
$\varphi=\varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}}$ be a UV-normalized mapping. The
mapping $\varphi$ is bijective with
\begin{displaymath}
\varphi^{-1}({\mathbf{x}},{\mathbf{y}}) = ({\mathbf{d}} \odot {\mathbf{x}} -{\mathbf{b}} \odot {\mathbf{y}},-{\mathbf{c}} \odot {\mathbf{x}} + {\mathbf{a}} \odot {\mathbf{y}}).
\end{displaymath}
For any two subspaces $U$ and $V$ of $\F_q^{n/2}$ of parity check
matrices ${\mathbf{H}}_U$ and ${\mathbf{H}}_V$, the vector space $\varphi(U,V)$ is
called a {\em normalized generalized $(U,U+V)$-code}. It has dimension
$\dim U + \dim V$ and admits the following parity check matrix
\begin{equation}\label{eq:pcmUV}
{\mathcal H}(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V) \mathop{=}\limits^{\triangle} \begin{pmatrix}[r|r]
{\mathbf{H}}_U {\mathbf{D}} & - {\mathbf{H}}_U {\mathbf{B}}\\ \hline
- {\mathbf{H}}_V {\mathbf{C}} & {\mathbf{H}}_V {\mathbf{A}}
\end{pmatrix}
\end{equation}
where ${\mathbf{A}} \mathop{=}\limits^{\triangle} \mathbf{Diag}({\mathbf{a}})$, ${\mathbf{B}} \mathop{=}\limits^{\triangle} \mathbf{Diag}({\mathbf{b}})$,
${\mathbf{C}} \mathop{=}\limits^{\triangle} \mathbf{Diag}({\mathbf{c}})$ and ${\mathbf{D}}\mathop{=}\limits^{\triangle} \mathbf{Diag}({\mathbf{d}})$.
\end{proposition}
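As a quick check of this proposition, the fragment below (ours, $q=3$, toy length) draws a UV-normalized mapping at random, using that $a^{-1}=a$ for $a \in \F_3^*$ to enforce $a_id_i-b_ic_i=1$, and verifies the stated inverse on random inputs.
\begin{verbatim}
import numpy as np

q, n2 = 3, 6
rng = np.random.default_rng(1)
a = rng.choice([1, 2], size=n2); c = rng.choice([1, 2], size=n2)
b = rng.integers(0, q, size=n2)
d = (a * (1 + b * c)) % q        # a*d - b*c = 1 since a*a = 1 in F_3*

def phi(x, y):
    return (a * x + b * y) % q, (c * x + d * y) % q

def phi_inv(x, y):
    return (d * x - b * y) % q, (a * y - c * x) % q

x = rng.integers(0, q, size=n2); y = rng.integers(0, q, size=n2)
xx, yy = phi_inv(*phi(x, y))
assert np.array_equal(xx, x) and np.array_equal(yy, y)
\end{verbatim}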
In the sequel, a UV-normalized mapping $\varphi$ implicitly
defines a quadruple of vectors $({\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}})$ such that
$\varphi=\varphi_{{\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}}}$. We will use this implicit
notation and drop the subscript whenever no ambiguity may arise.
\begin{remark}
\begin{itemize}
\item This construction can be viewed as taking two codes of length
$n/2$ and making a code of length $n$ by ``mixing'' together a
codeword ${\mathbf{u}}$ in $U$ and a codeword ${\mathbf{v}}$ in $V$ as the vector
formed by the set of $a_i u_i + b_i v_i$'s and
$c_i u_i + d_i v_i$'s.
\item The condition $a_i c_i \neq 0$ ensures that the
coordinates of ${\mathbf{u}} \in U$ appear in all the coordinates of the normalized
generalized $(U,U+V)$ codeword. This is essential for having a decoding
algorithm for the generalized $(U,U+V)$-code that has an advantage over
standard information set decoding algorithms for linear codes. The
trapdoor of our scheme builds upon this advantage. It can really be
viewed as the ``interesting'' generalization of the standard $(U,U+V)$
construction.
\item We have fixed $a_{i}d_{i} - b_{i}c_{i} = 1$ for every $i$ to
simplify some of the expressions in what follows. It is readily
seen that any generalized $(U,U+V)$-code that can be obtained in the
more general case $ a_{i}d_{i} - b_{i}c_{i} \neq 0$ can also be
obtained in the restricted case $a_{i}d_{i} - b_{i}c_{i} = 1$ by
choosing $U$ and $V$ appropriately.
\end{itemize}
\end{remark}
\subsubsection{Defining $\trap$ and $\sampPre$.}
From the security parameter $\lambda$, we derive the system parameters
$n,k,w$ and split $k=k_U+k_V$ as described in \S\ref{sec:ch_params}.
The secret key is a tuple $\mathrm{sk}=(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V,{\mathbf{S}},{\mathbf{P}})$ where
$\varphi$ is a UV-normalized mapping,
${\mathbf{H}}_U\in\F_q^{(n/2-k_U)\times n/2}$,
${\mathbf{H}}_V\in\F_q^{(n/2-k_V)\times n/2}$, ${\mathbf{S}}\in\F_q^{(n-k)\times (n-k)}$
is non-singular (recall that $k=k_U+k_V$), and ${\mathbf{P}}\in\F_q^{n\times n}$ is a
permutation matrix. Each element of $\mathrm{sk}$ is chosen randomly and
uniformly in its domain.
From $(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V)$ we derive the parity check matrix
${\Hm_{\textup{sk}}}={\mathcal H}(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V)$ as in
Proposition~\ref{prop:genUV}. The public key is $\Hm_{\textup{pk}}={\mathbf{S}}{\Hm_{\textup{sk}}}{\mathbf{P}}$.
Next, we need to produce an algorithm $D_{\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V}$ which
inverts $f_{w,{\Hm_{\textup{sk}}}}$. The parameter $w$ is such that this can be
achieved using the underlying $(U,U+V)$ structure while the generic
problem remains hard. In \S\ref{sec:rejSampl} we will show how to use
rejection sampling to devise $D_{\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V}$ such that its
output is uniformly distributed over $S_w$ when ${\mathbf{s}}$ is uniformly
distributed over $\F_q^{n-k}$. This enables us to instantiate
algorithm $\sampPre$. To summarize:
\begin{displaymath}
\left.
\begin{array}{rcl}
\mathrm{sk} & \gets & (\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V,{\mathbf{S}},{\mathbf{P}}) \\
\mathrm{pk} & \gets & \Hm_{\textup{pk}} \\
\left(\mathrm{pk},\mathrm{sk}\right) & \gets & \trap(\lambda)
\end{array}~~~\right|~~~
\begin{array}{l}
\sampPre(\mathrm{sk},{\mathbf{s}}) \\
\quad{\mathbf{e}} \leftarrow D_{\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V}({\mathbf{s}}\transpose{\left({\mathbf{S}}^{-1}\right)})\\
\quad \texttt{return}~ {\mathbf{e}}{\mathbf{P}}
\end{array}
\end{displaymath}
As in \cite{GPV08}, putting this together with a domain sampling
condition --which we prove in \S\ref{sec:domSampl} from a variation of
the left-over hash lemma-- allows us to define a family of trapdoor
preimage sampleable functions, later referred to as the Wave-PSF
family.
\section{Inverting the Syndrome Function}
\label{sec:trapdoor}
This section is devoted to the inversion of $f_{w,{\mathbf{H}}}$. It amounts to solving the following problem.
\begin{problem}[Syndrome Decoding with fixed weight]
\label{prob:CSD}
Given ${\mathbf{H}}\in\F_q^{(n-k)\times n}$, ${\mathbf{s}}\in\F_q^{n-k}$, and an integer
$w$, find ${\mathbf{e}}\in\F_q^n$ such that ${\mathbf{e}}\transpose{{\mathbf{H}}}={{\mathbf{s}}}$ and $\wt{{\mathbf{e}}}=w$.
\end{problem}
We consider three nested intervals $\IInt{w^-_{\text{easy}}}{w^+_{\text{easy}}} \subset
\IInt{w^-_{\text{UV}}}{w^+_{\text{UV}}} \subset \IInt{w^-}{\wp}$ for $w$ such that for ${\mathbf{s}}$
randomly chosen in $\F_q^{n-k}$:
\begin{itemize}\vspace{-1em}
\item $f^{-1}_{w,{\mathbf{H}}}({\mathbf{s}})$ is likely/very likely to exist if $w\in
\IInt{w^-}{\wp}$ (Gilbert-Varshamov bound)
\item ${\mathbf{e}}\in f^{-1}_{w,{\mathbf{H}}}({\mathbf{s}})$ is easy to find if $w\in
\IInt{w^-_{\text{easy}}}{w^+_{\text{easy}}}$ for all ${\mathbf{H}}$ (Prange algorithm)
\item ${\mathbf{e}}\in f^{-1}_{w,{\mathbf{H}}}({\mathbf{s}})$ is easy to find if
$w\in \IInt{w^-_{\text{UV}}}{w^+_{\text{UV}}}$ and ${\mathbf{H}}$ is the parity check matrix of a
generalized $(U,U+V)$-code. This is the key for exploiting
the underlying $(U,U+V)$ structure as a trapdoor for
inverting $f_{w,{\mathbf{H}}}$.
\end{itemize}
\subsection{Surjective Domain of the Syndrome Function}
The issue here is for which values of $w$ we may expect $f_{w,{\mathbf{H}}}$
to be surjective. Surjectivity clearly requires $|S_w| \geq q^{n-k}$. In
other words we have:
\begin{fact}
\label{fac:lower_bound}
If $f_{w,{\mathbf{H}}}$ is surjective, then $w \in \IInt{w^-}{\wp}$ where $w^-<\wp$ are the extrema of the set $\left\{ w \in \llbracket 0,n \rrbracket\mid\binom{n}{w}(q-1)^{w} \geq q^{n-k} \right\}.$
\end{fact}
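Numerically, $w^-$ and $\wp$ are easy to extract from this characterization; the fragment below (ours, with toy parameters) does just that.
\begin{verbatim}
from math import comb

def weight_bounds(n, k, q):
    # Extrema of { w : binom(n, w) * (q-1)^w >= q^(n-k) }.
    ok = [w for w in range(n + 1)
          if comb(n, w) * (q - 1) ** w >= q ** (n - k)]
    return min(ok), max(ok)

print(weight_bounds(100, 50, 3))   # (w^-, w^+) for n = 100, k = 50, q = 3
\end{verbatim}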
For a fixed rate $R=k/n$, let us define
$
\omega^- \mathop{=}\limits^{\triangle} \mathop{\lim}\limits_{n \to + \infty} w^-/n$ and $\omega^+ \mathop{=}\limits^{\triangle} \mathop{\lim}\limits_{n \to + \infty} \wp/n.
$
Note that $\omega^-$ is known as the asymptotic Gilbert-Varshamov
distance. A straightforward computation of the expected number of
errors ${\mathbf{e}}$ of weight $w$ such that ${\mathbf{e}}\transpose{{\mathbf{H}}} = {{\mathbf{s}}}$ when
${\mathbf{H}}$ is random shows that we expect an exponential number of
solutions when $w/n$ lies in $(\omega^-,\omega^+)$.
However, coding theory has never come up with an efficient algorithm for finding
a solution to this problem in the whole range $(\omega^-,\omega^+)$.
\subsection{Easy Domain of the Syndrome Function} \label{subsec:prangeStep}
The subrange of $(\omega^-,\omega^+)$ for which we know how to solve
efficiently Problem \ref{prob:CSD} is given by the condition
$w/n \in [\omega^-_{\text{easy}},\omega^+_{\text{easy}}]$ where
\begin{eqnarray}
\omega^-_{\text{easy}} & \mathop{=}\limits^{\triangle} & \frac{q-1}{q} (1-R) \quad \mbox{and} \quad \omega^+_{\text{easy}} \mathop{=}\limits^{\triangle} \frac{q-1}{q} + \frac{R}{q},
\end{eqnarray}
where $R \mathop{=}\limits^{\triangle} \frac{k}{n}$. This is achieved by a slightly
generalized version of the Prange decoder \cite{P62}. For a given
${\mathbf{s}}$, we want to find an error ${\mathbf{e}}$ of weight $w$ such that
${\mathbf{e}}\transpose{{\mathbf{H}}} = {{\mathbf{s}}}$. The matrix ${\mathbf{H}}$ is a
full-rank matrix and it therefore contains an invertible submatrix
${\mathbf{A}}$ of size $(n-k)\times (n-k)$. We choose a set of positions $\mathcal{I}$
of size $n-k$ for which ${\mathbf{H}}$ restricted to these positions is a full
rank matrix. For simplicity assume that this matrix is in the first
$n-k$ positions: ${\mathbf{H}} = \begin{pmatrix} {\mathbf{A}} | {\mathbf{B}}\end{pmatrix}$. We
look for an ${\mathbf{e}}$ of the form ${\mathbf{e}} = ({\mathbf{e}}'',{\mathbf{e}}')$ where
${\mathbf{e}}' \in \F_q^{k}$ and ${\mathbf{e}}'' \in \F_q^{n-k}$. We should therefore have
${{\mathbf{e}}''} = ({{\mathbf{s}}} - {\mathbf{e}}'\transpose{{\mathbf{B}}})\transpose{({\mathbf{A}}^{-1})}$. In
this way we can choose the length-$k$ part ${\mathbf{e}}'$ arbitrarily, but for
the remaining part we expect a vector ${\mathbf{e}}''$ with about
$\frac{q-1}{q}(n-k)$ non-zero positions. Therefore, by choosing the
weight of ${\mathbf{e}}'$ appropriately between $0$ and $k$, the weights that
are easily attainable by this strategy lie between
$\frac{q-1}{q}(n-k) = n \omega^-_{\text{easy}}$ and
$k + \frac{q-1}{q}(n-k) = n \omega^+_{\text{easy}}$. This procedure, which we call
$\Call{PrangeOne}{\cdot}$, is formalized in
Algorithm~\ref{algo:Prangesdd}.
\begin{algorithm}[htb]
\caption{\calltxt{PrangeOne}{${\mathbf{H}},{\mathbf{s}}$} --- One iteration of the Prange decoder}\label{algo:Prangesdd}
Parameters: $q,n,k$, ${\mathcal D}$ a distribution over $\llbracket 0,k\rrbracket$
\begin{algorithmic}[1]
\hrule
\Require ${\mathbf{H}}\in\F_q^{(n-k)\times n}$, ${\mathbf{s}}\in\F_q^{n-k}$
\Ensure ${\mathbf{e}}\transpose{{\mathbf{H}}}={\mathbf{s}}$
\State $t\hookleftarrow{\mathcal D}$
\State ${\mathcal I}\gets\Call{InfoSet}{{\mathbf{H}}}$
\Comment {{\em \Call{InfoSet}{${\mathbf{H}}$} returns an information set of $\vectspace{{\mathbf{H}}}^\perp$}}
\State ${\mathbf{x}}\hookleftarrow\{{\mathbf{x}}\in\F_q^n\mid\wt{{\mathbf{x}}_{\mathcal I}}=t\}$
\State ${\mathbf{e}}\gets\Call{PrangeStep}{{\mathbf{H}},{\mathbf{s}},{\mathcal I},{\mathbf{x}}}$
\State \Return ${\mathbf{e}}$
\end{algorithmic}
\smallskip
\hrule
{\bf function} \Call{PrangeStep}{${\mathbf{H}},{\mathbf{s}},{\mathcal I},{\mathbf{x}}$} --- Prange vector completion
\hrule
\begin{algorithmic}
\Require ${\mathbf{H}}\in\F_q^{(n-k)\times n}$, ${\mathbf{s}}\in\F_q^{n-k}$, ${\mathcal I}$ an
information set of $\vectspace{{\mathbf{H}}}^\perp$, ${\mathbf{x}}\in\F_q^n$
\Ensure ${\mathbf{e}}\transpose{{\mathbf{H}}}={\mathbf{s}}$ and ${\mathbf{e}}_{\mathcal I}={\mathbf{x}}_{\mathcal I}$
\State ${\mathbf{P}}\gets$ any $n\times n$ permutation matrix sending ${\mathcal I}$ on the last
$k$ coordinates
\State $({\mathbf{A}}\mid{\mathbf{B}})\gets {\mathbf{H}}{\mathbf{P}}$
\Comment ${\mathbf{A}}\in\F_q^{(n-k)\times(n-k)}$
\State $({\mat{0}}\mid{\mathbf{e}}')\gets {\mathbf{x}}$
\Comment ${\mathbf{e}}'\in\F_q^{k}$
\State ${\mathbf{e}}\gets\left(\left({\mathbf{s}} -
{\mathbf{e}}'\tran{{\mathbf{B}}}\right)\tran{\left({\mathbf{A}}^{-1}\right)},{\mathbf{e}}'\right)\tran{{\mathbf{P}}}$
\State \Return ${\mathbf{e}}$
\end{algorithmic}
\end{algorithm}
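For concreteness, the fragment below (ours; prime $q$, toy sizes, and random column permutations retried until an invertible block is found, instead of an explicit information-set computation) implements one such iteration.
\begin{verbatim}
import numpy as np

def solve_mod(A, b, q):
    # Solve A x = b over F_q (q prime) by Gauss-Jordan elimination;
    # raises StopIteration when A is singular.
    A, b = A.copy() % q, b.copy() % q
    m = A.shape[0]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r, col] != 0)
        A[[col, piv]] = A[[piv, col]]; b[[col, piv]] = b[[piv, col]]
        inv = pow(int(A[col, col]), -1, q)
        A[col] = A[col] * inv % q; b[col] = b[col] * inv % q
        for r in range(m):
            if r != col and A[r, col]:
                f = int(A[r, col])
                A[r] = (A[r] - f * A[col]) % q
                b[r] = (b[r] - f * b[col]) % q
    return b

def prange_one(H, s, t, q, rng):
    # One Prange iteration: returns e with e H^T = s (mod q), where the
    # k free positions carry weight exactly t and the rest is completed.
    nk, n = H.shape
    k = n - nk
    while True:
        perm = rng.permutation(n)
        A, B = H[:, perm[:nk]], H[:, perm[nk:]]
        e1 = np.zeros(k, dtype=np.int64)
        pos = rng.choice(k, size=t, replace=False)
        e1[pos] = rng.integers(1, q, size=t)
        try:
            e2 = solve_mod(A, (s - e1 @ B.T) % q, q)
        except StopIteration:
            continue                     # singular block, retry permutation
        e = np.zeros(n, dtype=np.int64)
        e[perm[:nk]], e[perm[nk:]] = e2, e1
        return e

rng = np.random.default_rng(42)
q, n, k = 3, 20, 10                      # toy [20,10] random ternary code
H = rng.integers(0, q, size=(n - k, n))
s = rng.integers(0, q, size=n - k)
e = prange_one(H, s, t=4, q=q, rng=rng)
assert np.array_equal(e @ H.T % q, s)
\end{verbatim}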
\begin{proposition} \label{propo:Prange} When ${\mathbf{H}}$ is chosen
uniformly at random in $\F_q^{(n-k)\times n}$ and ${\mathbf{s}}$ uniformly at
random in $\F_q^{n-k}$, for the output ${\mathbf{e}}$
of \calltxt{PrangeOne}{${\mathbf{H}},{\mathbf{s}}$} we have
$$
|{\mathbf{e}}| = S+T
$$
where $S$ and $T$ are independent random variables,
$S \in \IInt{0}{n-k}$, $T \in \IInt{0}{k}$, $S$ is the Hamming
weight of a vector that is uniformly distributed over $\F_q^{n-k}$
and $\mathbb{P}(T=t) = \mathcal{D}(t)$. The distribution of $|{\mathbf{e}}|$ is given by
\begin{eqnarray*}
\mathbb{P}\left(|{\mathbf{e}}|=w \right) & = & \sum_{t=0}^{w} \frac{\binom{n-k}{w-t}(q-1)^{w-t}}{q^{n-k}} \mathcal{D}(t),\quad \mathbb{E}(|{\mathbf{e}}|) = \overline{\mathcal{D}} + \textstyle{\frac{q-1}{q}} (n-k) = \overline{\mathcal{D}} + n \omega^-_{\text{easy}}\label{eq:probaprange}
\end{eqnarray*}
where $\overline{\mathcal{D}} = \sum_{t=0}^k t\mathcal{D}(t)$.
\end{proposition}
From this proposition, we deduce immediately that any weight $w$ in
$\IInt{\omega^-_{\text{easy}} n}{\omega^+_{\text{easy}} n}$ can be reached by this Prange decoder
in probabilistic polynomial time, by using a
distribution $\mathcal{D}$ such that $\overline{\mathcal{D}} = w - \omega^-_{\text{easy}} n$ which is sufficiently concentrated around its expectation. The freedom in
the choice of $\mathcal{D}$ gives a rather large degree of freedom in
the distribution of $|{\mathbf{e}}|$; this will come in very handy to simulate
an output distribution that is uniform over the words of weight $w$ in
the generalized $(U,U+V)$-decoder that we will consider in what
follows.
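As a quick numerical illustration on toy parameters (not those of the actual scheme), take $q=3$, $n=200$ and $k=100$: then $n\omega^-_{\text{easy}} = \frac{q-1}{q}(n-k) \approx 66.7$ and $n\omega^+_{\text{easy}} = k + \frac{q-1}{q}(n-k) \approx 166.7$, so that to target, say, $w=120$ it suffices to pick $\mathcal{D}$ concentrated around $\overline{\mathcal{D}} = w - n\omega^-_{\text{easy}} \approx 53.3$.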
To summarize this discussion: to ensure that $f_{w,{\mathbf{H}}}$ is surjective, $w$ has to satisfy $w^- \leq w \leq \wp$. However, in a cryptographic setting $w/n$ cannot
lie in $[\omega^-_{\text{easy}},\omega^+_{\text{easy}}] \subseteq [\omega^-,\omega^+]$, as otherwise
anybody using the generalized Prange algorithm would be able to
invert $f_{w,{\mathbf{H}}}$. All of this is summarized in Figure
\ref{fig:distSgn}, where we draw the different areas of $w/n$,
asymptotically in $n$, when $k/n$ is fixed.
\begin{figure}
\caption{Areas of relative signature distances. \label{fig:distSgn}}
\centering
\includegraphics[height=10cm]{dist.png}
\end{figure}
\subsubsection{Enlarging the Easy Domain $\IInt{w^-_{\text{easy}}}{w^+_{\text{easy}}}$.}
Inverting the syndrome function $f_{w,{\mathbf{H}}}$ is the basic problem upon which all code-based cryptography relies. This problem has been studied for a long time for relative weights $\omega \mathop{=}\limits^{\triangle} \frac{w}{n}$ in $(0,\omega^-_{\text{easy}})$ and despite many efforts the best algorithms \cite{S88,D91,B97b,MMT11,BJMM12,MO15,DT17,BM18} for solving this problem are all exponential in $n$ for such fixed relative weights.
In other words, after
more than fifty years of research, none of those algorithms achieves polynomial complexity for relative weights
$\omega$ in $(0, \omega^-_{\text{easy}})$. Furthermore, when adapted beyond this point, all the previous algorithms exhibit the same behaviour: they are polynomial in the range of relative weights $[\omega^-_{\text{easy}},\omega^+_{\text{easy}}]$ and become exponential once again when $\omega$ is in $(\omega^+_{\text{easy}},1)$. All these results point towards the fact
that inverting $f_{w,{\mathbf{H}}}$ in polynomial time on a larger range is fundamentally a hard problem.
In the following subsection we present a trapdoor on the matrices ${\mathbf{H}}$ that enables us, by tweaking the Prange decoder, to invert $f_{w,{\mathbf{H}}}$ in polynomial time on a larger range.
\subsection{Solution with Trapdoor} \label{subsec:genUVcodes}
Let us recall that our trapdoor for inverting $f_{w,{\mathbf{H}}}$ is given by the
family of normalized generalized $(U,U+V)$-codes (see Proposition
\ref{prop:genUV} in \S\ref{subsec:waveTrap}). As we will see in what
follows, this family comes with a simple procedure which inverts
$f_{w,{\mathbf{H}}}$ for error weights in
$\IInt{w^-_{\text{UV}}}{w^+_{\text{UV}}} \subset \IInt{w^-}{\wp}$, where
$\IInt{w^-_{\text{easy}}}{w^+_{\text{easy}}} \subsetneq \IInt{w^-_{\text{UV}}}{w^+_{\text{UV}}}$. We summarize this
situation in Figure \ref{fig:rewUV}.
To avoid the misunderstanding that led the authors of \cite{BP18a} to
wrongly claim an attack on Wave, we wish to point out that the
procedure we give here is not the one we eventually use to instantiate
Wave; it is merely here to convey the underlying idea of the trapdoor.
As explained in the following section, rejection sampling will be
needed to avoid any information leakage on the trapdoor through the
outputs of the algorithm given here.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.83]
\tikzstyle{valign}=[text height=1.5ex,text depth=.25ex]
\draw[line width=2pt,gray] (0,2) -- (1,2);
\draw (2.5,2.5) node[red]{{\sf hard}};
\draw (12,2.5) node[right,red]{{\sf hard}};
\draw[line width=2pt,red!50] (1,2) -- (5,2);
\draw[line width=2pt,blue!50] (5,2) --
node[above,midway,blue,valign]{{\sf easy}} (11,2);
\draw[line width=2pt,red!50] (11,2) -- (13,2);
\draw[->,>=latex,line width=2pt,gray] (13,2) -- (14,2)
node[right,black] {$\displaystyle w$};
\tikzstyle{valign}=[text height=2ex]
\draw[thick] (1,1.9) node[below,valign]{$0$} -- (1,2.1);
\draw[thick] (5,1.9) node[below,valign]{$w^-_{\text{easy}}$} -- (5,2.1);
\draw[thick] (11,1.9) node[below,valign]{$w^+_{\text{easy}}$~~} -- (11,2.1);
\draw[thick] (13,1.9) node[below,valign]{$n$} -- (13,2.1);
\draw[thick] (4,1.9) node[below,valign]{$w^-_{\text{UV}}$} -- (4,3.1);
\draw[thick] (11.75,1.9) node[below,valign]{~~$w^+_{\text{UV}}$} -- (11.75,3.1);
\draw[<->,>=latex,thin,blue!50] (4,3) -- node[above,blue,midway]{{\sf
easy with (U,U+V){} trapdoor}} (11.75,3);
\draw[<->,>=latex,thin,red!50] (1,3) -- (4,3);
\draw[<->,>=latex,thin,red!50] (11.75,3) -- (13,3);
\end{tikzpicture}
\caption{Hardness of $(U,U+V)$ Decoding}
\label{fig:rewUV}
\end{figure}
It turns out that in the case of a normalized generalized $(U,U+V)$-code, a simple tweak of
the Prange decoder will be able to reach relative weights $w/n$
outside the ``easy'' region $[\omega^-_{\text{easy}},\omega^+_{\text{easy}}]$. It exploits
the fundamental leverage of the Prange decoder: when decoding a random
code of dimension $k$, the error ${\mathbf{e}}$ satisfying ${\mathbf{e}} \tran{{\mathbf{H}}} = {{\mathbf{s}}}$
can be chosen at will on $k$ positions. When we want an error of low
weight, we put zeroes on those positions, whereas if we want an error
of large weight, we put non-zero values. This idea leads to even
smaller or larger weights in the case of a normalized generalized
$(U,U+V)$-code.
To explain this point, recall that we want to solve the following decoding problem in this case.
\begin{problem}[decoding problem for normalized generalized $(U,U+V)$-codes]\label{prob:decodingNGUV}
Given a normalized generalized $(U,U+V)$ code $(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V)$
(see Proposition \ref{prop:genUV}) of parity-check matrix
${\mathbf{H}} = {\mathcal H}(\varphi,{\mathbf{H}}_U,{\mathbf{H}}_V)\in\F_q^{(n-k)\times n}$, and a
syndrome ${\mathbf{s}} \in \F_q^{n-k}$, find ${\mathbf{e}} \in \F_q^n$ of weight $w$
such that ${\mathbf{e}} \transpose{{\mathbf{H}}} = {\mathbf{s}}.$
\end{problem}
The following notation will be very useful to explain how we solve
this problem.
\begin{notation} \label{nota:euv}
For a vector ${\mathbf{e}}$ in $\F_q^n$, we denote by ${\mathbf{e}}_U$ and ${\mathbf{e}}_V$ the vectors in $\F_q^{n/2}$ such that
$$({\mathbf{e}}_U,{\mathbf{e}}_V)=\varphi^{-1}({\mathbf{e}}).$$
\end{notation}
The decoding algorithm we will consider first recovers ${\mathbf{e}}_V$ and then ${\mathbf{e}}_U$; from ${\mathbf{e}}_U$ and ${\mathbf{e}}_V$ we recover
${\mathbf{e}}$ since ${\mathbf{e}}=\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)$. The point of introducing ${\mathbf{e}}_U$ and ${\mathbf{e}}_V$ is the following.
\begin{restatable}{proposition}{prop:decomposition}
\label{prop:decomposition}
Solving the decoding problem \ref{prob:decodingNGUV} is equivalent to finding an ${\mathbf{e}} \in \F_q^n$ of weight
$w$ satisfying
\begin{eqnarray}
{{\mathbf{e}}_U} \tran{{\mathbf{H}}_U} & = & {{\mathbf{s}}^U} \label{eq:U}\\
{{\mathbf{e}}_V} \tran{{\mathbf{H}}_V} & = & {{\mathbf{s}}^V} \label{eq:V}
\end{eqnarray}
where ${\mathbf{s}} = ({\mathbf{s}}^U,{\mathbf{s}}^V)$ with ${\mathbf{s}}^U \in \F_q^{n/2-k_U}$ and ${\mathbf{s}}^V \in \F_q^{n/2-k_V}$.
\end{restatable}
\begin{remark}
We have put $U$ and $V$ as superscripts in ${\mathbf{s}}^U$ and ${\mathbf{s}}^V$ to avoid any confusion with the notation we have just introduced for
${\mathbf{e}}_U$ and ${\mathbf{e}}_V$.
\end{remark}
\begin{proof}
Let us observe that
$
{\mathbf{e}} = \varphi({\mathbf{e}}_U,{\mathbf{e}}_V)
= ({\mathbf{a}} \odot {\mathbf{e}}_U+ {\mathbf{b}} \odot {\mathbf{e}}_V,{\mathbf{c}} \odot {\mathbf{e}}_U + {\mathbf{d}} \odot {\mathbf{e}}_V)
= ({\mathbf{e}}_U {\mathbf{A}} + {\mathbf{e}}_V {\mathbf{B}}, {\mathbf{e}}_U {\mathbf{C}} + {\mathbf{e}}_V {\mathbf{D}})
$
with ${\mathbf{A}} = \mathbf{Diag}({\mathbf{a}}),{\mathbf{B}} = \mathbf{Diag}({\mathbf{b}}),{\mathbf{C}} = \mathbf{Diag}({\mathbf{c}}), {\mathbf{D}} = \mathbf{Diag}({\mathbf{d}})$.
By using this, ${\mathbf{e}} \transpose{{\mathbf{H}}} = {\mathbf{s}}$ translates into
\begin{eqnarray*}
\left\{
\begin{array}{lcr}
{\mathbf{e}}_U {\mathbf{A}} \transpose{{\mathbf{D}}} \transpose{{\mathbf{H}}}_U + {\mathbf{e}}_V {\mathbf{B}} \transpose{{\mathbf{D}}} \transpose{{\mathbf{H}}}_U -
{\mathbf{e}}_U {\mathbf{C}} \transpose{{\mathbf{B}}} \transpose{{\mathbf{H}}}_U - {\mathbf{e}}_V {\mathbf{D}} \transpose{{\mathbf{B}}} \transpose{{\mathbf{H}}}_U & = & {\mathbf{s}}^U\\
-{\mathbf{e}}_U {\mathbf{A}} \transpose{{\mathbf{C}}} \transpose{{\mathbf{H}}}_V - {\mathbf{e}}_V {\mathbf{B}} \transpose{{\mathbf{C}}} \transpose{{\mathbf{H}}}_V +
{\mathbf{e}}_U {\mathbf{C}} \transpose{{\mathbf{A}}} \transpose{{\mathbf{H}}}_V + {\mathbf{e}}_V {\mathbf{D}} \transpose{{\mathbf{A}}} \transpose{{\mathbf{H}}}_V & = & {\mathbf{s}}^V
\end{array}
\right.
\end{eqnarray*}
which amounts to
$
{\mathbf{e}}_U ({\mathbf{A}} {\mathbf{D}} - {\mathbf{B}} {\mathbf{C}})\transpose{{\mathbf{H}}_U} = {\mathbf{s}}^U$ and $
{\mathbf{e}}_V ({\mathbf{A}} {\mathbf{D}} - {\mathbf{B}} {\mathbf{C}}) \transpose{{\mathbf{H}}_V} = {\mathbf{s}}^V
$, since ${\mathbf{A}}$, ${\mathbf{B}}$, ${\mathbf{C}}$ and ${\mathbf{D}}$ are diagonal, hence symmetric, and commute
with each other. We finish the proof by observing that ${\mathbf{A}} {\mathbf{D}} - {\mathbf{B}} {\mathbf{C}} = {\mathbf{I}}_{n/2}$, the identity matrix of
size $n/2$. \qed
\end{proof}
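This proposition is also easy to check numerically; the fragment below (ours, toy sizes, $q=3$) builds the parity-check matrix of Proposition \ref{prop:genUV} and verifies that the syndrome of ${\mathbf{e}}$ splits into the syndromes of ${\mathbf{e}}_U$ and ${\mathbf{e}}_V$.
\begin{verbatim}
import numpy as np

q, n2, rU, rV = 3, 8, 3, 5
rng = np.random.default_rng(7)
a = rng.choice([1, 2], size=n2); c = rng.choice([1, 2], size=n2)
b = rng.integers(0, q, size=n2); d = (a * (1 + b * c)) % q  # a*d - b*c = 1
HU = rng.integers(0, q, size=(rU, n2))
HV = rng.integers(0, q, size=(rV, n2))
A, B, C, D = (np.diag(v) for v in (a, b, c, d))
H = np.block([[HU @ D, -(HU @ B)], [-(HV @ C), HV @ A]]) % q

e = rng.integers(0, q, size=2 * n2)
x, y = e[:n2], e[n2:]
eU, eV = (d * x - b * y) % q, (a * y - c * x) % q           # phi^{-1}(e)
s = e @ H.T % q
assert np.array_equal(eU @ HU.T % q, s[:rU])                # equation (U)
assert np.array_equal(eV @ HV.T % q, s[rU:])                # equation (V)
\end{verbatim}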
Performing the two decodings
\eqref{eq:U} and \eqref{eq:V} independently with the Prange algorithm
gains nothing. However, if we first solve \eqref{eq:V} with the Prange
algorithm and then seek a solution of \eqref{eq:U} that depends
suitably on ${\mathbf{e}}_V$, we increase the range of weights accessible in polynomial time
for ${\mathbf{e}}$. It then turns out that the range $[\omega^-_{\text{UV}},\omega^+_{\text{UV}}]$
of relative weights $w/n$ for which the $(U,U+V)$-decoder works in polynomial time is
larger than $[\omega^-_{\text{easy}},\omega^+_{\text{easy}}]$.
This provides an advantage to the trapdoor owner.
\paragraph{Tweaking the Prange Decoder for Reaching Large Weights.}
When $q=2$, small and large weights play a symmetrical role. This is
not the case anymore for $q \geq 3$. In what follows we will suppose that
$
q\geq 3.
$
In order to find a solution ${\mathbf{e}}$ of large weight to the decoding problem
${\mathbf{e}} \transpose{{\mathbf{H}}} = {\mathbf{s}}$, we use Proposition \ref{prop:decomposition} and first find an
arbitrary solution ${\mathbf{e}}_V$ to ${\mathbf{e}}_V \transpose{{\mathbf{H}}_V} = {\mathbf{s}}^V$.
The idea, now for performing the second decoding ${\mathbf{e}}_U \transpose{{\mathbf{H}}_U} = {\mathbf{s}}^U$,
is to take advantage of ${\mathbf{e}}_V$ to
find a solution ${\mathbf{e}}_U$ that maximizes the weight of ${\mathbf{e}}=\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)$.
On any information set of the code $U$ we can fix ${\mathbf{e}}_U$ arbitrarily.
Such a set is of size $k_U$, and on each of its positions $i$ we can always choose
${\mathbf{e}}_U(i)$ so as to make {\em simultaneously} two positions of ${\mathbf{e}}$ non-zero,
namely ${\mathbf{e}}(i)$ and ${\mathbf{e}}(i+n/2)$. We just have to choose ${\mathbf{e}}_U(i)$ so that we have simultaneously
$$
\left\{
\begin{array}{ll}
a_i{\mathbf{e}}_U(i)+b_i{\mathbf{e}}_V(i) \neq 0 \\
c_i{\mathbf{e}}_U(i)+d_i{\mathbf{e}}_V(i) \neq 0.
\end{array}
\right.
$$
This is always possible since $q \geq 3$ and it gives an expected weight of ${\mathbf{e}}$:
\begin{eqnarray}
\mathbb{E}(|{\mathbf{e}}|) = 2\left( k_U + \frac{q-1}{q}(n/2-k_U)\right) = \frac{q-1}{q} n + \frac{2k_U}{q}
\end{eqnarray}
The best choice for $k_U$ is to take $k_U=k$ up to the point where
$\frac{q-1}{q} n + \frac{2k}{q}=n$, that is $k=n/2$; for larger values
of $k$ we choose $k_U=n/2$ and $k_V = k-k_U$.
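The existence of a suitable ${\mathbf{e}}_U(i)$ for every position can even be checked exhaustively when $q=3$; the fragment below (ours) enumerates all normalized columns $(a_i,b_i,c_i,d_i)$ and all values of ${\mathbf{e}}_V(i)$.
\begin{verbatim}
# Check: for q = 3, for any column with a*d - b*c = 1 and a*c != 0, and any
# value v of e_V(i), some x = e_U(i) makes both a*x+b*v and c*x+d*v nonzero.
q = 3
cols = [(a, b, c, d) for a in (1, 2) for b in range(q)
        for c in (1, 2) for d in range(q) if (a * d - b * c) % q == 1]
assert all(any((a * x + b * v) % q and (c * x + d * v) % q for x in range(q))
           for (a, b, c, d) in cols for v in range(q))
print(len(cols), "normalized columns checked")  # 12 columns
\end{verbatim}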
\paragraph{Why Is the Trapdoor More Powerful for Large Weights than for Small Weights?}
This strategy can be clearly adapted for small weights. However, it is less powerful in this case.
Indeed, to minimize the weight of the final error we would like to choose ${\mathbf{e}}_U(i)$ in $k_U$ positions such that
$$
\left\{
\begin{array}{ll}
a_i{\mathbf{e}}_U(i)+b_i{\mathbf{e}}_V(i) = 0 \\
c_i{\mathbf{e}}_U(i)+d_i{\mathbf{e}}_V(i) = 0
\end{array}
\right.
$$
Here, as $a_id_i - b_ic_i = 1$ and $a_ic_i \neq 0$ in the family of codes we consider, this is possible if and only if ${\mathbf{e}}_V(i) = 0$. Therefore, contrary to the case where we want to reach errors of large weight, the set of positions where we can gain twice is constrained to be of size $n/2 - |{\mathbf{e}}_V|$. The minimal weight of ${\mathbf{e}}_V$ we can reach in polynomial time with the Prange decoder is $\frac{q-1}{q}(n/2-k_V)$. In this way, the set of positions where we can double the number of zeros is of size $n/2 - \frac{q-1}{q}(n/2-k_V)= \frac{n}{2q} + \frac{q-1}{q}k_V$. It can be verified that this strategy gives the following expected weight for the final error:
$$
\mathbb{E}(|{\mathbf{e}}|) = \left\{
\begin{array}{ll}
\frac{q-1}{q}n - 2\frac{q-1}{q}k_U \quad \mbox{if } k_U \leq \frac{n}{2q} + \frac{q-1}{q}k_V \\
\frac{2(q-1)^2}{(2q-1)q}(n-k) \quad\mbox{ }\mbox{ else.}
\end{array}
\right.
$$
This discussion is summarized in Figure \ref{fig:trapDist}
where we draw $\omega^-_{\text{UV}}$ and $\omega^+_{\text{UV}}$ which are the highest and
the smallest relative distances that our decoder can reach
asymptotically in $n$ when $k/n$ is fixed and $q = 3$.
\begin{figure}[h!]
\caption{Areas of relative signature distances with our trapdoor when $q = 3$ \label{fig:trapDist}}
\centering
\includegraphics[width=10cm]{distUV3}
\end{figure}
\section{Preimage Sampling with Trapdoor: Achieving a Uniformly Distributed Output}
\label{sec:rejSampl}
We restrict our study here to the case $q=3$, but it can be generalized to larger values of $q$. To obtain a trapdoor one-way preimage sampleable function, we have to enforce that
the outputs of the algorithm inverting our trapdoor function are very close to uniformly distributed over $S_w$.
The procedure described in the previous section, which uses the Prange decoder directly, does
not meet this property. As we will prove, a slight modification achieves it,
while keeping the property of outputting errors of a
weight $w$ for which the decoding problem is hard.
However, the parameters have to be chosen carefully, and the range of weights $w$ for which we can output errors in polynomial time shrinks. Figure \ref{fig:uniDimRej} gives a rough picture of what happens.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.83]
\tikzstyle{valign}=[text height=1.5ex,text depth=.25ex]
\draw[line width=2pt,gray] (0,2) -- (1,2);
\draw (2.5,2.5) node[red]{{\sf hard}};
\draw (12,2.5) node[right,red]{{\sf hard}};
\draw[line width=2pt,red!50] (1,2) -- (5,2);
\draw[line width=2pt,blue!50] (5,2) --
node[above,midway,blue,valign]{{\sf easy}} (11,2);
\draw[line width=2pt,red!50] (11,2) -- (13,2);
\draw[->,>=latex,line width=2pt,gray] (13,2) -- (14,2)
node[right,black] {$\displaystyle w$};
\tikzstyle{valign}=[text height=2ex]
\draw[thick] (1,1.9) node[below,valign]{$0$} -- (1,2.1);
\draw[thick] (5,1.9) -- node[above,valign]{$w^-_{\text{easy}}$} (5,2.1);
\draw[thick] (11,1.9) -- node[above,valign]{$w^+_{\text{easy}}$} (11,2.1);
\draw[thick] (13,1.9) node[below,valign]{$n$} -- (13,2.1);
\draw[thick] (4,1.9) node[below,valign]{$w^-_{\text{UV}}$} -- (4,3.1);
\draw[thick] (11.75,1.9) node[below,valign]{\mbox{ }$w^+_{\text{UV}}$} -- (11.75,3.1);
\draw[<->,>=latex,thin,blue!50] (4,3) -- node[above,blue,midway]{{\sf
easy with (U,U+V){} trapdoor}} (11.75,3);
\draw[<->,>=latex,thin,red!50] (1,3) -- (4,3);
\draw[<->,>=latex,thin,red!50] (11.75,3) -- (13,3);
\draw[<->,>=latex,thin,purple!50] (5,1) -- node[below,purple,midway]{{\sf no leakage with $(U,U+V){}$ trapdoor}} (11.4,1);
\draw[thick] (5,0.9) node[below,valign]{$ $} -- (5,2.1);
\draw[thick] (11.4,0.9) node[below,valign]{$ $} -- (11.4,2.1);
\end{tikzpicture}
\caption{Hardness of $(U,U+V)$ Decoding with no leakage of signature}
\label{fig:uniDimRej}
\end{figure}
\subsection{Rejection Sampling to reach Uniformly Distributed Output}
\label{subsec:rej}
We will slightly tweak the generalized $(U,U+V)$-decoder from the previous section, in particular by performing rejection
sampling on ${\mathbf{e}}_U$ and ${\mathbf{e}}_V$, in order to obtain an error ${\mathbf{e}}$ satisfying ${\mathbf{e}} \transpose{{\mathbf{H}}} = {\mathbf{s}}$ that is uniformly distributed over the words of weight $w$ when the syndrome
${\mathbf{s}}$ is chosen uniformly at random in $\mathbb{F}_3^{n-k}$. The decoding problem \ref{prob:decodingNGUV} for the generalized $(U,U+V)$-code is solved by solving
\eqref{eq:U} and \eqref{eq:V} through an algorithm whose skeleton is given in Algorithm \ref{algo:realskeleton}.
$\Call{DecodeV}{{\mathbf{H}}_V,{\mathbf{s}}^V}$ returns a vector ${\mathbf{e}}_V$ satisfying ${\mathbf{e}}_V \transpose{{\mathbf{H}}_V} = {\mathbf{s}}^V$, whereas
$\Call{DecodeU}{{\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V}$ is assumed to return a vector ${\mathbf{e}}_U$ satisfying ${\mathbf{e}}_U \transpose{{\mathbf{H}}_U} = {\mathbf{s}}^U$ {\em and} such that $|\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)|=w$.
Here ${\mathbf{s}} = ({\mathbf{s}}^U,{\mathbf{s}}^V)$ with ${\mathbf{s}}^U \in \mathbb{F}_3^{n/2-k_U}$ and ${\mathbf{s}}^V \in \mathbb{F}_3^{n/2-k_V}$.
\begin{algorithm}[htb]
\caption{\calltxt{DecodeUV}{${\mathbf{H}}_V,{\mathbf{H}}_U,\varphi,{\mathbf{s}}$}}
\label{algo:realskeleton}
\begin{algorithmic}[1]
\Repeat
\State ${\mathbf{e}}_V\gets\Call{DecodeV}{{\mathbf{H}}_V,{\mathbf{s}}^V}$ \label{ske:ev}
\Until{Condition 1 is met}\label{skerej:V}
\Repeat
\State ${\mathbf{e}}_U \gets\Call{DecodeU}{{\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V}$ \label{ske:U}\Comment{We assume that $|\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)|=w$ here.}
\State ${\mathbf{e}} \gets \varphi({\mathbf{e}}_U,{\mathbf{e}}_V)$ \label{ske:e}
\Until{Condition 2 is met}\label{skerej:U}
\State \Return ${\mathbf{e}}$
\end{algorithmic}
\end{algorithm}
What we want to achieve by rejection sampling is that the distribution of ${\mathbf{e}}$ output by this algorithm is the same as
that of $\uni{\ev}$, a vector chosen uniformly at random among the words of weight $w$ in $\mathbb{F}_3^n$.
This will be achieved by ensuring that
\begin{enumerate}
\item the ${\mathbf{e}}_V$ fed into $\Call{DecodeU}{\cdot}$ at Step \ref{ske:U} has the same distribution as $\uni{\ev}_V$,
\item the distribution of ${\mathbf{e}}_U$ surviving Condition 2 at Step \ref{skerej:U}, conditioned on the value of ${\mathbf{e}}_V$,
is the same as the distribution of $\uni{\ev}_U$ conditioned on $\uni{\ev}_V$.
\end{enumerate}
The decoders $\Call{DecodeV}{\cdot}$ and $\Call{DecodeU}{\cdot}$ derived from Prange decoders that we will consider have a property that will be very helpful here: they are very close to meeting the following conditions.
\begin{definition}\label{def:weightU}
$\Call{DecodeV}{\cdot}$ is said to be weightwise uniform if the output ${\mathbf{e}}_V$ of $\Call{DecodeV}{{\mathbf{H}}_V,{\mathbf{s}}^V}$ is such that
$\mathbb{P}({\mathbf{e}}_V)$ is just a function of $|{\mathbf{e}}_V|$
when ${\mathbf{s}}^V$ is
chosen uniformly at random in $\mathbb{F}_3^{n/2-k_V}$.
$\Call{DecodeU}{\cdot}$ is $m_{1}$-uniform if the output ${\mathbf{e}}_U$ of $\Call{DecodeU}{{\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V}$ is such that the conditional probability $\mathbb{P}({\mathbf{e}}_U|{\mathbf{e}}_V)$ is
just a function of the pair $(|{\mathbf{e}}_V|,m_{1}(\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)))$, where
\begin{displaymath}
m_{1}({\mathbf{x}}) \mathop{=}\limits^{\triangle} \left|\left\{ 1 \leq i \leq n/2 : |(x_i,x_{i+n/2})| = 1 \right\}\right|.
\end{displaymath}
\end{definition}
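For concreteness, the statistic $m_1$ is straightforward to compute. The following Python sketch is ours and is not part of the reference specification; it represents a vector of $\mathbb{F}_3^n$ as a list of integers modulo $3$ with $0$-based indexing.
\begin{verbatim}
def m1(x):
    # Number of pairs (i, i + n/2) with exactly one non-zero
    # coordinate among x[i] and x[i + n/2]; x is a list of
    # integers modulo 3 of even length n (0-based indexing).
    n = len(x)
    assert n % 2 == 0
    return sum((x[i] % 3 != 0) != (x[i + n // 2] % 3 != 0)
               for i in range(n // 2))
\end{verbatim}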
It is readily observed that $\mathbb{P}(\uni{\ev}_V)$ and $\mathbb{P}(\uni{\ev}_U|\uni{\ev}_V)$ are also only functions of $|\uni{\ev}_V|$ and of $(|\uni{\ev}_V|,m_{1}(\uni{\ev}))$ respectively. It follows that we obtain the
right distributions for ${\mathbf{e}}_V$ and for ${\mathbf{e}}_U$ conditioned on ${\mathbf{e}}_V$ by just ensuring that
$|{\mathbf{e}}_V|$ has the same distribution as $|\uni{\ev}_V|$ and that the distribution of $m_{1}({\mathbf{e}})$ conditioned on $|{\mathbf{e}}_V|$ is the
same as the distribution of $m_{1}(\uni{\ev})$ conditioned on $|\uni{\ev}_V|$. This is shown by the following lemma.
\begin{lemma}\label{lemm:rejSampl} Let ${\mathbf{e}}$ be the output of Algorithm \ref{algo:realskeleton} when ${\mathbf{s}}^V$ and ${\mathbf{s}}^U$ are chosen uniformly at random in $\mathbb{F}_3^{n/2-k_V}$ and
$\mathbb{F}_3^{n/2-k_U}$ respectively.
Assume that $\Call{DecodeU}{\cdot}$ is $m_{1}$-uniform whereas $\Call{DecodeV}{\cdot}$ is weightwise uniform.
If for any possible $y$ and $z$,
\begin{equation}
|{\mathbf{e}}_V| \sim |\uni{\ev}_V| \mbox{ and }
\mathbb{P}(m_{1}({\mathbf{e}})=z \mid |{\mathbf{e}}_V| =y)=\mathbb{P}(m_{1}(\uni{\ev})=z \mid |\uni{\ev}_V|=y)
\end{equation}
then
$
{\mathbf{e}} \sim \uni{\ev}.
$ The probabilities are taken here over the choice of ${\mathbf{s}}^U$ and ${\mathbf{s}}^V$ and over the internal coins
of $\Call{DecodeU}{\cdot}$ and $\Call{DecodeV}{\cdot}$.
\end{lemma}
\begin{proof}
We have for any ${\mathbf{x}}$ in $S_w$
\begin{eqnarray}
\mathbb{P}({\mathbf{e}} = {\mathbf{x}}) &=&\mathbb{P}({\mathbf{e}}_U={\mathbf{x}}_U\mid{\mathbf{e}}_V={\mathbf{x}}_V)\mathbb{P}({\mathbf{e}}_V={\mathbf{x}}_V) \nonumber\\
& = & \mathbb{P}(\Call{DecodeU}{{\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V}={\mathbf{x}}_U\mid{\mathbf{e}}_V={\mathbf{x}}_V)\mathbb{P}(\Call{DecodeV}{{\mathbf{H}}_V,{\mathbf{s}}^V}={\mathbf{x}}_V)
\nonumber \\
& = & \frac{\mathbb{P}(m_{1}({\mathbf{e}})=z \mid
|{\mathbf{e}}_V|=y)}{n(y,z)}\frac{\mathbb{P}(|{\mathbf{e}}_V|=y)}{n(y)} \mathop{=}\limits^{\triangle} P \label{eq:uniformity}
\end{eqnarray}
where $y \mathop{=}\limits^{\triangle} |{\mathbf{x}}_V|$, $z \mathop{=}\limits^{\triangle} m_{1}({\mathbf{x}})$, $n(y)$ is the number of vectors of $\mathbb{F}_3^{n/2}$ of weight $y$ and $n(y,z)$ is the number of vectors ${\mathbf{e}}$
in $\mathbb{F}_3^n$ such that ${\mathbf{e}}_V={\mathbf{x}}_V$ and $m_{1}({\mathbf{e}})=z$ (this last number depends on ${\mathbf{x}}_V$ only through
its weight $y$). Equation \eqref{eq:uniformity} is a consequence of the weightwise uniformity of $\Call{DecodeV}{\cdot}$ on the one hand and of
the $m_{1}$-uniformity of $\Call{DecodeU}{\cdot}$ on the other hand.
We conclude by noticing that
\begin{eqnarray}
P & = &
\frac{\mathbb{P}(m_{1}(\uni{\ev})=z \mid |\uni{\ev}_V|=y)}{n(y,z)}\frac{\mathbb{P}(|\uni{\ev}_V|=y)}{n(y)}\label{eq:passage} \\
& = & \mathbb{P}(\uni{\ev}_U={\mathbf{x}}_U\mid\uni{\ev}_V={\mathbf{x}}_V)\mathbb{P}(\uni{\ev}_V={\mathbf{x}}_V)\nonumber \\
& =& \mathbb{P}(\uni{\ev} = {\mathbf{x}}).
\end{eqnarray}
Equation \eqref{eq:passage} follows from the assumptions on the distribution of $|{\mathbf{e}}_V|$ and of the conditional distribution
of $m_{1}({\mathbf{e}})$ for a given weight $|{\mathbf{e}}_V|$.
\qed \end{proof}
This shows that, in order for ${\mathbf{e}}$ to be uniformly distributed over $S_w$, it is enough to perform rejection sampling based on the weight $|{\mathbf{e}}_V|$ for
$\Call{DecodeV}{\cdot}$ and on the pair $(|{\mathbf{e}}_V|,m_{1}({\mathbf{e}}))$ for $\Call{DecodeU}{\cdot}$. In other words, our decoding algorithm with rejection sampling will use a rejection vector ${\mathbf{r}}_V$ on the weights of
${\mathbf{e}}_V$ for $\Call{DecodeV}{\cdot}$ and a two-dimensional rejection vector ${\mathbf{r}}_U$ on the values of $(|{\mathbf{e}}_V|,m_{1}({\mathbf{e}}))$ for $\Call{DecodeU}{\cdot}$. The corresponding algorithm is specified
in Algorithm \ref{algo:skeleton}.
\begin{algorithm}[htb]
\caption{\calltxt{DecodeUV}{${\mathbf{H}}_V,{\mathbf{H}}_U,\varphi,{\mathbf{s}}$}}
\label{algo:skeleton}
\begin{algorithmic}[1]
\Repeat
\State ${\mathbf{e}}_V\gets\Call{DecodeV}{{\mathbf{H}}_V,{\mathbf{s}}^V}$ \label{alg:ev}
\Until{rand$([0,1]) \leq {\mathbf{r}}_{V}(|{\mathbf{e}}_V|)$}\label{rej:V}
\Repeat \label{alg:U}
\State ${\mathbf{e}}_U \gets\Call{DecodeU}{{\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V}$
\State ${\mathbf{e}} \gets \varphi({\mathbf{e}}_U,{\mathbf{e}}_V)$ \label{alg:e}
\Until{rand$([0,1]) \leq {\mathbf{r}}_U(|{\mathbf{e}}_V|,m_{1}({\mathbf{e}}))$}\label{rej:U}
\State \Return ${\mathbf{e}}$
\end{algorithmic}
\end{algorithm}
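In Python-like form, Algorithm \ref{algo:skeleton} reads as follows. This is only an illustrative sketch of ours, using the $m_1$ function sketched earlier: the decoders \texttt{decode\_V} and \texttt{decode\_U}, the map \texttt{phi} and the tabulated rejection vectors \texttt{r\_V} and \texttt{r\_U} are assumed to be supplied by the caller (the rejection vectors follow the next proposition).
\begin{verbatim}
import random

def weight(x):
    # Hamming weight of a vector over F_3 given as integers mod 3.
    return sum(1 for c in x if c % 3 != 0)

def decode_uv(decode_V, decode_U, phi, r_V, r_U, sU, sV):
    # Sketch (ours) of the rejection-sampling skeleton: r_V is
    # indexed by |e_V| and r_U by the pair (|e_V|, m1(e)).
    while True:
        eV = decode_V(sV)
        if random.random() <= r_V[weight(eV)]:
            break
    while True:
        eU = decode_U(sU, eV)
        e = phi(eU, eV)
        if random.random() <= r_U[(weight(eV), m1(e))]:
            return e
\end{verbatim}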
Standard results on rejection sampling yield the following proposition:
\begin{restatable}{proposition}{propoRejUnif} \label{propo:rejDistrib}
Let \begin{equation}
q_{1}(i) \mathop{=}\limits^{\triangle} \mathbb{P}\left( |{\mathbf{e}}_V| = i \right) \mbox{ };\mbox{ } \uni{q}_1(i) \mathop{=}\limits^{\triangle} \mathbb{P}\left( |\uni{\ev}_V| = i \right)
\end{equation}
\begin{equation}
q_{2}(s,t) \mathop{=}\limits^{\triangle} \mathbb{P}\left( m_{1}({\mathbf{e}}) = s \mid |{\mathbf{e}}_V| = t \right) \mbox{ };\mbox{ } \uni{q}_{2}(s,t) \mathop{=}\limits^{\triangle} \mathbb{P}\left( m_{1}(\uni{\ev}) = s \mid |\uni{\ev}_V| = t \right)
\end{equation}
for any $i,t \in \llbracket 0,n/2 \rrbracket$ and $s \in \llbracket 0,t \rrbracket$.
Let ${\mathbf{r}}_{V}$ and ${\mathbf{r}}_{U}$ be defined as
\begin{displaymath}
{\mathbf{r}}_{V}(i) \mathop{=}\limits^{\triangle} \frac{1}{M^{\text{rs}}_V} \frac{\uni{q}_1(i)}{q_{1}(i)} \quad \mbox{and} \quad {\mathbf{r}}_{U}(s,t) \mathop{=}\limits^{\triangle} \frac{1}{M^{\text{rs}}_U(t)} \frac{\uni{q}_{2}(s,t)}{q_{2}(s,t)}
\end{displaymath}
\end{displaymath}
with
\begin{equation*}
M^{\text{rs}}_V \mathop{=}\limits^{\triangle} \mathop{\max}\limits_{\substack{0 \leq i \leq n/2}}\frac{\uni{q}_1(i)}{q_1(i)} \quad \mbox{and} \quad M^{\text{rs}}_U(t) \mathop{=}\limits^{\triangle} \mathop{\max}\limits_{\substack{0 \leq s \leq t}}\frac{\uni{q}_2(s,t)}{q_2(s,t)}
\end{equation*}
Then if $\Call{DecodeV}{\cdot}$ is weightwise uniform and $\Call{DecodeU}{\cdot}$ is $m_{1}$-uniform, the output ${\mathbf{e}}$ of Algorithm \ref{algo:skeleton}
satisfies
$
{\mathbf{e}} \sim \uni{\ev}.
$
\end{restatable}
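The rejection vectors of Proposition \ref{propo:rejDistrib} can be tabulated directly from the distributions. The following sketch is ours; the dictionaries \texttt{q1}, \texttt{qbar1}, \texttt{q2}, \texttt{qbar2} are assumed to be precomputed.
\begin{verbatim}
def rejection_vectors(q1, qbar1, q2, qbar2, n):
    # Tabulate r_V and r_U following the proposition (our sketch).
    # q1, qbar1: dict i -> probability; q2, qbar2: dict (s, t) -> prob.
    # Keys with zero emulated probability are skipped: they never occur.
    MV = max(qbar1[i] / q1[i] for i in q1 if q1[i] > 0)
    r_V = {i: qbar1[i] / (MV * q1[i]) for i in q1 if q1[i] > 0}
    r_U = {}
    for t in range(n // 2 + 1):
        ratios = {s: qbar2[(s, t)] / q2[(s, t)]
                  for s in range(t + 1)
                  if (s, t) in q2 and q2[(s, t)] > 0}
        if ratios:
            MU = max(ratios.values())
            for s, rho in ratios.items():
                r_U[(t, s)] = rho / MU
    return r_V, r_U
\end{verbatim}
Note that we store ${\mathbf{r}}_U$ with key order $(t,s)=(|{\mathbf{e}}_V|,m_1({\mathbf{e}}))$, matching the call ${\mathbf{r}}_U(|{\mathbf{e}}_V|,m_1({\mathbf{e}}))$ in Algorithm \ref{algo:skeleton}, whereas the proposition writes its arguments in the order $(s,t)$.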
\subsection{Application to the Prange Decoder} \label{subsec:prangeDecUV}
To instantiate rejection sampling, we have to specify $(i)$ how $\Call{DecodeV}{\cdot}$ and $\Call{DecodeU}{\cdot}$ are instantiated and $(ii)$ how $\uni{q}_1,\uni{q}_2, q_1$ and $q_2$ are computed.
Let us begin with the following proposition, which gives $\uni{q}_1$ and $\uni{q}_2$.
\begin{restatable}{proposition}{propoqu} \label{propo:qu} Let $n$ be an even integer, $w \leq n$, $i,t \leq n/2$ and $s \leq t$ be integers. We have,
\begin{equation}
\uni{q}_1(i) = \frac{\binom{n/2}{i}}{\binom{n}{w}2^{w/2}} \mathop{\sum}\limits_{\substack{p=0 \\
w+p \equiv 0 \mod 2}}^{i}\binom{i}{p}\binom{n/2-i}{(w+p)/2-i}2^{3p/2}
\end{equation}
\begin{equation}
\uni{q}_2(s,t) = \left\{
\begin{array}{ll}
\frac{\binom{t}{s}\binom{n/2 - t}{\frac{w+s}{2}-t}2^{\frac{3s}{2}}}{\sum\limits_{p} \binom{t}{p}\binom{n/2-t}{\frac{w+p}{2}-t}2^{\frac{3p}{2}}} &\mbox{if } w+s \equiv 0 \mod 2. \\
0 &\mbox{ else}
\end{array}
\right.
\end{equation}
\end{restatable}
The proof of this proposition is given in Appendix \ref{app:usefulDistribs}. The algorithms $\Call{DecodeV}{\cdot}$ and $\Call{DecodeU}{\cdot}$ are described in Algorithms \ref{algo:DV} and \ref{algo:DU}. They use the rejection vectors given in Proposition \ref{propo:rejDistrib}, which are
based on the expressions given in Proposition \ref{propo:qu}.
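The formula for $\uni{q}_1$ can be transcribed directly. The Python sketch below is ours and mirrors the displayed sum; it uses floating-point powers of $2$ since the exponents $3p/2$ and $w/2$ need not be integers when $w$ is odd.
\begin{verbatim}
from math import comb

def qbar1(i, n, w):
    # P(|e_V| = i) for e uniform of weight w in F_3^n, n even:
    # direct transcription (ours) of the formula of the proposition.
    s = 0.0
    for p in range(i + 1):
        if (w + p) % 2:
            continue                     # sum over p with w + p even
        j = (w + p) // 2 - i
        if 0 <= j <= n // 2 - i:
            s += comb(i, p) * comb(n // 2 - i, j) * 2.0 ** (1.5 * p)
    return comb(n // 2, i) * s / (comb(n, w) * 2.0 ** (w / 2))
\end{verbatim}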
\begin{algorithm}\label{algoV}
\caption{\calltxt{DecodeV}{${\mathbf{H}}_V,{\mathbf{s}}^V$} the Decoder outputting an ${\mathbf{e}}_V$ such that ${\mathbf{e}}_V {\mathbf{H}}_V^{\intercal}={\mathbf{s}}^V$. \label{algo:DV}}
\begin{algorithmic}[1]
\State\label{D_V:J}$\mathcal{J},\mathcal{I} \gets\Call{FreeSet}{{\mathbf{H}}_V}$
\State $\ell\hookleftarrow{\mathcal D}_V$
\State\label{DV_:x}${\mathbf{x}}_V\hookleftarrow\left\{{\mathbf{x}}\in\mathbb{F}_3^{n/2}\mid\wt{{\mathbf{x}}_{\mathcal{J}}}=\ell,\Sp({\mathbf{x}}) \subseteq \mathcal{I} \right\}$
\Comment $({\mathbf{x}}_V)_{\mathcal{I} \mbox{\textbackslash} \mathcal{J}}$ is random
\State ${\mathbf{e}}_V \gets\Call{PrangeStep}{{\mathbf{H}}_V,{\mathbf{s}}^V,\mathcal{I},{\mathbf{x}}_V}$ \label{line:eVoutput}
\State \Return ${\mathbf{e}}_V$
\end{algorithmic}
\smallskip
\hrule
{\bf function} \Call{FreeSet}{${\mathbf{H}}$}
\hrule
\begin{algorithmic}[1]
\Require ${\mathbf{H}}\in\mathbb{F}_3^{(n-k)\times n}$
\Ensure ${\mathcal I}$ an
information set of $\vectspace{{\mathbf{H}}}^\perp$ and $\mathcal{J} \subset \mathcal{I}$ of size $k - d$
\Repeat
\State $\mathcal{J} \hookleftarrow \llbracket 1,n \rrbracket$ of size $k - d$
\Until the rank of the columns of ${\mathbf{H}}$ indexed by $ \llbracket 1,n \rrbracket \mbox{\textbackslash} \mathcal{J}$ is $n-k$
\Repeat
\State $\mathcal{J}' \hookleftarrow \llbracket 1,n \rrbracket \mbox{\textbackslash} \mathcal{J}$ of size $d$
\State $\mathcal{I} \leftarrow \mathcal{J} \sqcup \mathcal{J}'$
\Until $\mathcal{I}$ is an information set of $\vectspace{{\mathbf{H}}}^\perp$
\State \Return $\mathcal{J},\mathcal{I}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}\label{algoU}
\caption{\calltxt{DecodeU}{${\mathbf{H}}_U,\varphi,{\mathbf{s}}^U,{\mathbf{e}}_V$} the U-Decoder outputting an ${\mathbf{e}}_U$ such that ${\mathbf{e}}_U {\mathbf{H}}_U^{\intercal}={\mathbf{s}}^U$ and $|\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)|=w$. \label{algo:DU}}
\begin{algorithmic}[1]
\State $t\gets |{\mathbf{e}}_V|$
\State $k_{\neq 0}\hookleftarrow{\mathcal D}_U^{t}$\label{line:kneq}
\State $k_0 \gets k_U' - k_{\neq 0}$
\Comment{$k_U' \mathop{=}\limits^{\triangle} k_U -d$}
\Repeat
\State\label{line:infSet1}$\mathcal{J},\mathcal{I}\gets\Call{FreeSetW}{{\mathbf{H}}_U,{\mathbf{e}}_V,k_{\neq 0}}$
\State \label{line:infSet2} ${\mathbf{x}}_U \hookleftarrow\{{\mathbf{x}}\in\mathbb{F}_3^{n/2}\mid \forall j \in \mathcal{J}, \mbox{ } {\mathbf{x}}(j) \notin \{ -\frac{b_i}{a_i}{\mathbf{e}}_V(i), -\frac{d_i}{c_i}{\mathbf{e}}_V(i) \} \mbox{ and } \Sp({\mathbf{x}}) \subseteq \mathcal{I} \}$
\State \label{line:eUoutput} ${\mathbf{e}}_U\gets\Call{PrangeStep}{{\mathbf{H}}_U,{\mathbf{s}}^U,{\mathcal I},{\mathbf{x}}_U}$
\Until $|\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)| = w$
\State \Return ${\mathbf{e}}_U$
\end{algorithmic}
\smallskip
\hrule
{\bf function} \Call{FreeSetW}{${\mathbf{H}},{\mathbf{x}},k_{\neq 0}$}
\hrule
\begin{algorithmic}[1]
\Require ${\mathbf{H}}\in\F_q^{(n-k)\times n}, {\mathbf{x}} \in \F_q^{n}$ and $k_{\neq 0} \in \llbracket 0,k \rrbracket$.
\Ensure $\mathcal{J}$ and ${\mathcal I}$ an information set of $\vectspace{{\mathbf{H}}}^\perp$ such that $\left|\{i \in \mathcal{J}: x_i \neq 0\}\right| =k_{\neq 0}$ and $\mathcal{J} \subset \mathcal{I}$ of size $k - d$.
\Repeat
\State $\mathcal{J}_1 \hookleftarrow \Sp({\mathbf{x}})$ of size $k_{\neq 0}$
\State $\mathcal{J}_2 \hookleftarrow \llbracket 1,n \rrbracket \mbox{\textbackslash} \Sp({\mathbf{x}})$ of size $k - d - k_{\neq 0}$.
\State $\mathcal{J} \leftarrow \mathcal{J}_1 \sqcup \mathcal{J}_2$
\Until the rank of the columns of ${\mathbf{H}}$ indexed by $ \llbracket 1,n \rrbracket \mbox{\textbackslash} \mathcal{J}$ is $n-k$
\Repeat
\State $\mathcal{J}' \hookleftarrow \llbracket 1,n \rrbracket \mbox{\textbackslash} \mathcal{J}$ of size $d$
\State $\mathcal{I} \leftarrow \mathcal{J} \sqcup \mathcal{J}'$
\Until $\mathcal{I}$ is an information set of $\vectspace{{\mathbf{H}}}^\perp$
\State \Return $\mathcal{J},\mathcal{I}$
\end{algorithmic}
\end{algorithm}
These two algorithms both use the Prange decoder in the same way as in the procedure described in \S\ref{subsec:genUVcodes} to reach large weights, except that here we introduce internal distributions $\mathcal{D}_V$ and $\mathcal{D}_U^t$. These distributions tweak the weight distributions of $\Call{DecodeV}{\cdot}$ and $\Call{DecodeU}{\cdot}$ in order to reduce
the rejection rate. We have:
\begin{restatable}{proposition}{propoq} \label{propo:q}
Let $n$ be an even integer, $w \leq n$, $i,t,k_U \leq n/2$ and $s \leq t$ be integers. Let $d$ be an integer, $k_V' \mathop{=}\limits^{\triangle} k_V - d$ and $k_U' \mathop{=}\limits^{\triangle} k_U - d$. Let $X_V$ (resp. $X_U^{t}$) be a random variable distributed according to $\mathcal{D}_V$ (resp. $\mathcal{D}_U^{t}$). We have,
\begin{equation}
q_1(i) = \sum_{t=0}^{i} \frac{\binom{n/2-k_V'}{i-t}2^{i-t}}{3^{n/2-k_V'}} \mathbb{P}(X_V = t)
\end{equation}
\begin{equation}
q_{2}(s,t) = \left\{
\begin{array}{ll}
\mathop{\sum}\limits_{\substack{t + k_U' - n/2 \leq k_{\neq 0} \leq t \\ k_0 \mathop{=}\limits^{\triangle} k_U' - k_{\neq 0} }} \frac{\binom{t - k_{\neq 0}}{s}\binom{n/2 - t - k_0}{\frac{w+s}{2} - t - k_0}2^{\frac{3s}{2}}}{\mathop{\sum}\limits_{p} \binom{t - k_{\neq 0}}{p}\binom{n/2 - t - k_0}{\frac{w+p}{2} - t - k_0}2^{\frac{3p}{2}} } \mathbb{P}(X_U^{t} = k_{\neq 0}) &\;\mbox{if } w \equiv s \bmod 2. \\
\quad\quad 0 &\quad\mbox{else}
\end{array}
\right.
\end{equation}
\end{restatable}
The information set $\mathcal{I}$ is also chosen by first picking at random a set $\mathcal{J}$ of size $k-d$, where $k$ is the size of the information set and $d$ is chosen so that $3^d \approx 2^\lambda$, $\lambda$ being the security parameter.
Then $d$ positions are added to $\mathcal{J}$ until an information set is found.
The reason for this is the following: by choosing $\mathcal{I}$ in this way, we ensure that $\mathcal{I}$ contains $k-d$ almost completely random
positions (the probability that $\mathcal{J}$ gets rejected is of order $\frac{1}{2^\lambda}$). On these positions $\mathcal{J}$
we choose the weight of ${\mathbf{e}}_{\mathcal{J}}$ according to the relevant internal distribution ($\mathcal{D}_V$ or $\mathcal{D}_U^t$), we choose ${\mathbf{e}}_{\mathcal{I} \mbox{\textbackslash} \mathcal{J}}$ as a random vector, and we complete ${\mathbf{e}}_{\mathcal{I}}$ with the Prange algorithm. If instead we had chosen the information set $\mathcal{I}$ by picking $k$ positions
at random, then $\mathcal{I}$ would be rejected with some constant probability. Even if the Prange decoder based on this way of choosing the information set is very likely to come very close to meeting the two uniformity conditions of Definition \ref{def:weightU}, this
constant rejection probability makes it very difficult to prove that the Prange decoder behaves closely enough to uniformly. Such a proof is precisely what we need to ensure
that, after rejection sampling, these Prange decoders output ${\mathbf{e}}_V$ and ${\mathbf{e}}_U$ that
are distributed closely to $\uni{\ev}_V$ and $\uni{\ev}_U$. This is circumvented by choosing $\mathcal{I}$ as we do here.
For this way of forming the information set we can indeed prove the following (see Appendix \ref{app:weightUnif}).
\begin{theorem}\label{theo:trueRej}
Let ${\mathbf{e}}$ be the output of Algorithm \ref{algo:skeleton} based on Algorithms \ref{algo:DV},\ref{algo:DU} and $\uni{\ev}$ be a uniformly distributed error of weight $w$. There exists a constant $\alpha >0$ depending on $k_U/n$ and $k_V/n$ such that for any
integer $d$ in the range $\IInt{0}{\alpha n}$ we have,
\begin{equation*}
\mathbb{P}\left( \rho({\mathbf{e}},\uni{\ev}) > \frac{1}{3^{d}} \right) \in \textup{negl}(n)
\end{equation*}
where the probability is taken over the choice of matrices ${\mathbf{H}}_V$ and ${\mathbf{H}}_U$.
\end{theorem}
A sketch of the proof appears in the appendix in Section \ref{app:weightUnif}.
\subsection{Instantiating the Distributions}
Any choice for the distributions $\mathcal{D}_V$ and $\mathcal{D}_U^t$ in Algorithms
\ref{algo:DV} and \ref{algo:DU} will enable uniform sampling by a proper
choice of the rejection vectors ${\mathbf{r}}_V$ and ${\mathbf{r}}_U$ in Algorithm
\ref{algo:skeleton}. We argue here, through a case study, that an
appropriate choice of the distributions may considerably reduce the
rejection rate. In fact, what matters is to have the smallest possible
values for $M^{\text{rs}}_V$ and $M^{\text{rs}}_U(t)$ in
Proposition~\ref{propo:rejDistrib}.
The first step to achieve this is to correctly align the distributions
with their targets; we do that by a proper choice of the mean value or
of the mode ({\em i.e.\ } the most probable value) of the distributions. Next we
choose a ``shape'' for the distributions. Here we will take
(truncated) Laplace distributions with a prescribed mean and choose a
variance which minimizes rejection.
For typical parameters with 128 bits of classical security, we will
give a case study with the above strategy, in which the total
rejection rate is about 8\%.
Let $k_V' \mathop{=}\limits^{\triangle} k_V-d$ and $k_U' \mathop{=}\limits^{\triangle} k_U - d$ be parameters of Algorithm \ref{algo:DV} and Algorithm \ref{algo:DU}.
\subsubsection{Aligning the Distributions:}
\begin{enumerate}
\item For the distribution $\mathcal{D}_V$. The output of
Algorithm~\ref{algo:DV} has an average weight
$\bar{\ell}+2/3(n/2-k_V')$, where $\bar{\ell}$ denotes the mean of
$\mathcal{D}_V$. It must be close to $\mathbb{E}(|\uni{\ev}_V|)$. We
will use the fact that
$
\mathbb{E}(|\uni{\ev}_V|) = \sum_{i=0}^{n/2} i \uni{q}_1(i) = \frac{n}{2}\left( 1 - \left( 1 - \frac{w}{n}\right)^{2} - \frac{1}{2}\left(\frac{w}{n}\right)^{2} \right).
$
The mean value $\bar{\ell}$ of $\mathcal{D}_V$ is chosen (close to) $(1 -
\alpha) k_V'$ where $\alpha\in[0,1]$ is defined as follows
\begin{equation}\label{eq:alpha}
(1 - \alpha) k_V' = \frac{n}{2}\left( 1 - \left( 1 -
\frac{w}{n}\right)^{2} -
\frac{1}{2}\left(\frac{w}{n}\right)^{2} \right) - \frac{2}{3}\left(\frac{n}{2}-k_V'\right).
\end{equation}
\item For the distribution $\mathcal{D}_U^t$, $0\le t\le n/2$. Here, for every
$t$, we want to align the functions $s\mapsto q_2(s,t)$ and
$s\mapsto \uni{q}_2(s,t)$ (see Proposition~\ref{propo:rejDistrib}). We
get a very good estimate of the $s$ which maximizes $\uni{q}_2(s,t)$ by
solving numerically the equation $\uni{q}_2(s-1,t)=\uni{q}_2(s+1,t)$, that
is
\begin{displaymath}
{\frac {8\, \left( t-s \right) \left( t-s+1 \right) \left( n-w-s+1
\right) }{ \left( s+1 \right) s \left( w+s+1-2\,t \right) }} = 1
\end{displaymath}
We will denote by $m_{\textup{target}}^{\textup{max}}(t)$ the unique positive real root
of the above polynomial equation (a numerical sketch for computing this root is given right after this list).
We use the notations of Algorithm~\ref{algo:DU}, with in addition
${\mathbf{e}}=\varphi({\mathbf{e}}_U,{\mathbf{e}}_V)$. We now have to
determine which value of $k_{\neq0}$ (line~\ref{line:kneq}) will be
such that $q_2(s,t)$ also reaches its maximum for $s=m_{\textup{target}}^{\textup{max}}(t)$. For
a given $t$, $q_2(s,t)$ is the probability of having
$m_{1}({\mathbf{e}})=s$. This number counts the pairs $(i,i+n/2)$ with
$i\in\llbracket1,n/2\rrbracket$ such that exactly one of ${\mathbf{e}}(i)$
and ${\mathbf{e}}(i+n/2)$ is non-zero. This may only happen when
$i\in\supp({\mathbf{e}}_V)\setminus\mathcal{J}$, in which case ${\mathbf{e}}(i)$ and
${\mathbf{e}}(i+n/2)$ are two random distinct elements of $\mathbb{F}_3$ and this
particular $i$ is counted in $m_{1}({\mathbf{e}})$ with probability
$2/3$. Since $|\supp({\mathbf{e}}_V)\setminus\mathcal{J}|=t-k_{\neq0}$, we
typically have $m_{1}({\mathbf{e}})=\frac23(t-k_{\neq0})$ and the best
alignment is reached when the most probable output of distribution
$\mathcal{D}_U^t$ is $k_{\neq0}=t-\frac32m_{\textup{target}}^{\textup{max}}(t)$.
\end{enumerate}
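Here is a numerical sketch (ours) for the root $m_{\textup{target}}^{\textup{max}}(t)$, solving the displayed equation by bisection. We assume, as is the case for the parameter ranges of interest (large $w$, $t\le w$), that the defect changes sign exactly once on $(0,t)$.
\begin{verbatim}
def m_target_max(t, n, w):
    # Solve 8(t-s)(t-s+1)(n-w-s+1) = (s+1)s(w+s+1-2t) for s in (0, t)
    # by bisection (our sketch; a single sign change is assumed).
    def g(s):
        return (8 * (t - s) * (t - s + 1) * (n - w - s + 1)
                - (s + 1) * s * (w + s + 1 - 2 * t))
    lo, hi = 1e-9, float(t)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
\end{verbatim}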
\subsubsection{Matching the ``Shapes'':} to avoid a high rejection
rate we need to choose distributions so that the tails of the emulated
$q_1$ and $q_2$ are not lower than their respective targets. A bad
choice in this respect could lead to values of $M^{\text{rs}}_V$ and
$M^{\text{rs}}_U(t)$ growing exponentially with the block size. We choose
truncated Laplace distributions to avoid this.
\begin{definition}[Truncated Discrete Laplace Distribution (TDLD)]
Let $\mu,\sigma$ be positive real numbers, let $a$ and $b$ be two
integers. We say that a random variable $X$ is distributed according
to the Truncated Discrete Laplace Distribution \textup{(TDLD)} of
parameters $\mu,\sigma,a,b$, which is denoted
$X \sim \Lap{\mu}{\sigma}{a,b}$, if for all $i\in\llbracket a,b\rrbracket$,
$$
\mathbb{P}\left( X = i \right) = \frac{e^{-\frac{|i-\mu|}{\sigma}}}{N}
$$
where $N$ is a normalization factor.
\end{definition}
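Sampling from a TDLD is straightforward by explicit normalization over the finite support; a minimal sketch of ours follows (inversion sampling is a deliberate, simple choice here, adequate for the small supports we use).
\begin{verbatim}
import math, random

def sample_tdld(mu, sigma, a, b):
    # Inversion sampling for the TDLD of the definition above
    # (our sketch): weights exp(-|i - mu| / sigma) on [a, b].
    weights = [math.exp(-abs(i - mu) / sigma) for i in range(a, b + 1)]
    u = random.random() * sum(weights)
    acc = 0.0
    for i, wi in zip(range(a, b + 1), weights):
        acc += wi
        if u <= acc:
            return i
    return b   # guard against floating-point rounding
\end{verbatim}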
We choose
\begin{displaymath}
\left\{
\begin{array}{rcl}
\mathcal{D}_V &\sim& \Lap{\mu_V}{\sigma_V}{0,k_V'}\\
\mathcal{D}_U^t &\sim& \Lap{\mu_U(t)}{\sigma_U(t)}{t + k_U' - n/2,t}
\end{array}
\right.
\mbox{ with }
\left\{
\begin{array}{lcl}
\mu_V & = & (1-\alpha)k_V' \\
\mu_U(t) & = & t-\frac32m_{\textup{target}}^{\textup{max}}(t)+\varepsilon
\end{array}
\right.
\end{displaymath}
and $\sigma_V$ and $\sigma_U(t)$ to minimize $M^{\text{rs}}_V$ and
$M^{\text{rs}}_U(t)$. We also observed heuristically that the alignment is
improved by choosing a small $\varepsilon>0$, typically $\varepsilon=2$.
\subsubsection{Case Study:} $n=9078$, $(k_U,k_V)=(3749,1998)$,
$w=8444$, $\alpha=0.5907$ and $d = 162$. With $\sigma_V=17.6$, we obtain
$M^{\text{rs}}_V\approx 1.0417$. With $\sigma_U=6.8$ and $\varepsilon=0.2$ for all $t$,
we obtain $M^{\text{rs}}_U\approx 1.0380$ on average. The result could be
marginally better by selecting the best $\sigma_U(t)$ (and $\varepsilon$)
for each $t$.
\subsection{Choosing the parameters}\label{sec:ch_params}
Using the relation \eqref{eq:alpha} introduced in the previous subsection, namely
$$
(1-\alpha)k_V' = \frac{n}{2}\left( 1 - \left( 1 -
\frac{w}{n}\right)^{2} - \frac{1}{2}\left(\frac{w}{n}\right)^{2}
\right) - \frac{2}{3}\left(\frac{n}{2}-k'_V\right),
$$
we may define all the system parameters as functions only of $\alpha$,
the code rate $k/n$, $d$ and the block size $n$:
\begin{eqnarray}\label{eq:alpha-w}
w & = &\left\lfloor n \left( 1-\alpha + \frac{1}{3} \sqrt{ (3\alpha - 1) \left( 3\alpha + 4 \frac{k'}{n} - 1 \right)} \right) \right\rfloor \\
\label{eq:alpha-V}
k_V' &= &\left\lfloor \frac{n}{2} \frac{3}{3\alpha - 1} \left( \left( 1 - \frac{w}{n} \right)^{2} + \frac{1}{2}\left( \frac{w}{n} \right)^{2} - \frac{1}{3} \right) \right\rfloor \mbox{ ; }
k_U' = \left\lfloor \frac{n}{2}\left( -2 + 3 \frac{w}{n} \right) \right\rfloor
\end{eqnarray}
where $k' \mathop{=}\limits^{\triangle} k_U + k_V - 2d$.
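These relations are easy to evaluate; the sketch below (ours) returns the primed dimensions, with the caveat that floor and rounding conventions may make the results differ by a unit from published parameter sets.
\begin{verbatim}
from math import floor, sqrt

def derive_parameters(n, kp_over_n, alpha):
    # Transcription (ours) of the relations above: w, k_V' and k_U'
    # as functions of alpha, k'/n and n; rounding may differ by +-1
    # from published figures.
    w = floor(n * (1 - alpha
                   + sqrt((3 * alpha - 1)
                          * (3 * alpha + 4 * kp_over_n - 1)) / 3))
    x = w / n
    kVp = floor((n / 2) * 3 / (3 * alpha - 1)
                * ((1 - x) ** 2 + 0.5 * x ** 2 - 1 / 3))
    kUp = floor((n / 2) * (3 * x - 2))
    return w, kVp, kUp
\end{verbatim}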
\section{Achieving Uniform Domain Sampling}
\label{sec:domSampl}
The following definition will be useful to understand the structure of normalized generalized $(U,U+V)$-codes.
\begin{restatable}{definition}{defVblocks}{\textbf{\textup{(number of $V$ blocks of type I).}}}
\label{def:Vpositions} In a normalized generalized $(U,U+V)$-code of length $n$
associated to $({\mathbf{a}},{\mathbf{b}},{\mathbf{c}},{\mathbf{d}})$,
the number of $V$ blocks of type $I$, which we denote by $n_I$, is defined by:
\begin{displaymath}
n_I \mathop{=}\limits^{\triangle} \left| \left\{ 1 \leq i \leq n/2 : b_id_i=0
\right\}\right|.
\end{displaymath}
\end{restatable}
\begin{remark}
\label{rem:nI}
$n_I$ can be viewed as the number of positions in which a codeword
of the form $({\mathbf{b}}\odot{\mathbf{v}},{\mathbf{d}}\odot{\mathbf{v}})$ is necessarily equal to $0$:
this comes from the fact that on a position where either
$b_i=0$ or $d_i=0$, the other one is necessarily
different from $0$ as $a_id_i - b_ic_i = 1$. In other words we also have
\begin{displaymath}
n_I = \left| \left\{ 1 \leq i \leq n/2 : b_i=0\right\}\right| +
\left| \left\{ 1 \leq i \leq n/2 : d_i=0 \right\}\right|.
\end{displaymath}
\end{remark}
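Computationally, $n_I$ is immediate from $({\mathbf{b}},{\mathbf{d}})$; a one-line sketch of ours, with vectors given as lists of integers modulo $3$:
\begin{verbatim}
def number_of_type_I_blocks(b, d):
    # n_I = #{i : b_i * d_i = 0 in F_3} (see the definition above);
    # since a_i d_i - b_i c_i = 1, at most one of b_i, d_i vanishes.
    return sum(1 for bi, di in zip(b, d) if (bi * di) % 3 == 0)
\end{verbatim}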
We denote by $\Hm_{\textup{pk}}$ the public parity-check matrix of a normalized generalized $(U,U+V)$-code as described in \S \ref{subsec:waveTrap}. It turns out that $\Hm_{\textup{pk}}$ has enough randomness in it to make
the associated syndromes indistinguishable from random syndromes in the strongest possible sense, i.e.
statistically, as the following proposition shows. In other words,
our scheme achieves the Domain Sampling property of
Definition \ref{def:WPS}.
Note that the upper-bound
we give here depends on the number $n_I$ we have just introduced.
\begin{restatable}{proposition}{propoDist}
\label{prop:statDist}
Let $\Dsw{{\mathbf{H}}}$ be the distribution of
${\mathbf{e}}\transpose{{\mathbf{H}}}$ when ${\mathbf{e}}$ is drawn uniformly at random among $S_w$
and let ${\mathcal U}$ be the uniform
distribution over $\mathbb{F}_3^{n-k}$. We have
\begin{displaymath}
\mathbb{E}_{\Hm_{\textup{pk}}} \left( \rho(\Dsw{\Hm_{\textup{pk}}}, {\mathcal U}) \right) \leq \frac{1}{2} \sqrt{\varepsilon} \quad \mbox{where}
\end{displaymath}
$$
\varepsilon = \frac{3^{n-k}}{2^{w}\binom{n}{w}} + 3^{n/2-k_V}\sum_{j=0}^{n/2}\frac{ \uni{q}_1(j)^{2}}{2^{j}\binom{n/2}{j}} + 3^{n/2-k_U}
\sum_{j=0}^{n_I} \frac{\binom{n_I}{j}\binom{n-n_I}{w-j}^{2}}{\binom{n}{w}^{2}2^{j}}
$$
where $\uni{q}_1$ is given in Proposition \ref{propo:qu} in \S\ref{sec:rejSampl}.
\end{restatable}
The proof of this proposition relies, among other things, on the following
variation of the left-over hash lemma (see
\cite{BDKPPS11})
adapted to our case:
here the hash function to which we apply the left-over hash lemma
is defined as $h({\mathbf{e}}) = {\mathbf{e}} \transpose{\Hm}_{\textup{pk}}$. These functions $h$
do not form a universal family of hash functions (essentially
because the distribution of the $\Hm_{\textup{pk}}$'s is not the uniform
distribution over $\mathbb{F}_3^{(n-k)\times n}$). However, in our case we
can still bound $\varepsilon$ by a direct computation.
\begin{restatable}{lemma}{lemleftoverHash} \label{lem:leftOver}
Consider a finite family ${\mathcal H} = (h_i)_{i \in I}$ of functions from a finite set $E$ to a finite set $F$.
Denote by $\varepsilon$ the bias of the collision probability, i.e. the quantity such that
\begin{displaymath}
\mathbb{P}_{h,e,e'}(h(e)=h(e')) = \frac{1}{|F|} (1 + \varepsilon)
\end{displaymath}
where $h$ is drawn uniformly at random in ${\mathcal H}$, $e$ and $e'$ are
drawn uniformly at random in $E$. Let ${\mathcal U}$ be the uniform
distribution over $F$ and ${\mathcal D}(h)$ be the distribution of the
outputs $h(e)$ when $e$ is chosen uniformly at random in $E$. We
have
\begin{displaymath}
\mathbb{E}_h \left( \rho({\mathcal D}(h),{\mathcal U}) \right) \leq \frac{1}{2} \sqrt{\varepsilon}.
\end{displaymath}
\end{restatable}
This lemma is proved in Appendix \S \ref{ss:leftoverhash}.
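As an illustration (ours, not part of the proof), the lemma can be checked empirically on a toy family $h_{{\mathbf{H}}}({\mathbf{e}})={\mathbf{e}}\transpose{{\mathbf{H}}}$ over $\mathbb{F}_3$ with ${\mathbf{H}}$ uniformly random, computing both the average statistical distance and the collision bias exactly for each sampled $h$:
\begin{verbatim}
import itertools, math, random

def check_leftover_lemma(n=6, r=3, trials=100):
    # Monte-Carlo illustration (ours) of the lemma for the toy family
    # h_H(e) = e H^T over F_3; returns (E_h[rho(D(h),U)], sqrt(eps)/2).
    E = list(itertools.product(range(3), repeat=n))
    F = 3 ** r
    coll, dist = 0.0, 0.0
    for _ in range(trials):
        H = [[random.randrange(3) for _ in range(n)] for _ in range(r)]
        counts = {}
        for e in E:
            s = tuple(sum(h * x for h, x in zip(row, e)) % 3
                      for row in H)
            counts[s] = counts.get(s, 0) + 1
        coll += sum(c * c for c in counts.values()) / len(E) ** 2
        dist += 0.5 * (sum(abs(c / len(E) - 1 / F)
                           for c in counts.values())
                       + (F - len(counts)) / F)
    eps = (coll / trials) * F - 1
    return dist / trials, 0.5 * math.sqrt(max(eps, 0.0))
\end{verbatim}
Up to Monte-Carlo fluctuations over the sampled matrices, the first returned value should not exceed the second; for this toy family the bound is in fact close to tight.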
In order to use this lemma to bound the statistical distance we are interested in, we have proved in Appendix \S\ref{lem:syndromeDistribution} the following lemma:
\begin{restatable}{lemma}{lemSyndromeDistribution}
\label{lem:syndromeDistribution}
Assume that ${\mathbf{x}}$ and ${\mathbf{y}}$ are random vectors of $S_w$ that are drawn uniformly at random in this set. We have
$$
\mathbb{P}_{\Hm_{\textup{pk}},{\mathbf{x}},{\mathbf{y}}}
\left( {\mathbf{x}} \transpose{\Hm}_{\textup{pk}} = {\mathbf{y}} \transpose{\Hm}_{\textup{pk}} \right) \leq \frac{1}{3^{n-k}} (1 + \varepsilon) \mbox{ with } \varepsilon \mbox{ given in Proposition \ref{prop:statDist}.} $$
\end{restatable}
\section{Security Proof}
\label{sec:securityProof}
\subsection{Basic Tools}
\subsubsection{Basic Definitions.}
A {\em distinguisher} between two distributions $\mathcal{D}^{0}$ and
$\mathcal{D}^{1}$ over the same space $\mathcal{E}$ is a randomized
algorithm which takes as input an element of $\mathcal{E}$ that
follows the distribution $\mathcal{D}^{0}$ or $\mathcal{D}^{1}$ and
outputs $b \in \{0,1\}$. It is characterized by its advantage:
$
Adv^{\mathcal{D}^{0},\mathcal{D}^{1}}(\mathcal{A}) \mathop{=}\limits^{\triangle}
\mathbb{P}_{\xi \sim \mathcal{D}^{0}}\left( \mathcal{A}(\xi) \mbox{
outputs } 1 \right) - \mathbb{P}_{\xi \sim
\mathcal{D}^{1}}\left(\mathcal{A}(\xi) \mbox{ outputs } 1
\right).
$
\begin{definition}
[Computational Distance and Indistinguishability]
The computational distance between two distributions $\mathcal{D}^{0}$ and $\mathcal{D}^{1}$ in time $t$ is:
\begin{displaymath}
\rho_{c}\left( \mathcal{D}^{0},\mathcal{D}^{1}\right)(t) \mathop{=}\limits^{\triangle}
\mathop{\max}\limits_{ |\mathcal{A}| \leq t} \left\{
Adv^{\mathcal{D}^{0},\mathcal{D}^{1}}(\mathcal{A}) \right\}
\end{displaymath}
where $|\mathcal{A}|$ denotes the running time of $\mathcal{A}$ on
its inputs.
\end{definition}
For signature schemes, one of the strongest security notions is {\em
existential unforgeability under an adaptive chosen message attack}
(EUF-CMA). In this model the adversary has access to signatures
of all messages of its choice and its goal is to produce a valid forgery. A valid
forgery is a message/signature pair $({\mathbf{m}},\sigma)$ such that
$\ensuremath{\mathtt{Vrfy}^{\mathrm{pk}}}({\mathbf{m}},\sigma)=1$ while the signature of ${\mathbf{m}}$ has never been
requested.
\begin{definition}
[EUF-CMA Security] \label{def:EUF-CMA} A forger $\mathcal{A}$
is a $(t,q_{\textup{hash}},q_{\textup{sign}},\varepsilon)$-adversary in \textup{EUF-CMA} against
a signature scheme $\mathcal{S}$ if after at most $q_{\textup{hash}}$ queries to the hash oracle, $q_{\textup{sign}}$
signatures queries and $t$ working time, it outputs a valid forgery
with probability at least $\varepsilon$.
The \textup{EUF-CMA} success probability against $\mathcal{S}$ is:
\begin{displaymath}
Succ_{\mathcal{S} }^{\textup{EUF-CMA}}(t,q_{\textup{hash}},q_{\textup{sign}}) \mathop{=}\limits^{\triangle}
\max \left( \varepsilon \mbox{ } | \mbox{ there exists a }
(t,q_{\textup{hash}},q_{\textup{sign}},\varepsilon) \mbox{-adversary} \right).
\end{displaymath}
\end{definition}
\subsection{Code-Based Problems}
\label{subsec:cbProb}
We introduce the code-based problems that will be
used in the security reduction.
\begin{restatable}{problem}{doom}[\textup{DOOM} -- Decoding One Out of Many]
\label{prob:DOOM}
For ${\mathbf{H}}\in\mathbb{F}_3^{(n-k)\times n}$,
${\mathbf{s}}_{1},\cdots,{\mathbf{s}}_{N} \in \mathbb{F}_3^{n-k}$, integer $w$, find
${\mathbf{e}}\in\mathbb{F}_3^{n}$ and $i \in \IInt{1}{N}$ such that
${\mathbf{e}}\transpose{{\mathbf{H}}}={\mathbf{s}}_i$ and $\wt{{\mathbf{e}}}=w$.
\end{restatable}
We will come back to the best known algorithms
to solve this problem as a function of the distance $w$ in
\S\ref{subsec:messAtt}.
\begin{definition}[One-Wayness of DOOM]
We define the success of an algorithm $\mathcal{A}$ against \ensuremath{\mathrm{DOOM}}\ with the parameters $n,k,N,w$ as:
\begin{align*}
Succ_{\ensuremath{\mathrm{DOOM}}}^{n,k,N,w}\left( \mathcal{A} \right) = \mathbb{P} \big( \mathcal{A}&\left( {\mathbf{H}},{\mathbf{s}}_{1},\cdots,{\mathbf{s}}_{N} \right) \mbox{solution of } \ensuremath{\mathrm{DOOM}} \big)
\end{align*}
where ${\mathbf{H}} \hookleftarrow \mathbb{F}_3^{(n-k)\times n}$, ${\mathbf{s}}_i \hookleftarrow \mathbb{F}_3^{n-k}$ and
the probability is taken over ${\mathbf{H}}$, the ${\mathbf{s}}_i$'s and the internal coins of $\mathcal{A}$.
The computational success in time $t$ of breaking \ensuremath{\mathrm{DOOM}}\ with the parameters $n,k,N,w$ is then defined as:
$$
Succ_{\ensuremath{\mathrm{DOOM}}}^{n,k,N,w}(t) = \mathop{\max}\limits_{|\mathcal{A}|\leq t} \left\{
Succ_{\ensuremath{\mathrm{DOOM}}}^{n,k,N,w}\left( \mathcal{A} \right) \right\}.
$$
\end{definition}
Another problem appears in the security proof: distinguishing
random codes from codes drawn uniformly at random in the family used for public keys
in the signature scheme. In what follows $\Dc_{\textup{pub}}$ denotes the distribution of public keys $\Hm_{\textup{pk}}$,
whereas
$\Dc_{\textup{rand}}$ denotes the uniform distribution over $\mathbb{F}_3^{(n-k_U-k_V)\times n}$.
\subsection{EUF-CMA Security Proof}
\label{sec:securityProof3}
\begin{restatable}{theorem}{secuRed}\textbf{\textup{(Security Reduction)}}.
\label{theo:secRedu}
Let $q_{\textup{hash}}$ (resp. $q_{\textup{sign}}$) be the number of queries to the hash
(resp. signing) oracle. We assume that
$\lambda_{0} = \lambda + 2\log_{2}(q_{\textup{sign}})$ where $\lambda$ is the security parameter of the signature scheme. We have in the random oracle model for all time $t$, $t_{c} = t + O \left( q_{\textup{hash}} \cdot n^{2} \right)$ and $\varepsilon$ given in Proposition \ref{prop:statDist}:
\begin{multline*}
Succ_{\mathcal{S}_{\textup{Wave}}}^{\textup{EUF-CMA}}(t,q_{\textup{hash}},q_{\textup{sign}}) \leq
2 Succ_{\ensuremath{\mathrm{DOOM}}}^{n,k,q_{\textup{hash}},w}(t_{c}) +\rho_{c} \left( \Dc_{\textup{rand}},\Dc_{\textup{pub}} \right)(t_{c}) \\ + q_{\textup{sign}} \rho\left( \mathcal{D}_{w},\mathcal{U}_{w} \right) + \frac{1}{2}q_{\textup{hash}}\sqrt{ \varepsilon } + \frac{1}{2^{\lambda}}
\end{multline*}
where $\mathcal{D}_{w}$ is the output distribution of Algorithm \ref{algo:skeleton} using Algorithms \ref{algo:DV} and \ref{algo:DU} and $\mathcal{U}_w$ is the uniform distribution over $S_w$.
\end{restatable}
\section{Security Assumptions and Parameter Selection}
Our scheme is secure under two security assumptions. One relates to the
hardness of decoding and the other to the indistinguishability of
generalized $(U,U+V)$-codes.
\subsection{Message Attack -- Hardness of Decoding}\label{subsec:messAtt}
Here we are interested in the hardness of the DOOM problem as stated in Problem \ref{prob:DOOM}
for the case $q=3$ when the target weight $w$
is large. This variant of the problem, including the multiple target
(DOOM) aspect, was recently investigated in \cite{BCDL19}. This work
adapted to this setting the best generic decoding techniques
\cite{D91,S88,MMT11,BJMM12} which use the so-called PGE+SS framework
(``Partial Gaussian Elimination and Subset Sum''). It also uses Wagner's
generalized birthday algorithm \cite{W02} and the representation
technique \cite{HJ10}.
\subsection{Key Attack -- Indistinguishability of generalized $(U,U+V)$-Codes}\label{subsec:keyAtt}
Here we are interested in the hardness of distinguishing
random codes from permuted normalized generalized $(U,U+V)$-codes. All the proofs of this subsection are given in Appendix \ref{sec:keyAtt}.
A normalized generalized $(U,U+V)$-code where $U$ and $V$ are random
seems very close to a random linear code.
There is for instance only a
very slight difference between the weight distribution of a random
linear code and the weight distribution of a random normalized
generalized $(U,U+V)$-code of the same length and dimension. This
slight difference happens for small and large weights and is due to
codewords where ${\mathbf{v}} = \mathbf{0}$ or ${\mathbf{u}} = \mathbf{0}$ which are of
the form $({\mathbf{a}} \odot {\mathbf{u}}, {\mathbf{c}} \odot {\mathbf{u}})$ where ${\mathbf{u}}$ belongs to $U$ or
codewords of the form $({\mathbf{b}} \odot {\mathbf{v}},{\mathbf{d}} \odot {\mathbf{v}})$ where ${\mathbf{v}}$ belongs to
$V$ as shown by the following proposition:
\begin{restatable}{proposition}{propdensity}
\label{prop:density}
Assume that we choose a normalized generalized $(U,U+V)$-code
over $\mathbb{F}_3$ with a number $n_I$ of $V$ blocks of type
I by picking the parity-check matrices of $U$ and $V$
uniformly at random among the ternary matrices of size
$(n/2-k_U) \times n/2$ and $(n/2-k_V) \times n/2$
respectively. Let $a_{({\mathbf{u}},{\mathbf{v}})}(z)$,
$a_{({\mathbf{u}},\mathbf{0})}(z)$ and $a_{(\mathbf{0},{\mathbf{v}})}(z)$ be the
expected number of codewords of weight $z$ that are
respectively in the normalized generalized $(U,U+V)$-code, of
the form $({\mathbf{a}}\odot{\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}})$ where ${\mathbf{u}}$ belongs to $U$ and
of the form $({\mathbf{b}} \odot {\mathbf{v}},{\mathbf{d}}\odot {\mathbf{v}})$ where ${\mathbf{v}}$ belongs to $V$.
These numbers are given for even $z$ in $\llbracket 0,n\rrbracket$ by
\begin{displaymath}
a_{({\mathbf{u}},\mathbf{0})}(z) = \frac{\binom{n/2}{z/2}2^{z/2}}{3^{n/2 - k_U}} \quad ; \quad a_{(\mathbf{0},{\mathbf{v}})}(z) = \frac{1}{3^{n/2 - k_V}}\mathop{\sum}\limits_{\substack{j=0 \\
j \text{ even}}}^{z} \binom{n_I}{j}\binom{n/2 -n_I}{\frac{z-j}{2}}2^{(z+j)/2}
\end{displaymath}
$$
a_{({\mathbf{u}},{\mathbf{v}})}(z) = a_{({\mathbf{u}},\mathbf{0})}(z) + a_{(\mathbf{0},{\mathbf{v}})}(z) + \frac{1}{3^{n - k_U - k_V}} \left( \binom{n}{z}2^{z} - \binom{n/2}{z/2}2^{z/2} -\mathop{\sum}\limits_{\substack{j=0 \\
j \text{ even}}}^{z} \binom{n_I}{j}\binom{n/2 - n_I}{\frac{z-j}{2}}2^{(z+j)/2} \right)
$$
and for odd $z \in \llbracket 0,n\rrbracket$ by
\begin{displaymath}
a_{({\mathbf{u}},\mathbf{0})}(z) = 0 \quad ; \quad a_{(\mathbf{0},{\mathbf{v}})}(z) = \frac{1}{3^{n/2 - k_V}}\mathop{\sum}\limits_{\substack{j=0 \\
j \text{ odd} }}^{z} \binom{n_I}{j}\binom{n/2 - n_I}{\frac{z-j}{2}}2^{(z+j)/2}
\end{displaymath}
$$
a_{({\mathbf{u}},{\mathbf{v}})}(z) = a_{(\mathbf{0},{\mathbf{v}})}(z) + \frac{1}{3^{n - k_U - k_V}} \left( \binom{n}{z}2^{z} - \mathop{\sum}\limits_{\substack{j=0 \\
j \text{ odd}}}^{z} \binom{n_I}{j}\binom{n/2 - n_I}{\frac{z-j}{2}}2^{(z+j)/2} \right)
$$
On the other hand, when we choose a linear code of length $n$ over $\mathbb{F}_3$ with a random parity-check matrix of size $(n-k_U-k_V)\times n$
chosen uniformly at random, then the expected number $a(z)$ of codewords of weight $z>0$ is given by
\begin{displaymath}
a(z) = \frac{\binom{n}{z}2^{z}}{3^{n-k_U-k_V}}.
\end{displaymath}
\end{restatable}
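The expected weight counts of Proposition \ref{prop:density} are straightforward to evaluate. Below is a transcription of ours of the even/odd formulas; it relies on the fact that Python's \texttt{math.comb} returns $0$ when the lower index exceeds the upper.
\begin{verbatim}
from math import comb

def expected_counts(z, n, kU, kV, nI):
    # Transcription (ours) of the displayed formulas: expected numbers
    # of weight-z codewords a_{(u,0)}(z), a_{(0,v)}(z) and, for a
    # random [n, kU + kV] code over F_3, a(z).
    a_u = (comb(n // 2, z // 2) * 2 ** (z // 2) / 3 ** (n // 2 - kU)
           if z % 2 == 0 else 0.0)
    a_v = sum(comb(nI, j) * comb(n // 2 - nI, (z - j) // 2)
              * 2 ** ((z + j) // 2)
              for j in range(z % 2, z + 1, 2)) / 3 ** (n // 2 - kV)
    a_rand = comb(n, z) * 2 ** z / 3 ** (n - kU - kV)
    return a_u, a_v, a_rand
\end{verbatim}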
We have plotted in Figure \ref{fig:density} the normalized logarithm of the density of codewords of the form $({\mathbf{a}}\odot{\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}})$ and $({\mathbf{b}}\odot{\mathbf{v}},{\mathbf{d}}\odot{\mathbf{v}})$ of relative
{\em even} weight $x \mathop{=}\limits^{\triangle} \frac{z}{n}$ against $x$ in the case where $U$ is of rate $\frac{k_U}{n/2}=0.7$,
$V$ is of rate $\frac{k_V}{n/2}=0.3$ and $\frac{n_I}{n/2} = \frac{1}{2}$. These two relative densities are defined respectively by
\begin{displaymath}
\alpha_{{\mathbf{u}}}(z/n) \mathop{=}\limits^{\triangle} \frac{\log_2(a_{({\mathbf{u}},\mathbf{0})}(z)/a_{({\mathbf{u}},{\mathbf{v}})}(z))}{n} \quad ; \quad
\alpha_{{\mathbf{v}}}(z/n) \mathop{=}\limits^{\triangle} \frac{\log_2(a_{(\mathbf{0},{\mathbf{v}})}(z)/a_{({\mathbf{u}},{\mathbf{v}})}(z))}{n}
\end{displaymath}
We see that for a relative weight $z/n$ below approximately $0.26$ almost all the codewords are of the form $({\mathbf{a}}\odot{\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}})$.
\begin{figure}
\centering
\includegraphics[scale = 0.2,height=6cm]{density.png}
\caption{$\alpha_{{\mathbf{u}}}(z/n)$ and $\alpha_{{\mathbf{v}}}(z/n)$ against $x \mathop{=}\limits^{\triangle} \frac{z}{n}$.\label{fig:density}}
\end{figure}
Since the weight distribution is invariant under permutation of the positions, this slight difference also survives
in the permuted version of the normalized generalized $(U,U+V)$-code. These considerations lead to the
best attack we have found for recovering the structure of a permuted normalized generalized $(U,U+V)$-code.
It consists in applying known algorithms for recovering low weight codewords in a linear code,
running such an algorithm until we obtain either a permuted $({\mathbf{a}}\odot{\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}})$ codeword where ${\mathbf{u}}$ is in $U$ or
a permuted $({\mathbf{b}}\odot{\mathbf{v}},{\mathbf{d}}\odot{\mathbf{v}})$ codeword where ${\mathbf{v}}$ belongs to $V$.
The rationale behind this algorithm is that the
density of codewords of the form $({\mathbf{a}}\odot{\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}})$ or $({\mathbf{b}}\odot{\mathbf{v}},{\mathbf{d}}\odot{\mathbf{v}})$ gets bigger as the weight of the codeword gets smaller.
Once we have such a codeword we can bootstrap from there,
very similarly to what has been done in \cite[Subs. 4.4]{OT11}.
Note that this attack is actually very close in spirit to the attack that was devised on the KKS signature scheme \cite{OT11}.
In essence, the attack against the KKS scheme really amounts to recovering the support of the $V$ code.
The difference with the KKS scheme is that the support of $V$ is much bigger in our case. As explained in the conclusion of \cite{OT11}, the attack against the KKS scheme has in essence
an exponential complexity. This exponent becomes really prohibitive in our case when the parameters of $U$ and $V$
are chosen appropriately, as we will now explain.
Let us first introduce a notation that will be useful in what follows.
\newline
{\bf Punctured Code.} For a
subset $\mathcal{I} \subset \llbracket 1,n\rrbracket$ and a code $\mathcal{C}$ of length $n$, we
denote by $\punc_{\mathcal{I}}(\mathcal{C})$, the code $\mathcal{C}$ punctured in $\mathcal{I}$, namely
$\{{\mathbf{c}}_{\bar{\mathcal{I}}}=(c_j)_{j \in \llbracket 1,n\rrbracket \setminus \mathcal{I}}:{\mathbf{c}} \in
\mathcal{C}\}$.
In other words, the set of vectors obtained by deleting in the
codewords of $\mathcal{C}$ the positions that belong to $\mathcal{I} $.
\subsubsection{Recovering the $U$ Code up to Permutation.}
We consider here the permuted code
\begin{displaymath}
U' \mathop{=}\limits^{\triangle} ({\mathbf{a}}\odot U,{\mathbf{c}}\odot U){\mathbf{P}} = \{({\mathbf{a}}\odot {\mathbf{u}},{\mathbf{c}}\odot{\mathbf{u}}){\mathbf{P}}: {\mathbf{u}} \in U\}.
\end{displaymath}
The attack in this case consists in recovering a basis of $U'$. Once this is done, it is easy to recover the $U$ code up to permutation by matching the pairs of coordinates which are either always equal or always sum to $0$ in $U'$. The basic algorithm for recovering the code $U'$ is given in Algorithm \ref{algo:ComputeU}.
\begin{algorithm}[htbp]
\textbf{Parameters: } (i) $\ell$ : small integer (typically $\ell \leqslant 40$),\\
(ii) $p$ : very small integer (typically $1 \leqslant p
\leqslant 10$).\\
{\bf Input:} (i) $\Cc_{\text{pk}}$ the public code used for verifying signatures.\\
(ii) $N$ a certain number of iterations\\
{\bf Output:} an independent set of elements in $U'$
\begin{algorithmic}[1]
\Function{ComputeU}{$\Cc_{\text{pk}}$,$N$}
\For{$i=1,\dots,N$}
\State $B \leftarrow \emptyset$
\State Choose a set $\mathcal{I}\subset \llbracket 1,n\rrbracket$ of size $n-k-\ell$ uniformly at random
\State ${\mathcal L} \leftarrow$ \Call{Codewords}{$\punc_{\mathcal{I}}(\Cc_{\text{pk}}),p$} \label{l:codewords}
\ForAll{${\mathbf{x}} \in {\mathcal L}$}
\State ${\mathbf{x}} \leftarrow$ \Call{Complete}{${\mathbf{x}},\mathcal{I},\Cc_{\text{pk}}$}
\If{\Call{CheckU}{${\mathbf{x}}$}}
\State add ${\mathbf{x}}$ to $B$ if ${\mathbf{x}} \notin \langle B \rangle$
\EndIf
\EndFor
\EndFor
\State \Return $B$
\EndFunction
\end{algorithmic}
\caption{\textsc{ComputeU}: algorithm that computes a set of independent elements in $U'$.} \label{algo:ComputeU}
\end{algorithm}
It uses other auxiliary functions
\begin{itemize}
\item \textsc{Codewords}$(\punc_{\mathcal{I}}(\Cc_{\text{pk}}),p)$ which computes all (or a big fraction of) codewords of weight $p$ of the punctured public code
$\punc_{\mathcal{I}}(\Cc_{\text{pk}})$. All modern \cite{D91,FS09,MMT11,BJMM12,MO15} algorithms for decoding linear codes perform such
a task in their inner loop.
\item \textsc{Complete}$({\mathbf{x}},\mathcal{I},\Cc_{\text{pk}})$ which computes the codeword ${\mathbf{c}}$ in $\Cc_{\text{pk}}$ such that its restriction outside $\mathcal{I}$ is equal to ${\mathbf{x}}$.
\item \textsc{CheckU}$({\mathbf{x}})$ which checks whether ${\mathbf{x}}$ belongs to $U'$.
\end{itemize}
\subsubsection{Choosing $N$ Appropriately.} Let us first analyse how we have to choose $N$ such that
\textsc{ComputeU} returns $\Omega(1)$ elements. This is essentially
the analysis which can be found in \cite[\S 5.2]{OT11}.
\begin{restatable}{proposition}{proporecovU}\label{propo:recovU}
The probability ${P_{\text{succ}}}$ that one iteration of the for loop (Instruction 2) in \textsc{ComputeU}
adds elements to the list $B$ is lower-bounded by
\begin{equation}
{P_{\text{succ}}} \geq \sum_{z=0}^{n/2} \frac{\binom{n/2}{z}\binom{n/2-z}{k+\ell-2z}2^{k+\ell-2z}}{\binom{n}{k+\ell}} \mathop{\max}\limits_{i=0}^{\lfloor p/2 \rfloor} f\left(\frac{\binom{k+\ell-2z}{p-2i} \binom{z}{i}2^{p-i} }{3^{\max(0,k+\ell-z-k_U)}}\right)
\end{equation}
where $f$ is the function
defined by
$f(x) \mathop{=}\limits^{\triangle} \max \left(x(1-x/2),1-\frac{1}{x} \right)$.
Algorithm \ref{algo:ComputeU} returns a non-empty list with probability $\Omega(1)$ when $N$ is chosen as
$N = \Omega\left( \frac{1}{{P_{\text{succ}}}}\right)$.
\end{restatable}
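The lower bound of Proposition \ref{propo:recovU} is a finite sum; here is a direct transcription of ours, with the inner maximum over $i$ written out explicitly.
\begin{verbatim}
from math import comb

def f(x):
    # f(x) = max(x(1 - x/2), 1 - 1/x), as in the proposition.
    return max(x * (1 - x / 2), 1 - 1 / x) if x > 0 else 0.0

def p_succ_U(n, k, kU, ell, p):
    # Our transcription of the lower bound on P_succ, with the
    # maximum over i = 0, ..., floor(p/2) made explicit.
    total = 0.0
    for z in range(n // 2 + 1):
        kk = k + ell - 2 * z
        if kk < 0:
            continue
        outer = (comb(n // 2, z) * comb(n // 2 - z, kk) * 2 ** kk
                 / comb(n, k + ell))
        if outer == 0.0:
            continue
        denom = 3 ** max(0, k + ell - z - kU)
        best = max(f(comb(kk, p - 2 * i) * comb(z, i)
                     * 2 ** (p - i) / denom)
                   for i in range(p // 2 + 1))
        total += outer * best
    return total
\end{verbatim}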
\subsubsection{Complexity of Recovering a Permuted Version of $U$.}
The complexity of a call to \textsc{ComputeU} can be estimated as follows. We denote the complexity of
computing the list of codewords of weight $p$ in a
code of length $k+\ell$ and dimension $k$ by $C_1(p,k,\ell)$. It depends on the particular algorithm used here.
For more details see \cite{D91,FS09,MMT11,BJMM12,MO15}. This is the complexity of the call \textsc{Codewords}$(\punc_{\mathcal{I}}(\Cc_{\text{pk}}),p)$ in Step
\ref{l:codewords} in Algorithm \ref{algo:ComputeU}. The complexity of \textsc{ComputeU} and hence the complexity of recovering a permuted version of
$U$ is clearly lower bounded by
$\Omega\left( \frac{C_1(p,k,\ell)}{{P_{\text{succ}}}} \right)$.
It turns out that the whole complexity of recovering
a permuted version of $U$ is actually of this order, namely $ \Theta\left( \frac{C_1(p,k,\ell)}{{P_{\text{succ}}}} \right)$. This can be achieved by a combination of two techniques:
\begin{itemize}
\item Once a non-zero element of $U'$ has been identified, it is much easier to find other ones. This uses one of the tricks for breaking the KKS scheme
(see \cite[Subs. 4.4]{OT11}). The point is the following: if we run the procedure \textsc{ComputeU} again, but this time choosing the puncturing set $\mathcal{I}$
so that it contains the support of the codeword that we already found, then the number $N$ of iterations that we have to perform until finding a new element is negligible
compared to the original value of $N$.
\item The call to \textsc{CheckU} can be implemented in such a way that the additional complexity coming from all the calls to this function is of the same order as the $N$ calls
to \textsc{Codewords}. The strategy to adopt depends on the values of the dimensions $k$ and $k_U$. In certain cases, it is easy to detect such codewords since they have
a typical weight that is significantly smaller than that of the other codewords. In more complicated cases, we may first check the weight of ${\mathbf{x}}$: if it is
above some prescribed threshold, we decide that it is not in $U'$; if it is below the threshold, we flag it as a suspicious candidate and then use the previous trick.
Namely, we check whether
the support of the codeword ${\mathbf{x}}$ can be used to find other suspicious candidates much more quickly than by performing $N$ calls to \textsc{CheckU}.
\end{itemize}
To keep the length of this paper within reasonable limits, we omit the analysis of those steps and just use
the aforementioned lower bound on the complexity of recovering a permuted version of $U$.
\subsubsection{Recovering the $V$ Code up to a Permutation}
\label{ss:V}
We consider here the permuted code
\begin{displaymath}
V' \mathop{=}\limits^{\triangle} ({\mathbf{b}}\odot V,{\mathbf{d}}\odot V){\mathbf{P}} = \{ ({\mathbf{b}}\odot {\mathbf{v}},{\mathbf{d}}\odot {\mathbf{v}}){\mathbf{P}} \mbox{ where }{\mathbf{v}} \in V \}.
\end{displaymath}
The attack in this case consists in recovering a basis of $V'$. Once this is achieved, the support $\Sp(V')$ of $V'$ can easily be obtained. Recall that this is the set of positions for which there exists at least one codeword
of $V'$ that is non-zero in this position. This makes it easy to recover the code $V$ up to some permutation. The algorithm for recovering
$V'$ is the same as the algorithm for recovering $U'$; we nevertheless call the
associated function \textsc{ComputeV} since the two differ in the
choice of $N$. Indeed, the analysis is slightly different.
\subsubsection{Choosing $N$ Appropriately.} As in the previous subsection let us analyse how we have to choose $N$ in order that \textsc{ComputeV} returns
$\Omega(1)$ elements of $V'$.
We have in this case the following result.
\begin{restatable}{proposition}{proporecovV}\label{propo:recovV}
The probability ${P_{\text{succ}}}$ that one iteration of the for loop (Instruction 2) in \textsc{ComputeV} adds elements to the list $B$ is lower-bounded by
\begin{multline*}
{P_{\text{succ}}} \geq \sum_{z=0}^{\min(n-k-\ell,n - n_I)}\sum_{m = 0}^{n/2-n_I}\frac{\binom{\frac{n}{2} - n_I}{m}\binom{n_I}{n-k-\ell-z}}{\binom{n}{n-k-\ell}}\max_{i=0}^{\lfloor p/2 \rfloor}f\left(\frac{\binom{n - n_I - z - 2m}{p-2i}\binom{m}{i}2^{p-i} }{3^{\max(0,n - n_I - z - m - k_V)}}\right) \\ \sum_{j = 0}^{n/2-n_{I} - m} \binom{n/2 - n_I-m}{j}2^{j}\binom{n_I}{z-n + 2n_I + 2m + j}
\end{multline*}
where $f$ is the function
defined by
$f(x) \mathop{=}\limits^{\triangle} \max \left(x(1-x/2),1-\frac{1}{x} \right)$.
\textsc{ComputeV} returns a non-zero list with probability $\Omega(1)$ when $N$ is chosen as
$N = \Omega\left( \frac{1}{{P_{\text{succ}}}}\right)$.
\end{restatable}
\subsubsection{Complexity of Recovering a Permuted Version of $V$.} As for recovering the permuted $U$ code, the complexity for recovering the permuted $V$ is of order
$\Omega\left( \frac{C_1(p,k,\ell)}{{P_{\text{succ}}}} \right)$.
\subsubsection{Distinguishing a Generalized $(U,U+V)$-Code}
In the second case it is not clear that, from the single knowledge of $V'$ and a permuted version of $V$, we are able to find a permutation of the positions
which gives to the whole code the structure of a generalized $(U,U+V)$-code. However, in both cases a single successful call to
\textsc{ComputeV} (resp. \textsc{ComputeU}) really distinguishes the code from a random code
of the same length and dimension. In other words, we have a distinguishing attack whose complexity is given by
the following proposition.
\begin{restatable}{proposition}{prcomplexityUV}
\label{pr:complexity_U_V}
Algorithm \ref{algo:ComputeU} leads to a distinguishing attack whose complexity is given by
$$\min\left(O\left(\min_{p,\ell}C_U(p,\ell)\right),O\left(\min_{p,\ell}C_V(p,\ell)\right)\right)$$ with
\begin{equation}
C_U(p,\ell) \mathop{=}\limits^{\triangle} \frac{C_1(p,k,\ell)}{\mathop{\sum}\limits_{z=0}^{n/2} \frac{\binom{n/2}{z}\binom{n/2-z}{k+\ell-2z}2^{k+\ell-2z}}{\binom{n}{k+\ell}} \mathop{\max}\limits_{i=0}^{\lfloor p/2 \rfloor}f\left(\frac{\binom{k+\ell-2z}{p-2i} \binom{z}{i}2^{p-i} }{3^{\max(0,k+\ell-z-k_U)}}\right)}\label{eq:secU}
\end{equation}\vspace{-5mm}
\begin{multline}\label{eq:secV}
C_V(p,\ell) \mathop{=}\limits^{\triangle}\\ \frac{C_1(p,k,\ell)}{\mathop{\sum}_{\mathcal I}\frac{\binom{\frac{n}{2} - n_I}{m}\binom{n_I}{n-k-\ell-z}}{\binom{n}{n-k-\ell}}\mathop{\max}\limits_{i=0}^{\lfloor p/2 \rfloor}f\left(\frac{\binom{n - n_I - z - 2m}{p-2i}\binom{m}{i}2^{p-i} }{3^{\max(0,n - n_I - z - m - k_V)}}\right)\binom{n/2 - n_I-m}{j}2^{j}\binom{n_I}{z-n + 2n_I + 2m + j}.}
\end{multline}
where $C_1(p,k,\ell)$ is the complexity of computing a constant
fraction (say half of them) of the codewords of weight $p$ in a code
of length $k+\ell$ and dimension $k$, and $f$ is the function
$f(x) \mathop{=}\limits^{\triangle} \max \left(x(1-x/2),1-\frac{1}{x} \right)$. The sum in
the denominator of \eqref{eq:secV} is over the domain
${\mathcal I}=\{(z,m,j)\mid 0\le z\le\min(n-k-\ell,n-n_I), 0\le m
\le n/2-n_I,0\le j\le n/2-n_{I} - m\}$.
\end{restatable}
We explain in Appendices \S\ref{app:CU} and \S\ref{app:CV} how to estimate $C_U$ and $C_V$.
\subsection{Parameter Selection}\label{ss:parameter}
With proper rejection sampling, the security of Wave provably reduces
to the two previous hard computational problems. The best known solvers, presented
above, both have an exponential complexity. For a given set of system
parameters $(n,w,k_U,k_V,k=k_U+k_V)$, their asymptotic complexities
can be expressed as
\begin{itemize}
\item for the message attack, $2^{c_Mn(1+o(1))}$ where $c_M$ is a function of
$w/n$ and $k/n$
\item for the key attack, $2^{c_Kn(1+o(1))}$ where $c_K$ is a function of
$k_U/n$ and $k_V/n$
\end{itemize}
Using the relations of \S\ref{sec:ch_params}, both $c_M$ and $c_K$ can
be expressed as functions of the code rate $R=k/n$ and of the
parameter $\alpha$. Minimizing the public key size under the
constraint $c_M(R,\alpha)=c_K(R,\alpha)$, we obtain
\begin{displaymath}
R = 0.633, \alpha=0.590656, c_M\approx c_K\approx 0.0141.
\end{displaymath}
For $\lambda$ bits of (classical) security we get ($K$ the key size
in bits):
\begin{displaymath}
n = \frac{\lambda}{0.0141}, ~~ w = 0.9302\, n, ~~ k_U = 0.8259\,
\frac{n}{2}, ~~ k_V = 0.4402\, \frac{n}{2}, ~~ K = 0.368\, n^2
\end{displaymath}
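These relations instantiate as follows; the quick numerical check below (ours) recovers the figures given next for $\lambda = 128$.
\begin{verbatim}
def wave_sizes(lam=128):
    # Numerical instantiation (ours) of the displayed relations.
    n = round(lam / 0.0141)             # 9078 for lam = 128
    w = round(0.9302 * n)               # 8444
    kU = round(0.8259 * n / 2)          # 3749
    kV = round(0.4402 * n / 2)          # 1998
    K_megabytes = 0.368 * n ** 2 / 8e6  # about 3.8 megabytes
    return n, w, kU, kV, K_megabytes
\end{verbatim}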
To reach 128 bits of security we obtain $n=9078$, $w=8444$, $k_U=3749$,
$k_V=1998$ for a public key size of $3.8$ megabytes. We also checked that
the other terms in the security reduction do not interfere here. For instance, we recommend
choosing the vectors ${\mathbf{a}}, {\mathbf{b}}, {\mathbf{c}}, {\mathbf{d}}$ uniformly at random among the choices
that give a $\varphi$ that is $UV$-normalized, meaning that for all $i$ in $\IInt{1}{n/2}$ we should have
$a_id_i-b_ic_i=1$ and
$a_i c_i \neq 0$. We reject choices that lead to a number $n_I$ of $V$ blocks of type I
that is not close to its expected value $\mathbb{E}(n_I)=n/6$. By doing so we can control
the parameter $\varepsilon$ giving an upper-bound on $\mathbb{E}_{\Hm_{\textup{pk}}} \left( \rho(\Dsw{\Hm_{\textup{pk}}}, {\mathcal U}) \right)$.
In the case $n_I=n/6$ this upper-bound is of order
$\approx 2^{-254}$.
\subsection{Implementation}
The scheme was implemented in SageMath as a proof of concept. For the
parameters $(n,w)=(9078,8444)$ each signature is produced in a few
seconds. This gives a compelling argument to debunk the claim made in
\cite{BP18} of breaking Wave. The algorithm of \cite{BP18} collects a set
${\mathcal S}$ of signatures, measures for each pair of indices $(i,j)$ the
quantity
$\left|\{{\mathbf{e}}\in{\mathcal S}\mid e_i=-e_j\}\right| - \left|\{{\mathbf{e}}\in{\mathcal S}\mid e_i=e_j\}\right|$ and
selects for each $i$ the pair $(i,j)$ which maximizes this quantity. A
tentative secret key is then derived from the selected pairs. The
first version of this paper \cite{BP18a} proposed an algorithm that
recovers the secret key when rejection sampling was left out from the
$(U,U+V)$-decoder. It uses information leakage from a few hundred
signatures to achieve its purpose. The authors of \cite{BP18a} were
told that the rejection sampling step was critical to ensure uniformly
distributed signatures over $S_w$ and thus resistance against leakage
attack. Subsequent versions of \cite{BP18} claimed that their
algorithm also worked with the rejection sampling step. There was no
implementation of Wave at that time to give a practical refutation of
this conjecture. We have now tested our implementation against the
algorithm given in \cite{BP18}: with a set of $25\,000$ properly
generated signatures the algorithm failed, as expected, to recover the
secret key.
\section{Concluding Remarks and Further Work}\label{sec:conclusion}
We have presented Wave, the first code-based ``hash-and-sign'' signature
scheme which strictly follows the GPV strategy \cite{GPV08}. This
strategy provides a very high level of security but, because of the
multiple constraints it imposes, very few schemes have managed to comply
with it. For instance, only one such scheme based on hard lattice problems
\cite{FHKLPPRSWZ} was proposed to the recent NIST standardization
effort.
Our scheme is secure under two assumptions from coding theory. Both
of those assumptions relate closely to hard decoding problems. Using
rejection sampling, we have shown how to efficiently avoid key leakage
from any number of signatures. The main purpose of our work was to
propose this new scheme and assess its security. Still, it has a few
issues and extensions that are of interest.
\smallskip {\noindent \em The Far Away Decoding Problem.} The message security of
Wave{} relates to the hardness of finding a codeword {\em far} from a
given word. A recent work \cite{BCDL19} adapts the best ISD techniques
for low weight \cite{MMT11,BJMM12} and goes even further with a higher
order generalized birthday algorithm \cite{W02}.
Interestingly enough, in the non-binary case, this work gives a worst case
exponent for the far away codeword that is significantly larger than the close codeword
worst case exponent. This seems to point to the fact that the far away codeword problem may even be more difficult to solve than the
close codeword problem. This raises the issue of obtaining code-based primitives with better parameters
that build upon the far away codeword rather than on the usual close codeword problem.
\smallskip {\noindent \em Distinguishability.} Deciding whether a
matrix is a parity check matrix of a generalized $(U,U+V)$-code is also a
new problem. As shown in \cite{DST17b} it is hard in the worst case
since the problem is NP-complete. In the binary case, $(U,U+V)$ codes have
a large hull dimension for some set of parameters which are precisely
those used in \cite{DST17b}. In the ternary case the normalized
generalized $(U,U+V)$-codes do not suffer from this flaw. The freedom of
the choice on vectors ${\mathbf{a}},{\mathbf{b}},{\mathbf{c}}$ and ${\mathbf{d}}$ is very likely to make
the distinguishing problem much harder for generalized $(U,U+V)$-codes
than for plain $(U,U+V)$-codes. Coming up with non-metric based
distinguishers in the generalized case seems a tantalizing problem
here.
\smallskip{\noindent \em On the Tightness of the Security Reduction.}
It could be argued that one of the reasons why we obtain a tight
security reduction is that we reduce to the multiple-instance
version of the decoding problem, namely DOOM, instead of the
decoding problem itself. This is true to some extent; however, this
problem is as natural as the decoding problem itself. It has already
been studied in some depth \cite{S11}, and the decoding techniques for
linear codes have a natural extension to DOOM, as noticed in
\cite{S11}. We also note that with our approach, where a message has
many possible signatures, we avoid the tightness impossibility results
given, for instance, in \cite{BJLS16}.
\smallskip
{\noindent \em Rejection Sampling.} Rejection sampling in our
algorithm is relatively unobtrusive: a rejection every few signatures
with a crude tuning of the decoder. We believe that it can be further
improved. Our decoding has two steps, each parametrized by a
weight distribution which conditions the output weight distribution. We
believe that those distributions can be tuned to
reduce the probability of rejection to an arbitrarily small
value. This task requires a better understanding of the distributions
involved. It could offer an interesting trade-off in which the
designer/signer would have to precompute and store a set of
distributions but in exchange would obtain a signing algorithm that
emulates a uniform distribution without rejection sampling.
|
1,116,691,497,650 | arxiv | \section{\protect\bigskip Introduction and main results\label{intro}}
Quadratic differentials appear in many areas of mathematics and mathematical
physics, such as orthogonal polynomials, moduli spaces of algebraic curves,
univalent functions, and the asymptotic theory of linear ordinary differential
equations.
One of the most common problems in the study of a given quadratic
differential is the existence or not of its short trajectories. In this
note, we answer this question under suitable assumptions.
In section \ref{app}, we present new proofs of the existence of short
trajectories of quadratic differentials related to generalized Laguerre and
Jacobi polynomials with varying parameters.
Let $\Omega $ be a non-empty connected subset of $\mathbb{C}$, and let $Q\left( z\right) =\prod_{k=1}^{3}\left( z-a_{k}\right) ^{m_{k}}$ be
a polynomial with simple or double zeros ($m_{k}\in \left\{
1,2\right\} $). Let $a,b:\Omega \longrightarrow \mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $ be two continuous functions
such that
\begin{equation}
\forall t\in \Omega ,\quad a\left( t\right) \neq b\left( t\right) . \label{1}
\end{equation}
We consider the families of rational and polynomial functions $R_{t}$ and
$P_{t}$:
\begin{eqnarray*}
R_{t}\left( z\right) &=&\frac{\left( z-a\left( t\right) \right) \left(
z-b\left( t\right) \right) }{Q\left( z\right) }, \\
P_{t}\left( z\right) &=&\left( z-a\left( t\right) \right) \left( z-b\left(
t\right) \right) Q\left( z\right) .
\end{eqnarray*}
We denote by $\mathcal{J}_{a\left( t\right) ,b\left( t\right) }$ the set of all
Jordan arcs in $\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $ joining $a\left( t\right) $
and $b\left( t\right) ,$ and we suppose that there exists a continuous
function (in the Hausdorff metric)
\begin{equation*}
\begin{array}{cc}
\Phi :\Omega \longrightarrow \mathcal{J}_{a\left( t\right) ,b\left( t\right)
}, & t\longmapsto \phi _{t},
\end{array}
\end{equation*}
such that
\begin{equation}
\phi _{t}\left( 0\right) =a\left( t\right) ,\quad \phi _{t}\left( 1\right)
=b\left( t\right) . \label{2}
\end{equation}
We assume that, for some choice of branches of the square roots $\sqrt{R_{t}\left( z\right) }$ and $\sqrt{P_{t}\left( z\right) }$, the arc $\phi _{t}$
satisfies
\begin{eqnarray}
\Re \int_{\phi _{t}}\sqrt{R_{t}\left( z\right) }dz &=&0; \label{3} \\
\Re \int_{\phi _{t}}\sqrt{P_{t}\left( z\right) }dz &=&0. \label{4}
\end{eqnarray}
We consider the quadratic differentials
\begin{eqnarray*}
\varpi \left( R_{t},z\right) &=&-R_{t}\left( z\right) dz^{2}, \\
\varpi \left( P_{t},z\right) &=&-P_{t}\left( z\right) dz^{2}.
\end{eqnarray*}
Then, the following results hold.
\begin{proposition}
\label{rational}Under assumptions (\ref{1}), (\ref{2}), and (\ref{3}),
either for every $t\in \Omega $ there exists exactly one short trajectory of
the quadratic differential $\varpi \left( R_{t},z\right) $ that connects
$a\left( t\right) $ and $b\left( t\right) $ and is homotopic to $\phi _{t}$ in
$\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $, or no
such trajectory exists for any $t\in \Omega .$
\end{proposition}
\begin{proposition}
\label{polyn}With assumptions (\ref{1}),(\ref{2}), and (\ref{4}), the set of
all $t\in \Omega $ such that $\varpi \left( P_{t},z\right) $ has a short
trajectory connecting $a\left( t\right) $ and $b\left( t\right) $ is a
closed subset of $\Omega .$
\end{proposition}
\section{\protect\bigskip Basics of quadratic differentials}
We first present some basics for quadratic differentials.
\begin{definition}
A rational quadratic differential on the Riemann sphere $\overline{\mathbb{C}}$ is a form $\varpi =\varphi (z)dz^{2}$, where $\varphi $ is a rational
function of a local coordinate $z$. If $z=z(\zeta )$ is a conformal change
of variables, then
\begin{equation*}
\widetilde{\varphi }(\zeta )d\zeta ^{2}=\varphi (z(\zeta ))(dz/d\zeta
)^{2}d\zeta ^{2}
\end{equation*}
represents $\varpi $ in the local parameter $\zeta $.
\end{definition}
The \emph{critical points} of $\varpi $ are its zeros and poles; a critical
point is \emph{finite} if it is a zero or a simple pole; otherwise, it is
\emph{infinite}. All other points of $\overline{\mathbb{C}}$ are called \emph{regular} points.
The horizontal trajectories (or just trajectories) are the zero loci of the
equation
\begin{equation}
\Im \int^{z}\sqrt{\varphi \left( t\right) }dt=\text{\emph{const}},
\label{traj}
\end{equation}
or, equivalently,
\begin{equation*}
\varphi \left( z\right) dz^{2}>0;
\end{equation*}
the vertical trajectories are obtained by replacing $\Im $ by $\Re $ in the
equation above. The horizontal and vertical trajectories of $\varpi $
produce two pairwise orthogonal foliations of the Riemann sphere $\overline{\mathbb{C}}$. A critical trajectory is a trajectory passing through a critical point.
A finite critical trajectory, or \emph{short trajectory}, is a critical
trajectory connecting two finite critical points of $\varpi $; it is
called \emph{unbroken} if it does not pass through any finite critical
point other than its two endpoints; otherwise, we call it \emph{broken}. The
set of finite and infinite critical trajectories of $\varpi $, together with
their limit points (critical points of $\varpi $), is called the \emph{critical graph} of $\varpi $.
Notice that, if $z\left( t\right) ,t\in \mathbb{R},$ is a trajectory of (\ref{traj}), then the function
\begin{equation*}
t\longmapsto \Re \int^{t}\sqrt{\varphi \left( z\left( u\right) \right) }\,
z^{\prime }\left( u\right) du
\end{equation*}
is monotone. For more details, we refer the reader to \cite{Striebel}.
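As a simple illustration of (\ref{traj}), consider the quadratic differential $-dz^{2}$ on $\mathbb{C}$: with the branch $\sqrt{-1}=i$ we get
\begin{equation*}
\Im \int^{z}\sqrt{-1}\,dt=\Im \left( iz\right) =\Re z,
\end{equation*}
so the horizontal trajectories are the vertical lines $\Re z=$\emph{const}; equivalently, $-dz^{2}>0$ forces $dz$ to be purely imaginary. The vertical trajectories are the horizontal lines $\Im z=$\emph{const}.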
The local structure of the trajectories is as follows:
\begin{itemize}
\item At any regular point horizontal (resp. vertical) trajectories look
locally as simple analytic arcs passing through this point, and through
every regular point of $\varpi $ passes a uniquely determined horizontal
(resp. vertical) trajectory of $\varpi ;$ these horizontal and vertical
trajectories are locally orthogonal at this point.
\item From every zero of $\varpi $ with multiplicity $r$, there emanate
$\left( r+2\right) $ horizontal (resp. vertical) trajectories, and the angle
between any two adjacent trajectories equals $\pi /\left( r+2\right) .$
\item At a simple pole there emanates only one trajectory (see Figure \ref{Figa}).
\item At a double pole, the local behavior of the trajectories depends on
the vanishing of the real or imaginary part of the residue; they have either
the radial, the circular or the log-spiral form (Figure \ref{Figb}).
\item At a pole of order $r$ greater than $2,$ there are $\left( r-2\right) $
asymptotic directions (called \emph{critical directions}), spaced with equal
angle $\frac{2\pi }{r-2},$ and a neighborhood $\mathcal{U}$ such that each
trajectory entering $\mathcal{U}$ stays in $\mathcal{U}$ and tends to this
pole in one of the critical directions.
\end{itemize}
\begin{figure}[h]
\centering
\fbox{\includegraphics[
height=2.0903in,
width=3.7048in
]{a.png}
}\caption{Structure of trajectories in a
neighborhood of a simple zero (left), a simple pole (middle), and a 4th-order
pole (right).}
\label{Figa}
\end{figure}
\begin{figure}[h]
\centering
\fbox{\includegraphics[
height=2.0903in,
width=3.7048in
]{b.png}
}\caption{Structure of trajectories in a
neighborhood of a double pole: circular form (left), radial form (middle), and
log-spiral form (right).}
\label{Figb}
\end{figure}
\bigskip The main difficulty in the global behaviour of trajectories comes from the
so-called recurrent trajectories, which are dense in some domains of $\mathbb{C}$; Jenkins' three-pole theorem
asserts that such a situation cannot happen for a quadratic differential
that has at most three poles.
A necessary condition for the existence of a short trajectory connecting two
finite critical points $a$ and $b$ of a quadratic differential $\varphi
\left( z\right) dz^{2}$ is the existence of a Jordan arc $\gamma $ connecting
$a$ and $b$ in $\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $ such that
\begin{equation*}
\Im \int_{\gamma }\sqrt{\varphi \left( t\right) }dt=0,
\end{equation*}
but this condition is not sufficient. Indeed, Figure \ref{Figc}
illustrates the critical graph of the quadratic differential $Q\left(
z\right) =-\left( z^{4}-1\right) dz^{2}$; in particular, there is no short
trajectory connecting the zeros $\pm i.$ However, if $\gamma $ is an
oriented Jordan arc joining $\pm i$ in $\mathbb{C}\setminus \left[ -1,1\right] $, and $\sqrt{z^{4}-1}$ is the branch chosen in
$\mathbb{C}\setminus \left( \left[ -1,1\right] \cup \gamma \right) $ with the condition
$\sqrt{z^{4}-1}\backsim z^{2},z\rightarrow \infty ,$ then from the Laurent
expansion at $\infty $ of $\sqrt{z^{4}-1}$:
\begin{equation*}
\sqrt{z^{4}-1}=z^{2}+\mathcal{\allowbreak O}\left( z^{-2}\right)
,z\rightarrow \infty ,
\end{equation*}
we deduce the residue of $\sqrt{z^{4}-1}$ at $\infty $:
\begin{equation*}
res_{\infty }\left( \sqrt{z^{4}-1}\right) =\allowbreak 0.
\end{equation*}
For $t\in \left[ -1,1\right] \cup \gamma ,$ we denote by $\left( \sqrt{t^{4}-1}\right) _{+}$ and $\left( \sqrt{t^{4}-1}\right) _{-}$ the limits
from the $+$-side and $-$-side respectively. (As usual, the $+$-side of an
oriented curve lies to the left, and the $-$-side lies to the right, if one
traverses the curve according to its orientation.)
Let
\begin{equation*}
I=\int_{-1}^{1}\left( \sqrt{t^{4}-1}\right) _{+}dt+\int_{\gamma }\left(
\sqrt{t^{4}-1}\right) _{+}dt.
\end{equation*}
Since $\left( \sqrt{t^{4}-1}\right) _{+}=-\left( \sqrt{t^{4}-1}\right) _{-}$
for $t\in \left[ -1,1\right] \cup \gamma ,$ we have
\begin{equation*}
2I=\int_{\left[ -1,1\right] \cup \gamma }\left[ \left( \sqrt{t^{4}-1}\right)
_{+}-\left( \sqrt{t^{4}-1}\right) _{-}\right] dt=\oint_{\Gamma _{i,j}\cup
\Gamma _{l,k}}\sqrt{z^{4}-1}dz,
\end{equation*}
where $\Gamma _{i,j}$ and $\Gamma _{l,k}$ are two closed contours encircling
respectively the curve $\left[ -1,1\right] $ and $\gamma $ once in the
clockwise direction. After the contour deformation, we pick up the residue at
$z=\infty .$ We get
\begin{equation*}
I=\frac{1}{2}\oint_{\Gamma _{i,j}\cup \Gamma _{l,k}}\sqrt{z^{4}-1}dz=\pm
i\pi \,res_{\infty }\left( \sqrt{z^{4}-1}\right) =0.
\end{equation*}
On the other hand, it is straightforward that $\Re \int_{-1}^{1}\left( \sqrt{t^{4}-1}\right) _{+}dt=0,$ which implies that
\begin{equation*}
\Re \int_{\gamma }\left( \sqrt{t^{4}-1}\right) _{+}dt=0.
\end{equation*}
\begin{figure}[h]
\centering
\fbox{\includegraphics[
height=2.0903in,
width=3.7048in
]{c.png}
}\caption{Critical graph of the quadratic
differential $Q\left( z\right) =-\left( z^{4}-1\right) dz^{2}.$}
\label{Figc}
\end{figure}
\bigskip The quadratic differential $\varphi \left( z\right) dz^{2}$ defines
the $\varphi $-metric with differential element $\sqrt{\left\vert \varphi
\left( z\right) \right\vert }\left\vert dz\right\vert $. If $\gamma $ is a
rectifiable arc in $\overline{\mathbb{C}}$, then its $\varphi $-length is defined by
\begin{equation*}
\left\vert \gamma \right\vert _{\varphi }=\int_{\gamma }\sqrt{\left\vert
\varphi \left( z\right) \right\vert }\left\vert dz\right\vert .
\end{equation*}
A trajectory of $\varphi \left( z\right) dz^{2}$ is finite if and only if
its $\varphi $-length is finite; otherwise, it is infinite. In particular, a
critical trajectory is finite if and only if its two endpoints are both
finite critical points.
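For instance, for the quadratic differential $\frac{dz^{2}}{z^{2}}$, with a double pole at the origin, the $\varphi $-length of the circle $\left\vert z\right\vert =r$ is
\begin{equation*}
\int_{\left\vert z\right\vert =r}\frac{\left\vert dz\right\vert }{\left\vert z\right\vert }=2\pi
\end{equation*}
for every $r>0$, while any arc ending at the origin or at $\infty $ has infinite $\varphi $-length.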
Two Jordan arcs $\alpha ,\beta :\left[ 0,1\right] \longrightarrow \mathbb{C}$ joining a point $p_{1}$ to a point $p_{2}$ in
$\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $ are homotopic if there
exists a continuous function $H:\left[ 0,1\right] \times \left[ 0,1\right]
\longrightarrow \mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $ such that
\begin{equation*}
\left\{
\begin{array}{c}
H\left( t,0\right) =\alpha \left( t\right), \\
H\left( t,1\right) =\beta \left( t\right),
\end{array}
\right. \qquad t\in \left[ 0,1\right].
\end{equation*}
This is an equivalence relation on the set $\mathcal{J}_{p_{1},p_{2}}$ of all
Jordan arcs joining $p_{1}$ to $p_{2}$ in
$\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $. If $card\left\{ \text{poles of }\varphi \right\} =m\in \mathbb{N},$ then it is well known that
$\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $ and the wedge of $m$
circles have the same homotopy type; in particular, there are $2^{m}$
equivalence classes of the relation ``homotopic'' on $\mathcal{J}_{p_{1},p_{2}}$.
\begin{definition}
A locally rectifiable (in the spherical metric) curve $\gamma _{0}$ is
called a $\varphi $-geodesic if it is locally shortest in the $\varphi $-metric. It is called a critical geodesic if it is a $\varphi $-geodesic
passing through a critical point of the quadratic differential $\varphi
\left( z\right) dz^{2}.$
\end{definition}
\begin{proposition}[{\protect\cite[Theorem 16.2]{Striebel}}]
\label{geod}Let $\gamma $ be a $\varphi $-geodesic arc joining
$p_{1}$ to $p_{2}$ in
$\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} .$ Then for every $\gamma
_{1}\in \mathcal{J}_{p_{1},p_{2}}$ which is homotopic to $\gamma $ in
$\mathbb{C}\setminus \left\{ \text{poles of }\varphi \right\} $, we have $\left\vert
\gamma _{1}\right\vert _{\varphi }\geq \left\vert \gamma \right\vert
_{\varphi },$ with equality if and only if $\gamma _{1}=\gamma .$
\end{proposition}
We finish this section with the so-called Teichm\"{u}ller lemma, which will be
used in the next section.
\begin{definition}
A domain in $\mathbb{C}$ bounded only by segments of $\varphi $-geodesics and/or horizontal and/or
vertical trajectories of the quadratic differential $\varphi \left( z\right)
dz^{2}$ (and their endpoints) is called a $\varphi $-polygon.
\end{definition}
\begin{lemma}[Teichm\"{u}ller]
\label{teichmuller}Let $\Omega $ be a $\varphi $-polygon, let $z_{j}$ be
the singular points of $\varphi \left( z\right) dz^{2}$ on the boundary
$\partial \Omega $ of $\Omega ,$ with multiplicities $n_{j},$ and let $\theta
_{j}\in \left[ 0,2\pi \right] $ be the corresponding interior angles with
vertices at $z_{j},$ respectively. Then
\begin{equation}
\sum_{j} \left( 1-\theta _{j}\dfrac{n_{j}+2}{2\pi }\right) =2+\sum_{i} n_{i},
\label{teich}
\end{equation}
where the $n_{i}$ are the multiplicities of the singular points inside $\Omega .$
\end{lemma}
\section{Proofs}
\begin{lemma}
In the notation of Propositions \ref{rational} and \ref{polyn}:
\begin{enumerate}
\item[(a)] there exists at most one unbroken short trajectory of the
quadratic differential $\varpi \left( P_{t},z\right) $ connecting $a\left(
t\right) $ and $b\left( t\right) $.
\item[(b)] If there exist two short trajectories of the quadratic
differential $\varpi \left( R_{t},z\right) $ connecting $a\left( t\right) $
and $b\left( t\right) ,$ then they are not homotopic in
$\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[(a)] Suppose that $\gamma _{1}$ and $\gamma _{2}$ are two unbroken
short trajectories of $\varpi \left( P_{t},z\right) $ connecting $a\left(
t\right) $ and $b\left( t\right) ,$ and let $\Omega $ be the $\varpi $-polygon with vertices $a\left( t\right) $ and $b\left( t\right) $ and
edges $\gamma _{1}$ and $\gamma _{2}$. From Lemma \ref{teichmuller}, the
left-hand side of (\ref{teich}) is smaller than $2$, whereas the right-hand
side is clearly at least $2$, a contradiction.
\item[(b)] In the same vein as the previous proof, take $\gamma _{1}$
and $\gamma _{2}$ to be two short trajectories of $\varpi \left(
R_{t},z\right) $ connecting $a\left( t\right) $ and $b\left( t\right) ;$ the
fact that they are homotopic in
$\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $ means that there is no pole of
$R_{t}$ inside $\Omega ,$ and again we get a contradiction with Lemma \ref{teichmuller}.
\end{enumerate}
\end{proof}
\begin{remark}
The number of unbroken short geodesics of $\varpi \left( P_{t},z\right) $
can be any integer between $\deg \left( P_{t}(z)\right) -1$ and $\left(
\begin{array}{c}
\deg \left( P_{t}(z)\right) \\
2
\end{array}
\right) $. We refer the reader to \cite{SHA} for the proof.
\end{remark}
\begin{remark}
It is well known that, using the wedge of $3$ circles, there are $8$ homotopy
classes in $\mathcal{J}_{a\left( t\right) ,b\left( t\right) }.$ In the same
way as in the previous proof, and by Proposition \ref{geod}, there exist
at most $8$ unbroken short geodesics of $\varpi \left( R_{t},z\right) $
joining $a\left( t\right) $ and $b\left( t\right) .$
\end{remark}
\begin{proof}[Proof of Proposition \protect\ref{rational}]
Let us denote by $\Lambda $ the subset of $\Omega $ formed by all $t$ such that
there exists a short trajectory of $\varpi \left( R_{t},z\right) $ homotopic
to $\phi _{t}$ in $\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} .$
Let $t_{0}\in \Lambda .$ By continuity of the quadratic differential
$\varpi \left( R_{t},z\right) $, for every $\varepsilon >0$ there exists
$\delta >0$ such that for any $t\in \Omega $ satisfying $|t-t_{0}|<\delta $,
there exists a trajectory of $\varpi \left( R_{t},z\right) ,$ say $\gamma
_{t},$ emanating from $a\left( t\right) $ and intersecting the $\varepsilon $-neighborhood $\mathcal{U}_{\varepsilon }$ of $b(t)$. If $\gamma _{t}$ does
not pass through $b(t)$, then we may assume that $\delta >0$ is small
enough so that $\gamma _{t}$ is intersected by an orthogonal trajectory
$\sigma _{t}$ emanating from $b(t)$ at some point $c\left( t\right) $. We
denote by $\varphi _{t}$ the path that follows the arc of $\gamma _{t}$ from
$a(t)$ to $c\left( t\right) $ and then continues to $b(t)$ along $\sigma
_{t}$. Clearly the arcs $\phi _{t}$ and $\varphi _{t}$ are homotopic in
$\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} ,$ and, by the definition of
orthogonal trajectories, the real part of the integral of $\sqrt{R_{t}\left( z\right) }$ along $\varphi _{t}$ cannot vanish, which contradicts (\ref{3}). This contradiction shows
that a whole small neighborhood of $t_{0}$ stays in $\Lambda ,$ and hence
$\Lambda $ is an open subset of $\Omega $.
Suppose now that $\left( t_{n}\right) $ is a sequence in $\Lambda $ converging
to $t\in \Omega $, so that $a\left( t_{n}\right) $ and $b\left( t_{n}\right)
$ converge respectively to $a$ and $b.$ For each $t_{n}$, there exists a
unique short trajectory $\gamma _{n}$ joining $a\left( t_{n}\right) $ and
$b\left( t_{n}\right) ,$ and all the $\gamma _{n}$ are homotopic to $\phi
_{t_{n}}$ in $\mathbb{C}\setminus \left\{ a_{1},a_{2},a_{3}\right\} $. It is obvious that the limit
set of the sequence $\gamma _{n}$ (in the Hausdorff metric) is either
another short trajectory connecting $a$ and $b$, or a union of two infinite
critical trajectories $\gamma _{a}$ and $\gamma _{b}$ emanating respectively
from $a$ and $b,$ each of which diverges to some pole of the quadratic
differential $\varpi \left( R_{t},z\right) .$ If $\gamma _{a}$ and $\gamma
_{b}$ do not diverge to the same pole, or if one of them diverges to a simple
pole, then
\begin{equation*}
\inf_{x\in \gamma _{a},y\in \gamma _{b}}\left\vert x-y\right\vert
=dist\left( \gamma _{a},\gamma _{b}\right) >0,
\end{equation*}
which contradicts the fact that $\lim_{n\rightarrow \infty }\gamma
_{n}=\gamma _{a}\cup \gamma _{b}.$ Let $c\in \left\{
a_{1},a_{2},a_{3}\right\} \cup \left\{ \infty \right\} $ be the common pole
to which $\gamma _{a}$ and $\gamma _{b}$ diverge.
Suppose that $c$ is a double pole; we assume, without loss of generality, that the
residue of the quadratic differential $\varpi \left( R_{t},z\right) $ at the
pole $c$ is non-real, so that $\gamma _{a}$ and $\gamma _{b}$ diverge to $c$
in log-spirals. Let $\sigma $ be an orthogonal trajectory that diverges (of
course, in log-spiral) to $c$. Then $\sigma $ intersects $\gamma _{a}$ and
$\gamma _{b}$ infinitely many times. Considering three consecutive points of
intersection, it is clear that we can construct two paths $\gamma $ and
$\gamma ^{\prime }$ joining $a$ and $b$, each formed by three pieces taken from
$\gamma _{a},\sigma ,$ and $\gamma _{b}.$ Clearly, $\gamma $ and $\gamma
^{\prime }$ are not homotopic in $\mathbb{C}\setminus \left\{ c\right\} ,$ and, by continuity of the family $\phi
_{t_{n}},$ one of them must be homotopic to $\phi _{t_{n}}$ for all $n\geq n_{0},$
for some integer $n_{0}$. Then we get
\begin{equation*}
\Re \int_{\gamma }\sqrt{R_{t}\left( z\right) }dz\neq 0,\text{ and }\Re
\int_{\gamma ^{\prime }}\sqrt{R_{t}\left( z\right) }dz\neq 0,
\end{equation*}
which contradicts (\ref{3}). Hence the limit set of the sequence $\gamma
_{n}$ is a short trajectory joining $a\left( t\right) $ and $b\left(
t\right) ,$ and $\Lambda $ is a closed subset of $\Omega $. The cases where
the residue at $c$ is real (positive or negative) are handled in the same vein.
Finally, since $\Omega $ is a connected subset of $\mathbb{C},$ we conclude that either $\Lambda =\Omega $ or $\Lambda =\emptyset .$
\end{proof}
\begin{proof}[Proof of Proposition \protect\ref{polyn}]
In order to discuss the possible existence of a short trajectory of $\varpi
\left( P_{t},z\right) $ connecting $a\left( t\right) $ and $b\left( t\right)
$ for some $t\in \Omega ,$ we denote by $\Gamma _{a\left( t\right) }$ and
$\Gamma _{b\left( t\right) }$ the sets of the three critical trajectories
that emanate respectively from $a\left( t\right) $ and $b\left( t\right) ,$
and we consider the Euclidean distance
\begin{equation*}
dist\left( \Gamma _{a\left( t\right) },\Gamma _{b\left( t\right) }\right)
=\inf_{x\in \Gamma _{a\left( t\right) },y\in \Gamma _{b\left( t\right)
}}\left\vert x-y\right\vert .
\end{equation*}
Then we claim the following: the quadratic differential $\varpi \left(
P_{t},z\right) $ has a short trajectory connecting $a\left( t\right) $ and
$b\left( t\right) $ if and only if $dist\left( \Gamma _{a\left( t\right)
},\Gamma _{b\left( t\right) }\right) =0.$ Indeed, with $n=\deg P_{t},$ there are $n+2$ asymptotic
directions, spaced with equal angle $\frac{2\pi }{n+2},$ that any
horizontal (resp. vertical) trajectory of the quadratic differential $\varpi
\left( P_{t},z\right) $ diverging to infinity can take; the asymptotic directions of the
vertical trajectories are obtained by a rotation of angle $\frac{\pi }{2}$.
Obviously, if $dist\left( \Gamma _{a\left( t\right) },\Gamma _{b\left(
t\right) }\right) >0,$ then there is no short trajectory connecting $a\left(
t\right) $ and $b\left( t\right) $. Assume now that $dist\left( \Gamma _{a\left(
t\right) },\Gamma _{b\left( t\right) }\right) =0$ and that no short trajectory
connects $a\left( t\right) $ and $b\left( t\right) $. Since $\Gamma
_{a\left( t\right) }\cap \Gamma _{b\left( t\right) }=\emptyset ,$ there
exist two horizontal trajectories $\gamma _{a\left( t\right) }$ and $\gamma
_{b\left( t\right) }$ that emanate from $a\left( t\right) $ and $b\left(
t\right) $ and diverge to infinity in the same direction $D$; let $\sigma $
be a vertical trajectory (not critical) diverging to infinity in the two
directions adjacent to $D.$ Obviously, $\sigma $ intersects $\gamma
_{a\left( t\right) }$ and $\gamma _{b\left( t\right) }$ in exactly two
points $P_{a\left( t\right) }$ and $P_{b\left( t\right) }.$ Let $\gamma \in
\mathcal{J}_{a\left( t\right) ,b\left( t\right) }$ be the union of the part
of $\gamma _{a\left( t\right) }$ from $a\left( t\right) $ to $P_{a\left(
t\right) },$ the part of $\sigma $ from $P_{a\left( t\right) }$ to
$P_{b\left( t\right) },$ and, finally, the part of $\gamma _{b\left( t\right)
}$ from $P_{b\left( t\right) }$ to $b\left( t\right) $. Integrating along
$\gamma ,$ and since
\begin{equation*}
\Re \int_{a\left( t\right) }^{P_{a\left( t\right) }}\sqrt{P_{t}\left(
z\right) }dz=\Re \int_{P_{b\left( t\right) }}^{b\left( t\right) }\sqrt{P_{t}\left( z\right) }dz=0,
\end{equation*}
we get
\begin{equation*}
\Re \int_{\gamma }\sqrt{P_{t}\left( z\right) }dz=\Re \int_{P_{a\left(
t\right) }}^{P_{b\left( t\right) }}\sqrt{P_{t}\left( z\right) }dz\neq 0,
\end{equation*}
which violates (\ref{4}). By continuity of the function $t\longmapsto
dist\left( \Gamma _{a\left( t\right) },\Gamma _{b\left( t\right) }\right) ,$
it follows that the set of all $t\in \Omega $ such that the quadratic
differential $\varpi \left( P_{t},z\right) $ has no short trajectory
connecting $a\left( t\right) $ and $b\left( t\right) $ is an open subset of
$\Omega $. Notice that Proposition \ref{polyn} remains valid for polynomials
$Q$ of higher degree or with zeros of higher multiplicities.
\end{proof}
\bigskip
\section{\protect\bigskip Connection with Laguerre and Jacobi polynomials
\label{app}}
The rescaled generalized Laguerre polynomials $L_{n}^{nC}\left( nz\right) $
with varying parameter $nC$ and the Jacobi polynomials
$P_{n}^{(nA,nB)}\left( z\right) $ with varying parameters $nA$ and $nB$ can
be given explicitly, respectively, by (see \cite{Szego}):
\begin{equation*}
L_{n}^{nC}\left( nz\right) =\sum_{k=0}^{n}\left(
\begin{array}{c}
n+nC \\
n-k
\end{array}
\right) \frac{\left( -z\right) ^{k}}{k!},
\end{equation*}
\begin{equation*}
P_{n}^{(nA,nB)}\left( z\right) =2^{-n}\sum_{k=0}^{n}\left(
\begin{array}{c}
n+nA \\
n-k
\end{array}
\right) \left(
\begin{array}{c}
n+nB \\
k
\end{array}
\right) \left( z-1\right) ^{k}\left( z+1\right) ^{n-k}.
\end{equation*}
Jacobi or Laguerre polynomials with (real) parameters depending on the
degree $n$ appear naturally as polynomial solutions of hypergeometric
differential equations, and in the expressions of the wave functions of many
classical systems in quantum mechanics; see \cite{hyper}.
With each polynomial $p_{n}$, we associate its normalized zero-counting
measure $\mu _{n},$
\begin{equation*}
\mu _{n}=\mu \left( p_{n}\right) =\frac{1}{n}\sum_{p_{n}\left( z\right) =0}\delta _{z},
\end{equation*}
so that, for a compact subset $K$ of $\mathbb{C},$
\begin{equation*}
\int_{K}d\mu _{n}=\frac{\text{number of zeros of }p_{n}\text{ in }K}{n},
\end{equation*}
where the zeros are counted with their multiplicities.
Following the works of Gonchar-Rakhmanov \cite{gonchar} and Stahl \cite{stahl}, it was shown that the sequence $\mu _{n}$ converges (as
$n\rightarrow \infty $) in the weak-* topology to a measure supported on
short trajectories of related quadratic differentials. For the case of
Laguerre, see \cite{amf pgm ro},\cite{abj mac},\cite{Atia}; for the case of
Jacobi, see \cite{abjk amf},\cite{abjk amf ro},\cite{AMF FT},\cite{FTMChouikhi}.
The quadratic differential related to the Laguerre polynomials is
\begin{equation}
\varpi _{C}=-\frac{D_{C}(z)}{z^{2}}dz^{2}, \label{laguerre}
\end{equation}
where
\begin{equation*}
D_{C}(z)=z^{2}-2(C+2)z+C^{2}.
\end{equation*}
The zeros of $D_{C}(z)$ are
\begin{equation}
a\left( C\right) =C+2+2\sqrt{C+1},\quad b(C)=C+2-2\sqrt{C+1}.
\label{zeros laguerre}
\end{equation}
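For instance, for $C=3$ we get $D_{3}(z)=z^{2}-10z+9=\left( z-1\right) \left( z-9\right) $, so that $a\left( 3\right) =9$ and $b\left( 3\right) =1$, in accordance with (\ref{zeros laguerre}).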
The quadratic differential related to the Jacobi polynomials is
\begin{equation}
\varpi _{A,B}=-\frac{D_{A,B}\left( z\right) }{\left( z^{2}-1\right) ^{2}}\,dz^{2}, \label{jacobi}
\end{equation}
where
\begin{equation*}
D_{A,B}\left( z\right) =\left( A+B+2\right) ^{2}z^{2}+2\left(
A^{2}-B^{2}\right) z+\left( A-B\right) ^{2}-4\left( A+B+1\right) .
\end{equation*}
The zeros of $D_{A,B}(z)$ are
\begin{eqnarray*}
a\left( A,B\right) &=&\frac{-A^{2}+B^{2}+4\sqrt{\left( A+1\right) \left(
B+1\right) \left( A+B+1\right) }}{\left( A+B+2\right) ^{2}}, \\
b(A,B) &=&\frac{-A^{2}+B^{2}-4\sqrt{\left( A+1\right) \left( B+1\right)
\left( A+B+1\right) }}{\left( A+B+2\right) ^{2}}.
\end{eqnarray*}
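For instance, for $A=B=10$ (the case of Figure \ref{Fige} below), $D_{10,10}\left( z\right) =484z^{2}-84$, and the zeros $a\left( 10,10\right) =\sqrt{21}/11$ and $b\left( 10,10\right) =-\sqrt{21}/11$ are symmetric with respect to the origin.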
\begin{proposition}[\protect\cite{Atia}]
Assume that $C\in \mathbb{C}_{+}$, and that $\gamma $ is a Jordan arc connecting the zeros of $D_{C}(z)$
in the punctured plane $\mathbb{C}\setminus \{0\}$. Denote by $\sqrt{D_{C}(z)}$ the single-valued branch of
this function in $\mathbb{C}\setminus \gamma $ determined by the condition
\begin{equation*}
\sqrt{D_{C}(z)}\sim z,\quad z\rightarrow \infty ,
\end{equation*}
and let $\left( \sqrt{D_{C}(z)}\right) _{+}$ stand for its boundary values
on the $+$-side of $\gamma $. Then
\begin{equation}
\int_{\gamma }\frac{\left( \sqrt{D_{C}(t)}\right) _{+}}{t}\,dt\in \pm 2\pi
i\left\{ 1,(C+1)\right\} . \label{firstInt}
\end{equation}
Moreover, the integral on the left-hand side of (\ref{firstInt}) takes the value
$\pm 2\pi i$ if and only if $\gamma $ is such that it can be continuously
deformed in $\mathbb{C}\setminus \{0\}$ to an arc not intersecting the positive real axis.
\end{proposition}
If we denote $\Omega =\left\{ C\in \mathbb{C}:\Im C\geq 0\right\} $ and $R_{C}(z)=\frac{D_{C}(z)}{z^{2}},$ then
conditions (\ref{1}), (\ref{2}), and (\ref{3}) are fulfilled. Since it can
be easily shown that for $C\in \left( -1,+\infty \right) $ the zeros
$a\left( C\right) $ and $b\left( C\right) $ satisfy
\begin{equation*}
0<b\left( C\right) <a\left( C\right) ,
\end{equation*}
and the segment $\left[ b\left( C\right) ,a\left( C\right) \right] $ is a
short trajectory of the quadratic differential (\ref{laguerre}) (see Figure
\ref{Figd}), Proposition \ref{rational} yields the existence of the short trajectory for any $C\in \Omega $.
\begin{figure}[h]
\centering
\fbox{\includegraphics[
height=2.0903in,
width=3.7048in
]{d.png}
}\caption{Critical graphs of $\protect\varpi _{-0.95}$ (left) and $\protect\varpi _{-0.95+0.1i}$ (right).}
\label{Figd}
\end{figure}
\begin{proposition}[\protect\cite{abjk amf},\protect\cite{AMF FT}]
Let $A,B$ satisfy the assumptions
\begin{equation}
A+1\neq 0,\quad B+1\neq 0,\quad A+B+1\neq 0,\quad A+B+2\neq 0, \label{cond AB}
\end{equation}
let $\gamma $ be a Jordan arc in $\mathbb{C}\setminus \{-1,1\}$ joining the zeros of $D_{A,B}$, and let $\sqrt{D_{A,B}}$ be
its single-valued branch in $\mathbb{C}\setminus \gamma $ fixed by the condition
\begin{equation*}
\sqrt{D_{A,B}}\left( z\right) \sim \left( A+B+2\right) z,\quad z\rightarrow \infty .
\end{equation*}
Then
\begin{equation}
\int_{\gamma }\frac{\left( \sqrt{D_{A,B}\left( t\right) }\right) _{+}}{t^{2}-1}dt\in \pm 2\pi i\left\{ 1,\left( A+1\right) ,\left( B+1\right)
,\left( A+B+1\right) \right\} , \label{secondInt}
\end{equation}
where $\left( \sqrt{D_{A,B}\left( t\right) }\right) _{+}$ is the boundary
value on one of the sides of $\gamma $.
Moreover, if in addition to (\ref{cond AB}) we have $B>0,$ then the integral on the
left-hand side of (\ref{secondInt}) takes the value $\pm 2\pi i$ if and only
if $\gamma $ is such that the conditions
\begin{equation*}
\sqrt{D_{A,B}}(1)=2A,\quad \sqrt{D_{A,B}}(-1)=-2B
\end{equation*}
are satisfied.
\end{proposition}
For $B>-1$, we denote
\begin{equation*}
\Omega =\left\{ A\in \mathbb{C}:A+1\neq 0,A+B+1\neq 0,A+B+2\neq 0\right\}
\end{equation*}
and $R_{A}(z)=\frac{D_{A,B}(z)}{\left( z^{2}-1\right) ^{2}};$ then
conditions (\ref{1}), (\ref{2}), and (\ref{3}) are satisfied. Taking into
account that for $A\in \mathbb{R}\cap \Omega $ there exists a short trajectory of the quadratic differential
(\ref{jacobi}), Proposition \ref{rational} yields the existence of the short
trajectory for any $A\in \Omega $. By repeating the same reasoning, we
conclude the result for any $A$ and $B$ satisfying (\ref{cond AB}) (see
Figures \ref{Fige}, \ref{Figf}).
\begin{figure}[h]
\centering
\begin{minipage}[t]{9cm}
\centering
\fbox{\includegraphics[
width=3.7048in
]{e.png}
}\caption{Critical graph of $\protect\varpi _{10,10}.$}
\label{Fige}
\end{minipage}
\begin{minipage}[t]{9cm}
\centering
\fbox{\includegraphics[
width=3.7048in
]{f.png}
}\caption{Critical graph of $\protect\varpi _{10+i,10}.$}
\label{Figf}
\end{minipage}
\end{figure}
\begin{acknowledgement}
This note is the result of discussions with Professor Andrei Mart\'{\i}nez-Finkelshtein; it was carried out during a visit to the Department of
Mathematics of Stockholm University. The author acknowledges the
hospitality of the hosting department and, especially, of Professor Boris
Shapiro. This work was entirely supported by Stockholm University.
The author acknowledges the contribution of the anonymous referee, whose
careful reading of the manuscript helped to improve the presentation.
\end{acknowledgement}
\bigskip
|
1,116,691,497,651 | arxiv | \section{Introduction}
Multi-label learning \cite{LearningFML} aims to find a mapping from the feature space $\mathcal X\subseteq \mathbb R^p$ to the label vector space $\mathcal Y\subseteq \{0,1\}^k$, wherein $k$ is the number of labels and $y_i=1$ denotes the sample belongs to label $i$. Binary relevance (BR) \cite{MLreview2} and label powerset (LP) \cite{MLreview2} are two early and natural solutions. BR and LP transform a multi-label learning problem to several binary classification tasks and single-label classification task, respectively. Specifically, BR associates each label with an individual class, i.e., assigns samples with the same label to the same class. LP treats each unique set of labels as a class, in which samples share the same label vector.
Although BR/LP and their variants can directly transform a multi-label learning problem into multiple binary classification tasks or a single-label classification task, multi-label learning brings new problems. First, the labels are not mutually exclusive in multi-label learning, and thus it is necessary to consider not only the discriminative information between different labels but also their correlations. Second, the large number of labels always leads to an imbalance between positive and negative samples in each class, which limits the performance of binary classification algorithms. Third, the problem size of multi-label learning is significantly increased when it is decomposed into many binary classification problems.
Recent multi-label learning methods more or less tackle some of the above problems and demonstrate that the prediction performance can be improved by exploiting specific properties of the multi-label data, e.g., label dependence, label structure, and the dependence between samples and the corresponding labels. We categorize popular methods into two groups.
\begin{enumerate}
\item The first group of methods transform multi-label prediction into a sequence of binary classification methods with special structures implied by label correlations. For example, the random k-labelsets (RAkEL) method \cite{RAkEL} randomly selects an ensemble of subset from the original labelsets, and then LP is applied to each subset. The final prediction is obtained by ranking and thresholding of the results on the subsets. Hierarchical binary relevance (HBR) \cite{HBR} builds a general-to-specific tree structure of labels, where a sample with a label must be associated with its parent labels. A binary classifier is trained on each non-root label. Hierarchy of multi-label classifiers (HOMER) \cite{HOMER} recursively partitions the labels into several subsets and build a tree-shaped hierarchy. A binary classifier is trained on each non-root label subset. The classifier chain (CC) \cite{CC} adopts a greedy way to predict unknown label from feature and predicted labels via binary classifier.
\item The second group of methods formulate the multi-label prediction to other kinds of problems rather than binary classification. For example, the C\&W procedure \cite{CandW} separates the problem into two stages, i.e., BR and correction of the BR results by using label dependence. Regularized multi-task learning \cite{YeLowrankSparse} and shared-subspace learning \cite{SharedSubspace} formulate the problem as regularized regression or classification problem. Multi-label k-nearest neighbor (ML-kNN) \cite{MLKNN} is an extension of kNN. Multi-label dimensionality reduction via dependence maximization (MDDM) \cite{MDDM} maximizes the dependence between feature space and label space, and provides a data preprocessing for other multi-label learning method. A linear dimensionality reduction method for multi-label data is proposed in \cite{JiML}. In \cite{MultiLabelCS}, multi-label prediction is formulated as a sparse signal recovery problem.
\end{enumerate}
However, the problem size always increases significantly when multi-label learning is decomposed into a set of binary classification problems or formulated as another existing problem, because the label correlations need to be considered in addition. Furthermore, the mapping of the label structure in the feature space has not been studied. In this paper, we propose a novel multi-label learning method, ``Structured Decomposition + Group Sparsity (SDGS)'', which assigns each label a corresponding feature subspace via a randomized decomposition of the training data, and predicts the labels of a new sample by estimating its group sparse representation in the obtained multi-subspace.
In the training stage, SDGS approximately decomposes the data matrix $X\in\mathbb R^{n\times p}$ (each row is a training sample) as $X=\sum_{i=1}^kL^i+S$. In the matrix $L^i$, only the rows corresponding to samples with label $i$ (i.e., $y_i=1$) are nonzero. These rows represent the components determined by label $i$ in the samples and compose a low-rank matrix, whose row space is the feature subspace characterized by label $i$. The matrix $S$ represents the residual components that cannot be explained by the given labels and is constrained to be sparse. The decomposition is obtained via a randomized optimization with low time complexity.
In the prediction stage, SDGS estimates the group sparse representation of a new sample in the obtained multi-subspace via group \emph{lasso} \cite{GroupLasso}. The representation coefficients associated with bases in the same subspace form one group. Since the components caused by a specific label can be linearly represented by the corresponding subspace obtained in the training stage, the nonzero representation coefficients will concentrate on the groups corresponding to the labels that the sample belongs to. This gives the rationale of the proposed SDGS for multi-label learning. Group \emph{lasso} is able to select these nonzero coefficients group-wise, and thus the labels can be identified.
SDGS provides a novel and natural multi-label learning method by building a mapping of the label structure in decomposed feature subspaces. Group sparse representation in the multi-subspace is applied to recover the unknown labels. SDGS embeds the label correlations without increasing the problem size and is robust to the imbalance problem. By comparing SDGS with different multi-label learning methods, we show its effectiveness and efficiency on several datasets.
\section{Assumption and Motivation}
Given a sample $x\in\mathbb R^p$ and its label vector $y\in\{0,1\}^k$, we assume that $x$ can be decomposed as the sum of several components $l^i$ and a sparse residual $s$:
\begin{equation}\label{E:model}
x=\sum\limits_{i:y_i=1}l^i+s.
\end{equation}
The component $l^i$ is caused by label $i$, to which $x$ belongs. Thus $l^i$ can be explained as the mapping of label $i$ in $x$. The residual $s$ is the component that the labels in $y$ cannot explain. The model in (\ref{E:model}) reveals the general relationship between the feature space and the labels.
For all the samples with label $i$, we assume that their components corresponding to label $i$ lie in a linear subspace $C^i\in\mathbb R^{r^i\times p}$, i.e., $l^i=\beta_{G_i}C^i$, wherein $\beta_{G_i}$ contains the representation coefficients corresponding to $C^i$. Thus the model (\ref{E:model}) can be equivalently written as:
\begin{equation}\label{E:modell}
\begin{array}{ll}
x&=\sum\limits_{i=1}^k\beta_{G_i}C^i+s,\\
&\forall i\in\{i:y_i=0\},\beta_{G_i}=\textbf{0}.
\end{array}
\end{equation}
If we build a dictionary $C=[C^1;\dots;C^k]$ as the multi-subspace characterized by the $k$ labels, the corresponding representation coefficient vector for $x$ is $\beta=[\beta_{G_1},\dots,\beta_{G_k}]$. The coefficients $\beta_{G_i}$ corresponding to the labels $x$ does not belong to are zeros, so $\beta$ is group sparse, wherein the groups are $G_i,i=1,\dots,k$.
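For example, when $k=3$ and a sample $x$ has the label vector $y=(1,0,1)$, the model (\ref{E:modell}) reads $x=\beta_{G_1}C^1+\beta_{G_3}C^3+s$ with $\beta=[\beta_{G_1},\textbf{0},\beta_{G_3}]$, i.e., all the coefficients in the group $G_2$ vanish, while the nonzero coefficients concentrate on the groups $G_1$ and $G_3$.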
In the training stage of SDGS, we learn the multi-subspace $C^i,i=1,\dots,k$ from the training data via a structured decomposition, in which the components corresponding to label $i$ from all the samples form a low-rank matrix $L^i_{\Omega_i}$, wherein $\Omega_i$ is the index set of samples with label $i$. Thus the row space of $L^i_{\Omega_i}$ is the subspace $C^i$. In the prediction stage of SDGS, given a new sample $x$, we apply group \emph{lasso} to find its group sparse representation $\beta$ on the multi-subspace $C$, and then a simple thresholding is used to test on which groups $\beta$ concentrates. The labels that these groups correspond to are the predicted labels for the sample $x$.
In the training stage, the label correlations and structure are naturally preserved in their mappings $C^i$. In the prediction stage, both discriminative and structured information encoded in labels are considered via group \emph{lasso}. Therefore, SDGS explores label correlations without increasing the problem size.
\vspace{-2mm}
\section{Training: Structured Decomposition}
In this section, we introduce the training stage of SDGS, which approximately decomposes the data matrix $X\in\mathbb R^{n\times p}$ into $X=\sum_{i=1}^kL^i+S$. In the matrix $L^i$, the rows corresponding to the samples with label $i$ are nonzero, while the other rows are all-zero vectors. The nonzero rows are the components caused by label $i$ in the samples. We use $\Omega_i$ to denote the index set of samples with label $i$ in the matrices $X$ and $L^i$, and the matrix composed of the nonzero rows of $L^i$ is denoted by $L^i_{\Omega_i}$. In the decomposition, the rank of $L^i_{\Omega_i}$ is upper bounded, which indicates that all the components caused by label $i$ nearly lie in a linear subspace. The matrix $S$ is the residual of the samples that cannot be explained by the given labels. In the decomposition, the cardinality of $S$ is upper bounded, which makes $S$ sparse.
If the label matrix of $X$ is $Y\in\{0,1\}^{n\times k}$, the rank of $L^i_{\Omega_i}$ is bounded by $r^i$, and the cardinality of $S$ is bounded by $K$, then the decomposition can be written as the following constrained minimization problem:
\begin{equation}\label{E:ms}
\begin{array}{rl}
\min\limits_{L^i,S}&\left\|X-\sum_{i=1}^kL^i-S\right\|_F^2\\
s.t.&{\rm rank}\left(L^i_{\Omega_i}\right)\leq r^i,L^i_{\overline{\Omega}_i}=\textbf{0},\forall i=1,\dots,k\\
&{\rm card}\left(S\right)\leq K.
\end{array}
\end{equation}
Therefore, each training sample in $X$ is decomposed as the sum of several components, which respectively correspond to several labels that the sample belongs to. SDGS separates these components from the original sample by building the mapping of $Y$ in the feature space of $X$. For label $i$, we obtain its mapping in the feature subspace as the row space of $L^i_{\Omega_i}$.
\vspace{-2mm}
\subsection{Alternating minimization}
Although the rank constraint on $L^i_{\Omega_i}$ and the cardinality constraint on $S$ are not convex, the optimization in (\ref{E:ms}) can be solved by an alternating minimization that decomposes it into the following $k+1$ subproblems, each of which has a global solution:
\begin{equation}\label{E:mssub}
\left\{
\begin{array}{ll}
L^i_{\Omega_i}=\arg\min\limits_{{\rm rank}\left(L^i_{\Omega_i}\right)\leq r^i}\left\|X-\sum\limits_{j=1,j\neq i}^kL^j-S-L^i\right\|_F^2, \\
~~~~~~~~\forall i=1,\dots,k.\\
S=\arg\min\limits_{{\rm card}\left(S\right)\leq K}\left\|X-\sum\limits_{j=1}^kL^j-S\right\|_F^2.
\end{array}
\right.
\end{equation}
The solutions of $L^i_{\Omega_i}$ and $S$ in the above subproblems can be obtained via hard thresholding of singular values and of entries, respectively. Note that both SVD and matrix hard thresholding have global solutions. In particular, $L^i_{\Omega_i}$ is built from the $r^i$ largest singular values and the corresponding singular vectors of $\left(X-\sum_{j=1,j\neq i}^kL^j-S\right)_{\Omega_i}$, while $S$ is built from the $K$ entries with the largest absolute values in $X-\sum_{j=1}^kL^j$, i.e.,
\begin{equation}\label{E:mssolution}
\left\{
\begin{array}{ll}
L^i_{\Omega_i}=\sum\limits_{q=1}^{r^i}\lambda_qU_qV_q^T, i=1,\dots,k,\\
{\rm svd}\left[\left(X-\sum_{j=1,j\neq i}^kL^j-S\right)_{\Omega_i}\right]=U\Lambda V^T; \\
S=\mathcal {P}_{\Phi}\left(X-\sum\limits_{j=1}^kL^j\right),\\
\Phi:\text{the index set of the }K\text{ largest-magnitude entries of }X-\sum\limits_{j=1}^kL^j,\ |\Phi|\leq K.
\end{array}
\right.
\end{equation}
The projection $S=\mathcal {P}_{\Phi}(R)$ represents that the matrix $S$ has the same entries as $R$ on the index set $\Phi$, while the other entries are all zeros.
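The two global solutions in (\ref{E:mssolution}) can be sketched directly; the following Python/NumPy fragment is a schematic (non-optimized) version of one update of $L^i_{\Omega_i}$ by truncated SVD and of $S$ by entry-wise hard thresholding, not the implementation used in our experiments:
\begin{verbatim}
import numpy as np

def update_L(N_Omega, r):
    # rank-r approximation of the residual restricted to the rows in Omega_i
    U, lam, Vt = np.linalg.svd(N_Omega, full_matrices=False)
    return (U[:, :r] * lam[:r]) @ Vt[:r, :]

def update_S(N, K):
    # keep the K entries of N with largest magnitude, zero out the rest
    S = np.zeros_like(N)
    idx = np.argsort(np.abs(N), axis=None)[-K:]
    S.flat[idx] = N.flat[idx]
    return S
\end{verbatim}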
The decomposition is then obtained by iteratively solving these $k+1$ subproblems in (\ref{E:mssub}) according to (\ref{E:mssolution}). In this paper, we initialize $L^i_{\Omega_i}$ and $S$ as
\begin{equation}\label{E:msinitial}
\left\{
\begin{array}{ll}
L^i_{\Omega_i}:=Z_{\Omega_i},i=1,\dots,k,\\
Z=D^{-1}X,D={\rm diag}\left(Y\textbf{1}\right);\\
S:=\textbf{0}.
\end{array}
\right.
\end{equation}
In each subproblem, only one variable is optimized while the other variables are fixed. The convergence of this alternating minimization is proved in Theorem \ref{T:ls_convergence} by demonstrating that the approximation error decreases monotonically throughout the algorithm.
\begin{theorem}\label{T:ls_convergence}
The alternating minimization of the subproblems in (\ref{E:mssub}) produces a sequence of objective values $\|X-\sum_{i=1}^kL^i-S\|_F^2$ that converges to a local minimum.
\end{theorem}
\begin{proof}
Let the objective values (decomposition errors) $\|X-\sum_{i=1}^kL^i-S\|_F^2$ after solving the $k+1$ subproblems in (\ref{E:mssub}) be $E^1_{(t)},\dots,E^{k+1}_{(t)}$, respectively, in the $t^{th}$ iteration round. We use the subscript $(t)$ to signify the variable that is updated in the $t^{th}$ iteration round. Then $E^1_{(t)},\dots,E^{k+1}_{(t)}$ are
\begin{align}
&E^1_{(t)}=\left\|X-S_{(t-1)}-L^1_{(t)}-\sum_{i=3}^kL^i_{(t-1)}-L^2_{(t-1)}\right\|_F^2,\\
&E^2_{(t)}=\left\|X-S_{(t-1)}-L^1_{(t)}-\sum_{i=3}^kL^i_{(t-1)}-L^2_{(t)}\right\|_F^2,\\
\notag&~~~~~~~~~~~~~~~~~~~~~~~~\vdots\\
&E^k_{(t)}=\left\|X-\sum_{i=1}^kL^i_{(t)}-S_{(t-1)}\right\|_F^2,\\
&E^{k+1}_{(t)}=\left\|X-\sum_{i=1}^kL^i_{(t)}-S_{(t)}\right\|_F^2,
\end{align}
The global optimality of $L^i_{(t)}$ yields $E^1_{(t)}\geq E^2_{(t)}\geq\cdots\geq E^k_{(t)}$. The global optimality of $S_{(t)}$ yields $E^k_{(t)}\geq E^{k+1}_{(t)}$. In addition, we have
\begin{align}
&E^{k+1}_{(t)}=\left\|X-\sum_{i=2}^kL^i_{(t)}-S_{(t)}-L^1_{(t)}\right\|_F^2, \\
&E^1_{(t+1)}=\left\|X-\sum_{i=2}^kL^i_{(t)}-S_{(t)}-L^1_{(t+1)}\right\|_F^2.
\end{align}
The global optimality of $L^1_{(t+1)}$ yields $E^{k+1}_{(t)}\geq E^1_{(t+1)}$. Therefore, the objective value (or the decomposition error) $\|X-\sum_{i=1}^kL^i-S\|_F^2$ keeps decreasing throughout the iteration rounds of (\ref{E:mssolution}), i.e.,
\begin{equation}\label{E:converge}
E^1_{(1)}\geq E^{k+1}_{(1)}\geq \cdots\geq E^1_{(t)}\geq E^{k+1}_{(t)}\geq\cdots
\end{equation}
Since the objective value of (\ref{E:ms}) is monotonically decreasing and the constraints are satisfied all the time, iteratively solving (\ref{E:mssub}) produces a sequence of objective values that converge to a local minimum. This completes the proof.
\end{proof}
After obtaining the decomposition by solving (\ref{E:ms}), each training sample is represented as the sum of several components in the $L^i$, characterized by the labels it belongs to, and the residual in $S$. Therefore, the mapping of label $i$ in the feature subspace is defined as the row space $C^i\in\mathbb R^{{r^i}\times p}$ of the matrix $L^i_{\Omega_i}$, which can be obtained via the QR decomposition of $\left(L^i_{\Omega_i}\right)^T$.
\subsection{Accelerate SDGS via bilateral random projections}
The main computation in (\ref{E:mssolution}) is the $k$ SVDs required for obtaining $L^i_{\Omega_i}(i=1,\dots,k)$. The SVD of an $m\times n$ matrix requires $\min\left(mn^2,m^2n\right)$ flops, and thus it is impractical when $X$ is of large size. Random projection is effective in accelerating matrix multiplication and decomposition \cite{RandomSVD}. In this paper, we introduce ``bilateral random projections (BRP)'', a direct extension of random projection, to accelerate the optimization of $L^i_{\Omega_i}(i=1,\dots,k)$.
For clarity of presentation, we use notation independent of the other parts of this paper to illustrate BRP. In particular, given $r$ bilateral random projections (BRP) of an $m\times n$ dense matrix $X$ (w.l.o.g., $m\geq n$), i.e., $Y_1=XA_1$ and $Y_2=X^TA_2$, wherein $A_1\in\mathbb R^{n\times r}$ and $A_2\in\mathbb R^{m\times r}$ are random matrices,
\begin{equation}\label{E:lr_app}
L=Y_1\left(A_2^TY_1\right)^{-1}Y_2^T
\end{equation}
is a fast rank-$r$ approximation of $X$. The computation of $L$ includes the inverse of an $r\times r$ matrix and three matrix multiplications. Thus, for a dense $X$, $2mnr$ floating-point operations (flops) are required to obtain the BRP, and $r^2(2n+r)+mnr$ flops are required to obtain $L$. This computational cost is much lower than that of the SVD-based approximation.
We build the random matrices $A_1$ and $A_2$ in an adaptive way. Initially, both $A_1$ and $A_2$ are standard Gaussian matrices whose entries are independent variables following the standard normal distribution. We first compute $Y_1=XA_1$, update $A_2:=Y_1$, and calculate the left random projection $Y_2=X^TA_2$ with the new $A_2$; we then update $A_1:=Y_2$ and calculate the right random projection $Y_1=XA_1$ with the new $A_1$. This adaptive updating of the random matrices requires an additional $mnr$ flops.
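A minimal NumPy sketch of the adaptive BRP approximation (\ref{E:lr_app}) reads as follows; it is a schematic version in which numerical safeguards (e.g., re-orthogonalization) are omitted:
\begin{verbatim}
import numpy as np

def brp_lowrank(X, r, rng=np.random.default_rng(0)):
    # bilateral random projections with adaptively updated random matrices
    m, n = X.shape
    A1 = rng.standard_normal((n, r))
    Y1 = X @ A1              # right sketch; then A2 := Y1
    A2 = Y1
    Y2 = X.T @ A2            # left sketch; then A1 := Y2
    Y1 = X @ Y2              # refreshed right sketch
    # L = Y1 (A2^T Y1)^{-1} Y2^T
    return Y1 @ np.linalg.solve(A2.T @ Y1, Y2.T)
\end{verbatim}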
Algorithm 1 summarizes the training stage of SDGS with BRP based acceleration.
\begin{algorithm}[htb]
\SetAlgoLined
\KwIn{$X$, $\Omega_i$, $r^i,i=1,\dots,k$, $K$, $\epsilon$}
\KwOut{$C^i,i=1,\dots,k$}
Initialize $L^i$ and $S$ according to (\ref{E:msinitial}), $t:=0$\;
\While{$\left\|X-\sum_{j=1}^kL^j-S\right\|_F^2>\epsilon$}{
$t:=t+1$\;
\For{$i\leftarrow 1$ \KwTo $k$}{
$N:=\left(X-\sum_{j=1,j\neq i}^kL^j-S\right)_{\Omega_i}$\;
Generate standard Gaussian matrix $A_1\in\mathbb R^{p\times{r^i}}$\;
$Y_1:=NA_1$, $A_2:=Y_1$\;
$Y_2:=N^TY_1$, $Y_1:=NY_2$\;
$L^i_{\Omega_i}:=Y_1\left(A_2^TY_1\right)^{-1}Y_2^T, L^i_{\overline{\Omega}_i}:=\textbf{0}$\;
}
$N:=X-\sum_{j=1}^kL^j$\;
$S:=\mathcal {P}_{\Phi}\left(N\right)$, $\Phi$ is the index set of the first $K$ largest entries of $\left|N\right|$\;
}
QR decomposition $\left(L^i_{\Omega_i}\right)^T=Q^iR^i$ for $i=1,\dots,k$, $C^i:=\left(Q^i\right)^T$\;
\caption{SDGS Training}
\end{algorithm}\vspace{-2mm}
\section{Prediction: Group Sparsity}
In this section, we introduce the prediction stage of SDGS, which estimates the group sparse representation of a given sample. In the training stage, we decomposed the training data into the sum of low-rank components $L^i_{\Omega_i}$ characterized by their labels and a sparse residual $S$. The mapping of label $i$ in the feature subspace is defined as the row space $C^i$ of $L^i_{\Omega_i}$, because the components of the training data characterized by label $i$ lie in the linear subspace $C^i$.
In the prediction stage of SDGS, we use group \emph{lasso} \cite{GroupLasso} to estimate the group sparse representation $\beta\in\mathbb R^{\sum {r^i}}$ of a test sample $x\in\mathbb R^p$ on the multi-subspace $C=[C^1;\dots;C^k]$, wherein the $k$ groups are defined as the index sets of the coefficients corresponding to $C^1,\dots,C^k$. Since group \emph{lasso} selects nonzero coefficients group-wise, the nonzero coefficients of the group sparse representation will concentrate on the groups corresponding to the labels that the sample belongs to.
According to the above analysis, we solve the following group \emph{lasso} problem in the prediction stage of SDGS:
\begin{equation}\label{E:mspredict}
\min\limits_\beta \frac{1}{2}\left\|x-\beta C\right\|_F^2+\lambda\sum\limits_{i=1}^k\left\|\beta_{G_i}\right\|_2,\\
\end{equation}
where the index set $G_i$ includes all the integers between $1+\sum_{j=1}^{i-1}r^j$ and $\sum_{j=1}^{i}r^j$ (including these two numbers).
To obtain the final prediction of the label vector $y\in\{0,1\}^k$ for the test sample $x$, we use a simple thresholding of the magnitude sum of the coefficients in each group to test on which groups the sparse coefficients in $\beta$ concentrate:
\begin{equation}\label{E:thresh}
y_\Psi=\textbf{1},y_{\overline\Psi}=\textbf{0},\Psi=\left\{i:\left\|\beta_{G_i}\right\|_1\geq\delta\right\}.
\end{equation}
Although $y$ can also be obtained by selecting the groups with nonzero coefficients when $\lambda$ in (\ref{E:mspredict}) is chosen properly, we set the threshold $\delta$ to a small positive value to guarantee robustness to the choice of $\lambda$.
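As an illustration only, the prediction stage defined by (\ref{E:mspredict}) and (\ref{E:thresh}) can be sketched with a generic proximal-gradient solver for group \emph{lasso}; the Python fragment below is schematic (function and parameter names are chosen for this sketch), and any group \emph{lasso} solver can be substituted:
\begin{verbatim}
import numpy as np

def sdgs_predict(x, C, groups, lam=0.3, delta=1e-3, iters=500):
    # min_b 0.5*||x - bC||^2 + lam*sum_g ||b_g||_2, then group thresholding
    beta = np.zeros(C.shape[0])
    t = 1.0 / np.linalg.norm(C, 2) ** 2           # step size 1/L
    for _ in range(iters):
        beta = beta - t * ((beta @ C - x) @ C.T)  # gradient step
        for g in groups:                          # group soft-thresholding
            ng = np.linalg.norm(beta[g])
            if ng > 0:
                beta[g] *= max(0.0, 1.0 - t * lam / ng)
    # label i is predicted iff ||beta_{G_i}||_1 >= delta
    return np.array([np.abs(beta[g]).sum() >= delta for g in groups], int)
\end{verbatim}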
Algorithm 2 summarizes the prediction stage of SDGS.
\vspace{-5mm}
\begin{algorithm}[htb]
\SetAlgoLined
\KwIn{$x$, $C^i,i=1,\dots,k$, $\lambda$, $\delta$}
\KwOut{$y$}
Solve group \emph{lasso} in (\ref{E:mspredict}) by using \texttt{group \emph{lasso}}\;
Predict $y$ via thresholding in (\ref{E:thresh})\;
\caption{SDGS Prediction}
\end{algorithm}\vspace{-8mm}
\section{Experiments}
In this section, we evaluate SDGS on several datasets of text classification, image annotation, scene classification, music categorization, genomics and web page classification. We compare SDGS with BR \cite{MLreview2}, ML-KNN \cite{MLKNN} and MDDM \cite{MDDM} on five evaluation metrics to assess effectiveness, as well as on CPU seconds to assess efficiency. All the experiments were run in MatLab on a server with dual quad-core 3.33 GHz Intel Xeon processors and 32 GB RAM.
\subsection{Evaluation metrics}
In the multi-label prediction experiments, five metrics, namely Hamming loss, precision, recall, F1 score, and accuracy, are used to measure the prediction performance.
Given two label matrices $Y1,Y2\in\{0,1\}^{n\times k}$, wherein $Y1$ contains the ground-truth labels and $Y2$ the predicted ones, the Hamming loss measures the recovery error rate:
\begin{equation}\label{E:HammingLoss}
HamL=\frac{1}{nk}\sum\limits_{i=1}^n\sum\limits_{j=1}^k {Y1}_{ij}\oplus{Y2}_{ij},
\end{equation}
where $\oplus$ is the XOR operation, a.k.a. the exclusive disjunction.
The other four metrics, precision, recall, F1 score, and accuracy, are defined as:
\begin{align}
&Prec=\frac{1}{n}\sum\limits_{i=1}^n \frac{{\rm card}\left({Y1}_{i}\cap {Y2}_{i}\right)}{{\rm card}\left(Y2_i\right)},\\
&Rec=\frac{1}{n}\sum\limits_{i=1}^n \frac{{\rm card}\left({Y1}_{i}\cap {Y2}_{i}\right)}{{\rm card}\left(Y1_i\right)},\\
&F1=\frac{1}{n}\sum\limits_{i=1}^n \frac{2{\rm card}\left({Y1}_{i}\cap {Y2}_{i}\right)}{{\rm card}\left(Y1_i\right)+{\rm card}\left(Y2_i\right)},\\
&Acc=\frac{1}{n}\sum\limits_{i=1}^n \frac{{\rm card}\left({Y1}_{i}\cap {Y2}_{i}\right)}{{\rm card}\left({Y1}_{i}\cup {Y2}_{i}\right)}.
\end{align}
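For concreteness, all five metrics can be computed from two $\{0,1\}$ label matrices as in the following sketch; rows with empty label sets are guarded against division by zero, a convention not specified by the formulas above:
\begin{verbatim}
import numpy as np

def multilabel_metrics(Y1, Y2):
    # Y1: ground truth, Y2: prediction; (n, k) integer arrays in {0, 1}
    inter = np.sum(Y1 & Y2, axis=1).astype(float)
    haml = np.mean(Y1 != Y2)
    prec = np.mean(inter / np.maximum(Y2.sum(axis=1), 1))
    rec = np.mean(inter / np.maximum(Y1.sum(axis=1), 1))
    f1 = np.mean(2 * inter / np.maximum(Y1.sum(axis=1) + Y2.sum(axis=1), 1))
    acc = np.mean(inter / np.maximum(np.sum(Y1 | Y2, axis=1), 1))
    return haml, prec, rec, f1, acc
\end{verbatim}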
\subsection{Datasets}
We evaluate the prediction performance and time cost of SDGS on 11 datasets from different domains and of different scales, including Corel5k (image), Mediamill (video), Enron (text), Genbase (genomics), Medical (text), Emotions (music), Slashdot (text) and $4$ sub-datasets selected from the Yahoo dataset (web data). These datasets were obtained from Mulan's website \footnote{\texttt{http://mulan.sourceforge.net/datasets.html}} and MEKA's website \footnote{\texttt{http://meka.sourceforge.net/}}. They were collected from different practical problems. Table \ref{Table:datasets} shows the number of samples $n$ (training samples+test samples), the number of features $p$, the number of labels $k$, and the average cardinality of all label vectors $Card$ for each dataset.
\begin{table}[H]
\caption{Information on the datasets used in the experiments. In the table, $n$ (training samples+test samples) is the number of samples, $p$ is the number of features, $k$ is the number of labels, and ``Card'' is the average cardinality of all label vectors.}
\begin{center}\vspace{-1mm}
\begin{tabular}{l*{4}{c}}
\hline
Datasets & $n$ & $p$ & $k$ & Card \\
\hline \hline
Corel5k & $4500+500$ & $499$ & $374$ & $3.522$ \\
Mediamill & $30993+12914$ & $120$ & $101$ & $4.376$ \\
Enron & $1123+579$ & $1001$ & $53$ & $3.378$ \\
Genbase & $463+199$ & $1186$ & $27$ & $1.252$ \\
Medical & $333+645$ & $1449$ & $45$ & $1.245$ \\
Emotions & $391+202$ & $72$ & $6$ & $1.869$ \\
Slashdot & $2338+1444$ & $1079$ & $22$ & $1.181$ \\
Yahoo-Arts & $2000+3000$ & $462$ & $26$ & $1.636$ \\
Yahoo-Education & $2000+3000$ & $550$ & $33$ & $1.461$ \\
Yahoo-Recreation & $2000+3000$ & $606$ & $22$ & $1.423$ \\
Yahoo-Science & $2000+3000$ & $743$ & $40$ & $1.451$ \\
\hline
\end{tabular}
\end{center}\vspace{-4mm}
\label{Table:datasets}
\end{table}
\subsection{Performance comparison}
We show the prediction performance and time cost in CPU seconds of BR, ML-KNN, MDDM and SDGS in Table \ref{Table:exp} and Table \ref{Table:yahoo}. In BR, we use the MATLAB interface of LIBSVM 3.0 \footnote{\texttt{http://www.csie.ntu.edu.tw/\~{}cjlin/libsvm/}} to train the classic linear SVM classifiers for each label. The parameter $C\in\left\{10^{-3},10^{-2},0.1,1,10,10^2,10^3\right\}$ with the best performance was used. In ML-KNN, the number of neighbors was $30$ for all the datasets.
\begin{table}[H]
\caption{Prediction performances (\%) and CPU seconds of BR \cite{MLreview2}, ML-KNN \cite{MLKNN}, MDDM \cite{MDDM} and SDGS on Yahoo.}
\begin{center}
\begin{tabular}{|c|l|*{6}{c}|}
\hline
& Methods & Hamming loss & Precision & Recall & F1 score & Accuracy & CPU seconds \\
\hline
\multirow{4}{*}{Arts}&BR & $5$ &$76$ &$25$ &$26$ &$24$ &$46.8$ \\
&ML-KNN & $6$ &$62$ &$7$ &$25$ &$6$ &$77.6$ \\
&MDDM & $6$ &$68$ &$6$ &$21$ &$5$ &$37.4$ \\
&SDGS & $9$ &$35$ &$40$ &$31$ &$28$ &$11.7$ \\
\hline
\multirow{4}{*}{Education}&BR & $4$ &$69$ &$27$ &$28$ &$26$ &$50.1$ \\
&ML-KNN & $4$ &$58$ &$6$ &$31$ &$5$ &$99.8$ \\
&MDDM & $4$ &$59$ &$5$ &$26$ &$5$ &$45.2$ \\
&SDGS & $4$ &$41$ &$35$ &$32$ &$29$ &$12.6$ \\
\hline
\multirow{4}{*}{Recreation}&BR & $5$ &$84$ &$23$ &$23$ &$22$ &$53.2$ \\
&ML-KNN & $6$ &$70$ &$9$ &$23$ &$8$ &$112$ \\
&MDDM & $6$ &$66$ &$7$ &$18$ &$6$ &$41.9$ \\
&SDGS & $7$ &$41$ &$49$ &$36$ &$30$ &$19.1$ \\
\hline
\multirow{4}{*}{Science}&BR & $3$ &$79$ &$19$ &$19$ &$19$ &$84.9$ \\
&ML-KNN & $3$ &$59$ &$4$ &$20$ &$4$ &$139$ \\
&MDDM & $3$ &$66$ &$4$ &$19$ &$4$ &$53.0$ \\
&SDGS & $5$ &$31$ &$39$ &$29$ &$26$ &$20.1$ \\
\hline
\end{tabular}
\end{center}
\label{Table:yahoo}
\end{table}
In MDDM, the regularization parameter for uncorrelated subspace dimensionality reduction was selected as $0.12$ and the dimension of the subspace was set as $20\%$ of the dimension of the original data. In SDGS, we selected $r^i$ as an integer in $\left[1,6\right]$, $K\in\left[10^{-6},10^{-3}\right]$, $\lambda\in\left[0.2,0.45\right]$ and $\delta\in\left[10^{-4},10^{-2}\right]$. We selected four parameter configurations from these ranges for each dataset and chose the one with the best performance on the training data. The group \emph{lasso} in SDGS can be solved by many convex optimization methods, e.g., submodular optimization \cite{BachGL} and SLEP \cite{SLEP}. We use SLEP in our experiments.
The experimental results show that SDGS is competitive in both prediction performance and speed, because it explores label correlations and structure without increasing the problem size. In addition, the bilateral random projections further accelerate the computation. SDGS has smaller gaps between precision and recall on different tasks than the other methods, which suggests that it is robust to the imbalance between positive and negative samples.
\section{Conclusion}
In this paper, we propose a novel multi-label learning method, ``Structured Decomposition + Group Sparsity (SDGS)''. Its training stage decomposes the training data into the sum of several low-rank components $L^i_{\Omega_i}$ corresponding to their labels and a sparse residual $S$ that cannot be explained by the given labels. This structured decomposition is accomplished by an alternating minimization based on bilateral random projections, and it converges to a local minimum. The row space $C^i$ of $L^i_{\Omega_i}$ is the mapping of label $i$ in the feature subspace. The prediction stage estimates the group sparse representation of a new sample on the multi-subspace $C^i$ via group \emph{lasso}. SDGS predicts the labels by selecting the groups on which the nonzero representation coefficients concentrate.
SDGS finds the mappings of labels in the feature space, where the label correlations are naturally preserved. Thus it explores the label structure without increasing the problem size. SDGS is robust to the imbalance between positive and negative samples, because it uses group sparsity in the multi-subspace to select the labels, which takes into account both the discriminative and the relative information between the mappings of labels in the feature subspace.
\bibliographystyle{splncs}
\section{Introduction}
Pursuit-evasion games have a long history, especially in the setup of \emph{differential games} \cite{Is65,Ji15,Le94,Pa70,Pe93}. Differential games with more pursuers were also introduced in the 1970s, see, e.g. \cite{HaBr74,Ch76,Ps76,LePa85} or a more recent paper \cite{FeIbAlSa20} and references therein. A more recent important application is the design of robot movement in complicated environments, see e.g. \cite{AHRW17}. A more general class of such games played on finite graphs has been devised in a discrete setting. Nowakowski and Winkler \cite{NoWi83} and Quilliot \cite{Qui78} independently introduced the game of Cop and Robber that is played on a (finite) graph. Aigner and Fromme \cite{AiFr84} extended the game to include more than one cop. For each graph $G$ and a positive integer $k$, the \emph{Cops and Robber game} on $G$ involves two players. The first player controls $k$ \emph{cops} placed at the vertices of the graph, and the second player controls the \emph{robber}, who is also positioned at some vertex. While the players alternately move to adjacent vertices (or stay at their current position), the cops want to catch the robber and the robber wants to prevent this from ever happening. The main question is how many cops are needed on the given graph $G$ in order to guarantee the capture. The minimum such number of cops is termed the \emph{cop number} $c(G)$ of the graph.
The game of cops and robbers gained attention because of its ties with structural graph theory. Classes of graphs that can be embedded in a surface of bounded genus \cite{AiFr84} and those that exclude some fixed graph as a minor \cite{An86} have bounded cop number. In particular, all graphs that can be embedded in the plane have cop number at most 3 \cite{AiFr84}. We refer to the monograph by Bonato and Nowakowski \cite{BoNo11} for further details about the history of the game and for overview of the main results.
One of our aims is to introduce the game in a more general setup of geodesic metric spaces and study the relationship between the cop number and the topology and geometry of the geodesic space.
The famous Lion and Man problem that was proposed by Richard Rado in the late 1930s and discussed in Littlewood's Miscellany \cite{Li53,Li86} is a version of the game with one pursuer (the Lion) and one evader (the Man). The man and the lion are within a circular arena (unit disk in the plane) and they run with equal maximum speed. It seems that in order to avoid the lion, the man would choose to run on the boundary of the disk. A simple argument then shows that the lion could always catch the man by staying on the segment joining the center of the disk with the point of the man and slowly approaching him. However, Besicovitch proved in 1952 (see \cite[pp.~114--117]{Li86}) that the man has a simple strategy, in which he will approach but never reach the boundary, that enables him to avoid capture forever no matter what the lion does.\footnote{The game defined in this paper allows the use of Besicovitch strategy for the man, so this example shows that the lion is able to come arbitrarily close to the man, but can never catch him.} More details can be found in~\cite{BoLeWa12}.
One can prove that two lions are enough to catch the man in a disk. A recent work by Abrahamsen et al.\ \cite{AHRW17,AHRW20} discusses the game with many lions versus one man in an arbitrary compact subset of the plane whose boundary consists of finitely many rectifiable simple closed curves and proves that three lions can always get their prey. The authors also discuss the game when the man is just slightly faster than the lions, and find some surprising conclusions.
The game of cops and robbers can be defined on any metric space.
However, it is far from obvious how such a game can be defined in order to be natural, resembling interesting examples and allowing for powerful mathematical tools. Subtleties of the various versions of the game are nicely outlined in an influential paper by Bollob\'as, Leader, and Walters \cite{BoLeWa12}, who were the first to provide a general setup for such a game.
In this article we discuss the game of cops and robbers on arbitrary geodesic spaces (see Section \ref{sect:geodesic spaces} or \cite{BuBuIv01,BuSh04} for definitions). We come up with a version of the game that is somewhat different from the game version in \cite{BoLeWa12}, but preserves all the beauty and power of discrete games played on graphs.
Moreover, our version keeps the characteristics of the pursuit-evasion games played in a continuous setting and for instance allows for using strategies similar to that of Besicovitch in the case of the Man and Lion game. It is shown that our game can be approximated by finite games of discrete type and as a consequence we are able to prove the min-max theorem.
\section{Intrinsic metric in geodesic spaces}
\label{sect:geodesic spaces}
We consider a metric space $(X,d)$ and the corresponding metric space topology on $X$. For $x,y\in X$, an \emph{$(x,y)$-path} is a continuous map $\gamma: I\to X$ where $I=[0,1]$ is the unit interval on $\RR$ and $\gamma(0)=x$ and $\gamma(1)=y$.
We allow the paths to be parametrized differently and in particular we can replace $I$ with any finite interval on $\RR$.
The space is \emph{path-connected} if for any $x,y\in X$, there exists an $(x,y)$-path connecting them.
One can define the \emph{length} $\ell(\gamma)$ of the path $\gamma$ by taking the supremum over all finite sequences $0=t_0<t_1<t_2< \cdots < t_n=1$ of the values $\sum_{i=1}^n d(\gamma(t_{i-1}),\gamma(t_i))$. Note that $\ell(\gamma)$ may be infinite; if it is finite, we say that $\gamma$ is \emph{rectifiable}. Note that the length of any $(x,y)$-path is at least $d(x,y)$. The metric space $X$ is a \emph{geodesic space} if for every $x,y\in X$ there is an $(x,y)$-path $\gamma$ whose length is equal to $d(x,y)$.
An $(x,y)$-path $\gamma$ is \emph{isometric} if $\ell(\gamma) = d(x,y)$. Observe that for $0\le t < t' \le 1$ the subpath $\gamma|_{[t,t']}$ is also isometric. Therefore the set $\gamma(I) = \{\gamma(t)\mid t\in I\}$ is an isometric subset of $X$. With a slight abuse of terminology, we say that the image $\gamma(I)\subset X$ is an \emph{isometric path} in $X$.
A path $\gamma$ is a \emph{geodesic} if it is locally isometric, i.e., for every $t\in [0,1]$ there is an $\varepsilon>0$ such that the subpath $\gamma|_J$ on the interval $J = [t-\varepsilon,t+\varepsilon]\cap[0,1]$ is isometric. A path with $\gamma(0)=\gamma(1)$ is called a \emph{loop} (or a \emph{closed path}). When we say that a loop is a geodesic, we mean it is geodesic as a path and it is also locally isometric around its base point, i.e. $\gamma|_{[1-\varepsilon,1]\cup[0,\varepsilon]}$ is isometric for some $\varepsilon>0$.
Alternatively, one can consider any path-connected compact metric space $X$ and then define the shortest path distance. For $x,y\in X$, the \emph{shortest path distance} from $x$ to $y$ is defined as the infimum of the lengths of all $(x,y)$-paths in $X$. If any two points in $X$ are joined by a path of finite length, then the shortest path distance gives the same topology on $X$. By the Arzel\`a-Ascoli theorem (see e.g. \cite{BuBuIv01}), compactness implies that any sequence of $(x,y)$-paths contains a point-wise convergent subsequence, and that the limit points determine an $(x,y)$-path. This implies that there is a path whose length is equal to the infimum of all path lengths. Hence, for this metric, which is also known as the \emph{intrinsic metric}, $X$ is a geodesic space.
If $X$ is a geodesic space, each of its points appears on a geodesic. But some points only appear as the end-points of isometric paths in $X$ and cannot appear as interior points of those. Such points are called \emph{corners}. All other points appear as internal points on geodesics in $X$ and are said to be \emph{regular points} in $X$. It is obvious that regular points are dense in $X$. On the other hand, the set of corners can also be very rich. It may contain the whole boundary component, but in the interior of $X$, it is totally path-disconnected in the sense that every path containing only corners is either trivial (a single point), or is contained in $\partial X$.
Common examples of geodesic spaces include any connected cell complex endowed with the intrinsic metric.
If a geodesic space is homeomorphic to a 1-dimensional cell complex (graph), then we say that it is a \emph{metric graph}.
If $G$ is a graph and $w:E(G)\to \RR_+$ is a function specifying the length of each edge, we define the \emph{metric graph} $X(G,w)$ corresponding to $G$ and $w$ as the metric graph $G$ in which each edge $e$ is represented by a real interval of length $w(e)$.
We refer to \cite{BuBuIv01} and \cite{BuSh04} for a thorough treatment of geodesic spaces.
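Before moving on, let us note that for a metric graph $X(G,w)$ the intrinsic distance between vertices is just the weighted shortest-path distance, so it can be computed by Dijkstra's algorithm; a point in the interior of an edge is handled by subdividing that edge. The following Python sketch (ours, purely illustrative) makes this concrete.
\begin{verbatim}
# Intrinsic (shortest-path) distances from a source vertex of a metric
# graph, with edges given as (u, v, length) triples of positive length.
import heapq
from collections import defaultdict

def intrinsic_distances(edges, source):
    graph = defaultdict(list)
    for u, v, length in edges:
        graph[u].append((v, length))   # edges can be traversed both ways
        graph[v].append((u, length))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                   # stale heap entry
        for v, length in graph[u]:
            if d + length < dist.get(v, float("inf")):
                dist[v] = d + length
                heapq.heappush(heap, (d + length, v))
    return dist   # vertex -> d(source, vertex)

# A point at distance t from u inside an edge (u, v, w) is treated by
# replacing the edge with (u, m, t) and (m, v, w - t) for a new vertex m.
\end{verbatim}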
\section{Game of Cops and Robber on geodesic spaces}
\subsubsection*{Rules of the game}
Let $X$ be a compact geodesic space endowed with intrinsic metric $d$, and let $k\ge1$ be an integer. A \emph{Game of Cops and Robber} on the \emph{game space} $X$ with $k$ cops is a two-person game with complete information defined as follows. The first player controls an avatar, who is positioned at a point $r\in X$ and whom we call the \emph{robber}. The second player controls a set of $k$ \emph{cops} $C_1,\dots,C_k$ that are also positioned in $X$. It is allowed that different cops occupy the same position in $X$. There are rules specifying how the game starts and how the players move, and the goal of the second player is to come as close as possible to the robber (possibly catching him, i.e.\ having one of the cops occupy the same point of $X$ as the robber). The details of these rules are specified below.
A \emph{position} in a game with $k$ cops is a $(k+1)$-tuple $(r,c_1,\dots,c_k)\in X^{k+1}$ enlisting the positions of the robber and the cops. Instead of $(r,c_1,\dots,c_k)$ we also write $(r,c)$, where $c=(c_1,\dots,c_k)\in X^k$. The game is defined by the following parameters (in addition to the game space $X$ and $k$):
\begin{description}
\item[(I)] A rule that specifies the set of \emph{admissible initial positions} of the robber and the cops. This is just a set of $(k+1)$-tuples, $\mathcal Y^0 \subseteq X^{k+1}$.
\item[(S)] A set $\Sigma^0$ of \emph{agility functions}, each of which maps $\NN\to\RR_+$.
\end{description}
The \emph{standard game} $\Gamma_0$ has $\mathcal Y^0 = X^{k+1}$ and $\Sigma^0$ contains all positive functions $\tau:\NN\to\RR_+$ for which $\sum_{n\ge1} \tau(n) = \infty$. Throughout this paper we will stick with these assumptions unless stated differently.
Given (I) and (S), the robber selects an initial position $Y^0 = (r^0,c_1^0,\dots,c_k^0) \in \mathcal Y^0$ and selects his agility $\tau\in \Sigma^0$.
Then the game proceeds as a discrete game in consecutive steps. Having made $n-1$ steps $(n\ge1)$, the players are in position $(r^{n-1},c_1^{n-1},\dots,c_k^{n-1})\in X^{k+1}$. The $n$th step will have its duration determined by the agility: the move will last for time $\tau(n)$, and each player can move with unit speed up to a distance at most $\tau(n)$ from his current position.
First, the robber moves to a point $r^n\in X$ at distance at most $\tau(n)$ from its current position, i.e. $d(r^{n-1}, r^n)\le \tau(n)$. The destination $r^n$ is revealed to the cops. Then each cop $C_i$ ($i\in[k]$) selects his new position $c_i^n$ at distance at most $\tau(n)$ from its current position, i.e. $d(c_i^{n-1}, c_i^n)\le \tau(n)$. The game stops if $c_i^n = r^n$ for some $i\in[k]$. In that case, the \emph{value of the game} is 0 and we say that the cops \emph{have caught} the robber. Otherwise the game proceeds with the next step. If it never stops, the \emph{value of the game} is
\begin{equation}\label{eq:value of game}
v = \inf_{n\ge0} \min_{i\in[k]} d(r^n, c_i^n).
\end{equation}
If the value is 0, we say that the \emph{cops won} the game; otherwise the \emph{robber wins}. Note that the cops can win even if they never catch the robber.\footnote{Consider the afore-mentioned strategy of Besicovitch \cite{BoLeWa12} for the game of Lion and Man.}
The traditional description of pursuit-evasion games starts with the cops first choosing their positions and then the robber choosing his. This setting is actually equivalent to our standard game, since the cops can always move to their desired initial positions during the beginning of the game.
We will also consider other variants of the game:
\begin{itemize}
\item
The \emph{game $\Gamma(r,c)$ with the fixed initial position $(r,c)$}. Here we have $\mathcal Y^0 = \{(r,c)\}$.
\item
The \emph{game $\Gamma(\tau)$ with the fixed agility} $\tau$, and its version where also the initial position is fixed, $\Gamma(r,c,\tau)$. Here we have $\Sigma^0 = \{\tau\}$ (and $\mathcal Y^0 = \{(r,c)\}$), and we allow that $T = \sum_{n\ge1} \tau(n)$ is either finite or infinite.
\item A \emph{finite $N$-step game} $\Gamma(N,\tau)$ and its version with the initial position fixed, $\Gamma(r,c,N,\tau)$. The game stops after $N$ steps. Here, only the first $N$ values of $\tau$ are important for the game, so we may assume that $\tau: [N] \to \RR^+$ or that $\tau(n)=0$ for $n > N$.
\item A \emph{finite-time game} $\Gamma(T)$ or $\Gamma(r,c,T)$ is a finite-step game, where the constraint is not the number of steps but the total duration, i.e., the agility functions satisfy $\sum_{n\ge1} \tau(n) = T$. Here $T$ is a positive real number, the \emph{duration of the game}. This version may be combined with fixing the number of steps, $\Gamma(N,T)$ or even fixing the agility, $\Gamma(N,\tau,T)$, where we ask $\sum_{n=1}^N \tau(n) = T$; additionally, we can fix the initial position.
\end{itemize}
It will always be clear by the listed set of parameters which version of the game we have in mind.
In general, we should also add the game space $X$ and the number $k$ of cops among the parameters, but since this is almost always implicit from the context, we usually omit them.
\subsubsection*{Strategies and value of the game}
The value of the game when played in $X$ is defined by (\ref{eq:value of game}). Note that the strategy of a player depends not only on the current position but also on the agility chosen by the robber at the very beginning. To formalize this dependence, we introduce the notion of a strategy.
The strategy of the robber is first to select an initial position and agility. This is a formal part of his strategy. The rest of his strategy and a strategy of the cops may be defined via a game with a fixed initial position and fixed agility, $\Gamma(r,c,\tau)$. Formally (and leaving out the initial choice of the robber), a \emph{strategy of the robber} is a function $s: (r,c,\tau) \mapsto r'$, such that $d(r,r')\le \tau(1)$. This can be interpreted as moving the robber from $r$ to $r'$ along some geodesic of length $d(r,r')$. Then, each cop $C_i$ moves from his current position $c_i$ to a point $c_i'$ at distance at most $\tau(1)$ from $c_i$. The choice of such destinations $c' = (c_1',\dots,c_k')$ constitutes a \emph{strategy of cops}. Formally, it is a function $q: (r',c,\tau) \mapsto c'$. Performing the moves determined by both strategies gives the new game $\Gamma(r',c',\delta\tau)$ with the fixed position and agility, where $\delta\tau(n) := \tau(n+1)$ ($n\ge1$).
Given the agility $\tau$ and strategies $s,q$ of the robber and the cops, we denote by $v_\tau(s,q)$ the value of the game when it is played using these strategies. Now we define the \emph{guaranteed outcome} for each of the players. First for the robber:
$$
\ValR(\tau) = \inf_q \sup_s v_\tau(s,q) \quad \textrm{and} \quad \ValR = \sup_\tau \ValR(\tau).
$$
Similarly for the cops,
$$
\ValC(\tau) = \sup_s \inf_q v_\tau(s,q) \quad \textrm{and} \quad \ValC = \sup_\tau \ValC(\tau).
$$
For each $\varepsilon>0$, there is $q$ such that for every $s$, $v_\tau(s,q) < \ValR(\tau)+\varepsilon$. This implies that $$\ValC(\tau) \le \ValR(\tau) \quad \textrm{and} \quad \ValC \le \ValR.$$
If $\ValC = 0$, then we say that \emph{cops win} the game. If $\ValR>0$, then the \emph{robber wins}.
It is an interesting question whether it can happen that $\ValC < \ValR$ for some game space $X$ and some $k$. In particular, is it possible that both players, the cops and the robber win the game? This question was offered as the main open problem in the afore-mentioned work by Bollob\'as et al.~\cite{BoLeWa12}. For our version of the game, we will answer this question in the negative. Indeed, the subtleties of our definition will allow us to make the conclusion that $\ValC = \ValR$, see Theorem \ref{thm:ValC=ValR}.
Since $X$ is compact, for every $\varepsilon>0$ there exists an integer $k$ such that $k$ cops can always ensure that the value of the game is less than $\varepsilon$. (Place the cops at the centers of open balls of radius $\varepsilon$ that cover $X$. Then, no matter where the robber is, he will be at distance less than $\varepsilon$ from one of the cops.) Hence, with a growing number of cops, the value of the game tends to 0.
Given a game space $X$, let $k$ be the minimum integer such that $k$ cops win the game on $X$. This minimum value will be denoted by $c(X)$ and called the \emph{cop number} of $X$. If such a $k$ does not exist, then we set $c(X)=\infty$. Similarly we define the \emph{strong cop number} $c_1(X)$ as the minimum $k$ such that $k$ cops can always catch the robber.
\subsubsection*{Examples:}
{\bf 1. The $n$-ball.}
Let us consider the game with one cop on the $n$-dimensional ball of radius~$1$, $X = B^n = \{x\in \RR^n \mid \Vert x \Vert_2 \le 1\}$. This is the higher-dimensional analogue of the game of Man and Lion. It turns out that the ball of any dimension has cop number 1, i.e., one cop can win the game (although the robber can make sure he is never caught). In this game, the cop can use a strategy from the Man and Lion game (the 2-dimensional version) as follows. First, the cop moves to the center of the ball. From now on he will make sure to always be on the line segment from the center to the current position of the robber. By considering the 2-dimensional plane $\Pi$ through the origin, containing the former and the new position of the robber, he can keep this strategy and approach the robber using the 2-dimensional strategy in $\Pi$ (in which the cop keeps the requirement to be on the line from the center to the robber), thus approaching the robber and making the infimum of his distance to the robber equal to 0.
Note that a slight modification of the described strategy of the cop can be defined so that it only depends on the positions of the players and the current step length, i.e., it does not need the whole information on agility. It goes as follows: If the step duration is $t$ and the cop can reach a point on the segment from the center to the robber, he moves to the point on this segment that is closest to the robber. If he cannot reach the segment in the current step, then he moves distance $t$ towards the center.
The above strategy shows that $c(B^n) = 1$. It can be shown that $c_1(B^n) = n$, see \cite{Croft64,IrMo22}.
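This position-based strategy is easy to simulate. The following Python snippet (our numerical illustration, with an arbitrary robber strategy running along a circle of radius $0.9$) implements it for $n=2$ and shows the distance shrinking towards $0$.
\begin{verbatim}
# Cop strategy on the unit disk: move to the reachable point of the
# segment [0, robber] closest to the robber, else move to the center.
import numpy as np

def cop_step(c, r, t):
    u = r / np.linalg.norm(r)              # unit vector towards robber
    b = float(u @ c)
    disc = b * b - (float(c @ c) - t * t)  # solve |s*u - c| <= t for s
    if disc >= 0.0:
        s_lo = max(b - disc ** 0.5, 0.0)
        s_hi = min(b + disc ** 0.5, np.linalg.norm(r))
        if s_lo <= s_hi:                   # segment reachable:
            return s_hi * u                # its point closest to robber
    norm_c = np.linalg.norm(c)             # else: move distance t to center
    return c * max(0.0, 1.0 - t / norm_c) if norm_c > 0 else c

t, theta = 0.01, 0.0
c = np.array([0.3, -0.4])
for _ in range(2000):
    theta += t / 0.9                       # robber circles at radius 0.9
    r = 0.9 * np.array([np.cos(theta), np.sin(theta)])
    c = cop_step(c, r, t)
print(np.linalg.norm(r - c))               # small: the cop closes in
\end{verbatim}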
\medskip
\noindent
{\bf 2. The $n$-sphere.}
The game with one cop played on the $n$-dimensional sphere of radius $1$ in $\RR^{n+1}$, $S^n = \{x\in \RR^{n+1} \mid \Vert x \Vert_2 = 1\}$ is somewhat different. Here the robber may invoke the following strategy. Let $\tau_\varepsilon$ be the agility in which each step has length $\varepsilon$. The robber can choose this agility and select the initial position of the cop to be at the north pole, while he positions himself at the south pole. At the first step, the robber stays put. Then, the cop moves, and in the next step, the robber can move to the point that is antipodal to the position of the cop. It is easy to see that the minimum distance between the cop and the robber is never below $\pi-\varepsilon$, so the value of the standard game is $\pi$ in this case.
Differential pursuit-evasion game on $S^n$ was studied by Satimov and Kuchkarov \cite{SaKu00}, who proved that $n+1$ cops can catch the robber on $S^n$ and that $n$ cops cannot catch him. Our version of the game has the same outcome, i.e. $c_1(S^n) = n+1$, see \cite{IrMo22}. However, an interesting fact from \cite{IrMo22} is that two cops can win the game, i.e. $c(S^n) = 2$.
\medskip
\noindent
{\bf 3. The cylinder.}
Let $B$ be a game space (a compact geodesic space with intrinsic metric $d$). The \emph{cylinder over $B$} is the geodesic space $X=B\times I$, endowed with the product topology, where $I=[0,1]$ is a real interval of length 1. We consider the $\ell_p$-metric $d_p$ for some $p\ge1$:
$$
d_p((a,s),(b,t)) = \left(d(a,b)^p + |s-t|^p\right)^{1/p}.
$$
Clearly, $X$ is a geodesic space. For any number $k$ of cops, the value of the game of Cops and Robber on $X$ is the same as on $B$ if $p>1$: $\ValR(X) = \ValR(B)$.
To see this, let us consider any agility $\tau$ and for any $\varepsilon>0$, consider a strategy $s_0$ of the robber in $B$ such that
$$
\ValC(B,\tau) = \sup_s \inf_q v_\tau(s,q) \le \inf_q v_\tau(s_0,q) + \varepsilon.
$$
The robber can use the same agility and the same strategy on $X$, if he always stays in $B \approx B\times \{0\}$ and considers the cops' positions as being in $B$ by projecting them to the first coordinate. By using the strategy $s_0$, he will be able to keep distance $v_\tau(s_0,q)$ from the cops, where $q$ is the projection of cops' strategy to $B$. Since $\varepsilon$ is arbitrarily small, this shows that $\ValC(B,\tau)\le \ValC(X,\tau)$, and since $\tau$ is arbitrary, it also implies that $\ValC(B)\le \ValC(X)$.
Similarly we see that $\ValR(B)\ge \ValR(X)$. Here, the cops will use a strategy in $B$ to get close to the robber in $B \approx B\times \{0\}$. Once one of the cops is at distance less than $\varepsilon$ from the projection of the robber's position onto $B$, then that cop follows the moves of the robber in the $B$-coordinate and slowly increases its $I$-coordinate with a constant slope, so that after making distance $t$, his $I$-coordinate would have changed by 1, and the length $t$ of his path will be at most $\varepsilon$ larger than the length $t_0$ of the projected path. For instance, we can take $t$ large enough so that $t-(t^p-1)^{1/p} < \varepsilon$. Such $t$ exists since $p>1$. It is easy to see that in this way, the cop will come to a distance less than $2\varepsilon$ from the robber. Again, since $\varepsilon$ is arbitrarily small and $\tau$ is arbitrary, we conclude that $\ValR(B)\ge \ValR(X)$. Both inequalities show that
$\ValC(B) \le \ValC(X) \le \ValR(X) \le \ValR(B)$.
Finally, our Theorem \ref{thm:ValC=ValR} implies that
$$
\ValC(B) = \ValC(X) = \ValR(X) = \ValR(B).
$$
The same result holds if instead of $X=B\times I$ we consider the product $B\times T$, where $T$ is any metric graph homeomorphic to a tree.
\section{Approximation with a finite game}
The analysis of the game of cops and robber as defined in this article is somewhat complicated due to the fact that the game may have infinitely many steps and that the set of agility functions is not compact. However, the game can be approximated by a finite game within an arbitrary precision. This approximation result is described next.
Suppose that we fix an initial position $(r,c)\in X^{k+1}$, a positive integer $N$ and an agility $\tau$. Then we can consider $N$ steps of the game, and let $T=T_N(\tau)=\sum_{i=1}^N \tau(i)$ be the duration of the game during these $N$ steps. We will use $\tau^N$ to denote the restriction of $\tau$ to the first $N$ values since the rest of $\tau$ is not important for the finite game. This means that we either view $\tau^N$ as a function $[N]\to\RR_+$ or as a function that has $\tau^N(n)=0$ for every $n>N$. We also define $d(r,c) = \min\{d(r,c_i)\mid i\in [k]\}$, $d(c,c') = \max\{d(c_i,c_i')\mid i\in [k]\}$, and $d((r,c),(r',c')) = \max\{d(r,r'),d(c,c')\}$. Given $\tau$, we define its \emph{shift} $\tau_1=\delta\tau$ by the rule $\tau_1(i)=\tau(i+1)$ for $i\ge1$.
For each game $\Gamma(*)$ (where $(*)$ is the defining set of parameters), we will define the value of the game $\Val(\Gamma(*))$, which we will abbreviate by writing simply $\Val(*)$.
Given the finite game $\Gamma(r,c,N,\tau^N)$ with initial position $(r,c)$, its \emph{value} can be defined recursively as follows:
\begin{equation}\label{eq:def_value_finite_game}
\Val(r,c,N,\tau^N) =
\left\{
\begin{array}{ll}
d(r,c), & \hbox{if $N=0$;} \\[1.5mm]
\max\limits_{\substack{r'\\d(r,r')\le\tau(1)}}~~\min\limits_{\substack{c'\\d(c,c')\le\tau(1)}} \Val(r',c',N-1,\tau_1^{N-1}), & \hbox{otherwise.}
\end{array}
\right.
\end{equation}
The reader will realize that we used maximum and minimum (instead of supremum and infimum) in the definition; this is justified since $X$ is compact and the value of the game is continuous. This fact is formally stated and proved as Lemma \ref{lem:value_is_continuous} below.
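To make the recursion concrete, the following Python sketch (ours) computes the recursion (\ref{eq:def_value_finite_game}) for $k=1$ cop when $X$ is replaced by a finite point set with distance matrix $D$; a move of length $t$ may go to any point within distance $t$.
\begin{verbatim}
# Value of the finite-step game, k = 1 cop, on a finite point set.
from functools import lru_cache

def game_value(D, tau):
    points = range(len(D))

    @lru_cache(maxsize=None)
    def val(r, c, n):
        if n == len(tau):          # game over: payoff is d(r, c)
            return D[r][c]
        t = tau[n]
        moves_r = [x for x in points if D[r][x] <= t]
        moves_c = [x for x in points if D[c][x] <= t]
        return max(min(val(r2, c2, n + 1) for c2 in moves_c)
                   for r2 in moves_r)   # robber max, then cop min

    return val

# Three points on a path with unit spacing, two steps of length 1:
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(game_value(D, (1, 1))(0, 2, 0))   # -> 0, the cop catches up
\end{verbatim}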
Definition (\ref{eq:def_value_finite_game}) then shows that both players have optimal strategies that assure them the value $\Val(r,c,N,\tau^N)$ if they stick to these strategies. Let $s_0: (r,c,N,\tau^N) \mapsto r'$, where $r'$ is the position of the robber for which the maximum in (\ref{eq:def_value_finite_game}) is attained; and for each $r'$, let $q_0: (r',c,N,\tau^N) \mapsto c'$, where $c'$ is a cops' position at which the minimum in (\ref{eq:def_value_finite_game}) is attained. Note that for any strategies $s$ and $q$ of the robber and the cops (respectively), we have:
$$
v_\tau(s,q_0) \le v_\tau(s_0,q_0) = \Val(r,c,N,\tau^N) \le v_\tau(s_0,q).
$$
There is another small detail about the definition of the value $\Val(r,c,N,\tau^N)$ in (\ref{eq:def_value_finite_game}). Namely, (\ref{eq:def_value_finite_game}) defines the value as the distance $d(r^N,c^N)$ at the end of the game, while the definition of $v_\tau(s,q)$ also considers intermediate distances. This means that the definition should have been introduced as
\begin{equation}\label{eq:def_value_finite_game_intermediate}
\Val\nolimits'(r,c,N,\tau^N) =
\left\{
\begin{array}{ll}
d(r,c), & \hbox{if $N=0$;} \\[1.5mm]
\min \bigl\{ d(r,c),\ \max\limits_{r'}~\min\limits_{c'}~ \Val(r',c',N-1,\tau_1^{N-1}) \bigr\}, & \hbox{otherwise}
\end{array}
\right.
\end{equation}
where the maximum and the minimum are taken over the same nearby points $r'$ and $c'$ as in (\ref{eq:def_value_finite_game}).
However, this definition is equivalent as shown below.
\begin{lemma}\label{lem:L1}
For any finite-step game $\Gamma(r,c,N,\tau^N)$, we have $\Val(r,c,N,\tau^N) = \Val'(r,c,N,\tau^N)$.
\end{lemma}
\begin{proof}
The proof is by induction on the number of steps. By using the induction hypothesis, the only way that $\Val' \ne \Val$ is that $d(r,c)<\Val(r',c',N-1,\tau_1^{N-1})$. However, if that were the case, the cop at distance $d(r,c)$ from the robber could just follow the robber throughout the game, and achieve the value $d(r,c)$, which is a contradiction to the assumption that $d(r,c) < \Val(r',c',N-1,\tau_1^{N-1})$.
\end{proof}
A corollary of Lemma \ref{lem:L1} is that the value of the game is non-increasing in terms of its number of steps, which we state formally below.
\begin{lemma}\label{lem:value decreases with more steps}
For any agility $\tau$, initial position $(r,c)$ and positive integers $N\le M$, we have
$$\Val(r,c,N,\tau^N) \ge \Val(r,c,M,\tau^M).$$
\end{lemma}
For our later use we now define the value of a (non-finite) game with given agility:
\begin{equation}\label{eq:Value(r,c,tau) definition}
\Val(r,c,\tau) = \lim_{N\to\infty} \Val(r,c,N,\tau^N).
\end{equation}
The limit exists since the value of the game decreases with $N$.
Our goal is to approximate the game with finite games, and we use the following as the definition of what it means to approximate the game.
The game $\Gamma(r,c,\tau)$ is \emph{$\varepsilon$-approximated} with a finite game $\Gamma(r,c,N,\tau^N)$ if $\Val(r,c,N,\tau^N) - \Val(r,c,\tau) < \varepsilon$. Note that by Lemma \ref{lem:value decreases with more steps}, the difference $\Val(r,c,N,\tau^N) - \Val(r,c,\tau)$ cannot be negative.
We are ready to proceed with a proof that the game value is continuous.
\begin{lemma}\label{lem:value_is_continuous}
The value $\Val(r,c,N,\tau^N)$ of a finite game is continuous in terms of its initial position $(r,c)$ and $\tau^N$. More precisely, if $d((r,c),(r',c')) \le \delta$, then
$$\left|\Val(r,c,N,\tau^N) - \Val(r',c',N,\tau^N)\right| \le 2\delta$$
and if\/ $\Vert\tau^N-{\tau'}^N\Vert_1 := \sum_{n=1}^N |\tau(n)-\tau'(n)| \le \varepsilon$, then
$$\left|\Val(r,c,N,\tau^N) - \Val(r,c,N,{\tau'}^N)\right| \le 2\varepsilon.$$
\end{lemma}
\begin{proof}
Suppose that $d((r,c),(r',c')) \le \delta$ and that the starting position is $(r',c')$. We say that the robber \emph{mimics} the game strategy for the initial position $(r,c)$ if he plays so that he is always within distance $\delta$ from the position he would have in the game when starting with $(r,c)$. The first move is now obvious: the robber would move to a point $x$ at distance at most $\tau(1)$ from $r$. Since $d(r',r)\le\delta$, he can move to a point $r''$ that is within distance $\delta$ from $x$. Now the cops move to their new positions $c_i''$. Since $c_i'$ was within distance $\delta$ from $c_i$, there is a point $y_i$ at distance at most $\delta$ from $c_i''$ and at distance at most $\tau(1)$ from $c_i$. The robber considers the position $(x,y_1,\dots,y_k)$ as the imaginary position in this step and uses the strategy for the game $\Gamma(r,c,N,\tau^N)$ as if he were at this position in the current step. This shows that the distance between the imaginary positions $x$ and $y_i$ is all the time at least $\Val(r,c,N,\tau^N)$, and thus
$$d(r'',c'') \ge \Val(r,c,N,\tau^N) - 2\delta.$$
This implies that
$\Val(r',c',N,\tau^N) \ge \Val(r,c,N,\tau^N) - 2\delta$. To obtain the inequality in the other way, just switch the roles of $(r,c)$ and $(r',c')$.
Similarly we prove continuity with respect to $\tau$. Here we assume that $\sum_{n=1}^N |\tau(n)-\tau'(n)| \le \varepsilon$. In the game $\Gamma(r,c,N,{\tau'}^N)$, either player can mimic his strategy for $\Gamma(r,c,N,\tau^N)$, remaining at all times at most $\varepsilon$ away from the imaginary positions in the gameplay of $\Gamma(r,c,N,\tau^N)$. This implies the stated inequality.
\end{proof}
Having defined the value of a finite-step game by (\ref{eq:def_value_finite_game}) (or, equivalently, by (\ref{eq:def_value_finite_game_intermediate})), let us fix $T>0$ and consider all finite-time games $\Gamma(r,c,T)$ with duration $T$. Then we define
\begin{equation}\label{eq:game_value_fixed_T}
\Val(r,c,T) = \sup_{\substack{N,\tau\\T_N(\tau) = T}} \Val(r,c,N,\tau^N)
\end{equation}
and for the standard game $\Gamma(r,c)$, we define:
\begin{equation}\label{eq:game_value_T_grows}
\Val(r,c) = \inf_{T>0} \Val(r,c,T).
\end{equation}
Let us observe that by Lemma \ref{lem:value decreases with more steps}, $\Val(r,c,T)$ is non-increasing with $T$, thus the infimum in (\ref{eq:game_value_T_grows}) can also be replaced by a limit when $T\to \infty$.
Instead of (\ref{eq:game_value_T_grows}), we could as well have used (\ref{eq:Value(r,c,tau) definition}) since we have
$$
\Val(r,c) = \sup_\tau \Val(r,c,\tau).
$$
\section{Choice of agility functions}
It is helpful to know that the ``approximately-best'' agility (which is initially chosen by the robber) may be assumed to be decreasing. This is an easy consequence of the fact that subdividing a step of duration $t$ into two or more steps of the same total duration does not decrease the value of the game. Formally this is settled by the following result.
Let $\tau$ be an agility, let $0\le\alpha\le1$ and $i\ge 1$. Let $\sigma_i^\alpha \tau$ be the agility defined by the rule:
\begin{equation}
\sigma_i^\alpha \tau(j) =
\left\{
\begin{array}{ll}
\tau(j), & \hbox{if $1\le j<i$;} \\
\alpha\tau(i), & \hbox{if $j=i$;} \\
(1-\alpha)\tau(i), & \hbox{if $j=i+1$;} \\
\tau(j-1), & \hbox{if $j\ge i+2$.}
\end{array}
\right.\label{eq:subdivide agility}
\end{equation}
We say that the agility $\sigma_i^\alpha\tau$ has been obtained from $\tau$ by an \emph{elementary subdivision} of the $i$th step. Agility $\tau'$ is a \emph{subdivision} of $\tau$ if it can be obtained from $\tau$ by a series of elementary subdivisions. Here we allow infinitely many elementary subdivisions, but it is requested that each step of $\tau$ is subdivided into finitely many substeps in order to obtain $\tau'$. We write $\tau' \preceq \tau$ if $\tau'$ is a subdivision of $\tau$.
For further reference we state the following easy observation, whose proof is left to the reader.
\begin{lemma}\label{lem:common subdivision}
Any two agility functions have a common subdivision.
\end{lemma}
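The construction is the obvious one: merge the sequences of cumulative step times of the two agility functions and take successive differences; since every agility has divergent partial sums, each original step is cut into finitely many substeps. For finite prefixes of equal total duration this reads, in Python:
\begin{verbatim}
# Common subdivision of two finite agility prefixes of equal duration.
from itertools import accumulate

def common_subdivision(tau1, tau2):
    cuts = sorted(set(accumulate(tau1)) | set(accumulate(tau2)))
    return [b - a for a, b in zip([0.0] + cuts[:-1], cuts)]

print(common_subdivision([3.0, 1.0], [2.0, 2.0]))  # [2.0, 1.0, 1.0]
\end{verbatim}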
The following lemma shows that using $\tau'$ instead of $\tau$ works in favour of the robber. As a consequence, we may assume that the robber always chooses a decreasing agility.
\begin{lemma}\label{lem:subdivide tau}
Suppose that $\tau'$ is a subdivision of an agility $\tau$. Then
$$\Val(r,c,\tau) \le \Val(r,c,\tau').$$
\end{lemma}
\begin{proof}
It suffices to prove the result for finite-step games, and for each such game $\Gamma(r,c,N,\tau^N)$, it suffices to prove that an elementary subdivision does not increase the value of the game, i.e.
$$\Val(r,c,N,\tau^N) \le \Val(r,c,N+1,\sigma_i^\alpha\tau^{N+1}).$$
To see this, consider steps $i$ and $i+1$ of the game with the subdivided agility. The robber can just follow the optimal strategy of the $i$th step from the original game $\Gamma(r,c,N,\tau^N)$ and move to the desired position in two steps. It is easy to see that this cannot give him a smaller value of the game.
\end{proof}
\begin{lemma}\label{lem:approx_RC}
For every initial position $(r,c)$ and every $\varepsilon>0$, there is an integer $N$ and an agility $\tau^N$ such that:
$$
\Val(r,c)-\varepsilon < \Val(r,c,N,\tau^N) < \Val(r,c)+\varepsilon.
$$
\end{lemma}
\begin{proof}
Take $T$ large enough so that $\Val(r,c,T) \le \Val(r,c)+\varepsilon$ and then choose $N,\tau^N$ so that $T_N(\tau)=T$ and
$\Val(r,c,N,\tau^N) \ge \Val(r,c,T)-\varepsilon$. This implies that $\Val(r,c,N,\tau^N)\ge \Val(r,c,T)-\varepsilon \ge \Val(r,c)-\varepsilon$ and $\Val(r,c,N,\tau^N)\le \Val(r,c,T) \le \Val(r,c)+\varepsilon$.
\end{proof}
\begin{corollary}\label{cor:approx_allRC}
For every $\varepsilon>0$ and $T>0$, there exists a finite game with $N$ steps and finite agility $\tau^N$ with $T_N(\tau)\ge T$ such that for every initial position $(r,c)$, we have
$$
\Val(r,c)-\varepsilon < \Val(r,c,N,\tau^N) < \Val(r,c)+\varepsilon.
$$
\end{corollary}
\begin{proof}
The goal is to show that there are $N$ and $\tau^N$ that approximate the game for any initial position. We will say that $N,\tau^N$ give \emph{$\varepsilon$-approximation} for $(r,c)$ if the inequalities of the corollary hold for the initial position $(r,c)$. Since $X$ and hence also $X^{k+1}$ is compact, there is a finite set $Y$ of initial positions such that any other initial position is within distance $\varepsilon/4$ from one of the positions in $Y$. For each $(r,c)\in Y$, there are $N$ and $\tau^N$ that give $\varepsilon/2$-approximation. By Lemma \ref{lem:common subdivision} there is a common subdivision of all such agility functions $\tau^N$. Since by subdividing the agility, the value only increases (Lemma \ref{lem:subdivide tau}), we may assume that the same pair $N,\tau^N$ $\varepsilon/2$-approximates every initial position $(r,c)\in Y$. Since any other position is at distance at most $\varepsilon/4$ from $Y$, Lemma \ref{lem:value_is_continuous} implies that the finite game with $N,\tau^N$ $\varepsilon$-approximates every initial position.
\end{proof}
Corollary \ref{cor:approx_allRC} combined with Lemma \ref{lem:value_is_continuous} implies that the value $\Val(r,c)$ of any Cops and Robber game is a continuous function, depending on the initial position. This in particular shows that the set of cop-winning initial positions is closed.
The use of agility functions is the main technical reason that makes the game ``non-compact''. Usually, there will be no optimal agility since the set of all agility functions is not closed. However, we can make some general observations that will enable us to use certain assumptions about ``near-optimal'' agility.
We say that $\tau$ is \emph{decreasing} if $\tau(n+1)<\tau(n)$ for every $n\ge1$. It is easy to see by induction that for each agility $\tau\in\Sigma^0$ there is a decreasing agility $\tau'\in\Sigma^0$ that is a subdivision of $\tau$. Since using $\tau'$ instead of $\tau$ works in favour of the robber, we may assume that the robber always chooses a decreasing agility. The following statement enables us to restrict our attention to decreasing agility functions.
\begin{lemma}\label{lem:subdivide strategies}
Suppose that $\tau\in\Sigma^0$ is an agility function and that $\tau'$ is a subdivision of $\tau$ that is decreasing.
(a) If $s: X^{k+1} \to X$ is a strategy of the robber with agility function $\tau$,
then there is a strategy $s': X^{k+1} \to X$ for agility $\tau'$ such that
\begin{equation}\label{eq:subdividing agility better for R}
\inf_{q'} v_{\tau'}(s',q') \ge \inf_{q} v_{\tau}(s,q),
\end{equation}
where $q$ and $q'$ in both infima run over all strategies of the cops with agility $\tau$ and $\tau'$, respectively.
(b) For any position $(r,c)$, $\Val(r,c,\tau') \ge \Val(r,c,\tau)$.
\end{lemma}
\begin{proof}
(a) For the subdivided $n$th step of $\tau$, the agility $\tau'$ provides a finite number of steps whose total duration is equal to $\tau(n)$. If the strategy $s$ tells the robber to move from position $r$ to $r'$, the robber can do the same with subdivided steps, and his strategy $s'$ will move from $r$ through all substeps to $r'$. Formally, there could have been a problem with this if the robber had to make the same decision at some later substep. But since $\tau'$ is decreasing, the strategy for the same position will have smaller values in the agility function, so any later substeps can use different strategy values.
To prove (\ref{eq:subdividing agility better for R}), we just argue that if the robber, when using strategy $s$, is able to keep distance $v$ from the cops for any cops' strategy $q$, then he is also able to keep the same distance by using his strategy $s'$. Suppose not. Then the cops have a strategy $q'$ achieving a better outcome. They can mimic that strategy under $\tau$. If they gain a better distance in a substep, they can keep that distance by following the robber.
(b) This is an immediate corollary of the definition (\ref{eq:Value(r,c,tau) definition}) of $\Val(r,c,\tau)$ and of part (a) of this lemma.
\end{proof}
\section{Volatile games}
Suppose that we have nonnegative real numbers $(\varepsilon_n)_{n\ge0}$ and that two players play the game of cops and robber in $X$. Suppose that after completing each step $n$ of the game ($n\ge0$), an \emph{adversary} changes the positions of the robber and of the cops by moving them to points that are at most $\varepsilon_n$ away from their current position. Then we say that the players play a \emph{volatile game} with \emph{perturbation $(\varepsilon_n)_{n\ge0}$}.
Lemma \ref{lem:value_is_continuous} can be interpreted as making a small change in the position before making step 1 (which is ``after making step 0''), thus it corresponds to a volatile game where $\varepsilon_0=\varepsilon$ and $\varepsilon_n=0$ for $n\ge 1$. For a general finite-step volatile game $\Gamma(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0})$ we define the guaranteed value for the robber as the minimum distance from the cops over all possible strategies, when the adversary ``helps the cops'' as much as he can. This can be defined recursively in the same way as in (\ref{eq:def_value_finite_game}):
\begin{equation}\label{eq:def_value_volatile_game}
\ValC(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0}) =
\left\{
\begin{array}{ll}
\max\{d(r,c)-2\varepsilon_0,0\}, & \hbox{if $N=0$;} \\[1.5mm]
\min\limits_{r',c'}\,
\max\limits_{r''}\,
\min\limits_{c''}\, \ValC(r'',c'',N-1,\tau_1^{N-1},(\varepsilon'_n)_{n\ge0})), & \hbox{otherwise;}
\end{array}
\right.
\end{equation}
where $\varepsilon'_n = \varepsilon_{n+1}$ for $n\ge0$ and the minima and the maximum are taken over all $r',c',r'',c''$ for which
$d(r,r')\le \varepsilon_0$, $d(c_i,c'_i)\le \varepsilon_0$, $d(r',r'')\le\tau(1)$, and $d(c'_i,c''_i)\le\tau(1)$ ($i\in [k]$).
In the definition (the case $N=0$) we have used the fact that at the last step of the game, the adversary can move the robber and his closest cop towards each other by the distance of the maximum allowed perturbation in this step. For the recursive step we have also used the possibility of the worst perturbation of the positions made by the adversary.
Similarly, we define the guaranteed value for the cops as the minimum distance from the robber over all possible strategies. This can be defined recursively in the same way as in (\ref{eq:def_value_volatile_game}):
\begin{equation}\label{eq:def_value_volatile_game_cops}
\ValR(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0}) =
\left\{
\begin{array}{ll}
d(r,c), & \hbox{if $N=0$;} \\[1.5mm]
\max\limits_{r',c'}\, \max\limits_{r''}\, \min\limits_{c''}\, \ValR(r'',c'',N-1,\tau_1^{N-1},(\varepsilon'_n)_{n\ge0})), & \hbox{otherwise.}
\end{array}
\right.
\end{equation}
The difference here is that we have estimated the worst way for the cops when the adversary will try to ``help the robber''.
Clearly, $\ValC$ and $\ValR$ for volatile games need not be the same, and it may also happen that $\ValC>\ValR$. However, the two values cannot be too far from each other as we prove next.
\begin{lemma}\label{lem:value volatile game}
Let $\Gamma(r,c,N,\tau^N)$ be a finite-step game and let $\Gamma(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0})$ be a volatile version. Set $\delta_n = \sum_{i=0}^n \varepsilon_i$. Then
$$
\ValC(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0}) \ge \Val(r,c,N,\tau^N) - 2\delta_N
$$
and
$$
\ValR\left(r,c,N,\tau^N,(\varepsilon_n)_{n\ge0}\right) \le \Val(r,c,N,\tau^N) + 2\delta_{N-1}.
$$
\end{lemma}
\begin{proof}
The proof is not complicated, but requires thoughtful description. We will only show the first inequality. Here it suffices to describe a strategy of the robber such that no matter how the cops play, the final result will in the worst case be close to the value $\Val(r,c,N,\tau^N)$ of the finite game.
Let $s_0$ be the optimal strategy for the game $\Gamma(r,c,N,\tau^N)$. It suffices to prove by induction on $N$ that the robber can stay ``close'' to the gameplay in the unperturbed game in which he uses strategy $s_0$ and the cops use some strategy $q$ since in that gameplay the distance between the cops and the robber will always be at least $\Val(r,c,N,\tau^N)$. By ``close'' we mean that his position and the position of each cop after $n$ steps is at most $\delta_n$ away from a position in the usual gameplay with the robber using strategy $s_0$.
When $N=0$, this is clear. If $N>0$, use the induction hypothesis to conclude that after $N-1$ steps, we are $\delta_{N-1}$-close to a position $(r^{N-1},c^{N-1})$ of the unperturbed gameplay. By mimicking the last step of the game, the players can come to within distance $\delta_{N-1}$ from the position $(r^N,c^N)$ in the optimal gameplay. Finally, the adversary can move each player up to distance $\varepsilon_N$ further away from that position. Since $\delta_{N-1}+\varepsilon_N=\delta_N$, the players could be at most $2\delta_N$ closer than in the optimal gameplay performed by the robber.
This completes the proof.
\end{proof}
\section{Limits of strategies}
The somewhat technical definition of our Cops and Robber games using agility functions enables us to prove that each game $\Gamma(r,c)$ or $\Gamma(r,c,\tau)$ can be approximated by finite discrete games of the form $\Gamma(r,c,T_i,\tau)$, where $T_i$ is an increasing sequence of positive real numbers tending to infinity. When $\tau$ is fixed, we can equivalently approximate $\Gamma(r,c,\tau)$ with finite-step games $\Gamma(r,c,N_i,\tau^{N_i})$, where $0<N_1<N_2<N_3<\cdots$ is a sequence of positive integers tending to infinity.
However, strategies are not continuous functions, so there is an obvious question whether one can define a limiting strategy, knowing optimal strategies for finite games that are $\varepsilon$-approximations with $\varepsilon\to 0$. This question is our goal in this section. Answering what the limits of strategies are will also yield our final min-max theorem.
Suppose that $\tau$ is an agility that is decreasing. Let $0<T_1<T_2<\cdots$ be an increasing sequence tending to infinity. For each $i$, let $N_i$ be the smallest integer such that $T(N_i) := \sum_{n=1}^{N_i} \tau(n) \ge T_i$.
If we start instead with an increasing sequence of positive integers tending to infinity, $N_1<N_2<N_3<\cdots$, we can define $T_i=T(N_i)$ and then proceed as we would with the sequence $(T_i)_{i\ge1}$.
We will also use, with a fixed value of $\varepsilon$, the following decreasing sequence: $\varepsilon_i := 2^{-i-1}\varepsilon$ ($i\ge0$). Note that $\sum_{i\ge0} \varepsilon_i = \varepsilon$.
Let us recall that by Corollary \ref{cor:approx_allRC}, for every $\varepsilon>0$ and every agility $\tau$, there is $T=T(\varepsilon)$ such that $\Gamma(r,c,\tau)$ is $\varepsilon$-approximated with the finite game $\Gamma(r,c,T,\tau)$ for every initial position $(r,c)$. Let us now choose $T_i$ ($i\ge 1$) so that $\Gamma(r,c,\tau)$ is $\varepsilon_i$-approximated with the finite game $\Gamma(r,c,T_i,\tau) = \Gamma(r,c,N_i,\tau^{N_i})$. For each $i\ge1$, let $s_i$ and $q_i$ be optimal strategies of the robber and the cops (respectively) for the finite game $\Gamma(r,c,N_i,\tau^{N_i})$.
We now define the notion of an $\varepsilon$-approximate limit of strategies $(s_i)_{i\ge1}$ and $(q_i)_{i\ge1}$ for the game $\Gamma(r,c,\tau)$. Since $\tau$ is decreasing, we can define the limiting strategies depending only on the current step length $\tau(n)$, and this is defined first for $n=1$, then for $n=2$, etc. Suppose that strategies have been defined for steps $1,2,\dots,n-1$. Consider the next step, whose length is $t=\tau(n)$. Consider a finite point-set $Y^n\subset X^{k+1}$ such that every point in $X^{k+1}$ is at distance less than $\varepsilon_n$ from $Y^n$. For every position $(r^n,c^n)\in Y^n$, each strategy $s_i$ ($i\ge1$) gives the new position $r_i^n$ of the robber that is at most $t$ away from $r^n$. Since $X$ is compact, the sequence $(r_i^n)_{i\ge1}$ contains a convergent subsequence with limit $r'$, and $r'$ is still at most $t$ away from $r^n$. We say that this subsequence is \emph{appropriate} for $(r^n,c^n)$ if, moreover, $d(r_i^n,r') < \varepsilon_n$ for each $i\ge1$. Since $Y^n$ is finite, there is a subsequence $j_1^n < j_2^n < j_3^n < \dots$ of indices that is appropriate for all points in $Y^n$. We will from now on, in all succeeding steps, only consider this subsequence and its subsequences; so in particular, if $n>1$, then $(j_i^n)_{i\ge1}$ is a subsequence of $(j_i^{n-1})_{i\ge1}$. The corresponding lengths of the finite games will also be denoted using superscripts $n$, $N_i^n := N_{j_i^n}$ ($i\ge1$). The assignment $s_0: (r^n,c^n,t)\mapsto r'$ then defines the strategy of the robber in step $n$ restricted to the positions in $Y^n$.
If a position $(r^n,c^n)$ is not in $Y^n$, then we find the closest position, say $(a,b)\in Y^n$, and consider its strategy limit point $r'=s_0(a,b,t)$. Now, we first move the robber from $r^n$ to $a$ and then proceed towards $r'$ using a geodesic from $a$ to $r'$. Of course, the total length we can make is $t$, and we may need to stop before reaching $r'$. Let $r''$ be the point reached in this way, and we set $s_0: (r^n,c^n,t)\mapsto r''$.
Observe that we also have $d(r'',r')\le \varepsilon_n$. This defines the strategy of the robber for the $n$th step.
Similarly we define a strategy $q_0:(r,c,t)\mapsto c'$ for the cops in the $n$th step of length $t$, again passing to a subsequence $(N_i^n)$ of $(N_i^{n-1})$. We may assume that the same subsequence that was appropriate for all $(r^n,c^n)\in Y^n$ for the robber is also appropriate with respect to the cops' positions, so we do not need to introduce new notation.
By repeating the above process for each $n\ge 1$, we end up with strategies $s_0$ and $q_0$ for the game $\Gamma(\tau)$, where $\tau$ is a fixed decreasing agility. Note that there is some freedom in the process by taking different converging subsequences, so these need not be unique. Any strategies $s_0$ and $q_0$ obtained in this way are said to be \emph{$\varepsilon$-approximate limits of strategies} $(s_i)_{i\ge1}$ and $(q_i)_{i\ge1}$ for the finite games $(\Gamma(r,c,N_i,\tau^{N_i}))_{i\ge1}$.
\begin{lemma}\label{lem:approximate limits approximate the value}
Let $\tau$ be a decreasing agility function and $(r^0,c^0)$ be a fixed initial position for the game of cops and robber on $X$. Suppose that $s_0$ and $q_0$ are $\varepsilon$-approximate limits of optimal strategies for finite games $\Gamma(r^0,c^0,N_i,\tau^{N_i})$ with $N_i\to \infty$ as $i\to\infty$. Then for arbitrary strategies, $s$ of the robber and $q$ of the cops, for the game $\Gamma(r^0,c^0,\tau)$, we have
$$
v_\tau(s,q_0) - 4\varepsilon < v_\tau(s_0,q_0) < v_\tau(s_0,q) + 4\varepsilon.
$$
\end{lemma}
\begin{proof}
We will give details only for the second inequality. The first one can be proved in exactly the same way, using estimates on the positions of each of the cops.
Suppose that the players start with position $(r^0,c^0)$ and then play the game using agility $\tau$, where the robber uses the limit strategy $s_0$ and the cops use any strategy $q$. When defining $s_0$ as the $\varepsilon$-approximate limit strategy in the $n$th step, we have considered a finite point set $Y^n$ such that each point in $X^{k+1}$ was at distance less than $\varepsilon_n$ from $Y^n$, and we considered a subsequence $(N_i^n)_{i\ge1}$ of the (sub)sequence $(N_i^{n-1})_{i\ge1}$ from the previous step. For $N_i^n$, the optimal strategies of the finite games $\Gamma(r^0,c^0,N_i^n,\tau^{N_i^n})$ converge for each $(r,c,t)$ with $(r,c)\in Y^n$ and $t=\tau(n)$ to the point $r'$ which we have used when defining the strategy $s_0$.
We can interpret the gameplay as a volatile game, where the adversary first moves the robber from where he would have ended using his optimal strategy to the limiting value $r'$. By our choice of subsequences, this moves the robber at most $\varepsilon_n$ away in the $n$th step. Next, the adversary moves both the robber and the cops, each at most $\varepsilon_n$ away, by placing them at the nearest position $(a,b)\in Y^n$. Thus, the first $n$ steps of the gameplay of \emph{each} finite game $\Gamma(r^0,c^0,N_i^n,\tau^{N_i^n})$ ($i\ge1$) correspond to a volatile game with perturbations $(2\varepsilon_n)_{n\ge1}$. By Lemma \ref{lem:value volatile game}, the value $v_\tau(s_0,q)$ of this gameplay is within $2\delta_{N_i^n}$ of $\Val(r^0,c^0,N_i^n,\tau^{N_i^n})$, where $\delta_n = \sum_{j=1}^{n} 2\varepsilon_j < 2\varepsilon$. This completes the proof.
\end{proof}
Our final results are now immediate consequences of Lemma \ref{lem:approximate limits approximate the value} when combined with the definitions of $\ValC(\tau)$ and $\ValR(\tau)$.
The lemma gives the min-max theorem for $\Gamma(r,c,\tau)$, where $(r,c)$ is any initial position. If the approximate strategies for different initial positions in the same step $n$ were different, we can unify them so that they are the same. Having done this, we may assume that we have $\varepsilon$-approximate strategies for all initial positions, forming the $\varepsilon$-approximate strategy for the game $\Gamma(\tau)$. This gives the following result.
\begin{corollary}
Suppose that $\tau$ is a decreasing agility function and that $s_0$ and $q_0$ are $\varepsilon$-approximate limits of optimal strategies for finite games $\Gamma(N_i,\tau)$ with $N_i\to \infty$ as $i\to\infty$. Then
$$\ValR(\tau)-\varepsilon \le v_\tau(s_0,q_0)\le \ValC(\tau) + \varepsilon.$$
\end{corollary}
Finally, taking $\varepsilon \to 0$, we get a min-max result.
\begin{theorem}\label{thm:ValC=ValR}
For every decreasing agility $\tau$ we have $\ValC(\tau)=\ValR(\tau)$. Consequently, $\ValC=\ValR$.
\end{theorem}
The above ``min-max theorem'' implies that for every $\varepsilon>0$ there are $\varepsilon$-approximating strategies for both players, and if either one of them uses his strategy, the other player cannot do more than $\varepsilon$ better than just using his own $\varepsilon$-approximating strategy.
\bibliographystyle{plain}
\section{Introduction}
Mathematical models based on ideas from physics can improve our
understanding of the characteristics of crowds and give useful
insights into their dynamics. From a more practical point of view
such models have applications e.g.\ in safety analysis of large
public events where they may help predicting critical situations,
allowing preventive measures.
A popular class of models is of microscopic nature, describing
the dynamics of crowds by specifying properties of individuals and
defining their interactions. The most elaborated models belong either to the
subclass of rule-based models that are discrete in space (i.e. cellular
automata), or to
force-based models continuous in space, which are described by a
system of second order ordinary differential equations~\cite{Helbing2001,Schadschneider2009a,Schadschneider2010b,Ali2013}.
Especially for applications in safety analysis,
models that are validated qualitatively and quantitatively are
required. Quantitative validation of pedestrian dynamics consists of
measuring density, velocity and flow in simulations and comparing them
with empirical data. The relation between these quantities, also
called the fundamental diagram, is widely considered as the most
important criterion to validate simulation results
\cite{Seyfried2008,Schadschneider2009c}. Besides this quantitative
validation often the focus is more on the reproduction of qualitative
properties, especially collective effects. Most of the force-based
models are in fact able to describe fairly well some of those phenomena,
e.g.\ lane formation~\cite{ZhangQ2011,Yu2005}, oscillations at
bottlenecks~\cite{ZhangQ2011,Helbing2004}, the ``faster-is-slower''
effect~\cite{Lakoba2005,Parisi2007} and clogging at bottlenecks
\cite{Helbing2004,Yu2005}, that sometimes are difficult to verify
empirically~\cite{Garcimartin2014,Parisi2015}.
An often observed collective phenomenon
that emerges in crowds, especially when the density exceeds a critical
value, is stop-and-go waves~\cite{Schadschneider2009a}. Although some
space-continuous models
\cite{Portz2010,Seyfried2010b,Lemercier2012,Eilhardt2014} partly reproduce
this phenomenon, force-based models generally fail to describe
pedestrian dynamics in jam situations correctly. Instead in some situations
quite often unrealistic behavior like backward motion or overtaking
(``tunneling'') is observed, especially in one-dimensional single-file
scenarios. Recently it has been shown~\cite{Chraibi2014a} that this is
not a consequence of numerical problems in the treatment of the
differential equations, but an indication of inherent problems of
force-based models, at least for certain classes of forces.
In vehicular traffic, the formation of jams and the dynamics of
traffic waves have been studied intensively~\cite{Chowdhury2000,Gazis2002,Orosz2010,Nagatani2002a}.
Traffic jams in simulations occur as a result of phase transitions from a
stable homogeneous configuration to an unstable configuration.
That means it should be possible to calibrate model parameters such that
systems in unstable regimes can be simulated.
Otherwise, a reproduction of jams is impossible and the model can be
qualified as unrealistic. For each parameter set that leads to an
unstable homogeneous state it has to be verified by simulations
whether this instability corresponds to realistic behavior (i.e.~the
occurrence of jams) or unrealistic behavior (e.g. overlapping of
particles). A certain amount of overlapping might be acceptable as it
could be interpreted as ``elasticity'' of the particles. Generically,
however, the amount of overlapping is not limited in these models and
even ``tunneling'' of particles is observed.
In pedestrian dynamics, numerous force-based models have been
developed based on physical analogies, i.e. Newtonian dynamics.
Pedestrian dynamics is described as a deviation from a predefined
desired direction resulting from forces acting on each pedestrian.
These forces are not fundamental physical forces, but effective forces
that give a physical interpretation of the decisions made by
pedestrians. Therefore the forces can not be measured directly but
only via their effects on the motion, i.e.\ the observed
accelerations. This might be one reason why in the literature a
diversity of models has been proposed, e.g.\ based on algebraically
decaying forces, exponential forces etc.
Although the force-based
Ansatz is elegant and to some extent helpful in describing the
dynamics of pedestrians, it has some intrinsic problems that we will
discuss in this paper.
These problems were observed earlier and have led to modifications
of the original models by introducing additional forces, like a physical
force, or even restrictions on the state variables.
K\"oster et al.~\cite{Koester2013} gave a thorough analysis of the
numerical problems that are encountered when simulating pedestrian
dynamics with force-based models. As shown in~\cite{Koester2013,Chraibi2014}
the problem of oscillations in the
trajectories of pedestrians (backward movement) is an
\textit{intrinsic} problem of second order models, not (only)
a numerical one due
to the accuracy of the numerical solver. In~\cite{Chraibi2014a} an analytical investigation of the social
force model in one-dimensional space proved that oscillations can only be avoided by choosing
values in some defined parameter spaces. Unfortunately, these parameter
values are either unrealistic (if they have a physical meaning) or
they lead to a large amount of overlapping (and in extreme cases, e.g.\
high densities, tunneling) of pedestrians. This so-called
overlapping-oscillation duality is discussed in more detail
in~\cite{Chraibi2010a,Chraibi2011}. These problems that often lead
to a ``complexification'' of the original (elegant) Ansatz of
force-based models, may explain the paradigm shift observed lately
with the emergence of new first-order models or so called ``velocity
models''~\cite{Berg2008,Maury2009,Venel2010,Patil2010,Portz2010,Dietrich2014,Dietrich2014a,Eilhardt2014,Kirik2014}.
In this work we introduce a classification of force-based models
according to the form of the repulsive force. The stability
properties of each class can be investigated separately in a
unified way. Analytical criteria that ensure reproduction of
stop-and-go waves in terms of the instability of uniform single-file
motion are derived.
Furthermore, we
analyze the influence of specific parameters of the overall behavior
of the investigated model. A focus is on the analytical forms of the
models, and not on eventual numerical difficulties. Based on
numerical simulations we show that the investigated models behave
unrealistically in unstable regimes, which is manifested in negative
speeds (movement opposite to the desired direction) and
oscillations in the positions of pedestrians (leading to nonphysical
overlapping). After identifying the origin of this unrealistic
behavior we attempt to develop a new model that mitigates these
problems. We observe that this model shows phase separation in its
unstable regime, in agreement with empirical results~\cite{Portz2011}. We
conclude with a discussion of the results and analysis of their
consequences as well as a detailed discussion of the limitations of the
proposed model in particular and of force-based models in general.
\section{Model Definition}
Pedestrian dynamics is generically a two-dimensional problem.
In order to reduce the complexity and to capture the essentials of the
jamming dynamics, we focus here on 1D systems.
Furthermore we assume an asymmetric nearest-neighbor interaction where
the motion of a pedestrian is only influenced by the person
immediately in front. $N$ pedestrians are initially distributed
uniformly in a one-dimensional space with periodic boundary
conditions. Important information can then be derived from
the reaction of the system in the uniform steady state to small perturbations.
For the state variables position $x_n$ and velocity $\dot
x_n=\frac{dx_n}{dt}$ of pedestrian $n$ we define the distance of the
centers $\Delta x_n$ and the relative velocity $\Delta\dot x_n$ of two
successive pedestrians, respectively, as (see Fig.~\ref{fig1})
\begin{equation}
\Delta x_n = x_{n+1} - x_n,\qquad
\Delta\dot x_n = \dot x_{n+1} - \dot x_n\,.
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\columnwidth]{cor.pdf}
\caption{
(Color online) Definition of the quantities characterizing the
single-file motion of pedestrians (represented by rectangles).}
\label{fig1}
\end{figure}
For convenience, we will mainly use dimensionless quantities
in the following. These are defined by the transformation
\begin{equation}
t\rightarrow t'=\frac{t}{\tau}\qquad \text{and}\quad
x_n \rightarrow x_n' = \frac{x_n}{a_0}\,,
\label{eq-rescale}
\end{equation}
with time constant $\tau$ and the length constant $a_0$.
To simplify the notation
we denote the rescaled velocity by $\dot x_n'=\mbox dx_n'/\mbox dt'$.
In general, pedestrians are modeled as simple geometric objects of
constant size, e.g.\ a circle or ellipse. In one-dimensional space
the size of pedestrians is characterized by $a_n$ (Fig.~\ref{fig1}),
i.e.\ their length is $2a_n$. However, it is well-known that the space
requirement of a pedestrian depends on its velocity and is defined in
a general way as a linear function of the velocity~\cite{Weidmann1993}
\begin{equation}
a_n = a_0+a_v \dot x_n\,.
\label{eq:an}
\end{equation}
In the following, the parameter $a_0$, characterizing the space
requirement of a standing person, will be used as length
scale for the dimensionless quantities (\ref{eq-rescale}).
Note that the parameter $a_v\ge0$ has the dimension of time.
The dimensionless spacing $a_n' =a_n/a_0$ is written as
\begin{equation}
a_n' = 1 + \tilde a_v \dot x_n'\,, \qquad \text{with}\quad
\tilde a_v=\frac{a_v}{\tau}\,.
\label{eq:an'}
\end{equation}
The effective distance (distance gap) $d_n$ of two consecutive
pedestrians becomes in dimensionless form
\begin{equation}
d_n^\prime = \frac{d_n}{a_0} = \Delta x_n' - a_n'-a_{n+1}'
=\Delta x_n' - \tilde a_v\left(\dot x_n' +\dot x_{n+1}'\right) - 2.
\label{eq:effDistn}
\end{equation}
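
For concreteness, the effective distance (\ref{eq:effDistn}) is
straightforward to evaluate numerically. The following minimal Python
sketch is our own illustration (function and argument names are chosen
freely and do not stem from any published implementation); it computes
$d_n'$ for all pedestrians on a ring:
\begin{verbatim}
import numpy as np

def effective_distances(x, v, a_v, L):
    # x, v: dimensionless positions and speeds of the N pedestrians,
    # ordered along the ring; a_v: the parameter \tilde a_v;
    # L: dimensionless system length (periodic boundary conditions)
    dx = np.roll(x, -1) - x      # headways Delta x'_n
    dx[-1] += L                  # close the ring for the last pedestrian
    return dx - a_v * (v + np.roll(v, -1)) - 2.0
\end{verbatim}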
The dynamical equation of force-based models is usually defined as the
superposition of a repulsive force $f$ and a driving term
$g$~\cite{Chraibi2012a}. The driving term is of central
importance and the standard form used is
\begin{equation}
g(\dot x_n)=\frac{v_0-\dot x_n}{\tau} \,.
\label{eq:frv}
\end{equation}
Typical values for the parameters are $\tau=0.5$~s for the relaxation
time and $v_0=1.2$~m/s for the desired speed. Note that $\tau$ is the
same time scale used in Eq.~(\ref{eq-rescale}).
This definition gives rise to exponential acceleration to $v_0$
in free-flow movement.
The equation of motion for pedestrian $n$ has the generic form
\begin{equation}
\ddot x_n = f\Big(\dot x_n, \Delta \dot x_n, \Delta
x_n\Big) + g(\dot x_n )\,.
\label{eq1}
\end{equation}
In this work we limit ourselves to models that incorporate
(\ref{eq:frv}) as driving term and investigate the stability of
several force-based models, defined through different
functions $f(\cdot)$ corresponding to repulsive forces that
either decay algebraically or exponentially with distance.
We consider uni-dimensional dynamics and totally asymmetric
interaction with the predecessor and assume that the repulsive
forces are negative.
We determine their instability regions where the
investigated model may be able to reproduce stop-and-go waves.
Technical details of the stability analysis, which is a standard
tool that can lead to cumbersome calculations, are deferred to the
appendix, which provides all relevant results.
\section{Models with algebraically decaying forces}
In this section we consider force-based models with an
algebraically decaying repulsive term, i.e.,
\begin{equation}
f\Big(\dot x_n, \Delta \dot x_n,
\Delta x_n\Big) \propto 1/d_{n}^{\,q}.
\label{eqmain}
\end{equation}
More specifically we consider the following dimensionless
equation of motion:
\begin{equation}
\ddot x_n' = - \frac{\Big(\mu + \delta\cdot r_\varepsilon(\Delta \dot
x_n')\Big)^2}{{d_{n}^\prime}^q} + v_0'-
\dot x_n',
\label{eq:invd}
\end{equation}
with a dimensionless parameter $\mu\ge 0$ to adjust the strength of
the force, the dimensionless desired speed
$v_0'=\frac{v_0\tau}{a_0}>0$ and constants $\delta\ge 0$ and $q>0$.
In two-dimensional space the case $q<1$ corresponds to a long-ranged
repulsive force, whereas the force is short-ranged for $q>1$.
Note that the definition of the model implies that each pedestrian only interacts with its predecessor.
Eq.~(\ref{eq:invd}) can be interpreted as an extension of the generalized centrifugal force
model~\cite{Chraibi2010a} which corresponds to the
special case $\delta=1$. The differentiable function
\begin{equation}
r_\varepsilon(x)=\varepsilon \ln(1+e^{-x/\varepsilon})
\qquad
(0 < \varepsilon \ll 1),
\label{eq:aTheta}
\end{equation}
is an approximation of the non-differentiable ramp function
\begin{equation}
r(x)=
\begin{cases} 0, & x \ge 0, \\ -x, & \text{else}, \end{cases}
\end{equation}
as $\varepsilon\rightarrow0$ (see Fig.~\ref{fig:theta}).
This function suppresses the repulsive effect of a
predecessor moving faster than the follower.
(We will set $\varepsilon=0.1$ in the simulations).
\begin{figure}[H]
\centering
\includegraphics[width=0.35\columnwidth]{log_approximation.pdf}
\vspace{-0.4cm}
\caption{(Color online) Approximations $r_\varepsilon(\cdot)$ of the ramp
function $r(\cdot)$ (thick line). In the simulations we use $\varepsilon=0.1$.}
\label{fig:theta}
\end{figure}
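
For later reference, the repulsive term of the model class
(\ref{eq:invd}) together with the smooth ramp (\ref{eq:aTheta}) can be
implemented in a few lines. The following sketch is our own illustration
(names chosen freely); it writes $r_\varepsilon$ via \texttt{logaddexp}
to avoid overflow for large negative arguments:
\begin{verbatim}
import numpy as np

def ramp(x, eps=0.1):
    # smooth ramp r_eps(x) = eps * log(1 + exp(-x/eps))
    return eps * np.logaddexp(0.0, -x / eps)

def accel_algebraic(d, dv, v, mu, delta, q, v0):
    # dimensionless acceleration of pedestrian n:
    # d: effective distance d'_n, dv: relative speed, v: own speed;
    # mu, delta, q, v0: model parameters
    return -(mu + delta * ramp(dv))**2 / d**q + (v0 - v)
\end{verbatim}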
\subsection{Model Classification}
The model class defined by Eq.~(\ref{eq:invd}) depends on
four (dimensionless) parameters $\mu$, $\delta$, $q$, and $\tilde a_v$
[which enters via (\ref{eq:effDistn})]
and includes several models studied previously.
In the following each model will by specified by
the quadruple
$\mathcal{Q}=\langle\mu, \delta, q, \tilde a_v\rangle$. As we will
see later the parameters $\delta$ and $\tilde a_v$ are most critical
for the dynamics described by Eq.~(\ref{eq:invd}). The parameter
$\delta$ controls the influence of the relative velocity, whereas
$\tilde a_v$ determines the velocity-dependence of the effective size
of the pedestrians. Although in principle $\delta$ can be any real
number, in most known models it takes only discrete values in $\{0,
1\}$.
In the Centrifugal Force Model (CFM)~\cite{Yu2005} the size of
the pedestrians is independent of their speed. In addition, the CFM
considers the effects of the relative velocity $\Delta\dot x_n$,
such that slow pedestrians are not affected by faster ones. Hence, we
can define the CFM as $\mathcal{Q}=\langle 0,1,1,0\rangle$.
In contrast to the CFM, the Generalized Centrifugal Force Model (GCFM)
\cite{Chraibi2010a} includes both components - the relative velocity
and the velocity-dependence of the volume exclusion~\footnote{In GCFM
pedestrians are modeled by ellipses with two velocity-dependent
semi-axes.}. Additionally, to avoid the overlapping of pedestrians that
results when the repulsive forces between pedestrians moving
\textit{nearly} in lockstep become too small, a non-negative constant
$\mu$ is added to the relative velocity. Thus, the GCFM
corresponds to the case $\mathcal{Q}=\langle
\mu, 1, 1,\tilde a_v\rangle$.
Another model that represents pedestrians with constant circles and
thus has $\tilde a_v=0$ was introduced in Ref.~\cite{Helbing2000a}, to
which we will refer as HFV (Helbing, Farkas, Vicsek). In contrast
to the CFM and GCFM, in the HFV the effects of the relative velocity are
ignored so that the HFV can be characterized by $\mathcal{Q}=\langle
\mu , 0, 2,0\rangle$. In Ref.~\cite{Seyfried2006} an enhancement of
the HFV was introduced by Seyfried et al.\ (SEY), consisting of a
velocity-dependent space requirement, i.e. $\mathcal{Q}=\langle \mu
\ne 0, 0, 2, \tilde a_v\ne 0\rangle$. Furthermore, in
Refs.~\cite{Guo2010,Guo2012} Guo et al.\ investigated a slightly
different model (GUO) with the focus on navigation in two-dimensional
space. The GUO model can be classified as $\mathcal{Q}=\langle \mu ,
0, 1, 0\rangle$. Similar models introducing new features have been
proposed in Refs. \cite{Lohner2010} and~\cite{Shiwakoti2011} with a
constant added to the denominator of $f(\cdot)$. They correspond to
the case $\mathcal{Q}=\langle \mu , 0, 2, 0\rangle$.
In Tab.~\ref{tab:model} a brief summary of the aforementioned models is given.
\begin{table}[H]
\begin{center}
\begin{tabular}{ | c || c | }
\hline
Model & $\mathcal{Q}=\langle \mu, \delta, q,\tilde a_v\rangle$\\ \hline
\hline
CFM & $\langle
0,1,1,0\rangle$\\ \hline
GCFM & $\langle \mu , 1, 1,\tilde a_v\rangle$\\ \hline
HFV & $\langle \mu, 0, 2, 0\rangle$\\ \hline
SEY & $\langle \mu, 0, 2, \tilde a_v\rangle$\\ \hline
GUO & $\langle \mu, 0, 1, 0\rangle$ \\ \hline
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{$\mathcal{Q}$-values of the investigated
models with algebraically decaying forces.}
\label{tab:model}
\end{table}
Some force-based models rely on additional algorithmic solutions like
collision detection techniques~\cite{Yu2005} or a time-to-collision
constant~\cite{Karamouzas2014} that allows one to manage collisions
in simulations. Other models rely on optimization algorithms to
define the desired direction of pedestrians~\cite{Moussaid2011}
depending on the situation of every pedestrian in the simulation.
While these additional components may prove to be useful for numerical
simulations, they have the downside of adding more complexity to the
model while stretching the concept of force-based modeling beyond
the original idea. In some models, e.g.~\cite{Karamouzas2014},
these components are strongly correlated with the forces, which
complicates the analytical investigation of the ``pure'' force model.
Therefore, in this paper the analytical investigation is limited
solely to the force-based models that can be formulated without any
additional algorithmic components.
\subsection{Linear Stability}
\label{sub-stab1}
We study the linear stability of the system (\ref{eq:invd}) for a
given set $\mathcal{Q}$ of parameters. The positions of the
pedestrians in the homogeneous steady state are given by
\begin{equation}
y_n=\frac1{a_0}\left(\frac{n}{\rho} + vt\right)\,,
\end{equation}
so that
$y_{n+1}-y_n=\frac1{a_0\rho}=\Delta y$, $\dot y_n=v\tau/a_0=v'$
and $\ddot y_n=0$
for all $n$, where derivatives are taken with respect to $t'$.
Now we consider small (dimensionless) perturbations $\epsilon_n$
of the steady state positions,
\begin{equation}
x_n' = y_n + \epsilon_n\,.
\label{eq-pertub}
\end{equation}
For perturbations of the form
\begin{equation}
\epsilon_n(t)=\alpha_ne^{z t},
\end{equation}
with $\alpha_n,z\in\mathbb{C}$ we then find (expanding to first order)
\begin{equation}
z^2 = \delta\gamma\frac{e^{\boldsymbol{i}k}-1}{{d^\prime}^q}z-
\phi \tilde a_v(e^{\boldsymbol{i}k}+1)z
+\phi(e^{\boldsymbol{i}k}-1)-z ,
\label{eq:stabN}
\end{equation}
with $\gamma=\mu + \delta\varepsilon\ln(2)$, $\phi=
\frac{q\gamma^2}{{d^\prime}^{q+1}}$ and $k=2\pi l/N$ with
$l=0,\ldots,N-1$. Details of the derivation can be found in
the appendix, Sec.~\ref{appA}.
For $k\approx 0 $ we can expand $z$ as a polynomial in $k$:
\begin{equation}
z=z^{(0)}k + z^{(1)}k^2 + \cdots
\end{equation}
Up to second order we then find the stability condition
(see appendix Sec.~\ref{appA2})
\begin{equation}
\gamma>0,\qquad \Phi \coloneqq \phi\omega
- \frac{\delta\gamma}{{d^\prime}^q} - \frac{1}{2}<0.
\label{condition1}
\end{equation}
Here
$d'=\Delta y-2\tilde a_v v'-2$,
with $\tilde a_v=a_v/\tau$ and $\omega=1/(2\tilde a_v\phi+1)$.
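
The condition (\ref{condition1}) is easily scanned over parameter space;
stability maps like those in Fig.~\ref{fig:rec-models} follow from
evaluating the indicator $\Phi$ on a grid. A minimal sketch of our own
(function and argument names chosen freely):
\begin{verbatim}
import math

def Phi(mu, delta, q, a_v, d, eps=0.1):
    # long-wavelength stability indicator: the homogeneous state
    # is linearly stable if gamma > 0 and the returned Phi < 0
    gamma = mu + delta * eps * math.log(2.0)
    phi = q * gamma**2 / d**(q + 1)
    omega = 1.0 / (2.0 * a_v * phi + 1.0)
    return phi * omega - delta * gamma / d**q - 0.5
\end{verbatim}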
The stability condition (\ref{condition1}) suggests that models of
type $\mathcal{Q}=\langle \mu\ne 0, 0, q, 0\rangle$, e.g.\ the HFV and GUO
models, tend to instability with increasing density and increasing
strength of the force ($\mu$), because $\Phi$ simplifies to
$\phi-\frac{1}{2}$.
Adding the influence of the relative speed ($\delta\ne 0$) leads to a
comparable structure (compare Fig.~\ref{fig:rec-models} left and middle).
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{d_mu_m0_q2_av0_0.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{d_mu_m1_q2_av0_0.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{d_mu_m0_q2_av0_2.pdf}
\vspace{-0.4cm}
\caption{(Color online) Stability region of the algebraically decaying models with
respect to $\mu$ and $d'$.
Left: $\mathcal{Q}=\langle \mu , 0, 2, 0\rangle$.
Middle: $\mathcal{Q}=\langle \mu , 1, 2, 0\rangle$. Right:
$\mathcal{Q}=\langle \mu , 0, 2, 0.2\rangle$. The colors are
mapped to the value of $\Phi$ in Eq.~(\ref{condition1}).
Negative values of $\Phi$ indicate stability regions.}
\label{fig:rec-models}
\end{center}
\end{figure}
Modifying these models by introducing a velocity-dependent enlargement
of pedestrians, i.e.\ considering models in the class $\mathcal{Q}=\langle
\mu\ne 0, 0, q, \tilde a_v\ne 0\rangle$, leads to $\Phi = \phi\omega
-\frac{1}{2}$, where $\omega$ decreases with increasing $\tilde a_v$, which
has a stabilizing effect on the system (see Fig.~\ref{fig:rec-models} right).
This means the velocity-dependence in this
kind of model enhances the stability of the system. In comparison,
the impact of the relative velocity on the stability of the system is
less significant.
Inverting the sign of $\delta$ adds a positive term to $\phi\omega^2$
in the expression of $\Phi$, which increases the instability of the system.
Although negative values of $\delta$ give rise to instabilities, they
are physically not relevant, since that would imply that a faster
pedestrian in front has more influence on a slower pedestrian directly
behind.
\subsection{Simulations}
\label{sec:sim}
We solve the system of equations (\ref{eq:invd}) for $N=67$
using Heun's scheme with time step $\Delta t=10^{-5}$~s.
According to \cite{Treiber2015} Heun's scheme seems to be the best
scheme for simulations of pedestrian dynamics for many practical scenarios.
For all simulations performed in this work we use this scheme with an unchanged $\Delta t$.
Pedestrians are uniformly distributed in a one-dimensional system with
periodic boundary conditions and length $L=200$~m. The chosen
values of $N$ and $L$ lead to $d'\approx 1$ (for $\tilde a_v=0$), and we set $v_0'=3$.
The initial velocities are set to zero. The maximum simulation time is
$2000$~s.
Only the initial position of the first pedestrian is slightly perturbed,
i.e. $\epsilon_1=10^{-4}\,$ ($\epsilon_{n\ne 1}=0$).
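
For reference, a single Heun (explicit trapezoidal) step for the
equations of motion can be sketched as follows; this is our own minimal
illustration, not the actual simulation code, and \texttt{rhs} is assumed
to return the array of accelerations computed from the current positions
and speeds:
\begin{verbatim}
def heun_step(x, v, rhs, dt):
    # one explicit trapezoidal (Heun) step for xddot = rhs(x, v)
    a1 = rhs(x, v)
    xp = x + dt * v                   # Euler predictor
    vp = v + dt * a1
    a2 = rhs(xp, vp)
    x_new = x + 0.5 * dt * (v + vp)   # corrector: averaged slopes
    v_new = v + 0.5 * dt * (a1 + a2)
    return x_new, v_new
\end{verbatim}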
With $\Phi=0$ in Eq.~(\ref{condition}) we obtain for $\delta=\tilde
a_v=0$ the critical value for $\mu$ as
$\mu_{\rm{cr}}=\sqrt{\frac{d'^{q+1}}{2q}}$. Therefore, a model of
type $\mathcal{Q}=\langle 0.45, 0, 2, 0\rangle$ is stable since
$\mu=0.45$ is smaller than this critical value
$\mu_{\rm{cr}}=\frac{1}{2}$.
To observe the behavior of the system in the unstable regime we
perform simulations for a parameter set $\mathcal{Q}=\langle 0.55, 0,
2, 0\rangle$ with $\mu > \mu_{\rm cr}$. The simulations show an
oscillatory behavior that leads inevitably to overlapping among
pedestrians. Note that the model is not
defined when the distance $d'$ is zero, see Eq.~(\ref{eq:invd}).
This phenomenon (overlapping) is a stopping criterion for the simulation.
Since all pedestrians start with speed zero, the small perturbation of the
initial position ($\epsilon_1$) also perturbs the speeds at the beginning
of the simulation. Depending on the state of the system,
this initial perturbation decays to zero if the system is stable. Otherwise, it grows
until the simulation is stopped due to overlapping.
Fig.~\ref{fig:stdv_rec} shows a comparison between the time evolution of the speed's standard deviation for both
cases
$\mathcal{Q}=\langle 0.45, 0, 2, 0\rangle$ and $\mathcal{Q}=\langle 0.55, 0, 2, 0\rangle$.
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{std_rec_log_N67_av_0_00_v0_3_00_mu_0_45_q_2.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{std_rec_log_N67_av_0_00_v0_3_00_mu_0_55_q_2.pdf}
\vspace{-0.4cm}
\caption{Standard deviation of the speeds with respect to simulation time. The initial perturbation
in the speed decays to zero when the system is stable (left: $\mu=0.45$), while it
grows when the system is unstable (right: $\mu=0.55$).}\label{fig:stdv_rec}
\end{center}
\end{figure}
We conclude that in the unstable regime the investigated models with
algebraic forces lead to negative velocities (backward movement) and
hence unrealistic behavior. Introducing a velocity-dependent
enlargement of pedestrians stabilizes the system, but the unstable
regime remains unrealistic since the volume exclusion of a pedestrian
($a'_n$) with a negative speed can become negative.
\section{Exponential-distance models}
In this section we consider models with
\begin{equation}
f\Big(\dot x_n,
\Delta x_n\Big) \propto \exp\Big(- d'_n\Big)\,,
\end{equation}
i.e.~exponentially decaying repulsive forces, using the notation
introduced in the previous section.
The paradigmatic model in this class is arguably the social force
model (SFM) as originally introduced in~\cite{Helbing1995}. Further
modifications and enhancements followed. In~\cite{Helbing2000} a
physical force was introduced to mitigate overlapping among
pedestrians. Lakoba et al.~\cite{Lakoba2005} studied the calibration
of the modified SFM by improving the numerical efficiency of the model
and introducing several enhancements. The calibration of the modified
SFM was investigated again in~\cite{Johansson2007} by means of an
evolutionary optimization algorithm. Parisi et al.~\cite{Parisi2009}
investigated the difficulties of the SFM concerning the quantitative
description of pedestrian dynamics by introducing a mechanism, called the
``respect mechanism'', to mitigate overlapping among pedestrians.
Finally in Ref.~\cite{Moussaid2009} an interesting Ansatz to calibrate
the SFM by means of experimental measurements led to a modified
repulsive force that includes the effect of the distance as well as
the angle between two pedestrians. However, these measurements,
basically from experiments with two pedestrians, are extrapolated to a
crowd with several individuals. Hence, it implicitly assumes that the
superposition of forces can be applied. This hypothesis, however,
lacks experimental evidence in the context of pedestrian dynamics.
Often different specifications of the repulsive force are adopted, in
form of circular or elliptical equipotential lines. However, for a
one-dimensional analysis both specifications are equivalent. In
comparison to the models with algebraic forces the exponential force
has no singularity at $d^\prime = 0$. Hence it is defined for all
distances and no regularization is required.
\subsection{Linear stability}
\label{sub-stab2}
One common point among the aforementioned models is their
consideration of a ``physical'' force to mitigate overlapping among
pedestrians. For the stability analysis we therefore consider the
following system using dimensionless variables:
\begin{equation}
\ddot x_n' = - a\exp\left(-\frac{d_n'}{b}\right) -c\,
r_\varepsilon(d_n') + v_0'- \dot x_n',
\label{eq:sfm}
\end{equation}
with $a$, $b$ and $c$ dimensionless positive constants, $d_n'$
as defined in (\ref{eq:effDistn}), $v_0'=\frac{v_0\tau}{a_0}$ and
$r_\varepsilon(\cdot)$ the function (\ref{eq:aTheta}).
The general form of these models contains five parameters. However,
the value for $\tau$ was determined empirically in
\cite{Helbing2003,Moussaid2009}. That means the system (\ref{eq:sfm})
can be defined by the quadruple
\begin{equation}
\mathcal{\tilde Q}=\langle a, b, c, \tilde a_v \rangle.
\end{equation}
Similarly to Sec.~\ref{sub-stab1} we consider the effect of small
perturbations $\epsilon_n(t)=\alpha_n e^{zt}$ to the steady state
positions $y_n$. After some calculations outlined in the appendix
Sec.~\ref{AppB} we obtain the following stability condition
\begin{equation}
\tilde \Phi \coloneqq -\frac12+\tilde c \alpha<0,
\label{eq:stab_condition_exp}
\end{equation}
with $\alpha=\frac1{2\tilde b -1}$, $\tilde b =\tilde a_v \tilde c$,
$\tilde c=\tilde a/b-\frac{1}{2}c$ and $\tilde a=-a\exp(-d'/b)$.
Assuming $d'$ is positive, which means $r_\varepsilon(\cdot)$ vanishes
or simply $c=0$, and the enlargement of pedestrians is
constant ($\tilde a_v=0$), we obtain
\begin{equation}
\tilde b = 0,\;\; \alpha=-1,
\end{equation}
and
\begin{equation}
\tilde \Phi = -\frac12+\frac{a}{b}\exp(-d'/b).
\label{eq:exp_stab_sp}
\end{equation}
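
A direct numerical evaluation of (\ref{eq:exp_stab_sp}), e.g.\ on an
$(a,b)$-grid as in Fig.~\ref{fig:sfm} below, takes only a few lines
(a sketch of our own, with freely chosen names):
\begin{verbatim}
import math

def Phi_exp(a, b, d):
    # stability indicator for the case a_v = c = 0:
    # the homogeneous state is stable if the result is negative
    return -0.5 + (a / b) * math.exp(-d / b)
\end{verbatim}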
Fig.~\ref{fig:sfm} depicts the stability regions for the
$\mathcal{\tilde Q}=\langle a, b, 0, 0\rangle$-class models in the
$(a,\,b)$-plane.
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_00_d2_50.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_00_d1_50.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_00_d0_50.pdf}
\vspace{-0.4cm}
\caption{(Color online) Stability region of a modified SFM ($\mathcal{\tilde Q}=\langle
a, b, 0, 0\rangle$) with respect to $a$ and $b$ for different
densities. Left: $d'=2.5\,$. Middle: $d'=1.5\,$. Right: $d'=0.5\,
$. The colors
are mapped to the values of $\tilde \Phi$ (Eq. (\ref{eq:exp_stab_sp})).
Negative values indicate stability regions.}
\label{fig:sfm}
\end{center}
\end{figure}
To investigate the effect of a velocity-dependent enlargement of
pedestrians we evaluate the stability regions of $\mathcal{\tilde Q
}=\langle 4, b, 0, \tilde a_v\rangle$-class models. The value of
$a=4$ is, according to Fig.~\ref{fig:sfm}, large enough to lie in an
unstable region.
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_15_d2_50.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_15_d1_50.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{sfm_a_b_av0_15_d0_50.pdf}
\caption{(Color online) Stability region of a modified SFM ($\mathcal{\tilde Q}=\langle
a, b, 0, \tilde a_v =0.15\rangle$) with respect to $a$ and $b$ for
different densities. Left: $d'=2.5\,$. Middle:
$d'=1.5\, $. Right: $d'=0.5\, $.
}
\label{fig:avsfm}
\end{center}
\end{figure}
In Fig.~\ref{fig:avsfm} we observe that a system with a
velocity-dependent enlargement ($\tilde a_v \ne 0$) becomes
increasingly stable in the $(a, b)$-space with decreasing
density. This confirms the observation made in the previous section:
velocity-dependent enlargement of pedestrians has a stabilizing effect
on the system.
\subsection{Simulations}
\label{sec:sim2}
Similar to Sec.~\ref{sec:sim} we perform simulations with the
exponential-distance models for different parameters.
The same initial values and parameters as in Sec.~\ref{sec:sim} are considered.
$N=57$ pedestrians are uniformly distributed, which corresponds to $d'\approx 1.5$.
For $a_v=c=0$, the critical value of $a$ as a function of $b$ is given
by $a_{\rm cr}= \frac{b}{2\exp(-d'/b)}$.
Accordingly we choose $b=1.5$ and $a=3$, which yields an unstable system (compare also to Fig.~\ref{fig:sfm}).
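
This choice can be checked quickly (a small numerical sanity check of our
own):
\begin{verbatim}
import math
b, d = 1.5, 1.5
a_cr = b / (2.0 * math.exp(-d / b))   # = (b/2)*exp(d/b) ~ 2.04
print(1.5 < a_cr < 3.0)               # a = 1.5 stable, a = 3 unstable
\end{verbatim}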
Here again we make the same observation as with algebraically
decaying models (Sec.~\ref{sec:sim}). In the unstable regime the
$\mathcal{\tilde Q}=\langle a, b, 0, 0\rangle$ models behave
unrealistically: instead of jams, collisions occur.
Based on the time series of the speed's standard deviation,
we compare the behavior of the model in a stable and an unstable
regime (defined according to Eq.~(\ref{eq:stab_condition_exp})).
As expected, Fig.~\ref{fig:vstd_exp} (left) shows
for $\mathcal{\tilde Q}=\langle 1.5, 1.5, 0, 0\rangle$ that the standard deviation of the speed
decreases to zero and the overall system converges to a homogeneous state, whereas it grows until the simulation is interrupted ($\mathcal{\tilde Q}=\langle 3, 1.5, 0, 0\rangle$).
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{std_exp_log_57_av0_00_a1_50_b1_50.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{std_exp_log_57_av0_00_a3_00_b1_50.pdf}
\vspace{-0.4cm}
\caption{Standard deviation of the speeds with respect to simulation time. The initial perturbation
in the speed decays to zero when the system is stable (left: $a=1.5,\; b=1.5$), while it
grows when the system is unstable (right with $a=3.0,\; b=1.5$).}\label{fig:vstd_exp}
\end{center}
\end{figure}
\section{A new model}
In the previous sections we investigated properties of several
force-based models related to jam formation. The linear stability
analysis of these models yields conditions that determine parameter
regions where unstable behavior may lead to stop-and-go waves in
one-dimensional systems with periodic boundary conditions. However, simulations
with parameters in the unstable regime lead to unrealistic behavior
(collisions, overlapping etc.) instead of stop-and-go waves.
In this section we discuss the reasons for this failure and formulate a
new model that produces stop-and-go waves in its unstable regime.
Rewriting the generic equation of motion (\ref{eq1}) as
\begin{equation}
\ddot x_n = \frac{\tilde v_0-\dot x_n}{\tau},
\label{eq:modified2}
\end{equation}
with $\tilde v_0 = \tau f+v_0 \le v_0$ implies that the movement of
pedestrian $n$ is determined by a driving force with a modified and
density-dependent desired speed $\tilde v_0$: the higher the density,
the smaller the desired speed. However, if the desired speed is
negative, which means pedestrians move backwards after some delay, collisions are
likely to happen. This is in fact the case in the algebraically decaying
and exponential-distance models, where collisions are observed in the
unstable regimes instead of jams.
In order to avoid such problems, a non-linear function $f(\Delta
x_n, \dot x_n, \dot x_{n+1})$ such that $f(0, 0, 0) = -v_0/\tau$ is
required. That means that overlapping of pedestrians leads
to a vanishing desired speed $\tilde v_0=0$ instead of a negative one.
Note that high initial values of $v_0$ may still lead to overlapping, and thereby to backward movement, even though the
desired speed vanishes, $\tilde v_0=0$, at contact. We discuss this effect in more detail in Sec.~\ref{sec:discussion}.
For $f$ we propose the following expression:
\begin{equation}
f(\Delta x_n, \dot x_n, \dot x_{n+1}) = - \frac{v_0}{\tau}
\log\Big(c\cdot R_n + 1\Big),
\label{eq:newf}
\end{equation}
with
\begin{equation}
\qquad R_n =r_\varepsilon
\Big(\frac{\Delta x_n}{a_n+a_{n+1}} -1\Big),\qquad c = e - 1.
\label{eq:Rn}
\end{equation}
Pedestrians anticipate collisions when their distance to their
predecessors is smaller than a critical distance $a=a_n + a_{n+1}$,
which is given by the addition of safety distances of two consecutive
pedestrians. It is worth pointing out at this point that $a_n$ does
not model the body of pedestrian $n$ but represents a ``personal''
safety distance. For $\Delta x_n=0$, i.e., $R_n=1$ the repulsive
force reaches the value $-v_0/\tau$ to nullify the effects of the
driving term (Fig.~\ref{fig:log_f}). In other words, the desired speed
$\tilde v_0$ vanishes and pedestrians are not pushed to move
\textit{backwards}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.32\columnwidth]{log_f.pdf}
\vspace{-0.4cm}
\caption{The absolute value of the repulsive force according to Eq. (\ref{eq:newf}).}
\label{fig:log_f}
\end{center}
\end{figure}
The corresponding dimensionless model we henceforth use is
\begin{equation}
\ddot x_n' = -v'_0\ln\Big(c\cdot R_n' + 1\Big)- \dot x_n' + v'_0,
\label{eq:modlog}
\end{equation}
with
\begin{equation}
\quad R_n'
=r_\varepsilon \Big(\frac{\Delta x_n'}{a'_n+a'_{n+1}} -1\Big),\quad v'_0
=\frac{v_0\tau}{a_0}.
\end{equation}
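
A minimal implementation sketch of the right-hand side of
(\ref{eq:modlog}), our own illustration with freely chosen names, reads:
\begin{verbatim}
import numpy as np

def accel_new(dx, v, v_next, a_v, v0, eps=0.01):
    # dx: headway Delta x'_n; v, v_next: speeds of n and n+1;
    # a_v: \tilde a_v; v0: dimensionless desired speed v'_0
    a_sum = 2.0 + a_v * (v + v_next)              # a'_n + a'_{n+1}
    R = eps * np.logaddexp(0.0, -(dx / a_sum - 1.0) / eps)
    c = np.e - 1.0
    return -v0 * np.log(c * R + 1.0) + (v0 - v)
\end{verbatim}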
The main difference between this model and the optimal velocity model
\cite{Bando1995,Nakayama2005} is the velocity-dependent space
requirement of pedestrians, expressed by the critical distance $a$.
\subsection{Stability analysis}
In this section, we investigate the stability of the new model. We
suppose that $\Delta y<a'$, where $\Delta y=\frac1{\rho a_0}$ is the
mean dimensionless spacing and $a'=2(1+\tilde a _v v')$, $v'$ being
the dimensionless speed of the uniform equilibrium solution, and
add a small perturbation $\epsilon_n$ to the dimensionless coordinates
of pedestrians. For $R_n'$ we obtain with $a^\prime =2(1 +\tilde
a_vv')$ and $a_v^\prime = \frac{\tilde a_v}{a^\prime}$
\begin{align}
R_n'
&\approx 1 - \frac{\Delta x'_n}{a^\prime}\Big( 1 - a_v^\prime(\dot \epsilon_n
+ \dot \epsilon_{n+1}) \Big).
\end{align}
From the equation of motion (\ref{eq:modlog}) we obtain with $d_0=1 +
c\Big( 1 - \frac{\Delta y}{a^\prime}\Big)$
\begin{align*}
\ln( c\cdot R_n'+1)
&\approx \ln(d_0) +\frac{c}{d_0}\Big(
\frac{\Delta y}{a^\prime}a_v^\prime(\dot \epsilon_n+ \dot
\epsilon_{n+1}) -\frac{\Delta \epsilon_n}{a^\prime} \Big).
\end{align*}
Eq.~(\ref{eq:modlog}) in the steady state yields $v_0'\ln(d_0) = v_0' - v'$, thus
\begin{equation}
\ddot \epsilon_n
= -v'_0 \frac{c}{d_0}\Big(
\frac{\Delta y}{a^\prime}a_v^\prime(\dot \epsilon_n+ \dot
\epsilon_{n+1}) -\frac{\Delta \epsilon_n}{a^\prime} \Big) - \dot \epsilon_n.
\label{eq:eps_neu}
\end{equation}
Eq. (\ref{eq:eps_neu}) rewritten in the $z$-domain yields
\begin{equation}
z^2 +\Big( \xi a_v'\Delta y\Big(e^{\boldsymbol{i}k}+1\Big)
+ 1\Big)z -
\xi\Big(e^{\boldsymbol{i}k}-1\Big) =0,
\label{eq:stab_neu_av}
\end{equation}
with $\xi= \frac{cv'_0}{a' d_0}$.
Denoting by $\hat z^\pm$ the two solutions of (\ref{eq:stab_neu_av}), we show in
Fig.~\ref{fig:newd_av} the influence of the velocity-dependence of the
safety distance ($\tilde a_v$) and the constant $v'_0$ on the
stability behavior of the model.
As expected we observe that velocity-dependent safety distance has a
stabilizing effect on the model. Unlike the previous models, for
$a_v\ne 0$ the model can still show significant unstable behavior.
This observation is important since it has been shown in the context of different force-based models that a constant space
requirement of pedestrians is responsible for an unrealistic shape of the
fundamental diagram in single-lane movement~\cite{Seyfried2006,Chraibi2010a}.
Additionally, we observe that increasing $v'_0$ leads to an unstable system.
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{log_k_av_solution2.pdf}
\subfigure{}
\includegraphics[width=0.32\columnwidth]{log_k_v0_solution2.pdf}
\vspace{-0.4cm} \caption{(Color online) Left: Stability region in the $(\tilde a_v, k)$-space for
$v^\prime_0=3$ and $\Delta y=1.5$. Right: Stability region in the
$(v'_0, k)$-space for $\tilde a_v=0$ and $\Delta y=1.5$.
The colors are mapped to the values of real part of the positive solutions $\hat z^+$.
}
\label{fig:newd_av}
\end{center}
\end{figure}
Expanding Eq.~(\ref{eq:stab_neu_av}) around $k\approx 0$ yields the stability condition
\begin{equation}
\hat \Phi \coloneqq \Big(\frac{1}{1 + 2\xi a'_v \Delta
y}\Big)\Big(\frac{\xi}{1 + 2\xi a'_v \Delta y} +\xi a'_v\Delta y
\Big) - 1/2 < 0.
\label{eq:log_condition}
\end{equation}
For $\tilde a_v=0$ the equation above simplifies to
\begin{equation}
\xi<1/2,\qquad \xi= \frac{cv'_0}{a' d_0}.
\end{equation}
This result is in agreement with the stability condition
$V'<1/(2\tau)$ given in Ref.~\cite{Bando1995} for the system
\begin{equation}
\ddot x_n = A(V(\Delta x_n) -\dot x_n),
\end{equation}
with $A=1/\tau$ and $V(\Delta x_n) = v_0(1-\ln(1+cR_n))$.
The dimensionless form of the equation of motion (\ref{eq:modlog})
has only two free parameters, $v'_0$ and $\tilde a_v$.
In Fig.~\ref{fig:new_av_v0} we observe that the system becomes
increasingly unstable with increasing $v'_0$ (at a relatively small
and constant $\tilde a_v$). Assuming that the free flow speed $v_0$ is
constant, this means that increasing the reaction
time $\tau$ or diminishing the safety space leads to unstable behavior
of the system.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.35\columnwidth]{log_av_v0_solution2.pdf}
\caption{(Color online) Stability region in the $(\tilde a_v, v^\prime_0)$-space for
$\Delta y=1.5$. The colors are mapped to the values of $\hat
\Phi$ in Eq.~(\ref{eq:log_condition}). }
\label{fig:new_av_v0}
\end{center}
\end{figure}
\subsection{Simulations}
We perform simulations with the model introduced above, using the same
set-up as before. For $\tilde a_v=0$, $v'_0=1$ and $\Delta
y_n = 1.5$ we calculate the solution for 3000 s. Fig.~\ref{fig:jams}
shows the trajectories of 133 pedestrians. $\varepsilon$ in Eq. (\ref{eq:Rn}) is set to 0.01.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{new_traj_133_av0_00_v01_00.pdf}
\vspace{-0.3cm}
\caption{Trajectories for $\Delta y_n = 1.5$.
The trajectories show stop-and-go waves.}
\label{fig:jams}
\end{figure}
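
The linear instability of this parameter set can be verified directly
from the condition $\xi<1/2$ derived above (a small check of our own;
for $\tilde a_v=0$ the critical distance is $a'=2$):
\begin{verbatim}
import math
v0, dy = 1.0, 1.5
c = math.e - 1.0
a = 2.0                          # a' for a_v = 0
d0 = 1.0 + c * (1.0 - dy / a)
xi = c * v0 / (a * d0)
print(xi)                        # ~0.60 > 1/2: linearly unstable,
                                 # consistent with the stop-and-go waves
\end{verbatim}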
As shown in Fig.~\ref{fig:jams_vel} the speed does not become negative,
therefore backward movement is not observed. This condition favors
the appearance of stable jams.
\begin{figure}[H]
\centering
\subfigure{}
\includegraphics[scale=0.32]{screenshot_300.pdf}
\subfigure{}
\includegraphics[scale=0.32]{screenshot_2000.pdf}
\vspace{-0.4cm}
\caption{Speed of pedestrians at different time steps.
Left: $t=300\,$ s,
right: $t=2000\,$ s.}
\label{fig:jams_vel}
\end{figure}
Fig.~\ref{fig:std_vel_log} shows the time evolution of the speed's standard deviation. After a relatively pronounced
increase of the standard deviation, a stable plateau is formed. That means the system settles into a ``stable'' inhomogeneous state in which the stop-and-go pattern persists.
\begin{figure}[H]
\centering
\includegraphics[scale=0.32]{std_new_log_133_av0_00_v01_00.pdf}
\caption{Standard deviation of the speed with respect to simulation time. The initial perturbation
in the speed stabilizes at a non-zero value.}
\label{fig:std_vel_log}
\end{figure}
\section{Discussion and summary}
\label{sec:discussion}
Since their first application to pedestrian dynamics by Hirai and
Tarui~\cite{Hirai1977}, force-based models have been used extensively
to investigate the properties of crowds. The ``goodness'' of these
models is usually asserted by means of qualitative and/or quantitative
investigations. Hereby, a model is judged to be realistic if its
description of pedestrian dynamics is consistent with empirical
findings. For example, the fundamental diagram is often used as a
benchmark to test the plausibility of such models.
Depending on the expression of the repulsive force, we classify the
investigated force-based models as ``algebraically decaying'' and
``exponential-distance models''. The repulsive force in the first
category is inversely proportional to the effective distance of two
pedestrians
\cite{Yu2005,Chraibi2010a,Helbing2000a,Seyfried2006,Guo2010,Guo2012,Lohner2010,Shiwakoti2011}.
In the second category however, the magnitude of the repulsive force
increases exponentially with decreasing
distance~\cite{Helbing1995,Helbing2000,Lakoba2005,Johansson2007,Parisi2009,Moussaid2009}.
Hybrid models that rely on additional mechanisms to optimize the
desired direction of pedestrians (e.g.~\cite{Moussaid2011}) or to
handle collisions among pedestrians like for
example~\cite{Karamouzas2009,Karamouzas2014}, where the concept of the
time-to-collision is incorporated in the repulsive forces, make the
analytic form of the repulsive force too complicated to be
investigated analytically. Therefore, we do not include these models
in our analysis.
In this work we apply a method that gives new insights into the
characteristics of force-based models for pedestrian dynamics. It is
based on an analytical approach by investigating the linear stability
of the homogeneous steady state.
In this manner, it is possible to determine for which
parameter set, if any exists, a model is able to reproduce inhomogeneous states.
Yet the nature of the unstable states (and the presence of realistic stop-and-go waves)
has to be described by simulation.
From an empirical point of view, the stop-and-go waves that were
observed in experiments under laboratory conditions
\cite{Portz2010,Lemercier2012} have a short pseudo-period. Hence, it
is not clear if these waves disappear after a long time or persist. In
any case, their existence has been observed frequently in such
experiments.
We have confirmed the analytical results by simulations which also
give information about the nature of the unstable state. These
simulations have clearly shown that the unstable regions in the
investigated models do not show stop-and-go waves, but instead
unrealistic behavior, e.g.\ backward movement and hence overlapping of
pedestrians.
We have discussed that the superposition of forces may lead to
negative ``desired'' speeds and hence to backward movements. In an
attempt to avoid this side-effect we have introduced a simple force-based
model that shows no negative speeds in simulations. As expected, the
model is able to produce stop-and-go waves in the instability region
instead. However, depending on the
chosen values for $v'_0$, collisions \textit{can} occur, accompanied by backward movement
and negative speeds.
This is
explained by the fact that at the time $t_0$ when the sum of the
repulsive force and the positive driving term vanishes the system is
described by the following ODE
\begin{equation}
\ddot x'_n + \dot x'_n = 0,
\label{eq:decay_sys}
\end{equation}
which yields a speed that decays exponentially:
\begin{equation}
\dot x'_n(t) = \dot x'_n(t_0)\exp(-(t-t_0)).
\end{equation}
$t_0$ can be interpreted as the time at which pedestrians start
anticipating a possible collision.
Larger $v'_0$ implies a slower relaxation of the velocity.
Therefore, a possible enhancement of this model could be to shift the
minimal distance such that at $t_0$, $\Delta x'_n \ne 0$. That
improves the ability of the system to tolerate
slower decay of speeds for $t>t_0$. However, the
main difficulty is that the value of the critical time $t_0$
remains unknown and can not be easily calculated. This would
require adding more complexity to the model,
e.g. by considering behavioral anticipation of the dynamics, adding
more (physical) forces or implementing extra collision detection
techniques.
The investigations presented here were performed for single-file
motion, i.e. a strictly one-dimensional scenario. Although this situation
is well studied empirically in several controlled experiments, generically
pedestrian dynamics is two-dimensional. It remains to be seen, both
theoretically and empirically, how the scenario found here changes
in this case.
\section{Appendix}
\subsection{Derivation of stability condition for algebraic forces}
\label{appA}
Here we give the details of the derivation of the stability criterion
of Sec.~\ref{sub-stab1}.
From (\ref{eq-pertub}) we find that
\begin{equation}
\dot x_n' = v' + \dot \epsilon_n,\;\quad
\Delta \dot x_n' = \Delta \dot \epsilon_n\,,
\quad
\ddot x_n' = \ddot \epsilon_n,
\label{eq-pertub2}
\end{equation}
since $\ddot y_n=0$. Inserting this into
the equation of motion Eq.~(\ref{eq:invd}) we obtain
\begin{equation}
\ddot \epsilon_n =-F\cdot G+ v_0'-v' - \dot \epsilon_n,
\label{eqmotion}
\end{equation}
where $F$ and $G$ are defined as
\begin{eqnarray}
F &=& \Big(d' + \Delta \epsilon_n - \tilde a_v
(\dot \epsilon_n + \dot \epsilon_{n+1}) \Big)^{-q}\\
G &=& \Big({\mu + \delta r_\varepsilon(\Delta \dot \epsilon_n)}\Big)^2,
\end{eqnarray}
and $d' =\Delta y-2\tilde a_v v' - 2$.
We suppose that $v$ and $\rho$ are such that $d'\not=0$.
Considering the first-order approximation of $\exp(x)$ for
$|x|\ll \varepsilon$ we have
\begin{equation}
r_\varepsilon(x) \approx \varepsilon \ln\Big(2-\frac{x}{\varepsilon}\Big)
= \varepsilon \Big(\ln(2) + \ln( 1- \frac{x}{2\varepsilon} )\Big)
\approx \varepsilon\ln(2) - \frac{1}{2}x\,.
\label{eq:tanh}
\end{equation}
Then,
\begin{equation}
G \approx \left(\mu + \delta\varepsilon\ln(2)
- \frac{1}{2}\delta\Delta \dot \epsilon_n\right)^2
\approx \gamma^2 - \delta\gamma \Delta \dot \epsilon_n\,.
\end{equation}
where we have introduced $\gamma=\mu + \delta\varepsilon\ln(2)$.
Using the effective distance Eq.~(\ref{eq:effDistn}),
the expression for $F$ can be written as
\begin{equation}
F= \Big(\frac{1}{d^\prime}\Big)^q\Big(1-\underbrace{\frac{\tilde a_v
(\dot \epsilon_n + \dot \epsilon_{n+1})-\Delta \epsilon_n}{d^\prime} }_{\ll 1}
\Big)^{-q}
\approx \Big(\frac{1}{d^\prime}\Big)^q\Big(1+ q\frac{\tilde a_v (\dot
\epsilon_n + \dot \epsilon_{n+1})-\Delta \epsilon_n}{d^\prime}\Big).
\end{equation}
Substituting the expressions for $F$ and $G$ in Eq.~(\ref{eqmotion}) yields
\begin{equation}
\ddot \epsilon_n = -\Big(\frac{1}{d^\prime}\Big)^q\Big( \gamma^2 +
\frac{{\gamma}^2 q\tilde a_v}{{d^\prime}}(\dot \epsilon_n + \dot
\epsilon_{n+1}) - \frac{{\gamma}^2q}{{d^\prime}}\Delta \epsilon_n
-\delta\gamma\Delta \dot \epsilon_n \Big)
+ v_0' -v' -\dot \epsilon_n.
\label{eq:x}
\end{equation}
In the steady state the equation of motion (\ref{eq:invd}) simplifies to
\begin{equation}
0 = -\frac{\gamma^2}{{d^\prime}^q} + v_0'- v'\,,
\label{eq2}
\end{equation}
and we obtain after rearranging Eq.~(\ref{eq:x})
\begin{equation}
\ddot \epsilon_n = \frac{\delta\gamma}{{d^\prime}^q}\Delta \dot
\epsilon_n + \frac{{\gamma}^2q}{{d^\prime}^{q+1}}\Delta \epsilon_n -
\frac{{\gamma}^2q \tilde a_v}{{d^\prime}^{q+1}}(\dot \epsilon_n
+ \dot \epsilon_{n+1}) - \dot \epsilon_n\,.
\label{eq:epsf}
\end{equation}
Assuming a perturbation of the form
$\epsilon_n(t)=\alpha_ne^{z t}$ with $z\in\mathbb{C}$ and $\alpha_n\in\mathbb R$, $n=1,\ldots,N$, yields
\begin{equation}
\alpha_nz^2 = \frac{\delta\gamma}{{d^\prime}^q}z(\alpha_{n+1}-\alpha_n)
+ \frac{{\gamma}^2q}{{d^\prime}^{q+1}}(\alpha_{n+1}-\alpha_n)
- \frac{{\gamma}^2q \tilde a_v}{{d^\prime}^{q+1}}z(\alpha_n+\alpha_{n+1})
- \alpha_nz,
\label{eq:a1}
\end{equation}
with $\alpha_{N+1}=\alpha_1$.
Introducing
\begin{equation}
A=\frac{\delta\gamma}{{d^\prime}^q}z+\frac{{\gamma}^2q}{{d^\prime}^{q+1}}
-\frac{{\gamma}^2q \tilde a_v}{{d^\prime}^{q+1}}z\qquad \mbox{and}\qquad
B=z^2+\frac{\delta\gamma}{{d^\prime}^q}z+\frac{{\gamma}^2q}{{d^\prime}^{q+1}}
+\frac{{\gamma}^2q \tilde a_v}{{d^\prime}^{q+1}}z+z,
\end{equation}
Eq.~(\ref{eq:a1}) takes the simple form
\begin{equation}
\alpha_n=\alpha_{n+1}\frac AB\,.
\end{equation}
Iterating over $n$ and using the periodicity $\alpha_{N+1}=\alpha_1$, we obtain the condition
\begin{equation}
\left(\frac AB\right)^N=1
\qquad\Leftrightarrow\qquad
A=Be^{i2\pi l/N},
\quad l=0,\ldots,N-1.
\label{eq:a4}
\end{equation}
Written out, this equation reads
\begin{equation}
z^2 = \delta\gamma\frac{e^{\boldsymbol{i}k}-1}{{d^\prime}^q}z-
\phi \tilde a_v(e^{\boldsymbol{i}k}+1)z
+\phi(e^{\boldsymbol{i}k}-1)-z ,
\label{eq:stab}
\end{equation}
with $\phi= \frac{q\gamma^2}{{d^\prime}^{q+1}}$ and
$k=2\pi l/N$ with $l=0,\ldots,N-1$.
The system described by the equation of motion~(\ref{eq:invd}) is
stable if the real part $\Re [z]$ of all roots $z$ of
Eq.~(\ref{eq:stabN}) is negative. Let $z^+$ and $z^-$ be the two roots of
Eq.~(\ref{eq:stabN}). For several model classes (see Tab.~I), we
investigate the stability regions in dependence of the wave
number $k$ and the density (Fig.~\ref{fig:d-k}). Since $\Re[z^+]\ge\Re[z^-]$, it is enough to check the sign of $\Re[z^+]$.
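
The roots themselves follow from the quadratic formula; the following
sketch of our own returns $\Re[z^+]$ for given parameters and wave
number, and is included here only as an illustration of how maps like
Fig.~\ref{fig:d-k} can be produced:
\begin{verbatim}
import numpy as np

def re_z_plus(k, mu, delta, q, a_v, d, eps=0.1):
    # larger real part of the two roots of the dispersion relation
    gamma = mu + delta * eps * np.log(2.0)
    phi = q * gamma**2 / d**(q + 1)
    e = np.exp(1j * k)
    # quadratic z^2 + B*z + C = 0:
    B = 1.0 - delta * gamma * (e - 1.0) / d**q + phi * a_v * (e + 1.0)
    C = -phi * (e - 1.0)
    disc = np.sqrt(B * B - 4.0 * C)
    return max(((-B + disc) / 2.0).real, ((-B - disc) / 2.0).real)
\end{verbatim}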
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{mu_0_50_m0_q2_av_0_00_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{mu_0_50_m0_q1_av_0_00_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{mu_0_50_m0_q2_av_0_10_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{mu_0_50_m1_q1_av_0_10_solution2.pdf}
\caption{(Color online) Stability region in the $(d^\prime, k)$-space for
different model classes.
Top left: $\mathcal{Q}= \langle 0.5, 0, 2,0\rangle$.
Top right: $\mathcal{Q}= \langle 0.5, 0, 1, 0\rangle$.
Bottom left: $\mathcal{Q}= \langle 0.5, 0, 2, 0.1\rangle$.
Bottom right: $\mathcal{Q}= \langle 0.5, 1, 1, 0.1\rangle$. The colors
are mapped to the value of $\Re [z^+]$ such that stability corresponds to $\Re [z^+]<0$.
}
\label{fig:d-k}
\end{center}
\end{figure}
We can observe that introducing a velocity-dependence in the form of
the relative velocity in the numerator of the repulsive term
(\ref{eq:invd}) or in the space requirement (\ref{eq:an}) has
a stabilizing effect on the behavior of the model, especially for
small wave numbers $k$.
\subsubsection{Stability for small $k$}
\label{appA2}
Limiting the expansion to second order and taking advantage of
$e^{\boldsymbol{i}k}\approx 1+\boldsymbol{i}k- \frac{k^2}{2}$ we obtain
from Eq.~(\ref{eq:stab})
\begin{align}
{z^{(0)}}^2k^2 &= \frac{\delta\gamma}{{d^\prime}^q}\Big(
\boldsymbol{i}k-\frac{k^2}{2}\Big)\Big({z^{(0)}}k + {z^{(1)}}k^2\Big)
+\phi\Big(\boldsymbol{i}k-\frac{k^2}{2}\Big)\nonumber\\
&\quad - \tilde a_v\phi\Big(2+\boldsymbol{i}k-\frac{k^2}{2}\Big)
\Big({z^{(0)}}k + {z^{(1)}}k^2\Big)- ({z^{(0)}}k + {z^{(1)}}k^2)\nonumber\\
&=
\Big(\boldsymbol{i}\frac{\delta\gamma}{{d^\prime}^q}z^{(0)}-\frac{\phi}{2}
- 2 \tilde a_v\phi z^{(1)} -\boldsymbol{i} \tilde a_v\phi z^{(0)}
- z^{(1)}\Big)k^2
+ \Big( \boldsymbol{i}\phi -2 \tilde a_v\phi z^{(0)} - z^{(0)}\Big)k.
\end{align}
Rearranging with respect to $k$ yields
\begin{equation}
\Big({z^{(0)}}^2-\boldsymbol{i}\frac{\delta\gamma}{{d^\prime}^q}z^{(0)}
+ \frac{\phi}{2} + 2 \tilde a_v\phi{z^{(1)}} +\boldsymbol{i}\tilde a_v
\phi z^{(0)}+ z^{(1)}\Big) k^2 -\Big( \boldsymbol{i}\phi
-2\tilde a_v\phi z^{(0)} - z^{(0)}\Big)k=0.
\label{eq:1}
\end{equation}
By a first-order approximation the terms with $k^2$ in
Eq.~(\ref{eq:1}) can be ignored which leads to
\begin{equation}
\boldsymbol{i}\phi -2\tilde a_v\phi z^{(0)} - z^{(0)}=0.
\label{eq:z00}
\end{equation}
Hence,
\begin{equation}
z^{(0)} = \boldsymbol{i} \frac{\phi}{2\tilde a_v\phi + 1}.
\label{eq:z0}
\end{equation}
With $\Re [z^{(0)}] =0$ we notice that a first order approximation is
not enough to provide the stability criterion, therefore we consider a
second order approximation. From Eq.~(\ref{eq:1}) and because of
Eq.~(\ref{eq:z00}) we obtain
\begin{equation}
{z^{(0)}}^2-\boldsymbol{i}\Big(\frac{\delta\gamma}{{d^\prime}^q}-\tilde
a_v\phi \Big) z^{(0)} + \Big( 2\tilde a_v\phi+1\Big)z^{(1)}+ \frac{\phi}{2} =0.
\label{eq:2}
\end{equation}
Replacing the expression of $z^{(0)}$ from (\ref{eq:z0}) in (\ref{eq:2})
yields
\begin{equation}
\Big(\boldsymbol{i} \frac{\phi}{2\tilde a_v\phi + 1}\Big)^2 -
\boldsymbol{i}\Big(\frac{\delta\gamma}{{d^\prime}^q}-\tilde a_v\phi
\Big)\Big(\boldsymbol{i} \frac{\phi}{2\tilde a_v\phi + 1}\Big)+
\Big(2\tilde a_v\phi +1\Big)z^{(1)} + \frac{\phi}{2} =0,
\end{equation}
or
\begin{align}
\frac{ 2\tilde a_v\phi+1}{\phi}z^{(1)} &=
\frac{\phi}{(2\tilde a_v\phi + 1)^2} -
\Big(\frac{\delta\gamma}{{d^\prime}^q} -\tilde a_v\phi \Big)\Big(
\frac{1}{2\tilde a_v\phi + 1}\Big)-\frac{1}{2},\;\; \phi \ne 0.
\end{align}
Since the coefficient of $z^{(1)}$ is positive, the system described
by Eq.~(\ref{eqmotion}) is linearly stable for $k\approx 0$ if
\begin{equation}
\gamma>0,\qquad\phi\omega^2 -\Big(\frac{\delta\gamma}{{d^\prime}^q}- \tilde a_v \phi\Big)\omega - \frac{1}{2}<0,
\label{condition0}
\end{equation}
with the following notation $\omega=\frac{1}{2\tilde a_v\phi + 1}$.
Remarking that $\frac{1}{2}\omega(2\tilde a_v\phi + 1) = \frac{1}{2}$,
the inequality (\ref{condition0}) can be simplified to
\begin{equation}
\gamma>0,\qquad \Phi \coloneqq \phi\omega - \frac{\delta\gamma}{{d^\prime}^q} - \frac{1}{2}<0.
\label{condition}
\end{equation}
Here, as a reminder,
$\phi = \frac{q\gamma^2}{{d^\prime}^{q+1}}$,
$\gamma=\mu+\delta\varepsilon\ln(2)$ and $d'=\Delta y-2\tilde a_v v'-2$,
with $\tilde a_v=a_v/\tau$. Note that since $\delta,\mu\ge0$ and
$\varepsilon>0$, $\gamma>0$ implies here $\mu>0$ or $\delta>0$.
\subsection{Derivation of stability condition for exponential forces}
\label{AppB}
As in the previous section we add a small dimensionless perturbation
$\epsilon_n$ to the uniform solution and get from Eq.~(\ref{eq:sfm})
\begin{equation}
\ddot \epsilon_n =-a \exp \Big(\frac{-d'}{b}\Big)
\exp\Big(\frac{\tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1})-\Delta \epsilon_n}{b}\Big) - c\Big(\varepsilon
\ln(2)-\frac{1}{2}\big(d'+\Delta \epsilon_n - \tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1})\big)\Big)+v_0' - v' - \dot \epsilon_n.
\label{eq:seps}
\end{equation}
In the steady state we have $\ddot x^{(0)} = 0$ and
Eq.~(\ref{eq:sfm}) reduces to
\begin{equation}
0 = -a \exp\Big(\frac{-d'}{b}\Big) - c \Big(\varepsilon
\ln(2)-\frac{1}{2}d'\Big) +v_0' - v'\,.
\label{eq:sfm_ss}
\end{equation}
Using (\ref{eq:sfm_ss}) in (\ref{eq:seps}) yields
\begin{align}
\ddot \epsilon_n&=\underbrace{-a \exp \Big(\frac{-d'}{b}\Big)}_{\tilde a}
\Big(\exp\Big(\frac{\tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1}) - \Delta \epsilon_n}{b}\Big) -1\Big)
+\frac{1}{2}c\Big(\Delta\epsilon_n-\tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1})\Big) -\dot \epsilon_n\nonumber\\
&\approx \tilde a\Big(\frac{\tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1})-\Delta \epsilon_n}{b} \Big) +\frac{1}{2}c\Big(\Delta
\epsilon_n-\tilde a_v(\dot \epsilon_n
+ \dot \epsilon_{n+1})\Big) -\dot \epsilon_n\nonumber\\
&=\tilde a_v(\tilde a/b-\frac{1}{2}c)(\dot \epsilon_n + \dot
\epsilon_{n+1})-(\tilde a/b - \frac{1}{2}c)\Delta \epsilon_n - \dot \epsilon_n.
\end{align}
By introducing the substitutions $\tilde c =\tilde a/b - \frac{1}{2}c$
and $\tilde b = \tilde a_v\tilde c$ we obtain a simplified equation for
the perturbation:
\begin{equation}
\ddot \epsilon_n = \tilde b(\dot \epsilon_n + \dot
\epsilon_{n+1}) - \tilde c\Delta \epsilon_n - \dot \epsilon_n.
\end{equation}
Using the ansatz $\epsilon_n(t)=\alpha_n e^{zt}$ with $\alpha_{n+1}=e^{\boldsymbol{i}k}\alpha_n$, we obtain
\begin{equation}
z^2-\Big(\tilde b(e^{\boldsymbol{i}k}+1) -1\Big)z
+\tilde c(e^{\boldsymbol{i}k}-1)=0.
\label{eq:polyExp}
\end{equation}
Fig.~\ref{fig:d-k-exp} shows the instability regions in the ($k, d'$)-space.
With $\tilde a_v \ne 0$ the instability of the system is
considerably reduced.
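The content of Fig.~\ref{fig:d-k-exp} can be reproduced qualitatively by
solving the quadratic dispersion relation (\ref{eq:polyExp}) numerically.
A minimal Python sketch follows; the values of $\tilde b$ and $\tilde c$
used in the call are assumptions for illustration only.
\begin{verbatim}
import numpy as np

def growth_rate(k, b_t, c_t):
    # largest real part among the two roots z of
    # z^2 - (b_t*(e^{ik}+1) - 1)*z + c_t*(e^{ik}-1) = 0;
    # b_t and c_t stand for \tilde b and \tilde c
    e = np.exp(1j * k)
    roots = np.roots([1.0, -(b_t * (e + 1.0) - 1.0), c_t * (e - 1.0)])
    return roots.real.max()

ks = np.linspace(1e-3, np.pi, 200)
unstable = [k for k in ks if growth_rate(k, b_t=-0.08, c_t=-0.4) > 0]
print(len(unstable))   # number of unstable wavenumbers on the grid
\end{verbatim}
A positive growth rate at wavenumber $k$ signals a linear instability.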
\begin{figure}[H]
\begin{center}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{a_12_0_b1_0_av_0_00_c0_0_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{a_12_0_b1_0_av_0_20_c0_0_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{a_12_0_b2_0_av_0_00_c0_0_solution2.pdf}
\subfigure{}
\includegraphics[width=0.4\columnwidth]{a_12_0_b2_0_av_0_20_c0_0_solution2.pdf}
\caption{(Color online) Stability region in the $(d^\prime, k)$-space for
different model classes. Up left: $\mathcal{\tilde Q}= \langle
12, 1, 0, 0\rangle$. Up right: $\mathcal{\tilde Q}= \langle 12,
1, 0, 0.2\rangle$. Bottom left: $\mathcal{\tilde Q}= \langle 12,
2, 0, 0\rangle$. Bottom right: $\mathcal{\tilde Q}= \langle 12,
2, 0, 0.2\rangle$. The colors are mapped to the value of
$\Re{(\tilde z^+)}$.
}
\label{fig:d-k-exp}
\end{center}
\end{figure}
\subsubsection{Stability for small $k$}
\label{AppB2}
We further focus on the case $k\approx 0$. For the solution
$z\approx z^{(0)}k + z^{(1)}k^2$, substitution into (\ref{eq:polyExp}) yields
\begin{align}
{z^{(0)}}^2k^2&=\tilde b(2 + \boldsymbol{i}k-\frac{k^2}{2})(z^{(0)}k
+ z^{(1)}k^2) -\tilde
c(\boldsymbol{i}k-\frac{k^2}{2}) -(z^{(0)}k +z^{(1)}k^2).
\end{align}
Collecting the coefficients of each power of $k$ yields
\begin{equation}
\Big(-{z^{(0)}}^2 + 2\tilde b z^{(1)} + \boldsymbol{i}\tilde b z^{(0)}
+ \frac{\tilde c}{2} -
z^{(1)} \Big)k^2 + \Big(2\tilde b z^{(0)} -\boldsymbol{i}\tilde c-z^{(0)}
\Big)k=0.
\label{eq:pol}
\end{equation}
Ignoring the $k^2$-term in (\ref{eq:pol}), a first-order
approximation yields
\begin{equation}
z^{(0)} = \boldsymbol{i}\frac{\tilde c}{2\tilde b -1}.
\label{eq:z02}
\end{equation}
Since $\Re(z^{(0)})=0$, we consider a second-order approximation of $z$.
Setting the coefficient of $k^2$ in (\ref{eq:pol}) to zero and replacing
$z^{(0)}$ by its expression from (\ref{eq:z02}) yields
\begin{equation}
z^{(1)}\left(2\tilde b-1\right)= -\frac{\tilde c}{2} -
\left(\frac{\tilde c}{2\tilde b -1}\right)^2 +\tilde b
\frac{\tilde c}{2\tilde b -1}.
\end{equation}
Finally we obtain for $z^{(1)}$
\begin{equation}
z^{(1)}= -\left(\frac{\tilde c}{2} + \left(\frac{\tilde c}{2\tilde
b -1}\right)^2 - \tilde b \frac{\tilde c}{2\tilde b
-1}\right)\Big(\frac{1}{2\tilde b-1}\Big)
= -\alpha\left(\frac{\tilde c}{2} + \tilde c^2\alpha^2
- \tilde b \tilde c \alpha\right),
\end{equation}
and the system is linearly stable for $k\approx 0$ if
\begin{equation}
-\alpha\left(\frac{\tilde c}{2} + \tilde c^2\alpha^2
- \tilde b \tilde c\alpha\right)<0
\end{equation}
where $\alpha=\frac{1}{2\tilde b-1}$.
Dividing by $-\alpha^2\tilde c >0$, we obtain the condition
\begin{equation}
\tilde \Phi=-\frac12+\tilde c \alpha<0,
\end{equation}
with $\alpha=\frac1{2\tilde b -1}$, $\tilde b =\tilde a_v \tilde c$,
$\tilde c=\tilde a/b-\frac{1}{2}c$ and $\tilde a=-a\exp(-d'/b)$.
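This condition can be evaluated directly from the raw model parameters.
In the Python sketch below, the parameters of one model class of
Fig.~\ref{fig:d-k-exp} are used, reading the class as
$\langle a, b, c, \tilde a_v\rangle$ (an assumption), and the headway
$d'$ is an assumed value.
\begin{verbatim}
import numpy as np

def phi_tilde(a, b, c, av_t, d):
    # av_t stands for \tilde a_v; tilde_Phi < 0 <=> stable for k ~ 0
    a_t = -a * np.exp(-d / b)      # \tilde a
    c_t = a_t / b - 0.5 * c        # \tilde c
    b_t = av_t * c_t               # \tilde b = \tilde a_v * \tilde c
    alpha = 1.0 / (2.0 * b_t - 1.0)
    return -0.5 + c_t * alpha

# model class <12, 1, 0, 0.2> of the figure above; d' = 5 is assumed
print(phi_tilde(a=12.0, b=1.0, c=0.0, av_t=0.2, d=5.0))
\end{verbatim}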
\begin{acknowledgments}
M.C. is grateful to Japan Society for the Promotion of Science (JSPS)
for funding this work under Grant-Nr.: PE 12078. T.E. acknowledges
support from JSPS Grants-in-Aid for Scientific Research (13J05086).
A.Sch. thanks the Deutsche Forschungsgemeinschaft (DFG) for support
under grant ``Scha 636/9-1''.
\end{acknowledgments}
Domain walls (DW) form a frontier region between adjacent ferromagnetic domains with different directions of magnetization. Their internal structure depends on the competition between the local exchange and anisotropy and the non-local dipolar field\cite{Hubert}. When reducing the size of the objects to the nanoscale, the shape starts to play an important role. Magnetic nano-materials such as nanowires, nanotubes and nanodots can be designed to have specific properties, and arrays of patterned nano-materials can be used as storage or logic devices. These devices are mainly based on DW motion that can be controlled by a magnetic field or an electric current\cite{Tatara,Boulle}.
Recently, possible applications have been proposed based on the current-induced motion of domain walls in nanowires\cite{Allwood,Parkin,Hayashi}. Usually, transverse (TDW) or vortex walls (N\'{e}el-type walls) appear in nanowires between head-to-head (HH) or tail-to-tail (TT) domains. Sometimes, these two wall types can transform into one another during propagation, and even other types can emerge\cite{Klaui}. In small diameter cylindrical nanowires, the Walker limit is completely suppressed for TDWs, with their velocity and precession speed depending linearly on the applied current\cite{Yan2}.
The interactions between DWs\cite{Hayward1,Hayward2} or between a DW and artificially patterned traps\cite{Petit,OBrien} are used to manipulate the propagation of DWs. Often, these interactions are pictured, in a first approximation, in a Coulombic approach, introducing the magnetostatic charge (pole) density $\rho = -\mu_0\nabla\cdot\textbf{m}$ and the surface charge density $\sigma = \textbf{m}\cdot\textbf{n}$. A HHDW carries an intrinsic North monopole moment or a positive charge, while a TTDW carries a South monopole moment or a negative charge\cite{Hayward1}. Like charges repel, while opposite charges attract.
The interaction between DWs can also be interpreted using topology. A DW as a nonuniform magnetization configuration, can be viewed as a topological defect or soliton\cite{Braun}. A topological defect is characterized by the winding number \textit{n}, which counts the number of times the field wraps around the unit circle when crossing the defect. The DWs in flat nanowires are composite objects formed of edge defects with half-integer winding numbers\cite{Tchernyshyov,Kunz} (\textit{n} = $\pm$1/2). Two edge defects of opposite winding number\cite{Tretiakov} belong to the same topological sector and can be deformed continuously into the ground state and annihilate (attract). Two defects of same winding number cannot be deformed continuously into a zero charge state and thus repel each other. An external force is needed to annihilate them.
In this letter, we study the interaction between DWs in cylindrical ferromagnetic nanowires of moderate aspect ratio. Understanding their behavior and the possible metastable states that can appear as local equilibrium states is important for the progress of devices. The stability of these states is discussed both in the free and in the forced regime. The free regime consists of the relaxation of DWs in the nanowire, while the forced regime comprises the interaction of DWs with a spin polarized electric current. We show that metastable DW pairs are created in small diameter cylinders and can form oscillatory bound states depending on the local potential landscape. Under the influence of a DC current, the bound states can be moved along the nanowire in both directions. These pairs could assist in the reversal mechanism of small nanowires\cite{Braun2}.
\begin{figure}[!t]
\center
\includegraphics[width=8cm]{Fig1.pdf}
\caption{\label{Fig.1} (a) Two transverse domain walls with opposite chirality in a cylindrical nanowire of 600nm in length (longitudinal $x$ axis) and 60nm in diameter. (b) Magnetic density charge magnitude of two TDWs as in panel (a). The surface charge density at the edges is indicated along with the polarity of the DWs magnetic charges (poles). (c) Calculated potential landscape of a rigid DW, as in the panels (a)-(b), situated at the position $x$=-100nm between the edge of the nanowire (at $x$=-300nm) and another DW with opposite chirality (at $x$=+100nm). The filled circles correspond to the monopole-monopole interaction, while the full line and the empty squares correspond to the interaction energy with dipolar correction when the two DWs point in the same direction or opposite.}
\end{figure}
\section{One dimensional model}
The motion of a DW is usually described using the rigid 1D approximation\cite{Slonc,Thiaville}. This model gives a qualitative understanding of the motion of TDWs. The equations of motion of the DW are:
\begin{align}
\label{eq1}
(1+\alpha^2)\dot{X} =& -\frac{\alpha\gamma\Delta}{2\mu_0M_s S}\frac{\partial E}{\partial X} + \frac{\gamma\Delta}{2}H_k\sin 2\psi \nonumber\\
& + \frac{\gamma}{2\mu_0M_s S}\frac{\partial E}{\partial\psi} + (1+\alpha\beta)u \\
(1+\alpha^2)\dot{\psi} =& -\frac{\gamma}{2\mu_0M_s S}\frac{\partial E}{\partial X} -\frac{\gamma\alpha}{2}H_k\sin 2\psi \nonumber\\
& - \frac{\alpha\gamma}{2\Delta\mu_0 M_s S}\frac{\partial E}{\partial\psi} + \frac{\beta-\alpha}{\Delta}u
\end{align}
\noindent where $X$ and $\psi$ are the position and azimuthal angle (in the $yz$ plane) of the DW, $\Delta$ the DW width, $\gamma$ the gyromagnetic ratio, $M_s$ the saturation magnetization, $H_k$ the DW demagnetizing field, $\alpha$ the damping parameter, $u$ the spin drift velocity and $\beta$ the non-adiabatic parameter. $E$ is the potential energy of the DW that includes the internal energy, the Zeeman energy, the interaction energy with other DW and eventually the pinning energy.
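In the free-DW limit ($\partial E/\partial X=\partial E/\partial\psi=0$), the system above reduces to two coupled ODEs for $X$ and $\psi$ that can be integrated directly. The following Python sketch is a minimal illustration; $\alpha$, $\beta$, $u$ and $\Delta$ follow values quoted later in the text, while the gyromagnetic ratio and $H_k$ are assumed values.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 0.015, 0.05      # damping and non-adiabatic parameters
u, Delta = 42.0, 30e-9         # spin drift velocity (m/s), DW width (m)
gamma_g, Hk = 2.21e5, 1e3      # gyromagnetic ratio and H_k: assumed

def rhs(t, y):
    # free-DW limit of the 1D model: dE/dX = dE/dpsi = 0
    X, psi = y
    s = np.sin(2.0 * psi)
    dX = 0.5 * gamma_g * Delta * Hk * s + (1 + alpha * beta) * u
    dpsi = -0.5 * alpha * gamma_g * Hk * s + (beta - alpha) / Delta * u
    return [dX / (1 + alpha**2), dpsi / (1 + alpha**2)]

sol = solve_ivp(rhs, (0.0, 2e-9), [0.0, 0.0], max_step=1e-12)
print(sol.y[0, -1], sol.y[1, -1])  # final DW position (m) and angle (rad)
\end{verbatim}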
The interaction energy between two DWs of opposite chirality in the same wire was calculated analytically using the point-charge model with multipole expansion\cite{Kruger}. The expression below disregards the internal structure of the DWs and the exchange interaction between them, considering only the interaction through the dipolar field. As long as the distance between the two DWs remains large (above three times the DW width), this approximates well the interaction of the DWs and gives a qualitative description of it. In the numerical simulation of the next section, the exchange energy is taken into account and, as will be shown in Fig.~\ref{Fig.2}(d), it is smaller than the demagnetizing energy by a factor of 4 when the DWs are at large distances.
In this 1D approximation the interaction energy between two DWs separated by $d$ is:
\begin{align}
\label{eq2}
E &= \frac{\mu_0q_1q_2}{4\pi d} \left( 1 - \frac{\pi^2\Delta^2\cos(\psi_1-\psi_2)}{4d^2} \right) \nonumber\\ &= -\frac{\mu_0M_s^2S^2}{\pi d}\left( 1 - \frac{\pi^2\Delta^2\cos(\psi_1-\psi_2)}{4d^2} \right)
\end{align}
Here $q_1$ and $q_2$ represent the 'magnetic charges' of the DWs and in this approximation are equal to $\pm2M_sS$, with $S$ the section of the nanowire. The first term represents the Coulomb-type interaction energy (monopole-monopole), while the second is the correction from the dipole-dipole interaction. The next correction term (quadrupolar contribution) does not change the potential landscape calculated below. The interaction between the DWs depends mainly on the separation distance $d$. The interaction energy increases with $d$, therefore one DW experiences an energy well created by the second DW\cite{Hayward1}. The correction term becomes important at moderate distances ($d \sim 3-5\Delta$) and depends on the orientation of the DWs' dipolar moments. It takes extremal values when the DWs' dipolar moments point in the same direction or opposite. This equation remains valid as long as the DWs do not overlap.
In a finite size nanowire, the charged DW also interacts with the edge charges (Fig.~\ref{Fig.1}(b)) that create another potential well with a different depth. The monopoles at each end of the wire are given by $\pm M_sS$. The total interaction energy of a rigid DW is shown in Fig.~\ref{Fig.1}(c). The DW is situated at half distance between the edge of the nanowire (at the position $x$=-300nm) and another DW of opposite chirality (at the position $x$=+100nm), having a width $\Delta$ = 30nm. When considering only the first term in Eq.~\eqref{eq2} (filled circles), the DW feels an attraction from the second DW as they have opposite charges. If the dipolar correction is taken into account, the DW is attracted or repelled by the other DW when their dipolar moments point in opposite or the same direction (empty squares or full line), and this interaction decreases more slowly than in the monopolar case. As a result, if the magnetization in the two DWs precesses with different angular velocities, the interaction energy between them oscillates between attraction and repulsion, leading to a metastable bound state. As the precession in TDWs is related to changes in position, this type of metastable bound state should oscillate along the symmetry axis of the nanowire.
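The potential landscape of Fig.~\ref{Fig.1}(c) can be sketched numerically from Eq.~\eqref{eq2} together with a monopole term for the near edge charge. The Python sketch below is qualitative only: the sign and prefactor chosen for the edge term are assumptions in the spirit of the figure.
\begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi
Ms = 0.6 / mu0                  # mu0*Ms = 0.6 T, as in the text
R, Delta = 30e-9, 30e-9         # wire radius and DW width (Fig. 1)
S = np.pi * R**2

def pair_energy(d, dpsi):
    # DW-DW interaction: monopole term plus dipolar correction
    return -(mu0 * Ms**2 * S**2 / (np.pi * d)) * \
           (1.0 - (np.pi * Delta)**2 * np.cos(dpsi) / (4.0 * d**2))

def landscape(x, x_dw2=100e-9, x_edge=-300e-9, dpsi=np.pi):
    # rigid DW at x; second DW at x_dw2; edge monopole Ms*S at x_edge
    # (attractive edge term assumed here)
    e_edge = -mu0 * (2 * Ms * S) * (Ms * S) / (4 * np.pi * abs(x - x_edge))
    return pair_energy(abs(x_dw2 - x), dpsi) + e_edge

print(landscape(-100e-9))       # DW halfway between edge and second DW
\end{verbatim}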
\section{Numerical simulation}
To investigate the dynamics of DW pairs and the predictions of the analytical 1D model, we compute full 3D micromagnetic simulations (with the nmag package\cite{Fischbacher}) to determine the spatial distribution of the magnetization dynamics\cite{Dolocan1}. We use cylinders with a length varying between 600nm and 1200nm and diameters ranging from 10nm to 60nm. The cylinders were discretized into a mesh with a cell size ranging from 0.9nm, for the thinnest cylinders, to 3nm for the thickest, below the exchange length ($\sim$5nm for Ni). We consistently checked that a smaller cell size discretization does not influence the results presented below.
\begin{figure}[!t]
\includegraphics[width=6cm]{Fig2.pdf}\centering
\caption{\label{Fig.2} (Color online) (a) Density plot of the average magnetization ($m_x$) along the nanowires axis showing a metastable state of two DWs. The plot of the angular variation of the magnetization in the $yz$ plane ($\psi$ angle in degrees) of the two DWs is shown in (b). (c) Instantaneous period of the two signals from (b). In the region of synchronization the ratio of periods remains constant. (d) Time variation of exchange and demagnetizing energies of (a).}
\end{figure}
The DWs in the nanowire were obtained by different methods: either the DWs were nucleated at the ends of the nanowire and subsequently displaced to the center by a spin polarized current, or a predefined multi-domain magnetic state was used. In experiments, DWs are usually injected from nucleation pads at the ends of the nanowire\cite{Glathe} or at the corners of a U-shaped nanowire\cite{Faulkner}.
Fig.~\ref{Fig.1}(a) shows a snapshot of a pair of TDWs of opposite chirality (HH and TT) in a cylindrical nanowire. The configuration was obtained for a Nickel nanowire with 60nm diameter and 600nm in length (4nm cell size, $\mu_0$M$_s$ = 0.6T, zero crystalline anisotropy). The transverse DW is the natural ground state. For this diameter, the difference in total micromagnetic energy between the TW and VW is only 0.6\% of E$_0$, with E$_0 = \mu_0 M_s^2$/2. The form of the walls is triangular and similar to that of flat nanomagnets\cite{Thiaville}. Fig.~\ref{Fig.1}(b) shows the magnitude (norm) of the divergence of the magnetization on a plane cut through the center of the nanowire for the snapshot in panel (a). Light areas indicate the maximum values of $\|\nabla\textbf{m}\|$ with the polarity (poles) denoted by $\pm$. We also indicate the polarity $\pm$ of the surface charge density $\sigma$ at the two edges, considering a nanowire uniformly magnetized in the $x$ direction.
The dynamics of a pair of TDWs is shown in Fig.~\ref{Fig.2}(a). The figure shows the density plot of the magnetization ($m_x$) along the axis of the nanowire ($x$) during 200ns. The DW positions oscillate while the magnetization inside the DWs precesses out of phase in the $yz$ plane (normal to the cylinder axis), as shown in the plot of the azimuthal angle ($\psi$ in degrees) in Fig.~\ref{Fig.2}(b) for each DW. The precession of the magnetization in the DWs, which is anticlockwise, has a variable angular velocity. We observe a metastable bound state with a lifetime of 200ns. Initially, the magnetization in the two DWs is rotating with different angular velocities and after some time the two DWs turn synchronously and annihilate. To determine the instant of synchronization, the instantaneous phases of the two signals in panel (b) were computed using the Hilbert transform. Afterwards, the instantaneous period (frequency) was calculated and is represented in Fig.~\ref{Fig.2}(c). When the two signals synchronize, the ratio of the frequencies remains constant. We observe that the two DWs synchronize after 170ns and shortly after they annihilate, creating a small burst of spin waves.
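The instantaneous-period extraction used for Fig.~\ref{Fig.2}(c) can be reproduced with standard tools. A minimal Python sketch, assuming uniformly sampled signals; the test signal below is synthetic, not the simulation data.
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def instantaneous_period(signal, dt):
    # analytic signal -> unwrapped phase -> instantaneous frequency
    phase = np.unwrap(np.angle(hilbert(signal - signal.mean())))
    freq = np.gradient(phase, dt) / (2.0 * np.pi)
    return 1.0 / freq

# synthetic chirped test signal (illustrative only)
t = np.arange(0.0, 200e-9, 1e-11)
psi = np.cos(2 * np.pi * (5e7 + 1e15 * t) * t)
print(instantaneous_period(psi, 1e-11)[len(t) // 2])
\end{verbatim}
Synchronization is then detected as the instant after which the ratio of the two instantaneous periods stays constant.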
As observed in Fig.~\ref{Fig.1}(b), the magnetic charge distribution of a TDW is asymmetric, the wide side being a region of high charge concentration\cite{Petit,Zeng}. Precession of the DW magnetization means the precession of the wide side of each DW and therefore of the respective magnetic charge and dipolar moment. The DW magnetization can point in any direction in the $yz$ plane, therefore there is a continuous precession of the DW charges and of their mutual interaction. The formation of the metastable bound state of two DWs is a result of the charge oscillations of the DWs in the potential landscape of the nanowire (with edges). As shown in Fig.~\ref{Fig.2}(d), its motion is determined by the conversion between the exchange and demagnetizing energies, which oscillate in time. The total energy is actually decreasing slowly due to the small damping factor. The dissipation damps the oscillation of the bound state, as can be seen in Fig.~\ref{Fig.2}(a), where after 200ns the pair shrinks and annihilates. At small distances between the TDWs, the exchange energy increases, being of the same order as the dipolar energy, and tries to align the magnetization of the DWs.
\begin{figure}[!t]
\center
\includegraphics[width=6cm]{Fig3.pdf}
\caption{\label{Fig.3} (Color online) Density plot of the average $x$ component of the magnetization, along the nanowire axis, for a cylinder with a diameter of 20nm and length 600nm with no anisotropy (a) and with high transverse ($z$ axis) anisotropy (c). When no anisotropy is present, the DWs form a metastable state and they synchronize after 260ns, as shown by the instantaneous period of the two DWs in (b). When high transverse anisotropy is present, the two DWs do not oscillate as their azimuthal angles stay constant, as observed in (d).}
\end{figure}
The mutual interaction of the DWs can also be explained by topological considerations. The DWs can be considered as composite objects formed of edge defects with half-integer winding numbers\cite{Tchernyshyov,Kunz} (n = $\pm$1/2). Two edge defects of opposite winding number or opposite skyrmion charge\cite{Tretiakov} annihilate (attract) while two defects of same winding number or same skyrmion charge repel each other. The bound state is therefore a result of the precession of the topological charge around the axis of the nanowire.
To clearly demonstrate that metastable states exist in cylindrical nanowires, we compared the same initial predefined state of two DWs situated in two identical nanowires (length 600nm, diameter 20nm): one with high uniaxial perpendicular anisotropy and one without anisotropy (as before). When no anisotropy is present (Fig.~\ref{Fig.3}(a)), the magnetization precesses freely in the $yz$ plane and various states can appear, as seen above. The two DWs form a metastable state with a lifetime of 300ns. As the pair width is smaller than in larger diameter nanowires (smaller DW width), the DWs can travel longer distances before reaching the edge. As the pair synchronizes (after 260ns, see panel (b)) and shrinks, the angular velocity, and therefore the speed of the pair, increases, and the amplitude of the spatial oscillation grows before annihilation.
In the case of high perpendicular (along the $z$ axis) anisotropy (Fig.~\ref{Fig.3}(c)), the magnetization is frustrated and the anisotropy breaks the rotational symmetry. Using a value for the anisotropy of K$_{\perp}$ = 5.2$\times 10^5$J/m$^3$ (value corresponding to Co), we observe that the magnetization is strongly frustrated in the $yz$-plane and it does not precess, similarly to what happens for DWs in flat nanowires. Pairs of TDWs can form, with the magnetization in the walls pointing in the transverse hard-axis direction (Fig.~\ref{Fig.3}(d)), but no oscillatory bound state appears. The pair of DWs annihilates rapidly, after 1.5ns. The anisotropy energy is much higher than the other energies at play and strongly damps the motion, as the precession is obstructed.
The formation of the metastable bound state of DWs depends on the potential landscape of the sample. If the magnetization of the DWs turns asynchronously and is not frustrated, the metastable state can form even in smaller diameter or longer nanowires (not shown). The key conditions for the formation of a bound state in our simulations are the precession speed of each DW's magnetization and the position of the DWs, as the tails of the walls are confined by the finiteness of the sample\cite{Dolocan2} and by each other. As long as the above conditions are fulfilled, even in a more complex potential landscape, oscillating metastable pairs appear in nanowires.
\textit{Forced regime}. The forced regime covers the interaction of DWs with an external force like a spin polarized current or a magnetic field. We only consider here the former case. Nanosecond dc current pulses were applied in both directions along the symmetry axis of the nanowire ($x$ axis). In all cases, the DWs move linearly in the direction of the electron flow. This interaction is detailed in Fig.~\ref{Fig.4}(a), where the position of a pair of DWs is shown: eight 2ns dc current pulses are applied with an amplitude of 5$\times10^{11}$A/m$^2$ ($u=42m/s$) to the same initial configuration as in Fig.~\ref{Fig.2}. The damping parameter $\alpha$ is taken as 0.015, while the nonadiabatic parameter $\beta$ = 0.05. The first pulse is applied in the +$x$ direction, while the second pulse is applied in the reverse direction, to move the pair back. The delay between the pulses is 2ns. After another 2ns, the sequence is repeated. The DW pair is then relaxed for an initial period of 25ns, after which the same procedure of four pulses as before is applied. Afterward, the pair relaxes freely. The DW pair oscillates during the two relaxation periods and after 170ns annihilates, as the DWs precess synchronously (see the instantaneous periods in Fig.~\ref{Fig.4}(c)). The choice of the amplitude of the current in the simulations is made so as to have a reasonable speed and exclude nonlinear effects. The pulse duration was deliberately kept short so as not to nucleate DWs at the ends of the cylinder.
The influence of the current manifests itself through the modification of the speed and angular velocity of the DW pair during the application of the pulse. The DWs move in the same direction, contrary to the case of an external applied field, and maintain a constant distance while moving. The current modifies the potential landscape of the nanowire and slows the precession velocity (Doppler shift). As can be observed in Fig.~\ref{Fig.4}(b), the rotation direction and angular velocity can change due to changes in the torque applied by the current. The uniform precession is disturbed and the main interaction of the DWs is with the current. The pair width remains constant during the application of the current, and after the extinction of the pulse the DWs start to synchronize again. Therefore, the DW pair can be moved long distances with current pulses without disrupting the bound state, and it starts oscillating again when the current is turned off.
The time evolution of the DWs can be explained with the 1D analytical model. From Eq.~\eqref{eq1}, the velocity and the precession speed of the TW in cylindrical nanowires can be calculated for the case $\alpha\neq\beta$ (with H$_{eff}$=0) as: $v = \frac{1+\beta\alpha}{1+\alpha^2}u $ and $\dot{\phi} = \frac{\beta - \alpha}{1+\alpha^2}\frac{u}{\Delta}$. The speed of the DW is almost linear ($\alpha, \beta \ll 1$) in $u$ while the precession depends on the sign of $\alpha - \beta$ (here $<0$) and $u$. In the case of small cylindrical nanowires, no threshold current is needed to move the DW and the influence of the current is determined by the above expressions. The numerically calculated velocity (from Fig.~\ref{Fig.4}) corresponds to the analytical value. Changing the nonadiabatic parameter $\beta$ to values higher or smaller than $\alpha$ modifies the precession speed of both DWs and thus does not affect the metastable bound state.
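As a minimal numerical check of these expressions (with $\Delta=30$nm assumed for this wire):
\begin{verbatim}
alpha, beta, u, Delta = 0.015, 0.05, 42.0, 30e-9

v = (1 + alpha * beta) / (1 + alpha**2) * u           # DW velocity (m/s)
phidot = (beta - alpha) / (1 + alpha**2) * u / Delta  # precession (rad/s)
print(v, phidot)  # v is close to u; phidot > 0 since beta > alpha here
\end{verbatim}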
\begin{figure}[!t]
\center
\includegraphics[width=7cm]{Fig4.pdf}
\caption{\label{Fig.4} (Color online) (a) Variation of position of two DWs in the forced regime for the same cylinder as in Fig.~\ref{Fig.2}. Four successive pulses of spin polarized current with amplitude of 5$\times10^{11}$A/m$^2$ were applied during 2ns at 2ns intervals in both directions along the $x$-axis. After 25ns, four other pulses where again applied to move the pair of DWs. (b) Angular variation ($\psi$ angle) of the two DWs shown in panel (a). (c) Instantaneous period of the two signals from (b) showing synchronization after 150ns.}
\end{figure}
In real nanowires, the DWs are usually pinned by defects, with an external force (or just a thermal force) needed to depin them. The DWs can still precess and the bound state can form if two DWs are close enough and precess out of phase. The movement of one DW can be estimated from Eq.~\eqref{eq1} with a pinning energy term, as for artificial defects\cite{Martinez,Gonzalez}. The influence of temperature should not affect the appearance of bound states as the thermal energy at room temperature is only 25meV. Although the precession in the two DWs should be affected in the same manner, the thermal energy can influence the DW that is closer to the edge if the latter is at the top of the potential curve (Fig.~\ref{Fig.1}(c)) and break the pair. The lifetime of the bound states could be diminished.
In summary, in this paper we reported on the metastable DW states that appear in ideal small diameter cylindrical nanowires. We showed that under special conditions the DW pairs form metastable bound states that can be local equilibrium states with a lifetime longer than 200ns. These states oscillate with small amplitude around their positions. Their presence can perturb the performance of devices based on DW motion. One way to stabilize the DW positions is through strong anisotropy or pinning by the patterning of notches. We also observed that the DW pairs are stabilized by an external force, such as a spin polarized current, and can be moved over long distances. If hard-axis anisotropy is present, it frustrates the precession of the magnetization and the motion of DWs under current.
The author wishes to thank L. Raymond for access to the Lafite servers and is grateful for the support of the NANOMAG platform by FEDER and Ville de Marseille. Part of the computations were performed at the Mesocentre of Aix-Marseille University.
\section{Introduction}
The state complexity of a rational language is the size of its minimal automaton, and the state complexity of a rational operation is the maximal state complexity of the languages obtained by applying this operation to languages of fixed state complexities.
The classical approach is to compute an upper bound and to provide a witness, that is, a specific example reaching the bound, which is then the desired state complexity.
Since the $70s$, the state complexity of
numerous unary and binary operations has been computed. See, for example, \cite{Dom02,GMRY17,JJS05,Jir05,JO08,Yu01a} for a survey of the subject. More recently, the state complexity of combinations of operations has also been studied. In most cases the result is not simply the mathematical composition of the individual complexities and studies lead to interesting situations. Examples can be found in \cite{CGKY11,GSY08,JO11,SSY07}.
In some cases, the classical method has to be enhanced by two independent approaches. The first one consists in describing states by combinatorial objects. The upper bound is then computed using combinatorial tools. For instance, in \cite{CLMP15}, the states are described by tableaux encoding boolean matrices, and an upper bound for the catenation of the symmetrical difference is given. These combinatorial objects will be used to compute an upper bound for the Kleene star of the symmetrical difference.
The second one is an algebraic method consisting in
%
building a witness for a certain class of rational operations by searching in a set of automata with as many transition functions as possible. This method has the advantage of applying to a large class of operations, but has the drawback of giving witnesses with alphabets of non-constant size. Witnesses with small alphabets are indeed favoured in this area of research when they can be found, as evidenced by several studies (\cite{CLP16,CLP17}).
This approach has been described independently by Caron \textit{et al.} in \cite{CHLP18} as the monster approach and by Davies in \cite{Dav18} as the OLPA (One Letter Per Action) approach but was implicitly present in older papers like \cite{BJLRS16,DO09}.
In this paper, we illustrate these approaches to find the state complexity of the star of the symmetrical difference. Furthermore, we improve the witness found by drastically reducing its alphabet to a constant size.
The paper is organized as follows. Section \ref{sect-prel} gives definitions and notations about automata and combinatorics. In Section \ref{sec-mod}, we recall the monster approach: we define \emph{modifiers} and \emph{monsters}, and give some properties of these structures related to state complexity. In Section \ref{sec-sc}, the state complexity of the star of the symmetrical difference is computed. Finally, in Section \ref{sec-borne}, we exhibit witnesses for this operation over an alphabet of size $17$.
\section{Preliminaries}\label{sect-prel}
\subsection{Operations over sets}
The \emph{cardinality} of a finite set $E$ is denoted by $\#E$, the \emph{set of subsets} of $E$ is denoted by $2^E$ and the \emph{set of mappings} of $E$ into itself is denoted by $E^E$. The \emph{symmetric difference} of two sets $E_1$ and $E_2$ is denoted by $\oplus$ and defined by $E_1\oplus E_2=(E_1\cup E_2)\setminus (E_1\cap E_2)$. For any positive integer $n$, let us denote $\{0,\ldots, n-1\}$ by $\llbracket n\rrbracket$. The symbol $\mathds{1}$ denotes the identity mapping, whose underlying set depends on the context.
\subsection{Languages and automata}
Let $\Sigma$ denote a finite alphabet. A \emph{word} $w$ over $\Sigma$ is a finite sequence of symbols of $\Sigma$. The \emph{length} of $w$, denoted by $|w|$, is the number of occurrences of symbols of $\Sigma$ in $w$. For $a\in \Sigma$, we denote by $|w|_a$ the number of occurrences of $a$ in $w$. The set of all finite words over $\Sigma$ is denoted by $\Sigma ^*$. A \emph{language} is a subset of $\Sigma^*$.
A \emph{complete and deterministic finite automaton} (DFA) is a $5$-tuple $A=(\Sigma,Q,i,F,\delta)$ where $\Sigma$ is the input alphabet, $Q$ is a finite set of states, $i\in Q$ is the initial state, $F\subseteq Q$ is the set of final states and $\delta$ is the transition function from $Q\times \Sigma$ to $Q$ extended in a natural way from $Q\times \Sigma^*$ to $Q$. The cardinality of $A$ is the cardinality of its set of states, \emph{i.e.} $\#A=\#Q$. We will often use $\IntEnt{n}$ for some $n\in\mathbb{N}$ as the set of states for DFAs.
Let $A=(\Sigma,Q,i,F,\delta)$ be a DFA. A word $w\in \Sigma ^*$ is \emph{recognized} by the DFA $A$ if $\delta(i,w)\in F$. The \emph{language recognized} by a DFA $A$ is the set $\mathrm L(A)$ of words recognized by $A$. Two DFAs are said to be \emph{equivalent} if they recognize the same language.
For any word $w$, we denote by $\delta^w$ the function $q\rightarrow\delta(q,w)$. Two states $q_1,q_2$ of $A$ are \emph{equivalent} if for any word $w$ of $\Sigma^*$, $\delta(q_1, w)\in F$ if and only if $\delta(q_2, w)\in F$. This equivalence relation is called the \emph{Nerode equivalence} and is denoted by $q_1\sim_{Ner} q_2$. If two states are not equivalent, then they are called \emph{distinguishable}.
A state $q$ is \emph{accessible} in a DFA if there exists a word $w\in \Sigma ^*$ such that $q=\delta(i,w)$. A DFA is \emph{minimal} if there does not exist any equivalent DFA with fewer states, and it is well known that for any DFA, there exists a unique minimal equivalent one (\cite{HU79}). Such a minimal DFA can be obtained from $A$ by computing $\widehat A_{/\sim}=(\Sigma,Q/\sim,[i],F/\sim,\delta_{\sim})$ where $\widehat A$ is the accessible part of $A$, and where, for any $q\in Q$, $[q]$ is the $\sim$-class of the state $q$ and satisfies the property $\delta_{\sim}([q],a)=[\delta(q,a)]$, for any $a\in \Sigma$. The number of its states is denoted by $\#_{Min}(A)$. In a minimal DFA, any two distinct states are pairwise distinguishable.
Let $L$ be a regular language defined over an alphabet $\Sigma$. We denote by $L^*$ the language $\{u_1\cdots u_n\mid u_i\in L,\ n\in\mathbb{N}\}$.
The syntactic semigroup of $L$ is the semigroup generated by the transition functions of all letters of the minimal DFA of $L$.
\subsection{State complexity}
A \emph{unary regular operation} over the alphabet $\Sigma$ is a function from the set of regular languages over $\Sigma$ into itself. A \emph{$k$-ary regular operation} over the alphabet $\Sigma$ is a function from the set of $k$-tuples of regular languages over $\Sigma$ into the set of regular languages over $\Sigma$.\\
The state complexity of a regular language $L$, denoted by $\mathrm{sc}(L)$, is the number of states of its minimal DFA. This notion extends to regular operations: the state complexity of a unary regular operation $\otimes$ is the function $\mathrm{sc}_{\otimes}$ such that, for all $n\in\mathbb{N}\setminus\{0\}$, $\mathrm{sc}_{\otimes}(n)$ is the maximum of the state complexities of $\otimes(L)$ when $L$ is of state complexity $n$, \emph{i.e.} $\mathrm{sc}_{\otimes}(n)=\max\{\mathrm{sc}(\otimes(L)) \mid \mathrm{sc}(L) = n\}$.
This can be generalized, and the state complexity of a $k$-ary operation $\otimes$ is the $k$-ary function $\mathrm{sc}_\otimes$ such that, for all $(n_1,\ldots,n_k)\in (\mathbb N^*)^k$,
\begin{equation}
\mathrm{sc}_\otimes(n_1,\ldots,n_k)=\max\{\mathrm{sc}(\otimes(L_1,\ldots,L_k))\mid\text{ for all }i\in\{1,\ldots,k\}, \mathrm{sc}(L_i)=n_i\}.
\end{equation}
Then, a witness for $\otimes$ is a way to assign to each $(n_1,\ldots,n_k)$, assumed sufficiently big, a $k$-tuple of languages $(L_1,\ldots,L_k)$ with $\mathrm{sc}(L_i)=n_i$, for all $i\in\{1,\ldots,k\}$, satisfying $\mathrm{sc}_\otimes(n_1,\ldots,n_k)=\mathrm{sc}(\otimes(L_1,\ldots,L_k))$.
\subsection{Morphisms}
Let $\Sigma$ and $\Gamma$ be two alphabets. A morphism is a function $\phi$ from $\Sigma^*$ to $\Gamma^*$ such that, for all $w,v\in\Sigma^*$, $\phi(wv)=\phi(w)\phi(v)$. Notice that $\phi$ is completely defined by its value on letters.
Let $L$ be a regular language over alphabet $\Sigma$ recognized by the DFA $A=(\Sigma,Q,i,F,\delta)$ and let $\phi$ be a morphism from $\Gamma^*$ to $\Sigma^*$. Then, $\phi^{-1}(L)$ is the regular language recognized by the DFA $B=(\Gamma,Q,i,F,\delta')$ where, for all $a\in\Gamma$ and $q\in Q$, $\delta'(q,a)=\delta(q,\phi(a))$. Therefore, note that we have
\begin{property}\label{prop-scmorph}
Let $L$ be a regular language and $\phi$ be a morphism. We have $\mathrm{sc}(\phi^{-1}(L))\leq\mathrm{sc}(L)$.
\end{property}
We say that a morphism $\phi$ is \emph{$1$-uniform} if the image by $\phi$ of any letter is a letter. In other words, a $1$-uniform morphism is a (not necessarily injective) renaming of the letters, and the only complexity of the mapping stems from possibly sending two distinct letters $a$ and $b$ to the same image, i.e., $\phi(a) = \phi(b)$.
\section{Monsters and state complexity}\label{sec-mod}
In \cite{Brz13}, Brzozowski gives a series of properties that would make a language $L_n$ of state complexity $n$ sufficiently complex to be a good candidate for constructing witnesses for numerous classical rational operations. One of these properties is that the size of the syntactic semigroup is $n^n$, which means that every transformation of the state set of the minimal DFA of $L_n$ is induced by some non-empty word. This upper bound is reached when the set of transition functions of the DFA is exactly the set of transformations from states to states. We thus consider the set of transformations of $\IntEnt{n}$ as an alphabet where each letter is simply named after the transition function it defines. This leads to the following definition:
\begin{definition}
A $1$-monster is an automaton
$\mathrm{Mon}_{n}^{F}=(\Sigma,\IntEnt{n},0, F,\delta)$ defined by
\begin{itemize}
\item the alphabet $\Sigma=\IntEnt{n}^{\IntEnt{n}}$,
\item the set of states $\IntEnt{n}$,
\item the initial state $0$,
\item the set of final states $F$,
\item the transition function $\delta$ defined for any $a\in \Sigma$ by $\delta(q,a)=a(q)$.
\end{itemize}
The language recognized by a $1$-monster DFA is called a \emph{$1$-monster language}.
\end{definition}
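Since the letters of a $1$-monster are the transformations themselves, its transition table is immediate to build programmatically. A small Python sketch, encoding each transformation $t$ of $\llbracket n\rrbracket$ as a tuple with $t[q]=t(q)$:
\begin{verbatim}
from itertools import product

def monster1(n, finals):
    # one letter per mapping of [n] into itself: n^n letters in total
    letters = list(product(range(n), repeat=n))
    delta = {(q, t): t[q] for t in letters for q in range(n)}
    return letters, 0, set(finals), delta

letters, init, finals, delta = monster1(2, {1})
print(len(letters))        # 4 letters: [00], [01], [10], [11]
print(delta[(0, (1, 1))])  # the letter [11] sends state 0 to 1
\end{verbatim}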
\begin{example}\label{ex-mon}
The $1$-monster $\mathrm{Mon}_2^{\{1\}}$ is
\begin{figure}[H]
\centering
\begin{tikzpicture}[node distance=2cm]
\node[state,initial](p0){$0$};
\node[state,accepting](p1) at (4,0) {$1$};
\path[->]
(p0)edge[loop ] node [swap]{$[01],[00]$} (p0)
(p0)edge[bend left] node {$[11],[10]$} (p1)
(p1)edge[loop ] node [swap]{$[01],[11]$} (p1)
(p1)edge[bend left] node{$[00],[10]$}(p0);
\end{tikzpicture}
\end{figure}
where, for all $i,j\in\{0,1\}$, the label $[ij]$ denotes the transformation sending $0$ to $i$ and $1$ to $j$, which is also a letter in the DFA above.
\end{example}
Let us notice that some families of $1$-monster languages are witnesses for the Star and Reverse operations (\cite{CHLP18}). The following claim is easy to prove and captures a universality-like property of $1$-monster languages:
\begin{property}\label{prop-res}
Let $L$ be any regular language recognized by a DFA $A=(\Sigma,\IntEnt{n},0, F,\delta)$. The language $L$ is the preimage of $\mathrm L(\mathrm{Mon}_n^F)$ by the $1$-uniform morphism $\phi$ such that, for all $a\in\Sigma$, $\phi(a)=\delta^a$, i.e.
\begin{equation}
L=\phi^{-1}(\mathrm{L}(\mathrm{Mon}_n^F)).
\end{equation}
\end{property}
This is an important and handy property that we should keep in mind. We call it the \emph{restriction-renaming} property.
We can wonder whether we can extend the notions above to provide witnesses for $k$-ary operators. In the unary case, the alphabet of a monster is the set of all possible transformations we can apply to the states. In the same mindset, a $k$-monster is a $k$-tuple of DFAs, and its construction must involve the set of $k$-tuples of transformations as an alphabet. Indeed, the alphabet of a $k$-ary monster has to encode all the transformations acting on each set of states independently from the others. This leads to the following definition:
\begin{definition}\label{def-mon}
A $k$-monster is a $k$-tuple of automata $\mathrm{Mon}_{n_1,\ldots, n_k}^{F_1,\ldots, F_k}=(\mathds{M}_1,\ldots, \mathds{M}_k)$ where\\
$\mathds{M}_j=(\Sigma,\IntEnt{n_j},0, F_j,\delta_j)$ for $j\in \{1,\ldots,k\}$ is defined by
\begin{itemize}
\item the common alphabet $\Sigma=\IntEnt{n_1}^{\IntEnt{n_1}}\times \ldots \times \IntEnt{n_k}^{\IntEnt{n_k}}$,
\item the set of states $\IntEnt{n_j}$,
\item the initial state $0$,
\item the set of final states $F_j$,
\item the transition function $\delta_j$ defined for any $(a_1,\ldots, a_k)\in \Sigma$ by $\delta_j(q,(a_1,\ldots, a_k))={a_j}(q)$.
\end{itemize}
A $k$-tuple of languages $(L_1,\ldots,L_k)$ is called a \emph{monster $k$-language} if there exists a $k$-monster \\
$(\mathds{M}_1,\ldots,\mathds{M}_k)$ such that $(L_1,\ldots,L_k)=(\mathrm L(\mathds{M}_1),\ldots,\mathrm L(\mathds{M}_k))$.
\end{definition}
\begin{remark}\label{remark-min}
When $F_j$ is different from $\emptyset$ and $Q_j$, $\mathds{M}_j$ is minimal.
\end{remark}
Definition \ref{def-mon} allows us to extend the restriction-renaming property in a way that is still easy to check.
\begin{property}
Let $(L_1,\ldots,L_k)$ be a $k$-tuple of regular languages over the same alphabet $\Sigma$. We assume that each $L_j$ is recognized by the DFA $A_j=(\Sigma,\IntEnt{n_j},0,F_j,\delta_j)$. Let $\mathrm{Mon}_{n_1,\ldots, n_k}^{F_1,\ldots, F_k}=(\mathds{M}_1,\ldots, \mathds{M}_k)$. For all $j\in\{1,\ldots,k\}$, the language $L_j$ is the preimage of $\mathrm{L}(\mathds{M}_j)$ by the $1$-uniform morphism $\phi$ such that, for all $a\in\Sigma$, $\phi(a)=(\delta_1^a,\ldots,\delta_k^a)$, i.e.
\begin{equation}
(L_1,\ldots,L_k)=(\phi^{-1}(\mathrm{L}(\mathds{M}_1)),\ldots,\phi^{-1}(\mathrm{L}(\mathds{M}_k))).
\end{equation}
\end{property}
It has been shown that some families of $2$-monsters are witnesses for binary boolean operations and for the catenation operation \cite{CHLP18}. Many papers concerning state complexity actually use monsters as witnesses without naming them (e.g. \cite{BJLRS16}).
Therefore, a natural question arises: can we define a simple class of rational operations for which monsters are always witnesses? This class should ideally encompass some classical regular operations, in particular the operations studied in the papers cited above. In the next section, we define objects that allow us to answer this question.
\section{Modifiers}
We first describe a class of regular operations for which monsters are always witnesses in the unary case. Once again, the restriction-renaming property comes in handy and gives us the intuition we need. We call \emph{$1$-uniform} any unary regular operation $\otimes$ that commutes with any $1$-uniform morphism, \emph{i.e.} for every regular language $L$ and every $1$-uniform morphism $\phi$, $\otimes(\phi^{-1}(L))=\phi^{-1}(\otimes(L))$. For example, it is proven in \cite{Dav18} that the Kleene star and the reverse are $1$-uniform. Suppose now that $\otimes$ is a unary $1$-uniform operation. Then, if $L$ is a regular language, $A=(\Sigma,\IntEnt{n},0,F,\delta)$ its minimal DFA, and $\phi$ the $1$-uniform morphism sending any letter of $\Sigma$ into its associated transition function in $A$, we have
\begin{equation}
\otimes(L)=\otimes(\phi^{-1}(\mathrm{L}(\mathrm{Mon}_{n}^F)))=\phi^{-1}(\otimes(\mathrm{L}(\mathrm{Mon}_{n}^F))).
\end{equation}
It follows that $\mathrm{sc}(\otimes(L))= \mathrm{sc}(\phi^{-1}(\otimes(\mathrm{L}(\mathrm{Mon}_{n}^F))))\leq \mathrm{sc}(\otimes(\mathrm{L}(\mathrm{Mon}_{n}^F)))$ by Property \ref{prop-scmorph}. In addition, Remark \ref{remark-min} implies that $\mathrm L(\mathrm{Mon}_{n}^F)$ has the same state complexity as $L$. Therefore, we have
\begin{theorem}\label{th-mon}
Any $1$-uniform operation admits a family of monster $1$-languages as a witness.
\end{theorem}
We now introduce the second central concept of our paper. In many cases, to compute state complexities, it is easier to describe regular operations as constructions on DFAs. We would therefore like to find a class of operations on DFAs, that are naturally associated to $1$-uniform operations. Such an operation on DFAs needs to have some constraints that are described in the following definitions.
\begin{definition}
The \emph{state configuration} of a DFA $A=(\Sigma,Q,i,F,\delta)$ is the triplet $(Q,i,F)$.
\end{definition}
\begin{definition}
A \emph{$1$-modifier} is a unary operation $\mathfrak m$ on DFAs that produces a DFA such that:
\begin{itemize}
\item For any DFA $A$, the alphabet of $\mathfrak m(A)$ is the same as the alphabet of $A$.
\item For any DFA $A$, the state configuration of $\mathfrak m(A)$ depends only on the state configuration of the DFA $A$.
\item For any DFA $A$ over the alphabet $\Sigma$, for any letter $a\in\Sigma$, the transition function of $a$ in $\mathfrak m(A)$ depends only on the state configuration of the DFA $A$ and on the transition function of $a$ in $A$.
\end{itemize}
\end{definition}
\begin{example}{The star modifier.}\label{ex-star}
For all DFA $A=(\Sigma,Q,i,F,\delta)$, define $\mathfrak{Star}(A)=(\Sigma,2^{Q},\emptyset,\{E| E\cap F \neq \emptyset\}\cup\{\emptyset\},\delta_1)$, where $\delta_1$ is as follows : for all $a\in\Sigma$,
\[\delta_1^a(\emptyset)=\left\{\begin{array}{ll}\{\delta^a(i)\}\text{ if }\delta^a(i)\notin F \\
\{\delta^a(i),i\}\mbox{ otherwise }
\end{array}\right.
\mbox{ and, for all } E\neq\emptyset,\;
\delta_1^a(E)=\left\{\begin{array}{ll}\delta^a(E)\text{ if }\delta^a(E)\cap F=\emptyset \\
\delta^a(E)\cup\{i\}\mbox{ otherwise }
\end{array}\right.\]
\end{example}
The modifier $\mathfrak{Star}$ describes the classical construction on DFA associated to the Star operation on languages, \emph{i.e.} for all DFA $A$, $\mathrm L(A)^*=\mathrm L(\mathfrak{Star}(A))$.
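The construction of Example \ref{ex-star} translates into a short program. The Python sketch below builds only the accessible part of $\mathfrak{Star}(A)$ from the transition table of $A$, which is a deliberate simplification since only accessible states matter for state complexity.
\begin{verbatim}
def star_modifier(init, finals, delta, letters):
    # states are frozensets of states of A; the empty set is initial
    def step(E, a):
        img = {delta[(q, a)] for q in E} if E else {delta[(init, a)]}
        return frozenset(img | {init}) if img & finals else frozenset(img)

    states, todo, d1 = {frozenset()}, [frozenset()], {}
    while todo:                      # explore the accessible part
        E = todo.pop()
        for a in letters:
            E2 = d1[(E, a)] = step(E, a)
            if E2 not in states:
                states.add(E2); todo.append(E2)
    accept = {E for E in states if (E & finals) or not E}
    return states, frozenset(), accept, d1

# the DFA C below: letter a fixes both states, b sends everything to 1
dC = {(0, 'a'): 0, (1, 'a'): 1, (0, 'b'): 1, (1, 'b'): 1}
print(len(star_modifier(0, {1}, dC, 'ab')[0]))  # 3 accessible states
\end{verbatim}
On the DFA $C$ of Figure \ref{C}, only three of the four states drawn in Figure \ref{St(C)} are accessible, which the sketch confirms.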
\begin{example}
If we apply the modifier $\mathfrak{Star}$ to the monster $\mathrm{Mon}_2^{\{1\}}$ described in Example \ref{ex-mon}, we obtain the DFA drawn in Figure \ref{Stmon}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[node distance=2cm]
\node[state,initial](p1) at (0,4) {$\emptyset$};
\node[state,accepting](p2) at (6,0) {$\{1\}$};
\node[state](p0) at (0,0) {$\{0\}$};
\node[state,accepting](p3) at (6,4) {$\{0,1\}$};
\path[->]
(p1)edge node [swap]{$[01],[00]$} (p0)
(p1)edge node {$[11],[10]$} (p3)
(p0)edge[loop left] node [swap]{$[01],[00]$}
(p0)edge[bend left=15] node {$[11],[10]$} (p3)
(p2)edge node {$[10],[00]$} (p0)
(p2)edge node [swap]{$[11],[01]$} (p3)
(p3)edge[loop right] node [swap]{$[10],[01],[11]$}
(p3)edge[bend left=15] node {$[00]$} (p0);
\end{tikzpicture}
\end{figure}
\captionof{figure}{$\mathfrak{Star}(\mathrm{Mon}_2^{\{1\}})$}\label{Stmon}
From this, one deduces the action of the modifier $\mathfrak{Star}$ on any DFA with two states. For instance, applying $\mathfrak{Star}$ to DFA $C$ (Figure \ref{C}) gives the DFA described in Figure \ref{St(C)}.
\newline
\begin{minipage}{.5\textwidth}
\centering
\resizebox{.8\textwidth}{!}{
\begin{tikzpicture}[node distance=2cm]
\node[state,initial](p0){$0$};
\node[state,accepting](p1) at (4,0) {$1$};
\path[->]
(p0)edge[loop ] node [swap]{$a$} (p0)
(p0)edge[bend left] node {$b$} (p1)
(p1)edge[loop ] node [swap]{$a,b$} (p1);
\end{tikzpicture}
}
\captionof{figure}{The DFA $C$}\label{C}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[node distance=2cm]
\node[state,initial](p1) at (0,4) {$\emptyset$};
\node[state,accepting](p2) at (6,0) {$\{1\}$};
\node[state](p0) at (0,0) {$\{0\}$};
\node[state,accepting](p3) at (6,4) {$\{0,1\}$};
\path[->]
(p1)edge node [swap]{$a$} (p0)
(p1)edge node {$b$} (p3)
(p0)edge[loop left] node [swap]{$a$}
(p0)edge[bend left=15] node {$b$} (p3)
(p2)edge node [swap]{$a,b$} (p3)
(p3)edge[loop right] node [swap]{$a,b$} (p3);
\end{tikzpicture}
}
\captionof{figure}{$\mathfrak{Star}(C)$}\label{St(C)}
\end{minipage}
Remark that to apply $\mathfrak{Star}$ to $C$, we just take the subautomaton of $\mathfrak{Star}(\mathrm{Mon}_2^{\{1\}})$ with letters being exactly the transition functions of letters in $C$, and rename its letters by the letters of $C$ of which they are the transition functions. The transition labeled by $b$ in Figure \ref{C} is first assimilated to the transition $[11]$ in $\mathrm{Mon}_2^{\{1\}}$ (see Example \ref{ex-mon}). Hence, the transition labeled by $b$ in $\mathfrak{Star}(C)$ is the same as the transition labeled by $[11]$ in $\mathfrak{Star}(\mathrm{Mon}_2^{\{1\}})$ (Figure \ref{Stmon}).
\end{example}
\begin{theorem}\label{the-eq}
A regular unary operation $\otimes$ is $1$-uniform if and only if there exists a $1$-modifier $\mathfrak m$ such that for any regular language $L$ and any DFA $A$ recognizing $L$, $\otimes(L)=\mathrm L(\mathfrak m(A))$.
\end{theorem}
\begin{proof}
Let $\otimes$ be a $1$-uniform unary operation. We define a $1$-modifier $\mathfrak m$ as follows.
For any DFA $A=(\Sigma,Q_A,i_A,F_A,\delta_A)$, we can rename its set of states so that $A$ becomes the DFA $D=(\Sigma,\IntEnt{n},0,F,\delta)$. Let us denote by $B=(\IntEnt{n}^{\IntEnt{n}},Q',i',F',\delta')$ the minimal DFA of $\otimes(\mathrm L(\mathrm{Mon}_n^{F}))$.
We set $\mathfrak m(A)=(\Sigma,Q',i',F',\tilde \delta')$, with $\tilde\delta'(q,a)=\delta'(q,\delta^a)$.
Notice that $\mathfrak m$ is indeed a $1$-modifier. First, $(Q',i',F')$ depends only on $(Q_A,i_A,F_A)$. Second, $\tilde \delta'^a$ depends only on $\delta^a$ and on $\delta'$, which in turn depend only on $(Q_A,i_A,F_A)$ and $\delta_A^a$.
Furthermore, by construction, $\mathrm L(\mathfrak m(A))=\phi^{-1}(\mathrm L(B))$, where $\phi$ is the $1$-uniform morphism such that $\phi(a)=\delta_D^a$ for all $a\in\Sigma$. Therefore, we have $\mathrm L(\mathfrak m (A))=\phi^{-1}(\otimes(\mathrm L(\mathrm{Mon}_n^F)))$. And, since $\otimes$ is $1$-uniform, we obtain $\mathrm L(\mathfrak m (A))=\otimes(\phi^{-1}(\mathrm L(\mathrm{Mon}_n^F)))=\otimes(L)$.
Conversely, let $\otimes$ be a regular operation and let $\mathfrak m$ be a $1$-modifier such that for any regular language $L$ and any DFA $A$ recognizing $L$, $\otimes(L)=\mathrm L(\mathfrak m(A))$. We must prove that $\otimes$ is $1$-uniform. Let $\Gamma$ and $\Sigma$ be two alphabets. Consider a $1$-uniform morphism $\phi$ from $\Gamma^*$ to $\Sigma^*$ and a language $L$ over $\Sigma$. Let $A=(\Sigma,Q,i,F,\delta)$ be any DFA recognizing $L$ and let $B=(\Gamma,Q,i,F,\tilde \delta)$ be the DFA such that $\tilde \delta^a=\delta^{\phi(a)}$ for any letter $a\in\Gamma$. We have $\mathrm L(B)=\phi^{-1}(\mathrm L(A))$.
Let $\mathfrak m(A)=(\Sigma,Q_1,i_1,F_1,\delta_1)$ and $\mathfrak m(B)=(\Gamma,Q_2,i_2,F_2,\delta_2)$. Since the state configuration of $A$ is the same as the state configuration of $B$, we have $(Q_1,i_1,F_1)=(Q_2,i_2,F_2)$. Furthermore, because the transition function of any letter $a\in\Gamma$ in $B$ is also the same as the transition function of $\phi(a)$ in $A$, we have $\delta_2^a=\delta_1^{\phi(a)}$. Hence, $\mathrm L(\mathfrak m(B))=\phi^{-1}(\mathrm L(\mathfrak m(A)))$, which implies that $\otimes(\mathrm L(B))=\phi^{-1}(\otimes(\mathrm L(A)))$. Therefore, $\otimes(\phi^{-1}(\mathrm L(A)))=\phi^{-1}(\otimes(\mathrm L(A)))$, as expected.
\end{proof}
We extend the previous theorems by generalizing the definitions to $k$-ary operations.
\begin{definition}\label{def-uni}
A $k$-ary regular operation $\otimes$ is called $1$-uniform if, for any $k$-tuple of rational languages $(L_1,\ldots,L_k)$, for any $1$-uniform morphism $\phi$, $\otimes(\phi^{-1}(L_1),\ldots,\phi^{-1}(L_k))=\phi^{-1}(\otimes(L_1,\ldots,L_k))$.
\end{definition}
Using the same arguments as in Theorem \ref{th-mon}, we find
\begin{theorem}\label{th-mon2}
Any $k$-ary $1$-uniform operation admits a family of monster $k$-languages as a witness.
\end{theorem}
\begin{proof}
Suppose now that $\otimes$ is a $k$-ary $1$-uniform operation. Then, if $(L_1,\ldots,L_k)$ is a $k$-tuple of regular languages over $\Sigma$, $(A_1,\ldots,A_k)$ the $k$-tuple of DFAs such that each $A_j=(\Sigma,Q_j,i_j,F_j,\delta_j)$ is the minimal DFA of $L_j$, and $\phi$ the $1$-uniform morphism such that, for all $a\in\Sigma$, $\phi(a)=(\delta_1^a,\ldots,\delta_k^a)$, and if $\mathrm{Mon}_{n_1,\ldots,n_k}^{F_1,\ldots,F_k}=(\mathds{M}_1,\ldots,\mathds{M}_k)$, then $\otimes(L_1,\ldots,L_k)=\otimes(\phi^{-1}(\mathrm{L}(\mathds{M}_1)),\ldots,\phi^{-1}(\mathrm{L}(\mathds{M}_k)))=\phi^{-1}(\otimes(\mathrm{L}(\mathds{M}_1),\ldots,\mathrm{L}(\mathds{M}_k)))$.
It follows that $\mathrm{sc}(\otimes(L_1,\ldots,L_k))= \mathrm{sc}(\phi^{-1}(\otimes(\mathrm{L}(\mathds{M}_1),\ldots,\mathrm{L}(\mathds{M}_k))))\leq \mathrm{sc}(\otimes(\mathrm{L}(\mathds{M}_1),\ldots,\mathrm{L}(\mathds{M}_k)))$ by Property \ref{prop-scmorph}. In addition, each $\mathrm{L}(\mathds{M}_j)$ has the same state complexity as $L_j$.
\end{proof}
\begin{definition}\label{def-mod}
A $k$-modifier is a $k$-ary operation on DFAs over the same alphabet that returns a DFA and such that:
\begin{itemize}
\item The alphabet of $\mathfrak m (A_1,...,A_k)$ is the same as the alphabet of each $A_j$.
\item For any $k$-tuple of DFAs $(A_1,\ldots,A_k)$, the state configuration of $\mathfrak m (A_1,...,A_k)$ depends only on the state configurations of the DFAs $A_1,\ldots,A_k$.
\item For any $k$-tuple of DFAs $(A_1,\ldots,A_k)$ where each DFA is over the alphabet $\Sigma$, for any letter $a\in\Sigma$, the transition function of $a$ in $\mathfrak m (A_1,\ldots,A_k)$ depends only on the state configurations of the DFAs $A_1,\ldots, A_k$ and on the transition functions of $a$ in each of the DFAs $A_1,...,A_k$.
\end{itemize}
\end{definition}
\begin{example}\label{ex-xor}
For all DFAs $A=(\Sigma,Q_1,i_1,F_1,\delta_1)$ and $B=(\Sigma,Q_2,i_2,F_2,\delta_2)$, define
\[\mathfrak{Xor}(A,B)=(\Sigma,Q_1\times Q_2,(i_1,i_2),(F_1\times(Q_2\setminus F_2)\cup (Q_1\setminus F_1)\times F_2),(\delta_1,\delta_2))\]
\end{example}
The modifier $\mathfrak{Xor}$ describes the classical construction associated to the operation Xor on couples of languages, \emph{i.e} for all DFAs $A$ and $B$, $\mathrm L(A)\oplus \mathrm L(B)=\mathrm L(\mathfrak{Xor}(A,B))$.
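In the same style, a Python sketch of the modifier $\mathfrak{Xor}$, a plain product construction in which each DFA is passed as a tuple (states, initial state, final states, transition table):
\begin{verbatim}
def xor_modifier(dfa1, dfa2, letters):
    (Q1, i1, F1, d1), (Q2, i2, F2, d2) = dfa1, dfa2
    states = {(p, q) for p in Q1 for q in Q2}
    # a pair is accepting iff exactly one component is accepting
    finals = {(p, q) for (p, q) in states if (p in F1) != (q in F2)}
    delta = {((p, q), a): (d1[(p, a)], d2[(q, a)])
             for (p, q) in states for a in letters}
    return states, (i1, i2), finals, delta
\end{verbatim}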
\begin{theorem}\label{th-mod}
A regular $k$-ary operation $\otimes$ is $1$-uniform if and only if there exists a $k$-modifier $\mathfrak m$ such that for any $k$-tuple of regular languages $(L_1,\ldots,L_k)$ and any $k$-tuple of DFAs $(A_1,\ldots,A_k)$ such that each $A_j$ recognizes $L_j$, we have $\otimes(L_1,\ldots,L_k)=\mathrm L(\mathfrak m(A_1,\ldots,A_k))$.
\end{theorem}
The proof of Theorem \ref{the-eq} can be easily adapted to $k$-ary operations.
The following proposition states the effects of composition on modifiers and $1$-uniform operations and directly stems from Definitions \ref{def-uni} and \ref{def-mod}.
\begin{proposition}\label{prop-comp}
Let $\otimes_1$ be a $k_1$-ary $1$-uniform operation and $\otimes_2$ be a $k_2$-ary $1$-uniform operation. The $(k_1+k_2-1)$-ary operation defined by $\otimes(L_1,\ldots,L_{k_1+k_2-1})= \otimes_1(L_1,\ldots,L_{l},\otimes_2(L_{l+1},\ldots,L_{l+k_2}),L_{l+k_2+1},\ldots,L_{k_1+k_2-1})$ is $1$-uniform. Furthermore, if $\mathfrak m_1$ is a $k_1$-modifier associated with $\otimes_1$ and $\mathfrak m_2$ is a $k_2$-modifier associated with $\otimes_2$, the operation on $(k_1+k_2-1)$-tuples of DFAs defined by
\begin{equation}
\mathfrak m(A_1,\ldots,A_{k_1+k_2-1})=\mathfrak m_1(A_1,\ldots,A_{l},\mathfrak m_2(A_{l+1},\ldots,A_{l+k_2}),A_{l+k_2+1},\ldots,A_{k_1+k_2-1})
\end{equation}
is a modifier associated to $\otimes$.
\end{proposition}
\section{State complexity of the star of symmetrical difference}\label{sec-sc}
In this section, we compute the state complexity of the $2$-ary regular operation $L_1\textcircled{$\star$} L_2=(L_1\oplus L_2)^*$. Examples \ref{ex-star} and \ref{ex-xor} together with Proposition \ref{prop-comp} show that $\textcircled{$\star$}$ is $1$-uniform and that an associated modifier can be defined by $\mathfrak{StX}(A_1,A_2)=\mathfrak{Star}(\mathfrak{Xor}(A_1,A_2))$. To be more precise, if $A_1=(\Sigma, Q_1,i_1,F_1,\delta_1)$ and $A_2=(\Sigma,Q_2,i_2,F_2,\delta_2)$, then
\[\mathfrak{StX}(A_1,A_2)=(\Sigma,2^{Q_1\times Q_2},\emptyset,\{E\in 2^{Q_1 \times Q_2}\mid E\cap F\neq\emptyset \}\cup\{\emptyset\},\delta)\]
where $F=(F_1\times Q_2)\oplus(Q_1\times F_2)$ and, for all $a\in \Sigma$,
\[\delta^a(\emptyset)=\left\{\begin{array}{ll}\{(\delta_1^a(i_1),\delta_2^a(i_2))\}&\text{if }(\delta_1^a(i_1),\delta_2^a(i_2))\notin F,\\
\{(\delta_1^a(i_1),\delta_2^a(i_2)),(i_1,i_2)\}&\text{otherwise,}
\end{array}\right.\]
and, for all $E\neq\emptyset$,
\[\delta^a(E)=\left\{\begin{array}{ll}(\delta_1^a,\delta_2^a)(E)&\text{if }(\delta_1^a,\delta_2^a)(E)\cap F=\emptyset,\\
(\delta_1^a,\delta_2^a)(E)\cup\{(i_1,i_2)\}&\text{otherwise.}
\end{array}\right.\]
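A minimal Python sketch of this transition function, operating on a state $E$ represented as a set of pairs (the argument names are ours), may help fix ideas:
\begin{verbatim}
def stx_step(E, a, d1, d2, i1, i2, F):
    # One transition of StX(A1, A2) on the subset-state E.
    if not E:                       # the initial (empty) state
        image = {(d1[i1, a], d2[i2, a])}
    else:
        image = {(d1[p, a], d2[q, a]) for (p, q) in E}
    if image & F:                   # image meets the final zone:
        image |= {(i1, i2)}         # re-inject the initial pair
    return frozenset(image)
\end{verbatim}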
Theorem \ref{th-mon2} states that $\textcircled{$\star$}$ admits a family of $2$-monsters as witness. For any positive integers $n_1,n_2$, let $(\mathds M_1,\mathds M_2)=\mathrm{Mon}_{n_1,n_2}^{\{n_1-1\},\{0\}}$. We are going to show that, for all $(n_1,n_2)\in (\mathbb N^*)^2$, $(\mathrm L(\mathds M_1),\mathrm L(\mathds M_2))$ is indeed a witness for $\textcircled{$\star$}$. This allows us to compute its state complexity.
To be more precise, here is the outline of our proof. For any positive integers $n_1,n_2$ and any $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, let us denote by $\mathrm{M}_{F_1,F_2}$ the DFA $\mathfrak{StX}(\mathrm{Mon}_{n_1,n_2}^{F_1,F_2})$. We are going to minimize the DFA $\mathrm{M}_{\{n_{1}-1\},\{0\}}$ by first computing its accessible states and then, restricting it to its accessible states, by computing its Nerode equivalence. We will therefore have computed the minimal DFA equivalent to $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, and computing its size gives the state complexity of $\mathrm L(\mathrm{M}_{\{n_{1}-1\},\{0\}})$. We then show that the state complexity of $\mathrm L(\mathrm{M}_{\{n_{1}-1\},\{0\}})$ is the greatest of all the state complexities of $\mathrm L(\mathrm{M}_{F_1,F_2})$, with $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$. Theorem \ref{th-mon2} then allows us to conclude that the state complexity of $\mathrm L(\mathrm{M}_{\{n_{1}-1\},\{0\}})$ is indeed $sc_{\textcircled{$\star$}}(n_1,n_2)$.
\subsection{Computing the accessible states of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$}
In order to make the next proofs easier to follow, we associate elements of $2^{\IntEnt{n_1}\times\IntEnt{n_2}}$ with boolean matrices of size $n_1\times n_2$. Such a matrix is called a tableau when crosses are put in place of $1$s and $0$s are erased. We denote by the same letter the element of $2^{\IntEnt{n_1}\times\IntEnt{n_2}}$, the associated boolean matrix, and the associated tableau. If $T$ is an element of $2^{\IntEnt{n_1}\times\IntEnt{n_2}}$, we denote by $T_{x,y}$ the value of the boolean matrix $T$ at row $x$ and column $y$. Therefore, the three following assertions mean the same thing: a cross is at the coordinates $(x,y)$ in $T$, $T_{x,y}=1$, and $(x,y)\in T$.
We say that a cross at coordinates $(x,y)$ in an element of $2^{\IntEnt{n_1}\times\IntEnt{n_2}}$ is in the final zone of $\mathrm{M_{F_1,F_2}}$ if $(x,y) \in (F_1\times\llbracket n_2 \rrbracket) \oplus (\llbracket n_1 \rrbracket \times F_2)$. We remark that an element of $2^{\IntEnt{n_1}\times\IntEnt{n_2}}$ is final in $\mathrm{M_{F_1,F_2}}$ if and only if it has a cross in the final zone of $\mathrm{M_{F_1,F_2}}$.
We fix for the remainder of this section two positive integers $n_1$ and $n_2$.
\begin{lemma}\label{lemma-acc}
The states of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$ that are accessible are exactly the tableaux $T$ of size $n_1\times n_2$ such that, if $T$ has a cross in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, then $T$ has a cross at $(0,0)$.
\end{lemma}
\begin{proof}
It is easy to see from the definition of the transition function of $\mathfrak{StX}$ that every tableau $T$ with a cross in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$ and no cross at $(0,0)$ is not accessible.
Let $\delta$ be the transition function of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$. If $T$ is a tableau of size $n_1\times n_2$, let $\#_\mathrm{nf}T$ be the number of crosses of $T$ which are not in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$.
Let $\#T$ denote the total number of crosses of $T$. Let us define a partial order $<$ on cross matrices by $T < T'$ if and only if $\#T < \#T'$, or $\#T = \#T'$ and $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
Let us prove, by induction on non-empty cross matrices with respect to the partial order $<$, that every tableau $T$ of size $n_1\times n_2$ such that, if $T$ has a cross in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, then $T$ has a cross at $(0,0)$, is accessible (the empty cross matrix is the initial state of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, and so it is accessible).
The only minimal non-empty cross matrix for the order $<$ is the cross matrix with a single cross at $(0,0)$. It is accessible from the initial state $\emptyset$ by reading the letter $(\mathds{1},\mathds{1})$. Let us notice that each letter is a couple of functions, that is, an element of $\IntEnt{n_1}^{\IntEnt{n_1}}\times \IntEnt{n_2}^{\IntEnt{n_2}}$.
Now let us take a cross matrix $T'$ and find a cross matrix $T$ such that $T<T'$ and $T'$ is accessible from $T$. We distinguish the following cases:
\begin{itemize}
\item $T'$ has no cross in the final zone, except maybe at $(0,0)$.
\begin{itemize}
\item \emph{Case $T'_{n_1-1,0}=0$.} Let $(i,j)$ be the index of a cross of $T'$.
Let $(f,g)=((0,i),(0,j))$ where $(0,i)$ and $(0,j)$ denote transpositions, and let $T=(f,g)(T')$ where $(f,g)(T')=\{(f(i),g(j))\mid (i,j)\in T'\}$. As $(f,g)$ is a one-to-one transformation on $\llbracket n_{1}\rrbracket \times \llbracket n_{2}\rrbracket$, as $(f,g)(T')$ has a cross at $(0,0)$ and as $T'$ does not have any cross in the final zone, we have $\delta^{(f,g)}(T)= (f,g)(T)=(f,g)((f,g)(T'))=T'$.
We also have $T<T'$ since $\#T=\#T'$ and $\#_\mathrm{nf}T<\#_\mathrm{nf}T'$.
\item \emph{Case $T'_{n_1-1,0}=1$.} Let $(f,g)=((0,n_1-1),\mathds{1})$ and let $T=(f,g)(T')$. We have $\delta^{(f,g)}(T)=T'$, and $T<T'$ as $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
\end{itemize}
\item $T'$ has a cross in the final zone other than $(0,0)$.
Let $(i,j)$ be such a cross, and let $(f,g)=((0,i),(0,j))$. Let $T''$ be the cross matrix obtained from $T'$ by deleting the cross at $(0,0)$. Let $T=(f,g)(T'')$.
As $(f,g)$ is still one-to-one on $\llbracket n_{1}\rrbracket \times \llbracket n_{2}\rrbracket$, we have $T_{0,0}=((f,g)(T''))_{0,0}=T''_{i,j}=1$ and $(f,g)(T)=(f,g)((f,g)(T''))=T''$. As $T''$ has a cross in the final zone, we therefore have $\delta^{(f,g)}(T)=T''\cup\{(0,0)\}=T'$, and $T<T'$ as $\#T<\#T'$.
\item The only cross of $T'$ which is in the final zone is $(0,0)$.
\begin{itemize}
\item \emph{Case $A$: there exists $j$ such that $T'_{0,j}=1$.}
Let $(f,g)=(\left(n_1-1\atop 0\right),\mathds{1})$ and let $T_{i,j}=\left\{\begin{array}{ll}
1 & \mbox{if } (i,j)=(0,0),\\
T'_{0,j} & \mbox{if } i=n_1-1 \land j\neq 0,\\
T'_{i,j} & \mbox{otherwise.}
\end{array}\right.$
It is easy to check that $\delta^{(f,g)}(T)=T'$, and $T<T'$ as $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
\begin{figure}
\centerline{
\begin{tikzpicture}[scale=0.4]
\draw[step=1.0,black, thin] (0,0) grid (5,5);
\draw[black,very thick] (0,1) -- (5,1);
\draw[black,very thick] (1,0) -- (1,5);
\node[scale=1.5] at (2.5,1.5) {$\times$};
\node[scale=1.5] at (0.5,4.5) {$\times$};
\node[scale=1.5] at (3.5,3.5) {$\times$};
\node[scale=1.5,blue] at (2.5,4.5) {$\times$};
\node[scale=1.5] at (1.5,3.5) {$\times$};
\draw[->] (9,2.5) -- node[midway,above,scale=1.5] {\tiny $\left(\left(4\atop 0\right),\mathds{1}\right)$} (6,2.5) ;
%
\draw[step=1.0,black, thin] (10,0) grid (15,5);
\draw[black,very thick] (10,1) -- (15,1);
\draw[black,very thick] (11,0) -- (11,5);
\node[scale=1.5] at (12.5,1.5) {$\times$};
\node[scale=1.5] at (10.5,4.5) {$\times$};
\node[scale=1.5] at (13.5,3.5) {$\times$};
\node[scale=1.5,red] at (12.5,0.5) {$\times$};
\node[scale=1.5] at (11.5,3.5) {$\times$};
\end{tikzpicture}}
\caption{The two tableaux $T'$ and $T$ for case $A$.}
\end{figure}
\item \emph{ Case $\neg A$ and $T'_{n_1-1,0}=0$.} There exists $(i,j)\neq (n_1-1,0)$ such that $i\neq 0$ and $T'_{i,j}=1$. Let $(f,g)=((i,n_1-1),\mathds{1})$ and let $T=(f,g)(T')$. We have $\delta^{(f,g)}(T)=T'$, and $T<T'$ as $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
\begin{figure}
\centerline{
\begin{tikzpicture}[scale=0.4]
\draw[step=1.0,black, thin] (0,0) grid (5,5);
\draw[black,very thick] (0,1) -- (5,1);
\draw[black,very thick] (1,0) -- (1,5);
\node[scale=1.5] at (2.5,1.5) {$\times$};
\node[scale=1.5] at (0.5,4.5) {$\times$};
\node[scale=1.5,blue] at (3.5,3.5) {$\times$};
\node[scale=1.5,blue] at (1.5,3.5) {$\times$};
\draw[->] (9,2.5) -- node[midway,above,scale=1.5] {\tiny $\left(\left(1,4\right),\mathds{1}\right)$} (6,2.5) ;
%
\draw[step=1.0,black, thin] (10,0) grid (15,5);
\draw[black,very thick] (10,1) -- (15,1);
\draw[black,very thick] (11,0) -- (11,5);
\node[scale=1.5] at (12.5,1.5) {$\times$};
\node[scale=1.5] at (10.5,4.5) {$\times$};
\node[scale=1.5,red] at (13.5,0.5) {$\times$};
\node[scale=1.5,red] at (11.5,0.5) {$\times$};
\end{tikzpicture}}
\caption{The two tableaux $T'$ and $T$ for case $\neg A$ and $T'_{n_1-1,0}=0$.}
\end{figure}
\item \emph{Case $\neg A$ and $T'_{n_1-1,0}=1$.} Let $(f,g)=(\left(n_1-2,n_1-1\right),\mathds{1})$ and
let $T=(f,g)(T')$. We have $\delta^{(f,g)}(T)=T'$, and $T<T'$ as $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
\end{itemize}
\end{itemize}
\end{proof}
For all $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, let us now call $\mathrm{\widehat M}_{F_1,F_2}$ the DFA $\mathrm{M}_{F_1,F_2}$ restricted to the states $T$ such that, if $T$ has a cross in the final zone of $\mathrm{M}_{F_1,F_2}$, then $T$ has a cross at $(0,0)$. The following remark stems from the formula given for $\mathfrak{StX}$.
\begin{remark}\label{remark-acc}
The accessible part of $\mathrm{M}_{F_1,F_2}$ is included in $\mathrm{\widehat M}_{F_1,F_2}$.
\end{remark}
\subsection{Computing the Nerode equivalence of $\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}}$}
\begin{definition}
A tableau $T$ in $2^{\IntEnt{n_1}\times \IntEnt{n_2}}$ is right-triangle free if, for all $x, x' \in \IntEnt{n_{1}}$ with $x\neq x'$ and all $y,y' \in \IntEnt{n_{2}}$ with $y\neq y'$, we have $\#(\{(x,y),(x,y'),(x',y),(x',y')\}\cap T) \neq 3$.
\end{definition}
\begin{figure}
\centerline{
\begin{tikzpicture}[scale=0.5]
\draw[step=1.0,black, thin] (0,0) grid (4,4);
\draw[black] (0,2) -- (4,2);
\draw[black] (3,0) -- (3,4);
\node[scale=1.5] at (2.5,0.5) {$\times$};
\node[scale=1.5] at (2.5,1.5) {$\times$};
\node[scale=1.5] at (0.5,1.5) {$\times$};
\end{tikzpicture}}
\caption{A tableau with a right-triangle}
\end{figure}
\begin{definition}\label{def-fleche}
If $T$ and $T'$ are distinct tableaux, we define the relation $\rightarrow$ on tableaux by $T \rightarrow T'$ if $T'=T \cup \{(i',j')\}$ and there exists $(i,j)$ such that $\{(i,j),(i',j),(i,j')\}\subseteq T$. The equivalence relation $\overset{*}{\leftrightarrow}$ is defined as the reflexive, symmetric and transitive closure of $\rightarrow$.
\end{definition}
For any tableau $T$, we define $\mathrm{Sat}(T)$ as the smallest right-triangle-free tableau (with respect to inclusion) containing $T$. The existence and the uniqueness of $\mathrm{Sat}(T)$ are easy to check. It is the representative of the equivalence class of $T$. Two tableaux $T$ and $T'$ are therefore equivalent if and only if $\mathrm{Sat}(T)=\mathrm{Sat}(T')$.
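Concretely, $\mathrm{Sat}(T)$ can be computed by closing $T$ under the rule of Definition \ref{def-fleche}, as in the following sketch (in Python, with $T$ encoded as a set of pairs):
\begin{verbatim}
def saturate(T):
    # Close T under: if three corners of an axis-parallel
    # rectangle are crossed, cross the fourth one as well.
    S = set(T)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(S):
            for (ip, jp) in list(S):
                if (ip, j) in S and (i, jp) not in S:
                    S.add((i, jp))
                    changed = True
    return frozenset(S)
\end{verbatim}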
\begin{lemma}\label{lemma-lines}
The tableau $T$ in $2^{\IntEnt{n_1}\times \IntEnt{n_2}}$ is right-triangle free if and only if for all $i,i' \in \llbracket n_{1}\rrbracket$, the lines $i$ and $i'$ are either the same (for all $j\in \llbracket n_2 \rrbracket, T_{i,j}=T_{i',j}$), or disjoint (for all $j\in \llbracket n_2 \rrbracket, T_{i,j}=0 \lor T_{i',j}=0$).
\end{lemma}
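This characterisation is easy to test; a sketch (each row encoded as the set of its crossed columns):
\begin{verbatim}
def is_right_triangle_free(T, n1):
    # Lemma: T is right-triangle free iff any two of its
    # rows are either equal or disjoint.
    rows = [frozenset(j for (i, j) in T if i == r) for r in range(n1)]
    return all(r1 == r2 or not (r1 & r2)
               for k, r1 in enumerate(rows) for r2 in rows[k + 1:])
\end{verbatim}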
\begin{lemma}\label{lm-tool}
Let $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, and let $T$ and $T'$ be any two states of $\mathrm{M_{F_1,F_2}}$ such that $T\rightarrow T'$. Then $T$ is final if and only if $T'$ is final.
\end{lemma}
Let us recall that the alphabet of $\mathrm{ M_{F_1,F_2}}$ is $\IntEnt{n_1}^{\IntEnt{n_1}}\times \IntEnt{n_2}^{\IntEnt{n_2}}$. If $(f,g)$ is such a letter and $T=\{(x_1,y_1),\ldots, (x_n,y_n)\}$ is a tableau, then define $(f,g)(T)$ as $\{(f(x_1),g(y_1)),\ldots, (f(x_n),g(y_n))\}$.
\begin{lemma}\label{lm-tool2}
Let $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, and let $T$ and $T'$ be any two states of $\mathrm{\widehat M_{F_1,F_2}}$ such that $T\rightarrow T'$. Then, for any $a\in \IntEnt{n_1}^{\IntEnt{n_1}}\times \IntEnt{n_2}^{\IntEnt{n_2}} $, $\delta^a(T)\rightarrow \delta^a(T')$ or $\delta^a(T)= \delta^a(T')$.
\end{lemma}
\begin{proposition}\label{prop-undistinguishable}
Let $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, and let $T,T'$ be two states of $\mathrm{ M_{F_1,F_2}}$. If $T \overset{*}{\leftrightarrow} T'$, then $T$ and $T'$ are not distinguishable.
\end{proposition}
\begin{proof}
From Lemma \ref{lm-tool2}, it is easy to see by a simple induction that, for any word $w$, if $T\rightarrow T'$ then $\delta^w(T)\rightarrow \delta^w(T')$ or $\delta^w(T)= \delta^w(T')$.
From Lemma \ref{lm-tool}, it follows that if $T\rightarrow T'$, then $T\sim_{Ner} T'$, where $\sim_{Ner}$ denotes the Nerode equivalence. Thus, as $\overset{*}{\leftrightarrow}$ is the reflexive, symmetric and transitive closure of $\rightarrow$, $T\overset{*}{\leftrightarrow} T'$ implies $T\sim_{Ner} T'$.
\end{proof}
\begin{lemma}\label{lemma-dist}
All states of $ (\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}$ are pairwise distinguishable.
\end{lemma}
\begin{proof}
Let $\delta$ be the transition function of $ (\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}$. Let $T$ and $T'$ be the representatives of two states of $(\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}$ such that $T\neq T'$. Let $(i,j)$ be such that $T_{i,j} \neq T'_{i,j}$, and suppose, for example, that $T_{i,j}=1$. Take $\{i_{1},\dots,i_{\ell}\}=\{\alpha \mid T'_{\alpha,j}=1\}$ and $\{j_{1},\dots,j_{p}\}=\{j\}\cup\{\beta \mid T'_{i_{1},\beta}=1\}$. We can see that:
\begin{enumerate}
\item By Lemma \ref{lemma-lines}, lines $i_{1},\dots, i_{\ell}$ are the same, as they all have a cross on the column $j$. Columns $j_{1},\dots,j_{p}$ are also the same, as they all have a cross on line $i_{1}$. It follows that, if $(i',j')\in \left(\{i_{1},\dots,i_{\ell}\}\times \left(\{0,\dots,n_2-1\}\setminus \{j_{1},\dots,j_{p}\} \right)\right)\cup \left(\left(\{0,\dots,n_1-1\}\setminus \{i_{1},\dots,i_{\ell}\} \right)\times \{j_{1},\dots,j_{p}\}\right)$, then $T'_{i',j'}=0$.
\item $j\in\{j_{1},\dots,j_{p}\}$ and $i\not\in\{i_{1},\dots,i_{\ell}\}$.
\end{enumerate}
\begin{figure}
\centerline{
\begin{tikzpicture}[scale=0.4]
\draw[step=1.0,black, thin] (0,0) grid (5,5);
\draw[black,very thick] (0,1) -- (5,1);
\draw[black,very thick] (1,0) -- (1,5);
\node[scale=1.5] at (0.5,4.5) {$\times$};
\node[scale=1.5] at (3.5,1.5) {$\times$};
\node[scale=1.5] at (1.5,1.5) {$\times$};
\node[scale=1.5,black] at (3.5,3.5) {$\otimes$};
\node[scale=1.5,black] at (1.5,3.5) {$\times$};
\node[scale=1.5,black] at (2.5,2.5) {$\times$};
\node at (3.5,5.5) {$j$};
\node at (-0.5,3.5) {$i$};
\draw[step=1.0,black, thin] (10,0) grid (15,5);
\draw[black,very thick] (10,1) -- (15,1);
\draw[black,very thick] (11,0) -- (11,5);
\node[scale=1.5] at (12.5,1.5) {$\times$};
\node[scale=2,black] at (13.5,3.5) {$\circ$};
\node[scale=1.5] at (11.5,4.5) {$\times$};
\node[scale=1.5] at (11.5,2.5) {$\times$};
\node[scale=1.5] at (11.5,0.5) {$\times$};
\node[scale=1.5] at (13.5,4.5) {$\times$};
\node[scale=1.5] at (13.5,2.5) {$\times$};
\node[scale=1.5] at (13.5,0.5) {$\times$};
\node[scale=1.5] at (14.5,3.5) {$\times$};
\node[scale=1.5] at (11.5,0.5) {$\times$};
\node[scale=1.5] at (10.5,4.5) {$\times$};
\node[scale=1.5] at (10.5,2.5) {$\times$};
\node[scale=1.5] at (10.5,0.5) {$\times$};
\node[scale=1.5] at (3.5,0.5) {$\times$};
\node[scale=1.5] at (1.5,0.5) {$\times$};
\node at (10.5,5.5) {$j_1$};
\node at (11.5,5.5) {$j_2$};
\node at (13.5,5.5) {$j_3$};
\node at (9.5,0.5) {$i_1$};
\node at (9.5,2.5) {$i_2$};
\node at (9.5,4.5) {$i_3$};
\node at (13.5,-0.5) {$j$};
\end{tikzpicture}}
\caption{An example of two tableaux $T$ and $T'$}
\end{figure}
Let $f(i')=\left\{\begin{array}{ll}
n_1-1&\text{if }i'\in\{i_{1},\dots, i_{\ell}\},\\
0&\text{otherwise,}
\end{array}\right.$
and
$g(j')=\left\{\begin{array}{ll}
0&\text{if }j'\in\{j_{1},\dots, j_{p}\}\\
n_2-1&\text{otherwise.}
\end{array}\right.$
If $(f(i'),g(j'))$ is in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, then
\begin{gather*}
(i',j')\in \left(\{i_{1},\dots,i_{\ell}\}\times \left(\{0,\dots,n_2-1\}\setminus \{j_{1},\dots,j_{p}\} \right)\right) \cup \left(\left(\{0,\dots,n_1-1\}\setminus \{i_{1},\dots,i_{\ell}\} \right)\times \{j_{1},\dots,j_{p}\}\right),
\end{gather*} and so the first point above gives us $T'_{i',j'}=0$.
Therefore, $\delta^{(f,g)}(T')$ has at most two crosses, one at $(n_1-1,0)$ and one at $(0,n_{2}-1)$, and it is not final. However, the second point above and the fact that $T_{i,j}=1$ give us that $\delta^{(f,g)}(T)_{0,0}=1$, which means that $\delta^{(f,g)}(T)$ is final. Thus, $T$ and $T'$ are distinguishable.
\end{proof}
Proposition \ref{prop-undistinguishable} and Lemma \ref{lemma-dist} give us that $ (\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}$ is the minimal DFA equivalent to the DFA $ \mathrm{\widehat M}_{\{n_{1}-1\},\{0\}}$. The following corollary stems from this assertion combined with Lemma \ref{lemma-acc}.
\begin{corollary}\label{cor-min}
$ (\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}$ is the minimal DFA equivalent to $\mathrm{M}_{\{n_{1}-1\},\{0\}}$.
\end{corollary}
\subsection{Computing the state complexity of the language recognized by $\mathrm{M}_{\{n_{1}-1\},\{0\}}$}
The number of right-triangle free tableaux $T$ of size $n_1\times n_2$ such that, if $T$ has a cross in the final zone of $\mathrm{M}_{\{n_{1}-1\},\{0\}}$, then $T$ has a cross at $(0,0)$, is exactly $2\alpha_{n_1-1,n_2-1}+\alpha'_{n_1,n_2}$, where $\alpha_{x,y}$ is the number of right-triangle free tableaux of size $x\times y$ and $\alpha'_{x,y}$ is the number of right-triangle free tableaux of size $x\times y$ having a cross at $(0,0)$. Therefore,
\begin{lemma}\label{lemma-comp}
The state complexity of $\mathrm L(\mathrm{M}_{\{n_{1}-1\},\{0\}})$ is $2\alpha_{n_1-1,n_2-1}+\alpha'_{n_1,n_2}$.
\end{lemma}
Closed formulas for $\alpha_{x,y}$ and $\alpha'_{x,y}$ are given in Corollary 20 and Proposition 22 of \cite{CLMP15}.
In the next subsection, we prove that $(\{n_{1}-1\},\{0\})$ is a couple of final states that maximizes the size of the minimal DFA associated to any $\mathrm{M}_{F_1,F_2}$, with $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$.
\subsection{Maximizing the state complexity of $\textcircled{$\star$}$ applied to monster $2$-languages}
Let $\mathcal T$ be the set of right-triangle free tableaux of size $n_1\times n_2$. For all $(F_1,F_2)\subseteq\IntEnt{n_1}\times\IntEnt{n_2}$, let \[\mathcal T_{F_1,F_2}=\#(\mathrm{\widehat M}_{F_1,F_2})_{/\overset{*}{\leftrightarrow}}=\#\{T\in \mathcal T\mid T \text{ has a cross in the final zone of } \mathrm{M}_{F_1,F_2} \Rightarrow T_{0,0}=1\}.\]
We show the following:
\begin{lemma}\label{lemma-max}
For any $(F_1,F_2)\subseteq \IntEnt{n_1}\times\IntEnt{n_2}$ such that $F_1,F_2\neq\emptyset$, $F_1\neq\IntEnt{n_1}$ and $F_2\neq\IntEnt{n_2}$, we have $\mathcal T_{F_1,F_2}\leq \mathcal T_{\{n_1-1\},\{0\}}$.
\end{lemma}
Therefore, by Remark \ref{remark-acc}, Proposition \ref{prop-undistinguishable}, and Corollary \ref{cor-min}, for any $(F_1,F_2)\subseteq \IntEnt{n_1}\times\IntEnt{n_2}$ such that $F_1,F_2\neq\emptyset$, $F_1\neq\IntEnt{n_1}$ and $F_2\neq\IntEnt{n_2}$,
\[ \#_{\min}(\mathrm{M}_{F_1,F_2})\leq \#((\mathrm{\widehat M}_{F_1,F_2})_{/\overset{*}{\leftrightarrow}}) = \mathcal T_{F_1,F_2}\leq \mathcal T_{\{n_1-1\},\{0\}} = \#((\mathrm{\widehat M}_{\{n_1-1\},\{0\}})_{/\overset{*}{\leftrightarrow}}) = \#_{\min}(\mathrm{M}_{\{n_1-1\},\{0\}}).\]
The cases where $F_1=\emptyset$ or $F_2=\emptyset$ or $F_1=\IntEnt{n_1}$ or $F_2=\IntEnt{n_2}$ are easy and are settled by the following lemma.
\begin{lemma}\label{lemma-part}
If $F_1=\emptyset$ or $F_2=\emptyset$ or $F_1=\IntEnt{n_1}$ or $F_2=\IntEnt{n_2}$, then $\#_{\min}(\mathrm{M}_{F_1,F_2})\leq \#_{\min}(\mathrm{M}_{\{n_1-1\},\{0\}})$.
\end{lemma}
Therefore, by Theorem \ref{th-mon2} and Lemma \ref{lemma-comp},
\begin{theorem}
The state complexity of $\textcircled{$\star$}$ is $2\alpha_{n_1-1,n_2-1}+\alpha'_{n_1,n_2}$, \emph{i.e.}, for all $n_1,n_2\in \mathbb N^*$, $sc_{\textcircled{$\star$}}(n_1,n_2)=2\alpha_{n_1-1,n_2-1}+\alpha'_{n_1,n_2}$.
\end{theorem}
\section{Witnesses with a bounded alphabet size}\label{sec-borne}
We now prove that there is a witness over an alphabet of bounded size. Let $n_1,n_2$ be two positive integers and let $(\mathds M_1,\mathds M_2)=\mathrm{Mon}_{n_1,n_2}^{\{n_{1}-1\},\{0\}}$. Recall that the letters of $\mathrm{Mon}_{n_1,n_2}^{\{n_{1}-1\},\{0\}}$ are couples of mappings and that $\mathds{1}$ denotes the identity, both on $\IntEnt{n_1}$ and on $\IntEnt{n_2}$. Let $B_1$ and $B_2$ be the DFAs obtained by restricting the letters of respectively $\mathds M_1$ and $\mathds M_2$ to the alphabet
\[\begin{array}{ll} \Sigma'=&\left\{((0,\ldots ,n_1-2),\mathds{1}), ((1,\dots, n_1-2),\mathds{1}),(\mathds{1},(1,\dots, n_2-2)),((1,\dots, n_1-1),\mathds{1}),\color{white}{\binom a b}\right.\\
&(\mathds{1},(1,\dots,n_2-1)),
((0,n_1-1),\mathds{1}),(\mathds{1},(0,n_2-1)),((0,1),(0,1)),((0,1),\mathds{1}),(\mathds{1},(0,1)),\\
&\left.((n_1-2,n_1-1),\mathds{1}), (\left(1\atop 0\right),\mathds{1}),
(\mathds{1},\left(1\atop 0\right)),
(\left(n_1-2\atop n_1-1\right),\mathds{1}),(\mathds{1},\left(n_2-2 \atop n_2-1\right)),(\left(n_1-1\atop 0\right),\mathds{1}),(\mathds{1},\left(n_2-1 \atop 0\right))\right\}.
\end{array}\]
Let $B=\mathfrak{StX}(B_1,B_2)$, and let $\widehat B$ be the DFA obtained by restricting $B$ to the states $T$ such that, if $T$ has a cross in the final zone of $\mathrm{M}_{\{n_1-1\},\{0\}}$, then $T$ has a cross at $(0,0)$. The DFA $A=\widehat B_{/ \overset{*}{\leftrightarrow}}$ is thus obtained from $\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}}$ by restricting its letters to the alphabet $\Sigma'$ and quotienting by $\overset{*}{\leftrightarrow}$. We are going to show that $A$ is minimal.
Let us recall that all letters of $\Sigma'$ can be seen as functions acting on tableaux. Every word $w$ over $\Sigma'$ acts on a tableau $T$ by applying the composition of all letters of $w$ to $T$: if $w=a_1\ldots a_n$, define $w(T)=a_n\circ\ldots\circ a_1(T)$. When it exists, we denote by $w^{-1}$ the inverse function of $a_n\circ\ldots\circ a_1$. Let $\delta$ be the transition function of $B$. We first notice that $w(T)$ is not necessarily equal to $T'=\delta^{w}(T)$, since $(0,0)$ is in $T'$ whenever $T'$ has a cross in the final zone. We denote by $w[i,j]$ the subword $a_i\cdots a_j$; by convention, if $j<i$, $w[i,j]=\varepsilon$. The proof of the following lemma is an easy induction.
\begin{lemma}\label{lemma-sat}
Let $w$ be a word over $\Sigma'$, and $T$ be a state of $B$. If, for any integer $k<|w|$, we have
$(w[1,k](T))_{0,0}=1$ or $w[1,k](T)$ has no cross in the final zone, then $\delta^{w}(T)=w(T)$.
\end{lemma}
\begin{lemma}\label{lemma-acc-fin}
All the states of $\widehat B$ are accessible.
\end{lemma}
\begin{proof}
As the induction is the same as in Lemma \ref{lemma-acc}, we only focus on the cases of that lemma where the letters used are not in $\Sigma'$.
\begin{itemize}
\item If $T'$ has no cross in the final zone, according to the previous remark, we only have to examine the case where $T'_{n_1-1,0}=0$. Let $(i,j)$ be the index of a cross of $T'$. Let $w=(\mathds{1},(0,1))((0,\ldots, n_1-2),\mathds{1})^i(\mathds{1},(1,\ldots, n_2-1))^{j-1}$ and let $T=w^{-1}(T')$. We have $T_{0,0}=1$ and, for all $k<|w|$ and all $(i,j)\neq(n_1-1,0)$ such that $i=n_1-1$ or $j=0$, $(w[1, k](T))_{i,j}=(w[k+1,|w|]^{-1}(T'))_{i,j}=0$. Thus, by Lemma \ref{lemma-sat}, $\delta^{w}(T)=T'$. We also have $T<T'$ since $\#T=\#T'$ and $\#_\mathrm{nf}T<\#_\mathrm{nf}T'$.
\item $T'$ has a cross in the final zone other than $(0,0)$. Let $(i,j)$ be such a cross. We distinguish two cases.
\begin{itemize}
\item If $j=0$, we consider the word $w_1=((1,\ldots, n_1-2),\mathds{1})^i$. Let $T''$ be the tableau obtained from $w_1^{-1}(T')$ by deleting the cross at $(0,0)$ and let $w_2=((0,1),\mathds{1})$ and $T=w_2(T'')$. It is easy to see that $T'=w_1(\delta^{w_2}(T))$. By Lemma \ref{lemma-sat}, we have $T'=w_1(\delta^{w_2}(T))=\delta^{w_2w_1}(T)$ in $\mathrm{\widehat M}_{\{n_{1}-1\},\{0\}}$ and $T<T'$ as $\#T<\#T'$.
\item Otherwise, define $w_1=((1,\ldots, n_1-1),\mathds{1})^{i-1}(\mathds{1},(1,\ldots, n_2-1))^{j-1}$. Let $T''$ be the tableau obtained from $w_1^{-1}(T')$ by deleting the cross at $(0,0)$. This means that $T'=w_1(T'')\cup \{(0,0)\}$. Let $w_2=((0,1),(0,1))$ and $T=w_2(T'')$. Then we have $\delta^{w_2}(T)=T''\cup \{(0,0)\}$ or $\delta^{w_2}(T)=T''$.
\begin{itemize}
\item If $\delta^{w_2}(T)=T''\cup \{(0,0)\}$, then by Lemma \ref{lemma-sat}, as $w_1$ does not change the first line and the first column, we have $\delta^{w_2w_1}(T)=\delta^{w_1}(\delta^{w_2}(T))=w_1(\delta^{w_2}(T))=w_1(T''\cup \{(0,0)\})=w_1(T'')\cup \{(0,0)\}=T'$.\\
\item If $\delta^{w_2}(T)=T''$, then we set $k=\min\{l\le|w_1| \mid (\delta^{w_1[1,l]}(T''))_{0,0}=1\}$. For every integer $l<k$, the tableau $\delta^{w_1[1,l]}(T'')$ has no cross in the final zone, so we can apply Lemma \ref{lemma-sat} and get $\delta^{w_2w_1[1,l]}(T)=w_1[1,l](\delta^{w_2}(T))$. Furthermore, $\delta^{w_2w_1[1,k]}(T)=w_1[1,k](\delta^{w_2}(T))\cup \{(0,0)\}$ and, as the letters of $w_1[k+1,|w_1|]$ do not change the first line and the first column, we have $\delta^{w_2w_1}(T)=w_1[k+1,|w_1|](w_1[1,k](\delta^{w_2}(T))\cup \{(0,0)\})=w_1(\delta^{w_2}(T))\cup \{(0,0)\}=w_1(T'')\cup \{(0,0)\}=T'$. Moreover, as $\#T<\#T'$, we have $T<T'$.
\end{itemize}
\end{itemize}
\item The only cross of $T'$ which is in the final zone is $(0,0)$. According to the first sentence of the proof, we only have to consider the case where there exists no $j$ such that $T'_{0,j}=1$ and where $T'_{n_1-1,0}=0$. It follows that there exists $(i,j)\neq (n_1-1,0)$ such that $i\neq 0$ and $T'_{i,j}=1$. Let $w=((1,\ldots, n_1-1),\mathds{1})^i$ and let $T=w^{-1}(T')$. By Lemma \ref{lemma-sat}, as $(w'(T))_{0,0}=1$ for each proper prefix $w'$ of $w$, we have $\delta^{w}(T)=T'$ in $B$, and $T<T'$ as $\#_\mathrm{nf}T < \#_\mathrm{nf}T'$.
\end{itemize}
\end{proof}
Similarly, the following lemma is obtained by simulating with letters in $\Sigma'$ the transition functions used in Lemma \ref{lemma-dist}.
\begin{lemma}\label{lemma-dist-fin}
All states of $A$ are pairwise distinguishable.
\end{lemma}
Lemmas \ref{lemma-acc-fin} and \ref{lemma-dist-fin} imply that $A$ is minimal and that the following theorem holds.
\begin{theorem}
The couple $(B_1,B_2)$ is a witness for the operation $\textcircled{$\star$}$.
\end{theorem}
\section{Conclusion}
We have given the state complexity of the star of symmetrical difference and have provided a witness with a constant alphabet size. We know that
the alphabet size we exhibit is not optimal,
but it simplifies the given proof.
Moreover, proving the optimality of such a bound seems out of reach for now and would require new tools.
One of our future goals is to generalize the method used here to a whole well-defined class of operations, in order to provide a witness with a bounded alphabet size for each of them.
{\bf Acknowledgements. }
This work is partially supported by the projects MOUSTIC (ERDF/GRR) and ARTIQ (ERDF/RIN)
The use of a wavelet transform (WT) as an X-ray detection
algorithm was pioneered by Rosati et al.\ (1995; 1998)
for the detection of extended sources in the Roentgen Satellite ({\it ROSAT\/})
Position Sensitive Proportional Counter (PSPC) fields
and subsequently adopted by many groups
(\citealt{Grebenevea95,Damianiea97b,Pislarea97,Vikhlininea98,Lazzatiea98,Freemanea02}).
%
Differently from other WT-based algorithms, the Brera Multi-scale Wavelet
(BMW, \citealt{Lazzatiea99,Campanaea99}) algorithm, which was developed to analyse
{\it ROSAT} High Resolution Imager (HRI) data, automatically characterises
each source through a multi-source $\chi^2$ fitting with respect to a Gaussian model
in the wavelet space.
For these reasons it has also proven to perform well in
crowded fields and in conditions of very low background (\citealt{Lazzatiea99}).
The BMW was used to produce the BMW-HRI catalogue (\citealt{Lazzatiea99,Campanaea99,Panzeraea03}),
a catalogue of $\sim 29\,000$ sources detected with a probability of $\ga 4.2\sigma$ and with
a sky coverage of 732 deg$^2$ down to a limiting flux of $10^{-12}$ erg cm$^{-2}$ s$^{-1}$
and 10 deg$^2$ down to a limiting flux of $10^{-14}$ erg cm$^{-2}$ s$^{-1}$.
The BMW-HRI is being currently used for a number of scientific projects.
Among preliminary results, an analysis of X-ray detected sources
without obvious counterparts at other wavelengths (a.k.a.\ blank
field sources, \citealt{Chieregatoea05}) has been carried out
with the aim of identifying unusual objects, as well as a
serendipitous X-ray survey of Lyon-Meudon Extragalactic Database
(LEDA) normal galaxies (\citealt{Tajerea05}).
The BMW algorithm was modified to support the analysis of
{\it Chandra} Advanced CCD Imaging Spectrometer (ACIS) images
(\citealt{Morettiea02}), and has led to interesting results on the nature of
the cosmic X-ray background (CXB, \citealt{Campanaea01}; \citealt{Morettiea03}).
Given the reliability and versatility of the BMW algorithm, we decided to apply it to a
large sample of {\it Chandra} ACIS-I images, to take full advantage of the
superb spatial resolution [$\sim 0\farcs5$ point-spread function (PSF) on-axis]
of {\it Chandra} while being able to automatically analyse crowded fields
and/or with very low background. We thus produced the
Brera Multi-scale Wavelet {\it Chandra} source catalogue (BMW-C).
In this paper we present a pre-release of the BMW-C,
which is based on a subset of the whole {\it Chandra} ACIS observations dataset,
roughly corresponding to the first three years of operations.
Like the BMW-HRI, our catalogue provides source positions, count rates,
extensions and relative errors. In addition, for all bright
(100 counts in the 0.5--7\,keV band) sources in the catalogue
we extracted light curves which will be exploited in a search for periodic
and non-periodic variability,
as well as spectra to have an immediate spectral classification.
In Sect.~\ref{bmwc:datasample} we describe the selection criteria of the
{\it Chandra} fields and the resulting data sample.
In Sect.~\ref{bmwc:dataproc} we describe the data processing,
which includes data screening and correction, event list selections
and energy band selections.
In Sect.~\ref{bmwc:detect} we describe the wavelet detection
algorithm and its application to the {\it Chandra} fields as well as the resulting source catalogue,
the BMW-{\it Chandra} catalogue.
In Sect.~\ref{bmwc:character} we report the properties of the source sample, including
their spatial extent, fluxes, and the definition of our serendipitous sub-sample.
In Sect.~\ref{bmwc:skycov_how} we calculate the sky coverage of our survey.
In Sect.~\ref{bmwc:champcat} we compare our catalogue with the
{\it Chandra} Multiwavelength Project (ChaMP, \citealt[][and references therein]{Champ3}) data.
In Sect.~\ref{bmwc:xcorr} we describe preliminary results of
the cross-matches between the
BMW-{\it Chandra} and other catalogues at different wavelengths.
Finally, in Sect.~\ref{bmwc:summary} we summarise our work and highlight the plan
for future exploitation of the catalogue.
\section{Data sample\label{bmwc:datasample}}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f1.eps}}
\caption{Aitoff projection in Galactic coordinates of the
136 observations in the BMW-C catalogue.
The size of each point is proportional to the exposure time,
and the scale shows representative exposure times.
The thick lines are the limits of the high- and low-latitude
sub-catalogues (see Sect.~\ref{bmwc:character}).}
\label{bmwc:aitoff}
\end{figure}
Our choice of the {\it Chandra} fields favoured the ones that would
maximise the sky area not occupied by the pointed targets, that is the
fields where the original PI was interested in a single, possibly
point-like object centred in the field. We adopted the following criteria:
%
\begin{enumerate}
\item All ACIS-I fields [no grating, no High Resolution Camera (HRC),
Timed Exposure mode]
fields with exposure time in excess of 10\,ks available by 2003 March
were considered.
Data from all four front-illuminated (FI) CCDs (I0, I1, I2, I3) were used.
\item We excluded fields dominated by extended sources
[covering $\ga 1/9$ of the field of view (FOV)].
\item We excluded planet observations and supernova remnant observations.
\item We also excluded fields with bright point-like or high-surface
brightness extended sources.
\item We put no limit on Galactic latitude, but
we selected sub-samples based on latitude at a later time.
\end{enumerate}
%
The exclusion of bright point-like or high-surface brightness
extended sources was dictated by the nature of our detection algorithm,
which leads to an excessive number of spurious detections at the
periphery of the bright source.
This problem is common to most detection algorithms.
Therefore, each field was visually inspected to check for such effects;
where found, a conservatively large portion of the field was flagged
(see Sect.~\ref{bmwc:detect}).
Of the 147 fields analysed, 11 ($\sim 7$\%) were discarded because of problems
at various stages of the pipeline execution.
The field corresponding to observation ID 634, for instance, was discarded because
of the spacecraft wobbling during the exposure that caused the images
of point sources to be elongated and to be detected as double.
We eliminated observation ID 581 because the CCD I3 suffered from good time intervals
inconsistent with the other CCDs, resulting in an anomalously high background
(e.g., see \citealt{Morettiea02}).
Furthermore, the observation IDs 316, 361, 950, 1302, 1872, 2269, and 2271
were eliminated because of problems at the detection stage.
Finally, the fields corresponding to observation IDs 2365 and 1431 were discarded
at the analysis stage because the attitude of the spacecraft changed during the
observation.
As a result, 136 fields reported successful completion of the pipeline.
Figure~\ref{bmwc:aitoff} shows the Aitoff projection in
Galactic coordinates of their positions. Table~\ref{bmwc:fieldprops}
lists the basic properties (observation ID, Sequence number, coordinates
of the target, etc.) of the fields included in our sample.
We note that several fields were observed more than once.
These fields were considered as different pointings, so that the number of
distinct fields is 94 (see Sect.~\ref{bmwc:skycov_how}).
\setcounter{table}{0}
\begin{table*}
\begin{center}
\caption{Field Properties in the BMW-C Sample.}
\label{bmwc:fieldprops}
\begin{tabular}{rllcccccc}
\hline
\hline
\noalign{\smallskip}
ID & Sequence & Target Name & Nominal& Nominal Dec & Observation Date & MJD & Effective & Average $N_{\rm H}$ \\
({\tt OBSID}) & ({\tt SEQ\_NUM}) & ({\tt OBJECT})& RA ({\tt RA\_NOM}) & ({\tt DEC\_NOM}) & ({\tt DATE\_OBS}) & & Exp.\ & ({\tt NH\_WAVG)} \\
& & & (J2000) & (J2000) & & & ({\tt TEFF}, s) & ($10^{20}$ cm$^{-2}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
19 &200017 &CRACLOUDCORE &285.44943 &$-$36.97264 &2000$-$10$-$07T17:01:59& 51824.7 & 19705.67& 7.75 \\
50 &290019 &ETACARINAE &161.25481 &$-$59.68144 &1999$-$09$-$06T19:49:15& 51427.8 & 9143.05 & 1.36$\times 10^{2}$ \\
65 &290034 &NGC2516 &119.58374 &$-$60.78898 &1999$-$08$-$26T15:20:50& 51416.6 & 7754.19 & 1.46$\times 10^{1}$ \\
79 &300004 &NGC6397 &265.17007 &$-$53.68983 &2000$-$07$-$31T15:31:33& 51756.6 & 48343.20& 1.38$\times 10^{1}$ \\
88 &400001 &CENX$-$3 &170.31691 &$-$60.61832 &2000$-$11$-$25T07:57:31& 51873.3 & 35742.88& 1.19$\times 10^{2}$ \\
90 &400003 &4U1538$-$52 &235.60481 &$-$52.38390 &2000$-$04$-$08T22:46:16& 51642.9 & 20950.14& 9.71$\times 10^{1}$ \\
96 &400009 &GS2000+25 &300.70609 &25.23087 &1999$-$11$-$05T11:03:12& 51487.5 & 18836.82& 6.55$\times 10^{1}$ \\
322 &600082 &NGC4472 &187.44366 &8.00878 &2000$-$03$-$19T12:15:22& 51622.5 & 10363.09& 1.66 \\
324 &600084 &NGC4636 &190.71155 &2.69179 &1999$-$12$-$04T23:48:01& 51517.0 & 6296.65 & 1.81 \\
441 &900003 &AXAFSouthernDee&53.11186 &$-$27.80500 &2000$-$05$-$27T01:18:05& 51691.1 & 54985.14& 9.00$\times 10^{-1}$ \\
446 &200036 &W3B &36.41589 &62.09362 &2000$-$04$-$03T02:57:18& 51637.1 & 20059.54& 9.07$\times 10^{1}$ \\
522 &800030 &A209 &23.01838 &$-$13.56364 &2000$-$09$-$09T09:50:41& 51796.4 & 9965.00 & 1.66 \\
580 &900001 &HUBBLEDEEPFIELD&189.31799 &62.21036 &1999$-$11$-$13T01:14:18& 51495.1 & 49433.22& 1.49 \\
582 &900003 &AXAFSouthernDee&53.11215 &$-$27.80473 &2000$-$06$-$03T02:38:23& 51698.1 & 128609.01& 9.00$\times 10^{-1}$ \\
606 &200031 &IC348 &56.12549 &32.13877 &2000$-$09$-$21T19:58:42& 51808.8 & 51663.77& 1.37$\times 10^{1}$ \\
611 &200036 &W3B &36.41431 &62.09313 &2000$-$03$-$23T11:59:55& 51626.5 & 18467.16& 9.07$\times 10^{1}$ \\
633 &200058 &NGC3603 &168.78081 &$-$61.26516 &2000$-$05$-$01T23:29:33& 51666.0 & 46210.94& 1.35$\times 10^{2}$ \\
635 &200060 &RHOOPHCORE &246.82513 &$-$24.57316 &2000$-$04$-$13T18:33:23& 51647.8 & 76286.52& 1.40$\times 10^{1}$ \\
637 &200062 &RHOOPHA &246.64668 &$-$24.38723 &2000$-$05$-$15T23:35:16& 51680.0 & 96094.30& 1.41$\times 10^{1}$ \\
642 &200067 &NGC1333 &52.27541 &31.32702 &2000$-$07$-$12T22:55:58& 51738.0 & 36044.84& 1.51$\times 10^{1}$ \\
652 &300021 &V382Velorum1999&161.25778 &$-$52.38171 &1999$-$12$-$30T19:25:10& 51542.8 & 14954.87& 3.69$\times 10^{1}$ \\
653 &300022 &NGC5139 &201.69753 &$-$47.47309 &2000$-$01$-$24T02:15:01& 51567.1 & 25026.24& 9.43 \\
676 &400043 &GROJ0422+32 &65.42895 &32.91309 &2000$-$10$-$10T06:57:08& 51827.3 & 18795.74& 1.66$\times 10^{1}$ \\
677 &400044 &4U1543$-$47 &236.78270 &$-$47.67433 &2000$-$07$-$26T10:30:33& 51751.4 & 9844.94 & 4.00$\times 10^{1}$ \\
799 &600102 &NGC1395 &54.65637 &$-$23.06370 &1999$-$12$-$31T06:31:53& 51543.3 & 18983.21& 1.97 \\
868 &700173 &PG1115+407 &169.67772 &40.42151 &2000$-$10$-$03T23:12:41& 51821.0 & 17338.21& 1.91 \\
887 &700192 &Elais:N2 &249.19620 &41.02638 &2000$-$08$-$02T17:43:11& 51758.7 & 73375.76& 1.07 \\
888 &700193 &Elais:N1 &242.58421 &54.55655 &2000$-$08$-$03T14:45:16& 51759.6 & 71115.65& 1.35 \\
944 &900016 &SGRB2 &266.78070 &$-$28.44160 &2000$-$03$-$29T09:44:36& 51632.4 & 98629.03& 1.26$\times 10^{2}$ \\
945 &900017 &GALACTICCENTERA&266.58221 &$-$28.87193 &2000$-$07$-$07T19:05:19& 51732.8 & 48399.41& 1.10$\times 10^{2}$ \\
949 &900021 &GALACTICPLANE &280.99377 &$-$4.07476 &2000$-$02$-$25T22:00:10& 51599.9 & 38385.57& 1.88$\times 10^{2}$ \\
953 &300028 &47TUC &5.97537 &$-$72.07276 &2000$-$03$-$16T08:39:44& 51619.4 & 31676.94& 4.94 \\
955 &300030 &47TUC &5.97539 &$-$72.07265 &2000$-$03$-$16T18:33:03& 51619.8 & 31676.94& 4.94 \\
957 &900024 &HUBBLEDEEPFIELD&189.15215 &62.24523 &2000$-$02$-$23T06:31:59& 51597.3 & 57442.50& 1.49 \\
966 &900026 &HUBBLEDEEPFIELD&189.26813 &62.21516 &1999$-$11$-$21T04:03:48& 51503.2 & 55972.70& 1.49 \\
967 &900027 &HUBBLEDEEPFIELD&189.26808 &62.21466 &1999$-$11$-$14T19:47:49& 51496.8 & 57433.02& 1.49 \\
972 &200079 &M17 &275.12656 &$-$16.17502 &2002$-$03$-$02T17:05:03& 52335.7 & 39439.75& 1.45$\times 10^{2}$ \\
977 &200084 &NGC6530 &271.10144 &$-$24.35215 &2001$-$06$-$18T11:40:26& 52078.5 & 58008.05& 7.09$\times 10^{1}$ \\
978 &200085 &M16 &274.68353 &$-$13.80290 &2001$-$07$-$30T18:55:00& 52120.8 & 77126.02& 1.24$\times 10^{2}$ \\
1109 &780059 &PKS0312$-$770 &47.99976 &$-$76.86046 &1999$-$09$-$08T02:42:07& 51429.1 & 12836.97& 8.20 \\
1110 &780060 &PKS0312$-$770 &48.13725 &$-$76.84995 &1999$-$09$-$08T06:48:47& 51429.3 & 12581.05& 8.20 \\
1111 &780061 &PKS0312$-$770 &48.35330 &$-$76.83350 &1999$-$09$-$08T10:38:47& 51429.4 & 12593.69& 8.17 \\
1232 &280182 &NGC2516 &119.58367 &$-$60.78894 &1999$-$08$-$27T07:07:36& 51417.3 & 5728.96 & 1.46$\times 10^{1}$ \\
1249 &280199 &ETACARINAE &161.25478 &$-$59.68141 &1999$-$09$-$06T23:46:37& 51428.0 & 9630.09 & 1.36$\times 10^{2}$ \\
1479 &980429 &LEONIDANTI$-$RADI&333.29935 &$-$22.18443 &1999$-$11$-$17T22:42:28& 51499.9 & 19690.89& 2.45 \\
1519 &300022 &NGC5139 &201.69749 &$-$47.47309 &2000$-$01$-$25T04:33:40& 51568.2 & 43591.34& 9.43 \\
1523 &900021 &GALACTICPLANE &280.99301 &$-$4.07539 &2000$-$02$-$24T09:56:58& 51598.4 & 54724.64& 1.88$\times 10^{2}$ \\
1671 &900030 &HDFNORTH &189.26709 &62.21755 &2000$-$11$-$21T13:26:31& 51869.6 & 166146.53& 1.49 \\
1672 &900031 &AXAFSOUTHERNDEE&53.11979 &$-$27.81285 &2000$-$12$-$16T05:07:55& 51894.2 & 95138.09& 9.01$\times 10^{-1}$ \\
1697 &900056 &ISONW\#3 &158.65106 &57.62013 &2001$-$05$-$16T12:46:50& 52045.5 & 38863.00& 5.70$\times 10^{-1}$ \\
1698 &900057 &ISONW\#1 &158.50160 &57.77004 &2001$-$05$-$17T18:29:38& 52046.8 & 72974.50& 5.72$\times 10^{-1}$ \\
1699 &900058 &ISONW\#2 &158.33333 &57.62089 &2001$-$04$-$30T10:59:38& 52029.5 & 40735.17& 5.74$\times 10^{-1}$ \\
1866 &200094 &LYNDS1551 &67.88639 &18.13492 &2001$-$07$-$23T05:10:11& 52113.2 & 78926.84& 1.86$\times 10^{1}$ \\
1867 &200095 &CHAINORTHCLOUD &167.53415 &$-$76.57798 &2001$-$07$-$02T06:23:33& 52092.3 & 66292.12& 8.62 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{0}
\begin{table*}
\begin{center}
\caption{Field Properties in the BMW-C Sample, Continued.}
\begin{tabular}{rllcccccc}
\hline
\hline
\noalign{\smallskip}
ID & Sequence & Target Name & Nominal& Nominal Dec & Observation Date & MJD & Effective & Average $N_{\rm H}$ \\
({\tt OBSID}) & ({\tt SEQ\_NUM}) & ({\tt OBJECT})& RA ({\tt RA\_NOM}) & ({\tt DEC\_NOM}) & ({\tt DATE\_OBS}) & & Exp.\ & ({\tt NH\_WAVG)} \\
& & & (J2000) & (J2000) & & & ({\tt TEFF}, s) & ($10^{20}$ cm$^{-2}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
1873 &200101 &M67 &132.84630 &11.82625 &2001$-$05$-$31T11:19:36& 52060.5 & 46248.46& 3.94 \\
1874 &200102 &ROSETTEFIELD1 &97.97012 &4.92848 &2001$-$01$-$05T11:54:17& 51914.5 & 14450.83& 7.33$\times 10^{1}$ \\
1875 &200103 &ROSETTEFIELD2 &98.17007 &4.71265 &2001$-$01$-$05T17:47:55& 51914.7 & 19506.63& 7.47$\times 10^{1}$ \\
1876 &200104 &ROSETTEFIELD3 &98.32136 &4.57847 &2001$-$01$-$05T23:29:45& 51915.0 & 19408.68& 7.47$\times 10^{1}$ \\
1877 &200105 &ROSETTEFIELD4 &98.57216 &4.46292 &2001$-$01$-$06T05:11:35& 51915.2 & 19509.79& 7.41$\times 10^{1}$ \\
1878 &200106 &NGC\_2024 &85.44305 &$-$1.91993 &2001$-$08$-$08T06:37:10& 52129.3 & 74671.84& 2.14$\times 10^{1}$ \\
1881 &200109 &NGC346 &14.76239 &$-$72.17383 &2001$-$05$-$15T01:55:17& 52044.1 & 98670.55& 6.23 \\
1882 &200110 &MONOCEROSR2 &91.96145 &$-$6.38078 &2000$-$12$-$02T23:14:19& 51881.0 & 95348.23& 2.46$\times 10^{1}$ \\
1893 &200121 &S106FIR &306.85522 &37.37556 &2001$-$11$-$03T00:08:14& 52216.0 & 44387.49& 1.18$\times 10^{2}$ \\
1934 &400147 &XTEJ1723$-$376 &260.90622 &$-$37.66724 &2001$-$09$-$04T06:40:52& 52156.3 & 28592.80& 1.43$\times 10^{2}$ \\
2232 &900059 &HDF$-$N &189.14977 &62.24391 &2001$-$02$-$19T14:24:58& 51959.6 & 129242.30& 1.49 \\
2233 &900060 &HDF$-$N &189.14841 &62.24338 &2001$-$02$-$22T03:44:27& 51962.2 & 62157.43& 1.49 \\
2234 &900061 &HDF$-$N &189.14441 &62.24145 &2001$-$03$-$02T02:00:29& 51970.1 & 162255.00& 1.49 \\
2235 &900062 &XMM13HR$-$FIELD4&203.57935 &37.82424 &2001$-$06$-$08T10:37:06& 52068.4 & 30002.42& 8.49$\times 10^{-1}$ \\
2236 &900063 &XMM13HR$-$FIELD1&203.55498 &37.97983 &2001$-$06$-$08T19:25:34& 52068.8 & 29800.20& 8.31$\times 10^{-1}$ \\
2237 &900064 &XMM13HR$-$FIELD2&203.77635 &37.84336 &2001$-$06$-$09T04:01:09& 52069.2 & 29797.05& 8.47$\times 10^{-1}$ \\
2238 &900065 &XMM13HR$-$FIELD3&203.75237 &37.99891 &2001$-$06$-$09T12:36:44& 52069.5 & 18884.53& 8.32$\times 10^{-1}$ \\
2239 &900066 &CDFS &53.11699 &$-$27.81122 &2000$-$12$-$23T17:28:01& 51901.7 & 129740.39& 9.00$\times 10^{-1}$ \\
2240 &900067 &CADIS01HFIELD &26.90029 &2.32900 &2001$-$01$-$26T18:36:47& 51935.8 & 26702.29& 3.03 \\
2252 &900079 &WHDF &5.63854 &0.34335 &2001$-$01$-$06T11:36:05& 51915.5 & 71220.99& 2.67 \\
2254 &900081 &3C295 &212.82706 &52.20298 &2001$-$05$-$18T15:25:59& 52047.6 & 89236.41& 1.33 \\
2255 &900082 &NGC55 &3.79150 &$-$39.22025 &2001$-$09$-$11T06:25:05& 52163.3 & 58576.13& 1.72 \\
2267 &900094 &GCS20 &266.17123 &$-$29.27356 &2001$-$07$-$19T10:01:48& 52109.4 & 10208.63& 1.19$\times 10^{2}$ \\
2268 &900095 &GCS21 &265.98087 &$-$29.17148 &2001$-$07$-$20T04:37:11& 52110.2 & 10811.74& 1.16$\times 10^{2}$ \\
2270 &900097 &GCS22 &266.24515 &$-$29.54168 &2001$-$07$-$20T08:00:49& 52110.3 & 10622.17& 1.23$\times 10^{2}$ \\
2272 &900099 &GCS23 &266.05414 &$-$29.43966 &2001$-$07$-$20T11:12:40& 52110.5 & 11611.05& 1.26$\times 10^{2}$ \\
2273 &900100 &GCS10 &266.71024 &$-$28.87570 &2001$-$07$-$18T00:48:28& 52108.0 & 11611.05& 1.11$\times 10^{2}$ \\
2274 &900101 &GCS3 &266.67642 &$-$28.17316 &2001$-$07$-$16T08:44:25& 52106.4 & 10426.28& 1.29$\times 10^{2}$ \\
2275 &900102 &GCS24 &265.86353 &$-$29.33749 &2001$-$07$-$20T14:41:10& 52110.6 & 11598.25& 1.22$\times 10^{2}$ \\
2276 &900103 &GCS11 &266.52002 &$-$28.77435 &2001$-$07$-$18T04:16:58& 52108.2 & 11520.58& 1.13$\times 10^{2}$ \\
2277 &900104 &GCS4 &266.94061 &$-$28.54206 &2001$-$07$-$16T11:52:55& 52106.5 & 10426.28& 1.22$\times 10^{2}$ \\
2278 &900105 &GCS25 &266.12796 &$-$29.70796 &2001$-$07$-$20T18:09:40& 52110.8 & 10030.49& 1.30$\times 10^{2}$ \\
2279 &900106 &GCS12 &266.33029 &$-$28.67280 &2001$-$07$-$18T07:45:28& 52108.3 & 11611.05& 1.10$\times 10^{2}$ \\
2280 &900107 &GCS5 &266.75079 &$-$28.44106 &2001$-$07$-$16T15:01:25& 52106.6 & 10426.28& 1.26$\times 10^{2}$ \\
2281 &900108 &GCS26 &265.93677 &$-$29.60579 &2001$-$07$-$20T21:38:10& 52110.9 & 8219.31 & 1.32$\times 10^{2}$ \\
2282 &900109 &GCS13 &266.59457 &$-$29.04224 &2001$-$07$-$18T11:13:58& 52108.5 & 10625.33& 1.07$\times 10^{2}$ \\
2283 &900110 &GCS27 &265.74591 &$-$29.50336 &2001$-$07$-$21T01:06:39& 52111.0 & 11614.25& 1.29$\times 10^{2}$ \\
2284 &900111 &GCS14 &266.40414 &$-$28.94089 &2001$-$07$-$18T14:25:48& 52108.6 & 10625.33& 1.09$\times 10^{2}$ \\
2285 &900112 &GCS6 &266.56137 &$-$28.33985 &2001$-$07$-$16T18:09:55& 52106.8 & 10410.49& 1.22$\times 10^{2}$ \\
2286 &900113 &GCS28 &266.01025 &$-$29.87408 &2001$-$07$-$21T04:35:09& 52111.2 & 11614.25& 1.37$\times 10^{2}$ \\
2287 &900114 &GCS15 &266.21411 &$-$28.83905 &2001$-$07$-$18T17:37:38& 52108.7 & 10625.33& 1.07$\times 10^{2}$ \\
2288 &900115 &GCS7 &266.82550 &$-$28.70883 &2001$-$07$-$17T14:11:51& 52107.6 & 11493.24& 1.17$\times 10^{2}$ \\
2289 &900116 &GCS29 &265.81894 &$-$29.77175 &2001$-$07$-$21T08:03:39& 52111.3 & 11614.25& 1.40$\times 10^{2}$ \\
2290 &900117 &GCS30 &265.62793 &$-$29.66920 &2001$-$07$-$21T11:32:10& 52111.5 & 11614.25& 1.36$\times 10^{2}$ \\
2291 &900118 &GCS16 &266.47845 &$-$29.20889 &2001$-$07$-$18T20:49:28& 52108.9 & 10625.33& 1.11$\times 10^{2}$ \\
2292 &900119 &GCS8 &266.63556 &$-$28.60780 &2001$-$07$-$17T17:51:28& 52107.7 & 11604.73& 1.19$\times 10^{2}$ \\
2293 &900120 &GCS17 &266.28790 &$-$29.10730 &2001$-$07$-$19T00:01:18& 52109.0 & 11118.21& 1.13$\times 10^{2}$ \\
2294 &900121 &GCS9 &266.44598 &$-$28.50635 &2001$-$07$-$17T21:19:58& 52107.9 & 11611.05& 1.16$\times 10^{2}$ \\
2295 &900122 &GCS18 &266.09775 &$-$29.00538 &2001$-$07$-$19T03:21:28& 52109.1 & 11118.21& 1.10$\times 10^{2}$ \\
2296 &900123 &GCS19 &266.36200 &$-$29.37541 &2001$-$07$-$19T06:41:38& 52109.3 & 11118.21& 1.17$\times 10^{2}$ \\
2298 &900125 &GALACTICPLANE1 &280.88382 &$-$3.90722 &2001$-$05$-$20T08:44:32& 52049.4 & 94214.80& 2.00$\times 10^{2}$ \\
2300 &900127 &1RXSJ104047.4$-$7&160.18372 &$-$70.78441 &2001$-$09$-$11T02:20:34& 52163.1 & 12802.17& 1.05$\times 10^{1}$ \\
2312 &900066 &CDFS &53.11796 &$-$27.81074 &2000$-$12$-$13T03:28:03& 51891.1 & 123686.25& 9.00$\times 10^{-1}$ \\
2313 &900066 &CDFS &53.11699 &$-$27.81122 &2000$-$12$-$21T02:08:43& 51899.1 & 130389.61& 9.00$\times 10^{-1}$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{0}
\begin{table*}
\begin{center}
\caption{Field Properties in the BMW-C Sample, Continued.}
\begin{tabular}{rllcccccc}
\hline
\hline
\noalign{\smallskip}
ID & Sequence & Target Name & Nominal& Nominal Dec & Observation Date & MJD & Effective & Average $N_{\rm H}$ \\
({\tt OBSID}) & ({\tt SEQ\_NUM}) & ({\tt OBJECT})& RA ({\tt RA\_NOM}) & ({\tt DEC\_NOM}) & ({\tt DATE\_OBS}) & & Exp.\ & ({\tt NH\_WAVG)} \\
& & & (J2000) & (J2000) & & & ({\tt TEFF}, s) & ($10^{20}$ cm$^{-2}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
2344 &900030 &HDFNORTH &189.26715 &62.21756 &2000$-$11$-$24T05:41:51& 51872.2 & 66696.32& 1.49 \\
2386 &900030 &HDFNORTH &189.26720 &62.21756 &2000$-$11$-$20T05:39:48& 51868.2 & 9842.40 & 1.49 \\
2405 &900031 &AXAFSOUTHERNDEE&53.12021 &$-$27.81259 &2000$-$12$-$11T08:14:17& 51889.3 & 56702.80& 9.01$\times 10^{-1}$ \\
2406 &900066 &CDFS &53.11819 &$-$27.81062 &2000$-$12$-$10T23:35:12& 51889.0 & 29686.29& 9.00$\times 10^{-1}$ \\
2409 &900066 &CDFS &53.11699 &$-$27.81123 &2000$-$12$-$19T03:55:35& 51897.2 & 68982.46& 9.00$\times 10^{-1}$ \\
2421 &900061 &HDF$-$N &189.14301 &62.24069 &2001$-$03$-$04T16:52:28& 51972.7 & 61648.85& 1.49 \\
2423 &900060 &HDF$-$N &189.14841 &62.24339 &2001$-$02$-$23T06:57:57& 51963.3 & 68419.52& 1.49 \\
2550 &200158 &NGC2264 &100.19905 &9.84542 &2002$-$02$-$09T05:10:47& 52314.2 & 48137.83& 4.70$\times 10^{1}$ \\
2553 &200161 &MADDALENA'SCLOU&102.27326 &$-$4.56858 &2002$-$02$-$08T20:39:21& 52313.9 & 20757.27& 7.54$\times 10^{1}$ \\
2556 &200164 &RCW38 &134.83614 &$-$47.50389 &2001$-$12$-$10T10:14:38& 52253.4 & 95599.90& 8.93$\times 10^{1}$ \\
2562 &200170 &IRAM04191 &65.48610 &15.49148 &2002$-$02$-$09T19:08:44& 52314.8 & 18918.96& 1.80$\times 10^{1}$ \\
3293 &900132 &HDF$-$N &189.21547 &62.21753 &2001$-$11$-$13T15:04:42& 52226.6 & 161250.19& 1.49 \\
3294 &900133 &HDF$-$N &189.15369 &62.24493 &2002$-$02$-$14T03:22:07& 52319.1 & 166971.55& 1.49 \\
3305 &900144 &GROTH$-$WESTPHALF&214.42932 &52.47367 &2002$-$08$-$11T21:43:57& 52497.9 & 29405.28& 1.28 \\
3343 &900182 &LH$-$NW$-$6 &158.33452 &57.92092 &2002$-$05$-$03T09:11:41& 52397.4 & 31812.35& 5.91$\times 10^{-1}$ \\
3344 &900183 &LH$-$NW$-$7 &158.18425 &57.77091 &2002$-$05$-$01T20:03:06& 52395.8 & 38545.61& 5.89$\times 10^{-1}$ \\
3345 &900184 &LH$-$NW$-$4 &158.01767 &57.62109 &2002$-$04$-$29T03:23:45& 52393.1 & 38466.62& 5.89$\times 10^{-1}$ \\
3346 &900185 &LH$-$NW$-$5 &158.50150 &57.47105 &2002$-$04$-$30T02:03:59& 52394.1 & 38207.51& 5.73$\times 10^{-1}$ \\
3347 &900186 &LH$-$NW$-$8 &158.65106 &57.92095 &2002$-$05$-$02T14:16:27& 52396.6 & 38463.43& 5.72$\times 10^{-1}$ \\
3348 &900187 &LH$-$NW$-$9 &158.80956 &57.77088 &2002$-$05$-$04T11:01:47& 52398.5 & 39521.85& 5.69$\times 10^{-1}$ \\
3388 &900132 &HDF$-$N &189.21545 &62.21752 &2001$-$11$-$16T05:41:16& 52229.2 & 49560.66& 1.49 \\
3389 &900132 &HDF$-$N &189.21523 &62.21782 &2001$-$11$-$21T14:16:05& 52234.6 & 102258.35& 1.49 \\
3390 &900133 &HDF$-$N &189.15369 &62.24493 &2002$-$02$-$16T18:58:56& 52321.8 & 161733.11& 1.49 \\
3391 &900133 &HDF$-$N &189.15369 &62.24493 &2002$-$02$-$22T01:52:24& 52327.1 & 164732.22& 1.49 \\
3408 &900132 &HDF$-$N &189.21547 &62.21753 &2001$-$11$-$17T01:09:01& 52230.0 & 66213.38& 1.49 \\
3409 &900133 &HDF$-$N &189.15367 &62.24494 &2002$-$02$-$12T10:59:50& 52317.5 & 82233.02& 1.49 \\
4357 &900144 &GROTH$-$WESTPHALF&214.42932 &52.47367 &2002$-$08$-$12T22:32:00& 52498.9 & 84361.27& 1.28 \\
4365 &900144 &GROTH$-$WESTPHALF&214.42934 &52.47367 &2002$-$08$-$21T10:56:53& 52507.5 & 58854.59& 1.28 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Data processing\label{bmwc:dataproc}}
Our pipeline is a combination of {\tt Ciao}\footnote{http://cxc.harvard.edu/ciao/}
tasks (data screening,
image reduction, exposure map creation), IDL programs (additional data
screening, wavelet detection core), tasks in the
{\tt HEAsoft}
package and UNIX shell scripts (drivers and house-keeping) that reduces and
analyses the Level 2 (L2) data generated by the {\it Chandra} X-ray Center
(CXC) standard data processing in a uniform fashion.
The data (event list, the aspect solution, the aspect offset, and the
bad pixel files) were downloaded using the {\it Chandra} Search and Retrieval
tool (ChaSeR) from the {\it Chandra} Data Archive
(CDA)\footnote{http://cxc.harvard.edu/cda/s+r.html.}, and
were filtered to only include the standard event grades
[Advanced Satellite for Cosmology and Astrophysics ({\it ASCA}) grades 0, 2, 3, 4, 6].
Since several {\it Chandra} data sets in the archive suffer from known
aspect offsets as large as 2$\arcsec$, we checked and corrected
for this problem using the {\tt fix\_batch} Perl script by
Tom Aldcroft\footnote{http://cxc.harvard.edu/cal/ASPECT/fix\_offset/fix\_offset.cgi.}.
We applied energy filters to the offset-corrected L2 event list,
and created soft (SB, 0.5--2.0\,keV), hard (HB, 2.0--7.0\,keV) and total
(FB, 0.5--7.0\,keV) band event files.
The upper limit on our hard and total energy bands was chosen at
7\,keV because at higher energy the background increases and the effective
area decreases, producing lower signal-to-noise (S/N) data.
Our results in the 0.5--10\,keV band are then extrapolations
from our findings in the 0.5--7\,keV range
(see Sects.~\ref{bmwc:catalogue}, and \ref{bmwc:fluxes}).
Since the source detection strongly depends on the background rate,
the data obtained during background flares were carefully removed.
We created light curves of the counts with a time resolution chosen to yield
$\sim 400$ counts per time bin, which typically corresponds to
$\sim 1.4$\,ks and $\sim 1.8$\,ks in the soft and hard bands, respectively,
and excluded all time intervals during which the rate exceeded
the mean rate by more than 3$\sigma$.
The mean fraction of exposure lost to background flares is 4\%,
comparable with 5\% given by the ChaMP
collaboration for FI chips (\citealt{Champ3,Champ1}).
The effective exposure times reported in the catalogue
({\tt TEFF}, see Table~\ref{bmwc:fieldprops} and distribution
shown in Fig.~\ref{bmwc:histo_teff}, median 31.8\,ks),
reflect these corrections.
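A minimal sketch of this clipping step (in Python with NumPy; the function name and the one-sided cut are our illustrative choices):
\begin{verbatim}
import numpy as np

def flare_free_bins(event_times, counts_per_bin=400, nsigma=3.0):
    # Bin the events so that each bin holds ~counts_per_bin counts,
    # then flag bins whose rate exceeds the mean by > nsigma*std.
    t = np.sort(np.asarray(event_times, dtype=float))
    nbins = max(1, t.size // counts_per_bin)
    edges = np.linspace(t[0], t[-1], nbins + 1)
    counts, _ = np.histogram(t, bins=edges)
    rates = counts / np.diff(edges)
    good = rates <= rates.mean() + nsigma * rates.std()
    return edges, good          # bin edges and good-time mask
\end{verbatim}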
Generally, hot/flickering pixels and bad columns are listed in the
calibration database and eliminated by the {\tt Ciao} tools.
However, some flickering pixels remain, probably due to the afterglow of
cosmic rays or charged particles hitting the CCDs, and they
were identified at this point.
Following \citet{Tozziea01}, we defined as flickering each pixel that
registered two events within the time of two consecutive frames.
In each observation we eliminated all the events registered by such pixels,
a procedure which is safe when no bright sources are present, as is
the case here.
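In practice this criterion reduces to a per-pixel scan of the event list, as in this sketch (column names are illustrative; \texttt{frame} is the exposure frame number of each event, and the cut on the frame difference is our reading of the criterion):
\begin{verbatim}
import numpy as np

def flickering_pixels(x, y, frame):
    # A pixel is flagged if it registers two events within the
    # time of two consecutive frames (frame difference <= 2 here).
    order = np.lexsort((frame, y, x))
    xs, ys, fs = x[order], y[order], frame[order]
    same = (xs[1:] == xs[:-1]) & (ys[1:] == ys[:-1])
    close = (fs[1:] - fs[:-1]) <= 2
    bad = same & close
    return set(zip(xs[1:][bad].tolist(), ys[1:][bad].tolist()))
\end{verbatim}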
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f2.eps}}
\caption{Distribution of effective exposure
times for the 136 observations.}
\label{bmwc:histo_teff}
\end{figure}
\subsection{Total background map creation\label{bmwc:bgdmaps}}
%
The wavelet algorithm requires an accurate characterisation of the background
of the images to process (see Sec.~\ref{bmwc:algorithm}),
since the WT detection is carried
out on top of a map representing the image background
(the total background map; \citealt{Lazzatiea99}).
The background in the ACIS-I images, in particular, is the sum of two components,
a cosmic X-ray background
and a particle one (i.e., cosmic-ray induced).
The former suffers from vignetting and can be modelled
as a power law of photon index $\Gamma =1.4$
(\citealt{Marshallea80};
\citealt{Morettiea03}).
The latter is not affected by vignetting, only depends on the temperature
of the electronics, and can be modelled as a flat pattern with an inaccuracy within
10\%\footnote{http://cxc.harvard.edu/cal/Links/Acis/acis/Cal\_prods/bkgrnd/current/.}.
For each field and for each of the three energy bands,
we used the {\tt merge\_all} script to create
exposure maps (adopting an input spectrum of a power law with photon index 1.4)
and flat maps and then combined them.
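Schematically, for each band the total background map is the sum of a vignetted cosmic component, traced by the exposure map, and a flat particle component; a sketch (the two per-band normalisations are illustrative placeholders):
\begin{verbatim}
import numpy as np

def total_background_map(exp_map, cxb_rate, particle_rate):
    # cxb_rate: cosmic-background counts per unit exposure;
    # particle_rate: flat (unvignetted) counts per pixel.
    exp_map = np.asarray(exp_map, dtype=float)
    return cxb_rate * exp_map + particle_rate * np.ones_like(exp_map)
\end{verbatim}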
\section{Source detection and catalogue construction\label{bmwc:detect}}
\subsection{The BMW-C algorithm \label{bmwc:algorithm}}
We used an updated version of the
BMW detection algorithm that supports the analysis of {\it Chandra} ACIS images.
The main steps of the algorithm can be summarised as follows
(full details in \citealt{Lazzatiea98}; \citealt{Lazzatiea99}; \citealt{Campanaea99}).
The first step is the creation of the WT of the input image; the BMW WT
is based on the discrete multi-resolution theory and on the ``\`a{} trous'' algorithm
(\citealt{trous}), unlike continuous--WT-based algorithms, which can
sample more scales at the cost of a longer computing time
(\citealt{Rosatiea95,Grebenevea95,Damianiea97a}).
We used a Mexican hat mother-wavelet, which can be analytically approximated
by the difference of two Gaussian functions (\citealt{Slezakea94}).
%
The WT decomposes each image into a set of sub-images, each of them carrying
the information of the original image at a given scale. This property
makes the WT well suited for the analysis of X-ray images,
where the scale of sources is not constant over the field of view,
because of the dependence of the PSF on the off-axis angle.
We used 7 WT scales $a=[1,2,4,8,16,32,64]$ pixels
to cover a wide range of source sizes, where $a$ is the scale of the WT
(\citealt{Lazzatiea99}).
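A schematic version of this decomposition, using the difference-of-Gaussians approximation of the Mexican hat quoted above (the 2:1 ratio of the two Gaussian widths is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def wt_planes(image, scales=(1, 2, 4, 8, 16, 32, 64)):
    # One coefficient plane per dyadic scale a: the Mexican hat
    # is approximated by the difference of two Gaussians.
    img = np.asarray(image, dtype=float)
    return {a: gaussian_filter(img, a) - gaussian_filter(img, 2.0 * a)
            for a in scales}
\end{verbatim}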
Candidate sources are identified as local maxima above the significance threshold
in the wavelet space at each scale, so that a list is obtained at each scale,
and then a cross-match is performed among the 7 lists to merge them.
At the end of this step, we have a first estimate of source positions
(the pixel with the highest WT coefficient),
source counts (the local maximum of the WT)
and a guess of the source extension (the scale at which the WT is maximized).
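To make these steps concrete, the sketch below emulates the per-scale candidate search in Python. The difference-of-Gaussians stand-in for the Mexican-hat WT, the scale ratio, and the placeholder threshold are our simplifications, not the BMW code.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def candidates_per_scale(image, scales=(1, 2, 4, 8, 16, 32, 64), k=1.6):
    """Return candidate source pixels at each WT scale a."""
    candidates = {}
    for a in scales:
        # Mexican hat ~ difference of two Gaussians (Slezak et al. 1994).
        wt = gaussian_filter(image, a) - gaussian_filter(image, k * a)
        threshold = 5.0 * wt.std()   # placeholder significance cut
        peaks = (wt == maximum_filter(wt, size=3)) & (wt > threshold)
        candidates[a] = list(zip(*np.nonzero(peaks)))
    return candidates
\end{verbatim}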
A critical parameter is the detection threshold which, in the context of WT algorithms,
is usually fixed arbitrarily by the user in terms of expected spurious detections
per field (\citealt{Lazzatiea98}).
The number of expected spurious detections as a function of the threshold
value and for each scale was calculated
by means of Monte Carlo simulations (\citealt{Morettiea02}).
We ran the detection algorithm with a single significance threshold
that corresponds to $\sim 0.1$ spurious detections per scale, hence (with
7 scales) $\sim 0.7$ spurious detections per field for each band in which we performed the detection.
Given our sample of 136 fields, we expect a total of $\sim 95$ spurious
sources in the catalogue.
The final step is the characterisation of the sources by means of a multi-source
$\chi ^2$ minimization with respect to a Gaussian model source in the WT space.
In order to fit the model on a set of independent data, the WT coefficients are
decimated according to a scheme described in full in \citet{Lazzatiea99}.
The wavelet probability is defined as the
confidence level at which a source is not a chance background
fluctuation, given the background statistics and the specific field
exposure time.
This quantity is assessed via the S/N ratio in WT space.
For each WT scale, the noise level is computed through numerical
simulations of blank fields with the corresponding background
(\citealt{Lazzatiea99}), while the signal is the peak of the WT
coefficients corresponding to the source.
Figure~\ref{bmwc:cha_prob} shows the wavelet probability as a
function of WT S/N, and background value.
%
In order to make this significance more easily
comparable to other methods, confidence intervals are approximately expressed in
units of the standard deviation $\sigma$ for which a Gaussian
distributed variable would give an equal probability of spurious
detection (68\%: $1\,\sigma$; 95\%: $2\,\sigma$, etc.).
So, the values of the wavelet probability
in the catalogue column {\tt WTPROBAB} represent the number of
$\sigma$'s corresponding to the confidence level of that specific source.
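This conversion is standard statistics rather than pipeline code; a minimal sketch is:
\begin{verbatim}
# Convert a two-sided Gaussian confidence level into the equivalent
# number of sigmas, e.g. 0.6827 -> ~1, 0.9545 -> ~2.
from scipy.stats import norm

def confidence_to_sigma(confidence):
    # P(|x| < n_sigma) = confidence for a standard normal variable.
    return norm.ppf(0.5 * (1.0 + confidence))
\end{verbatim}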
%
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f3.eps}}
\caption{ Wavelet probability (or source significance in number of $\sigma$s)
as a function of WT S/N and background value (counts, see Sect.~\ref{bmwc:algorithm}). }
\label{bmwc:cha_prob}
\end{figure}
%
\setcounter{figure}{3}
\begin{figure*}
\centering
\includegraphics[width=19cm,height=8.5cm]{sf4.eps}
\caption[Detection Results]{Example of Detection. {\bf Left}: the $\eta$ Carinae
full field ({\tt OBSID} 50, {\tt SEQ\_NUM} 200019) at half resolution.
The FOV is $\sim 33\arcmin$. The sources are represented
with a size which is three times their Gaussian width.
Note the complicated extended structure at the centre and
the spurious detections along a readout streak (see Sect.~\ref{bmwc:algorithm}).
Crosses mark sources that the detection algorithm classifies as extended
(e.g.\ bottom-left corner).
{\bf Right}: central portion of the field at full resolution.
The FOV is $\sim 8\farcm4$ and the sources are represented with a size
which is five times their Gaussian width. }
\label{bmwc:origimas}
\end{figure*}
%
\begin{figure*}
\centering
\includegraphics[width=19cm,height=8.5cm]{sf5.eps}
\caption[Detection Results 2]{Example of manual cleaning.
The spurious sources along the readout streak were eliminated and the
sources in the central portion of the image
(contained within the box and not shown) were flagged.}
\label{bmwc:cleanimas}
\end{figure*}
We ran the detection algorithm on the central $\sim 8\arcmin$ portion of the
source image at full resolution (rebin $=1$, 1 pixel $\sim 0\farcs49$)
and on the image rebinned by a factor of two outside (rebin $=2$, 1 pixel $\sim 0\farcs98$).
This strategy allowed us to optimize computer time, while preserving spatial
information. Indeed, outside the $\sim 8\arcmin$ radius circle,
the PSF radius that encircles 90\% of the energy is $\ga 6\farcs5$, i.e., $\ga6.6$
rebinned pixels in the soft band
and $\ga 7\farcs4$, i.e., $\ga7.5$ rebinned pixels in the hard band.
In detail,
a) we ran the detection on the images at rebin 2;
b) we ran the detection in their inner $512\times512$ part
(rebinned pixels, corresponding to $\sim 8\farcm4 \times 8\farcm4$)
at the full resolution;
c) we excluded the $480\times480$ pixel central part in the analysis at rebin 2
(rebinned pixels, corresponding to $\sim 7\farcm8\times 7\farcm8$);
d) we cross-matched the positions of the sources found at rebin 1 and 2 to
exclude common double entries (sources were considered coincident if their
distance was less than 6 times their width).
We repeated this procedure for each of the three energy bands, and
cross-matched the resulting source coordinates to form the definitive list
(for coincident sources, the coordinates of the highest S/N one were kept).
A source extension, defined as the width of the best-fitting Gaussian model,
was calculated for each band, and the extension corresponding to the highest-S/N
detection among the three bands and the inner and outer regions was kept.
An example of the products of the pipeline is shown in
Figs.~\ref{bmwc:origimas} and \ref{bmwc:cleanimas}, which represent the
$\eta$ Carinae field ({\tt OBSID} 50, {\tt SEQ\_NUM} 200019), one of the most problematic
observations since it has a complicated extended structure at the centre as well as
other data problems.
Figure~\ref{bmwc:origimas} (left) is the full image at rebin 2, with a
FOV of $\sim 33\arcmin$. The sources are represented
with circles with a size which is three times their Gaussian width.
Note the spurious detections along a readout streak (photons detected during
the readout in a field containing a bright source are clocked out in the wrong row
and so have incorrect CHIPY values and show up as a streak along the column).
Crosses mark sources that the detection algorithm classifies as extended.
Figure~\ref{bmwc:origimas} (right) shows the central portion of the field at full
resolution. The FOV is $\sim 8\farcm4$, and the sources are represented with a size
which is five times their Gaussian width for better presentation.
\subsection{BMW-C catalogue\label{bmwc:catalogue}}
The pipeline produced a catalogue of source positions, count rates,
counts, extensions, and relative errors, as well as the additional
information drawn from the headers of the original files for a total of 21325 sources.
%
Source counts were corrected for vignetting using the
exposure maps (see Sect.~\ref{bmwc:bgdmaps}),
i.e., by comparing the image counts at each source position with the average counts
of the normalized exposure map within a PSF range. Furthermore, the source counts were
corrected for PSF fitting, i.e., for using a Gaussian approximation of the PSF to fit
the sources in the WT space. This latter correction was calculated by running the detection
on the psfsize table in {\tt CALDB} (psfsize\_2000830.fits),
using the PSF images at $\sim 1.50$\,keV for the soft band PSF
and at $\sim 4.51$\,keV for the hard band PSF, and calculating the percentage of counts lost
with the Gaussian approximation in the WT detection.
Errors were calculated in two ways: following \citet{Grebenevea95}, appropriate
for sufficiently high values of the background and source counts
($\ga 5\times 10^{-2}$ counts/pixel), and
using basic statistic expressions.
In the latter case, assuming a Gaussian-shaped distribution of $N$ photons
of width ({\tt WIDTH}) $\Delta$,
the positional errors in $x$ and $y$ ({\tt X\_POS\_E, Y\_POS\_E}) are limited by
$\sigma_x \approx \sigma_y \approx \Delta / \sqrt{N -1}$,
the total positional error ({\tt T\_POS\_ER}) is
$\sigma_t \approx \sqrt{{{\sigma_x}^2} + {{\sigma_y}^2}}$,
the error on the total number of counts is given by Poisson statistics
$\sigma_N \approx \sqrt{N}$,
and the intrinsic limit on the width estimate is
$\sigma_{\Delta} \approx 0.5 \, \Delta / \sqrt{N-1}$
(see detailed discussion in \citealt{Lazzatiea99}).
The largest of the values derived using the two methods is adopted.
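The statistical expressions above translate directly into code; the sketch below uses illustrative names and returns the four error estimates for a source of $N$ photons and width $\Delta$ (in arcsec).
\begin{verbatim}
import math

def statistical_errors(n_photons, width):
    sigma_xy  = width / math.sqrt(n_photons - 1)    # X_POS_E ~ Y_POS_E
    sigma_tot = math.sqrt(2.0) * sigma_xy           # T_POS_ER
    sigma_n   = math.sqrt(n_photons)                # Poisson count error
    sigma_w   = 0.5 * width / math.sqrt(n_photons - 1)  # WIDTH_ER limit
    return sigma_xy, sigma_tot, sigma_n, sigma_w
\end{verbatim}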
As an example, the distribution of the total positional error is shown
in Fig.~\ref{bmwc:poserr} for the 16834 sources
that do not require a more in-depth, non-automated analysis (see below).
We note that the absolute source position in {\it Chandra} observations
has an uncertainty radius of 0\farcs6 at 90\% confidence level (CL), and of 0\farcs8 at
99\% CL\footnote{The most updated information on the {\it Chandra} absolute positional
accuracy can be found at http://cxc.harvard.edu/cal/ASPECT/celmon/.}.
These values were calculated by measuring the distances between the {\it Chandra} X-ray source
positions and the corresponding optical/radio counterpart positions from the Tycho2
\citep{Tycho2000} or the International Celestial Reference Frame \citep[ICRS, ][]{ICRS}
catalogues, for sources within 3\,$\arcmin$ from the aimpoint
and with the Science Instrument Module Z-axis (SIM-Z)
at the nominal detector value (for large off-nominal SIM-Z, the
observations can suffer an additional aspect offset of up to 0\farcs5).
For each source we computed two fluxes in the 0.5--10\,keV range, assuming that
a) the source is extragalactic and its flux was corrected for Galactic
absorption ({\tt FLUX1}) and
b) the source is subject to no absorption ({\tt FLUX2}).
In general, the count-rate--to--flux conversion factors (CF) are a function of
the assumed shape of the spectrum of the source.
%
To obtain {\tt FLUX1}, we first calculated an average line-of-sight column density
$N_{\rm H}$ according to \citet{DickeyL90} with the {\tt nH} tool in {\tt HEASoft}
({\tt NH\_WAVG}), for each catalogue field.
We show the distribution of column densities in Fig.~\ref{bmwc:histo_nh}.
Then we performed simulations with {\tt XSPEC} (\citealt{XSPEC})
assuming as input model an absorbed ({\tt wabs}) power law with
photon index 2.0 (a Crab spectrum, for direct comparison with other
work, e.g., the BMW-HRI catalogue; \citealt{Panzeraea03})
for a range of column densities spanning these values.
We report the model 0.5--7.0\,keV count rate to 0.5--10\,keV flux
CFs in the catalogue column {\tt COR\_FAC1},
while the flux was calculated as
{\tt FLUX1} = {\tt COR\_FAC1} $\times$ {\tt T\_FT\_CTS} $/$ {\tt EXPOSURE},
where {\tt T\_FT\_CTS} are the WT total counts and {\tt EXPOSURE} is the total
exposure time.
%
To calculate {\tt FLUX2} an unabsorbed ($N_{\rm H}=0$) spectrum was simulated, and
the 0.5--7.0\,keV count rate to 0.5--10\,keV flux conversion factor
is CF2$=8.23\times10^{-12}$ erg cm$^{-2}$ cts$^{-1}$.
For comparison with other work, we also report approximate fluxes in the three bands
({\tt T\_FLUX}, {\tt S\_FLUX}, and {\tt H\_FLUX}) calculated using a count-rate--to--flux
conversion factor CF0 $=1\times10^{-11}$ erg cm$^{-2}$ cts$^{-1}$.
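Schematically, the two flux estimates reduce to the following (names are illustrative; {\tt COR\_FAC1} is the tabulated field-dependent conversion factor and CF2 the unabsorbed one quoted above):
\begin{verbatim}
CF2 = 8.23e-12  # erg cm^-2 cts^-1, 0.5-7 keV rate to 0.5-10 keV flux

def fluxes(total_fit_counts, exposure, cor_fac1):
    rate = total_fit_counts / exposure   # cts/s in the 0.5-7 keV band
    flux1 = cor_fac1 * rate              # corrected for Galactic N_H
    flux2 = CF2 * rate                   # assuming no absorption
    return flux1, flux2
\end{verbatim}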
\setcounter{figure}{5}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f6.eps}}
\caption{Distribution of the total positional error for the
16834 BMW-{\it Chandra} sources that do not require a
non-automated analysis. The mean is $0\farcs45$ and
the median $0\farcs36$.}
\label{bmwc:poserr}
\end{figure}
%
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f7.eps}}
\caption{Distribution of the Galactic column density of the
136 observations according to \citet{DickeyL90}.}
\label{bmwc:histo_nh}
\end{figure}
%
\begin{table}
\begin{center}
\caption{Energy Bands and Background values.}
\label{bmwc:band_bgd}
\begin{tabular}{lcc}
\hline
\hline
\noalign{\smallskip}
Band & Energy & Background value$^{\mathrm{a}}$ \\
& (keV) & (counts s$^{-1}$ chip$^{-1}$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
SB & 0.5--2.0 & ... \\
HB & 2.0--7.0 & ... \\
FB & 0.5--7.0 & ... \\
SB1 & 0.5--1.0 & 0.0336 \\
SB2 & 1.0--2.0 & 0.0417 \\
HB1 & 2.0--4.0 & 0.0508 \\
HB2 & 4.0--7.0 & 0.0656 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\begin{list}{}{}
\item[$^{\mathrm{a}}$] ACIS Background Memos:
Baganoff (1999); Markevitch (2001).
\end{list}
\end{table}
Each field was visually inspected to identify and remove
obviously spurious detections (such as image streaks associated with
bright pointed sources and crowded fields, an example of which is shown
in Figs.~\ref{bmwc:origimas} and \ref{bmwc:cleanimas}).
A number of checks were performed on the raw catalogue and the results
stored in the {\tt FL\_CHECK} catalogue column, as follows.
\begin{enumerate}
\item We eliminated sources that presented negative fitted counts
(an instance that may occur if the WT fit does not converge).
\item When the errors on position, the total positional error,
and the error on the width
({\tt X\_POS\_ER, Y\_POS\_ER, T\_POS\_ER, WIDTH\_ER}, respectively),
did not converge, they were set to their statistical value
({\tt FL\_CHECK $= -2$}),
if possible, otherwise the sources were eliminated.
\item When the error on the width exceeded the width value ({\tt WIDTH}),
it was set to the statistical value
({\tt FL\_CHECK $= -2$}),
if possible, otherwise it was left unmodified
({\tt FL\_CHECK $= -3$}).
\item When the error on the rate ({\tt T\_CR\_ER, S\_CR\_ER, H\_CR\_ER})
exceeded the rate value ({\tt T\_CTRATE, S\_CTRATE, H\_CTRATE}),
it was set to its statistical value
({\tt FL\_CHECK $= -2$}),
if possible, otherwise it was left unmodified ({\tt FL\_CHECK $= -4$}).
\item When the fitted counts ({\tt T\_FT\_CTS, S\_FT\_CTS, H\_FT\_CTS}),
and count rates ({\tt T\_CTRATE, S\_CTRATE, H\_CTRATE}),
and consequently, the fluxes ({\tt FLUX1, FLUX2}),
resulted infinite (because the WT fit did not converge),
they were set to $-9999.9$.
\end{enumerate}
Each field was visually inspected again to exclude/flag
problematic portions, and the results
stored in the {\tt FL\_CHCK2} catalogue column. In particular, extended pointed
targets were flagged ({\tt FL\_CHCK2 $= -1$}).
Sources within a radius of 30$\arcsec$ from the target position
(as given by {\tt RA\_TARG} and {\tt DEC\_TARG} original header fields)
of all fields not in surveys (73) were flagged with {\tt FL\_CHCK2 $= 10 \times $FL\_CHCK2}.
The parameters of the final catalogue are listed in Table~\ref{bmwc:params}.
The full catalogue contains 21325 sources,
16834 of which do not require a more in-depth, non-automated analysis
(i.e.\ not associated with bright and/or extended sources at the centre of the field).
The latter are obtained by applying the conditions
{\tt FL\_CHCK2 $\neq -1$} and {\tt FL\_CHCK2 $\neq -10$}, and include the pointed ones.
The final number of sources not associated with pointed targets is 16758
(see Sect.~\ref{bmwc:ssc}).
\setcounter{table}{2}
\begin{table*}
\begin{center}
\caption{BMW-C Parameters}
\label{bmwc:params}
\begin{tabular}{rllccccc}
\hline
\hline
\noalign{\smallskip}
Column & BMWC Parameter & Description \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
1 & UNIQ\_NUM & source unique number (internal reference) \\
2 & SRC\_NAME & source IAU name BMCHHMMSS.S$\pm$DDMMSS \\
3 & SEQ\_NUM & sequence Number of the Observation (e.g., 900030) \\
4 & OBJECT & target name (e.g., HDFNORTH) \\
5 & OBS\_ID & observation ID Number of the Target (e.g., 2386)\\
6 & EXPOSURE & total exposure time (s) \\
7 & EXPTIME & commanded exposure time (s) \\
8 & ASCDSVER & ASCDS version number (e.g., 6.0.1)\\
9 & DATE\_OBS & date and time of observation start (e.g., 2000-11-20T05:39:48)\\
10 & DATE\_END & date and time of observation stop (e.g., 2000-11-20T08:54:42)\\
11 & RA\_NOM & nominal target right ascension (RA; J2000, degrees) \\
12 & DEC\_NOM & nominal target declination (DEC; J2000, degrees) \\
13 & OBS\_MODE & observation mode (e.g., POINTING) \\
14 & DATAMODE & data mode (e.g., FAINT) \\
15 & READMODE & read mode (e.g., TIMED) \\
16 & TEFF & effective exposure (s, after flares are removed) \\
17 & T\_BCK & total-band mean background in the central $1024\times1024$ pix region (cts/pix)\\
18 & S\_BCK & soft-band mean background in the central $1024\times1024$ pix region (cts/pix)\\
19 & H\_BCK & hard-band mean background in the central $1024\times1024$ pix region (cts/pix)\\
20 & NH\_WAVG & weighted averaged $N_{\rm H}$ (cm$^{-2}$) for RA\_NOM, DEC\_NOM \\
21 & COR\_FAC1 & (erg cm$^{-2}$ cts$^{-1}$), counts(0.5--7 keV) vs.\ Flux(0.5--10 keV) for field $N_{\rm H}$ \\
22 & SRC\_RA\_H & source RA (hh) \\
23 & SRC\_RA\_M & source RA (mm) \\
24 & SRC\_RA\_S & source RA (ss) \\
25 & SRC\_DE\_D & source DEC (dd) \\
26 & SRC\_DE\_M & source DEC (mm) \\
27 & SRC\_DE\_S & source DEC (ss) \\
28 & SRC\_ALPH & source RA (J2000, degrees)\\
29 & SRC\_DELT & source DEC (J2000, degrees)\\
30 & X\_POS\_ER& error on source RA (arcsec) \\
31 & Y\_POS\_ER& error on source DEC (arcsec)\\
32 & T\_POS\_ER& total positional error (arcsec)\\
33 & WIDTH & width of WT Gaussian distribution of photons (arcsec)\\
34 & WIDTH\_ER & error on width (arcsec)\\
35 & W\_FLAG & meaningfulness of width (0/1; 0$=$free, $1=$fixed to PSF) \\
36 & SCALE & scale at which source was found (1,2,4,8,16,32,64 pixels) \\
37 & CHI2 & fit $\chi^2$ \\
38 & OFFAX & offaxis angle (arcmin) \\
39 & X\_POS & source x-coordinate (pixel) \\
40 & Y\_POS & source y-coordinate (pixel) \\
41 & T\_FT\_CTS & total-band (0.5--7\,keV) counts from the fit (cts) \\
42 & T\_CTS\_ER & total-band error on counts from the fit (cts) \\
43 & T\_CN\_CTS & total-band net counted source counts (cts)\\
44 & T\_CN\_BG & total-band counted background counts (cts) \\
45 & T\_VIGCOR & total-band vignetting correction factor \\
46 & T\_PSFCOR & total-band PSF correction factor \\
47 & T\_CTRATE & total-band count rate (cts s$^{-1}$) \\
48 & T\_CR\_ER & total-band count rate error (cts s$^{-1}$) \\
49 & T\_FLUX & total-band flux (for CF0$=1\times10^{-11}$ erg cm$^{-2}$ cts$^{-1}$) \\
50 & T\_FT\_S\_N & total-band signal-to-noise ratio of the detection \\
51 & T\_WV\_S\_N & total-band signal-to-noise ratio in wavelet space \\
52 & S\_FT\_CTS & soft-band (0.5--2\,keV) counts from the fit (cts) \\
53 & S\_CTS\_ER & soft-band error on counts from the fit (cts)\\
54 & S\_CN\_CTS & soft-band net counted source counts (cts)\\
55 & S\_CN\_BG & soft-band counted background counts (cts)\\
56 & S\_VIGCOR & soft-band vignetting correction factor \\
57 & S\_PSFCOR & soft-band PSF correction factor \\
58 & S\_CTRATE & soft-band count rate (cts s$^{-1}$)\\
59 & S\_CR\_ER & soft-band count rate error (cts s$^{-1}$) \\
60 & S\_FLUX & soft-band flux (for CF0$=1\times10^{-11}$ erg cm$^{-2}$ cts$^{-1}$) \\
61 & S\_FT\_S\_N & soft-band signal-to-noise ratio of the detection \\
62 & S\_WV\_S\_N & soft-band signal-to-noise ratio in wavelet space \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{2}
\begin{table*}
\begin{center}
\caption{BMW-C Parameters (Continued)}
\begin{tabular}{rllccccc}
\hline
\hline
\noalign{\smallskip}
Column & BMWC Parameter & Description \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
63 & H\_FT\_CTS & hard-band (2--7\,keV) counts from the fit (cts) \\
64 & H\_CTS\_ER & hard-band error on counts from the fit (cts) \\
65 & H\_CN\_CTS & hard-band net counted source counts (cts)\\
66 & H\_CN\_BG & hard-band counted background counts (cts)\\
67 & H\_VIGCOR & hard-band vignetting correction factor \\
68 & H\_PSFCOR & hard-band PSF correction factor \\
69 & H\_CTRATE & hard-band count rate (cts s$^{-1}$)\\
70 & H\_CR\_ER & hard-band count rate error (cts s$^{-1}$)\\
71 & H\_FLUX & hard-band flux (for CF0$=1\times10^{-11}$ erg cm$^{-2}$ cts$^{-1}$) \\
72 & H\_FT\_S\_N & hard-band signal-to-noise ratio of the detection \\
73 & H\_WV\_S\_N & hard-band signal-to-noise ratio in wavelet space \\
74 & FL\_REBIN & rebin at which source was found (e.g., 1/2)\\
75 & WTPROBAB & WT Probability\\
76 & FLUX1 & absorption corrected 0.5--10 keV flux (for {\tt COR\_FAC1(NH\_WAVG)}) \\
77 & FLUX2 & observed 0.5--10 keV flux (for CF2($N_{\rm H}=0$)$=8.22981\times10^{-12}$ erg cm$^{-2}$ cts$^{-1}$) \\
78 & FL\_CHECK & quality flag: $= 1$ status ok; $< -1$ decreasing quality\\
79 & FL\_CHCK2 & quality flag: $-1=$ to be visually inspected, $<-1$ decreasing quality; $|${\tt FL\_CHCK2}$| \ge 10$: pointed source\\
80 & FL\_EXT\_Y & flag for extension (Point/Extended) \\
81 & CORR\_RAD & cross-matching radius\\
82 & FL\_MERGE & cross-matching: part of merged catalogue? $\ga1=$yes, $0=$no \\
83 & CORR\_RA2 & cross-matching radius\\
84 & FL\_MERG2 & cross-matching: part of merged catalogue? $\ga1=$yes, $0=$no \\
85 & CAT\_VERS & catalogue version \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
86 & ID\_FIRST & FIRST identification\\
87 & RA\_FIRST & FIRST RA (degrees) \\
88 & DE\_FIRST & FIRST DEC (degrees) \\
89 & LOBF\_1ST & warning flag for side-lobe source\\
90 & PFLX\_1ST & peak flux (mJy/bm)\\
91 & IFLX\_1ST & integrated Flux (mJy)\\
92 & RMS\_1ST & local noise at the source position (mJy/beam)\\
93 & MAJ\_1ST & deconvolved major axis (FWHM in arcsec; elliptical Gaussian model)\\
94 & MIN\_1ST & deconvolved min axis (FWHM in arcsec; elliptical Gaussian model)\\
95 & PA\_1ST & deconvolved position angle (degrees, east of north) \\
96 & fMAJ\_1ST & measured major axis (arcsec)\\
97 & fMIN\_1ST & measured minor axis(arcsec)\\
98 & fPA\_1ST & measured position angle (deg)\\
99 & FIRSTBMC & angular distance between FIRST and BMW-C position (arcsec) \\
100 & FIRSTCOM & number of FIRST cross-matches: $-99=$none, 1$=$single, $>1=$number of matches \\
101 & FIRSTCOV & BMW-C--FIRST cross-matching version \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
102 & ID\_IRAS & IRAS identification\\
103 & RA\_IRAS & IRAS RA (J2000, degrees)\\
104 & DE\_IRAS & IRAS DEC (J2000, degrees)\\
105 & F12\_IRAS & 12 $\mu$m flux (mJy) \\
106 & F25\_IRAS & 25 $\mu$m flux (mJy) \\
107 & F60\_IRAS & 60 $\mu$m flux (mJy) \\
108 & F100\_IRAS & 100 $\mu$m flux (mJy) \\
109 & IRASBMC & angular distance between IRAS and BMW-C position (arcsec) \\
110 & IRASCOM & number of IRAS cross-matches: $-99=$none, $1=$single, $>1=$number of matches \\
111 & IRASCOV & BMW-C--IRAS cross-matching version \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
112 & ID\_2MASS & 2MASS identification\\
113 & RA\_2MASS & 2MASS RA (J2000, degrees)\\
114 & DE\_2MASS & 2MASS DEC (J2000, degrees)\\
115 & J\_2MASS & 2MASS $J$ magnitude (mag)\\
116 & DJ\_2MASS & 2MASS error on $J$ magnitude (mag)\\
117 & H\_2MASS & 2MASS $H$ magnitude (mag)\\
118 & DH\_2MASS & 2MASS error on $H$ magnitude (mag)\\
119 & K\_2MASS & 2MASS $K$ magnitude (mag)\\
120 & DK\_2MASS & 2MASS error on $K$ magnitude (mag) \\
121 & 2MASSBMC & angular distance between 2MASS and BMW-C position (arcsec) \\
122 & 2MASSCOM & number of 2MASS cross-matches:$ -99=$none, 1$=$single, $>1=$number of matches \\
123 & 2MASSCOV & BMW-C--2MASS cross-matching version \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\setcounter{table}{2}
\begin{table*}
\begin{center}
\caption{BMW-C Parameters (Continued)}
\begin{tabular}{rllccccc}
\hline
\hline
\noalign{\smallskip}
Column & BMWC Parameter & Description \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
124 & ID\_GSC2 & GSC2 identification \\
125 & RA\_GSC2 & GSC2 RA (J2000, degrees) \\
126 & DE\_GSC2 & GSC2 DEC (J2000, degrees) \\
127 & F\_GSC2 & GSC2 $F$ magnitude (mag) \\
128 & DF\_GSC2 & GSC2 error on $F$ magnitude (mag) \\
129 & J\_GSC2 & GSC2 $B_J$ magnitude (mag) \\
130 & DJ\_GSC2 & GSC2 error on $B_J$ magnitude (mag) \\
131 & V\_GSC2 & GSC2 $V$ magnitude (mag) \\
132 & DV\_GSC2 & GSC2 error on $V$ magnitude (mag) \\
133 & N\_GSC2 & GSC2 $N$ magnitude (mag) \\
134 & DN\_GSC2 & GSC2 error on $N$ magnitude (mag) \\
135 & A\_GSC2 & GSC2 semi-major axis (pixels) \\
136 & E\_GSC2 & GSC2 eccentricity\\
137 & PA\_GSC2 & GSC2 position angle (deg)\\
138 & C\_GSC2 & GSC2 class: 0$=$star; 1$=$galaxy; 2$=$blend; 3$=$non-star; 4$=$unclassified; 5$=$defect \\
139 & GSC2BMC & angular distance between GSC2 and BMW-C position (arcsec) \\
140 & GSC2COM & number of GSC2 cross-matches: $-99=$none, $1=$single, $>1=$number of matches \\
141 & GSC2COV & BMW-C--GSC2 cross-matching version \\
142 & CTS05\_1 & 0.5-1 keV counted source counts\\
143 & CTS\_1\_2 & 1-2 keV counted source counts\\
144 & CTS\_2\_4 & 2-4 keV counted source counts\\
145 & CTS\_4\_7 & 4-7 keV counted source counts\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Source properties and sub-samples\label{bmwc:character}}
The full catalogue includes all detections in the 136 examined fields,
and no association was made among the sources detected more than once in
different observations of the same portion of the sky.
In order to account for repeated detections and to exploit
multiple detections of the same source in variability studies,
an estimate of the number of independent source detections is needed.
We estimated this number by cross-matching the catalogue with itself, i.e.,
by merging all sources that lie within a given cross-matching radius of one another.
Given the distribution of total position uncertainties
(Fig.~\ref{bmwc:poserr}), we chose cross-matching radii of 3\arcsec and 4\farcs5
(catalogue parameters {\tt CORR\_RAD} and {\tt CORR\_RA2}, respectively).
We obtain 16088 independent sources in the former case, and 15497 in the latter case
(of which 12135 and 11954, respectively, are not associated with bright and/or
extended sources at the centre of the field).
The catalogue parameter {\tt FL\_MERGE} assumes the value 1 if the source is part of the
sub-sample merged within {\tt CORR\_RAD}, 0 otherwise.
{\tt FL\_MERG2} is the corresponding value for {\tt CORR\_RA2}.
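A toy sketch of this merging step is given below; the KD-tree, the union-find bookkeeping, and the small-field flat-sky projection are our stand-ins for the actual cross-matching code.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def count_independent(ra_deg, dec_deg, radius_arcsec):
    # Small-field approximation: project (RA, Dec) onto a flat plane.
    x = np.radians(ra_deg) * np.cos(np.radians(dec_deg))
    y = np.radians(dec_deg)
    tree = cKDTree(np.column_stack([x, y]))
    pairs = tree.query_pairs(np.radians(radius_arcsec / 3600.0))
    # Union-find: detections linked by a match count as one source.
    parent = list(range(len(ra_deg)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(len(ra_deg))})
\end{verbatim}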
\subsection{The serendipitous source catalogue (BMC--SSC) \label{bmwc:ssc}}
For cosmological studies it is particularly important to have a sample which is
not biased toward objects selected on the basis of their properties.
To this end, we selected a subsample of the BMW-C catalogue that contains
16758 sources not associated with pointed targets, by
excluding sources within a radius of 30$\arcsec$ from the target position. This subsample
represents the BMW-{\it Chandra} Serendipitous Source Catalogue (BMC-SSC).
\subsection{Source extension\label{bmwc:extension}}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f8.eps}}
\caption{Distribution of the off-axis angle of the 16834
BMW-{\it Chandra} sources.}
\label{bmwc:offax}
\end{figure}
Figure~\ref{bmwc:offax} shows the distribution of the source off-axis angle {\tt OFFAX},
which rises steeply as the collecting area increases and declines more gently
as the sensitivity drops at large off-axis angles. Differently from what was found for the
BMW-HRI catalogue (\citealt{Panzeraea03}), our distribution does not present
a peak at zero off-axis due to pointed sources.
To characterise the source extension,
which is one of the main features of the WT method, one cannot simply compare
the WT width with the instrumental PSF at a given off-axis angle.
Thus, we use a $\sigma$-clipping algorithm which divides the
distribution of source extensions as a function of off-axis angle
in bins of 1\arcmin{} width.
The mean and standard deviation are calculated within each bin, and all sources
whose width exceeds the mean value by more than 3$\sigma$ are discarded.
The procedure is repeated until convergence is
reached. The advantage of this method is that it effectively eliminates truly
extended sources, while providing a value for the mean and standard deviation
in each bin (\citealt{Lazzatiea99}). The mean value plus the 3$\sigma$
dispersion\footnote{The fitting function is the third-order polynomial:
3$\sigma$~extension~(arcsec) $=1.24097+0.0530598\,{\tt OFFAX^2} +0.000179786\,{\tt OFFAX^3}$,
where {\tt OFFAX} is in arcmin.}
provides the line discriminating the source extension, but we conservatively classify as
extended only the sources that lie 2$\sigma$ above this limit.
Combining this threshold with the 3$\sigma$ on the intrinsic dispersion,
we obtain a $\sim 4.5\,\sigma$ confidence level for the extension classification
(\citealt{Rosatiea95,Campanaea99,Panzeraea03}).
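Within a single off-axis bin, the clipping iteration can be sketched as follows (illustrative names; the convergence test is our assumption):
\begin{verbatim}
import numpy as np

def clip_bin(widths, nsigma=3.0, max_iter=50):
    """Iteratively clip upper outliers in one 1-arcmin off-axis bin."""
    w = np.asarray(widths, dtype=float)
    keep = np.ones(w.size, dtype=bool)
    for _ in range(max_iter):
        mean, std = w[keep].mean(), w[keep].std()
        new_keep = w <= mean + nsigma * std
        if np.array_equal(new_keep, keep):
            break                        # converged
        keep = new_keep
    return mean, std, ~keep  # ~keep flags candidate extended sources
\end{verbatim}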
%
We note (\citealt{Morettiea04}) that fluxes of extended sources are usually
underestimated, since they are computed by fitting a Gaussian to the surface
brightness profile, which in many cases is a poor approximation.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f9.eps}}
\caption{Extension of the 12648 BMW-{\it Chandra} sources as a
function of the off-axis angle.
The dashed line is the 3$\sigma$ limit for point sources.
Diamonds are the extended sources ($\sim 4.5 \sigma$), i.e., sources that
lie more than 2 $\sigma$ from the dashed line (316 points). }
\label{bmwc:extension_final1}
\end{figure}
%
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{f10.eps}}
\caption{Distribution of the extension of the extended sources for the full sample
(solid line, 316 sources) and for the high-latitude sub-sample
(dotted line, 120 sources).}
\label{bmwc:fullg_hwidthb}
\end{figure}
We applied the $\sigma$-clipping algorithm to the 12648 good sources
whose WT width had been successfully determined
(i.e., whose width had not been fixed to the PSF value, {\tt W\_FLAG}$ = 0$).
Figure~\ref{bmwc:extension_final1} shows their extension
thus calculated as a function of the off-axis angle,
as well as the PSF function and 3$\sigma$ limit for point sources (dashed line).
Diamonds are the extended sources ($\sim 4.5 \sigma$), i.e., sources that
lie more than 2 $\sigma$ from the dashed line (316 points).
There are 145 points within the 3 and 4.5$\sigma$ limits.
We selected two sub-catalogues, based on Galactic latitude, the discriminant
value being 20$^{\circ}$, thus obtaining 7401 high-latitude sources,
and 9433 low-latitude sources.
Figure~\ref{bmwc:fullg_hwidthb} shows the distribution of the extension of
the extended sources for the full sample (solid line, 316 sources)
and for the high-latitude sub-sample (dotted line, 120 sources).
We expect that most high-galactic latitude sources are extra-galactic.
Table~\ref{bmwc:numeri} summarises the basic numbers of sources in each subsample
examined.
\begin{table}
\begin{center}
\caption{BMW-C Basic Numbers.}
\label{bmwc:numeri}
\begin{tabular}{llr}
\hline
\hline
\noalign{\smallskip}
Source Sample & & Number \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
detected & & 21325 \\
good$^{\mathrm{a}}$ & & 16834 \\
serendipitous & & 16758 \\
independent & ({\tt CORR\_RAD = 3\arcsec}) & 12135 \\
& ({\tt CORR\_RA2 = 4\farcs5}) & 11954 \\
detected in total band & & 11124\\
detected in soft band & & 12631 \\
detected in hard band & & 9775\\
only detected in hard band & & 4203 \\
high-latitude & ($| b|>20^{\circ}$) & 7401 \\
& not pointed & 7381 \\
& not in surveys & 2931 \\
& not in surveys or pointed & 2911 \\
low-latitude & ($| b|<20^{\circ}$) & 9433 \\
& not pointed & 9377 \\
& not in surveys & 9433 \\
& not in surveys or pointed & 9377 \\
serendipitous extended & & 316 \\
& ($| b|>20^{\circ}$) & 120 \\
& ($| b|<20^{\circ}$) & 196 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\begin{list}{}{}
\item[$^{\mathrm{a}}$] Sources which do not require a more in-depth, non-automated analysis
(i.e.\ not associated with bright and/or extended sources at the centre of the field),
including the target ones.
\end{list}
\end{table}
\subsection{Source fluxes\label{bmwc:fluxes}}
Figure~\ref{bmwc:histo_flux} shows the distributions of 0.5--10\,keV absorption
corrected flux ({\tt FLUX1}, see Sect.~\ref{bmwc:catalogue}) for the 16834 sources in the
BMW-{\it Chandra} catalogue;
the fluxes range from $\sim 3\times 10^{-16}$ to $9\times10^{-12}$
erg cm$^{-2}$ s$^{-1}$ with a median of $7\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$.
Figure~\ref{bmwc:histo_flux} also shows the distributions of 0.5--10\,keV fluxes
for the high-latitude sources (median flux $4.50\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$)
and the low-latitude sources (median flux $1.07\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$).
%
Figure~\ref{bmwc:histo_ctcnt} shows the distributions of the source counts for
the total (FB, median 27.9 counts),
soft (SB, median 15.7 counts),
and hard (HB, median 15.3 counts) bands.
%
%
%
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{f11.eps}}
\caption{Distributions of 0.5--10\,keV flux for
the sources in the BMW-{\it Chandra} catalogue
({\tt FLUX1}, the Galactic absorption corrected flux),
for the high-latitude sources,
and the low-latitude sources.}
\label{bmwc:histo_flux}
\end{figure}
%
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f12.eps}}
\caption{Distributions of the source counts for
the soft, hard, and full bands, drawn from the full catalogue.}
\label{bmwc:histo_ctcnt}
\end{figure}
%
\section{Sky coverage \label{bmwc:skycov_how}}
In order to calculate the sky coverage of our survey, we
followed the procedure used by \citet{Munoea03}.
The signal-to-noise ratio S/N with which we measure the counts
from a source is given by
$n_{\sigma} = N/[(N + B) + \sigma_B^2]^{1/2}$,
where $N$ is the number of counts from a source, $B$ is the background
in the source cell and $\sigma_B$ is the uncertainty on the background,
using the simplifying assumption of $\sqrt{N}$ uncertainties.
The background can be written as $B = ab$, where $a$ is the area of
the PSF and $b$ is the background per pixel. Therefore, using
$\sigma_B^2 = B$, the S/N becomes
$n_{\sigma} = N/(N + 2B)^{1/2} = N/(N + 2ab)^{1/2} $.
The position-dependent number of source counts for a given S/N is then,
\begin{equation}
S = {{n_{\sigma}^2} \over {2}}
\left[ 1 + \left(1 + {{8ab}\over{n_{\sigma}^2}} \right)^{1/2} \right]
{\rm [cts]}
\label{bmwc:cslimeq}
\end{equation}
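Eq.~\ref{bmwc:cslimeq} transcribes directly into code (a sketch, with the PSF area $a$ in pixels and the background $b$ in counts per pixel):
\begin{verbatim}
import numpy as np

def limiting_counts(n_sigma, a, b):
    """Source counts needed to reach S/N = n_sigma (Eq. 1)."""
    return 0.5 * n_sigma**2 * (1.0 + np.sqrt(1.0 + 8.0 * a * b / n_sigma**2))
\end{verbatim}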
First, an off-axis angle map is generated, then
the PSF maps $a$ are generated from the off-axis map using the
psfsize table in CALDB,
using the PSF images at $\sim 1.50$\,keV for the soft band PSF
and at $\sim 4.51$\,keV for the hard band PSF.
We assumed as aperture the radius that encircles 70\% of the PSF energy
(as a reasonable compromise between having too many background counts
for a larger radius and too few source counts for a smaller radius).
For $b$ we used the background images generated for the detection
(see Sect.~\ref{bmwc:bgdmaps}).
%
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f13.eps}}
\caption{S/N distribution in the counts space (dotted line)
and wavelet space (solid line) in the
soft ({\bf top}) and hard ({\bf bottom}) bands.}
\label{bmwc:histo_sn}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f14.eps}}
\caption{Solid angle versus flux limit for S/N $=3$ for the
soft (solid line) and hard (dotted line) bands,
calculated following Sect.~\ref{bmwc:skycov_how}. This sky coverage
was constructed using 94 independent fields (no fields covered the same sky area).}
\label{bmwc:sky_both_sn3}
\end{figure}
Based on Fig.~\ref{bmwc:cha_prob}, which shows the wavelet probability as a
function of WT S/N and background value, we chose S/N$=3$ in the wavelet space
for our analysis.
Figure~\ref{bmwc:histo_sn} shows the S/N distribution in the counts space and wavelet space in
the soft and hard bands, and demonstrates where our S/N cut falls.
We note here that the S/N in WT space can be different
from the S/N in counts space for several reasons:
i) the background subtraction is more accurate and locally performed;
ii) the high frequencies are suppressed, so that a correlated count
excess gives a higher significance than a random one with the same number of counts;
iii) the exposure map can be incorporated in the WT space, so
that artifacts do not affect the source significance.
Using Eq.~\ref{bmwc:cslimeq}, we calculated the limiting counts for each
field, in the soft and hard bands.
We then converted the limiting counts maps into limiting flux maps,
using count rate-to-flux conversion factors derived assuming a power-law model
with a photon index $\Gamma=1.7$ in the soft band and $\Gamma=1.4$ in the hard band,
modified by the absorption by Galactic $N_{\rm H}$ relative to each field.
The adopted values of $\Gamma$ were chosen to compare with the results in the literature
(e.g.\ \citealt{Champ3,Champ2}).
%
Histograms of the number of pixels (hence solid angle)
with flux smaller than a given threshold were produced for each field
in the soft and hard bands for S/N$=3$, and the solid angle
of the whole survey calculated as the sum of the contributions of each field.
We note here that some of the observations covered the same sky area, notably so for the
survey fields. Therefore, only the observation with the longest exposure time (after
screening and correction of the data) was considered for the sky coverage
calculation.
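Schematically, the summation over the independent fields can be written as follows; the array names and the uniform pixel solid angle are illustrative assumptions.
\begin{verbatim}
import numpy as np

def sky_coverage(limflux_maps, pixel_area_deg2, flux_grid):
    """Solid angle (deg^2) with limiting flux below each grid value."""
    coverage = np.zeros(len(flux_grid))
    for limflux in limflux_maps:     # one map per independent field
        valid = limflux[np.isfinite(limflux)]
        for i, f in enumerate(flux_grid):
            coverage[i] += np.count_nonzero(valid <= f) * pixel_area_deg2
    return coverage
\end{verbatim}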
In Fig.~\ref{bmwc:sky_both_sn3} we show the
solid angle of the full survey (94 independent fields) as a function of the
flux limit for S/N$=$3.
The complete catalogue provides a sky coverage in the soft band (0.5--2\,keV, S/N $=3$)
of $\sim 8$ deg$^2$ at a limiting flux of $\sim 10^{-13}$ erg cm$^{-2}$ s$^{-1}$,
and $\sim 2$ deg$^2$ at a limiting flux of $\sim 10^{-15}$ erg cm$^{-2}$ s$^{-1}$.
\section{Comparison with the ChaMP catalogue\label{bmwc:champcat}}
The {\it Chandra} Multiwavelength Project (ChaMP) is a survey of serendipitous {\it Chandra}
sources carried out by a multi-institution
collaboration\footnote{http://hea-www.harvard.edu/CHAMP/.}.
It covers $\sim 10$ deg$^2$ at flux levels of
$\sim 9\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ (\citealt{Champ1,Champ2,Champ3}),
and is derived from the analysis of 149 {\it Chandra} fields, which include both
ACIS-I and ACIS-S images and a total of $\sim 6\,800$ sources.
The cross-matching of the source coordinates
yields 162 matches within 1\arcsec{} and 210 within 3\arcsec;
shifting the coordinates of our source list by 1\,\arcmin{} and then
cross-matching with the ChaMP again, we found no mismatches
(a null misidentification probability).
Fig.~\ref{bmwc:champ_distance}, shows the distribution of the angular
separation between the BMW-C and ChaMP positions (\citealt{Champ3}, Table~5) for the matching objects.
%
In Fig.~\ref{bmwc:champ_fluxes} we show the BMW-C 0.5--2.0\,keV counts
versus the ChaMP 0.5--2.0\,keV counts, as well as the best fit (solid line)
and the bisector of the plane (dashed line); the best fit is
${\rm Counts}_{\rm ChaMP}= (1.035\pm0.002)+(-0.174\pm0.005)\,{\rm Counts}_{\rm BMW-C}$.
Although our counts are generally higher (at the percent level)
than the ones reported by ChaMP, we note a general consistency
within $1\sigma$. We also note that, in general, the WT algorithm is
better suited to disentangling sources in crowded fields and at low S/N.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f15.eps}}
\caption[BMW-C--ChaMP: angular separation]{BMW-C--ChaMP:
distribution of the angular separation between the
ChaMP and the BMW-C X-ray positions (210 matches). }
\label{bmwc:champ_distance}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f16.eps}}
\caption[BMW-C--ChaMP: fluxes]{BMW-C--ChaMP: the 0.5--2\,keV BMW-C counts versus the
0.5--2\,keV ChaMP counts (210 matches). The solid line is the best fit, while
the dashed line is the bisector of the plane.}
\label{bmwc:champ_fluxes}
\end{figure}
\section{Cross-match with existing databases \label{bmwc:xcorr}}
\begin{table*}
\begin{center}
\caption{Catalogue Data for Cross-matching.}
\label{bmwc:allcats}
\begin{tabular}{lllrl}
\hline
\hline
\noalign{\smallskip}
Catalogues & Wavelength & Sky fraction & Entries & References\\
& & (\%) & & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
FIRST & 20 cm & $\sim 25$ & 811\,117 & \citet{FIRST97} \\
IRASPSC & 12, 25, 60, 100 $\mu$m & $\sim 98$ & $\sim $250\,000 & \citet{IRAS88} \\
2MASS & 1.25, 1.65, 2.16 $\mu$m & all-sky & 470\,992\,970 & \citet{2MASS} \\
GSC2 & $B_J, F, N, V$ & all-sky ($B_J, F, N$) & $\sim 10^9$ & \citet{GSC2} \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{ Cross-matches with other catalogues.}
\label{bmwc:allxcorrs}
\begin{tabular}{lrrrrrrr}
\hline
\hline
\noalign{\smallskip}
Catalogues & Cross-matching & Total & Single & Double & Multiple & Closest
& Expected (\%) \\
& Radius (\arcsec) & Matches & Matches & Matches & Matches & Matches
& Mismatches \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
FIRST & 10 & 50 & 3 & 12 & 35 & 13 & 46 \\
IRASPSC & 20 & 156 &51 & 38 & 67 & 87 & 49 \\
2MASS & 4.5 & 7\,687 &5881 & 1486 & 320 & 6700 & 50 \\
GSC2 & 4.5 & 5\,854 &3788 & 962 & 1104 & 4485 & 33 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
We cross-matched our catalogue with some of the largest catalogues available at
other wavelengths, from radio to optical, namely,
the Faint Images of the Radio Sky at Twenty-cm (FIRST),
the Infrared Astronomical Satellite (IRAS) Point Source catalogue (PSC),
the Two Micron All Sky Survey (2MASS), and
the Guide Star Catalogue 2 (GSC2).
Table~\ref{bmwc:allcats} summarises the main characteristics of the catalogues we considered.
In this section we report the results of the cross-match based on the
closest-match criterion.
The FIRST is a survey that
covers $\sim 10\,000$ square degrees of the North and South Galactic Cap,
which produced images with 1$\farcs$8 pixels, a typical rms of 0.15 mJy,
a sensitivity of $\sim 1$ mJy, and a resolution of 5\arcsec{} (\citealt{FIRST95}).
The 2003 release\footnote{http://sundog.stsci.edu/first/catalogs/readme\_03apr11.html.}
contains $\sim 800\,000$ sources and covers a total of $\sim 9\,000$
square degrees (\citealt{FIRST97}).
IRAS conducted an all-sky survey at
12, 25, 60, and 100\,$\mu$m that led to the IRAS Point Source catalogue (PSC).
The PSC contains $\sim 250\,000$ sources (\citealt{IRAS88}), and away from confused
regions of the sky, is complete to $\sim0.4$, 0.5, 0.6, and 1.0 Jy at 12, 25, 60,
and 100\,$\mu$m, respectively, with angular resolution of $\sim 0\farcm5$ at 12\,$\mu$m
and $\sim 2$\arcmin{} at 100\,$\mu$m.
Typical position uncertainties are $\sim2$ to 6\arcsec{} in-scan and about
$\sim8$ to 16\arcsec{} cross-scan.
The 2MASS (\citealt{2MASS}) covers virtually all
sky with simultaneous observations in
$J$ (1.25\,$\mu$m), $H$ (1.65\,$\mu$m), and $K_{\rm s}$ (2.17 $\mu$m) bands
with nominal magnitude limits of 15.8, 15.1, and 14.3 mag,
for point sources, and 15.0, 14.3, and 13.5 mag for extended sources,
respectively.
The All-Sky Data
Release\footnote{http://www.ipac.caltech.edu/2mass/releases/allsky/index.html.}
contains positional and photometric information for $\sim 470$ million
point sources and $\sim 1$ million extended sources.
In the optical, we considered the
GSC2 (\citealt{GSC2}),
which is an all-sky catalogue of approximately 1 billion stars and galaxies.
In particular, we used the
last version (GSC2.3), which covers the entire sky in the $B_J$, $F$ and $N$ bands
(roughly comparable to Johnson $B$, $R$ and $I$ bands), down to the limiting magnitudes
22.5--23, 20--22, and 19.5.
Furthermore, partial sky coverage is available in the $V$ band
(similar to Johnson $V$ filter) down to 19.5 magnitudes.
%
The astrometry, which is calibrated with the Hipparcos (\citealt{Hipparcos97})
and the Tycho 2 (\citealt{Tycho2000}) catalogues is accurate to within 0\farcs3.
The GSC2 also provides morphological classification for objects observed at least
in two bands with a $\sim 90 \%$ confidence level for objects at
$|b| \ga 5^{\circ}$ and brighter than $B_{J} \sim 19$ mag.
The FIRST, IRAS, 2MASS, and GSC2 parameters are listed in
Table~\ref{bmwc:params}.
%
Our cross-match procedure consisted in matching objects in the
BMW-C catalogue with objects in the other catalogues within a given
cross-matching radius (Table~\ref{bmwc:allxcorrs}), and
in case of multiple matches selecting the spatially closest one.
Our choice of cross-matching radius depended on a combination of
the positional accuracy of the BMW-C and that of the other catalogues.
In general, more than one optical identification was available for most
BMW-C objects.
Table~\ref{bmwc:allxcorrs} shows the number of BMW-C sources that have
only one, only two, or three or more matches in the other catalogues
(Columns 4, 5, and 6, respectively).
The table also shows the total number of matches obtained after a
closest distance selection (Column 7).
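A toy version of the closest-match selection is sketched below (the flat-sky projection and the array names are our assumptions):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def closest_matches(ra1, dec1, ra2, dec2, radius_arcsec):
    def project(ra, dec):
        return np.column_stack([np.radians(ra) * np.cos(np.radians(dec)),
                                np.radians(dec)])
    tree = cKDTree(project(ra2, dec2))
    dist, idx = tree.query(project(ra1, dec1))   # nearest counterpart
    ok = dist <= np.radians(radius_arcsec / 3600.0)
    return np.nonzero(ok)[0], idx[ok], dist[ok]
\end{verbatim}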
In Figs.~\ref{bmwc:distance1} and \ref{bmwc:distance2} we show the distribution
of the angular separations between the BMW-C position and the position of the other
catalogues for these closest-matching objects. For the 2MASS and GSC2, the
distance distribution peaks at $\sim 1\arcsec$.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f17.eps}}
\caption{Distribution of the angular separation:
BMW-C--FIRST (dashed line, 13 unique matches),
and BMW-C--IRAS (solid line, 87 matches).}
\label{bmwc:distance1}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{f18.eps}}
\caption{Distribution of the angular separation:
BMW-C--2MASS (solid line, 6700 matches) and
BMW-C--GSC2 (dashed line, 4458 matches).}
\label{bmwc:distance2}
\end{figure}
There is only one BMW-C source
which is found in the FIRST, IRAS, 2MASS and GSC2 catalogues (1BMC163608.2+410509);
it is a galaxy
observed by Infrared Space Observatory (ISO), in the European Large Area ISO Survey
(ELAIS, \citealt{Vaisanenea02}) and in the radio (\citealt{Ciliegiea99}).
Furthermore, 71 BMW-C sources have IRAS, 2MASS and GSC2 matches;
4208 are found in both 2MASS and GSC2 catalogues.
We calculated the expected number of mismatches (Column 8 in Table~\ref{bmwc:allxcorrs})
by shifting the coordinates of our source list by 1\,\arcmin{} and then cross-matching
with the other catalogues again.
\section{Summary and future work\label{bmwc:summary}}
We presented the BMW-C source catalogue drawn from 136
{\it Chandra} ACIS-I pointed observations with an exposure time in excess
of 10\,ks public as of March 2003.
The full catalogue comprises 21325 sources,
16834 of which do not require a more in-depth, non-automated analysis
(i.e.\ not associated with bright and/or extended sources),
including the pointed ones.
Among them, 16758 are serendipitous.
This makes our catalogue the largest compilation of {\it Chandra} sources to date.
The 0.5--10\,keV absorption corrected fluxes of these sources
range from $\sim 3\times 10^{-16}$ to $9\times10^{-12}$
erg cm$^{-2}$ s$^{-1}$ with a median of $7\times 10^{-15}$
erg cm$^{-2}$ s$^{-1}$.
%
The catalogue includes count rates and relative errors in three energy bands
(total, 0.5--7\,keV; soft, 0.5--2\,keV; and hard band, 2--7\,keV;
the source positions are relative to the highest signal-to-noise
detection among the three bands),
as well as source counts extracted in four additional energy bands,
SB1 (0.5--1\,keV), SB2 (1.0--2.0\,keV), HB1 (2.0--4.0\,keV) and HB2 (4.0--7.0\,keV),
and information drawn from the headers of the original files.
The WT algorithm also provides an estimate of the extension of
the source which we refined with a $\sigma$-clipping method.
%
We computed the sky coverage for the full catalogue and for a subset
at high Galactic latitude ($|b| > 20^{\circ}$).
The complete catalogue provides a sky coverage in the soft band (0.5--2\,keV, S/N =3)
of $\sim 8$ deg$^2$ at a limiting flux of $\sim 10^{-13}$ erg cm$^{-2}$ s$^{-1}$,
and $\sim 2$ deg$^2$ at a limiting flux of $\sim 10^{-15}$ erg cm$^{-2}$ s$^{-1}$.
We also presented the results of the cross-match with existing catalogues
at different wavelengths (FIRST, IRAS, 2MASS, GSC2 and ChaMP).
Among the scientific applications of the catalogue are:
1) search for periodic and non-periodic
variability using light curves extracted for bright sources;
2) the optical/IR follow-up of a list of galaxy cluster candidates drawn from
our sub-sample of $\sim 300$ extended sources;
3) analysis of blank fields, i.e.\ X-ray detected sources without counterparts
at other wavelengths;
4) optical/IR follow-up of peculiar sources, such as
isolated neutron stars candidates (ultra-soft sources) and heavily absorbed sources
(ultra-hard sources, not observed in the soft bands).
The current version of the BMW-C source catalogue
(as well as additional information and data) is available at the
Brera Observatory site: {\tt http://www.brera.inaf.it/BMC/bmc\_home.html}.
\begin{acknowledgements}
We thank the anonymous referee for insightful comments that improved our paper.
This work was supported through Consorzio Nazionale per l'Astronomia
e l'Astrofisica (CNAA) and Ministero dell'Istruzione,
dell'Universit\`a{} e della Ricerca (MIUR) grants.
We thank A.\ Mist\`o{} for his help with the database software.
This publication makes use of data products from the {\it Chandra} Data Archive,
the FIRST, IRAS, 2MASS, GSC2 surveys.
\end{acknowledgements}
\section{Introduction}\label{intro}
Throughout the following considerations, let
\begin{align}
p(x):=\sum\nolimits_{i=0}^n a_i x^i\in\mathbb{Z}[x]\text{, with }|a_i|<2^\tau\text{ and }\tau\in\mathbb{N}_{\ge 1},\label{polyf}
\end{align}
be a (not necessarily square-free) polynomial of degree $n$ with integer coefficients of bit-size less than $\tau$, and let $k$ be the number of non-zero coefficients $a_{i_0},\ldots,a_{i_{k-1}}$, with $0\le i_{0}<\cdots<i_{k-1}=n$. For convenience, we denote a polynomial $p\in\mathbb{Z}[x]$ of degree at most $n$ and with at most $k$ non-vanishing coefficients, each of absolute value less than $2^\tau$, a \emph{$k$-nomial of magnitude} $(n,\tau)$.
We assume that $p$ is given by its sparse representation
\begin{align}
p(x)=\sum\nolimits_{l=0}^{k-1} a_{i_l} x^{i_l},\quad\text{where }a_{i_l}\neq 0\text{ for all }l=0,\ldots,k-1.\label{polyfsparse}
\end{align}
Notice that the sparse representation needs $O(k\cdot(\tau+\log n+1))$ many bits. Namely, we need one bit for the sign of each coefficient $a_{i_l}$, $\tau$ or fewer bits for the binary representation of $|a_{i_l}|$, and $\log n$ bits for the binary representation of each index $i_l$. To date, it was unknown whether we can isolate (or just count) the real roots of $p$ with a number of arithmetic operations over $\mathbb{Q}$ that is polynomial in the input size of the sparse representation of $p$. This paper gives a positive answer to the latter question. In addition, we show that, for isolating all real roots of a sparse enough polynomial $p\in\mathbb{Z}[x]$, our algorithm is near-optimal:
\begin{theorem}
Let $p\in\mathbb{Z}[x]$ be a $k$-nomial of magnitude $(n,\tau)$, then we can isolate all real roots of $p$ with $O(k^3\cdot \log(n\tau)\cdot\log n)$ many arithmetic operations over the rational numbers. In addition, for $k=O(\log^c(n\tau))$, with $c$ a non-negative constant, we need at most $\tilde{O}(n\tau)$ bit operations to isolate all real roots of $p$. The latter bound is optimal up to logarithmic factors in $n$ and $\tau$.
\end{theorem}
There exist numerous algorithms,\footnote{\small The literature on root solving is extensive. Hence, due to space limitations, we decided to restrict to a small selection of representative papers and refer the reader to the references given therein.} e.g.~\cite{DBLP:journals/jsc/BurrK12,ESY06,DBLP:conf/issac/GarciaG12,McNamee-Pan,MSW-rootfinding2013,Pan:alg,rouillier-zimmermann:roots:04,Sagraloff12,DBLP:journals/corr/SagraloffM13,DBLP:conf/issac/YapS11,Schoenhage,DBLP:journals/tcs/Tsigaridas13}, for efficiently computing the real (complex) roots of a polynomial $p$ as in (\ref{polyfsparse}), \emph{given that $k$ is large enough}.
That is, for $k=\Omega(n^c)$ with $c$ an arbitrary but positive constant, the computational complexity of
these algorithms is polynomial in the input size. For isolating all complex roots of $p$, Pan's method~\cite{MSW-rootfinding2013,Pan:alg}, which goes back to Sch\"onhage's splitting circle approach~\cite{Schoenhage}, achieves
record bounds with respect to arithmetic and bit complexity in the worst case. More specifically, it needs $\tilde{O}(n)$
arithmetic operations performed with a precision of $\tilde{O}(n\tau)$ bits, and thus, $\tilde{O}(n^2\tau)$ bit operations. Besides Pan's method, which computes all complex roots at once, there
also exist very efficient methods for computing the real roots only. A recently proposed algorithm, denoted \textsc{ANewDsc}~\cite{DBLP:journals/corr/SagraloffM13}, which combines Descartes' Rule of Signs, Newton iteration, and approximate arithmetic, has a bit complexity that is comparable to Pan's method; for any given positive integer $L$, \textsc{ANewDsc} uses $\tilde{O}(n^3+n^2\tau+nL)$ bit operations to compute isolating intervals of size less than $2^{-L}$ for all real roots of $p$.
We further remark that both of the above mentioned methods can be used to efficiently isolate the roots of a polynomial $p$ whose coefficients can only be learned from (arbitrarily good) approximations, given that $p$ has no multiple roots.
In this model, the bound on the bit complexity is stated in terms of the degree, the discriminant, and the Mahler bound of~$p$.
In contrast, for general $k$, much less is known about the computational complexity of computing (or just counting) the real roots of $p$. In~\cite{CUCKER1999}, Cucker et al. proposed a method to compute all \emph{integer} roots of $p$ with a number of bit operations that is polynomial in the input size. Lenstra~\cite{lenstra99} further showed that all rational roots of $p$ can be computed in polynomial time. In fact, he even proved that one can compute all factors of $p$ over $\mathbb{Q}$ of a fixed degree $d$ with a number of bit operations that is polynomial in the input size and $d$. For trinomials $p$ (i.e.~$k=3$) with arbitrary real coefficients, Rojas and Ye~\cite{Rojas05} gave an algorithm for counting (and \emph{$\epsilon$-approximating}) all real roots of $p$ that uses $O(\log^2 n)$ arithmetic operations in the field over $\mathbb{Q}$ generated by the coefficients of $p$. However, already for polynomials $p\in\mathbb{R}[x]$ with more than $3$ monomials, it is unknown whether there exists a deterministic polynomial-time algorithm for computing (or just counting) the real roots of $p$. Bastani et al.~\cite{Bastani11} introduced a deterministic algorithm that, for most inputs, counts the number of real roots of a tetranomial $p$ (i.e.~$k=4$). Its arithmetic complexity is polynomial in the input size, and, in the special case where $p$ has integer coefficients, even the bit complexity is polynomial.
For general $k$-nomials $p\in\mathbb{Z}[x]$ with integer coefficients, we are not aware of any method, either for counting or isolating the real roots, that achieves an arithmetic complexity that is polynomial in the input size of the sparse representation of $p$.
For the bit complexity, the best known bound for isolating the roots of a (not necessarily sparse polynomial) $p\in\mathbb{Z}[x]$ is $\tilde{O}(n^2\tau)$, and we expect no improvement of the corresponding algorithms when restricting to sparse polynomials. Namely, since Pan's method computes all complex roots, it needs $\Omega(n)$ arithmetic operations. Also, methods based on Descartes' Rule of Signs need at least $\Omega(n)$ arithmetic operations due to a transformation of the polynomial $p$ that destroys its sparsity. This together with the fact that there exist $4$-nomials that require a precision of $\Omega(n\tau)$ for isolating its (real) roots (see also the proof of Theorem~\ref{maintheorem2}) indicates that both approaches have a worst-case bit complexity of $\Omega(n^2\tau)$. In addition, for isolating the roots of $p$, most algorithms need to compute the square-free part of $p$ in a first step, and the best known deterministic bounds~\cite[Section~14]{gathen-gerhard:algebra:bk} for the arithmetic and bit complexity of the latter problem are $\tilde{O}(n)$ and $\tilde{O}(n^2\tau)$, respectively.\\
Our algorithm is rather simple from a high-level perspective and combines mainly known techniques. Thus, we consider our contribution to be the right assembly of these techniques into an algorithm and the complexity analysis.
The main idea underlying our approach is to compute isolating intervals for the roots of $p_0:=p/x^{i_0}$ from sufficiently small isolating intervals for the roots of the polynomial\footnote{\small For simplicity, the reader may assume that $i_0=0$, and thus, $p_1$ has the same roots as the derivative $p'=\frac{dp}{dx}$ of $p$ except for the root at zero.} $p_1:=p_0'\cdot x^{1-i_1}$. Notice that $p_1$ is a $(k-1)$-nomial of magnitude $(n,\tau+\log n)$ with a non-vanishing constant coefficient. Using evaluation and separation bounds, we can determine the sign of $p_0$ at the roots of $p_1$ by evaluating $p_0$ at arbitrary points in the corresponding isolating intervals, and thus, we can compute common roots of $p_0$ and $p_1$. In addition, we can immediately derive isolating intervals for the simple real roots of $p_0$ as $p_0$ is monotone in between two consecutive roots of $p_1$. Then, the isolating intervals for the roots of $p_0$ can be further refined to an arbitrary small size. Hence, recursive application of the above approach allows us to compute isolating intervals for $p$ from the roots of a $1$-nomial $p_{k-1}\in\mathbb{Z}[x]$ after $k$ iterations; see Section~\ref{sec:algorithm} for details.
Efficiency of the above approach crucially depends on the method to refine the isolating intervals for the simple roots of the polynomials $p_i$ that are considered in the recursion.
For this, we modify an efficient method for approximating (clusters of) real roots as recently proposed in~\cite{DBLP:journals/corr/SagraloffM13}. Since the method from~\cite{DBLP:journals/corr/SagraloffM13} is based on Descartes' Rule of Signs, its arithmetic complexity is super-linear in $n$. Hence, in order to exploit the sparsity of the polynomials $p_i$, we had to replace the corresponding steps by simple polynomial evaluation.
For an arbitrary positive integer $L$, the so-obtained method refines arbitrary isolating intervals for all simple roots of a $k$-nomial $p$ of magnitude $(n,\tau)$ to a size less than $2^{-L}$ in $O(k\cdot (\log n+ \log(\tau+L)))$ iterations, and, in each iteration, $p$ is evaluated at a constant number of points. This yields an arithmetic complexity for the refinement steps that is polynomial in the input size. We consider the refinement method to be the key ingredient of our algorithm and think that it is of independent interest.
When using exact arithmetic over the rationals, the bit complexity of our algorithm is $\tilde{O}(k^3\cdot n^2\tau)$. We further show that, when replacing exact by approximate computation, the bit-size of the intermediate results reduces by a factor $n$ for the price of using $k$ times as many arithmetic operations. This yields the bound $\tilde{O}(k^4\cdot n\tau)$, and thus, $\tilde{O}(n\tau)$ for sufficiently small $k$, that is, $k=O(\log^c(n\tau))$. We also prove that the latter bound is optimal, where we use the fact that there exist $4$-nomials such that the binary representations of the corresponding isolating intervals need $\Omega(n\tau)$ bits.
\section{Algorithm and Complexity}\label{sec:idea}
\subsection{The Algorithm}\label{sec:algorithm}
\ignore{Let $z_1,\ldots,z_n$ denote the complex roots of $p$, $\sigma(z_i):=\min_{j\neq i}|z_i-z_j|$ the \emph{separation of $z_i$}, and $\sigma_p:=\min_i \sigma(z_i)$ the \emph{separation of $p$}.}
It is well known (e.g., see~\cite{CUCKER1999,Rojas05}) that the number of real roots of $p$ is upper bounded by $2k-1$. Namely,
according to Descartes' Rule of Signs, the number of positive real roots of $p$ (counted with multiplicity) is upper bounded by the number of sign changes in the coefficient sequence of $p$, and thus, smaller than $k$. The same argument applied to the polynomial $p(-x)$ further shows that the number of negative roots of $p(x)$ is smaller than $k$ as well.
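For illustration, this bound can be read off directly from the sparse representation of $p$ with $O(k)$ operations; the following is a small sketch in Python (representing $p$ by parallel exponent/coefficient lists is our convention, not fixed by the paper):
\begin{verbatim}
def sign_variations(values):
    # number of sign changes in a sequence, zero entries removed
    signs = [1 if v > 0 else -1 for v in values if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def descartes_bound(exps, coeffs):
    # Descartes' Rule of Signs: upper bound on the number of positive
    # real roots (with multiplicity) of sum_j coeffs[j] * x**exps[j]
    terms = sorted(zip(exps, coeffs))        # order terms by exponent
    return sign_variations([c for _, c in terms])

def descartes_bound_negative(exps, coeffs):
    # the bound for p(-x): flip the sign of odd-exponent coefficients
    return descartes_bound(exps, [c if e % 2 == 0 else -c
                                  for e, c in zip(exps, coeffs)])
\end{verbatim}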
In what follows, we may assume, w.l.o.g., that $i_0=0$. Namely, if $i_0>0$, then $p$ has the same roots as $p/x^{i_0}$ plus an additional root at $x=0$ of multiplicity $i_0$. Hence, we can consider the polynomial $p/x^{i_0}$ instead. In addition, we may restrict our search to the positive roots; for the negative roots, we can then apply the same approach to the polynomial $p(-x)$.
According to Cauchy's root bound, the modulus of each (complex) root is upper bounded by $1+2^{\tau}< 2^{\tau+1}$, and thus, for isolating the positive real roots of $p$, we can restrict our search to the interval $\mathcal{I}:=(0,2^{\tau+1})$.
We write $p$ as
\begin{align*}
p(x)=a_{i_0}+x^{i_1}\cdot(a_{i_1}+\cdots+a_{i_{k-1}}\cdot x^{i_{k-1}-i_1})=a_{i_0}+x^{i_1}\cdot\hat{p}(x),
\end{align*}
where $\hat{p}$ has degree $n-i_1<n$ and exactly $k-1$ non-zero coefficients. The idea is now to compute isolating intervals for the positive roots of $p_0:=p$ from sufficiently small isolating intervals for the positive roots of its derivative $p'(x):=\frac{dp(x)}{dx}$. For this, we do not directly consider the derivative $p'(x)$ but the polynomial
\begin{align}
p_1(x):=x\cdot\hat{p}'(x)+i_1\cdot\hat{p}(x)=\frac{p'(x)}{x^{i_1-1}},\label{polybarf}
\end{align}
which has the same roots as $p'$ except for the root at $x=0$ of multiplicity $i_1-1$. Notice that $p_1$ is a $(k-1)$-nomial of magnitude $(n-i_1,\tau+\log n)$ and that its coefficients can be computed from the coefficients of $p$ using $k$ multiplications and $k$ additions.
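Coefficient-wise, the identity $p_1=x\cdot\hat{p}'+i_1\cdot\hat{p}$ reads $p_1=\sum_{j\ge 1} i_j\cdot a_{i_j}\cdot x^{i_j-i_1}$, so one step of the construction can be sketched as follows (a sketch on the sparse representation; the function name is ours):
\begin{verbatim}
def next_poly(exps, coeffs):
    # one step p -> p_1 = p'(x) / x**(i_1 - 1) on the sparse
    # representation, assuming exps[0] == 0 (non-vanishing constant
    # coefficient); equivalently,
    #   p_1 = sum_{j >= 1} exps[j] * coeffs[j] * x**(exps[j] - exps[1])
    i1 = exps[1]
    return ([e - i1 for e in exps[1:]],
            [e * c for e, c in zip(exps[1:], coeffs[1:])])
\end{verbatim}
For the $6$-nomial $p_0$ from the example below, \texttt{next\_poly} maps the lists $[0,2,4,46,48,50]$ and $[-4,4,-1,4,-4,1]$ to $p_1=8-4x^2+184x^{44}-192x^{46}+50x^{48}$, in accordance with the computation given there.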
Let $x_{1}'$ to $x_{k_1}'$, with $0\le k_1\le k-2$, denote the roots of $p_1$ that are contained in $\mathcal{I}=(0,2^{\tau+1})$. W.l.o.g., we may assume that $0<x_{1}'<x_{2}'<\cdots<x_{k_1}'<2^{\tau+1}$.
Now, suppose that, for each root $x_{j}'$, an isolating interval $I_{j}'=(a_{j},b_{j})\subset\mathcal{I}$ of width less than $2^{-L}$, with $L:=128\cdot n\cdot(\tau+k\cdot \log n)$ and $a_{j},b_{j}\in\mathbb{Q}$, is given. Then, based on the following theorem, we can compute the sign of $p$ at the roots of $p_1$, and thus, determine common roots of $p$ and $p_1$.
Because of space limitations, we give the proof of Theorem~\ref{evalbound} in the Appendix. It mainly combines known results; however, we remark that we could not find a comparable result in the literature, where only evaluation/separation bounds of size $\tilde{O}(n^2+n\mu)$ are given.
\begin{theorem}\label{evalbound}
Let $f$ and $g$ be polynomials of degree $n$ or less with integer coefficients of absolute values less than $2^{\mu}$, and let
\begin{align}
L:=128\cdot n\cdot (\log n+\mu).
\end{align}
Then, for any two distinct roots $\xi_i$ and $\xi_j$ of $F:=f\cdot g$, it holds that $|\xi_i-\xi_j|^{m_i}>2^{-L}$, where $m_i:=\operatorname{mult}(\xi_i,F)$ denotes the multiplicity of $\xi_i$ as a root of $F$. If $\xi$ is a root of $g$ and $f(\xi)\neq 0$, then it holds that $|f(x)|>2^{-L/4}$ for all $x\in\mathbb{C}$ with $|x-\xi|<2^{-L}$. Vice versa, if $f(\xi)=0$, then $|f(x)|<2^{-L}$ for all $x\in\mathbb{C}$ with $|x-\xi|<2^{-L}$.
\end{theorem}
\ignore{
\begin{proof}
For the proof, we mainly combine known results~\cite{MSW-rootfinding2013}, however, we aim to stress the fact that the following computations are necessary to derive an $L$ of size $O(n(\log n+\tau))$. Namely, the literature only provides comparable bounds for square-free polynomials, whereas, for arbitrary polynomials, the existing bounds are of size $\tilde{O}(n^2+n\tau)$. This is mainly due to the fact that the known bounds for square-free polynomials are directly applied to the square-free part, and, in general, the square-free part of an integer polynomial of magnitude $(n,\tau)$ is of magnitude $(n,O(n+\tau))$.
Let $F(x)=f(x)\cdot g(x)=F_N\cdot\prod_{j=1}^N (x-z_j)$, where $z_1,\ldots,z_N$ denote the complex roots of $F$. Then, $F$ has degree $N\le 2n$ and its coefficients are integers of absolute value $2^{\tau_F}$ with $\tau_F<2(\tau+\log n)$. Now, suppose that $F$ has exactly $r_0$, with $1\le r_0\le \deg F$, distinct complex roots $\xi_1$ to $\xi_{r_0}$ with multiplicities $m_1$ to $m_{r_0}$, respectively. From the proof of~\cite[Theorem 5]{MSW-rootfinding2013}, we conclude that
\[
\prod_{i=1}^{r_0}\min\left(1,\frac{|F^{(m_i)}(\xi_i)|}{|F_N|\cdot m_i!}\right)\ge \left(2^{3\tau_F+2\cdot\log N+1}\cdot \operatorname{Mea}(F)\right)^{-N},
\]
where $\operatorname{Mea}(F)=|F_N|\cdot\prod_{i=1}^{r_0}\max(1,|\xi_i|)^{m_i}$ denotes the Mahler Measure of $F$ and $F^{(m)}(x):=\frac{d^m F(x)}{dx^m}$ the $m$-th derivative of $F$. Since $\operatorname{Mea}(F)\le \|F\|_2\le\sqrt{N+1}\cdot 2^{\tau_F}$, a simple computation shows that
\begin{align}
\prod_{i=1}^{r_0}\min\left(1,\frac{|F^{(m_i)}(\xi_i)|}{|F_N|\cdot m_i!}\right)> 2^{-24n(\tau+\log n)}.\label{bound1}
\end{align}
Now, assume that $\xi=\xi_i$ is a root of $g$ and that $f(\xi)\neq 0$. Then, it follows that
\[
|f(\xi)|=\frac{|F^{(m_i)}(\xi_i)|}{|g^{(m_i)}(\xi_i)|}> \frac{2^{-24n(\tau+\log n)}}{(n+1)\cdot 2^\tau \cdot |\xi_i|^n}>2^{-28n(\tau+\log n)},
\]
where we used that $\xi_i$ is a root of $g$ of multiplicity $m_i$ and $|\xi_i|<2^{\tau+1}$ for all $i$. Hence, if $w:=|x-\xi|<2^{-L}$, then
\begin{align*}
|f(x)|&=\left|f(\xi)+\frac{f'(\xi)}{1!}\cdot w+\cdots+\frac{f^{(n)}(\xi)}{n!}\cdot w^n\right|\\
&\ge |f(\xi)|-w\cdot n^2\cdot 2^{\tau}\cdot 2^{n(\tau+1)}\ge 2^{-32(n(\tau+\log n))}.
\end{align*}
Vice versa, if we assume that $f(\xi)=0$, then $|f(x)|<w\cdot n^2\cdot 2^{n(\tau+1)}<2^{-64n(\tau+\log n)}$ for all $x$ with $|x-\xi|\le w\le 2^{-L}$. This proves the second claim. For the first claim, let $\xi_i$ and $\xi_j$ be any two distinct roots of $F$. Then, we conclude from (\ref{bound1}) that
\begin{align}\label{boundonsep2}
2^{-24n(\tau+\log n)}&<\frac{|F^{(m_i)}(\xi_i)|}{|F_N|\cdot m_i!}=\prod_{l\neq i}|\xi_i-\xi_l|^{m_l}\\
&= |\xi_i-\xi_j|^{m_j}\cdot \prod_{l\neq i,j}|\xi_i-\xi_l|^{m_l}\le |\xi_i-\xi_j|^{m_j}\cdot 2^{2N(\tau_F+1)},\nonumber
\end{align}
and thus, the first claim follows.
\end{proof}
}
From the above theorem, we conclude that, for all $j=1,\ldots,k_1$:\smallskip
\noindent $\bullet$ $p$ has at most one root $\xi$ in $I_{j}'$.\\
\noindent $\bullet$ If $p$ has a root $\xi$ in $I_{j}'$, then $\xi=x_{j}'$ and $|p(x)|<2^{-L}$ for all $x\in I_{j}'$.\\
\noindent $\bullet$ If $p$ has no root in $I_{j}'$, then $|p(x)|>2^{-L/4}$ for all $x\in I_{j}'$.\smallskip
Hence, we can determine the sign of $p$ at each root $x_{j}'$ of $p_1$ by evaluating $p(x)$ to an absolute error\footnote{\small For now, the reader may assume that we exactly evaluate $p(x)$ for some rational $x\in I_{j}'$. However, we will need the more general statement for our version of the algorithm that uses approximate arithmetic, as proposed in Section~\ref{bitcomplexity}.} of less than $2^{-L/2}$, where $x$ is an arbitrary point in $I_{j}'$.
Let $x_{0}':=0$ and $x_{k_1+1}':=2^{\tau+1}$, and let $I_{0}'=[a_{0},b_{0}]:=[x_{0}',x_{0}']$ and $I_{k_1+1}'=[a_{k_1+1},b_{k_1+1}]:=[x_{k_1+1}',x_{k_1+1}']$ be corresponding intervals of width $0$. Notice that the values $x_{j}'$ decompose $\mathcal{I}$ into $k_1+1$ many intervals $A_j:=(x_{j-1}',x_{j}')$ such that $p$ is monotone in each interval~$A_j$. In addition, if either $p(x_{j-1}')=0$ or $p(x_{j}')=0$, then $p$ has no root in $A_j$ according to Rolle's Theorem.
Hence, $p$ has a root $\xi$ in $A_j$ if and only if $p(x_{j-1}')\cdot p(x_{j}')<0$.
If the latter inequality holds, then $\xi$ is unique and simple. In fact, it even holds that the shortened interval $A_{j}':=(b_{j-1},a_{j})\subset A_j$ isolates $\xi$ because $I_{j-1}'$ and $I_{j}'$ do not contain any root of $p$. Now, since we can compute the sign
of $p$ at all points $x_{j}'$, isolating intervals for the positive real roots of
$p$ can directly be derived from the intervals $I_{j}'$. Notice that, for the
positive roots of $p$ with multiplicity larger than $1$, isolating intervals of width
less than $2^{-L}$ are already given. Namely, the multiple roots of $p$ are exactly
the common roots of $p$ and $p_1$, and thus they are already isolated by some of the intervals
$I_{j}'$. Each simple positive root of $p$ is isolated by an interval $A_j'$, which can be further refined to a width less than $2^{-L}$ using the refinement method
from Section~\ref{sec:refinement}. In summary, we have shown how to compute isolating intervals of width less than $2^{-L}$ for all roots of $p$ contained in $\mathcal{I}$ from isolating intervals of width less than $2^{-L}$ for all roots of~$p_1$ that are contained in $\mathcal{I}$.
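This lifting step can be summarized in a few lines. The following sketch uses exact rational arithmetic (Python fractions) in place of the approximate evaluation to absolute error $2^{-L/2}$; the function names are ours:
\begin{verbatim}
from fractions import Fraction

def sign_at(exps, coeffs, x):
    # exact sign of the k-nomial at a rational point x
    v = sum(c * x**e for e, c in zip(exps, coeffs))
    return (v > 0) - (v < 0)

def lift_intervals(p, p1_intervals, tau):
    # Given isolating intervals (a_j, b_j) of width < 2**-L for the
    # positive roots of p_1, return isolating intervals for the
    # positive roots of p = (exps, coeffs), tagged as multiple or not.
    exps, coeffs = p
    zero, bound = Fraction(0), Fraction(2)**(tau + 1)
    pts = [(zero, zero)] + sorted(p1_intervals) + [(bound, bound)]
    signs = [sign_at(exps, coeffs, (a + b) / 2) for a, b in pts]
    roots = []
    for j in range(1, len(pts)):
        if signs[j] == 0:                # common root of p and p_1
            roots.append((pts[j], True))
        if signs[j-1] * signs[j] < 0:    # simple root in (b_{j-1}, a_j)
            roots.append(((pts[j-1][1], pts[j][0]), False))
    return roots
\end{verbatim}
The intervals tagged as simple are then shrunk below $2^{-L}$ with the method from Section~\ref{sec:refinement}.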
We now recursively apply the above approach to $p_1$. More explicitly, for $j=1,\ldots,k-1$, we first compute the polynomials
\begin{align*}
&p_0:=p,\text{ }p_{j}:=x\cdot\hat{p}_{j-1}'+(i_{j}-i_{j-1})\cdot \hat{p}_j=x^{-(i_{j}-i_{j-1})+1}\cdot p_{j-1}'(x),\\&\text{where } p_{j-1}=p_{j-1}(0)+x^{i_{j}-i_{j-1}}\cdot \hat{p}_{j-1}(x),\text{ and }\hat{p}_{j-1}(0)\neq 0.
\end{align*}
Since $p_j$ is a $(k-j)$-nomial of magnitude
$(n-i_j,\tau+j\cdot \log n)$, $p_{j}$ becomes a constant for
$j=k-1$. Thus, computing isolating intervals of width less than $2^{-L}$ for the
positive roots of $p_{k-1}$ is trivial. Going backwards from $j=k-1$, we can then
iteratively compute isolating intervals $I_{j-1,1}$ to $I_{j-1,k_{j-1}}$ of width less than
$2^{-L}$ for the roots of $p_{j-1}$ from isolating intervals $I_{j,1}$ to $I_{j,k_{j}}$ of width less than $2^{-L}$ for the roots of $p_{j}$. Notice that Theorem~\ref{evalbound} applies to $f:=p_{j-1}$, $g:=p_{j}$ and any point $x\in I_{j,i}$
as $I_{j,i}$ has width less than $2^{-L}$ and $L=128\cdot n\cdot (\tau+k\cdot \log n)\ge 128\cdot\max(\deg(p_{j-1}),\deg(p_j))\cdot(\log\max(\|p_{j-1}\|_\infty,\|p_{j}\|_\infty)+\log n)$ for all $j\le k-1$. Hence, we can compute the sign of $p_{j-1}$ at each positive root of $p_{j}$ by evaluating $p_{j-1}$ at an arbitrary point in the corresponding isolating interval.
Notice that the above approach does not only yield isolating intervals for all real roots of $p$ but also the corresponding multiplicities. Namely, a root $\xi$ of $p$ has multiplicity $j$ if and only if $p_0(\xi)=\cdots=p_{j-1}(\xi)=0\neq p_j(\xi)$.\\
Before we continue with the analysis of our algorithm, we first consider a simple example to illustrate our approach. Notice that we tried to keep the formulation of our algorithm as simple as possible, with the primary goal of achieving the claimed complexity bounds, at the cost of a probably worse performance in practice. Hence, for an actual implementation, we propose to integrate additional steps in order to avoid costly refinement steps, and thus, to considerably speed up the overall approach. We hint at such techniques in the following section. The reader who is mainly interested in the theoretical complexity bounds should feel free to skip the example and directly continue with Section~\ref{ssec:complexity1}.
\subsection{An Example and Alternatives}
Let $p(x)=x^{50}-4\cdot x^{48}+4\cdot x^{46}-x^4+4\cdot x^2-4$ be a $6$-nomial of magnitude $(n,\tau):=(50,2)$. We consider the polynomials $p_j$, with $j=0,\ldots,5$, that are defined as follows:
\begin{align*}
p_0(x)&:=x^{50}-4\cdot x^{48}+4\cdot x^{46}-x^4+4\cdot x^2-4\\
&=x^2\cdot (x^{48}-4\cdot x^{46}+4\cdot x^{44}-x^2+4)-4=x^2\cdot\hat{p}_0-4\\
p_1(x)&:=x\cdot\hat{p}_0'+2\cdot \hat{p}_0=x^{-1}\cdot p_0'(x)=\\
&=x^2\cdot(50\cdot x^{46}-192\cdot x^{44}+184\cdot x^{42}-4)+8=x^2\cdot\hat{p}_1+8\\
p_2(x)&:=x\cdot\hat{p}_1'+2\cdot \hat{p}_1=x^{-1}\cdot p_1'(x)=\\
&=x^{42}\cdot(2400\cdot x^4-8832\cdot x^2+8096)-8=x^{42}\cdot\hat{p}_2-8\\
p_3(x)&:=x\cdot\hat{p}_2'+42\cdot \hat{p}_2=x^{-41}\cdot p_2'(x)=\\
&=x^2\cdot (110400\cdot x^2-388608)+340032=x^2\cdot\hat{p}_3+340032\\
p_4(x)&:=x\cdot\hat{p}_3'+2\cdot \hat{p}_3=x^{-1}\cdot p_3'(x)\\
&=x^2\cdot 441600-777216=x^2\cdot\hat{p}_4-777216\\
p_5(x)&:=x\cdot\hat{p}_4'+2\cdot \hat{p}_4=x^{-1}\cdot p_4'(x)=883200
\end{align*}
We want to recursively isolate and approximate the positive real roots of the polynomials $p_5$ to $p_0$, starting with $p_5$. Since we are only interested in the roots of $p=p_0$, we can restrict to the interval $\mathcal{I}:=(0,8)$, which must contain all positive roots of $p$. Trivially, $p_5$ has no root, and thus, $p_4$ is monotone in $\mathcal{I}$.
Since $p_4(0)<0$ and $p_4(8)>0$, the interval $I_{4,1}:=\mathcal{I}$ isolates the unique (simple) positive real root (at $x_{4,1}\approx 1.326$) of $p_4$ in $\mathcal{I}$. The polynomial $p_3$ is monotone in each of the two intervals $(0,x_{4,1})$ and $(x_{4,1},8)$. Refining the isolating interval for $x_{4,1}$ to a width less than $2^{-L}$, with $L:=128\cdot \deg(p)\cdot(\log \|p\|_{\infty}+6\cdot\log \deg(p))\approx 8.5\cdot 10^4$, and using Theorem~\ref{evalbound}, we can evaluate the sign of $p_3$ at $x=x_{4,1}$. Since $p_3(0)>0$, $p_3(x_{4,1})\approx -1943<0$, and $p_3(8)>0$, each of the two intervals $I_{3,1}:=(0,x_{4,1})$ and $I_{3,2}=(x_{4,1},8)$ isolates a (simple) positive real root (at $x_{3,1}\approx 1.275$ and at $x_{3,2}\approx 1.375$) of $p_3$. The polynomial $p_2$ is monotone in each of the three intervals $(0,x_{3,1})$, $(x_{3,1},x_{3,2})$, and $(x_{3,2},8)$.
We again refine the isolating intervals for $x_{3,1}$ and $x_{3,2}$ to a width less than $2^{-L}$ and evaluate the sign of $p_2$ at the points $x=0$, $x=8$, and at the roots of $p_3$. From the latter evaluations, we conclude that $p_2$ has exactly three positive (simple) real roots (at $x_{2,1}\approx 0.869$, $x_{2,2}\approx 1.315$, and at $x_{2,3}\approx 1.396$), which are isolated by the intervals $I_{2,1}:=(0,x_{3,1})$, $I_{2,2}:=(x_{3,1},x_{3,2})$, and $I_{2,3}:=(x_{3,2},8)$, respectively.
Refining the isolating intervals for $x_{2,1}$, $x_{2,2}$, and $x_{2,3}$ to a width less than $2^{-L}$ again allows us to evaluate the sign of $p_1$ at the roots of $p_2$. The latter computation shows that $p_1$ has exactly two (simple) positive real roots in $\mathcal{I}$ (at $x_{1,1}\approx 1.356$ and at $x_{1,2}\approx 1.414$), which are isolated by the intervals $(x_{2,2},x_{2,3})$ and $(x_{2,3},8)$, respectively. Eventually, we refine the intervals to a width less than $2^{-L}$ and evaluate the sign of $p_0=p$ at the roots of $p_1$. We have $p_0(x_{1,1})\approx 3\cdot 10^4$ and $p_0(x)<2^{-L}$, where $x$ has been arbitrarily chosen from the isolating interval for $x_{1,2}$. Hence, from Theorem~\ref{evalbound}, we conclude that $p_0(x_{1,2})=0$, and thus, $x_{0,2}:=x_{1,2}=\sqrt{2}$ is a positive real root of $p$ of multiplicity~$2$. In addition, since $p_0(0)=-4<0$ and $p_0(x_{1,1})>0$, the interval $(0,x_{1,1})$ isolates a second, simple positive real root of $p$ (at $x_{0,1}=1$); indeed, $p=(x^2-2)^2\cdot(x^{46}-1)$.\medskip
Notice that, in each step except the last (i.e.~for $j=2,\ldots,5$), we could consider an alternative approach, where we simultaneously refine the isolating interval for a root $\xi$ of $p_{j}$ and use interval arithmetic to evaluate the sign of $p_{j-1}(\xi)$. Following the analysis in~\cite[Section 4]{qir-kerber-11}, one can show that, if $p_{j-1}(\xi)\neq 0$, then this approach yields the correct sign as soon as the interval has been refined to a width less than $2^{-L'}$, with $L'=\deg(p_{j-1})\cdot(4+\log\max(1,|\xi|))-\log|p_{j-1}(\xi)|+\tau$. For instance, in our example above, the sign of $p_3$ at the root $x_{4,1}$ of $p_4$ can be determined from an isolating interval for $x_{4,1}$ of width less than $2^{-11}$ (compared to the theoretical bound of approximate size $2^{- 8.5\cdot 10^4}$ from Theorem~\ref{evalbound}).
Hence, for a practical implementation, we strongly recommend integrating such techniques in order to rule out easy cases in a more efficient way.
However, for deciding that $p_0$ evaluates to zero at $x=x_{1,2}$, methods that are purely based on approximate computation will not work.\footnote{\small That is, without computing an explicit theoretical evaluation bound $2^{-L}$ as given in Theorem~\ref{evalbound}. Certainly, if one is willing to use such a bound, then also numerical computation will yield the correct result as soon as the interval has size less than $2^{-L}$ and the precision of the approximate arithmetic is large enough to guarantee an absolute error of less than $2^{-L/2}$.} One possible way, as proposed in our algorithm, is to refine the isolating interval for $x_{1,2}$ to a sufficiently small size, to evaluate $p$ at an arbitrary point that is contained in the isolating interval, and to apply Theorem~\ref{evalbound}.
Another way is to compute the square-free part $g^*$ of the greatest common divisor $g:=\gcd(p_{j-1},p_{j})$ of $p_{j-1}$ and $p_{j}$ and to check whether $g^*$ changes sign at the endpoints of the isolating interval. The advantage of the latter approach is that $p_{j-1}$ and $p_{j}$ typically do not share a common non-trivial factor, which can be easily checked via modular computation, and thus, it suffices to use interval arithmetic to compute the sign of $p_{j-1}$ at the roots of $p_{j}$.
However, although the second approach, which is based on computing $g^*$, seems to be more efficient in practice, there is a severe drawback with respect to its arithmetic complexity. Namely, the considered symbolic computations need a number of arithmetic operations that is super-linear in the degree of the involved polynomials. In contrast, we will show that the first approach, which is entirely based on refinement and evaluation, only uses a number of arithmetic operations that is polynomial in the input size of the sparse representation of the input polynomial.
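For completeness, the symbolic alternative could be sketched as follows (using SymPy; as just discussed, this route costs a super-linear number of arithmetic operations in $n$, which is why our algorithm avoids it):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

def contains_common_root(p_prev, p_cur, a, b):
    # (a, b) contains a common root of p_{j-1} and p_j if and only if
    # the square-free part g* of g = gcd(p_{j-1}, p_j) changes sign
    # at the endpoints
    g = sp.gcd(p_prev, p_cur)
    g_star = sp.quo(g, sp.gcd(g, sp.diff(g, x)))  # square-free part
    return bool(g_star.subs(x, a) * g_star.subs(x, b) < 0)
\end{verbatim}
In the example above, $p_0$ and $p_1$ share the common factor $x^2-2$, so $g^*$ changes sign on the isolating interval of $x_{1,2}=\sqrt{2}$.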
\subsection{Arithmetic Complexity}\label{ssec:complexity1}
For an arbitrary $k$-nomial $p\in\mathbb{Z}[x]$ of magnitude $(n,\tau)$, suppose that each simple positive root $\xi$ of $p$ is already isolated by a corresponding interval $I=(a,b)\subset\mathcal{I} =(0,2^{\tau+1})$ with rational endpoints $a$ and $b$. In Section~\ref{sec:refinement}, we give an algorithm to refine \emph{all} such isolating intervals to a width less than $2^{-L}$ using $O(k^2\cdot (\log(n\tau)+\log L)\cdot\log n)$ arithmetic operations over $\mathbb{Q}$.
Hence, from the definition of our algorithm for root isolation, we conclude that we can compute isolating intervals of width less than $2^{-L}$, with $L:=128\cdot n\cdot(\tau+k\cdot \log n)$, for all roots of $p_{j-1}$ contained in $\mathcal{I}$ from isolating intervals of size less than $2^{-L}$ for the
roots of $p_{j}$ contained in $\mathcal{I}$ using only $O(k^2\cdot \log(n\tau)\cdot\log n)$ many
arithmetic operations: Namely, we can compute $p_j$ from $p_{j-1}$ with $k$ multiplications and $k$ additions. For evaluating $p_{j-1}$ at an arbitrary point $x\in\mathbb{Q}$, we need
at most $2k\log n$ arithmetic operations since we can compute $x^i$ with less than
$2\log i$ multiplications by repeated squaring (e.g.~$x^{11}=x\cdot x^2\cdot
((x^2)^2)^2$) and $p_{j-1}$ has at most $k$ non-vanishing coefficients. We have shown that
evaluating the sign of $p_{j-1}$ at each root $\xi\in \mathcal{I}$ of $p_j$ can be reduced to the evaluation of $p_{j-1}$
at an arbitrary (rational) point $x\in I'$, where $I'$ is an isolating interval for $\xi$. Hence, the latter evaluations need at
most $(k+1)\cdot (2k\log n)$ many arithmetic operations as each polynomial $p_j$ is a $(k-j)$-nomial of magnitude $(n-i_j,\tau+j\cdot\log n)$. Finally, the refinement of
the isolating intervals for the simple positive roots of $p_{j-1}$ needs $O(k^2\cdot
\log(n\tau)\cdot\log n)$ many arithmetic operations. Hence, the total number of arithmetic operations is bounded by
\begin{align}\nonumber
&k\cdot\left(3k+2k^2\log n+O(k^2\cdot \log(n\tau)\cdot\log n)\right).
\end{align}
We fix this result:
\begin{theorem}\label{thm:main1}
Let $p\in\mathbb{Z}[x]$ be a $k$-nomial of magnitude $(n,\tau)$. Then, all real roots of $p$ can be isolated with $O(k^3\cdot \log(n\tau)\cdot\log n)$ arithmetic operations over the rational numbers.
\end{theorem}
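For illustration, the evaluation primitive underlying the operation count above can be sketched as follows; the explicit squaring loop makes the (at most) $2\log i$ multiplications per power visible:
\begin{verbatim}
from fractions import Fraction

def pow_by_squaring(x, e):
    # x**e with at most 2*log2(e) multiplications
    result, base = 1, x
    while e:
        if e & 1:
            result *= base
        base *= base
        e >>= 1
    return result

def eval_k_nomial(exps, coeffs, x):
    # exact evaluation of sum_j coeffs[j] * x**exps[j] at a rational
    # point with O(k log n) arithmetic operations
    x = Fraction(x)
    return sum(c * pow_by_squaring(x, e) for e, c in zip(exps, coeffs))
\end{verbatim}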
Notice that, from the latter theorem, we can immediately derive a corresponding result for polynomials $p(x)=\sum_{l=0}^{k-1}a_{i_l}x^{i_l}\in\mathbb{Q}[x]$ with rational
coefficients. Namely, suppose that $a_{i_l}=\frac{p_{i_l}}{q_{i_l}}$,
with integers $p_{i_l}$ and $q_{i_l}$ of absolute values less than $2^\tau$. Then, $P(x):=p(x)\cdot \prod_{l=0}^{k-1}q_{i_l}\in\mathbb{Z}[x]$ is a $k$-nomial of magnitude $(n,k\tau)$ that has the same roots as $p(x)$. Since $P$ can be computed from $p$ using less than $2k$ multiplications, we conclude from Theorem~\ref{thm:main1}:
\begin{corollary}
Let $p(x)=\sum_{l=0}^{k-1}a_{i_l}x^{i_l}\in\mathbb{Q}[x]$ be a polynomial with rational coefficients of the form $a_{i_l}=\frac{p_{i_l}}{q_{i_l}}$, where $p_{i_l},q_{i_l}\in\mathbb{Z}$ and $|p_{i_l}|,|q_{i_l}|<2^\tau$ for all $l=0,\ldots,k-1$. Then, all real roots of $p$ can be isolated with $O(k^3\cdot \log(k n\tau)\cdot\log n)$ arithmetic operations over the rational numbers.
\end{corollary}
\section{Root Refinement}\label{sec:refinement}
Throughout the following considerations, let $p(x)$ be a $k$-nomial of magnitude $(n,\tau)$ as in (\ref{polyf}), and let $I_0=(a_0,b_0)\subset\mathcal{I}=(0,2^{\tau+1})$, with $a_0,b_0\in\mathbb{Q}$, be an isolating interval for a simple real root
$\xi$ of~$p$. For a given positive integer $L\in\mathbb{Z}$, we aim to refine $I_0$ to a width less than $2^{-L}$. Our refinement
method is almost identical to a combination of the \emph{Newton-} and the \emph{Boundary-Test} as proposed in a very
recent paper~\cite{DBLP:journals/corr/SagraloffM13} on real root isolation, however, we slightly modify the latter approach in order to exploit the sparsity of $p$. That is, for testing an interval $I\subset I_0$ for the
existence of a root, we replace a test based on Descartes' Rule of Signs (see~Theorem~\ref{propertiesvar}) by a simple sign evaluation of $p$ at the endpoints of
$I$. For refining $I_0$, this is possible as $I_0$ is assumed to be isolating for a simple root, whereas the method from~\cite{DBLP:journals/corr/SagraloffM13} has to process arbitrary intervals for which no further information is provided.\footnote{\small The Newton-Test from~\cite[Section 3.2]{DBLP:journals/corr/SagraloffM13} is a crucial subroutine within the root isolation algorithm \textsc{ANewDsc}. It guarantees that, during the isolation process, clusters of roots are automatically detected and further approximated in an efficient manner. In this setting, the Newton-Test applies to arbitrary intervals $I$ that are not known to be isolating yet. Notice that, for the refinement of $I_0$, we cannot directly use the original Newton-Test from~\cite{DBLP:journals/corr/SagraloffM13}. Namely, in general, the polynomial $p_I$ from (\ref{polypI}) is not sparse anymore, even for small~$k$, and thus, we would need a super linear number of arithmetic operations to compute $\operatorname{var}(p,I)$ (see Thm.~\ref{propertiesvar} for definitions). However, when refining an interval $I_{0}$ that is known to isolate a simple real root $\xi$ of $p$, we can test a subinterval $I\subset I_{0}$ for being isolating with only two evaluations of $p$. Namely, $I$ isolates $\xi$ if and only if $p(a)\cdot p(b)<0$.}
For the sake of a self-contained presentation, we briefly review some basic facts about Descartes' Rule of Signs before presenting the refinement algorithm; for an extensive treatment of the Descartes method, we refer to~\cite{Collins-Akritas,eigenwillig-phd,ESY06,Sagraloff12,DBLP:journals/corr/SagraloffM13}:
\begin{figure}[t]
\begin{center}
\includegraphics[width=4.5cm]{obreshkoff.png}\end{center}
\caption{\label{fig:Obreshkoff} For an arbitrary integer $k\in\{0,\ldots,n\}$, let
$\overline{C}_k$ and $\underline{C}_k$ for $I:=(a,b)$ have the endpoints of $I$ on their
boundaries; their centers see the line segment $\overline{ab}$ under the angle $\frac{2\pi}{k+2}$.
The \emph{Obreshkoff lens} $L_k$ is the interior of $\overline{C}_k \cap
\underline{C}_k$, and the \emph{Obreshkoff area} $A_k$ is the interior of $\overline{C}_k \cup
\underline{C}_k$. $A_0$ and $A_1$ are called the \emph{One-} and \emph{Two-Circle Regions} of $I$, respectively.\vspace{-0.25cm}}
\end{figure}
For an arbitrary interval $I=(a,b)$, we denote by $\operatorname{var}(p,I)$ the number of
sign variations in the coefficient sequence $(a_{I,0},\ldots,a_{I,n})$ (after removing all zero-entries) of the polynomial
\begin{align}\label{polypI}
p_I(x)=\sum_{i=0}^n a_{I,i} x^i:=(x+1)^n\cdot p\left(\frac{a\cdot x+b}{x+1}\right).
\end{align}
The polynomial $p_I$ is computed from $p$ via the M\"obius transformation that maps a point $x\in\mathbb{C}\backslash \{-1\}$ to $\frac{a\cdot x+b}{x+1}\in\mathbb{C}$, followed by multiplication with $(x+1)^n$. Notice that the latter step ensures that denominators in $p((ax+b)/(x+1))$ are cleared. There is a one-to-one correspondence (preserving multiplicities) between the positive real roots of $p_I$ and the roots of $p$ in $I$. In addition, according to Descartes' Rule of Signs, $v:=\operatorname{var}(p,I)$ is an upper bound on the number $m$ of real roots of $p$ in $I$ and $v-m$ is an even integer.
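For concreteness, $\operatorname{var}(p,I)$ could be computed as sketched below; note that the transformed polynomial is dense, so this costs $\Theta(n^2)$ arithmetic operations even for sparse $p$, which is precisely why our refinement method replaces this test by plain sign evaluations:
\begin{verbatim}
from math import comb

def var_p_I(coeffs, a, b):
    # sign variations of (x+1)**n * p((a*x + b)/(x + 1)), where
    # coeffs[i] is the coefficient of x**i of p (exact rationals)
    n = len(coeffs) - 1
    out = [0] * (n + 1)
    for i, c in enumerate(coeffs):
        if c == 0:
            continue
        # expand c * (a*x + b)**i * (x + 1)**(n - i) binomially
        p1 = [comb(i, j) * a**j * b**(i - j) for j in range(i + 1)]
        p2 = [comb(n - i, j) for j in range(n - i + 1)]
        for j1, c1 in enumerate(p1):
            for j2, c2 in enumerate(p2):
                out[j1 + j2] += c * c1 * c2
    signs = [1 if v > 0 else -1 for v in out if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
\end{verbatim}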
The function $\operatorname{var}(p,\cdot)$ has several further important properties:
\begin{theorem}\label{propertiesvar}\cite{eigenwillig-phd,Obreschkoff:book,Obrechkoff:book-english}\quad
Let $I=(a,b)$ be an arbitrary interval, and let $L_i$ and $A_i$, with $i=0,1,\ldots,n$, be the Obreshkoff regions in $\mathbb{C}$ as defined in Figure~\ref{fig:Obreshkoff}. Then, it holds that (roots are counted with multiplicity):
\begin{itemize}
\item[(a)]
\# roots contained in $L_n$ $\le$ $\operatorname{var}(p,I)$ $\le$ \# roots contained in~$A_n$
\item[({b})] If $p$ contains no root in $A_0$, then $\operatorname{var}(p,I)=0$. If $A_1$ contains exactly one root, then $\operatorname{var}(p,I)=1$.
\item[({c})] If $I_1$ and $I_2$ are two disjoint subintervals of $I$, then
\[
\operatorname{var}(p,I_1) + \operatorname{var}(p,I_2) \le \operatorname{var}(p,I).
\]
\end{itemize}
\end{theorem}
From Theorem~\ref{propertiesvar} (c), we conclude that, for any interval $I=(a,b)\subset \mathbb{R}_{>0}$ on the positive real axis, $\operatorname{var}(p,I)$ is upper bounded by $\operatorname{var}(p,(0,b))=\operatorname{var}(p)$, and thus, $\operatorname{var}(p,I)\le k-1$. In particular, we have $\operatorname{var}(p,I_0)\le k-1$. Hence, part (a) of Theorem~\ref{propertiesvar} implies that the Obreshkoff lens $L_n$ of $I_0$ contains at most $k-1$ roots of $p$.
We can now formulate our refinement method. As mentioned above, it is almost identical to the approach presented in~\cite[Section 3.2]{DBLP:journals/corr/SagraloffM13}, hence we keep the presentation at hand as short as possible and refer to the corresponding paper for more explanations and
for a discussion that motivates the approach:
The main idea is to iteratively refine $I_0$ such that, in each iteration, we replace an isolating interval $I=(a,b)\subset I_0$ by an isolating interval $I'=(a',b')\subset I$ of considerably smaller width, and with $a',b'\in\mathbb{Q}$. For this, we use two main ingredients, namely, sign evaluation of $p$ at the endpoints of $I'$ in order to test $I'$ for the existence of a root, and a subdivision strategy based on Newton iteration and bisection, which guarantees quadratic convergence in the majority of all steps. We give details:\medskip
\noindent\textbf{Algorithm} \textsc{NewRefine} (read Newton-Refine)\medskip
\noindent Input: An interval $I_0=(a_0,b_0)\subset \mathcal{I}=(0,2^{\tau+1})$, with endpoints $a_0,b_0\in\mathbb{Q}$, that isolates a simple root $\xi\in\mathbb{R}$ of a polynomial $p\in\mathbb{Z}[x]$, and a positive integer $L$.\\
Output: An interval $I=(a,b)\subset I_0$, with endpoints $a,b\in\mathbb{Q}$, of width less than $2^{-L}$ that isolates $\xi$.\medskip
In each step of the recursion, we store a pair $\mathcal{A}:=(I,N_I)$, where we initially set $\mathcal{A}:=(I_0,N_{I_0})$, with $N_{I_0}:=4$. We first try to compute a subinterval $I'\subset I$ of width $w(I')$, with $\frac{w(I)}{8N_I}\le w(I')\le \frac{w(I)}{N_I}$, that contains the unique root $\xi$. This is done via two tests, that is, the \texttt{Newton-Test\_signat} and the \texttt{Boundary-Test\_signat}.\footnote{\small In order to distinguish the tests from their counterparts in~\cite{DBLP:journals/corr/SagraloffM13}, we use the affix \texttt{\_signat}, which refers to the sign evaluation of $p$ at the endpoints of an interval $I$ in order to test $I$ for the existence of a root. We also remark that we directly give both tests in their full generality. That is, at several places, we use approximate arithmetic, and, in addition, we allow choosing an arbitrary point $m_i$ from $\multipoint{m}{\delta}$, where $\multipoint{m}{\delta}:=\set{m_i:=m+(i-\lceil k/2\rceil)\cdot \delta}{i=0,\ldots,2\cdot \lceil k/2\rceil}$ is a set of $2\cdot\lceil k/2\rceil +1$ points that are clustered at $m$. For now, the reader may assume that we always choose $m_i=m$, and that exact arithmetic over rational numbers is used. However, for our variant of the root isolation algorithm that uses approximate arithmetic (see Section~\ref{bitcomplexity}), we will exploit the fact that we can choose a point $m_i$ from $\multipoint{m}{\delta}$ for which $|p(m_i)|$ becomes large. This guarantees that the precision does not become unnecessarily large.\label{footnote:apx}}
\medskip
\hrule\nopagebreak\medskip
\noindent \texttt{Newton-Test\_signat}:
Consider the points $\xi_{1}:=a+\frac{1}{4}\cdot w(I)$, $\xi_{2}:=a+\frac{1}{2}\cdot w(I)$, $\xi_{3}:=a+\frac{3}{4}\cdot w(I)$, and let $\epsilon := 2^{-\ceil{5 + \log n}}$.
For $j = 1,2,3$, we choose arbitrary points (see Footnote~\ref{footnote:apx})
\begin{equation}\label{Newtonmultipoint1}
\xi_{j}^{*}\in \multipoint{\xi_{j}}{\epsilon \cdot w(I)},
\end{equation}
where, for an arbitrary $m\in\mathbb{R}$ and an arbitrary $\delta\in\mathbb{R}_{>0}$, we define
$$\multipoint{m}{\delta}:=\set{m_i:=m+(i-\lceil k/2\rceil)\cdot \delta}{i=0,\ldots,2\cdot \lceil k/2\rceil}.$$
The points $\xi_j^*$ define values
$
v_{j}:=\frac{p(\xi_{j}^{*})}{p'(\xi_{j}^{*})}.
$
Then, for the three distinct pairs of indices $i,j\in\{1,2,3\}$ with $i<j$, we perform the following computations in parallel:
For $K=1,2,4,\ldots$ (the working precision, not to be confused with the input parameter $L$), we compute
approximations of $p(\xi^{*}_{i})$, $p(\xi^{*}_{j})$, $p'(\xi^{*}_{i})$, and
$p'(\xi^{*}_{j})$ to $K$ bits after the binary point; see Footnote~\ref{footnote:apx}. We stop doubling $K$ for a particular pair $(i,j)$ if we can
either verify that
\begin{align}\label{condition1}
|v_{i}|,|v_{j}|>w(I)\quad\text{or}\quad |v_{i}-v_{j}|<\frac{w(I)}{4n}\vspace{-0.2cm}
\end{align}
or that\vspace{-0.2cm}
\begin{align}\label{condition2}
|v_{i}|,|v_{j}|<2\cdot w(I)\quad\text{and}\quad |v_{i}-v_{j}|>\frac{w(I)}{8n}.
\end{align}
If (\ref{condition1}) holds, we discard the pair $(i,j)$. Otherwise, we compute sufficiently good
approximations (see Footnote~\ref{footnote:apx}) of the values $p(\xi^{*}_{i})$, $p(\xi^{*}_{j})$, $p'(\xi^{*}_{i})$, and $p'(\xi^{*}_{j})$, such that we can derive an approximation $\tilde{\lambda}_{i,j}$ of\vspace{-0.25cm}
\begin{align}\label{def:lambda}
\lambda_{i,j}:=\xi_{i}^{*}+\frac{\xi^{*}_{j}-\xi^{*}_{i}}{v_{i}-v_{j}}\cdot v_{i}
\end{align}
with $|\tilde{\lambda}_{i,j}-\lambda_{i,j}|\le \frac{w(I)}{32N_{I}}$.
If $\tilde{\lambda}_{i,j} \not\in[a,b]$, we discard the pair $(i,j)$. Otherwise, let $\ell_{i,j}:=\floor{\frac{4N_I\cdot(\tilde{\lambda}_{i,j}-a)}{w(I)}}$. Then, it holds that $\ell_{i,j} \in \{0,\ldots,4N_{I}\}$. We further define
\begin{align*}
I_{i,j}&:=(a_{i,j},b_{i,j})\\
:=&\left(a+\max(0,\ell_{i,j}-1)\cdot\frac{w(I)}{4N_{I}},a+\min(4N_{I},\ell_{i,j}+2)\cdot\frac{w(I)}{4N_{I}}\right).
\end{align*}
If $a_{i,j}=a$, we set $a_{i,j}^{*}:=a$, and if $b_{i,j}=b$, we set $b_{i,j}^*:=b$. For all other values for $a_{i,j}$ and $b_{i,j}$, we choose arbitrary points (see Footnote~\ref{footnote:apx})
\begin{equation}\label{Newtonmultipoint2}
a_{i,j}^{*}\in \multipoint{a_{i,j}}{\epsilon\cdot \frac{w(I)}{N_I}}
\quad\text{and}\quad
b_{i,j}^{*}\in \multipoint{b_{i,j}}{\epsilon\cdot \frac{w(I)}{N_I}}.
\end{equation}
We define $I':=I_{i,j}^{*}:=(a_{i,j}^{*},b_{i,j}^{*})$. Notice that $I'$ is contained in $I$, and it holds that $\frac{w(I)}{8N_{I}}\le w(I')\le \frac{w(I)}{N_{I}}$. In addition, if the endpoints of $I$ are dyadic, then the endpoints of~$I'$ are dyadic as well.
In the final step, we compute the sign of $p(a_{i,j}^{*})$ and $p(b_{i,j}^{*})$. $I'$ is isolating for $\xi$ if and only if $p(a_{i,j}^{*})\cdot p(b_{i,j}^{*})<0$, hence, if the latter inequality is fulfilled, we return $I'$. Otherwise, we discard $(i,j)$.
We say that the \texttt{Newton-Test\_signat} succeeds if it returns an interval $I'=I_{i,j}^{*}$ for at least one of the three pairs $(i,j)$. If we obtain an interval for more than one pair, we can output either one of them. Otherwise, the test fails.
\medskip
\hrule\medskip
If the \texttt{Newton-Test\_signat} succeeds, we replace $\mathcal{A}=(I,N_I)$ by $\mathcal{A}:=(I',N_{I'})$, with $N_{I'}:=N_I^2$. If the \texttt{Newton-Test\_signat} fails, we continue with the so-called \texttt{Boundary-Test\_signat}. Essentially, it checks whether $\xi$ is located very close to one of the endpoints of $I$.\medskip
\hrule \nopagebreak \medskip
\noindent\texttt{Boundary-Test\_signat:}
Let $m_{\ell}:=a+\frac{w(I)}{2N_{I}}$ and $m_{r}:=b-\frac{w(I)}{2N_{I}}$, and let $\epsilon := 2^{-\ceil{2 + \log n}}$. Choose arbitrary points (see Footnote~\ref{footnote:apx})
\begin{equation}\label{Boundarymultipoint}
m_{\ell}^{*}\in \multipoint{m_{\ell}}{\epsilon \cdot\frac{w(I)}{N_I}} \quad\text{and}\quad m_{r}^{*}\in \multipoint{m_{r}}{\epsilon\cdot\frac{w(I)}{N_I}},
\end{equation}
and compute the sign of $p(x)$ at $x=a$, $x=m_{\ell}^{*}$, $x=m_{r}^{*}$, and $x=b$.
If $p(a)\cdot p(m_{\ell}^{*})<0$, then $I':=(a,m_{\ell}^{*})$ isolates $\xi$, and thus, we return $I'$. If $p(b)\cdot p(m_{r}^{*})<0$, we return $I':=(m_{r}^{*},b)$. Notice that, from our definition of $m_{\ell}^{*}$ and $m_{r}^{*}$, it follows that both intervals $(a,m_{\ell}^{*})$ and $(m_{r}^{*},b)$ have width in between $\frac{w(I)}{4N_{I}}$ and $\frac{w(I)}{N_{I}}$. If $p(a)\cdot p(m_{\ell}^{*})<0$ or $p(b)\cdot p(m_{r}^{*})<0$, the \texttt{Boundary-Test\_signat} succeeds. Otherwise, the test fails. \nopagebreak
\medskip
\hrule\medskip
If the \texttt{Boundary-Test\_signat} succeeds, then $\mathcal{A}=(I,N_I)$ is replaced by $\mathcal{A}:=(I',N_{I'})$, with $N_{I'}:=N_I^2$.
If the \texttt{Newton-Test\_signat} as well as the \texttt{Boundary-Test\_signat} fail, then we choose an arbitrary point (see Footnote~\ref{footnote:apx})
\begin{align}\label{Bisectionmultipoint}
m^* \in \multipoint{m(I)}{\frac{w(I)}{2^{\ceil{2 + \log n}}}}
\end{align}
and compute the sign of $p(x)$ at $x=a$ and $x=m^*$. If $p(a)\cdot p(m^*)<0$, we replace $\mathcal{A}=(I,N_I)$ by $\mathcal{A}:=(I',N_{I'})$, with $I'=(a,m^*)$ and $N_{I'}:=\max(4,\sqrt{N_{I}})$. If $p(m^*)=0$, we stop and return the interval $[m^*]$ of width zero. Otherwise, we replace $\mathcal{A}=(I,N_I)$ by $\mathcal{A}:=(I',N_{I'})$, with $I'=(m^*,b)$ and $N_{I'}:=\max(4,\sqrt{N_{I}})$. We stop refining $I$ as soon as $I$ has width less than~$2^{-L}$.\bigskip
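Stripped of the multipoint sets, the approximate arithmetic, and the three parallel index pairs, the skeleton of \textsc{NewRefine} can be sketched as follows (a much-simplified sketch with exact rational arithmetic, where \texttt{p} and \texttt{dp} are callables evaluating $p$ and $p'$ at rational points; the sketch keeps the invariant $p(a)\cdot p(b)<0$ and the update rule for $N_I$):
\begin{verbatim}
from fractions import Fraction
from math import isqrt

def new_refine(p, dp, a, b, L):
    # shrink an isolating interval (a, b), with p(a)*p(b) < 0, of a
    # simple root to width < 2**-L; a successful Newton step squares N
    # (quadratic convergence), otherwise we bisect and take sqrt(N)
    a, b = Fraction(a), Fraction(b)
    sa, N = p(a), 4
    while b - a >= Fraction(1, 2**L):
        w, done = b - a, False
        x1, x2 = a + w / 4, a + 3 * w / 4        # two of the points xi_j
        if dp(x1) != 0 and dp(x2) != 0:
            v1, v2 = p(x1) / dp(x1), p(x2) / dp(x2)
            if v1 != v2:
                lam = x1 + (x2 - x1) / (v1 - v2) * v1   # cf. lambda_{i,j}
                if a < lam < b:
                    lo = max(a, lam - w / (2 * N))
                    hi = min(b, lam + w / (2 * N))
                    slo, shi = p(lo), p(hi)
                    if slo * shi < 0:            # candidate isolates xi
                        a, b, sa, N, done = lo, hi, slo, N * N, True
        if not done:                             # bisection fallback
            m = (a + b) / 2
            sm = p(m)
            if sm == 0:
                return m, m                      # hit the root exactly
            if sa * sm < 0:
                b = m
            else:
                a, sa = m, sm
            N = max(4, isqrt(N))
    return a, b
\end{verbatim}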
We formulated the \texttt{Newton-Test\_signat} and the \texttt{Boundary-Test\_signat} in a way such that each of them
succeeds if the corresponding test in~\cite{DBLP:journals/corr/SagraloffM13} succeeds, assuming that we choose the same
points in~(\ref{Newtonmultipoint1}),~(\ref{Newtonmultipoint2}), and in~(\ref{Boundarymultipoint}). Namely, if $I'=(a',b')$ is
known to contain at most one (simple) root of $p$, then $\operatorname{var}(p,I')=0$ implies that $p(a')\cdot p(b')\ge 0$.\footnote{\small $p(a')\cdot p(b')\ge 0$ implies that $I'$ contains no root but not that $\operatorname{var}(p,I')=0$.} Hence,
the analysis from~\cite{DBLP:journals/corr/SagraloffM13} directly carries over and yields the following result:\footnote{\small The proof of Lemma~\ref{bound1:connection} is essentially identical to our considerations
in~\cite[Section 3.2 and 4.1]{DBLP:journals/corr/SagraloffM13}. In particular, the proofs
of~\cite[Lemma 20 and 23]{DBLP:journals/corr/SagraloffM13} directly carry over if we use that $\log N_I$ is always bounded by $O(\tau+L)$ when refining $I_0$ to a width less
than~$2^{-L}$.}
\begin{lemma}\label{bound1:connection}
Let $I_0,I_1,\ldots,I_s$, with $I_0\supset I_1\supset\cdots\supset I_s$, $s\in\mathbb{N}$, and $w(I_{s-1})\ge 2^{-L}>w(I_s)$, be the intervals produced by the algorithm \textsc{NewRefine}, and let $s_{\max}$ be the largest number of intervals $I_j$ for which the one-circle region of $I_j$ contains exactly the same roots. Then, it holds that
$
s_{\max}=O(\log n+\log(\tau+L)).
$
\end{lemma}
From the lemma above and Theorem~\ref{propertiesvar}, we now obtain the following bound for the number of iterations that is needed to refine $I_0$ to an interval of width less than $2^{-L}$.
\begin{theorem}\label{thm:arithmeticcomplexity}
Let $I_0=(a_0,b_0)\subset (0,2^{\tau+1})$, with $a_0,b_0\in\mathbb{Q}$, be an isolating interval for a simple root $\xi$ of a $k$-nomial $p\in\mathbb{Z}[x]$ of magnitude $(n,\tau)$. For computing an interval $I=(a,b)\subset I_0$ of width less than $2^{-L}$, with $a,b\in\mathbb{Q}$ and $\xi\in I$, the algorithm \textsc{NewRefine} needs
$
O(\operatorname{var}(p,I_0)\cdot (\log n+\log(\tau+L)))
$
many iterations and $O(k\cdot\log n\cdot \operatorname{var}(p,I_0)\cdot (\log n+\log(\tau+L)))$ many arithmetic operations over $\mathbb{Q}$.
\end{theorem}
\begin{proof}
As in Lemma~\ref{bound1:connection}, we denote $I_0,I_1,\ldots,I_s$, with $I_0\supset I_1\supset\cdots\supset I_s$, $s\in\mathbb{N}$, and $w(I_{s-1})\ge 2^{-L}>w(I_s)$, the intervals produced by \textsc{NewRefine}.
Let $j_0$ be the minimal index $j$ for which the one-circle region $A_0$ of $I_j$ contains at most $v_0$ many roots, with $v_0:=\operatorname{var}(p,I_0)$. If the one-circle region of each $I_j$ contains more than $v_0$ roots, we set $j_0:=s$.
Now, using Lemma~\ref{bound1:connection} for the sub-sequence $I_{j_0},\ldots,I_{s}$, we conclude that $s-j_0\le v_0\cdot s_{\max}$.
Hence, we are left to argue that $j_0$ is bounded by $O(v_0\cdot (\log n+\log(\tau+L)))$. In fact, the following consideration even shows that $j_0=O(\log n+\log(\tau+L))$: We first consider the special case, where $I_{j_0}$ shares a common endpoint with $I_0$. Then, exactly the same argument as in the proof of~\cite[Lemma 23]{DBLP:journals/corr/SagraloffM13} shows that $j_0$ is bounded by $O(\log n+\log(\tau+L))$, where we use that $N_{I_j}=O(L+\tau)$ for all $j$. Essentially, this is due to the fact that success of the \texttt{Boundary-Test\_signat} guarantees quadratic convergence, and the latter test must succeed for all but $O(\log n+\log(\tau+L))$ many iterations. Now suppose that there exists an index $j_0'<j_0$ such that $I_{j_0'}$ shares a common endpoint with $I_0$, whereas $I_{j_0'+1}$ does not. Then, $j_0'=O(\log n+\log(\tau+L))$. In addition, the distance from any point $x\in I_{j_0'+1}$ to each of the two endpoints $a_0$ and $b_0$ is larger than or equal to $w(I_{j_0'+1})/4$. Hence, since $w(I_{j+1})\le \frac{3}{4}\cdot w(I_{j})$ for all $j$, we have $\max(|a_j-a_0|,|b_j-b_0|)>8n^2\cdot w(I_j)$ for all $j>j_0'+4(\log n+1)$. According to~\cite[Lemma 9]{Sagraloff12}, it follows that the one-circle region of any interval $I_j$, with $j>j_0'+4(\log n+1)$, is contained in the Obreshkoff lens $L_n$ of $I_{0}$. Now, from part (a) of Theorem~\ref{propertiesvar}, we conclude that the one-circle region of $I_j$ contains at most $v_0$ roots for each $j>j_0'+4(\log n+1)$, and thus, $j_0=O(\log n+\log(\tau+L))$. This proves the first claim.
For the second claim, we remark that, in each iteration, we perform a constant number of evaluations of the polynomial $p$ and its derivative $p'$. Since both polynomials have $k$ coefficients or less, this shows that we need $O(k\log n)$ arithmetic operations over $\mathbb{Q}$ in each iteration. Multiplication of the latter bound with the bound on the number of iterations eventually yields the claimed bound on the arithmetic complexity.
\end{proof}
Now suppose that isolating intervals $I_1,\ldots,I_{k_0}\subset \mathcal{I}$ for all simple real roots of $p$ are given. Then, $\sum_{j=1}^{k_0}\operatorname{var}(p,I_j)\le \operatorname{var}(p,\mathcal{I})\le k$, and thus, Theorem~\ref{thm:arithmeticcomplexity} yields the following result:
\begin{corollary}\label{cor:arithmeticcomplexity}
Let $p\in\mathbb{Z}[x]$ be a $k$-nomial of magnitude $(n,\tau)$, and $I_j=(a_j,b_j)\subset \mathcal{I}=(0,2^{\tau+1})$, with $j=1,\ldots,k_0$ and $a_j,b_j\in\mathbb{Q}$, be isolating intervals for all simple real roots of $p$. Then, we can refine all intervals $I_j$ to a width less than $2^{-L}$, with $L$ an arbitrary positive integer, with a number of arithmetic operations over $\mathbb{Q}$ bounded by
$
O(k^2\cdot \log n\cdot (\log n+ \log(\tau+L))).
$
\end{corollary}
\section{Bit Complexity}\label{bitcomplexity}
We finally aim to derive a bound on the bit complexity of our algorithm when using approximate but certified arithmetic. When using exact arithmetic over dyadic numbers (i.e.~numbers of the form $m\cdot 2^{-l}$,
with $m,l\in\mathbb{Z}$), all intermediate results are dyadic and of bit-size $O(n^2(\log n+\tau))$.
Namely, we refine intervals to a width of size $2^{-O(n(\tau+\log n))}$, and only consider evaluations of the polynomial $p$
at dyadic points that are contained in such intervals and whose bit-size is bounded by $\kappa=O(n(\tau+\log n))$. From the latter fact and Theorem~\ref{thm:main1}, we conclude that the overall bit complexity of our algorithm
is bounded by $\tilde{O}(n^2\tau\cdot k^3)$ when using exact arithmetic over rational (dyadic) numbers. Here, we use that exact evaluation of $p$ at a dyadic number of bit-size $\kappa$ needs $\tilde{O}(n(\kappa+\tau))$ bit operations~\cite[Lemma~2]{DBLP:conf/issac/BouzidiLPR13}.
However, the following considerations show that we can replace a factor $n$ by an additional factor $k$ in the latter bound. More precisely, using approximate computation, we can reduce the bit-size of the intermediate results by a factor $n$ for the price of using $k$ times as many arithmetic operations. We give details:\\
Notice that, at several places in the algorithm $\textsc{NewRefine}$, that is, in (\ref{Newtonmultipoint1}),~(\ref{Newtonmultipoint2}),~(\ref{Boundarymultipoint}), and in (\ref{Bisectionmultipoint}), we are free to choose an arbitrary point $m_i$ from a set $$\multipoint{m}{\delta}:=\set{m_i:=m+(i-\lceil k/2\rceil)\cdot \delta}{i=0,\ldots,2\cdot \lceil k/2\rceil}$$
consisting of $2\cdot\lceil k/2\rceil +1$ points that are clustered at $m$. Now, in order to keep the precision of the computations as low as possible, we aim to choose a point $m_i\in\multipoint{m}{\delta}$ for which $p(m_i)$ has a large absolute value. We introduce the following definition that has already been used in~\cite[Section 2.2]{DBLP:journals/corr/SagraloffM13} in a slightly modified form.
\begin{definition}\label{admissible point} For $\multipoint{m}{\delta}$ as above, we call a point $m^*\in \multipoint{m}{\delta}$ \emph{admissible with respect to $\multipoint{m}{\delta}$} (or just \emph{admissible} if there is no ambiguity) if $|p(m^*)|\ge \frac{1}{4}\cdot\max_{i}|p(m_{i})|$.
\end{definition}
\begin{lemma}\label{lem:apxmultipointeval}
Suppose that each point in $\multipoint{m}{\delta}$ has absolute value less than $2^{\tau+1}$ and that $\lambda:=\max_{i}|p(m_{i})|\neq 0$. Then, we can determine an admissible point $m^*\in \multipoint{m}{\delta}$ and an integer $t$ with
\[
2^{t-1}\le |p(m^{*})|\le \lambda \le 2^{t+1}
\]
with $\tilde{O}(k(n\tau+\log \max(\lambda^{-1},1)))$ many bit operations.
\end{lemma}
\begin{proof}
Using the same approach as in~\cite[Section 4]{qir-kerber-11} plus repeated squaring, we can evaluate $p$ at any of the $2\cdot\lceil k/2\rceil +1$ many points $x=m_i$ to an absolute error less than $2^{-K}$, with $K$ an arbitrary positive integer, in a number of bit operations bounded by $\tilde{O}(k\cdot (n\tau+K))$.
We can now compute an admissible point $m^*\in\multipoint{m}{\delta}$ as follows:
Consider $K=1,2,4,8,\ldots$ and approximate \emph{all values} $|p(m_{i})|$ to a precision of $K$ bits after the binary point until, for at least one $i$, we obtain an approximation $2^{t_{i}}$ with $t_{i}\in\mathbb{Z}$ and $2^{t_{i}-1}\le |p(m_{i})|\le 2^{t_{i}+1}$. Now, let $i_{0}$ be such that $t_{i_{0}}$ is maximal; then, it follows that $2^{t_{i_{0}}-1}\le \lambda\le 2^{t_{i_{0}}+1}$. Following this approach, we must stop for a $K$ with $K<2\cdot\max(1,\log \lambda^{-1})$. Since we double $K$ at most $O(\log\max(2,\log \lambda^{-1}))$ many times, the claim follows.
\end{proof}
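With exact arithmetic standing in for the precision-doubling loop of the lemma, the selection of an admissible point can be sketched as follows (the function name is ours):
\begin{verbatim}
from fractions import Fraction

def admissible_point(p, m, delta, k):
    # return a point of m[delta] at which |p| is within a factor 4 of
    # the maximum over the set; with exact values we may simply take
    # the maximizer (the lemma instead doubles the precision K until
    # one value |p(m_i)| is pinned down up to a factor 4)
    half = -(-k // 2)                        # ceil(k/2)
    pts = [Fraction(m) + (i - half) * Fraction(delta)
           for i in range(2 * half + 1)]
    vals = [abs(p(pt)) for pt in pts]
    i0 = max(range(len(pts)), key=vals.__getitem__)
    return pts[i0], vals[i0]
\end{verbatim}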
Now, in~(\ref{Newtonmultipoint1}),~(\ref{Newtonmultipoint2}),~(\ref{Boundarymultipoint}), and in (\ref{Bisectionmultipoint}) of \textsc{NewRefine}, we do not choose an arbitrary point from the corresponding set $\multipoint{m}{\delta}$ but an admissible point $m^*\in \multipoint{m}{\delta}$. We argue that, for each such $m^*$, we have $|p(m^*)|>2^{-O(n(\tau+\log n))}$: Let $I=(a,b)$ be the interval that is processed in the current iteration, then
$\min(|m^*-a|,|m^*-b|)>n\delta\ge\left(\sin\frac{\pi}{2n+2}\right)^{-1}\cdot\frac{\delta}{2}.$
Hence, the distance from $m^*$ to the boundary of the Obreshkoff lens $L_n$ of $I$ is larger than $\delta/2$ since the distance from an arbitrary point $x\in I=(a,b)$ to the boundary of $L_n$ is larger than $\min(|x-a|,|x-b|)\cdot \sin\frac{\pi}{2n+2}$; see~\cite[Lemma 5 and Figure 4.1]{Sagraloff12}. Since the Obreshkoff lens $L_n$ of the larger interval $I_0$ contains at most $k$ roots (counted with multiplicity), it follows that there exists at least one point $m_{i_0}\in \multipoint{m}{\delta}$ with $|\xi_j-m_{i_0}|\ge \delta/2$ for all (distinct) complex roots $\xi_j$ of $p$. Let $\xi_{j_0}$ be the root of $p$ that minimizes the distance to $m_{i_0}$. If $\xi_{j_0}=\xi$, then
\[
\frac{|p(m_{i_0})|}{|p'(\xi)|}=|m_{i_0}-\xi|\cdot\prod_{j\neq j_0} \left(\frac{|m_{i_0}-\xi_j|}{|\xi-\xi_j|}\right)^{\mu_j}\ge |m_{i_0}-\xi|\cdot 2^{-n+1}\ge \frac{\delta}{2^{n}},
\]
where $\mu_j$ denotes the multiplicity of $\xi_j$ as a root of $p$. Hence, from $\delta\ge 2^{-\lceil 5+\log n\rceil}\cdot\frac{w(I)}{N_I}=2^{-O(\log n+\tau+L)}$, we conclude that $|p(m_{i_0})|=2^{-O(n(\log n+\tau))}$ if $w(I)\ge 2^{-L}$, with $L:=128n\cdot(\log n+\tau)$. We are left to discuss the case $\xi_{j_0}\neq \xi$. Then,
\begin{align*}
\frac{|p(m_{i_0})|}{|p^{(\mu_{j_0})}(\xi_{j_0})|}&=|m_{i_0}-\xi_{j_0}|^{\mu_{j_0}}\cdot\prod_{j\neq j_0} \left(\frac{|m_{i_0}-\xi_j|}{|\xi_{j_0}-\xi_j|}\right)^{\mu_j}\ge \frac{|m_{i_0}-\xi_{j_0}|^{\mu_{j_0}}}{2^{n-1}}\\
&\ge 2^{-2n}\cdot \delta^{\mu_{j_0}}\ge 2^{-2n-n(6+\log n)}\cdot \left(\frac{w(I)}{N_I}\right)^{\mu_{j_0}}.
\end{align*}
Trivially, we have $w(I)\le 2\cdot w(I)/\sqrt{N_I}$ for $N_I=4$. If $N_I>4$, then there must have been an iteration, where we replaced an isolating interval $J$ for $\xi$, with $I\subset J\subset (0,2^{\tau+1})$ by an interval $J'$, with $I\subset J'\subset J$ and $w(J')\le w(J)/\sqrt{N_I}$. Hence, in any case, we have $w(I)\le 2^{\tau+2}/\sqrt{N_I}$. This shows that
\begin{align*}
\left(\frac{w(I)}{N_I}\right)^{\mu_{j_0}}&\ge w(I)^{3\mu_{j_0}}\cdot 2^{-2\mu_{j_0}\cdot(\tau+2)}\ge |\xi-\xi_{j_0}|^{3\mu_{j_0}}\cdot 2^{-2n(\tau+2)},
\end{align*}
where the second to last inequality follows from the inequality $|\xi-\xi_{j_0}|\le |\xi-m_{i_0}|+|m_{i_0}-\xi_{j_0}|\le w(I)+|m_{i_0}-\xi_{j_0}|$. Then, Theorem~\ref{evalbound} (with $F:=p\cdot 1$) implies that $\left(\frac{w(I)}{N_I}\right)^{\mu_{j_0}}=2^{-O(n(\log n+\tau))}$.
In summary, we conclude that, in~(\ref{Newtonmultipoint1}),~(\ref{Newtonmultipoint2}),~(\ref{Boundarymultipoint}), and in (\ref{Bisectionmultipoint}), we can choose points $m_i\in \multipoint{m}{\delta}$ with $|p(m_i)|=2^{-O(n(\log n+\tau))}$
for the cost of $\tilde{O}(k\cdot n\tau)$ bit operations. Notice that, for the same cost, we can also determine the sign of $p$ at each of these points, and thus, the considered sign evaluations in one iteration need $\tilde{O}(k\cdot n\tau)$ bit operations.
It remains to bound the cost for computing the approximations $\tilde{\lambda}_{j_1,j_2}$ as defined in (\ref{def:lambda}) in \textsc{NewRefine}. Notice that, for checking the inequalities in (\ref{condition1}) and in (\ref{condition2}), it suffices to approximate the values $p(\xi^{*}_{j_{1}})$, $p(\xi^{*}_{j_{2}})$, $p'(\xi^{*}_{j_{1}})$, and
$p'(\xi^{*}_{j_{2}})$ to $$-\log \min(|p(\xi^{*}_{j_{1}})|,|p(\xi^{*}_{j_{2}})|)+O(\log (n/w(I)))=O(n(\log n+\tau))$$ bits after the binary point. Again, the cost for computing such approximations is bounded by $\tilde{O}(k\cdot n\tau)$ bit operations. Then, the same complexity bound also holds for the computation of $\tilde{\lambda}_{j_1,j_2}$. Namely, since $v_{j_1}-v_{j_2}$ has absolute value larger than $w(I)/(8n)$, and $v_{j_1}$ as well as $v_{j_2}$ have absolute value smaller than $2^{O(n\tau)}$, it suffices to carry out all operations with a precision of $O(\log N_I+\log w(I)^{-1}+n\tau)$ bits after the binary point. We summarize:
\begin{lemma}\label{refinement:bitcomplexity}
Let $p\in\mathbb{Z}[x]$ be a $k$-nomial of magnitude $(n,\tau)$, and let $I_j=(a_j,b_j)\subset \mathcal{I}=(0,2^{\tau+1})$, with $j=1,\ldots,k_0$, be isolating intervals for all simple real roots of $p$. Suppose that $a_j,b_j\in\mathbb{Q}$ and $\min_{j}\min(|p(a_j)|,|p(b_j)|)>2^{-L}$, with $L:=128n\cdot(\log n+\tau)$. Then, \textsc{NewRefine} refines all intervals $I_j$ to a width less than $2^{-L}$ with a number of bit operations bounded by
$
\tilde{O}(k^3\cdot n\tau).$
For each interval $I_j'=(a_j',b_j')$ returned by \textsc{NewRefine}, we have $a_j',b_j'\in\mathbb{Q}$ and $\min(|p(a_j')|,|p(b_j')|)>2^{-L}$.
\end{lemma}
\begin{proof}
The result is an almost immediate consequence of our considerations above. Notice that the condition on the endpoints of the initial intervals $I_j$ guarantees that we only evaluate the sign of $p$ at points that are either admissible points $m^*\in\multipoint{m}{\delta}$ or endpoints of one of the intervals $I_j$. Hence, each such sign evaluation needs $\tilde{O}(k\cdot n\tau)$ bit operations. For the computation of an admissible point, we need $\tilde{O}(k^2\cdot n\tau)$ many bit operations, which is due to the fact that we perform approximate computation of $p$ at $O(k)$ many points in parallel. From Theorem~\ref{thm:arithmeticcomplexity}, we conclude that the number of iterations in total is bounded by $O(k\cdot \log (n\tau))$, and thus, the claimed bound on the bit complexity follows.
\end{proof}
For our root isolation algorithm as proposed in Section~\ref{sec:algorithm}, the above result implies that we can isolate all
real roots of $p$ with a number of bit operations bounded by $\tilde{O}(k^4\cdot n(\tau+k))$. Namely, in each step of the
recursion, we first have to evaluate some $k$-nomial $p_{j-1}$ of magnitude $(n,\tau+k\log n)$ at arbitrary points
$x_i\in I_{j,i}$, where $I_{j,i}=(a_{j,i},b_{j,i})$ are isolating intervals for the real roots of $p_j$. Since it suffices to
compute approximations of the values $p_{j-1}(x_i)$ to $L/2$ bits after the binary point, the cost for all evaluations is
bounded by $\tilde{O}(k^2\cdot n\tau)$ bit operations. In a second step, we have to refine all isolating intervals $I_{j,i}'$
for the simple real roots of $p_{j}$ to a width less than $2^{-L}$, with $L=128n(\log n+\tau)$. Each endpoint $e$ of an
arbitrary $I_{j,i}'$ is an endpoint of one of the intervals $I_{j,i}$, that is, $e=a_{j,i}$ or $e=b_{j,i}$ for some $i$. Hence,
by induction, it follows that $|p_{j}(e)|> 2^{-L}$. Then, from Lemma~\ref{refinement:bitcomplexity}, we conclude that refining all
intervals to a width less than $2^{-L}$ needs $\tilde{O}(k^4\cdot n(\tau+k))$ bit operations.
\begin{theorem}\label{maintheorem2}
Let $p\in\mathbb{Z}[x]$ be a $k$-nomial of magnitude $(n,\tau)$. Then, computing isolating intervals with rational endpoints for all real roots of $p$ needs $\tilde{O}(k^3\cdot n\tau)$ bit operations. For $k=O(\log^C (n\tau))$, with $C$ a fixed positive constant, the latter bound becomes $\tilde{O}(n\tau)$, which is optimal up to logarithmic factors.
\end{theorem}
\begin{proof}
It remains to prove the last claim. For this, consider the polynomial
$
p(x)=x^n-(2^{2\tau}\cdot x^2-a)^2,
$
where $a>1$ is a fixed constant integer, and $n,\tau\in\mathbb{N}_{\ge 8}$. Then, $p$ is a $4$-nomial of magnitude $(n,O(\tau))$, and $p$ has two positive roots $x_1$ and $x_2$, with $x_1<x_2$ and $|x_i-\sqrt{a}\cdot 2^{-\tau}|<2^{-\Omega(n\tau)}$ for $i=1,2$. Namely, let $f(x):=(2^{2\tau}\cdot x^2-a)^2$ be a polynomial that has two roots of multiplicity $2$ at $x=\pm\sqrt{a}\cdot 2^{-\tau}$, and let $g(x):=x^n$. Then, a simple computation shows that $|f(z)|>|g(z)|$ for all points $z$ on the boundary of the disk $\Delta\subset\mathbb{C}$ of radius $\epsilon:=2^{-(n-2)(\tau-2)}\cdot a^{n/2-1}$ centered at $\sqrt{a}\cdot 2^{-\tau}$. Hence, Rouch\'e's Theorem implies that $f$ and $p=g-f$ have the same number of roots in $\Delta$. In addition, both roots are real.
We conclude that, for any isolating interval $I_1=(a_1,b_1)$ for $x_1$, we must have $|b_1-\sqrt{a}\cdot 2^{-\tau}|<2^{-\Omega(n\tau)}$. Now, let $b_1=r/s$ with co-prime integers $r$ and $s$; then, we must have $\log s=\Omega(n\tau)$; see Lemma~\ref{rationalbound} in the Appendix. Hence, the binary representation of the endpoints of $I_1$ already needs $\Omega(n\tau)$ bits, which proves our claim.
\end{proof}
\section{Conclusion}
In this paper, we give the first algorithm that computes the real roots of a sparse polynomial $p\in\mathbb{Z}[x]$ in a number of arithmetic operations over $\mathbb{Q}$ that is polynomial in the input size of the sparse representation of $p$. In addition, for sparse-enough polynomials, the algorithm achieves a near-optimal bound for the bit complexity of the problem of isolating all real roots. The main ingredients of our algorithm are evaluation and separation bounds as well as an efficient method for refining an isolating interval of a simple real root. So far, our algorithm has been formulated in the easiest possible way, with the primary goal of achieving good theoretical complexity bounds, at the price of a probably worse efficiency in practice. Hence, our first research goal is to provide an efficient implementation of our algorithm that integrates additional steps in order to improve its practical efficiency. Our second research goal is to extend our algorithm to (square-free) polynomials with arbitrary real coefficients. For this, it seems reasonable to combine our algorithm with the root isolation method from~\cite{DBLP:journals/corr/SagraloffM13}. Hopefully, this allows us to derive improved (bit) complexity bounds for sparse polynomials that can be stated in terms of the geometry of the roots (similar to the bounds as provided in~\cite[Theorem~31]{DBLP:journals/corr/SagraloffM13} or~\cite[Theorem~3]{MSW-rootfinding2013}) rather than in terms of the input size. For polynomials $p\in\mathbb{R}[x]$ that may have multiple real roots, the situation becomes more complicated. Namely, since no a priori separation bound is known to decide whether a certain root has multiplicity larger than one, we cannot distinguish between a real root of multiplicity $m>1$ and a cluster of $m$ (not necessarily real) roots. Hence, it remains an open research question whether the computation of a (reasonably good) separation bound has polynomial arithmetic complexity.
\section{Introduction}
Retinal fundus image analysis is crucial for ophthalmologists dealing with the medical diagnosis, screening and treatment of ophthalmologic diseases. The morphology of the optic disk (OD) and optic cup (OC) is an important structural indicator for assessing the presence and severity of retinal diseases, such as diabetic retinopathy, hypertension, glaucoma, hemorrhages, vein occlusion, and neovascularization \cite{macgillivray2014retinal}. The OD shape and color are similar to those of hard exudates, which are one of the main signs of diabetic retinopathy. Being able to detect the OD therefore significantly decreases the difficulty of detecting hard exudates in the fundus image \cite{saleh2018learning}.
OD and OC segmentation is the first step toward a meaningful investigation of retinal images that helps in treating eye diseases \cite{almazroa2015optic}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.35\textwidth, height=0.2\textheight]{optic_disc.pdf}
\caption{Relevant structures in a fundus image.}
\label{fig:figD}
\end{figure}
In color fundus images, the OC appears as a bright yellowish oval region, whereas the OD is darker. Fig.~\ref{fig:figD} shows an example of a color retinal fundus image with the key anatomical structures denoted. For ophthalmologists and eye care specialists, automated segmentation and analysis of the optic disc play an important role in diagnosing and treating retinal diseases.
Numerous methods have been proposed to detect and segment the OD and OC. Segmenting the OC region from fundus images is a challenge due to its low-contrast boundary.
In \cite{wong2008level}, an automatic OC segmentation method based on a variational level set was proposed. For the diagnosis of glaucoma, Chrastek et al.~\cite{chrastek2005automated} proposed an automated algorithm to segment the optic nerve head. They first removed the blood vessels using a distance-map algorithm and a morphological operation, and then used an anchored active contour model to segment the OC region.
With the widespread use of deep learning models for segmentation tasks, many methods based on convolutional neural networks (CNNs) have recently been proposed. An automatic OC and OD segmentation method based on a stack of deep U-Net models was proposed in \cite{al2018multiscale}; each model in the cascade refines the result of the previous one. In addition, \cite{fu2018joint} proposed a multi-scale deep model with a multi-level loss for segmenting the OD and OC regions in fundus images.
In this paper, we propose a joint retinal OD and OC segmentation model based on a U-Net (an encoder-decoder network) followed by a CNN that matches the features of the predicted and ground-truth images in order to obtain a segmentation closer to the correct one. This second CNN is conditioned on the color input image so as to learn statistically invariant features (texture, color, etc.) in addition to the shape information of the segmented image. Indeed, the second CNN encourages the generator to produce outputs that cannot be distinguished from the ground-truth ones.
The rest of the paper is organized as follows. Section 2 describes the methodology of the proposed model, Section 3 presents the experiments and results, and Section 4 concludes the paper.
\section{Material and Methods}
\subsection{REFUGE 2018: Dataset description}
All retinal fundus images were downloaded from the REFUGE 2018 challenge \footnote{https://refuge.grand-challenge.org/home/}. We participated in Task 2: optic disc and cup segmentation. The dataset is divided into two sets: training (400 images) and validation (400 images). The 400 images of the training set are originally in JPG format; the training set also includes the corresponding 400 ground-truth segmentations in BMP format. All images have a size of $2124\times 2056$ pixels. The validation set is used for on-line self-evaluation and for the final on-site challenge to rank the performance of the different research groups.
\label{subsec:massSegcGAN}
\begin{figure*}[htp]
\centering
\includegraphics[width=1.0\textwidth, height=0.4\textheight]{REFUGE_Flowchart.pdf}
\caption{Proposed segmentation architecture (generator).}
\label{fig:cGAN architecture}
\end{figure*}
To avoid overfitting, we applied data augmentation techniques, such as illumination changes, scaling and flipping, to make the data more diverse.
\subsection{Multi-Scale Feature Matching Segmentation Model }
The ideal CAD system should be able to segment the full image and hence automatically locate the OD and OC. However, this is a very difficult task due to the high similarity between the pixel distributions of the retinal OD and OC. Removing most of the non-ROI portions of the image therefore helps the model to learn the visual features, and the cropped regions provide a balanced proportion between the numbers of pixels of the three classes (OD, OC and background). Thus, an approximate frame around the OD and OC regions must be provided, after which the OD and OC segmentation can be very accurate. Accordingly, our approach is composed of two stages: detection and segmentation.
Firstly, we localize the optic disc and cup regions in an input fundus image using the Single Shot Detector (SSD) \cite{liu2016ssd}; a video visualization is shown here\footnote{\url{https://youtu.be/miCqw_2eclg}}. The detected Region of Interest (ROI) is then fed to the proposed segmentation model to segment the optic disk and cup areas in the input image. The proposed model consists of two successive networks. The first is a generator network used for segmenting the input image; it is an encoder-decoder network with skip connections (U-shape), detailed in Table \ref{table1}. The second is a simple CNN used for extracting multi-scale features from the predicted and ground-truth images.
The encoder network consists of 8 convolutional layers. The first layer uses $4\times 4$ convolutions to generate 32 feature maps, while the 8th layer generates 256 feature maps with a size of $1\times 1$. The weights of all eight layers are randomly initialized. Max pooling with a $2\times 2$ kernel and a stride of 2 is used for downsampling the feature maps. In all encoder layers, Leaky-ReLU non-linearities are used together with batch normalization to avoid overfitting and speed up the training process.
The decoder is structured in the same way as the encoder and includes 8 deconvolutional (i.e., transposed convolution) layers, but with the layer ordering reversed and with downsampling layers replaced by upsampling layers. The weights of the decoder layers are randomly initialized. All deconvolutional layers use ReLU activations except the final one, which uses a Tanh activation to produce the final optic disk and cup segmentation.
The subsequent CNN is composed of 4 convolutional and downsampling layers. The first layer generates 32 feature maps, and the 4th layer generates 256 feature maps with a size of $30\times 30$. All convolutions are $4\times 4$ spatial filters applied with a stride of $2$. Their weights are randomly initialized, and they use leaky-ReLU activations together with batch normalization.
\begin{table}[htp]
\centering
\caption{Architectural details of the proposed generator network}
\label{table1}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Layer Name & Layer Type & K, S, P & Input Size & Output Size & Layer Name & Layer Type & K, S, P & Input Size & Output Size \\ \hline
\multirow{8}{*}{Encoder} & \multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}CONV+\\ BN+\\ Leaky Relu\end{tabular}} & 4, 2, 1 & nx3x256x256 & nx32x128x128 & \multirow{7}{*}{Decoder} & \multirow{7}{*}{\begin{tabular}[c]{@{}c@{}}CT+\\ BN+\\ Relu\end{tabular}} & 4, 2, 1 & nx256x1x1 & nx256x2x2 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx32x128x128 & nx64x64x64 &  &  & 4, 2, 1 & nx512x2x2 & nx256x4x4 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx64x64x64 & nx128x32x32 &  &  & 4, 2, 1 & nx512x4x4 & nx256x8x8 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx128x32x32 & nx256x16x16 &  &  & 4, 2, 1 & nx512x8x8 & nx256x16x16 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx256x16x16 & nx256x8x8 &  &  & 4, 2, 1 & nx512x16x16 & nx128x32x32 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx256x8x8 & nx256x4x4 &  &  & 4, 2, 1 & nx256x32x32 & nx64x64x64 \\ \cline{3-5} \cline{8-10}
 &  & 4, 2, 1 & nx256x4x4 & nx256x2x2 &  &  & 4, 2, 1 & nx128x64x64 & nx32x128x128 \\ \cline{3-10}
 &  & 4, 2, 1 & nx256x2x2 & nx256x1x1 & Output & Tanh & 4, 2, 1 & nx64x128x128 & nx1x256x256 \\ \hline
\end{tabular}%
}
\end{table}
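For concreteness, we include a minimal PyTorch sketch of a generator consistent with Table~\ref{table1}; the helper names and the exact skip-connection wiring are our own illustrative assumptions rather than released code.
\begin{verbatim}
# Illustrative PyTorch sketch of the generator of Table 1 (our
# reconstruction; helper names and skip wiring are assumptions).
import torch
import torch.nn as nn

def down(cin, cout):   # 4x4 conv, stride 2, pad 1: halves the spatial size
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, True))

def up(cin, cout):     # 4x4 transposed conv: doubles the spatial size
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.ReLU(True))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        ch = [3, 32, 64, 128, 256, 256, 256, 256, 256]        # encoder
        self.enc = nn.ModuleList([down(a, b) for a, b in zip(ch, ch[1:])])
        dec = [(256, 256), (512, 256), (512, 256), (512, 256),
               (512, 128), (256, 64), (128, 32)]              # decoder
        self.dec = nn.ModuleList([up(a, b) for a, b in dec])
        self.out = nn.Sequential(nn.ConvTranspose2d(64, 1, 4, 2, 1),
                                 nn.Tanh())

    def forward(self, x):
        skips = []
        for e in self.enc:                 # 256x256 -> ... -> 1x1
            x = e(x); skips.append(x)
        x = self.dec[0](skips[-1])
        for d, s in zip(self.dec[1:], reversed(skips[1:-1])):
            x = d(torch.cat([x, s], 1))    # skips double the channel count
        return self.out(torch.cat([x, skips[0]], 1))

# batch size >= 2 so batch norm is well defined at the 1x1 bottleneck
y = Generator()(torch.randn(2, 3, 256, 256))  # y.shape == (2, 1, 256, 256)
\end{verbatim}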
The proposed model is trained with a loss function combining a content loss and a multi-scale feature-matching loss. The content loss follows the classical approach in which the predicted image is compared pixel-wise with the corresponding ground truth. In turn, the feature-matching loss is based on features extracted from the ground-truth and predicted images, with the input image serving as a condition for color and texture features.
Given an input $x$ (a ROI of a fundus image), the U-Net network $G$ produces the generated image $\hat{y}$ as a vector of per-pixel probabilities. The content loss $\ell_{dice}(G)$ is computed between $\hat{y}$ and its corresponding ground truth $y$. To maximize the intersection between the two images, the Dice coefficient is used as the content loss of our model, defined as:
\begin{equation}
\ell_{dice}(y,G(x))= 1- {\rm dice}(y, \hat{y}) = 1- \frac{2\,|y \cdot \hat{y}|}{|y|^2 + | \hat{y} |^2}. \label{eq:dice}
\end{equation}
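A direct implementation of this content loss could read as follows (our illustrative sketch; the small constant \texttt{eps} guarding against division by zero is our addition):
\begin{verbatim}
# Soft Dice loss as above; y and y_hat are tensors of per-pixel values.
def dice_loss(y, y_hat, eps=1e-7):
    y, y_hat = y.flatten(1), y_hat.flatten(1)        # (batch, pixels)
    inter = (y * y_hat).sum(1)
    denom = (y**2).sum(1) + (y_hat**2).sum(1)
    return (1.0 - 2.0 * inter / (denom + eps)).mean()
\end{verbatim}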
In turn, the multi-scale feature-matching (mfm) loss is also based on the Dice loss, but it is computed between the features extracted at each layer, comparing the feature maps obtained from the ground-truth and the predicted image at multiple scales. This error can be defined as:
\begin{equation}
\ell_{mfm} (x, y, G(x))= \sum_{i=1}^{N} \sum_{c=1}^{H} \big(1- {\rm dice}(Y_{ci}, \hat{Y}_{ci})\big),
\label{eq:mfm}
\end{equation}
where $H$ is the number of channels per feature map, $N$ is the number of layers in the CNN, $Y_{ci}$ are the feature maps computed from the ground truth $y$, and $\hat{Y}_{ci}$ are the feature maps computed from the predicted image $\hat{y}$.
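A possible realization of this loss is sketched below (ours); we assume that \texttt{feat\_net} returns the list of its $N$ per-layer feature maps and that the conditioning on the input $x$ is realized by channel concatenation:
\begin{verbatim}
# Multi-scale feature-matching loss; reuses dice_loss from the sketch above.
def mfm_loss(feat_net, x, y, y_hat):
    feats_gt   = feat_net(torch.cat([x, y], 1))      # condition on input x
    feats_pred = feat_net(torch.cat([x, y_hat], 1))
    loss = 0.0
    for Y, Y_hat in zip(feats_gt, feats_pred):       # N layers
        for c in range(Y.shape[1]):                  # H channels per layer
            loss = loss + dice_loss(Y[:, c:c+1], Y_hat[:, c:c+1])
    return loss
\end{verbatim}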
Thus, the final loss function during training is formulated as:
\begin{equation}
\ell_{T}=\ell_{mfm}(x,y,G(x))+ \lambda\, \ell_{dice}(y,G(x)). \label{eq:total}
\end{equation}
The loss $\ell_{T}$ is minimized during back-propagation through the network using the Adam optimizer.
\section{Experiments and Results}
In the experiments, we used a 64-bit Intel i7-6700 3.40 GHz CPU with 16GB of RAM and an NVIDIA GTX 1070 GPU with 8GB of video RAM, running the Ubuntu 16.04 Linux operating system. We used the PyTorch\footnote{\url{https://pytorch.org/}} neural network library to implement the proposed model.
We used 360 images for training, and 40 images along with their ground truth were used to validate our model. During training, we tested different learning rates and loss optimizers (SGD, AdaGrad, Adadelta, RMSProp, Adam), finding Adam with $\beta_1 = 0.5$, $\beta_2 = 0.999$, an initial learning rate of 0.0002 and a batch size of 8 to be the best combination. For $\lambda$, we found experimentally that 150 is the most suitable value. Finally, the results were obtained by training the generator and the CNN multi-scale feature extractor from scratch for 10 epochs.
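Putting the pieces together, one training step with these hyperparameters can be sketched as follows (\texttt{gen} and \texttt{feat\_net} denote the generator and the feature-extraction CNN of the previous sketches; the wiring is our assumption):
\begin{verbatim}
# One illustrative training step: total loss above minimized with Adam
# (lr = 2e-4, betas = (0.5, 0.999), lambda = 150, batch size 8).
opt = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
lam = 150.0

def train_step(x, y):
    opt.zero_grad()
    y_hat = gen(x)
    loss = mfm_loss(feat_net, x, y, y_hat) + lam * dice_loss(y, y_hat)
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}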
\label{subsec:output}
\begin{figure*}[htp]
\centering
\includegraphics[width=1.0 \textwidth, height=0.4\textheight]{REFUGE_output.pdf}
\caption{Segmentation output of the OD and OC from our proposed segmentation model.}
\label{fig:output}
\end{figure*}
In Table \ref{table2}, the performance of the proposed model is shown in terms of the Dice coefficient for OD and OC segmentation and the MAE of the cup-to-disc ratio (CDR). Note that the result shown in Table \ref{table2} is from our second-to-last submission (the result of our last submission, which we believe will outperform the current one, was not provided by the organizers).
The main advantage of the proposed model is that it takes only 10 epochs to train and efficiently segments the OD and OC in the test data.
Fig.~\ref{fig:output} and the video visualization\footnote{\url{https://youtu.be/_aIwQphCDeQ}} show the OD and OC segmentation output of the proposed model on fundus images. The first row shows input images and the second row the corresponding segmented images (OD in gray, OC in black, and background in white).
\begin{table}[htp]
\centering
\caption{Optic disc and cup segmentation Dice coefficients and MAE of the CDR on the REFUGE challenge dataset of 400 validation images}
\label{table2}
\scalebox{1.0}{
\begin{tabular}{|c|c|}
\hline
Metric & Value \\ \hline \hline
Optic Cup (Dice) & ~$ 0.8341 $ \\ \hline
Optic Disc (Dice) & ~$ 0.9340 $ \\ \hline
CDR (MAE) & ~$ 0.0605 $ \\ \hline
\end{tabular}
}
\end{table}
\section{Conclusion}
In this work, we proposed a U-Net followed by a multi-scale feature-matching network to segment the OD and OC in fundus images. A Dice loss is used to match the features of the predicted and ground-truth images by minimizing the error at each layer. The proposed model properly distinguishes between the OD and OC classes, and its segmentation output accurately preserves the boundaries of the two regions.
\bibliographystyle{splncs}
\section{Introduction}
{\revB{Wave propagation simulations, governed by the Helmholtz equation, in bounded heterogeneous and unbounded homogenous media are fundamental for numerous applications~\cite{KressColton,Ihlenburg:1998,nedlec:book}.}}
{\revB{Finite element methods (FEM) are efficient for simulating the Helmholtz equation in a bounded heterogeneous medium, say, $\Omega_0\subset \mathbb{R}^m$ ($m=2,3$). The standard (non-coercive) variational formulation of the variable coefficient Helmholtz equation in $H^1(\Omega_0)$~\cite{Ihlenburg:1998} has been widely used for developing and analyzing the sign-indefinite FEM,
see for example~\cite{BGP, het2, het3, mg2018, het7, het5}. The open problem of developing a coercive variational formulation for the heterogeneous Helmholtz model was solved recently in~\cite{mg2019}, and an associated preconditioned sign-definite high-order FEM was also established using direct and domain decomposition methods in~\cite{mg2019}.}
{\revB{For a large class of applications the wave propagation occurs in the bounded heterogeneous medium and also in its complement,
$\mathbb{R}^m \setminus \Omega_0$, the exterior unbounded homogeneous medium. Using the fundamental solution, the constant coefficient Helmholtz equation exterior to $\Omega_0$ can be reformulated as an integral equation (IE) on the boundary of $\Omega_0$.
Algorithms for simulating the boundary IE (BIE) are known as boundary element methods (BEM). Several coercive and non-coercive BIE reformulations~\cite{KressColton,nedlec:book} of the exterior Helmholtz model have been used to develop algorithms for the exterior homogeneous Helmholtz models, see for example the acoustic BEM survey articles~\cite{bem-eng-sur, bem-math-sur}, respectively, by mathematical and engineering researchers, each with over 400 references.}}
The exterior wave propagation BEM models lead to dense complex algebraic systems,
and the standard variational formulation based interior wave FEM models lead to sparse complex systems with their eigenvalues
\revB{in} the
left half of the complex plane~\cite{sdparti, Moiola}. Developing efficient preconditioned iterative solvers for such
systems \revB{has} also dominated research activities over the last two decades~\cite{gander_zhang}, in conjunction
with efficient implementations using multigrid and domain decomposition techniques,
see~\cite{mg2017, mg2018} and references therein.
For applications that require solving both the interior heterogenous and exterior homogeneous problems,
various couplings of the FEM and BEM algorithms with appropriate conditions on {\em polygonal interfaces} have also been investigated in the literature\cite{BrJo:1979,MR974843,BrJoNe:1978}. The review article~\cite{sayas} describes some theoretical validations of the coupling approaches considered in the earlier literature
and delicate choices of the coupling interface. The coupling methods in~\cite{BrJo:1979,MR974843,BrJoNe:1978, HanNew, sayas} lead to very large algebraic systems with both dense and sparse structures. For wave propagation models, given the complexity involved in even separately solving the FEM and BEM algebraic systems, it is efficient
to avoid large combined dense and sparse structured systems arising from the coupling methods in~\cite{BrJo:1979,MR974843,BrJoNe:1978, HanNew, sayas}.
Such complicated-structured coupled large-scale systems can be avoided, for the Helmholtz PDE interior and exterior
problems, using the approach proposed in~\cite{Kirsch2} and recently further explored in~\cite{GaMor:2016} using high-order elements for a class of applications with complex heterogeneous structures.
The FEM-BEM algorithms in~\cite{Kirsch2, GaMor:2016} are based on the idea of using a non-overlapping {\em smooth interface} to couple the interior and exterior solutions. As described in~\cite[Section~6]{GaMor:2016},
several open mathematical analysis problems remain to be solved in the coupling
and FEM-BEM framework of~\cite{Kirsch2, GaMor:2016}.
The choice of smooth interface in the
FEM-BEM algorithms of~\cite{Kirsch2, GaMor:2016} is crucial because the methods require solving
several interior and exterior wave problems to setup the interface condition. In particular, the number of FEM and BEM problems
to be solved is twice \revAB{the number of} degrees of freedom required to approximate the unknown interface function.
The interface function can be approximated by a few degrees of freedom only on smooth interfaces.
Efficient spectrally accurate BEM algorithms have been developed for simulating scattered waves exterior
to smooth boundaries in two and three dimensional domains~\cite{Bds2013,377360, KressColton, ganesh:high-order}.
However for standard interior FEM algorithms, it is desirable to have simple polygonal/polyhedral boundaries, and
in particular those with right angles, which
\revB{facilitate the} development and implementation of high-order FEM algorithms.
To this end, we develop an equivalent framework for the heterogeneous and unbounded
region wave propagation model with two artificial interfaces. In particular, our novel FEM-BEM framework is based on an interior smooth interface
$\Gamma$ for simulating scattered exterior waves using a spectrally accurate Nystr\"om BEM, and an exterior simple polygonal/polyhedral interface $\Sigma$ for the efficient high-order FEM simulation of the absorbed interior waves. In Figure~\ref{fig:01}, we
sketch the resulting overlapped decomposition of a heterogeneous and unbounded medium in which the absorbed and scattered waves are
induced by an input incident wave $u^{\rm inc}$.
\begin{figure}[!ht]
\centerline{\includegraphics[width=0.38\textwidth]{domain01June2019.pdf}}
\vspace{-0.1in}
\caption{\label{fig:01} A model configuration with an input incident wave $u^{\rm inc}$ impinging on a heterogeneous medium $\Omega_0$.
The artificial boundaries in our decomposition framework for the auxiliary bounded (FEM) and unbounded (BEM) models are $\Sigma$ and $\Gamma$, respectively. The bounded domain for the FEM is $\Omega_2$ (with boundary $\Sigma$), and the unbounded region for the BEM is $\mathbb{R}^m \setminus \overline{\Omega}_1$
(exterior to the smooth interface $\Gamma$).
The domain $\Omega_1$ (with boundary $\Gamma$) is chosen so that $\overline{\Omega}_0\subset \Omega_1\subset\Omega_2$, and the overlapping region in the framework, to match the FEM and BEM solutions, is $(\mathbb{R}^m \setminus \overline{\Omega}_1)\cap \Omega_2$.}
\end{figure}
The decomposition facilitates the application of efficient high-order FEM algorithms
in the interior polygonal/polyhedral domain $\Omega_2$, that contains
the heterogeneous region $\Omega_0 \subset \overline{\Omega}_1$.
The unbounded exterior region $\mathbb{R}^m \setminus \overline{\Omega}_1$ does not include
the heterogeneity and has a smooth boundary $\Gamma$.
It therefore supports spectrally accurate BEM algorithms to simulate exterior scattered waves, and also exactly preserves the radiation condition (RC), even in the computational model.
In addition, the decomposition framework provides an analytical integral representation of the far-field using the scattered field, and hence
our high-order FEM-BEM model {provides} relatively accurate approximations of the far-field arising
from the heterogeneous model. For inverse wave models,
accurate modeling of the far-field plays a crucial role in the identification of unknown
wave propagation configuration properties from far-field measurements~\cite{bagheri,KressColton,gh:bayesian}.
Our approach in this article is related to some ideas presented in \cite{CeDomSay:2004,DomSay:2007,MR2292079}.
The choice of two artificial boundaries leads to two bounded domains $\overline{\Omega}_0\subset\Omega_1\subset\Omega_2$ and an overlapping region between $\Omega_1^{\rm c} =\mathbb{R}^m\setminus \overline{\Omega}_1$ and $\Omega_2$. We prove that, under appropriate restrictions of
the scattered and absorbed fields in the overlapping region $\Omega_{12} (:= \Omega_1^{\rm c} \cap \Omega_2) $, our decomposed model is equivalent to the original Helmholtz model in the full space $\mathbb{R}^m$.
The unknowns in our decomposed framework, which exactly incorporates the RC, are: (a) the trace of the scattered wave on $\Gamma$
that will yield the solution in the unbounded domain $\Omega_1^{\rm c}$, through a boundary layer potential ansatz of the scattered field; (b) the trace of the total wave in the boundary $\Sigma$ of $\Omega_2$, that will provide the Dirichlet data to determine the total absorbed wave in the bounded domain
$\Omega_2$. These properties will play a crucial role in designing and implementing our high-order FEM-BEM algorithm.
The FEM-BEM numerical algorithm can be discerned at this point: It comprises approximating the absorbed wave field in a finite
dimensional space using an FEM spline ansatz in the bounded domain $\Omega_2$, and by a BEM ansatz for the scattered field in the unbounded region,
exterior to $\Gamma$, and these fields are constrained to (numerically) coincide on the overlapping domain $\Omega_{12}$, and hence on the
interface boundaries. Since these artificial boundaries can be freely chosen, we can ensure a bounded simple polygonal/polyhedral domain, more suitable for high-order FEM, and an unbounded region with a smooth boundary for spectrally accurate BEM.
In particular, the framework brings the best of the two numerical (FEM and BEM) worlds to compute the fields accurately for the
full heterogeneous model problem, without the need to truncate the unbounded wave propagation region and approximate the RC.
The algorithmic construction and solving of the interface linear system, which determines key unknowns of the model on the interface boundaries (that is, the ansatz coefficients of the trace of the FEM and BEM solutions), is challenging.
However, important properties of the continuous problem, such as being a compact perturbation of the identity, are inherited by the numerical scheme.
\revA{Consequently, the system of linear equations for the interface unknowns is very well conditioned}.
Such properties, in conjunction with a cheaper matrix-vector multiplication for the underlying matrix,
support the use of iterative solvers such as GMRES \cite{MR848568,MR1990645} to compute the ansatz coefficients.
Major computational aspects of our high-order FEM and BEM discretizations in the framework
are independent and hence the underlying linear systems can be solved, {\em a priori}, by iterative Krylov methods.
\revA{We show that the number of GMRES iterations, to solve the interface system, is independent of various levels of discretization for a chosen frequency of the model.
For increasing frequencies, we also demonstrate that the growth of the number of GMRES iterations is lower than the frequency growth.}
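As a simple illustration of this matrix-free use of GMRES, an interface system of the form $({\cal I}-{\cal K})f=b$ can be solved along the following lines (our sketch; \texttt{apply\_K} is a placeholder for the discrete coupled operators introduced in Section~\ref{sec:fem-bem}):
\begin{verbatim}
# Matrix-free GMRES solve of (I - K) f = rhs (illustrative; apply_K is a
# placeholder implementing the action of the discrete coupled operators).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_interface(apply_K, rhs):
    A = LinearOperator((rhs.size, rhs.size), dtype=complex,
                       matvec=lambda f: f - apply_K(f))
    f, info = gmres(A, rhs, restart=200, maxiter=200)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return f
\end{verbatim}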
Instead of using an iterative scheme for the interface system arising in our algorithm,
one may also consider the construction and storage of the matrix and
a direct solver for the system.
The advantage of the latter is that the interface problem matrix can be reused for numerous incident input waves
that occur in many practical applications, for example,
to compute the monostatic cross sections, and also for developing appropriate reduced order model (ROM)~\cite{tmatrom} versions of our algorithm. The matrix arising {\revA{ in our}} interface system is relatively small because of the spectral accuracy of the BEM algorithm, and because the system involves only unknowns on
the artificial interface boundaries.
Hence post-processing of the computed fields, such as for the evaluation of the far-field, can be done quickly and efficiently.
The far-field output also plays a crucial role in developing stable ROMs for wave propagation models~\cite{ghh:2011,tmatrom}.
The paper is organized as follows. In Section~\ref{sec:decomposition} we present the decomposition framework and prove that, under very weak assumptions, the decomposition is well-posed and is equivalent to the full heterogeneous and unbounded medium wave propagation model. In
Section~\ref{sec:fem-bem} we present a numerical discretization for the two dimensional case, combining high-order finite elements with spectrally accurate convergent boundary elements~\cite{Kress:2014} and describe the algebraic and implementation details.
In Section~\ref{sec:num-exp} we demonstrate the efficiency of the FEM-BEM algorithm for simulating wave propagation in two distinct classes of (smooth and non-smooth) heterogenous media.
\section{Decomposition framework and well-posedness analysis}\label{sec:decomposition}
Let $\Omega_0\subset \mathbb{R}^m$, $m=2,3$, be a bounded domain.
The ratio of the speed of wave propagation inside the heterogeneous (and not necessarily connected) region $\Omega_0$ and on its free-space exterior $\Omega_0^{\rm c} := \mathbb{R}^m \setminus \overline{\Omega}_0$ is described
through a \revB{refractive index function $n$} that we assume in this article to be piecewise smooth with $1-n$ having compact support in $\overline{\Omega}_0$ (i.e., $n|_{\Omega_0^{\rm c}}\equiv 1$).}
The main focus of this article is to study the wave propagation in $\mathbb{R}^m$, induced by the impinging of an
incident wave $u^{\rm inc}$, say, a plane wave with wavenumber $k >0$.
More precisely, the continuous wave propagation model is to find
the total field $u (:= u^s + u^{\rm inc }) \in H^1_{\rm loc}(\mathbb{R}^m)$ that satisfies the Helmholtz equation and
the Sommerfeld RC:
\begin{equation}
\label{eq:theproblem}
\left|
\begin{array}{rcl}
\Delta u + k^2 n^2\:u &=&0,\quad \text{in }\mathbb{R}^m,\\
\partial_r u^s-\mathrm{i}k{u}^s&=&o(|r|^{-\frac{m-1}2}),\quad \text{as }|r|\to\infty.
\end{array}
\right.
\end{equation}
It is well known that~\eqref{eq:theproblem} is uniquely solvable~\cite{Kress:2014}.
(Later in this section, we introduce the classical Sobolev spaces $H^s$, for $s \geq 0$, with appropriate norms.)
\subsection{A decomposition framework}
The heterogeneous-homogeneous model problem \eqref{eq:theproblem} is decomposed by introducing two
artificial curves/surfaces $\Gamma$ and $\Sigma$ with interior $\Omega_1$ and $\Omega_2$ respectively satisfying $\overline{\Omega}_0\subset \Omega_1\subset \overline{\Omega}_1\subset\Omega_2$. We assume from now on that $\Gamma$ is smooth and $\Sigma$ is a polygonal/polyhedral boundary. A sketch of the different domains is displayed in Figure \ref{fig:01}. Henceforth,
$\Omega_i^{\rm c}:=\mathbb{R}^m\setminus\overline{\Omega}_i, ~i = 0,1,2$.
We introduce the following decomposed heterogeneous and homogeneous media auxiliary models:
\begin{itemize}
\item For a given function $f_\Sigma^{\rm inp} \in H^{1/2}(\Sigma)$, we seek a propagating wave field $w$ so that $w$ and its trace $\gamma_\Sigma w$ on the boundary $\Sigma$ satisfy
\begin{equation}
\label{eq:FEM:0}
\left|
\begin{array}{rcl}
\Delta w + k^2 n^2\:w &=&0,\quad \text{in }\Omega_2,\\
\gamma_\Sigma w &=&f_\Sigma^{\rm inp}.
\end{array}
\right.
\end{equation}
Throughout the article, we assume that this interior problem is uniquely solvable. We introduce the following operator notation for the heterogeneous auxiliary model: For any Lipschitz $m$- or $(m-1)$-dimensional (domain or manifold) $D\subset \Omega_2$, we define the solution operator
$\mathrm{K}_{D \Sigma } $ associated with the auxiliary model~\eqref{eq:FEM:0} as
\begin{equation}
\label{eq:FEM-oper:0}
\mathrm{K}_{D \Sigma } f_\Sigma^{\rm inp} :=w|_D.
\end{equation}
Two cases will be of particular interest for us: $\mathrm{K}_{\Omega_{2} \Sigma} \revB{f_\Sigma^{\rm inp}}$, which is nothing but $w$ satisfying~\eqref{eq:FEM:0}, and
$\mathrm{K}_{\Gamma\Sigma} f_\Sigma^{\rm inp}=\gamma_\Gamma w$, the trace of the solution ${w}$ of~\eqref{eq:FEM:0} on $\Gamma{\subset\Omega_2}$.
\item In the exterior unbounded homogeneous medium $\Omega_1^{\rm c}=\mathbb{R}^m\setminus\overline{\Omega}_1$, for a given function
$f_\Gamma^{\rm inp}{\in H^{1/2}(\Gamma)}$ we
seek a scattered field $\widetilde{\omega}$ satisfying
\begin{equation}
\label{eq:BEM:0}
\left|
\begin{array}{rcl}
\Delta \widetilde{\omega} + k^2 \widetilde{\omega} &=&0,\quad \text{in }\Omega_1^{\rm c},\\
\gamma_\Gamma \widetilde{\omega} &=&f_\Gamma^{\rm inp},\\
\partial_r\widetilde{\omega}- {\rm i}k\widetilde{\omega} &=& o(|r|^{-(m-1)/2}).
\end{array}
\right.
\end{equation}
Unlike problem \eqref{eq:FEM:0}, \eqref{eq:BEM:0} is always uniquely solvable~\cite{Kress:2014}. We define the associated solution operator $\mathrm{K}_{D \Gamma } $ as
\begin{equation}
\label{eq:BEM-oper:0}
\mathrm{K}_{D \Gamma } f_\Gamma^{\rm inp} :=\widetilde{\omega}|_D,
\end{equation}
with special attention to $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} f_\Gamma^{\rm inp}$ and $\mathrm{K}_{\Sigma\Gamma} f_\Gamma^{\rm inp}$, namely the scattered
field $\widetilde{\omega}$ satisfying~\eqref{eq:BEM:0} and its trace $\gamma_\Sigma \widetilde{\omega}$.
\end{itemize}
The decomposition framework that we propose for the continuous problem is the following:
\begin{subequations}
\label{eq:BEMFEM}
\begin{enumerate}
\item Solve the interface boundary integral system to find $(f_\Sigma,f_\Gamma)$, using data
$(\gamma_{\Sigma} u^{\rm inc}, \gamma_{\Gamma} u^{\rm inc})$ :
\begin{equation}
\label{eq:BEMFEM:0}
\left|
\begin{array}{ccccrcl}
\multicolumn{4}{l}{ (f_\Sigma,f_\Gamma)\in H^{1/2}(\Sigma)\times {H^{1/2}(\Gamma)} }\\[1.1ex]
f_\Sigma&-&\mathrm{K}_{\Sigma\Gamma} f_\Gamma&=& \gamma_{\Sigma} u^{\rm inc}\\
- \mathrm{K}_{\Gamma\Sigma} f_\Sigma&+&f_\Gamma &=& -\gamma_{\Gamma} u^{\rm inc}
\end{array}
\right.
\end{equation}
\item Construct the total field for the model problem~\eqref{eq:theproblem} using the
solution $(f_\Sigma,f_\Gamma)$ of~\eqref{eq:BEMFEM:0}, by solving the auxiliary models
\eqref{eq:FEM:0} and \eqref{eq:BEM:0}:
\begin{equation}
\label{eq:BEMFEM:01}
u:=\begin{cases}
\mathrm{K}_{\Omega_{2} \Sigma} f_\Sigma,\quad&\text{in $\Omega_2$},\\
\mathrm{K}_{\Omega_1^{\rm c}\Gamma} f_\Gamma+u^{\rm inc},\quad&\text{in $\Omega_1^{\rm c}$}.
\end{cases}~
\end{equation}
\end{enumerate}
\end{subequations}
We claim that, provided \eqref{eq:BEMFEM:0} is solvable, the decomposed framework-based
field $u$ defined in~\eqref{eq:BEMFEM:01} is the solution of \eqref{eq:theproblem}. Notice that
we are implicitly assuming in \eqref{eq:BEMFEM:01} that
\begin{equation}\label{eq:2.6}
\mathrm{K}_{\Omega_{12}\Sigma} f_\Sigma = u^{\rm inc}|_{\Omega_{12}}+\KGammaOmegaOneTwo f_\Gamma,
\end{equation}
where we recall the notation $\Omega_{12} =\Omega_1^{\rm c}\cap \Omega_2$.
Indeed, in view of \eqref{eq:BEMFEM:0}, both functions in \eqref{eq:2.6} agree on $\Sigma\cup \Gamma$ (the boundary of $\Omega_{12}$).
Assuming, as we will do from now on, that the only solution to the homogeneous system
\begin{equation}
\label{eq:BEMFEM:02}
\left|
\begin{array}{rcl}
\Delta v + k^2 v &=&0,\quad \text{in }\Omega_{12},\\
\gamma_\Gamma v &=&0,\quad \gamma_\Sigma v\,=\,0
\end{array}
\right.
\end{equation}
is the trivial one and noticing that $n|_{\Omega_{12}}\equiv 1$ which implies that $\mathrm{K}_{\Omega_{12}\Sigma} f_\Sigma $ and $\KGammaOmegaOneTwo f_\Gamma$ are solutions of the Helmholtz equation in $\Omega_{12}$, we can conclude that \eqref{eq:2.6} holds. Since $u$ defined in \eqref{eq:BEMFEM:01} belongs to $H^1_{\rm loc}(\mathbb{R}^m)$, it is simple to check that this function is the solution of \eqref{eq:theproblem}.
We remark that the hypotheses we have made on the artificial boundaries/domains, i.e. the well-posedness of problems \eqref{eq:FEM:0} and \eqref{eq:BEMFEM:02}, are not very restrictive in practice: $\Sigma$ or $\Gamma$ can be modified if needed. Alternatively, one can consider different boundary conditions on $\Gamma$ and $\Sigma$ (such as Robin conditions), redefining $\mathrm{K}_{D \Sigma } $ and $\mathrm{K}_{D \Gamma } $ accordingly, which would lead to a variant of the framework that we analyze in this article. In a future work we shall explore other boundary conditions on the interfaces and the analysis of the resulting variant models.
\subsection{Well-posedness of the decomposed continuous problem}
The aim of this subsection is to prove that the system of equations \eqref{eq:BEMFEM:0}, under the above stated hypothesis, has a unique solution. Consequently, we can conclude that the decomposition for the exact solution presented in \eqref{eq:BEMFEM:01} exists and is unique. To this end, we first derive some regularity results related to the operators $\mathrm{K}_{D \Sigma } $ and $\mathrm{K}_{D \Gamma } $ in Sobolev spaces.
\revB{For the topic of Sobolev spaces, we refer the reader to~\cite{AdFo:2003,McLean:2000}.}
\subsubsection{Functional spaces}
Let $D\subset \mathbb{R}^m$ be a Lipschitz domain. For any non-negative integer $s$, we denote
\[
\|f\|_{H^s(D)}^2:= \sum_{|\bm{\alpha}|\le s} \int_D |\partial_{\bm{\alpha}} f|^2
\]
the Sobolev norm, where the summation uses the standard multi-index notation in $\mathbb{R}^m$.
For $s= s_0+\beta$ with $s_0$ a non-negative integer and $\beta\in (0,1)$, we set
\[
\|f\|_{H^s(D)}^2:= \|f\|_{H^{s_0}(D)}^2 + \sum_{|\bm{\alpha}|\le s_0} \int_D\int_D
\frac{|\partial_{\bm{\alpha}} f({\bf x})-\partial_{\bm{\alpha}} f({\bf y})|^2 }{|{\bf x}-{\bf y}|^{m+2\beta}}{{\rm d}{\bf x}\,{\rm d}{\bf y}}.
\]
The Sobolev space $H^s(D)$ ($s\ge 0$) can then be defined as
\[
H^s(D):=\{f\in L^2(D)\: : \: \|f\|_{H^s(D)}<\infty \},
\]
endowed with the above natural norm.
If $\partial D$ denotes the boundary of $D$, we can introduce $H^s(\partial D)$ with a similar construction using local charts: Let $\{\partial D^j,\mu^j,{\bf x}^j\}_{j=1}^{J}$ be an atlas of $\partial D$, that is, $\{\partial D^j\}_j$ is an open covering of $\partial D$, $\{\mu^j\}$ a subordinated Lipschitz partition of unity on $\partial D$, and ${\bf x}^j:\mathbb{R}^{m-1}\to \partial D $ being Lipschitz and injective with $\partial D^j\subset \mathop{\rm Im} {\bf x}^j$, then we define
\[
\|\varphi\|_{H^s(\partial D)}^2:=\sum_{j=1}^J \|(\mu^j \varphi)\circ {\bf x}^j \|^2_{H^s(\mathbb{R}^{m-1})}.
\]
We note that $(\mu^j \varphi)\circ{\bf x}^j$ can be extended by zero outside of the image of ${\bf x}^j$. We then set
\[
H^s(\partial D):=\{\varphi\in L^2(\partial D) \ : \ \|\varphi\|_{H^s(\partial D)}<\infty\}.
\]
The space $H^s(\partial D)$ is well defined for $s\in[0,1]$: Any choice of $\{\partial D^j,\mu^j,{\bf x}^j\}$ gives rise to an equivalent norm (and inner product).
If $\partial D$ is a ${\cal C}^m$-boundary,
such as $\Gamma$ in Figure~\ref{fig:01}, this construction can be set up for $s\in[0,m]$ by taking $\{{\bf x}^j,\mu^j\}$ to be in ${\cal C}^m$ as well. In particular, if $\partial D$ is smooth we can define $H^s(\partial D)$ for any $s\ge 0$. Further, the space $H^{-s}(\partial D)$ can be defined as the realization of the dual space of $H^{s}(\partial D)$ when the integral product is taken as a representation of the duality pairing.
It is a classical result that the trace operator $\gamma_{\partial D} u:= u|_{\partial D}$ \revB{defines} a continuous onto mapping from $H^{s+1/2}(D)$ into $H^s(\partial D)$ for any $s\in (0,1)$. Actually, if $\partial D$ is smooth then $s\in(0,\infty)$. In these cases, we can alternatively define
\[
H^s(\partial D): =\{\gamma_{\partial D} u\ : \ u\in H^{s+1/2}(D)\}
\]
endowed with the image norm:
\begin{equation}
\label{eq:ImNorm}
\|\varphi\|_{H^s(\partial D)}:= \inf_{{0\ne u\in H^{s+1/2}(D)\atop \gamma_{\partial D} u = \varphi}} \|u\|_{H^{s+1/2}(D)}.
\end{equation}
We will use this definition to extend $H^{s}(\partial D)$ for $s>1$ in the Lipschitz case. Notice that with this definition, the trace operator from $H^{s+1/2}(D)$ into $H^s(\partial D)$ is continuous for any $s>0$.
\subsubsection{Boundary potentials and integral operators}
Let $\Phi_k$ be the fundamental solution of the two- or three-dimensional constant coefficient Helmholtz operator $\Delta +k^2{\rm I}$, defined for $\mathbf{x}, \mathbf{y} \in \mathbb{R}^m$ with $r := |\mathbf{x}-\mathbf{y}|$ as
\begin{equation}
\label{eq:fund}
\Phi_k(\mathbf{x}, \mathbf{y}) :=\begin{cases}
\displaystyle
\frac{\mathrm{i}}{4} H^{(1)}_0(kr), & \mathbf{x}, \mathbf{y} \in \mathbb{R}^2, \\
\displaystyle \frac{1}{4 \pi r} \exp(\mathrm{i}kr), & \mathbf{x}, \mathbf{y} \in \mathbb{R}^3,
\end{cases}
\end{equation}
where $H^{(1)}_n$ denotes the first kind Hankel
function of order $n$. For a smooth curve/surface $\Gamma$, with outward unit normal ${\boldsymbol \nu}$ and
normal derivative at ${\bf y} \in \Gamma$ denoted by $\partial_{{\boldsymbol \nu}({\bf y})}$, let
\[
({\rm SL}_k\varphi)({\bf x}) :=\int_{\Gamma} \Phi_k({\bf x}-{\bf y})\varphi({\bf y}) \,{\rm d}\sigma_{\bf y},\quad
({\rm DL}_k g)({\bf x}) :=\int_{\Gamma} \partial_{{\boldsymbol \nu}({\bf y})}\Phi_k({\bf x}-{\bf y})g({\bf y}) \,{\rm d}\sigma_{\bf y}
, \qquad {\bf x}\in \mathbb{R}^m\setminus\Gamma,\]
denote the single- and double-layer potentials, with density functions $\varphi$ and $g$, respectively.
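For illustration only (this sketch is ours and plays no role in the analysis), the fundamental solutions and a naive trapezoidal evaluation of the two-dimensional single-layer potential on a circle of radius $R$, for points away from the curve, can be coded as:
\begin{verbatim}
# Fundamental solutions and a naive 2-D single-layer evaluation on a circle
# (illustrative; the density phi is sampled at 2N equispaced nodes and
# |x'(t)| = R is kept explicit rather than absorbed into the density).
import numpy as np
from scipy.special import hankel1

def Phi2(k, r): return 0.25j * hankel1(0, k * r)
def Phi3(k, r): return np.exp(1j * k * r) / (4.0 * np.pi * r)

def SL2(k, z, phi, R=1.0):
    N = phi.size // 2
    t = np.pi * np.arange(-N + 1, N + 1) / N          # grid t_j
    y = R * np.c_[np.cos(t), np.sin(t)]               # x(t) on Gamma
    r = np.linalg.norm(z[None, :] - y, axis=1)
    return np.pi / N * np.sum(Phi2(k, r) * phi * R)   # trapezoidal rule

print(SL2(2.0, np.array([3.0, 0.0]), np.ones(128)))
\end{verbatim}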
The single- and double-layer boundary integral operators are then given,
via the well-known jump relations~\cite{KressColton} for the boundary layer potentials, by
\begin{eqnarray}\label{eq:Vk}
\mathrm{V}_{k}\varphi &:=&(\gamma_{\Gamma}{\rm SL}_k)\varphi=\int_{\Gamma} \Phi_k(\,\cdot\,-{\bf y})\varphi({\bf y}) \,{\rm d}\sigma_{\bf y},
\\
\mathrm{K}_{k}g &:=&\pm\tfrac12 g + (\gamma^{\mp}_{\Gamma}{\rm DL}_k)g
=\int_{\Gamma} \partial_{{\boldsymbol \nu}({\bf y})}\Phi_k(\,\cdot\,-{\bf y})g({\bf y}) \,{\rm d}\sigma_{\bf y}
\label{eq:Dk}
\end{eqnarray}
where $\gamma^-_{\Gamma}$ and $\gamma^+_{\Gamma}$ are trace operators on $\Gamma$, respectively, from the interior $\Omega_1$ and
exterior $\Omega_1^{\rm c}$. Given a real non-vanishing smooth function $\sigma:\Gamma\to \mathbb{R}$, and
$\mathrm{V}_{k,\sigma}\phi := \mathrm{V}_k(\sigma \phi)$ for any $\phi \in H^s(\Gamma)$, we consider the
combined field acoustic layer operator
\begin{equation}
\label{eq:2.6a}
\tfrac12 {\rm I}+\mathrm{K}_k -\mathrm{i}k\mathrm{V}_{k,\sigma}:H^s(\Gamma)\to H^s(\Gamma).
\end{equation}
\revB{Throughout this article, ${\rm I}$ denotes the identity operator.}
The standard combined field operator used in the literature~\cite{KressColton} is based on the choice $\sigma\equiv 1$.
In this article, we do not restrict ourselves to the usual choice, for reasons which will be fully explained later. Since $\Gamma$ is smooth, the operators $\mathrm{K}_k, \mathrm{V}_{k,\sigma} :H^s(\Gamma)\to H^{s+1}(\Gamma)$ are continuous, and the operator in \eqref{eq:2.6a} is invertible as a consequence of the Fredholm alternative and of the injectivity of \eqref{eq:2.6a}, which follows from a very simple modification of the classical argument in \cite[Th 3.33]{KressColton}.
Thus the inverse of the combined field integral operator
\begin{equation}
\label{eq:3.4}
{\cal L} _{k,\sigma}:=\Big(\tfrac12 {\rm I}+\mathrm{K}_k -\mathrm{i}k\,\mathrm{V}_{k,\sigma}\Big)^{-1}{:H^s(\Gamma)\to H^s(\Gamma)}
\end{equation}
is well defined.
Further, using \eqref{eq:Vk}-\eqref{eq:Dk} and with $\mathrm{SL}_{k,\sigma}\phi := \mathrm{SL}_k(\sigma\phi)$ for any $\phi \in H^s(\Gamma)$, we can write
the solution operator occurring in the construction~\eqref{eq:BEMFEM:01} as
\begin{equation}
\label{eq:KGammaOmega}
\mathrm{K}_{\Omega_1^{\rm c}\Gamma} =(\mathrm{DL}_k-\mathrm{i}{k}\,\mathrm{SL}_{k,\sigma}) {\cal L}_{k, \sigma}.
\end{equation}
The above solution operator, a variant of the Brakhage-Werner formulation (BWF)~\cite{MR0190518, KressColton}, will be used in this article
for both theoretical and computational purposes. The choice $\sigma\equiv 1$ reduces to the standard BWF~\cite{MR0190518, KressColton}.
\subsubsection{Well-posedness analysis of the interface model}
In this subsection, we first develop two key results before proving well-posedness of the boundary integral system~\eqref{eq:BEMFEM:0}.
\begin{lemma}\label{lemma:01}
The operator
\begin{equation}\label{eq:2.8}
\mathrm{K}_{\Omega_1^{\rm c}\Gamma} : H^{s}(\Gamma)\to H_{\rm loc}^{s+1/2}(\Omega_1^{\rm c})
\end{equation}
is continuous for any $s\in [0,\infty)$. Further, for any bounded Lipschitz domain/manifold $D\subset \Omega_1^{\rm c}$ with $\overline{D}\cap \Gamma =\emptyset$, the solution operator $\mathrm{K}_{D \Gamma } $ in ~\eqref{eq:BEM-oper:0},
for the homogeneous media problem~\eqref{eq:BEM:0},
satisfies the following mapping property for any $s,r\ge 0$
\begin{equation}\label{eq:2.85}
\mathrm{K}_{D \Gamma } : H^{s}(\Gamma)\to H^{r}(D).
\end{equation}
In particular,
\begin{equation}\label{eq:2.85b}
\mathrm{K}_{\Sigma\Gamma} : H^{s}(\Gamma)\to H^{r}(\Sigma)
\end{equation}
is continuous and compact, for $s,r\in\mathbb{R}$.
\end{lemma}
\begin{proof}
The first desired property follows from the identities \eqref{eq:KGammaOmega}, \eqref{eq:3.4} and the well known mapping properties
\begin{equation}\label{eq:2.75}
\mathrm{DL}_k:H^s(\Gamma)\to H_{\rm loc}^{s+1/2}(\Omega_1^{\rm c}),\quad
\mathrm{SL}_k:H^{s-1}(\Gamma)\to H_{\rm loc}^{s+1/2}(\Omega_1^{\rm c}),
\end{equation}
see for instance \cite[Th. 6.12]{McLean:2000}.
If $\overline{D}\cap \Gamma =\emptyset$,
the kernels in the boundary potentials in $\mathrm{DL}_k$ and $\mathrm{SL}_k $ are smooth functions in $\overline{D}\times \Gamma$ and hence the properties \eqref{eq:2.85} and \eqref{eq:2.85b} hold.
\end{proof}
Next we consider the heterogeneous media model solution operator $\mathrm{K}_{\Omega_{2} \Sigma} $, as defined
in~\eqref{eq:FEM:0}-~\eqref{eq:FEM-oper:0}. We recall the well known classical estimate~\cite{Ihlenburg:1998}
\[
\|\mathrm{K}_{\Omega_{2} \Sigma} f_\Sigma^{\rm inp}\|_{H^1(\Omega_2)}\le C\|f_\Sigma^{\rm inp}\|_{H^{1/2}(\Sigma)},
\]
with $C>0$ being a constant independent of $\revB{f_\Sigma^{\rm inp}}$.
Below, we generalize this to obtain a higher regularity, using boundary layer potentials and boundary integral operators, {defined in this case on merely Lipschitz curves/surfaces}, to improve the estimate for domains $D$ with $\overline{D}\subset \Omega_2\setminus\overline{\Omega}_1$.
\begin{lemma}\label{lemma:02}
There exists \revB{a constant} $C=C(k,n,\Omega_2)$ so that for any $s\in[0,1]$ and $f_\Sigma^{\rm inp}\in{H^ {s}(\Sigma)}$,
\begin{equation}\label{eq:2.10}
\|\mathrm{K}_{\Omega_{2} \Sigma} f_\Sigma^{\rm inp}\|_{H^ {s+1/2}(\Omega_2)}\le C \|f_\Sigma^{\rm inp}\|_{H^ {s}(\Sigma)}.
\end{equation}
Furthermore, if $D\subset \overline{D}\subset \Omega_2\setminus\overline{\Omega}_1$ the following solution
operator mapping property holds for any $r\in \mathbb{R}$
\begin{equation}\label{eq:2.95}
\mathrm{K}_{D \Sigma } : H^{0}(\Sigma)\to H^{r}(D).
\end{equation}
Consequently,
\begin{equation}\label{eq:2.95b}
\mathrm{K}_{\Gamma\Sigma} : H^{0}(\Sigma)\to H^{r}(\Gamma)
\end{equation}
is continuous and compact, for any $r\in\mathbb{R}$.
\end{lemma}
\begin{proof} {Throughout this proof we let $s\in[0,1]$ and, for notational convenience, we denote $v:=\mathrm{K}_{\Omega_{2} \Sigma} f_\Sigma^{\rm inp}$}.
By definition,
\[
\Delta v+k^2 v= k^2(1-n^2)v,\quad \gamma_{\Sigma}v=f_\Sigma^{\rm inp}.
\]
By the third Green identity (see for instance \cite[Th. 6.10]{McLean:2000}) we have the representation
\begin{equation}\label{eq:2.9}
v = k^2\int_{\Omega_0} \Phi_k(\cdot-{\bf y}) g^v_n({\bf y})\,{\rm d}{\bf y}+
\mathrm{SL}_{k,\Sigma}\lambda^v_\Sigma-\mathrm{DL}_{k,\Sigma}f_\Sigma^{\rm inp},
\end{equation}
with $\mathop{\rm supp} {g^v_n}\subset\Omega_0$, where {we have used} the notation
\[
\lambda^v_\Sigma:= \partial_{\boldsymbol \nu} v,\quad g^v_n:=(1-n^2)v.
\]
In the expression above $\mathrm{SL}_{k,\Sigma}$ and $\mathrm{DL}_{k,\Sigma}$ denote respectively the single- and double-layer potential from the corresponding densities, defined on $\Sigma$, associated with the
constant coefficient Helmholtz operator $\Delta +k^2{\rm I}$. Next we prove that
\[
\|\lambda^v_\Sigma\|_{H^ {s-1}(\Sigma)}=\|\partial_{\boldsymbol \nu} v\|_{H^ {s-1}(\Sigma)}\le C \|f_\Sigma^{\rm inp}\|_{H^ {s}(\Sigma)}.
\]
To this end, we start from the decomposition $v=v_1+v_2$, where the harmonic $v_1$ and the interior wave-field $v_2$ are solutions of
\[
\left|\begin{array}{l}
\Delta v_1 = 0,\quad \text{in }\Omega_2,\\
\gamma_\Sigma v_1 =f_\Sigma^{\rm inp},
\end{array}\right.\quad\text{and} \quad
\left|\begin{array}{l}
\Delta v_2+k^ 2 n^2 v_2 = -k^ 2 n^2 v_1,\quad \text{in }\Omega_2,\\
\gamma_\Sigma v_2 =0.
\end{array} \right.
\]
Classical potential theory results, see \cite[Th 6.12]{McLean:2000} and the discussion which follows it {(see also references therein)}, show that there exists $C>0$ so that
\begin{equation}\label{eq:2.11}
\|v_1\|_{H^ {s+1/2}(\Omega_2)}\le C \|f_\Sigma^{\rm inp}\|_{H^{s}(\Sigma)},\quad \|\partial_{{\bm \nu}} v_1\|_{H^ {s-1}(\Sigma)}\le C' \|f_\Sigma^{\rm inp}\|_{H^{s}(\Sigma)},
\end{equation}
for any $f_\Sigma^{\rm inp}\in H^s(\Sigma)$. On the other hand, following ~\cite[Ch. 4]{Gri:1985} or \cite{Dauge:1988}
there exists $\varepsilon>0$ and $C_{\varepsilon}>0$ such that
\begin{equation}\label{eq:2.12}
\|v_2\|_{H^{3/2+\varepsilon}(\Omega_2)}\le C_\varepsilon\|v_1\|_{H^0(\Omega_2)}\le C_\varepsilon\|f_\Sigma^{\rm inp}\|_{H^{0}(\Sigma)}.
\end{equation}
By the trace theorem (applied to $\nabla v_2$),
\[
\|\partial_{\boldsymbol \nu} v_2\|_{H^ {0}(\Sigma)} \le C\|\nabla v_2\|_{H^{1/2+\varepsilon}(\Omega_2)} \le
C'\|v_2\|_{H^{3/2+\varepsilon}(\Omega_2)}\le C''\|f_\Sigma^{\rm inp}\|_{H^{0}(\Sigma)}.
\]
Combining these estimates with \eqref{eq:2.9} we conclude that
\begin{eqnarray*}
\|v\|_{H^{s+1/2}(\Omega_2)}&\le& C_s \big(\|g_n^v\|_{L^2(\Omega_0)} + \|\lambda^v_\Sigma\|_{H^{s-1}(\Sigma)}+\|f_\Sigma^{\rm inp}\|_{H^{s}(\Sigma) }\big) \\
&\le& C'_s \big(\|v\|_{L^2(\Omega_0)} +\|f_\Sigma^{\rm inp}\|_{H^{s}(\Sigma)} \big) \\
&\le& C''_s \|f_\Sigma^{\rm inp}\|_{H^{s}(\Sigma)}.
\end{eqnarray*}
Notice also that if $D\subset \overline{D}\subset \Omega_2\setminus\overline{\Omega}_1$,
because the kernels of the potentials operators and the Newton potential are smooth in the corresponding variables,
we gain from the extra smoothing properties of the underlying operators in \eqref{eq:2.9} to derive
\[
\|v\|_{H^{r}(D)} \le C \big(\|g^v_n\|_{L^2(\Omega_0)} + \|\lambda^v_\Sigma\|_{H^{-1}(\Sigma)}+\|f_\Sigma^{\rm inp}\|_{H^{0}(\Sigma)} \big)\le
C'\|f_\Sigma^{\rm inp}\|_{H^0(\Sigma)},
\]
\revB{where the constants $C$ and $C'$ are independent of $f_\Sigma^{\rm inp}$.}
\end{proof}
For deriving the main desired result of this section, it is convenient to define the following off-diagonal
operator matrix
\[
{\cal K}:=\begin{bmatrix}
&\mathrm{K}_{\Gamma\Sigma} \\
\mathrm{K}_{\Sigma\Gamma} &
\end{bmatrix}.
\]
Then \eqref{eq:BEMFEM:0} can be written in operator form
\begin{equation}
\label{eq:BEMFEM:0b}
\left({\cal I}-{\cal K}\right)\begin{bmatrix}
f_\Sigma\\
f_\Gamma
\end{bmatrix}=\begin{bmatrix*}[r]
\gamma_{\Sigma} u^{\rm inc}\\
-\gamma_{\Gamma} u^{\rm inc}
\end{bmatrix*},
\end{equation}
\revB{where ${\cal I}$ denotes the $2\times 2$ block identity operator.}
A simple consequence of Lemmas \ref{lemma:01} and \ref{lemma:02} is that
\[
{\cal I}-{\cal K}:H^s(\Sigma)\times H^{\revAB{s}}(\Gamma)\to H^s(\Sigma)\times H^s(\Gamma)
\]
is continuous for any $s\ge 0$. Next we prove that this operator is indeed an isomorphism:
\begin{theorem}\label{th:3.3} For any $s \ge 0$,
\[
{\cal I}-{\cal K}: H^s(\Sigma)\times H^s(\Gamma)\to H^s(\Sigma)\times H^s(\Gamma),
\]
is an invertible compact perturbation of the identity operator.
\end{theorem}
\begin{proof} The continuity of ${\cal K}: H^0(\Sigma)\times H^0(\Gamma)\to H^s(\Sigma)\times H^s(\Gamma)$ for any $s \ge 0$ has already been established in the two preceding lemmas. In particular, ${\cal K}$ is compact. Moreover, the null space of ${\cal I}-{\cal K}$ consists of smooth functions. For any $( g_\Sigma,g_\Gamma)\in N({\cal I}-{\cal K})$,
we construct
\[
v :=\mathrm{K}_{\Omega_{2} \Sigma} g_\Sigma,\quad
\vartheta :=\mathrm{K}_{\Omega_1^{\rm c}\Gamma} g_\Gamma.
\]
Note that $w:=(v-\vartheta) $ defined, in principle, in $\Omega_{12}={\Omega_2\cap \Omega_1^{\rm c}}$ satisfies
\[
\Delta w+k^ 2w=0,\quad \text{in } \Omega_{12},\quad \gamma_\Sigma w=\gamma_{\Gamma} w=0.
\]
By the well-posedness of problem \eqref{eq:BEMFEM:02}, we have $w=0$ in $\Omega_{12}$. We define $u$ on
$\mathbb{R}^m$ as
\[
u({\bf x}) = \begin{cases}
v({\bf x}),\quad&\text{if }{\bf x} \in \Omega_2,\\
\vartheta({\bf x}),\quad&\text{if }{\bf x}\in \Omega_1^{\rm c}.\\
\end{cases}
\]
Note that $u$ is well defined in $\Omega_{12}$, {and} it is a solution of \eqref{eq:theproblem} with incident wave {$u^{\rm inc}=0$}. Therefore, $u=0$ which implies that $\vartheta=0$ in $\Omega_2^{\rm c}$. The principle of analytic continuation yields that $\vartheta=0$ also in $\Omega_1^{\rm c}$ and therefore $g_\Gamma =\gamma_{\Gamma} \vartheta=0$. Finally,
\[
g_{\Sigma} =\gamma_{\Sigma } u=\gamma_{\Sigma }\vartheta = 0,
\]
and hence the desired result follows.
\end{proof}
\section{A FEM-BEM algorithm for the decomposed model}\label{sec:fem-bem}
In this section we consider the numerical discretizations on the proven equivalent decomposed system~\eqref{eq:BEMFEM}.
In this article, we restrict to the
two-dimensional (2-D) case. [The 3-D algorithms and analysis for~\eqref{eq:BEMFEM} will be different to
the 2-D case, and in a future work we shall investigate a 3-D FEM-BEM computational model.]
Briefly, the approach consists of replacing the continuous operators $\mathrm{K}_{\Omega_{2} \Sigma} $ and $\mathrm{K}_{D \Gamma } $ with discrete operators based on suitable high-order FEM and BEM procedures. The stability of such a discretization depends on the numerical methods chosen in each case.
For the discretization of the differential operator $\mathrm{K}_{\Omega_{2} \Sigma} $ based on the heterogeneous domain model, we could consider a standard FEM with triangular, quadrilateral or even more complex elements. We choose the first option for the sake of simplicity, and we expect that the analysis developed for this case covers the other types of elements, with appropriate minor modifications.
The BEM procedure, for discretizing the exterior homogeneous medium associated $\mathrm{K}_{D \Gamma } $ through boundary integral
operators, is more open since an extensive range of methods is available in the literature. We will restrict ourselves to the spectral Nystr\"om method~\cite{Kress:2014} {(see also \cite{MR3526814})}. This scheme provides a discretization of the four integral operators of the associated Calderon calculus, and has exponential rate of convergence. In this article, we will make use of high-order discretizations of the \revB{single- and double-layer operators} that are easy to implement.
A key restriction of the standard Nystr\"om method to achieve spectrally accurate convergence is the requirement \revB{of a} smooth diffeomorphic parameterization of the boundary. This is because the method starts from appropriate decompositions and factorizations of the kernels of the operators to split
these into regular and singular parts. This is not a severe restriction in our case since $\Gamma$ is an auxiliary user-chosen smooth curve and can therefore be easily constructed as detailed as required.
Next we briefly consider these two known numerical procedures and hence describe our combined FEM-BEM algorithm and implementation details.
\subsection{The FEM procedure }
Let $\{\mathcal{T}_h\}_h$ be a sequence of regular triangular meshes where $h$ is the discrete mesh parameter, the diameter of the largest element of the grid. Hence we write $h\to 0$ to mean that the maximum of the diameters of the elements tends to 0.
Using ${\cal T}_h$, we construct the finite dimensional spline approximation space
\[
\mathbb{P}_{h,d}:=\{{v}_h\in{\cal C}^0(\Omega_2) :\ v_h|_{T}\in\mathbb{P}_d, \ \forall T\in {\cal T}_h \},
\]
where $\mathbb{P}_d$ is the space of \revB{bivariate polynomials}
of degree $d$. We define the FEM approximation $\mathrm{K}_{\Omega_{2} \Sigma} ^h$ to $\mathrm{K}_{\Omega_{2} \Sigma} $ as follows:
The FEM operator
\[
\mathrm{K}_{\Omega_{2} \Sigma} ^h:\gamma_\Sigma \mathbb{P}_{h,d}\to \mathbb{P}_{h,d},
\]
for $f_{\Sigma,h}^{\rm inp} \in \gamma_\Sigma \mathbb{P}_{h,d}$,
is constructed as $u_h:=\mathrm{K}_{\Omega_{2} \Sigma} ^hf_{\Sigma,h}^{\rm inp}$, where $u_h \in \mathbb{P}_{h,d}$ is the solution of the discrete FEM equations:
\begin{equation}\label{eq:4.1}
\left|
\begin{array}{l}
b_{k,n}(u_h,v_h)=0,\quad \forall v_h\in \mathbb{P}_{h,d}\cap H_0^1(\Omega_2)\\
\gamma_\Sigma u_h = f_{\Sigma,h}^{\rm inp}
\end{array}
\right. \qquad b_{k,n}(u,v)=\int_{\Omega_2}\nabla u \cdot \overline{\nabla v} - k^2 \int_{\Omega_2} n^2\, u \overline{v}.
\end{equation}
The discrete FEM {operator $\mathrm{K}_{\Omega_{2} \Sigma} ^h$ is well defined for sufficiently small $h$}.
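As a toy illustration of \eqref{eq:4.1} (ours, in legacy FEniCS with real arithmetic, $d=1$, $n\equiv 1$, a unit square standing in for $\Omega_2$, and made-up Dirichlet data):
\begin{verbatim}
# Toy realization of the discrete FEM problem above in legacy FEniCS
# (real arithmetic): P1 elements, n = 1, unit square, stand-in data f.
from dolfin import *

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
k, n = Constant(5.0), Constant(1.0)
a_form = inner(grad(u), grad(v))*dx - k**2 * n**2 * u*v*dx   # b_{k,n}
L_form = Constant(0.0)*v*dx
f_inp = Expression("cos(5.0*x[0])", degree=2)   # stand-in boundary data
bc = DirichletBC(V, f_inp, "on_boundary")
u_h = Function(V)
solve(a_form == L_form, u_h, bc)                # u_h = K^h f (real case)
\end{verbatim}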
\subsection{The BEM procedure }
Let
\begin{equation}\label{eq:parameterization}
{\bf x}:\mathbb{R}\to\Gamma, \qquad {\bf x}(t) :=(x_1(t),x_2(t)), \quad t \in \mathbb{R}
\end{equation}
be a smooth $2\pi-$periodic regular parameterization of $\Gamma$.
We denote by the same symbol ${\rm SL}_k$, ${\rm DL}_k$, ${\rm V}_k$ and ${\rm K}_k$ the parameterized layer potentials and boundary layer operators:
\begin{eqnarray*}
({\rm SL}_k\varphi)({{\bf z}})&=&\int_0^{2\pi} \Phi_k({{\bf z}}-{\bf x}(t))\varphi(t)\,{\rm d}t\,\quad\\
({\rm DL}_kg)({{\bf z}})&=&\int_0^{2\pi} \big(\nabla_{\bf y} \Phi_k({{\bf z}}-{\bf y})\big)\Big|_{\bf y={\bf x}(t)}\cdot \bm{\mu}(t)\,g(t)\,{\rm d}t
\end{eqnarray*}
where $\bm{\mu}(t):=(x_2'(t),-x_1'(t))=|{\bf x}'(t)|\:{\bm \nu}\circ{\bf x}(t)$.
Observe that $|{\bf x}'(t)|$ is incorporated \revB{into} the density in ${\rm SL}_k$ and into the kernel in ${\rm DL}_k$. We follow the same convention for the single- and double-layer weakly singular boundary integral operators. For high-order approximations,
it is important to efficiently take care of the singularities. In particular, for the spectrally accurate
Nystr\"om BEM solver, we use the following representations of the layer operators with smooth
$2\pi$ bi-periodic kernels $A,\ B,\ C,\ D$~\cite{KressColton}:
\begin{eqnarray*}
({\rm V}_k\varphi)(s)&=&\int_{0}^{2\pi}A(s,t)\log\sin^2\tfrac{s-t}2\:\varphi(t)\:{\rm d}t+\int_{0}^{2\pi}B(s,t)\varphi(t)\:{\rm d}t,\\
({\rm K}_kg)(s)&=&\int_{0}^{2\pi}C(s,t)\log\sin^2\tfrac{s-t}2\:g(t)\:{\rm d}t+\int_{0}^{2\pi}D(s,t)g(t)\:{\rm d}t.
\end{eqnarray*}
The Nystr\"om method, based on a discrete positive integer parameter $N$, starts with setting up a uniform grid
\begin{equation}\label{eq:grid-points}
t_j := {\tfrac{\pi j}{N}},\quad j = -N+1, \dots, N,
\end{equation}
{and the space of trigonometric polynomials of degree at most $N$}
\begin{equation}\label{eq:def:Tn}
\mathbb{T}_N:= \spann\langle {\exp({\rm i}\ell t)}\ :\ \ell\in\mathbb{Z}_N\rangle,
\end{equation}
with $\mathbb{Z}_N:= \{-N+1,-N+2,\ldots, N\}.$
We next introduce the interpolation operator $\mathrm{Q}_N$
\begin{equation}\label{eq:interp}
\mathbb{T}_N\ni \mathrm{Q}_N\varphi \quad \text{s.t.}\quad (\mathrm{Q}_N\varphi)(t_j)=\varphi(t_j),
\qquad j = -N+1, \dots, N,
\end{equation}
to define discretizations of the single and double layer operators:
\begin{eqnarray*}
({\rm V}_{k}^{N}\varphi)(s)&:=& \int_{0}^{2\pi}{\rm Q}_N(A(s,\cdot)\varphi\big)(t)\log\sin^2\tfrac{s-t}2\:{\rm d}t+\int_{0}^{2\pi}{\rm Q}_N(B(s,\cdot)\varphi\big)(t)\:{\rm d}t,\\
({\rm K}_{k}^{N}g)(s) &:=& \int_{0}^{2\pi} {\rm Q}_N(C(s,\cdot)g\big)(t)\log\sin^2\tfrac{s-t}2\:{\rm d}t+\int_{0}^{2\pi}{\rm Q}_N(D(s,\cdot)g\big)(t)\:{\rm d}t.
\end{eqnarray*}
We stress that the above integrals can be computed exactly using the identities:
\[
-\frac{1}{2\pi}\int_{0}^{2\pi} \log\sin^2\tfrac{t}2\:{\exp({\rm i}\ell t)}\:{\rm d}t =
-\frac{1}{2\pi} \int_{0}^{2\pi} \log\sin^2\tfrac{t}2 \cos({\ell} t)\:{\rm d}t=\begin{cases}
\log 4 , &{\ell}=0,\\
\frac{1}{|{\ell}|}, &{\ell}\ne 0,
\end{cases}
\]
and for $g_N \in \mathbb{T}_N$,
\begin{equation}\label{eq:quad}
\int_{0}^{2\pi} g_N(t)\:{\rm d}t =
\frac{\pi}{N} \sum_{j=-N+1}^{N} g_N(t_j),
\end{equation}
which are based on properties of the trapezoidal/rectangular rule for $2\pi$-periodic functions.
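As an illustration of these identities, note that $\exp({\rm i}\ell t)$ is an eigenfunction of convolution with $\log\sin^2\tfrac{t}2$, so the logarithmic part of the above operators acts diagonally in Fourier space on $\mathbb{T}_N$. The following Python sketch applies it exactly to nodal values (a minimal check of the identities, not the production quadrature):
\begin{verbatim}
import numpy as np

def apply_log_convolution(g):
    # values at the 2N uniform nodes of
    #   s -> int_0^{2pi} log(sin^2((s-t)/2)) g(t) dt,
    # exact for g in T_N: mode ell is multiplied by -2*pi*lambda_ell,
    # lambda_0 = log 4 and lambda_ell = 1/|ell| otherwise (identities above)
    twoN = g.size
    ell = np.fft.fftfreq(twoN, d=1.0 / twoN)      # integer mode numbers
    lam = np.where(ell == 0, np.log(4.0),
                   1.0 / np.maximum(np.abs(ell), 1.0))
    return np.fft.ifft(-2.0 * np.pi * lam * np.fft.fft(g))

N, l = 16, 3                                       # quick self-check
t = np.pi * np.arange(2 * N) / N
g = np.exp(1j * l * t)
assert np.allclose(apply_log_convolution(g), -2 * np.pi / l * g)
\end{verbatim}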
The high-order approximation evaluation of the potentials is achieved in a similar way:
\begin{equation}\label{eq:SLNDLN}
\begin{aligned}
\big({\rm SL}_{k}^{N}\varphi\big)({{\bf z}})\ &:=\ \int_{0}^{2\pi}
{\rm Q}_N(\Phi_k({{\bf z}}-{\bf x}(\cdot))\varphi)(t)\,{\rm d}t, \\
\big({\rm DL}_{k}^{N}g\big)({{\bf z}})\ &:=\ \int_{0}^{2\pi}
{\rm Q}_N\big(\big(\nabla_{\bf y}\Phi_k({{\bf z}}-{\bf y})\big) \big|_{\bf y={\bf x}(\cdot)}
\cdot \bm{\nu}(\cdot)\,g\big)
(t)\,{\rm d}t,
\end{aligned}
\end{equation}
{leading to the rectangular rule approximation as in \eqref{eq:quad}}.
Now we are ready to describe the discrete operator $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^N$ that is
a high-order approximation to
the exterior homogeneous model continuous operator $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} $. First, we introduce
the parameterized counterpart of the continuous operator in \eqref{eq:2.6a},
\begin{equation}\label{eq:4.2}
{\cal L}_k g :=( \tfrac12\mathrm{I}+\mathrm{K}_k-\mathrm{i}k\mathrm{V}_k)^{-1} g,
\end{equation}
(which corresponds to $\sigma\circ{\bf x} = \frac{1}{|{\bf x}|}$). Then we define
\begin{equation}\label{eq:00}
\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^{N}
g:=(\mathrm{DL}_{k}^{N}-{\rm i}k\mathrm{SL}_{k}^{N}) {\cal L}_{k}^{N} g,\quad \text{with }
{\cal L}_{k}^{N}:= (\tfrac12\mathrm{I}+\mathrm{K}_{k}^{N}-\mathrm{i}k\mathrm{V}_{k}^{N})^{-1}.
\end{equation}
We remark that the definition of $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^N$ requires only evaluation of input functions at the grid points. {In particular, it is well defined on continuous functions.}
Indeed, we have
\[
\varphi = {\cal L}_{k}^{N}g
\quad\Rightarrow\quad
{\rm Q}_N\varphi ={\rm Q}_N{\cal L}_{k}^{N}{\rm Q}_N g,
\]
and since the discrete boundary layer operators only use pointwise values of the density at the grid points (i.e., {${\rm Q}_N\varphi$}),
evaluation of $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^{N}g$
requires only values of $g$ at the grid points. So we can replace, when necessary,
\begin{equation}\label{eq:3.10}
\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^N g = {\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^N {\mathrm Q}_N g.}
\end{equation}
The discrete operator $\mathrm{K}_{\Sigma\Gamma} ^N$ {is defined accordingly}: $\mathrm{K}_{\Sigma\Gamma} ^N g$ is the trace of $\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^N g$ on $\Sigma$.
Thus our algorithm is based on the idea of taking the traces of the FEM and BEM solutions on $\Gamma$ and $\Sigma$, respectively.
\subsection{The FEM-BEM computational model}
In addition to the discrete operators defined above, we need one {last} discrete
operator to describe the {FEM-BEM algorithm. Let}
\begin{equation}
{\rm Q}^h_{\Sigma}:{\cal C}^0(\Sigma)\to \gamma_\Sigma \mathbb{P}_{h,d},
\end{equation}
{denote} the usual Lagrange interpolation operator on $\gamma_\Sigma \mathbb{P}_{h,d}$, {the inherited finite element space on $\Sigma$}.
Our full FEM-BEM algorithm is:
\begin{subequations}
\begin{itemize}
\item {\bf Step 1:} Solve the finite dimensional system\label{eq:NhBEMFEM}
\begin{equation}
\label{eq:NhBEMFEM:0} \left({\cal I}-\begin{bmatrix}
&{\rm Q}^h_{\Sigma} \mathrm{K}_{\Sigma\Gamma} ^N\\
{\rm Q}_N\mathrm{K}_{\Gamma\Sigma} ^h
\end{bmatrix}
\right)\begin{bmatrix}
f^h_{\Sigma}\\
f^N_\Gamma
\end{bmatrix}=\begin{bmatrix*}[r]
{\rm Q}^h_{\Sigma}\gamma_{\Sigma} u^{\rm inc}\\
- {\rm Q}_N \gamma_{\Gamma} u^{\rm inc}
\end{bmatrix*}.
\end{equation}
\item {\bf Step 2:} Construct the FEM-BEM solution
\begin{equation}
\label{eq:NhBEMFEM:01}
u_h:=\mathrm{K}_{\Omega_{2} \Sigma} ^h f_\Sigma^h,\quad
\omega_N:=\mathrm{K}_{\Omega_1^{\rm c}\Gamma} ^Nf^N_\Gamma,\quad u_{h,N}:=\begin{cases}
u_h,\quad&\text{in $\Omega_2$},\\
\omega_N+u^{\rm inc},\quad&\text{in $\Omega_1^{\rm c}$}.
\end{cases}
\end{equation}
\end{itemize}
\end{subequations}
\begin{remark}
We have committed a slight abuse of notation in the right-hand-side of \eqref{eq:NhBEMFEM:0} by writing
\[
{\rm Q}_N \gamma_{\Gamma} u^{\rm inc}
\]
instead of the correct, but more complex, $
{\rm Q}_N\big((\gamma_{\Gamma} u^{\rm inc})\circ{\bf x}\big)$. Similarly,
\[
{\rm Q}_N\big((\mathrm{K}_{\Gamma\Sigma} ^h\,\cdot\,)\circ{\bf x}\big)
\]
should be read in the lower extra-diagonal block of the matrix in \eqref{eq:NhBEMFEM:0}.
Indeed, this is equivalent to replacing a space on $\Gamma$ with that obtained via the parameterization \eqref{eq:parameterization}. Since both spaces are isomorphic, being strict in the notation for
description of these operators is not {absolutely} necessary. In particular, we {avoid} complicated notation and {use} a compact way to describe the algorithm
and associated theoretical results.
\end{remark}
\begin{remark}
Complete numerical analysis of the FEM-BEM algorithm is beyond the
scope of this article.
In a future {work}, we shall carry out a detailed numerical analysis of the FEM-BEM algorithm. {Below we give the main results.} In summary, the analysis is based
on the following assumption on the mesh-grid:
\paragraph{\bf{Assumption 1}} There exists $\varepsilon_0>0$ such that the sequence of grids $\{{\cal T}_h\}_h$ satisfies
\begin{equation}\label{eq:mesh_res}
h^{1/2} h_D^{-\varepsilon_0}\to 0,
\end{equation}
where $D\subset \Omega_2\setminus\overline{\Omega}_0$ is an open neighborhood of $\Gamma$ and $h_D$ is the maximum of the diameters of the elements of the grid ${\cal T}_h$ with non-empty intersection with $D$.
We note that this assumption allows locally refined grids, but introduces a very weak restriction on the ratio between the largest element in $\Omega_2$ and the smallest element in $D$. However, since the exact solution is smooth on $D$, {the partial differential equation in this domain is just the homogeneous Helmholtz equation}, and it is reasonable to expect that small elements are not going to be used in this subdomain.
\revB{Using Assumption 1, in a future work we shall prove the well-posedness of the
discrete system~\eqref{eq:NhBEMFEM} and optimal order of convergence of the FEM-BEM solution.
In particular, after deriving convergence of the individual FEM and BEM approximations, we shall prove
the following convergence result:} For any region $\Omega_R\subset\Omega_1^{\rm c}=\mathbb{R}^2\setminus \overline{\Omega}_1$,
$0 < \varepsilon \leq \varepsilon_0$,
$r\ge 0$, $t\ge d+3/2$,
\begin{eqnarray}
& & \hspace{-0.5in} \|u-u_h\|_{H^{1}(\Omega_2)}+ \| \omega-\omega_N\|_{H^{r}(\Omega_R)} \nonumber \\
&\le& C\big(h_D^{d-\varepsilon} N^{-\varepsilon}+h_\Sigma^{d+1/2}+N^{-t}+h_D^d\big)\|u^{\rm inc}\|_{H^{t+1}(\Omega_2)}
+C\inf_{v_h\in\mathbb{P}_{h,d}}\|u-v_h\|_{H^1(\Omega_2)}, \label{eq:fin_res}
\end{eqnarray}
where $h_D$ is as in~\eqref{eq:mesh_res} and $h_\Sigma$ is the maximum distance between any two consecutive Dirichlet/constrained nodes in $\mathcal{T}_h$;
$(u, \omega) = (\mathrm{K}_{\Omega_{2} \Sigma} f_\Sigma,\mathrm{K}_{\Omega_1^{\rm c}\Gamma} f_\Gamma)$ is the exact solution of \eqref{eq:BEMFEM}; and $(u_h, \omega_N)$ is the unique solution of the numerical method \eqref{eq:NhBEMFEM}.
\end{remark}
Next we describe algebraic details required for
implementation of the algorithm, followed by numerical experiments in Section~4 to demonstrate the efficiency of the FEM-BEM algorithm to simulate
wave propagation in the heterogeneous and unbounded medium.
\subsection{FEM-BEM algebraic systems and evaluation of wave fields}
Simulation of approximate interior and exterior wave fields $u_{h,N}$ using the solution of~\eqref{eq:NhBEMFEM:0}
and the representation in~\eqref{eq:NhBEMFEM:01} requires: (i) computing the interior
solution $u_h$ by once solving the finite element system~\eqref{eq:4.1} using the Dirichlet data~$f_{\Sigma}^h$;
and (ii) computing the exterior solution ${\omega_N}$ in $\Omega_1^{\rm c}$ by evaluating the layer potential $(\mathrm{DL}_k^N-\mathrm{i}k\mathrm{SL}_k^N){\cal L}_{k}^{N} f^N_\Gamma$, using the representation in~\eqref{eq:SLNDLN}.
\revA{Since ${\cal L}_{k}^{N} f^N_\Gamma \in \mathbb{T}_N$ and the dimension of $\mathbb{T}_N$ is $2N$,
using~\eqref{eq:def:Tn}{--\eqref{eq:SLNDLN}}, the number of degrees of freedom (DoF) required to compute the exterior solution ${\omega_N}$ equals the
number of interpolatory uniform grid points $t_j,~j=-N+1, \dots, N$ in~\eqref{eq:grid-points} that determine the interpolation operator
$\mathrm{Q}_N$ in~\eqref{eq:interp}}.
The linear algebraic system corresponding to the Dirichlet
problem~\eqref{eq:4.1} for
$u_h \in \mathbb{P}_{h,d}$ is obtained by using an ansatz that is a linear combination of the basis functions spanning
$\mathbb{P}_{h,d}$. Coefficients in the $u_h$ ansatz are values of $u_h$ at the nodes that determine $\{\mathcal{T}_h\}_h$.
The nodes include constrained/boundary Dirichlet nodes on $\Sigma$ and free/interior non-Dirichlet nodes in $\Omega_2$.
Henceforth, for a chosen mesh for the bounded domain $\Omega_2$,
we use the notation $M$ and $L$ to denote the number of Dirichlet and free nodes in the mesh, respectively.
The FEM system~\eqref{eq:4.1} to compute the solution $u_h$
leads to an $L$-dimensional linear system for the unknown vector
${\bf u}_L$ (that are values of ${u}_h$ at the interior nodes). The system is governed
by a real symmetric sparse matrix, say, ${\bf A}_{L}$.
The matrix ${\bf A}_{L}$ is obtained by eliminating the row and column vectors associated with the boundary nodes.
Let ${\bf D}_{L,M}$ be the $L\times M$ matrix that is used to move the Dirichlet condition
to the right-hand-side of the system. Thus for a given Dirichlet data vector $\widehat{{\bf f}}_{M}$, we may theoretically write
${\bf u}_L = {\bf A}^{-1}_{L}{\bf D}_{L,M} \widehat{{\bf f}}_{M}$. Let ${\bf T}_{2N,L}$ be the $2N\times L$ {sparse} matrix so that
${\bf T}_{2N,L} {\bf u}_L (= {\bf T}_{2N,L} {\bf A}^{-1}_{L}{\bf D}_{L,M} \widehat{{\bf f}}_{M})$ is the {trace} of the finite element solution ${u}_h$ of~\eqref{eq:4.1}
at the $2N$ interior points $\mathbf{x}(t_j) \in \Gamma,~j = -N+1, \dots, N$, that are the BEM grid points.
For describing
the full FEM-BEM system, using the above representation, it is convenient to define the $2N \times M$ matrix
\begin{equation}\label{eq:2N-M}
{\bf \widetilde{K}}_{2N,M}: = {\bf T}_{2N,L} {\bf A}^{-1}_{L}{\bf D}_{L,M}.
\end{equation}
The matrix ${\bf A}^{-1}_{L}$ in~\eqref{eq:2N-M}, in general, should not be computed in practice.
We may instead consider an ${\bf L}_L {\bf D}_{L}{\bf L}_L^\top$
factorization~\cite{Duff:2004} (for example, implemented in the Matlab command {\tt ldl}), where ${\bf D}_L$ is a block diagonal matrix with $1\times 1$ or $2\times 2$ blocks and ${\bf L}_L$ is a block (compatible) unit lower triangular matrix. Hence, each multiplication by $ {\bf A}^{-1}_{L}$ is reduced to solving two (block) triangular and one $2\times 2$ block diagonal system which can be efficiently done, leading to evaluation of
${\bf \widetilde{K}}_{2N,M}$ on $M$-dimensional vectors.
Of course the ${\bf L}_L {\bf D}_{L}{\bf L}_L^\top$ factorization is a relatively expensive process, but worthwhile in our method to simulate the complex
heterogeneous and unbounded region model. (We further quantify this process using numerical experiments in Section~\ref{sec:num-exp}.)
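In code, the matrix-free application of ${\bf \widetilde{K}}_{2N,M}$ may look as follows (a Python sketch with hypothetical, pre-assembled matrices {\tt A\_L}, {\tt D\_LM}, {\tt T\_2NL}; as SciPy lacks a sparse {\tt ldl} routine, a sparse LU factorization stands in here for Matlab's {\tt ldl}):
\begin{verbatim}
from scipy.sparse.linalg import splu

lu_A = splu(A_L.tocsc())       # one-time factorization of the sparse A_L

def apply_Ktilde(f_M):
    # K_tilde f = T A^{-1} D f, never forming A^{-1} explicitly
    return T_2NL @ lu_A.solve(D_LM @ f_M)
\end{verbatim}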
The ansatz for the unknown density $f^N_\Gamma \in \mathbb{T}_N$ is a linear combination of $2N$ known basis
functions ${\exp({\rm i}\ell t)}, ~\ell = -N+1, \dots, N$ in~\eqref{eq:def:Tn} that span $\mathbb{T}_N$.
The $2N$-dimensional BEM system for the unknown vector
$\widetilde{\bf f}_{2N}$ (that are values of the unknown density at the Nystr\"om node points $t_j,~j=-N+1, \dots, N$)
is governed by a complex dense matrix and an input $2N$-dimensional vector $\widetilde{{\bf f}}_{2N}$ determined
by the Dirichlet data on $\Gamma$ in the exterior homogeneous model~\eqref{eq:BEM:0} evaluated
at $t_j,~j=-N+1, \dots, N$. We may write
\begin{equation}\label{eq:bem_vec}
{\bf B}_{2N}{\bm{\varphi}}_{2N} = \widetilde{{\bf f}}_{2N},
\end{equation}
where
${\bf B}_{2N}$ is the $2N \times 2N$ Nystr\"om matrix corresponding to the discrete
boundary integral operator in~\eqref{eq:00}. Similar to ${\bf T}_{2N,L}$, let ${\bf P}_{M,2N}$ be the matrix representation of the (discrete) combined potential generated by a density at the $M$ Dirichlet nodes of ${\cal T}_h$. That is,
${\bf P}_{M,2N} \bm{\varphi}_{2N}$ is the vector form of
${\rm Q}_{h}^\Sigma \gamma_\Sigma(\mathrm{DL}_{k}^{N}-{\rm i}k\mathrm{SL}_{k}^{N})\varphi$, following the BEM representation~\eqref{eq:00} for evaluation
of the exterior field at the $M$ Dirichlet nodes on $\Sigma$. Similar to the interior problem based matrix in~\eqref{eq:2N-M}, corresponding
to the exterior field it is convenient to introduce the $M \times 2N$ matrix
\begin{equation}\label{eq:M-2N}
{\bf \widehat{K}}_{M,2N}: = {\bf P}_{M,2N}{\bf B}_{2N}^{-1}.
\end{equation}
\revA{Obviously, $M\ll L$ (since $M\sim L^{1/2}$ in the 2D case for quasi-uniform grids) and, thanks to the choice of the smooth boundary $\Gamma$, the standard Nystr\"om BEM is spectrally accurate, which further implies that $2N \ll M$. (We will quantify this substantially smaller ``$\ll$'' claim using numerical experiments in Section~\ref{sec:num-exp}.)}
Thus the cost of setting up an ${\bf LU}$ decomposition of the dense matrix ${\bf B}_{2N}$ is
negligible and consequently the matrix $\widehat{\bf K}_{M,2N}$ product with any $2N$-dimensional
vector can be efficiently evaluated.
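Analogously, ${\bf \widehat{K}}_{M,2N}$ can be applied through a one-time dense LU factorization of ${\bf B}_{2N}$ (a sketch; {\tt B\_2N} and {\tt P\_M2N} are assumed to be assembled):
\begin{verbatim}
from scipy.linalg import lu_factor, lu_solve

lu_B = lu_factor(B_2N)         # cheap: B_2N is only 2N x 2N, and 2N << M

def apply_Khat(phi_2N):
    # K_hat phi = P B^{-1} phi, never forming B^{-1} explicitly
    return P_M2N @ lu_solve(lu_B, phi_2N)
\end{verbatim}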
The implementation procedure described above to compute the interior and exterior fields using~\eqref{eq:NhBEMFEM:01} requires
{the $M$-dimensional vector $\widehat{{\bf f}}_M$ with the values of the unknown at the Dirichlet nodes}
on $\Sigma$ and the $2N$-dimensional vector
${\widetilde{{\bf f}}_{2N}}$ with values at the $2N$ uniform grid points ${\bf x}(t_j),~j= -N+1, \dots, N$ on $\Gamma$.
Since $\Sigma$ and $\Gamma$ are artificial boundaries
for the decomposition of the original model, the vectors $\widehat{{\bf f}}_M, \widetilde{{\bf f}}_{2N}$
are unknown. The interface system~\eqref{eq:NhBEMFEM:0}, that uses the data $u^{\rm inc}$ in the original model,
completes the process to compute $\widehat{{\bf f}}_M, \widetilde{{\bf f}}_{2N}$. In particular, for
the matrix-vector
form description of~\eqref{eq:NhBEMFEM:0}, we obtain input data vectors, say
$\widehat{{\bf u}}_{M}^{\rm inc}$ and $\widetilde{{\bf u}}_{2N}^{\rm inc}$, using the vector form representations of
${\rm Q}^h_{\Sigma}\gamma_{\Sigma} u^{\rm inc}$ and ${\rm Q}_N \gamma_{\Gamma} u^{\rm inc}$, respectively.
More precisely, using~\eqref{eq:2N-M}--\eqref{eq:M-2N}, the
matrix-vector algebraic system corresponding to~\eqref{eq:NhBEMFEM:0}
takes the form
\begin{equation}\label{eq:system:01}
\begin{bmatrix}
{\bf I}_M & -{\bf \widehat{K}}_{M,2N}\\ \\
-{\bf \widetilde{K}}_{2N,M}& {\bf I}_{2N}
\end{bmatrix}
\begin{bmatrix}
\widehat{{\bf f}}_M\\ \\
\widetilde{{\bf f}}_{2N}
\end{bmatrix}
=
\begin{bmatrix}
\widehat{{\bf u}}_{M}^{\rm inc}\\ \\
-\widetilde{{\bf u}}_{2N}^{\rm inc}
\end{bmatrix}
\end{equation}
{where ${\bf I}_{M}, {\bf I}_{2N}$ are, respectively, the $M \times M$ and $2N \times 2N$ identity matrices.}
In our implementation, instead of solving the full linear system in~\eqref{eq:system:01} we work with the Schur complement
\begin{subequations}\label{eq:system:02}
\begin{eqnarray}
(\underbrace{{\bf I}_{2N}-{\bf \widetilde{K}}_{2N,M}{\bf \widehat{K}}_{M,2N}}_{{=:}{\bf A}_{\rm Sch}})\widetilde{\bf f}_{2N} &=&
-\widetilde{\bf u}_{2N}^{\rm inc}+{\bf \widetilde{K}}_{2N,M}\widehat{\bf u}_M^{\rm inc}, \label{eq:system:02a}\\
\widehat{\bf f}_M &=& \widehat{\bf u}_M^{\rm inc}+{\bf \widehat{K}}_{M,2N}\widetilde{\bf f}_{2N} \label{eq:system:02b}.
\end{eqnarray}
\end{subequations}
After solving for $\widetilde{\bf f}_{2N}$ in~\eqref{eq:system:02a}, the main computational cost for finding
$\widehat{\bf f}_M$ involves only the matrix-vector multiplication ${\bf \widehat{K}}_{M,2N}\widetilde{\bf f}_{2N}$. The latter
requires solving a BEM system, which can be carried out using a direct solve because $2N$ is relatively small.
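Combining the two operator applications sketched above, the Schur-complement step \eqref{eq:system:02a}--\eqref{eq:system:02b} can be driven matrix-free by GMRES along the following lines (a sketch; the direct variant discussed later instead assembles ${\bf A}_{\rm Sch}$ column by column and factorizes it):
\begin{verbatim}
from scipy.sparse.linalg import LinearOperator, gmres

def solve_interface(u_inc_M, u_inc_2N, twoN):
    # solve (I - Ktilde Khat) f_2N = -u_inc_2N + Ktilde u_inc_M,
    # then recover f_M = u_inc_M + Khat f_2N
    A_Sch = LinearOperator((twoN, twoN), dtype=complex,
                           matvec=lambda x: x - apply_Ktilde(apply_Khat(x)))
    rhs = -u_inc_2N + apply_Ktilde(u_inc_M)
    f_2N, info = gmres(A_Sch, rhs, tol=1e-8)   # 'rtol' in newer SciPy
    assert info == 0
    return u_inc_M + apply_Khat(f_2N), f_2N
\end{verbatim}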
\section{Numerical experiments}\label{sec:num-exp}
In this section we consider two sets of numerical experiments to demonstrate the overlapping decomposition framework based
FEM-BEM algorithm. In the first set of experiments, the heterogeneous domain $\Omega_0$ has non-trivial curved boundaries and the refractive index function
$n$ is smooth; and in the second set of experiments $\Omega_0$ is a complex non-smooth structure and \revB{$n$ is a discontinuous function}. For these two sets of experiments, we consider the $\mathbb{P}_d$ Lagrange finite elements with $d=2,3,4$
for the interior FEM model with mesh values $h$, and several values of the Nystr\"om method parameter $N$ to achieve spectral accuracy and to make the BEM errors
less than those in the FEM discretizations. The reported CPU times in this section are based on a serial
implementation of the algorithm in Matlab (2017b) on a desktop with a 10-core Xeon E5-2630 processor and $128$GB RAM.
\revA{In our numerical experiments to compute $\widetilde{\bf f}_{2N}$ in~\eqref{eq:system:02a}, we solve the linear
system using: (i) the iterative GMRES method with the (relative) residual set to $10^{-8}$ in all the cases; and (ii) the direct Gaussian elimination solve which requires the full matrix ${\bf A}_{\rm Sch}$ in~\eqref{eq:system:02a}. Both approaches are compared for the numerical experiments in Section~\ref{subsec:dir_iter_comp}.
As an error indicator of our full FEM-BEM algorithm,
we analyze the widely used quantity of interest (QoI) in numerous wave propagation applications: the far-field arising from both the interior and exterior fields induced by the incident field impinging from a particular direction.
For a large class of inverse wave models~\cite{KressColton}, the {far-field}
measured at several directions
is fundamental to infer various
properties of the wave propagation medium.}
\revA{To computationally verify the quality of our FEM-BEM algorithm in Section~\ref{sec:num-exp}, we analyze the numerical far-field error
at thousands of direction unit vectors $\bf z$. Using~\eqref{eq:bem_vec}, we define a spectrally accurate approximation to the QoI as
\begin{equation}\label{eq:far-z}
\left(\bm{\mathcal{F}}_N\bm{\varphi}_{2N}\right)({\bf z}):= \sqrt{\frac{k}{8\pi} }\exp\big(-\tfrac14\pi{\rm i} \big)\frac{\pi}{N} \sum_{j=-N+1}^{N} \exp(
-\mathrm{i} k({\bf z}\cdot{\bf x}(t_j))) \big[{\bf z}\cdot(x_2'(t_j),-x_1'(t_j))+1\big] \left[\bm{\varphi}_{2N}\right]_j.
\end{equation}
The exact representation of the QoI is~\cite{KressColton}
\begin{equation}\label{eq:far}
\left({\cal F}\varphi\right)({\bf z}):=
\sqrt{\frac{k}{8\pi } } \exp\big(-\tfrac14\pi{\rm i} \big)\int_{0}^{2\pi} \exp(
-\mathrm{i} k ({\bf z}\cdot{\bf x}(t))) \big[{\bf z}\cdot(x_2'(t),-x_1'(t))+1\big]\varphi (t)\,{\rm d}t.
\end{equation}
Using the angular representation of the direction vectors ${\bf z}$,
we compute approximate far-fields at $1,000$ uniformly distributed angles. We report the QoI
errors for various grid parameter sets $(h,N)$, and demonstrate high-order convergence of our FEM-BEM algorithm.
The maximum of the estimated errors in the approximate QoI, using the values at the $1,000$ uniform directions, is used
below to validate the efficiency and high-order accuracy of the FEM-BEM algorithm. }
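A direct transcription of \eqref{eq:far-z} reads as follows (a sketch; {\tt x\_j} and {\tt dx\_j} hold ${\bf x}(t_j)$ and ${\bf x}'(t_j)$ at the $2N$ Nystr\"om nodes):
\begin{verbatim}
import numpy as np

def far_field_N(z, phi, x_j, dx_j, k, N):
    # (F_N phi)(z) for a unit direction z (shape (2,)); phi = [phi_2N]_j;
    # x_j, dx_j have shape (2N, 2)
    nu = np.column_stack([dx_j[:, 1], -dx_j[:, 0]])     # (x2', -x1')
    pref = np.sqrt(k / (8.0 * np.pi)) * np.exp(-0.25j * np.pi) * np.pi / N
    return pref * np.sum(np.exp(-1j * k * (x_j @ z)) * (nu @ z + 1.0) * phi)
\end{verbatim}
Evaluating this sum at the $1,000$ reported directions is an inexpensive post-processing step.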
\subsection{Star-shaped domain with five-pointed-star refractive index}
In the Experiment 1 set, we choose $\Omega_0$ to be the star-shaped region
sketched in the interior of the disk $\Omega_1$ in Figure~\ref{fig:expPatrickStar}, and the
refractive index function is defined using polar coordinates as
\[
n^2(r,\theta):= 1+16 \chi\Big(\frac{1}{0.975}\Big[\frac{r}{2+0.75\sin(5\theta)}-0.025\Big]\Big),
\]
with
\[
\chi(x) := \frac{1}{2}(\widetilde{\chi}(x)+1-\widetilde{\chi}(1-x)),\quad
\widetilde{\chi}(x) := \begin{cases}
1, &\text{if $x\le 0$},\\
\exp\big(\frac{1}{e-e^{1/x}}\big),\quad&\text{if } x\in (0,1),\\
0, &\text{if $x>1$}.
\end{cases}
\]
Notice that $\widetilde{\chi}(x)$ is a smooth cut-off function with $\mathop{\rm supp}\widetilde{\chi} =(-\infty,1]$. Therefore, the function $\chi$ is smooth and also {\em symmetric} around $1/2$: $\chi(1-x)=1-\chi(x)$ for any $x$.
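For reference, the pair $(\widetilde{\chi},\chi)$ can be evaluated as below (a direct transcription of the definitions, vectorized over NumPy arrays):
\begin{verbatim}
import numpy as np

def chi_tilde(x):
    # 1 for x <= 0, exp(1/(e - e^{1/x})) on (0,1), 0 for x >= 1
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    out[x <= 0.0] = 1.0
    mid = (x > 0.0) & (x < 1.0)
    out[mid] = np.exp(1.0 / (np.e - np.exp(1.0 / x[mid])))
    return out

def chi(x):
    # smooth and symmetric around 1/2: chi(1-x) = 1 - chi(x)
    return 0.5 * (chi_tilde(x) + 1.0 - chi_tilde(1.0 - np.asarray(x, float)))
\end{verbatim}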
\begin{figure}[ht]
\vspace{-0.1in}
\centerline{
\includegraphics[width = 0.8\textwidth]{patrickStar01.png}}
\vspace{-0.6in}
\caption{\label{fig:expPatrickStar} Heterogeneous medium and artificial boundaries for Experiment 1.}
\end{figure}
\revA{For this example,
$\Omega_2$ is the rectangle $[-6,6]\times [-8,8]$ with boundary $\Sigma$, so that the
diameter of the interior domain is $20$. Thus, for a chosen wavenumber $k$, the interior heterogeneous model spans $10k/\pi$ wavelengths. For our numerical experiments we choose three wavenumbers $k=\pi/4, \pi,4\pi$, to simulate problems with acoustic characteristic sizes of $2.5, 10, 40$ wavelengths, respectively. The smooth boundary $\Gamma$ for this example is a circle centered at the origin with radius $3.5$.}
For the interior FEM model, the initial coarse grid consists of $2,654$ triangles, which is refined up to four times, in the usual way. We show the simulated far-field error results in \revB{Tables~\ref{table:exp01:08} and \ref{table:exp01:09}
using $\mathbb{P}_3$ and $\mathbb{P}_4$ elements, respectively.}
In these tables estimates of the
(relative) maximum errors in computing the QoI far-fields are presented as well as the number (given within parentheses) of GMRES iterations needed to achieve convergence with the residual tolerance $10^{-8}$. Next we discuss some key aspects of the computed results in~Tables~\ref{table:exp01:08}-\ref{table:exp01:09}.
To compute the errors for a set of discretization parameters, as {\em exact/truth} solutions we used the FEM-BEM algorithm solutions obtained with $N=640$ and one further level of FEM mesh refinement beyond those in the tables. The fast spectrally accurate convergence of the Nystr\"om BEM, after achieving a couple of
digits of accuracy, can be observed by following the far-field maximum errors in the last columns in Tables~\ref{table:exp01:08}-\ref{table:exp01:09}.
In particular, the results in the last columns, for the FEM degree {$d = 3, 4$} cases, demonstrate that a
relatively small DoF $2N$ is required for the accuracy of the Nystr\"om BEM solutions to match that of the FEM solutions,
especially compared to the FEM DoF $L$.
The last rows in
Tables~\ref{table:exp01:08}-\ref{table:exp01:09} clearly demonstrate that higher values of $N$ are
not useful because of the stagnation of the errors due to limited accuracy of the FEM discretizations.
Further, a closer analysis of the results in Tables~\ref{table:exp01:08}-\ref{table:exp01:09} shows that
the computed far-fields exhibit superconvergence, with ${\cal O}(h^{2d})$ errors. In addition,
in Figure~\ref{fig:kpi} we demonstrate the faster convergence of the (Experiment 1) smooth total field solutions in the $H^1$- norm,
and compare with the rate of convergence for a non-smooth solution (Experiment 2) case.
In the Experiment 1 set, with a smooth heterogeneous region $\Omega_0$ and a smooth refractive index
function $n$, it can be shown that the exact near-field solution for the model problem is smooth.
However, this fact alone is not sufficient to explain in detail the superconvergence of the computed far-fields.
We may conjecture that some faster convergence is {occurring} in the background for
the near-field in some weak norms, and that the calculation of the far-fields is benefitting from this to
achieve the superconvergence. In a future work, we shall explore the numerical analysis of our FEM-BEM algorithm.
\begin{table}[h] \small\setlength{\tabcolsep}{4pt}
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$&\multicolumn{2}{c|}{7,999 }& \multicolumn{2}{c|}{31,657} & \multicolumn{2}{c|}{125,953} & \multicolumn{2}{c|}{502,465} & \multicolumn{2}{c}{2,007,169}\\
\hline&&&&&&&&\\
010 & 3.1e-03& (012) & 6.6e-05& (012) & 2.2e-06& (012) & 1.2e-06& (012) & 1.2e-06& (012) \\
020 & 3.1e-03& (012) & 6.5e-05& (012) & 2.0e-06& (012) & 2.5e-10& (012) & 4.7e-11& (012) \\
040 & 3.1e-03& (012) & 6.5e-05& (012) & 2.0e-06& (012) & 1.8e-10& (012) & 1.4e-11& (012) \\
080 & 3.1e-03& (012) & 6.4e-05& (012) & 2.0e-06& (012) & 1.5e-10& (012) & 9.0e-12& (012) \\
\end{tabular}
\end{center}
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$&\multicolumn{2}{c|}{7,999 }& \multicolumn{2}{c|}{31,657} & \multicolumn{2}{c|}{125,953} & \multicolumn{2}{c|}{502,465} & \multicolumn{2}{c}{2,007,169}\\
\hline&&&&&&&&\\
010 & 4.3e-01 & (020) & 1.8e-01 & (020) & 1.8e-01 & (020) & 1.8e-01 & (020) & 1.8e-01 & (020) \\
020 & 3.5e-01 & (031) & 1.6e-02 & (031) & 3.3e-04 & (031) & 7.3e-06 & (031) & 5.2e-06 & (031) \\
040 & 3.5e-01 & (031) & 1.6e-02 & (031) & 3.2e-04 & (031) & 6.0e-06 & (031) & 3.5e-07 & (031) \\
080 & 3.5e-01 & (031) & 1.6e-02 & (031) & 3.3e-04 & (031) & 6.0e-06 & (031) & 1.4e-07 & (031) \\
\end{tabular}
\end{center}
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$&\multicolumn{2}{c|}{7,999 }& \multicolumn{2}{c|}{31,657} & \multicolumn{2}{c|}{125,953} & \multicolumn{2}{c|}{502,465} & \multicolumn{2}{c}{2,007,169}\\
\hline&&&&&&&&\\
020 & 2.8e+00&(040) & 1.4e+00&(040) & 1.1e+00 &(040) & 1.4e+01&(040) & 4.0e+00 &(040)\\
040 & 1.8e+00&(060) & 5.2e-01&(080) & 6.0e-01 &(080) & 9.1e-02&(080) & 8.6e-02 &(080)\\
080 & 2.3e+00&(063) & 5.9e+00&(100) & 6.3e-01 &(100) & 4.7e-02&(102) & 8.3e-04 &(102)\\
160 & 2.2e+00&(063) & 5.0e+00&(100) & 6.3e-01 &(100) & 4.7e-02&(102) & 8.3e-04 &(102)
\end{tabular}
\end{center}
\vspace{-0.2in}
\caption{\label{table:exp01:08}Experiment 1: $\mathbb{P}_3$ Finite element space and
$k = \pi/4, \pi, 4\pi$ (top, middle, bottom tables). \revAB{In the first row and the first column, ${L}$ and $2N$ are the number of degrees of freedom used to compute the FEM and BEM solutions, respectively. The number of GMRES iterations required for solving the system, with a residual tolerance of $10^{-8}$, is given within parentheses. Estimated (relative) uniform errors in the far-field are given in columns two to five.}}
\end{table}
\begin{table}[h] \small
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$& \multicolumn{2}{c|}{14,145}& \multicolumn{2}{c|}{56,129} & \multicolumn{2}{c|}{223,617 } & \multicolumn{2}{c|}{892,673} & \multicolumn{2}{c}{3,567,105}\\
\hline&&&&&&&&\\
010 & 3.9e-04 & (012) & 9.4e-06 & (012) & 1.4e-06 & (012) & 1.2e-06 & (012) & 1.2e-06 & (012)\\
020 & 3.9e-04 & (012) & 8.9e-06 & (012) & 2.5e-07 & (012) & 6.9e-10 & (012) & 8.4e-11 & (012)\\
040 & 3.9e-04 & (012) & 8.9e-06 & (012) & 2.5e-07 & (012) & 7.0e-10 & (012) & 1.0e-10 & (012) \\
080 & 3.9e-04 & (012) & 8.9e-06 & (012) & 2.5e-07 & (012) & 7.0e-10 & (012) & 9.9e-11 & (012) \\
\end{tabular}
\end{center}
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$& \multicolumn{2}{c|}{14,145}& \multicolumn{2}{c|}{56,129} & \multicolumn{2}{c|}{223,617 } & \multicolumn{2}{c|}{892,673} & \multicolumn{2}{c}{3,567,105}\\
\hline&&&&&&&&\\
010 & 2.0e-01 & (020) & 1.8e-01 & (020) & 1.8e-01 & (020) & 1.8e-01 & (020) &1.8e-01 & (020) \\
020 & 6.9e-02 & (031) & 7.1e-04 & (031) & 6.9e-06 & (031) & 5.4e-06 & (031) &5.4e-06 & (031) \\
040 & 6.9e-02 & (031) & 7.1e-04 & (031) & 3.9e-06 & (031) & 3.2e-08 & (031) &4.7e-10 & (031) \\
080 & 6.9e-02 & (031) & 7.1e-04 & (031) & 4.0e-06 & (031) & 2.4e-08 & (031) &4.0e-10 & (031) \\
\end{tabular}
\end{center}
\begin{center}\tt
\begin{tabular}{r|rl|rl|rl|rl|rl}
$N/{L}$& \multicolumn{2}{c|}{14,145}& \multicolumn{2}{c|}{56,129} & \multicolumn{2}{c|}{223,617 } & \multicolumn{2}{c|}{892,673} & \multicolumn{2}{c}{3,567,105}\\
\hline&&&&&&&&\\
020 & 5.0e+00 & (040) & 9.3e+00 & (040) & 3.1e+00 & (040) & 4.1e+00 & (040) & 3.9e+00 & (040)\\
040 & 3.7e+00 & (080) & 4.9e-01 & (080) & 2.4e-01 & (080) & 8.5e-02 & (080) & 8.6e-02 & (080)\\
080 & 9.1e+00 & (098) & 4.6e-01 & (100) & 2.6e-01 & (102) & 2.0e-03 & (102) & 8.8e-06 & (102) \\
160 & 9.8e+00 & (098) & 4.6e-01 & (100) & 2.6e-01 & (102) & 2.0e-03 & (102) & 8.8e-06 & (102)
\end{tabular}
\end{center}
\vspace{-0.2in}
\caption{Experiment 1: \label{table:exp01:09} $\mathbb{P}_4$ Finite element space and
$k = \pi/4, \pi, 4\pi$ (top, middle, bottom tables). \revAB{In the first row and the first column, ${L}$ and $2N$ are the number of degrees of freedom used to compute the FEM and BEM solutions, respectively. The number of GMRES iterations required for solving the system, with a residual tolerance of $10^{-8}$, is given within parentheses. Estimated (relative) uniform errors in the far-field are given in columns two to five.}}
\end{table}
\clearpage
In Figure~\ref{fig:gmres}, we illustrate the convergence of the GMRES iterations and show that as
the frequency is increased four-fold,
\revA{the number of required iterations for the solutions to converge with the $10^{-8}$ residual tolerance
increases at a (slightly) slower rate.}
\begin{figure}[h]
\centerline{
\includegraphics[width = .8\textwidth ]{resplot.pdf}}
\caption{\label{fig:gmres} \revA{Number of GMRES iterations and residual errors for Experiment 1 simulations with $k=\pi/4$, $k=\pi$ and $k=4\pi$
using the $\mathbb{P}_3$ finite element space on a grid
with the FEM DoF $L= 502,465$ and the BEM DoF $2N=160$.}}
\end{figure}
\revA{Next we consider how the size of the overlapped FEM-BEM region $\Omega_{12}$ affects the speed of convergence of the GMRES iterations. To this end, we have run a set of additional experiments for the star-shaped (Experiment 1) problem with $k = \pi$, using several
choices of $\Gamma$, to obtain larger to smaller diameter overlapped regions $\Omega_{12}$. In particular, we chose several BEM smooth boundaries
$\Gamma$ to be circles centered at the origin with radii spanning from 2.625 (closer to the heterogeneity) to 5.856 (closer to the
FEM boundary $\Sigma$), yielding several $\Omega_{12}$, respectively, with larger to smaller sizes. For all these simulation cases, we fixed the BEM DoF to be $2N = 160$, and the $\mathbb{P}_3$ elements were obtained using a fixed mesh with $445,440$ triangles and
$L = 1,106,385$ free nodes (FEM DoF). We present the corresponding results in Figure~\ref{fig:new}.}
\revA{In the left panel of Figure~\ref{fig:new}, we can see a sample of the curves $\Gamma$ used for the set of experiments with varying size $\Omega_{12}$,
and correspondingly in the right panel of Figure \ref{fig:new}, we present the number of GMRES iterations required to converge with, again, the residual tolerance $10^{-8}$. Results in Figure \ref{fig:new} clearly demonstrate that the number of GMRES iterations increases as the size of the overlapped region $\Omega_{12}$ decreases. This can be explained as follows: At the continuous level, the interacting operators $\mathrm{K}_{\Sigma\Gamma} $ and
$\mathrm{K}_{\Gamma\Sigma} $ tend to lose the compactness property, as the overlapped region becomes thinner. (We shall explore this
observation theoretically in a future work.)
On the other hand, it is interesting to note from these experiments that the choice of $\Gamma$ being very close to the heterogeneity does not affect the
convergence of the GMRES iterations. We could conjecture that this might happen for the considered set of experiments because the exact solution
for Experiment 1 problem is smooth. However, we have noticed a similar behavior for the next Experiment 2 problem, with a complex
non-smooth heterogeneous region, for which regularity of the total wave field is limited.}
\begin{figure}
\[
\includegraphics[width=0.51\textwidth]{multipleGammasPatrickStarDomain.png}
\includegraphics[width = 0.52\textwidth]{multipleGammasPatrickStarIterations.pdf}
\]
\vspace{-0.5in}
\caption{\label{fig:new}\revA{Dependence of the number of GMRES iterations on the size of the overlapping region: On the left, various choices of
the smooth (circular) interface $\Gamma$. On the right,
radii of the circles $\Gamma$ vs. number of GMRES iterations required for convergence with a residual tolerance of $10^{-8}$.}}
\end{figure}
\subsection{{\em Pikachu}-shaped domain with piecewise smooth refractive index}
\begin{figure}[h]
\centerline{
\includegraphics[height = .6\textwidth]{pikachuDomain.png}}
\caption{\label{fig:expPikachu} Pikachu heterogeneous domain and artificial boundaries $\Gamma$ and $\Sigma$ for Experiment 2.}
\end{figure}
In the Experiment 2 set of experiments, we consider a more complicated non-smooth heterogeneous region shown in the interior
of the curved domain $\Omega_1$ in Figure~\ref{fig:expPikachu}. The region $\Omega_0$ is set to be a polygonal {\em Pikachu}-shaped domain
with the discontinuous refractive index function
\[
n^2(x,y) := \begin{cases}
5 + 4\chi\Big(\frac{1}{0.9}\Big[\frac{r}{2-0.75\cos(4\theta)}-0.025\Big]\Big), &(x,y)\in \Omega_0,\\
1 , &(x,y)\not\in \Omega_0,\\
\end{cases}
\]
where $r = \sqrt{(x + 0.18)^2 + (y + 0.6)^2}$ and $\theta = {\rm atan2}(y + 0.6,\,x + 0.18)$.
The grids used in our computation are adapted to the region $\Omega_0$ in such a way that any triangle $\tau \in{\cal T}_h$ is either contained in $\Omega_0$ or has empty intersection with it. For the boundary of $\Omega_1$, which also serves as the smooth curve $\Gamma$ for the exterior model, we choose
\[
{\bf x}(t) = \frac{7\sqrt{2} }4\Big( (1+\cos^2 t)\cos t + (1+\sin^2 t)\sin t,(1+\sin^2 t)\sin t- (1+\cos^2 t)\cos t \Big).
\]
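In code, this curve and its Nystr\"om nodes can be generated as in the following sketch:
\begin{verbatim}
import numpy as np

def gamma_curve(t):
    # the smooth boundary parameterization x(t) above, vectorized in t
    c, s = np.cos(t), np.sin(t)
    a = (1.0 + c**2) * c
    b = (1.0 + s**2) * s
    return (7.0 * np.sqrt(2.0) / 4.0) * np.stack([a + b, b - a], axis=-1)

N = 80
t_j = np.pi * np.arange(-N + 1, N + 1) / N   # grid of eq. (grid-points)
x_j = gamma_curve(t_j)                       # the 2N Nystrom nodes on Gamma
\end{verbatim}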
For the interior FEM model, we choose $\Omega_2$ to be a polygonal domain as in Figure~\ref{fig:expPikachu} with boundary $\Sigma$.
We then proceed as in the previous experiment, using an initial coarse grid with $8,634$ triangles which is refined up to four times.
The solution $u$ of the model is not smooth across the interface between $\Omega_0$ and $\overline{\Omega}_0^{c}$,
because of the non-smoothness of the region $\Omega_0$ and the jump in the refractive index function.
One may consider the use of a graded mesh around the boundary of $\Omega_0$ to obtain faster convergence. Based on
the size of $\Omega_2$,
the choices \revA{$k = \pi/4, \pi, 4\pi$} lead to interior FEM models of approximately $2.5$, $10$, and $40$ wavelengths, respectively, for simulations in Experiment 2.
We observe from the integer numbers (within parentheses) in Tables~\ref{table:exp02:03}-\ref{table:exp02:05} that the number of GMRES iterations grows slower than the four-fold growth of the three frequencies considered in Experiment 2. The estimated (relative) maximum far-field errors for the non-smooth Experiment 2 model are given in Tables~\ref{table:exp02:03}-\ref{table:exp02:05}, demonstrating the high-order accuracy of our FEM-BEM model as the finite element degree is increased, the grid is refined, and the BEM DoF is increased.
In Figure~\ref{fig:kpi}, for $d =2, 3, 4$, we compare convergence of the total field in the $H^1$-norm for the smooth (Experiment 1) and non-smooth (Experiment 2) simulations.
In Figure~\ref{fig:exp02:01} we depict the simulated wave field solution for $k =\pi$, with $\mathbb{P}_4$ finite elements on a grid with $138,144$ triangles and \revB{$L=1,106,385$} free-nodes for the FEM solution, and \revB{$2N=320$} for the BEM solution. Specifically, we plot the simulated
absorbed and scattered field numerical solution $u_{h,N}$ inside $\Omega_2$ in Figure~\ref{fig:exp02:01}.
\begin{table}[th] \small
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$N/{L}$&
\multicolumn{2}{c|}{ 39,085} & \multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{ 622,573 } & \multicolumn{2}{c}{ 2,488,441} \\
\hline &&&&&&&\\
010 & 2.8e-03 &(015) & 2.8e-03 &(015) & 2.8e-03 &(015) & 2.8e-03 &(015) \\
020 & 5.8e-05 &(015) & 8.4e-07 &(015) & 8.4e-07 &(015) & 8.4e-07 &(015)\\
040 & 5.3e-05 &(015) & 1.0e-07 &(015) & 6.1e-09 &(015) & 6.9e-10 &(015)\\
080 & 5.8e-05 &(015) & 7.4e-08 &(015) & 6.5e-09 &(015) & 4.4e-10 &(015)\\
\end{tabular}
\end{center}
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$ N/{L}$&
\multicolumn{2}{c|}{ 39,085} & \multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{ 622,573 } & \multicolumn{2}{c}{ 2,488,441} \\
\hline &&&&&&&\\
020 & 2.5e+00 &(040) & 2.5e+00 &(040) & 2.5e+00 &(040) & 2.5e+00 &(040) \\
040 & 3.8e-03 &(042) & 2.5e-04 &(042) & 7.1e-05 &(042) & 5.2e-05 &(042) \\
080 & 3.1e-03 &(042) & 1.7e-04 &(042) & 7.1e-06 &(042) & 2.7e-07 &(042) \\
160 & 3.4e-03 &(042) & 1.4e-04 &(042) & 7.9e-06 &(042) & 2.6e-07 &(042) \\
\end{tabular}
\end{center}
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$ N/{L}$&
\multicolumn{2}{c|}{ 39,085} & \multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{ 622,573 } & \multicolumn{2}{c}{ 2,488,441} \\
\hline&&&&&& \\
040 & 6.8e+00 &(080) &3.2e+00 &(080) & 3.7e+00 &(080) & 3.6e+00 &(080) \\
080 & 9.2e+00 &(130) &7.4e-01 &(140) & 2.1e+00 &(139) & 2.3e+00 &(139) \\
160 & 6.7e+00 &(140) &4.6e-01 &(148) & 1.3e-02 &(149) & 4.1e-04 &(149) \\
320 & 6.8e+00 &(140) &4.4e-01 &(148) & 1.1e-02 &(149) & 2.8e-04 &(149)
\end{tabular}
\end{center}
\caption{\label{table:exp02:03}Experiment 2: $\mathbb{P}_3$ Finite element space
and $k = \pi/4, \pi, 4\pi$ (top, middle, bottom tables). \revAB{In the first row and the first column, ${L}$ and $2N$ are the number of degrees of freedom used to compute the FEM and BEM solutions, respectively. The number of GMRES iterations required for solving the system, with a residual tolerance of $10^{-8}$, is given within parentheses. Estimated (relative) uniform errors in the far-field are given in columns two to five.}}
\end{table}
\begin{table}[t] \small \small
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$N/{L}$& \multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{276,905} & \multicolumn{2}{c|}{ 1,106,385} & \multicolumn{2}{c}{4,423,073} \\
\hline&&&&&& \\
010 & 2.8e-03 &(015) &2.8e-03 &(015) & 2.8e-03 &(015) & 2.8e-03 &(015) \\
020 & 1.3e-06 &(015) &8.4e-07 &(015) & 8.4e-07 &(015) & 8.4e-07 &(015) \\
040 & 1.3e-06 &(015) &1.6e-07 &(015) & 6.8e-10 &(015) & 6.8e-10 &(015) \\
080 & 1.3e-06 &(015) &1.6e-07 &(015) & 6.9e-10 &(015) & 6.8e-10 &(015) \\
\end{tabular}
\end{center}
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$N/{L}$&
\multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{276,905} & \multicolumn{2}{c|}{1,106,385} & \multicolumn{2}{c}{4,423,073 } \\
\hline &&&&&&&\\
020 & 2.5e+00 &(040) & 2.5e+00 &(040) & 2.5e+00 &(040) & 2.5e+00 &(040) \\
040 & 2.8e-04 &(042) & 4.7e-05 &(042) & 5.3e-07 &(042) & 5.2e-05 &(042) \\
080 & 1.9e-04 &(042) & 2.2e-06 &(042) & 1.8e-07 &(042) & 6.9e-09 &(042) \\
160 & 1.6e-04 &(042) & 1.1e-06 &(042) & 6.3e-08 &(042) & 3.3e-09 &(042) \\
\end{tabular}
\end{center}
\begin{center} \tt \small
\begin{tabular}{r|rl|rl|rl|rl}
$N/{L}$& \multicolumn{2}{c|}{ 69,381} & \multicolumn{2}{c|}{276,905} & \multicolumn{2}{c|}{ 1,106,385} & \multicolumn{2}{c}{4,423,073} \\
\hline &&&&&&&\\
040 & 1.7e+00 &(080) & 3.8e+00 &(080) & 3.6e+00 &(080) & 3.6e+00 &(080) \\
080 & 8.8e-01 &(139) & 1.8e+00 &(140) & 2.2e+00 &(139) & 2.3e+00 &(139) \\
160 & 5.4e-01 &(147) & 3.9e-02 &(149) & 4.8e-04 &(149) & 6.9e-05 &(149) \\
320 & 5.4e-01 &(147) & 3.6e-02 &(149) & 2.9e-04 &(149) & 8.4e-06 &(149)
\end{tabular}
\end{center}
\caption{\label{table:exp02:05}Experiment 2: $\mathbb{P}_4$ Finite element space
and $k = \pi/4, \pi, 4\pi$ (top, middle, bottom tables). \revAB{In the first row and the first column, ${L}$ and $2N$ are the number of degrees of freedom used to compute the FEM and BEM solutions, respectively. The number of GMRES iterations required for solving the system, with a residual tolerance of $10^{-8}$, is given within parentheses. Estimated (relative) uniform errors in the far-field are given in columns two to five.}}
\end{table}
\clearpage
\begin{figure} [!ht]
\centerline{\includegraphics[width=0.66 \textwidth]{pikachuSolTotalk314.png}}
\caption{\label{fig:exp02:01}Real part of the total field FEM solution $u_h$ in $\Omega_2$ for $k=\pi$.}
\centerline{\includegraphics[width=0.80\textwidth]{H1ErrorComparisonJune2019.pdf}}
\caption{\label{fig:kpi}Comparisons of convergence of the FEM-BEM algorithm for the total field in the $H^1(\Omega_2)$-norm for Experiment 1 and 2
using $\mathbb{P}_2$, $\mathbb{P}_3$ and $\mathbb{P}_4$ elements with $N = 80$ {and $k=\pi/4$}. The bottom part of the figure shows the expected order of convergence, as given in~\eqref{eq:fin_res}, for {\em smooth} solutions.}
\end{figure}
\clearpage
\subsection{Direct solver implementation and comparison with iterative solver}\label{subsec:dir_iter_comp}
In this subsection we discuss the direct solver implementation of our method and compare its performance with the iterative approach we have used for simulating the results described earlier in the section. When computing the matrix in~\eqref{eq:system:02a}, the main issue concerns the matrix ${\bf \widetilde{K}}_{2N,M}$,
{which comprises the calculation of the finite element solution followed by its evaluation at the nodes of the} {BEM.}
Because of the spectral accuracy of the Nystr\"om BEM
approximation, the DoF $2N$ is expected to be smaller, in practice, even compared to the number $M$ of FEM boundary Dirichlet (constrained) nodes
(that is, $M > 2N$). Accordingly, in our implementation we use instead the representation
\[
{\bf \widetilde{K}}_{2N,M}^\top = ( {\bf T}_{ 2N,L}{\bf A}^{-1}_{L} {\bf D}_{L,M})^\top =
{\bf D}^\top_{L,M}{\bf L}_L^{-1}{\bf D}_L^{-1}{\bf L}_L^{-\top} {\bf T}_{ 2N,L}^\top,
\]
where we recall that ${\bf A}_{L} ={\bf L}_L {\bf D}_{L}{\bf L}_L^\top$ is symmetric. This representation requires solving $2N$ (independent) finite element problems, one for each column of $\widetilde{\bf K}_{2N,M}^\top$, and a (sparse) matrix-vector multiplication. The first process consumes the bulk of computation time (but is a naturally parallel task w.r.t. $N$) and can be carried out with wall-clock time similar to solving one FEM problem~\cite[Section 5.1.5]{GaMor:2016}.
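A sketch of this column-wise construction, reusing the sparse factorization and the hypothetical matrices {\tt lu\_A}, {\tt D\_LM}, {\tt T\_2NL} from before (the loop is embarrassingly parallel):
\begin{verbatim}
import numpy as np

def build_Ktilde_dense():
    # form K_tilde_{2N,M} row by row: row i = (D^T A^{-1} T^T e_i)^T,
    # i.e. 2N independent FEM solves (using that A_L is symmetric)
    twoN = T_2NL.shape[0]
    rows = []
    for i in range(twoN):
        rhs = np.asarray(T_2NL.getrow(i).todense()).ravel()   # T^T e_i
        rows.append(D_LM.T @ lu_A.solve(rhs))
    return np.vstack(rows)                                    # (2N, M)
\end{verbatim}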
The common CPU time for the direct and iterative solver amounts to the assembly of the finite element matrices ${\bf A}_L$ and ${\bf D}_{L,M}$, the ${\bf L} {\bf D} {\bf L}^\top$ factorization of the former, the boundary element matrix ${\bf B}_{2N}$ and the auxiliary matrices ${\bf T}_{2N,L}$ and ${\bf P}_{M,2N}$. Consequently the major difference in computation between
the two approaches is: (i) the construction and storage of the matrix in~\eqref{eq:system:02a}, followed by
exactly solving the linear system for the direct method; versus (ii) the setting up of the system~\eqref{eq:system:02a} for matrix-vector multiplication and
approximately solving the linear system {with} the GMRES iterations. The former approach is faster, especially
if the number of GMRES iterations is not very low
(in the single digits), because of the modern fast multi-threaded implementations of direct solvers. However, the latter approach is memory efficient and is needed
especially for large-scale 3-D models.
Using a desktop machine, with a $10$-core processor and $128$GB RAM, we were able to apply the direct solver to simulate the example 2-D models in Experiments 1 and 2,
even with millions of FEM (sparse) DoF within our FEM-BEM framework. For one of the largest cases reported
in Table~\ref{table:exp01:09}, {with} $\mathbb{P}_4$ elements for the wavenumber $k = 4\pi$ ($40$ wavelengths case), with
\[
N = 80, \quad L = 3,567,105 \quad \text{{(with $445,440$ triangles), and}} \quad M = 7,168
\]
the GMRES approach system setup CPU time was $172$ seconds, and the direct approach setup CPU time was $332$ seconds. Because
$102$ GMRES iterations were required, the solve time to compute a converged iterative solution was {\bf 586 seconds}. However,
because of the very efficient multi-threaded direct solvers (in Matlab), the direct solve time to compute the exact solution was only {\bf 0.014 seconds}.
The size of the interface linear system for the experiment is only $160 \times 160$, and hence our algorithm can be very efficiently used
for a large number of incident waves $u^{{\rm inc}}$, which enter only through the right-hand side of the small interface system.
Thus we conclude that our FEM-BEM framework provides options to apply direct or iterative approaches to efficiently simulate wave propagation
in heterogeneous and unbounded media. For 2-D low- and medium-frequency models with sufficient RAM, it seems efficient even to use
the direct solver, and for higher-frequency cases iterative solvers are efficient because of the demonstrated well-conditioning of
the system.
\section{Conclusions}
\revAB{In this article we developed a novel continuous and discrete computational framework for an equivalent reformulation and efficient simulation of
an absorbed and scattered wave propagation model, respectively, in a bounded heterogeneous medium and an unbounded homogeneous free-space. The
model is governed by the Helmholtz equation and a decay radiation condition at infinity. The decomposed framework incorporates
the radiation condition exactly and is based on creating two overlapping regions, without truncating the full space unbounded propagation medium.
The overlapping framework has the advantage of choosing a smooth artificial boundary for the unbounded region of the reformulation, and a simple polygonal/polyhedral boundary for the bounded part of the two regions. The advantage facilitates the application of a spectrally accurate BEM for
approximating the scattered wave, and setting up a high-order FEM for simulating the absorbed wave. We prove the equivalence of the
decomposed overlapping continuous framework and the given model. The efficiency of our two-dimensional FEM-BEM computational framework was demonstrated in this work using two sets of numerical experiments, one comprising a smooth and the other a non-smooth heterogeneous medium.}
\section*{Acknowledgement}
V\'{\i}ctor Dom\'{\i}nguez thanks the support of the project MTM2017-83490-P. Francisco-Javier Sayas was partially supported by the NSF grant DMS-1818867.
In the perturbative construction of Einstein-Hilbert (EH) gravity on four dimensional spacetime one splits the metric $g^{\mu\nu}$ into a background $\hat{g}^{\mu\nu}$ and oscillations $h^{\mu\nu}$ around it which are quantized.
Back in the 1970's quite a few attempts were undertaken to formulate such models of quantized gravity.
Most influential were the pioneering papers of 't~Hooft and Veltman \cite{tHooft:1974toh}, in which explicit calculations showed that beyond one-loop order the theory becomes intractable due to power counting non-renormalizability.
Many more papers dealt with the problem without surmounting these difficulties (see e.g.\ \cite{Goroff:1985th}).
Out of these early papers we concentrate on two in which important progress had been achieved and which were very helpful for our own understanding.\\
Kugo and Ojima \cite{KuOj} provided a quantized model of EH general relativity.
In order to deal with the indefinite metric problem which results after having replaced diffeomorphism invariance by an appropriate Becchi-Rouet-Stora-Tyutin (BRST) invariance they use their quartet mechanism.
Hence they realize unitarity.
They base their reasoning, remarkably enough, on a general solution of the Slavnov-Taylor identity (ST)
associated with the BRST transformation without restriction by power counting.
This is, of course, motivated by the fact that the model is power counting non-renormalizable, hence quite reasonable. The renormalization problem is left
open. \\
Stelle \cite{Stelle} presented a complementary approach to quantize
classical relativity: he added the square of the Ricci tensor and the
square of the curvature scalar to the EH action. This model is
power counting renormalizable, but it is not unitary. Looking at the propagator,
which has a fall-off like $1/(p^2)^2$ for large $p$, it is obvious that the lack
of unitarity has nothing to do with the gauge dependence of the model, but
originates from the {\sl invariants} which contain four derivatives of the metric. \\
Calculations to be presented below show that the gauge fixing of \cite{KuOj}
can also be used in the context, where the square of the Ricci tensor and the square of the scalar curvature are present in the action.
Hence one has the quartet mechanism at one's disposal.
Since the higher derivative terms render the model power counting renormalizable, we could be led to interpret the regularizing effect as being of Pauli-Villars type, which can be removed after renormalization with a suitable scheme \cite{Zimmermann:1975gk}.
This turns out to be wrong.
We rather arrive at the conclusion that the higher derivatives
are tied fundamentally to the EH theory. Their seemingly disastrous effect
of causing negative metric in state space can be overcome by a
suitable LSZ-projection. The dependence of the resulting theory from the
additional
two coupling parameters however remains. Since this enlarged model is power
counting renormalizable, but depends crucially on a field of canonical
dimension zero, it contains infinitely parameters, which are associated with
the redefinition of this field as a function of itself. These generalized field
amplitudes are, fortunately, of gauge parameter type, hence do not contribute
to physical quantities.\\
Before going into details of the realization of the model we would like to present
the argument why we are convinced that the higher derivatives are necessary
ingredients for the definition of EH in quantum field theory.\\
Suppose we would like to gauge the translations in a matter model, say a massless
scalar field of canonical dimension one, with the usual Noether procedure. Then
one lets the parameter $a_\mu$ of the translations depend on $x$ and couples
the respective conserved current, the energy-momentum tensor $T_{\mu\nu}$ (EMT),
to an external tensor field $h^{\mu\nu}$. This endows the field $h$ with
transformations dictated by the local translations. These turn out to be
just the general coordinate transformations known from general relativity (GR).
If then the field $h$ becomes a dynamical field with its own invariant kinetic
term, this kinetic term has to involve four derivatives, if one wants to keep
power counting renormalizability: after all, the EMT has canonical dimension four.
That is, the field $h$ must have dynamical dimension zero. The metric $g^{\mu\nu}$
which arises also in the course of the Noether procedure is given by
$g^{\mu\nu}=\eta^{\mu\nu}+h^{\mu\nu}$ -- without any parameter carrying
dimension. Quite reasonable in QFT. In classical GR the metric may depend on
parameters which have
mass dimension, but that is the engineering dimension and not the dynamical one
which it has to be in quantum field theory, where the dimensions are dictated
by the kinetic terms. (The details of this derivation can be found in
\cite{EKKSI, EKKSII, EKKSIII}. However many other authors have considered gauging translations,
concluding that the resulting gauge theory is a gravitational theory with
higher derivatives, e.g. \cite{Hehl:2020hhp} and citations therein.) \\
We therefore continue with quantization, renormalization and analysis of the
implications.\\
We choose the Bogoliubov-Parasiuk-Hepp-Zimmermann-Lowenstein (BPHZL) renormalization scheme \cite{Zimmermann:1969jj,Lowenstein:1975ps} for our purposes.
The auxiliary mass, which is required in this scheme, is put in by hand, but it serves very well to construct finite Green functions since
the higher derivatives render the model power counting renormalizable.
The main gain of this version to deal with the UV-infinities is that one has an
action principle \cite{Lowenstein:1971jk}
at one's disposal which one would not have in the power counting
non-renormalizable EH model. The hurdle that this scheme is not
BRST invariant can be overcome by cohomology results existing in the literature
since the 1980's (see \cite{Baulieu:1983tg}).
They become now powerful tools because -- supplemented by power counting -- they exist also analytically.\\
Even in this rather modest approach of quantizing gravity, namely perturbation theory and flat background, one encounters quite
a few difficulties: the interaction is non-polynomial and the main field to start
with has canonical dimension zero, hence in a perturbative approach one has
an expansion in the number of loops and in the number of fields -- a situation
familiar from supersymmetric gauge theories \cite{Piguet:1984mv}. The presence of a field with
vanishing canonical dimension, which goes hand in hand with propagators
falling off as $1/(p^{2})^2$ for large $p$, points to possible infrared problems
already off-shell. Those will be controlled by infrared power counting
which is a built-in instrument of the scheme.
The paper is structured according to the use of the fundamental field $h^{\mu\nu}$. In
sections \ref{se:treeapproximation}$\to$\ref{se:removingregulators} we take $h$ at face value and formulate in terms of it the standard invariants of general relativity related to: $R,R^2,R^{\mu\nu}R_{\mu\nu}$ -- expanded in terms of $h$. We call this the ``special solution'' (of diffeomorphism invariance). In the tree approximation we set up the model, construct propagators, the ST identity, prove unitarity of the $S$-matrix, make explicit the parameters of the model and look at gauge parameter independence. In Sect.\ \ref{se:renormalization} we start the renormalization by introducing an auxiliary mass required in the BPHZL scheme which we use. Central is then power counting: in the ultraviolet (UV) and infrared (IR) region of
momentum space integrations, and convergence. It guarantees the existence of normal product insertions and thus of Green functions: one-particle-irreducible (1PI) or vertex functions, connected and general ones. We then establish the ST identity to all orders of perturbation theory.
Thereby formal unitarity of the $S$-matrix is established. Sections \ref{se:invdiffop}$\to$\ref{se:invparadiffeq} are devoted to the derivation and use of symmetric differential operators which yield parametric differential equations: the Lowenstein-Zimmermann (LZ) equation, which shows that the Green functions are ultimately independent of the auxiliary mass; the renormalization group (RG) equation, which governs the change of the normalization parameter; and the Callan-Symanzik (CS) equation, which yields the scaling properties of Green functions.
In Sect.\ \ref{se:removingregulators} we project down to the EH-theory.
In Sect.\ \ref{se:generalsolutionSTI} we study the ``general'' solution, i.e.\ we replace the
original field $h$ by an arbitrary function of itself
$h^{\mu\nu}\to \mathcal{F}^{\mu\nu}(h)$. This is possible due to the
vanishing canonical dimension of $h$ and this space of functions $\mathcal{F}$ is swept out in the course of renormalization, hence the study is necessary.
Sect.\ \ref{se:DisCon} is devoted to discussions and conclusions.
\section{Tree approximation\label{se:treeapproximation}}
For a decent perturbative treatment it is mandatory to set up the first orders
carefully. In the present context this refers to the zero-loop order and the
first and second order in the number of fields.
\subsection{The model and its invariances\label{se:modelandinvariances}}
As explained in the introduction, we base our study of EH in the more general context of permitting invariants under diffeomorphisms up to fourth order in the derivatives. Restricting ourselves to spacetimes which are topologically equivalent to flat ones, we may use the Gau\ss-Bonnet theorem and express the square of the Riemann tensor
in terms of the Ricci tensor and the curvature scalar
\begin{equation}\label{GaBo}
\int\sqrt{-g}R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}
=\int\sqrt{-g}(4R^{\mu\nu}R_{\mu\nu}-R^2).
\end{equation}
Together with the cosmological constant a basis of invariants is then provided by the terms in the following action
\begin{equation}\label{ivc}
\Gamma^{\rm class}_{\rm inv}=\int d^4x\sqrt{-g}(c_0\kappa^{-4}
+c_3\kappa^{-2}R+c_2R^2+c_1R^{\mu\nu}R_{\mu\nu}) \, .
\end{equation}
Here $\kappa$ denotes the gravitational constant.
The invariance under general coordinate transformations is to be translated into
Becchi-Rouet-Stora-Tyutin invariance (BRST) with respective gauge fixing.
The field $h^{\mu\nu}$ is defined via
\begin{equation}\label{dfh}
h^{\mu\nu}=g^{\mu\nu}-\eta^{\mu\nu} .
\end{equation}
The propagators of $h$ (s.b.) will tell us that
$h$ has canonical dimension $0$, hence $\kappa$ must not show up in its definition.
The classical action
\begin{eqnarray}\label{clssct}
\Gamma^{\rm class}&=& \Gamma^{\rm class}_{\rm inv} + \Gamma_{\rm gf}
+ \Gamma_{\phi\pi}+\Gamma_{\rm e.f.}\\
\Gamma_{\rm gf}&=&-\frac{1}{2\kappa}\int g^{\mu\nu}
(\partial_\mu b_\nu+\partial_\nu b_\mu)
-\frac{1}{2}\alpha_0\int \eta^{\mu\nu}b_\mu b_\nu \label{gf1} \\
\Gamma_{\phi\pi}&=&-\frac{1}{2}\int(D^{\mu\nu}_\rho c^\rho)
(\partial_\mu \bar{c}_\nu +\partial_\nu\bar{c}_\mu)\label{gf2}\\
D^{\mu\nu}_\rho&\equiv&-g^{\mu\lambda}\delta^\nu_\rho\partial_\lambda
-g^{\nu\lambda}\delta^\mu_\rho\partial_\lambda
+\partial_\rho g^{\mu\nu}\\
\Gamma_{\rm e.f.}&=&\int (K_{\mu\nu}\mathdutchcal{s} h^{\mu\nu}+L_\rho \mathdutchcal{s} c^\rho)
\end{eqnarray}
is invariant under the BRST-transformation
\begin{eqnarray}\label{brst}
\mathdutchcal{s} g^{\mu\nu}&=&\kappa D^{\mu\nu}_\rho c^\rho
\qquad \mathdutchcal{s} c^\rho=-\kappa c^\lambda \partial_\lambda c^\rho \\
\mathdutchcal{s} \bar{c}_\rho &=& b_\rho
\qquad \mathdutchcal{s} b_\rho=0\\
\mathdutchcal{s}_0 h^{\mu\nu}&=&-\kappa(\partial^\mu c^\nu+\partial^\nu c^\mu)\\
\mathdutchcal{s}_1 h^{\mu\nu}&=&
-\kappa(\partial_\lambda c^\mu h^{\lambda\nu}
+\partial_\lambda c^\nu h^{\lambda\mu}
-c^\lambda\partial_\lambda h^{\mu\nu}) .
\end{eqnarray}
In accordance with the expansion in the number of fields we
have introduced the transformations $\mathdutchcal{s}_0,\mathdutchcal{s}_1$ which maintain the number, resp.\ raise it by one.
$K_{\mu\nu}, L_{\rho}$ are external fields to be used for generating insertions
of non-linear field transformations. The Lagrange multiplier
$b_\mu$
couples to $\partial_\lambda h^{\mu\lambda}$ and thus
eventually fixes these derivatives (deDonder-like gauge fixing).
Since the terms
$R^2, R^{\mu\nu}R_{\mu\nu}$ contain however four derivatives one might be
tempted to fix also the higher derivatives in a corresponding manner, or
only those. It turns out that this is superfluous or even contradictory when using a Lagrange multiplier field $b$,
so we stick to (\ref{gf1}),(\ref{gf2}) which is the gauge fixing chosen in \cite{KuOj}.
\subsection{Propagators\label{se:propagators}}
The definition of the propagators as inverse of vertex functions requires the
knowledge of first and second orders in the number of fields of (\ref{clssct}).
Since the cosmological term contributes at first order in the field $h$ we suppress it here in the tree approximation by
putting $c_0=0$ and in higher orders by a normalization
condition. (A classical argument for this demand is that flat space should be a solution to the $h$-field equations.)
In Fourier space one arrives at
\begin{eqnarray}\label{bln}
\Gamma_{h_{\mu\nu}h_{\rho\sigma}}&=&
\frac{1}{4}\sum_{KLr}\gamma^{(r)}_{KL}(P_{KL}^{(r)})_{\mu\nu\rho\sigma} \\
\Gamma_{b_\rho h_{\mu\nu}}&=&
-\frac{i}{\kappa} \Big(\frac{1}{2}(\theta_{\rho\mu} p_\nu+\theta_{\rho\nu}p_\mu)
+\omega_{\mu\nu}p_\rho \Big) \\
\Gamma_{b_\rho b_\sigma}&=& -\alpha_0\eta_{\rho\sigma} \\
\Gamma_{c_\rho\bar{c}_\sigma}&=& -ip^2 \big( \theta_{\rho\sigma}\xi(p^2)+
\omega_{\rho\sigma}\frac{1}{2}\eta(p^2) \big) .
\end{eqnarray}
For the $h$-bilinear terms we introduced projection operators $P$ (see App.\ A)
and general coefficient functions $\gamma$. It will turn out that the
propagators can be uniquely determined for general scalar functions
$\gamma(p^2)$ with the projectors taking care of the spin structure
inherent in the terms of (\ref{ivc}). In the tree approximation the values for
$\gamma$ are given by
\begin{eqnarray}\label{coffs}
\gamma^{(2)}_{TT} &=&-p^2(c_1p^2-c_3\kappa^{-2})\\
\label{eq:coffs0}
\gamma^{(0)}_{TT} &=&p^2 \big((3c_2+c_1)p^2+\frac{1}{2}c_3\kappa^{-2} \big)\\
\gamma^{(1)}_{SS}&=&\gamma^{(0)}_{WW}=\gamma^{(0)}_{TW}=\gamma^{(0)}_{WT}=0 .
\end{eqnarray}
The coefficients of $\Gamma_{bh}$ and
$\Gamma_{bb}$ will turn out to be fixed, whereas those of $\Gamma_{c\bar{c}}$
again can be very general with tree values $\xi=\eta=1$.
The inversion equations to obtain the propagators read for the bosonic fields
\begin{eqnarray}\label{bosinver}
\Gamma_{h_{\mu\nu}h_{\alpha\beta}}G^{h^{\alpha\beta}h^{\rho\sigma}}
+\Gamma_{h_{\mu\nu}b_\lambda}G^{b^\lambda h^{\rho\sigma}}&=&
\frac{i}{2}(\tensor{\eta}{_\mu^\rho} \tensor{\eta}{_\nu^\sigma} + \tensor{\eta}{_\mu^\sigma} \tensor{\eta}{_\nu^\rho})\\
\Gamma_{hh}G^{hb}+\Gamma_{hb}G^{bb}&=&0\\
\Gamma_{bh}G^{hh}+\Gamma_{bb}G^{bh}&=&0\\
\Gamma_{b_\rho h^{\alpha\beta}}G^{h^{\alpha\beta} b^\sigma}
+\Gamma_{b_\rho b_\lambda}G^{b^\lambda b^\sigma}&=&-i\tensor{\eta}{_\rho^\sigma}.
\end{eqnarray}
\noindent
For the ghosts they have the form
\begin{equation}\label{ghostinver}
\Gamma_{c^\rho\bar{c}^\lambda}G^{c^{\lambda}\bar{c}^\sigma}=i \tensor{\eta}{_\rho^\sigma}.
\end{equation}
For the $\langle hh \rangle$-propagators we introduce, as for the two-point vertex functions,
an expansion in terms of projection operators
\begin{equation}\label{hhpropproj}
G^{hh}_{\mu\nu\rho\sigma}=
4\sum_{KLr} \langle hh \rangle^{(r)}_{KL}(P_{KL}^{(r)})_{\mu\nu\rho\sigma}.
\end{equation}
In order to solve the inversion equations we introduce
\begin{eqnarray}\label{bhprp}
G^{bh}_{\rho\mu\nu}&=&\frac{\kappa}{p^2} \big((p_\mu \theta_{\nu\rho}
+p_\nu \theta_{\mu\rho})b_1
+p_\rho \omega_{\mu\nu}b_2
+p_\rho \theta_{\mu\nu}b_3 \big)\\
G^{hb}&=&G^{bh} .
\end{eqnarray}
Here $b_1,b_2$, and $b_3$ are arbitrary scalar functions such that this is the most
general expression compatible with Lorentz invariance and naive dimensions.\\
The gauge parameter independent solutions $\langle hh \rangle^{(r)}_{KL}$ turn
out to be
\begin{equation}\label{bosprop}
\langle hh \rangle^{(2)}_{TT}=\frac{i}{\gamma^{(2)}_{TT}}
\qquad\quad\langle hh \rangle^{(0)}_{TT}=\frac{i}{\gamma^{(0)}_{TT}},
\end{equation}
whereas the ``gauge parameter multiplet'' is given by
\begin{eqnarray}\label{gaugeprop}
\langle hh \rangle^{(1)}_{SS}&=&\frac{4i\alpha_0\kappa^2}{p^2} \qquad
\langle hh \rangle^{(0)}_{WW}=\frac{4i\alpha_0\kappa^2}{p^2}\\
\qquad \langle hh \rangle^{(0)}_{TW}&=&\langle hh \rangle^{(0)}_{WT}=0.
\end{eqnarray}
It is important to observe that
the gauge parameter independent part is determined by the coefficient
functions $\gamma$, which depend on the model, i.e.\ by the invariants
and -- as will be seen later -- by higher orders, whereas the gauge
multiplet is essentially fixed and only determined by the specific gauge fixing.
The remaining bosonic propagators read
\begin{equation}\label{bprop}
\langle b_\rho h_{\mu\nu} \rangle =\frac{\kappa}{p^2} \big( (p_\mu \theta_{\nu\rho}+p_\nu
\theta_{\mu\rho})b_1+p_\rho \omega_{\mu\nu}b_2+p_\rho \theta_{\mu\nu}b_3 \big)
\end{equation}
and
\begin{equation}
\langle b_\rho b_\sigma\rangle =0 .
\end{equation}
In the tree approximation $b_1=b_2=1$ and $b_3=0$.
The antighost/ghost propagator has the general form
\begin{equation}\label{ggpropKO}
\langle \bar{c}_\rho c_\sigma \rangle=\frac{-i}{p^2}
\Big( \frac{\theta_{\rho\sigma}}{\xi(p^2)}+ \frac{1}{2} \frac{\omega_{\rho\sigma}}{\eta(p^2)} \Big).
\end{equation}
The tree approximation values are $\xi=\eta=1$, s.t.\
\begin{equation}\label{ggpropEK}
\langle \bar{c}_\rho c_\sigma \rangle =-i \big(\theta_{\rho\sigma}
+\frac{1}{2}\omega_{\rho\sigma} \big) \frac{1}{p^2}.
\end{equation}
We note that $\langle bb \rangle =0$, in accordance with the field $b_\rho$ being a
Lagrange multiplier.
Another general remark is in order. In the Landau gauge $\alpha_0 = 0$
the two-point functions $\langle hh \rangle$ fall off for large $|p|$ like $|p|^{-4}$,
hence one has to associate to the field $h$ the canonical dimension zero.
This implies that field monomials $\partial^{\boldsymbol{\mu}}h\cdots h$ always have
canonical dimension $|\boldsymbol{\mu}|= \hbox{\rm degree}$ of the multiderivative
$\partial^{\boldsymbol{\mu}}$, independent of the number of fields $h$ in the monomial.
\subsection{The Slavnov-Taylor identity in tree approximation\label{se:STidentitytree}}
Since the $\mathdutchcal{s}$-variations of $h,c$ are non-linear in the fields, they are best
implemented in higher orders via coupling to external fields
(cf. \eqref{clssct}), hence the ST identity then reads
\begin{equation}\label{fbrst}
\mathcal{S}(\Gamma)\equiv
\int(\frac{\delta\Gamma}{\delta{K}}\frac{\delta\Gamma}{\delta h}
+\frac{\delta\Gamma}{\delta L}\frac{\delta\Gamma}{\delta c}
+b\frac{\delta\Gamma}{\delta\bar{c} })=0 .
\end{equation}
Since the $b$-equation of motion
\begin{equation}\label{beq}
\frac{\delta \Gamma}{\delta b^\rho}=
\kappa^{-1}\partial^\mu h_{\mu\rho}-\alpha_0b_\rho
\end{equation}
is linear in the quantized field $b$, it can be integrated trivially to the original
gauge fixing term. Thus it turns out to be useful to introduce a functional
$\bar{\Gamma}$ which no longer depends on the $b$-field:
\begin{equation}\label{Gmmbr}
\Gamma=\Gamma_{\mathrm{gf}}+\bar{\Gamma} .
\end{equation}
One finds
\begin{equation}\label{rstc}
\kappa^{-1}\partial_\lambda\frac{\delta\bar{\Gamma}}{\delta K_{\mu\lambda}}
+\frac{\delta\bar{\Gamma}}{\delta\bar{c}_\mu} =0
\end{equation}
as a restriction. Hence $\bar{\Gamma}$ depends on $\bar{c}$ only via
\begin{equation}\label{sceH}
H_{\mu\nu}=K_{\mu\nu} - \frac{1}{2\kappa}(\partial_\mu\bar{c}_\nu+\partial_\nu\bar{c}_\mu)
\end{equation}
and the ST identity takes the form
\begin{eqnarray}\label{brGm}
\mathcal{S}(\Gamma)&=&\frac{1}{2}\mathcal{B}_{\bar{\Gamma}}\bar{\Gamma}=0\\
\mathcal{B}_{\bar{\Gamma}}&\equiv&
\int(
\frac{\delta\bar{\Gamma}}{\delta H}\frac{\delta}{\delta h}
+ \frac{\delta\bar{\Gamma}}{\delta h}\frac{\delta}{\delta H}
+ \frac{\delta\bar{\Gamma}}{\delta L}\frac{\delta}{\delta c}
+ \frac{\delta\bar{\Gamma}}{\delta c}\frac{\delta}{\delta L}
) .
\end{eqnarray}
This form shows that $\mathcal{B}_{\bar{\Gamma}}$ can be interpreted as a variation and thus
(\ref{brGm}) expresses an invariance of $\bar{\Gamma}$.
\subsection{Unitarity in the tree approximation\label{se:unitaritytree}}
The $S$-operator can be defined \cite{Itzykson:1980rh} via
\begin{eqnarray}\label{sma}
S&=&:\Sigma: Z(\underline{J})|_{\underline{J}=0}, \\
\Sigma &\equiv& \exp\left\{ {\int dx\,dy\, \Phi_{\rm in}(x)K(x-y)z^{-1}
\frac{\delta}{\delta \underline{J}(y)}}\right\},
\end{eqnarray}
where $\underline{J}$ denotes the sources
$J_{\mu\nu},j_{\bar{c}}^\rho,j_c^\rho,j^\rho_b$ for the fields
$h^{\mu\nu},\bar{c}^\rho,c^\rho,b_\rho$, respectively, and
their in-field versions are collected in $\Phi_{\rm in}.$
$K(x-y)z^{-1}$ refers to all in-fields and stands for the higher derivative wave operator,
hence removes the complete (tree approximation) propagator matrix.
$\Sigma$ would then map onto the respective large Fock space of the higher derivative model. As mentioned already the dynamical degrees of freedom which originate from the
higher derivatives are definitely unphysical,
therefore they have to be removed before we consider the S-matrix for
the Einstein-Hilbert theory. Here in the tree approximation this is trivial
because all Green functions are well-defined. So we put simply $c_1=c_2=0$.
With this the massive poles are absent, the wave operator is the one of Einstein-Hilbert
and we study just those unphysical degrees of freedom which go along with that
model. These differ slightly from those studied in \cite{KuOj} because we employ a different field $h$, but the general structure is the same (cf.\ (\ref{Gldbrgv})).
Here we follow \cite{Becchi:1985bd}
and would like to show that the $S$-matrix
commutes with the BRST-charge $Q$ by establishing the equations
\begin{equation}\label{brscomm}
[\mathcal{S},:\Sigma:]Z_{|\underline{J}=0}=-[Q,:\Sigma:]Z_{|\underline{J}=0}=[Q,S]=0,
\end{equation}
where
\begin{equation}
\mathcal{S}\equiv \int \Big(J_{\mu\nu}\frac{\delta}{\delta K_{\mu\nu}}
-j^\rho_c\frac{\delta}{\delta L^\rho}
-j^\rho_{\bar{c}}\frac{\delta}{\delta j^\rho_b} \Big)
\quad \mbox{with} \quad
\mathcal{S}Z=0 \, .
\end{equation}
The lhs of \eqref{brscomm} is a commutator in the space of functionals, i.e.\ of $\mathcal{S}$, the ST-operator, with the $S$-matrix defined on the functional level via $Z$, the generating functional for general Green functions. Now
\begin{equation}\label{2brscomm}
[\mathcal{S},:\Sigma:]Z_{|\underline{J}=0}=0
\end{equation}
since the first term of the commutator vanishes because $\mathcal{S}=0$ for
vanishing sources, while the second term of the commutator vanishes due to the
validity of the ST-identity.\\
The rhs of (\ref{brscomm}) is an equation in terms of (pre-)Hilbert
space operators: $S$-operator and BRST-charge, both defined on the indefinite
metric Fock space of creation and annihilation operators. The claim is
that we can find an operator $Q$ such that the rhs holds true.\\
We then know that a subspace defined by $Q|\mathrm{phys}\rangle=0$ is stable under $S$, hence
physical states are mapped into physical states.\\
To show that (\ref{2brscomm}) indeed holds, we observe first that the commutator
$[\mathcal{S},:\Sigma:]$ is of the form $[\mathcal{S},e^Y]$. If
$[\mathcal{S},Y]$ commutes with $Y$, one can reorder
the series into $[\mathcal{S},e^Y]=[\mathcal{S},Y]e^Y$. This has to be
evaluated.
Since in the tree approximation $z=1$ and hence $K(x-y)_{\Phi\Phi'}= \Gamma_{\Phi\Phi'}$,
we define for the explicit calculation
\begin{equation}\label{auxy}
Y\equiv \int\Big(
h^{\mu\nu}\Gamma^{hh}_{\mu\nu\rho\sigma}\frac{\delta}{\delta J_{\rho\sigma}}
+h^{\mu\nu}\Gamma^{hb}_{\mu\nu\rho}\frac{\delta}{\delta j_\rho^b}
+ b^{\rho}\Gamma^{bh}_{\rho\alpha\beta}\frac{\delta}{\delta J_{\alpha\beta}}
+ b^{\rho}\Gamma^{bb}_{\rho\sigma}\frac{\delta}{\delta j_\sigma^b}
+ c^{\rho}\Gamma^{c\bar{c}}_{\rho\sigma}\frac{\delta}{\delta j_\sigma^{\bar{c}}}
+ \bar{c}^{\rho}\Gamma^{\bar{c}c}_{\rho\sigma}\frac{\delta}{\delta j_\sigma^c}\Big) .
\end{equation}
For the desired commutator one finds
\begin{equation}\label{XYcomm}
[\mathcal{S},Y]=-\int\Big(
h^{\mu\nu}\Gamma^{hh}_{\mu\nu\rho\sigma}\frac{\delta}{\delta K_{\rho\sigma}}
-c^\rho\Gamma^{c\bar{c}}_{\rho\sigma}\frac{\delta}{\delta j^b_\sigma}
-\bar{c}^\rho\Gamma^{\bar{c}c}_{\rho\sigma}\frac{\delta}{\delta L_\sigma}\Big),
\end{equation}
so it clearly commutes with $Y$.\\
In the next step we have to consider $:[\mathcal{S},Y]e^Y:Z$, i.e.\ terms of
the type
\begin{align}\label{XYcomm1}
-\int:\Big(h^{\mu\nu}\Gamma^{hh}_{\mu\nu\rho\sigma}
\frac{\delta}{\delta K_{\rho\sigma}}
-c^\rho\Gamma^{c\bar{c}}_{\rho\sigma}\frac{\delta}{\delta j^b_\sigma}
-\bar{c}^\rho\Gamma^{\bar{c}c}_{\rho\sigma}\frac{\delta}{\delta L_\sigma} \Big)
:Y(1)\cdots Y(n)\cdot Z(\underline{J})_{|\underline{J}=0}\phantom{asdfg}
\end{align}
i.e.
\begin{align}
-\int:\Big(h^{\mu\nu}\Gamma^{hh}_{\mu\nu\rho\sigma}\kappa D^{\rho\sigma}_\lambda
c^\lambda
-c^\rho\Gamma^{c\bar{c}}_{\rho\sigma}b^\sigma
-\bar{c}^\rho\Gamma^{\bar{c}c}_{\rho\sigma}c^\lambda\partial_\lambda c^\sigma\Big)
:Y(1)\cdots Y(n)\cdot Z(\underline{J})_{|\underline{J}=0} . \phantom{asdfg}\nonumber
\end{align}
These terms constitute insertions into the functional $Z$. A closer look
in terms of Feynman diagrams reveals that due to momentum conservation
from $D^{\rho\sigma}_\lambda c^\lambda$ only terms linear in the fields
survive and also the
last term bilinear in $c$ cannot contribute -- when going on mass shell they
cannot develop particle poles. We arrive thus at
\begin{equation}\label{transeq}
:[\mathcal{S},Y]:Z=
:\Sigma\Big[ \int(-h^{\mu\nu}\Gamma^{hh}_{\mu\nu\alpha\beta}
\kappa (\partial^\alpha c^\beta+\partial^\beta c^\alpha)
+c^\rho\Gamma^{c\bar{c}}_{\rho\sigma}b^\sigma)
\Big]:\cdot Z(\underline{J})_{|\underline{J}=0}.
\end{equation}
The second factors in the insertion are just the linearized BRST-variations
of $h^{\alpha\beta}$, resp.\ $\bar{c}^\sigma$. This suggests introducing
a corresponding BRST operator $Q$ which generates these
transformations
\begin{eqnarray}\label{linBRS}
Q\Gamma&\equiv&\int\Big\lbrack
\kappa(\partial^\mu c^\nu+\partial^\nu c^\mu)
\frac{\delta}{\delta h^{\mu\nu}}
+ b^\rho\frac{\delta}{\delta \bar{c}^\rho}\Big\rbrack \Gamma\\
QZ_c&\equiv&-i\int\Big\lbrack
\kappa(\partial^\mu\frac{\delta Z_c}{\delta j_\nu^c} +
\partial^\nu\frac{\delta Z_c}{\delta j_\mu^c})J_{\mu\nu}+
\frac{\delta Z_c}{\delta j^b_\rho}j_\rho^{\bar{c}}
\Big\rbrack\\
QZ&\equiv&-\int\Big\lbrack
J_{\mu\nu}\kappa(\partial^\mu\frac{\delta}{\delta j^c_\nu} +
\partial^\nu\frac{\delta}{\delta j_\mu^c})+
j^{\bar{c}}_\rho\frac{\delta}{\delta j^b_\rho}\Big\rbrack Z ,
\end{eqnarray}
and to calculate the commutator $[Q,:\Sigma:]Z_{|\underline{J}=0}$. And indeed, it
coincides with the rhs of (\ref{transeq}). Following in detail the
aforementioned diagrammatic analysis we have a simple interpretation:
in the Green functions $G(y;z_1,...,z_n)$ a field
entry has been replaced by the linearized BRST-transformation of it.
Having established (\ref{brscomm}) one can continue along the lines of
\cite{KuOj} and form, within the linear subspace of physical states, equivalence
classes by modding out states of vanishing norm, with the well-known result that
the resulting factor states have non-vanishing norm and the $S$-matrix is unitary.
\subsection{Parametrization and gauge parameter independence\label{se:paraandgpi}}
It is a necessary preparation for higher orders to clarify which parameters the model contains and how they are fixed.
Also a glance at the free propagators, (\ref{bosprop}) versus
(\ref{gaugeprop}), shows that they differ in their fall-off properties depending on the value of the gauge parameter
$\alpha_0$. Since the Landau gauge $\alpha_0=0$ simplifies calculations enormously, we would like to show that it is stable against perturbations. Since these two issues are closely linked
we treat them here together.
Obvious parameters are the couplings $c_0,c_1,c_2$, and $c_3$. In the next subsection we give a prescription for fixing them by
appropriate normalization conditions. Also obvious is the gauge parameter $\alpha_0$. It will be fixed by the equation of motion for the $b$-field. Since this equation is linear in the $b$-field it also determines its amplitude.
Less obvious is the normalization of the fields
$h^{\mu\nu},c^\rho$ and of the external fields $K,L$. In order to find
their amplitudes it is convenient to inquire under which linear redefinitions of them the ST identity (\ref{fbrst}) stays invariant.
We define
\begin{align}\label{frdfs}
\hat{h}^{\mu\nu}&=z_1(\alpha_0)h^{\mu\nu} &\hat{c}^\rho&=y(\alpha_0)c^\rho\\
\hat{K}_{\mu\nu}&=\frac{1}{z_1(\alpha_0)}K_{\mu\nu} &\hat{L}_\rho&=\frac{1}{y(\alpha_0)}L_\rho,
\end{align}
where we admitted a dependence on the gauge parameter because we
would like to vary it and detect in this way $\alpha_0$-dependence algebraically. Clearly, the values for $z_1$ and $y$ have to be prescribed. It is also clear that with
$\alpha_0$-independent values for $z_1$ and $y$ the ST-identity is maintained.
In order to make changes of $\alpha_0$ visible we differentiate (\ref{clssct}) with respect to it, i.e.\
\begin{equation}\label{varalph}
\frac{\partial}{\partial\alpha_0}\Gamma=\frac{\partial}{\partial{\alpha_0}}
\Gamma_{\rm gf}=\int(-\frac{1}{2})b_\mu b_\nu\eta^{\mu\nu}=
\mathdutchcal{s}\int(-\frac{1}{4})(\bar{c}_\mu b_\nu+\bar{c}_\nu b_\mu)\eta^{\mu\nu} .
\end{equation}
We observe that this is an $\mathdutchcal{s}$-variation and thus, if we introduce a fermionic partner $\chi= \mathdutchcal{s}\alpha_0$ and perform the change
\begin{equation}\label{chG}
\Gamma_{\rm gf} +\Gamma_{\phi\pi}\to\Gamma_{\rm gf}+\Gamma_{\phi\pi}
+\int(-\frac{1}{4})\chi(\bar{c}_\mu b_\nu+\bar{c}_\nu b_\mu)\eta^{\mu\nu} ,
\end{equation}
we have
\begin{equation}\label{chidouble}
\mathcal{S}(\Gamma)+\chi\partial_{\alpha_0}\Gamma=0 .
\end{equation}
We carry over this extended BRST-transformation to $Z$
\begin{equation}\label{chiZ}
\hat{\mathcal{S}}Z\equiv\mathcal{S}Z+\chi\partial_{\alpha_0}Z=0,
\end{equation}
with the implication
\begin{equation}
\partial_\chi(\hat{\mathcal{S}}Z)=0 \quad\Rightarrow\quad
\partial_{\alpha_0} Z=-\mathcal{S}\partial_\chi Z
\end{equation}
showing that $\alpha_0$-dependence is a BRST-variation, hence unphysical.
This last equation
can be easily checked on the free propagators (for propagators connected and
general Green functions coincide).
Using for $Z(\underline{J})$ the form
\begin{equation}\label{znt1}
Z(\underline{J})=\exp \Big\{i\int \mathcal{L}_{\rm int}\big(\frac{\delta}{i\delta\underline{J}}\big)\Big\}Z_0 \qquad Z_0=\exp\Big\{\int i\underline{J} \langle \Phi\Phi \rangle i\underline{J} \Big\}
\end{equation}
one obtains
\begin{equation}\label{znt2}
\partial_{\alpha_0}Z(\underline{J})=\partial_{\alpha_0}Z_0\cdot Z(\underline{J})
=\Big(\partial_{\alpha_0}\int i\underline{J} \langle \Phi\Phi \rangle i\underline{J} \Big)\cdot Z .
\end{equation}
(Here $\underline{J}$ stands for the sources of all propagating fields
$\Phi$.)
Hence $\alpha_0$-dependence remains purely at external lines, if one does not add
$\alpha_0$-dependent counterterms, and then vanishes on the $S$-matrix where these
lines are amputated. It also means that the power counting for the gauge multiplet
is irrelevant because this multiplet shows up only as external
lines.
We now step back and analyze $\alpha_0$-dependence more systematically. Equations (\ref{chidouble}), (\ref{chiZ}) and the analogous one for connected Green functions
\begin{equation}\label{chiZc}
\mathcal{S}Z_c+\chi\partial_{\alpha_0}Z_c=0,
\end{equation}
where $\alpha_0$ undergoes the change
\begin{equation}\label{vrgp}
\mathdutchcal{s}\alpha_0=\chi \qquad \mathdutchcal{s}\chi=0
\end{equation}
have to be solved.
The rhs of (\ref{chG}) is a solution of the extended gauge condition
\begin{equation}\label{egc}
\frac{\delta \Gamma}{\delta b^\rho}=
\kappa^{-1}\partial^\mu h_{\mu\rho} -\alpha_0 b_\rho-\frac{1}{2}\chi\bar{c}_\rho.
\end{equation}
Acting with $\delta/\delta b^\rho$ on the ST (\ref{chidouble}) we find
that the ghost equation of motion has changed accordingly
\begin{equation}\label{gee}
G^\rho\Gamma\equiv \Big(\kappa^{-1}\partial^\mu\frac{\delta}{\delta K_{\mu\rho}}
+\frac{\delta}{\delta \bar{c}_\rho} \Big) \Gamma=\frac{1}{2}\chi b^\rho\, .
\end{equation}
As in (\ref{Gmmbr}) and (\ref{sceH}) we introduce
$H_{\mu\nu}=K_{\mu\nu} - \frac{1}{2\kappa}(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)$ and $\bar{\Gamma}$ by
\begin{equation}\label{pbrGm}
\Gamma=\bar{\Gamma}
+\int \Big(-\frac{1}{2}\alpha_0 b_\mu b_\nu \eta^{\mu\nu}
-\frac{1}{2\kappa}h^{\mu\nu}(\partial_\mu b_\nu+\partial_\nu b_\mu)
-\frac{1}{4}\chi(\bar{c}_\mu b_\nu+\bar{c}_\nu b_\mu)\eta^{\mu\nu} \Big) .
\end{equation}
The extended ST reads in terms of $\bar{\Gamma}$
\begin{equation}\label{ebrG}
\mathcal{S} (\Gamma)=\mathcal{B}(\bar{\Gamma})=0
\end{equation}
with
\begin{equation}\label{pbrGm2}
\mathcal{B}(\bar{\Gamma})\equiv
\int \Big(\frac{\delta\bar{\Gamma}}{\delta K}\frac{\delta\bar{\Gamma}}
{\delta h}
+\frac{\delta\bar{\Gamma}}{\delta L}
\frac{\delta\bar{\Gamma}}{\delta c}
+\chi\frac{\partial\bar{\Gamma}}{\partial \alpha_0} \Big).
\end{equation}
$\bar{\Gamma}$ satisfies the homogeneous ghost equation of motion
\begin{equation}\label{pgem}
G\bar{\Gamma}=0.
\end{equation}
We now have to find the most general solution of ghost equation
(\ref{gee}) and the new ST (\ref{ebrG}). Due to dimension and
$\phi\pi$-charge neutrality $\bar{\Gamma}$ can be decomposed
as
\begin{equation}\label{gsa}
\bar{\Gamma}=\bar{\bar{\Gamma}}(h,c,H,L,\alpha_0)
+\chi\int(f_H(\alpha_0)Hh+f_L(\alpha_0)Lc)\, .
\end{equation}
With the choice of linear dependence on $h$, however, we
certainly do not cover the most general case:
due to the vanishing dimension of $h^{\mu\nu}$ one could
replace the linear factor $h^{\mu\nu}$ by an arbitrary function
$\mathcal{F}^{\mu\nu}(h)$ in $K_{\mu\nu}h^{\mu\nu}$.
For simplicity we discuss here the linear case, which continues (\ref{frdfs}), whereas the non-linear one will be treated below (see Sect.\ \ref{se:generalsolutionSTI}).
From (\ref{ebrG}) and (\ref{pbrGm2}) we deduce that
\begin{equation}\label{ntrm}
0=\mathcal{B}(\bar{\Gamma})=\mathcal{B}(\bar{\bar{\Gamma}})|_{\chi=0}+\chi\int(
-f_Hh^{\mu\nu}\frac{\delta\bar{\bar{\Gamma}}}{\delta h^{\mu\nu}}
+f_HH^{\mu\nu}\frac{\delta\bar{\bar{\Gamma}}}{\delta H^{\mu\nu}}
+f_Lc\frac{\delta\bar{\bar{\Gamma}}}{\delta c}
-f_LL\frac{\delta\bar{\bar{\Gamma}}}{\delta L})
+\chi\frac{\partial\bar{\bar{\Gamma}}}{\partial \alpha_0} .
\end{equation}
Since $\chi^2=0$, (\ref{ntrm}) splits into its $\chi$-independent and $\chi$-linear parts: at $\chi=0$ one finds first
\begin{equation}\label{ntrm2}
\mathcal{B}(\bar{\bar{\Gamma}})|_{\chi=0}=0,
\end{equation}
and then, from the part linear in $\chi$,
\begin{equation}\label{ntrm3}
\int \Big(
-f_Hh^{\mu\nu}\frac{\delta\bar{\bar{\Gamma}}}{\delta h^{\mu\nu}}
+f_HH^{\mu\nu}\frac{\delta\bar{\bar{\Gamma}}}{\delta H^{\mu\nu}}
+f_Lc\frac{\delta\bar{\bar{\Gamma}}}{\delta c}
-f_LL\frac{\delta\bar{\bar{\Gamma}}}{\delta L} \Big)
+\frac{\partial\bar{\bar{\Gamma}}}{\partial \alpha_0}=0.
\end{equation}
(\ref{ntrm2}) corresponds to (\ref{fbrst}), hence we know that the general solution (of the linear case) is given by
\begin{eqnarray}\label{frt}
\bar{\bar{\Gamma}}&=&
\hat{c}_3\kappa^{-2}\int\sqrt{-g}R(z_1(\alpha_0)h)
+\hat{c}_1\int\sqrt{-g}R^{\mu\nu}R_{\mu\nu}(z_1(\alpha_0)h)
+\hat{c}_2\int\sqrt{-g}R^2(z_1(\alpha_0)h)\nonumber\\
&&+\hat{c}\int(\kappa H_{\mu\nu}(\frac{y(\alpha_0)}{z_1(\alpha_0)}
(-\partial^\mu c^\nu-\partial^\nu c^\mu)
-y(\alpha_0)(\partial_\lambda c^\mu h^{\lambda\nu}
+\partial_\lambda c^\nu h^{\lambda\mu}
-c^\lambda\partial_\lambda h^{\mu\nu})) \\
&&-\kappa y(\alpha_0) L_\rho c^\lambda\partial_\lambda c^\rho) . \nonumber
\end{eqnarray}
(\ref{frt}) inserted into (\ref{ntrm3}) implies after some calculations that all $\hat{c}$
are independent of $\alpha_0$, whereas the functions $f_{H,L}$
satisfy the relations
\begin{equation}\label{rsfrt}
\partial_{\alpha_{0}}z_1=f_H z_1 \qquad
\partial_{\alpha_{0}}y=-f_L y \, .
\end{equation}
All parameters $\hat{c}$ can therefore be fixed by normalization conditions
independent of $\alpha_0$. Since we shall work in Landau gauge,
$\alpha_0=0$, the functions $f_H,f_L$ will be independent of
$\alpha_0$, as well as $z_1$ and $y$, hence numbers.
\subsection{Normalization conditions I\label{se:nc1}}
In the tree approximation as studied in this section the free parameters of the model
can be prescribed by the following conditions
\begin{eqnarray}
\frac{\partial}{\partial p^2}\,\gamma^{(2)}_{\rm TT}|_{p^2=0}&
=&c_3\kappa^{-2}\qquad ({\rm coupling\,\, constant})\label{trnorm3}\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,
\gamma^{(2)}_{\rm TT}&
=&-2c_1\qquad({\rm coupling\,\, constant})\label{trnorm1}\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,\gamma^{(0)}_{\rm TT}
&=&2(3c_2+c_1)\qquad (\rm{coupling\,\,constant})\label{trnorm2}\\
\Gamma_{h^{\mu\nu}}&
=&-\eta_{\mu\nu}c_0\doteq0\qquad(\rm{coupling\,\,constant})\label{trnorm0}\\
\frac{\partial}{\partial p_\sigma}\Gamma_{K^{\mu\nu}c_\rho}&=&
-i\kappa(\eta^{\mu\sigma}\delta^\nu_\rho
+\eta^{\nu\sigma}\delta^\mu_\rho
-\eta^{\mu\nu}\delta^\sigma_\rho)\qquad ({\rm amplitude\,\,of\,\,h\,\,and\,\, K})\label{trnorm4}\\
\frac{\partial}{\partial p^\lambda}\Gamma_{L_\rho c^\sigma c^\tau}&=&
-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma})\qquad({\rm amplitude\,\,of\,\,c\,\,and\,\,L})\label{trnorm5}
\end{eqnarray}
Imposing the $b$-equation of motion (\ref{beq}) fixes $\alpha_0$ and the
$b$-amplitude.
It is worth mentioning that the $c_1$-contribution to $\gamma^{(0)}$ in \eqref{eq:coffs0} is an implication of the invariance under $\mathdutchcal{s}_1 h$, hence must not be imposed via an independent normalization condition.
\section{Renormalization\label{se:renormalization}}
At first we have to specify the perturbative expansion in which we would
like to treat the model. Due to the vanishing canonical dimension of the
field $h^{\mu\nu}$ we have to expand in the number of this field. Second
we expand as usual in the number of loops. Next we have to choose a
renormalization scheme in order to cope with the divergences of the loop
diagrams. We shall use the Bogoliubov-Parasiuk-Hepp-Zimmermann-Lowenstein
(BPHZL) scheme \cite{Lowenstein:1975ps} which is based on momentum subtractions and an auxiliary
mass in order to avoid spurious infrared divergences which otherwise
would be introduced by the momentum subtractions when dealing with massless
propagators.\\
The key ingredients of this scheme are the subtraction operator
acting on one-particle-irreducible diagrams (1PI) and the forest formula
which organizes the subtractions. The subtraction operator reads
\begin{equation}\label{sbtr}
(1-\tau_\gamma)=(1-t^{\rho(\gamma)-1}_{p^\gamma(s^\gamma-1)})
(1-t^{\delta(\gamma)}_{p^\gamma s^\gamma}).
\end{equation}
Here $t^d_{x_1...x_n}$ denotes the Taylor series about $x_i=0$ to order
$d$ if $d\ge 0$ or $0$ if $d<0$. $\gamma$ denotes a 1PI diagram, $p^\gamma$
refers to its external momenta, and $s^\gamma$ to an auxiliary subtraction
variable to be introduced.
$\rho(\gamma)$ and $\delta(\gamma)$ are the infrared and ultraviolet subtraction degrees of $\gamma$, respectively.
Those will be specified below. As far as the forest formula is concerned we
refer to the literature (cf. \cite{Lowenstein:1975ug, Lowenstein:1975ps}).
For later use we note that
\begin{equation}\label{rmss}
(1-\tau_\gamma)=(1-t^{\delta(\gamma)}_{p^\gamma})
\qquad {\rm for}\quad \rho(\gamma)=\delta(\gamma)+1 .
\end{equation}
\subsection{Auxiliary mass\label{se:auxmass}}
In the BPHZ subtraction scheme one removes UV divergences by suitable subtractions at vanishing external momenta. In the massless case those would introduce artificial (off-shell) IR divergences. Hence in an extension, the BPHZL scheme, one introduces an auxiliary mass term of type $M^2(s-1)^2$ for every massless propagator. Subtractions with respect to $p,s$ performed at $p=0,s=0$ take care of the UV divergences. Subtractions with respect to $p,s-1$ thereafter establish correct normalizations for guaranteeing
poles at $p=0$ and vanishing of three-point functions (of massless fields) at $p=0$.\\
When trying to introduce such an auxiliary mass term for the massless pole in the double pole propagators one encounters difficulties. Neither with a naive $hh$-term nor with a Fierz-Pauli type mass term can one invert $\Gamma_{hh}$ to propagators
$G_{hh}$ such that the Lagrange multiplier field $b_\rho$ remains non-propagating. But its propagation would prevent its use in the quartet formalism of \cite{KuOj}. A glance at the propagators (\ref{bosprop}) and the coefficients
$\gamma^{(r)}_{KL}$, (\ref{coffs}), suggests replacing
the overall factor $p^2$ in the $\gamma$'s by
\begin{equation}\label{axms}
p^2 - m^2 \equiv p^2-M^2(s-1)^2 .
\end{equation}
Here $m^2$ denotes the auxiliary mass contribution.
This \emph{Push} in $p^2$ still maintains restricted invariance, i.e.\ invariance under $\mathdutchcal{s}_0 h$ (see Sect.\ \ref{se:push}
and App.\ \ref{se:brst0inv}), and is
fairly easy to carry along as we shall see.\\
Accepting this change of vertices and propagators one has to analyze in some
detail what it implies. For the propagators it is clear that the pole at $p^2=0$
is shifted, as desired, to a pole at $p^2=m^2$. It affects not only the invariant
parts, but also the gauge fixing dependent propagators $\langle bh \rangle$ and $\langle \bar{c}c \rangle$.
This can be seen when performing Push in $\Gamma$ and having a look at the
inversion equations. The $\gamma$'s (\ref{coffs}) then read
\begin{eqnarray}\label{mcoffs}
\gamma^{(2)}_{TT} &=&-(p^2-m^2)(c_1p^2-c_3\kappa^{-2})\\
&\Rightarrow&
m^2\hat{\gamma}^{(2)}_{TT}(m^2)=m^2(c_1p^2-c_3\kappa^{-2})\label{gmps1} \\
\gamma^{(0)}_{TT} &=&(p^2-m^2)((3c_2+c_1)p^2+\frac{1}{2}c_3\kappa^{-2})\\
&\Rightarrow&
m^2\hat{\gamma}^{(0)}_{TT}(m^2)=-m^2((3c_2+c_1)p^2+\frac{1}{2}c_3\kappa^{-2})\label{gmps2}\\
\gamma^{(1)}_{SS}&=& \gamma^{(0)}_{WW}=\gamma^{(0)}_{TW}=\gamma^{(0)}_{WT}
=0 .
\end{eqnarray}
In the inversion equations one has products of $\gamma^{(r)}_{KL}$ with
its direct counterpart $\langle hh \rangle^{(r)}_{KL}$, such that the replacement cancels out there.\\
For gauge fixing terms we find the effect of Push as follows
\begin{eqnarray}\label{mfg}
\Gamma^{hb}_{\mu\nu\rho}G^{bh}&=&\frac{i}{2\kappa}(\eta_{\rho\mu}p_\nu+\eta_{\rho\nu} p_\mu)\frac{\kappa}{p^2}(p^\mu\theta^{\nu\rho}+p^\nu\theta^{\mu\rho}+p^\rho\omega^{\mu\nu}) \qquad{\rm (local)}\\
&=&\frac{i}{2\kappa}(\eta_{\rho\mu}p_\nu+\eta_{\rho\nu} p_\mu)
\frac{p^2}{p^2}\frac{\kappa}{p^2}(p^\mu\theta^{\nu\rho}+p^\nu
\theta^{\mu\rho}+p^\rho\omega^{\mu\nu})
\qquad{\rm (local)} \\
&\stackrel{\rm Push}{\rightarrow}&\frac{i}{2\kappa}(\eta_{\rho\mu}p_\nu
+\eta_{\rho\nu}p_\mu)\frac{p^2-m^2}{p^2}\frac{\kappa}{p^2-m^2}
(\theta^{\rho\mu}p^\nu+\theta^{\rho\nu} p^\mu
+p^\rho\omega^{\mu\nu})\\
\Rightarrow\Gamma(m^2)^{hb}_{\mu\nu\rho}&=&\frac{-im^2}{2\kappa p^2}(\eta_{\rho\mu}p_\nu+\eta_{\rho\nu} p_\mu) \qquad({\rm non-local}),\label{gmgf}\\
\Rightarrow G^{bh}_{\rho\mu\nu}&=&\frac{\kappa}{p^2-m^2}(p_\mu\theta_{\rho\nu}
+ p_\nu\theta_{\rho\mu}+p_\rho\omega_{\mu\nu})\qquad({\rm massive\,\, propagator})
\end{eqnarray}
i.e.\ there appears an additional term in $\Gamma^{hb}$ and the $\langle bh \rangle$-propagator
becomes massive (with the auxiliary mass).
In $x$-space the complete gauge fixing term reads
\begin{eqnarray}\label{cmgf}
\Gamma_{\rm{gf}}&=&-\frac{1}{2\kappa}\int dxdy\, h^{\mu\nu}(x)
(\partial_\mu b_\nu+\partial_\nu b_\mu)(y)\Big\lbrace\delta(x-y)
+\frac{m^2}{(x-y)^2}\Big\rbrace -\frac{\alpha_0}{2}\int\eta^{\mu\nu} b_\mu b_\nu\nonumber\\
&=&-\frac{1}{2\kappa}\int dxdy\, h^{\mu\nu}(x)
(\partial_\mu b_\nu+\partial_\nu b_\mu)(y)\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)
\frac{1}{(x-y)^2}\Big\rbrace -\frac{\alpha_0}{2}\int\eta^{\mu\nu} b_\mu b_\nu.
\end{eqnarray}
A suitable Faddeev-Popov (FP) term is then
\begin{eqnarray}\label{FaPo}
\Gamma_{\phi\pi}&=&-\frac{1}{2}\int dxdy\, D^{\mu\nu}_\rho c^\rho(x)
(\partial_\mu \bar{c}_\nu+\partial_\nu \bar{c}_\mu)(y)\lbrace\delta(x-y)
+\frac{m^2}{(x-y)^2}\rbrace\nonumber\\
&=&-\frac{1}{2}\int dxdy\, D^{\mu\nu}_\rho c^\rho(x)
(\partial_\mu \bar{c}_\nu
+\partial_\nu \bar{c}_\mu)(y)\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\rbrace ,
\end{eqnarray}
because it maintains the BRST-doublet structure within the gauge fixing procedure.\\
A comment on the ``non-local'' terms is in order. Our writing is
symbolic shorthand in order to have a simple handling of these terms. Using the explicit form of $\mathdutchcal{s}_0 h$ and integration by parts one may observe that the actual non-local part is of projector type in terms of differential operators -- quite in line with its first appearance in $p$-space.
There the projectors lead formally to direction dependent integrals. However Zimmermann's $\varepsilon$, introduced as
\begin{align}
p^2 \to p^2+i\varepsilon({\mathbf p}^2) \, , \label{psl}
\end{align}
guarantees absolute convergence, hence no serious problem will arise once we have reliable power counting and appropriate correct subtractions. Of course, at the physical value $s=1$ it disappears anyway.\\
We therefore discuss in the next subsection power counting and convergence with positive
outcome, and return thereafter to a discussion of the $m^2$-dependent terms.
Before starting with the presentation of power counting we have to have a look at the basis of naively symmetric insertions
once we have introduced an auxiliary mass term.
Obviously we can introduce the following \emph{Shift}
\begin{equation}\label{ivce}
\int\sqrt{-g}c_3\kappa^{-2}R \to \int\sqrt{-g}(c_{30}\kappa^{-2}
+c_{31}\kappa^{-1}m+c_{32}\frac{1}{2}m^2)R.
\end{equation}
In the tree approximation these terms are invariant (and for
$s=1$ reduce to the original term), but in higher orders they represent new and independent elements in the basis of symmetric normal products
with $\delta=\rho=4$ (cf. \cite{Zimmermann:1972tv}).
So, we have to carry them along as vertices when studying power counting.
\subsection{Power counting and convergence\label{se:powercounting}}
In the Landau gauge, $\alpha_0=0$, the only non-vanishing propagators are
the following ones:
\begin{eqnarray}\label{sprops}
\langle hh \rangle^{(2)}_{TT}&=&\frac{i}{(p^2-m^2)c_1( p^2-\frac{c_3\kappa^{-2}}
{c_1})}\\
\langle hh \rangle^{(0)}_{TT}&=&\frac{i}{(p^2-m^2)(3c_2+c_1)(p^2+\frac{c_3\kappa^{-2}}{2(3c_2+c_1)})}\\
\langle b_\rho h_{\mu\nu} \rangle&=&\frac{\kappa}{p^2-m^2}(p_\mu\theta_{\nu\rho}
+p_\nu\theta_{\mu\rho}+p_\rho\omega_{\mu\nu})\\
\langle \bar{c}_\rho c_\sigma \rangle&=&
-i\big(\theta_{\rho\sigma}+\frac{1}{2}\omega_{\rho\sigma} \big)\frac{1}{p^2-m^2}
\end{eqnarray}
In addition to $m=M(s-1)$ one needs also Zimmermann's $\varepsilon$-prescription (\ref{psl}). This will guarantee absolute convergence of diagrams, once power counting
is established and subtractions are correctly performed.\\
Important note: in all formulas to follow in this section
the replacement of $c_3$ by the sum given in (\ref{ivce}) is to be understood. Relevant for power counting arguments is never
a coefficient in front of a vertex, but the number of lines and derivatives at the vertex and its associated subtraction degree.
The $\langle bh \rangle$ propagator will be of no relevance for reasons spelled out after (\ref{znt2}).
Power counting is based on ultraviolet (UV) and infrared (IR) degrees
of propagators and vertices. The upper degree $\overline{\rm deg}_{p,s}$ gives the asymptotic power for $p$ and $s$ tending to infinity; the lower degree
$\underline{\rm deg}_{p,(s-1)}$ gives the asymptotic power for $p$ and $s-1$ tending to zero.
For propagators they read
\begin{eqnarray}\label{dgpr}
{\overline{\rm deg}}_{p,s}(\langle hh \rangle^{(2)}_{TT})&=&-4 \qquad{}
{\underline{\rm deg}}_{p,s-1}(\langle hh \rangle^{(2)}_{TT})=-2 \\
\label{dgpr2}
{\overline{\rm deg}}_{p,s}(\langle hh \rangle^{(0)}_{TT})&=&-4 \qquad
{\underline{\rm deg}}_{p,s-1}(\langle hh \rangle^{(0)}_{TT})=-2 \\
{\overline{\rm deg}}_{p,s}(\langle \bar{c}c \rangle)&=&
{\underline{\rm deg}}_{p,s-1}(\langle \bar{c}c \rangle)=-2 .
\end{eqnarray}
As shorthand we write also
$\overline{\rm deg}\equiv \overline{D}_L$ and $\underline{\rm deg}\equiv \underline{D}_L$.
The degrees of the vertices thus have the values
\begin{eqnarray}\label{dgve}
\overline{D}_{V^{(c_1)}}&=&\overline{D}_{V^{(c_2)}}=4,
\quad \overline{D}_{V^{(c_3)}}=2
\quad \overline{D}_{V^{(\phi\pi)}}=2\\
\underline{D}_{V^{(c_1)}}&=&\underline{D}_{V^{(c_2)}}=4,
\quad \underline{D}_{V^{(c_3)}}=2
\quad \underline{D}_{V^{(\phi\pi)}}=2 .
\end{eqnarray}
Let us now consider a one-particle-irreducible (1PI) diagram $\gamma$ with $m$
loops, $I_{ab}$ internal lines, $a,b = h,c,\bar{c}$, and $V$
vertices of type $V^{(c_1,c_2,c_3,\phi\pi)}$ or insertions $Q_i$ as well as $N$ amputated
external lines. In the subsequent considerations a more detailed notation is useful: $N_a$ of the external lines are of type $\Phi_a$,
and $n_{ai}$ lines of type $a$ are attached to the $i^{\rm th}$ vertex. Then
with $Q_i$
\begin{equation}\label{ins}
Q_i(x)=(\frac{\partial}{\partial x})^{|\mu_i|}\prod_a(\Phi_a^{c_{ai}}(x)) ,
\end{equation}
we first find for the UV- and IR-degrees of $\gamma$
\begin{eqnarray}\label{gmmadeg}
d(\gamma)&=&4m(\gamma)+\sum_{V\in\gamma}\overline{D}_V+\sum_{L\in\gamma}\overline{D}_L\\
&=&4m(\gamma) +4V^{(c_1,c_2)}+2V^{(c_3)}+2V^{(\phi\pi)}
-4I_{hh}-2I_{c\bar{c}},\\
r(\gamma)&=&4m(\gamma)+\sum_{V\in\gamma}\underline{D}_V
+\sum_{L\in\gamma}\underline{D}_L\\
&=&4m(\gamma)+4V^{(c_1,c_2)}+2V^{(c_3)}+2V^{(\phi\pi)}
-2I_{hh}-2I_{\bar{c}c} .
\end{eqnarray}
The topological relations
\begin{eqnarray}\label{topform}
m&=&I-V+1\\
N_a&=&\sum_i n_{ai}
\qquad 2I_{aa}=\sum_i(c_{a i}-n_{ai})=\sum_ic_{ai}-N_a
\end{eqnarray}
permit us to rewrite these degrees as
\begin{eqnarray}\label{uvirdeg}
d(\gamma)&=&4+\sum_{V\in\gamma}(\overline{D}_V-4)
+\sum_{L\in\gamma}(\overline{D}_L+4)\\
d(\gamma)&=&4-N_{\tilde{c}}-2V^{(c_3)}\\
r(\gamma)&=&4+\sum_{V\in\gamma}(\underline{D}_V-4)
+\sum_{L\in\gamma}(\underline{D}_L+4)\\
r(\gamma)&=&4-2V^{(c_3)}-2V^{(\phi\pi)}+2I_{hh}+2I_{c\bar{c}} .
\end{eqnarray}
(Here $\tilde{c}$ stands for both $c$ and $\bar{c}$.)
The aim is now to associate subtraction degrees to them which are independent
of the detailed structure of the respective diagrams. An obvious choice is
\begin{equation}\label{subtrdeg}
\delta(\gamma) = 4 \qquad \rho(\gamma) = 4
\end{equation}
\noindent
Before proceeding,
a comment on $\delta(\gamma)=4$ is in order. Obviously there are infinitely many
divergent diagrams possible, even for every number $N$ of external $h$-lines.
This requires infinitely many parameters as normalizations. Those are
provided by the infinitely many arbitrary parameters which arise from the
redefinition of $h$ as a function of itself. They are gauge type parameters
and constitute only wave function
renormalizations, hence are unphysical. This will be discussed in detail later (see Sect.\ \ref{se:generalsolutionSTI}).
We would like to prove convergence along the lines of theorems established in
\cite{Lowenstein:1975ps}. In order to do so we formulate a few conditions
which will later turn out to be sufficient for proving convergence. The first
one reads
\begin{equation}\label{c1}
\delta(\gamma)= d(\gamma)+b(\gamma)
\quad\mbox{and}\quad \rho(\gamma)= r(\gamma)-c(\gamma) \tag{C1}
\end{equation}
with $b(\gamma)$ and $c(\gamma)$ being non-negative integers.
$b(\gamma)\ge 0$ is obviously satisfied, but for $c(\gamma)$
we have to convince ourselves that the bracket terms in
(\ref{uvirdeg}) are greater or equal to zero. Hence we need the more detailed
information given by the line balances
\begin{eqnarray}\label{linetop}
2I_{hh}&=& \sum_{i\in \gamma}(c_{h,i}-n_{h,i})
= \sum_{i\in \gamma}(c_{h,i})-N_h
\quad i\in\{V^{(c_1)},V^{(c_2)}, V^{(c_3)},V^{(\phi\pi)}\}\\
2I_{c\bar{c}}&=&\sum_{i\in\phi\pi}(c_{c,i}-n_{c,i})
= \sum_{i\in\phi\pi}c_{c,i}-N_{c} .
\end{eqnarray}
We find
\begin{equation}\label{cg}
c(\gamma)=\sum_{i\in c_1,c_2}(c_{h,i}-n_{h,i})
+\sum_{i\in c_3}(c_{h,i}-n_{h,i}-2)
+\sum_{i\in \phi\pi}(c_{\tilde{c},i}-n_{\tilde{c},i}-2)
+\sum_{i\in\phi\pi}(1-n_{h,i})
\end{equation}
If the vertex $i$ in question is not present in $\gamma$, the respective brackets
just vanish. If this vertex is present in $\gamma$, then $(c_{h,i}-n_{h,i})\ge 2$ and $(c_{h,i}-n_{h,i}-2)\ge 0$ -- both for 1PI $\gamma$.
Since $c_{\tilde{c},\phi\pi}=2$, the third bracket combines with the
fourth such that their sum is $\ge 0$, again for 1PI $\gamma$. Indeed, there are two cases:
either $n_{h,i_0}=1$ at a vertex $i_0$, which forces $n_{\tilde{c},i_0}=0$ (otherwise
$\gamma$ is not 1PI), so both brackets vanish;
or $n_{h,i_0}=0$ at $i_0$, which gives $+1$ from the fourth bracket, while
$n_{\tilde{c},i_0}$ is at most $1$ (otherwise $\gamma$ is not 1PI), i.e.\ at least $-1$ from the third.
In both cases the total is non-negative.
Hence equations (\ref{c1}) are valid.
The next requirements refer to reduced diagrams
$\bar{\Lambda}=\Lambda/\lambda_1,...\lambda_n$, which are obtained from
$\Lambda$
by contracting mutually disjoint, non-trivial 1PI subdiagrams $\lambda_i$
to points (reduced vertices) $V(\lambda_i)$ assigning (for the sake of power
counting) the unit polynomial of momenta to each $V(\lambda_i)$. For 1PI
$\gamma$ one has the relations
\begin{eqnarray}\label{subdeg}
d(\gamma)&=&d(\gamma/\lambda_1...\lambda_n)+\sum_{i=1}^{n}d(\lambda_i)\\
r(\gamma)&=&r(\gamma/\lambda_1...\lambda_n)+\sum_{i=1}^{n}r(\lambda_i).
\end{eqnarray}
Their analogues are also valid for connected diagrams.
Now one can formulate further conditions for convergence, i.e.\
\begin{align}\label{c2}
\delta(\gamma)&\ge d(\gamma/\lambda_1...\lambda_n)
+\sum_{i=1}^{n}\delta(\lambda_i) \tag{C2}\\
\rho(\gamma)&\le r(\gamma/\lambda_1...\lambda_n)
+\sum_{i=1}^{n}\rho(\lambda_i) \tag{C3} \label{c3}\\
\rho(\gamma)&\le \delta(\gamma)+1 \tag{C4} \label{c4}
\end{align}
for arbitrary reduced 1PI subdiagrams $\gamma/\{\lambda_i\}$ of $\Gamma$.
In order to verify \eqref{c2} one just inserts the values for the respective
degrees.
\begin{eqnarray}\label{cC2}
\delta(\gamma)&=&4 \\
\delta(\gamma_i)&=&4\\
d(\gamma)&=&4-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)+2I_{c\bar{c}}(\gamma)\\
d(\gamma_i)&=&4-2V^{(c_3)}(\gamma_i)-2V^{(\phi\pi)}(\gamma_i)
+2I_{c\bar{c}}(\gamma_i)\\
d(\bar{\gamma})&=&4-2V^{(c_3)}(\bar{\gamma})-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{c\bar{c}}(\bar{\gamma})-4n\\
d(\bar{\gamma})+\sum_i\delta(\gamma_i)&=&4-2V^{(c_3)}(\bar{\gamma})
+\sum_{i\in\phi\pi}(-2+c_{c,\phi\pi}-n_{c,\phi\pi})(\bar{\gamma})\\
\delta(\gamma)=4&\ge& 4-N_{\tilde{c}}(\bar{\gamma})-2V^{(c_3)}(\bar{\gamma}) .
\end{eqnarray}
(We have used that $c_{c,\phi\pi}=2$.)
The last inequality was to be proved.
For the proof of \eqref{c3} one treats first the
case $\rho(\gamma)=\rho(\gamma_i)=4$
and uses the fact that the line balances used
for proving \eqref{c1} also hold for reduced diagrams.
For the case
$\rho(\gamma)=\rho(\gamma_i)=5=\delta(\gamma)+1=\delta(\gamma_i)+1$,
which is
the upper bound admitted for the IR-degrees, one finds also that the
desired inequality holds.
\eqref{c4} is satisfied by definition.
We can now refer to \cite[Theorem 4]{Lowenstein:1975ps} in which
it is shown that, once these conditions are satisfied, Green functions exist as
tempered distributions, whereas for non-exceptional momenta (in the Euclidean sense)
vertex functions exist as functions. Due to a theorem of Lowenstein and Speer
\cite{Lowenstein:1975ku},
Lorentz covariance is also satisfied in the limit $\varepsilon \rightarrow 0$.
An important improvement concerning Lorentz covariance has been provided by \cite{Clark:1976ym}. If one introduces Zimmermann's $\varepsilon$
via a change of metric
$\eta_{\mu\nu} \to {\rm diag}(1,-(1-i\varepsilon),-(1-i\varepsilon), -(1-i\varepsilon) )$ in addition to multiplying each mass-square by
$(1-i\varepsilon)$, then Lorentz covariance already holds for the rhs of the Zimmermann identities (ZI's) before the $\varepsilon\to 0$ limit is taken. This is quite helpful for actual work with
ZI's.
The above proof of convergence refers to diagrams constructed out of vertices
with vanishing Faddeev-Popov (FP) charge. For installing the ST-identity in higher
orders one needs however diagrams which contain the vertex $V^{(-)}$ once, of the types
\begin{eqnarray}\label{fpm1}
\overline{D}(V^{(-)})=\left\{
\begin{array}{ll}
3& {\rm for}\quad V^{(-)}\simeq \int c\,\partial\partial\partial\, h\cdots h\\
5& {\rm for}\quad V^{(-)}\simeq \int c\,\partial\partial\partial\partial\partial\,
h\cdots h
\end{array} \right. &
\underline{D}(V^{(-)})=\overline{D}(V^{(-)}),
\end{eqnarray}
i.e.\ of FP-charge $-1$. The UV- and IR-degrees become resp.\
\begin{eqnarray}\label{gmmadeg2}
d(\gamma)&=&4m(\gamma)+\sum_{V\in\gamma}\overline{D}_V
+\sum_{L\in\gamma}\overline{D}_L+\overline{D}_{V^{(-)}} \\
r(\gamma)&=&4m(\gamma)+\sum_{V\in\gamma}\underline{D}_V
+\sum_{L\in\gamma}\underline{D}_L+\underline{D}_{V^{(-)}} .
\end{eqnarray}
With (\ref{topform}) this results in $(V^{(-)}\in \gamma)$
\begin{eqnarray}\label{vld}
d(\gamma)&=&4+\sum_{V\in \gamma}(\overline{D}_V-4)
+\sum_{L\in\gamma}(\overline{D}_L+4)\\
&=&4-N_{\tilde{c}}-2V^{(c_3)} +(\overline{D}_{V^{(-)}}-4)\\
r(\gamma)&=&4+\sum_{V\in \gamma}(\underline{D}_V-4)
+\sum_{L\in\gamma}(\underline{D}_L+4)\\
&=&4-2V^{(c_3)}-2V^{(\phi\pi)}+(\underline{D}_{V^{(-)}}-4) +2I_{hh}+2I_{c\bar{c}} .
\end{eqnarray}
As subtraction degrees we define
\begin{eqnarray}\label{sdr2}
\delta(\gamma)&=d(\gamma)+b(\gamma)
=\left\{\begin{array}{ll}
4& {\rm if}\quad V^{(-)}\notin \gamma\\
5& {\rm if}\quad V^{(-)}\in \gamma
\end{array} \right. \\
\rho(\gamma)&=r(\gamma)-c(\gamma)
=\left\{\begin{array}{ll}
4& {\rm if}\quad V^{(-)}\notin \gamma\\
5& {\rm if}\quad V^{(-)}\in \gamma .
\end{array} \right.
\end{eqnarray}
The line balances read now
\begin{eqnarray}\label{lbcst}
2I_{hh}&=& \sum_{i\in \gamma}(c_{h,i}-n_{h,i})
= \sum_{i\in \gamma}(c_{h,i})-N_h
\quad i\in\{V^{(c_1)},V^{(c_2)}, V^{(c_3)},V^{(\phi\pi)},V^{(-)}\}\\
2I_{c\bar{c}}&=&\sum_{i\in\gamma}(c_{c,i}-n_{c,i})
= \sum_{i\in\gamma}c_{c,i}-N_{c}
\quad i\in\{V^{(\phi\pi)},V^{(-)}\} .
\end{eqnarray}
In order to verify \eqref{c1} we have to show that
$b(\gamma)=\delta(\gamma)-d(\gamma)\ge 0$.
\begin{eqnarray}\label{c1dst}
b(\gamma)&=&5-d(\gamma)\\
&=&5-4+2V^{(c_3)}+2V^{(\phi\pi)}-(\overline{D}_{V^{(-)}}-4)-2I_{c\bar{c}}\\
&=&1+2V^{(c_3)}-1+\sum_{i\in\phi\pi}n_{\tilde{c},\phi\pi}-(1-n_{c,V^{(-)}})\\
&=&2V^{(c_3)}+\sum_{i\in\phi\pi}n_{\tilde{c},\phi\pi}-(1-n_{c,V^{(-)}}) .
\end{eqnarray}
Here we have used the line balance for $I_{c\bar{c}}$ (\ref{linetop}) and chosen
the more dangerous case $\overline{D}_{V^{(-)}}=5$.
If $n_{c,V^{(-)}}=0$, there must be a $+1$ coming from the $\phi\pi$-sum, because the
FP-charge is conserved. Hence the inequality holds.
The control of
\begin{eqnarray}\label{2c1r}
c(\gamma)&=&r(\gamma)-\rho(\gamma)\\
&=&4-2V^{(c_3)} -2V^{(\phi\pi)}+2I_{hh}+2I_{c\bar{c}}
+(\underline{D}(V^{(-)})-4)-5\\
&=&-2V^{(c_3)} -2V^{(\phi\pi)}+2I_{hh}+2I_{c\bar{c}}
+(\underline{D}(V^{(-)})-4)-1\\
&=&-2V^{(c_3)}-2V^{(\phi\pi)}+2I_{hh}+2I_{c\bar{c}}+
\left\{\begin{array}{l}
-1 \,\,{\rm for}\,\,\underline{D}_{V^{(-)}}=3\\
+1 \,\,{\rm for}\,\,\underline{D}_{V^{(-)}}=5
\end{array} \right.\ge 0 .
\end{eqnarray}
is similar:
On the vertices we have the information
\begin{equation}\label{vrbl}
\sum_{i\in c_1,c_2,c_3}(c_{h,i}-n_{h,i})
+\sum_{i\in\phi\pi}(c_{h,\phi\pi}-n_{h,\phi\pi})
+ (c_{h,V^{(-)}}-n_{h,V^{(-)}})+c_{c,V^{(-)}}\ge0,
\end{equation}
where $c_{c,V^{(-)}}=1$: there is one $c$-field in $V^{(-)}$.
Inserting this into the more dangerous case $\underline{D}_{V^{(-)}}=3$
and taking into account the terms $-2V^{(c_3)}-2V^{(\phi\pi)}-2$ we get
\begin{eqnarray}\label{cgaes}
c(\gamma)&=&\sum_{i\in c_1,c_2}(c_{h,i}-n_{h,i})
+\sum_{i\in c_3}(c_{h,i}-n_{h,i}-2)\nonumber \\
&&+\sum_{i\in\phi\pi}((1-n_{h,i}-2)+(2-n_{\tilde{c},i}))\\
&&+(c_{h,V^{(-)}}-n_{h,V^{(-)}})+1-n_{c,V^{(-)}}-2\ge 0 .
\end{eqnarray}
The two sums in the first line are non-negative for $\gamma$ 1PI.
The same is true as before for the sum in the second line.
In the third line we look at $1+c_{h,V^{(-)}}-n_{h,V^{(-)}}-n_{c,V^{(-)}}$.
\begin{eqnarray}
n_{c,V^{(-)}}&=&1 \Rightarrow\, c_{h,V^{(-)}}-n_{h,V^{(-)}}\ge 2\\
n_{c,V^{(-)}}&=&0 \Rightarrow\, c_{h,V^{(-)}}-n_{h,V^{(-)}}\ge 1,
\end{eqnarray}
Hence in both cases $1+c_{h,V^{(-)}}-n_{h,V^{(-)}}-n_{c,V^{(-)}}\ge 2$,
and thus $c(\gamma)\ge 0$.
In order to check \eqref{c2}
we start with the case $V^{(-)}\notin \gamma_i$, i.e.
\begin{eqnarray}\label{stC21}
d(\gamma)&=&4-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)+2I_{c\bar{c}}(\gamma)
+\left\{\begin{array}{l}
-1\quad {\rm for}\quad \overline{D}_{V^{(-)}}=3 \\
+1\quad {\rm for}\quad \overline{D}_{V^{(-)}}=5
\end{array}\right.\\
d(\gamma_i)&=&4-2V^{(c_3)}(\gamma_i)-2V^{(\phi\pi)}(\gamma_i)
+2I_{c\bar{c}}(\gamma_i)\\
\delta(\gamma)&=&5 \quad \mbox{and} \quad \delta(\gamma_i)=4\\
d(\bar{\gamma})&=&4\mp1-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)
+2I_{c\bar{c}}(\gamma)\\
&&-\sum_i (4-2V^{(c_3)}(\gamma_i)
-2V^{(\phi\pi)}(\gamma_i)+2I_{c\bar{c}}(\gamma_i))\\
d(\bar{\gamma})+\sum_i\delta(\gamma_i)&=&
4\mp 1-2V^{(c_3)}(\bar{\gamma})-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{c\bar{c}}(\bar{\gamma}) \stackrel{?}{\le} 5\,.\nonumber
\end{eqnarray}
The estimates for $b(\gamma)$ are also valid for $b(\bar{\gamma})$, hence this
inequality is satisfied.
For the case $V^{(-)}\in \gamma_{i_0}$ the following equations are
relevant
\begin{eqnarray}\label{stC22}
\delta(\gamma)&=&5 \quad \delta(\gamma_i)=4 \quad i\not=i_0
\quad \delta(\gamma_{i_0})=5\\
d(\gamma)&=&4-1(+1)-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)
+2I_{c\bar{c}}(\gamma)\\
d(\gamma_i)&=&4-2V^{(c_3)}(\gamma_i)-2V^{(\phi\pi)}(\gamma_i)
+2I_{c\bar{c}}(\gamma_i) \quad i\not=i_0\\
d(\gamma_{i_0})&=&4\mp1-2V^{(c_3)}(\gamma_{i_0})-2V^{(\phi\pi)}(\gamma_{i_0})
+2I_{c\bar{c}}(\gamma_{i_0})\\
d(\bar{\gamma})&=&4\mp1-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)
+2I_{c\bar{c}}(\gamma)\\
&&- (4\mp1-2V^{(c_3)}(\gamma_{i_0})
-2V^{(\phi\pi)}(\gamma_{i_0})+2I_{c\bar{c}}(\gamma_{i_0}))\nonumber\\
&&-\sum_{i\not={i_0}} (4-2V^{(c_3)}(\gamma_i)-2V^{(\phi\pi)}(\gamma_i)
+2I_{c\bar{c}}(\gamma_i))\nonumber\\
d(\bar{\gamma})+\sum_i\delta(\gamma_i)&=&
5-2V^{(c_3)}(\bar{\gamma})-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{c\bar{c}}(\bar{\gamma}) \stackrel{?}{\le} 5\,.
\end{eqnarray}
Again: Since the estimate for $b(\gamma)$ is also valid for $b(\bar{\gamma})$
the inequality holds in this case, hence \eqref{c2} is verified.
We now have to verify \eqref{c3}.
For the case $V^{(-)}\notin \gamma_i$ we find
\begin{eqnarray}\label{stC31}
r(\gamma)&=&4-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)\\
&&+2I_{hh}(\gamma)+2I_{c\bar{c}}(\gamma)
+\left\{\begin{array}{l}
-1\quad {\rm for}\quad \underline{D}_{V^{(-)}}=3 \\
+1\quad {\rm for}\quad \underline{D}_{V^{(-)}}=5
\end{array}\right.\\
r(\gamma_i)&=&4-2V^{(c_3)}(\gamma_i)-2V^{(\phi\pi)}(\gamma_i)
+2I_{hh}(\gamma_i)+2I_{c\bar{c}}(\gamma_i)\\
\rho(\gamma)&=&5 \quad \mbox{and} \quad \rho(\gamma_i)=4\\
r(\bar{\gamma})&=&4\mp1-2V^{(c_3)}(\gamma)-2V^{(\phi\pi)}(\gamma)
+2I_{hh}(\gamma)+2I_{c\bar{c}}(\gamma)\\
&&-\sum_i (4-2V^{(c_3)}(\gamma_i)
-2V^{(\phi\pi)}(\gamma_i)+2I_{hh}(\gamma_i)+2I_{c\bar{c}}(\gamma_i))\\
r(\bar{\gamma})+\sum_i\rho(\gamma_i)&=&
4\mp1-2V^{(c_3)}(\bar{\gamma})-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{hh}(\bar{\gamma})+2I_{c\bar{c}}(\bar{\gamma}) \stackrel{?}{\ge} 5\,.\nonumber
\end{eqnarray}
The estimates for $c(\gamma)$ are also valid for $c(\bar{\gamma})$, hence this
inequality is satisfied.
For the case $V^{(-)}\in \gamma_{i_0}$ the following equations are
relevant
\begin{equation}\label{stC3i0}
\rho(\gamma)=5 \qquad \rho(\gamma_i)=4 \quad (i\not=i_0) \qquad \rho(\gamma_{i_0})=5 .
\end{equation}
The equation for $r(\bar{\gamma})$ is unchanged, but due to the presence of
$V^{(-)}$ in $\gamma_{i_0}$ the final equation reads
\begin{equation}\label{stC3i0f}
r(\bar{\gamma})+\sum_i\rho(\gamma_i)=
5\mp1-2V^{(c_3)}(\bar{\gamma})-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{hh}(\bar{\gamma})+2I_{c\bar{c}}(\bar{\gamma}) \stackrel{?}{\ge} 5\,.
\end{equation}
The question then is whether
$\tilde{c}(\bar{\gamma})\equiv -2V^{(c_3)}(\bar{\gamma})
-2V^{(\phi\pi)}(\bar{\gamma})
+2I_{hh}(\bar{\gamma})+2I_{c\bar{c}}(\bar{\gamma}) \ge 1$.
As in (\ref{cgaes}) we rewrite this expression explicitly in sums over vertices
and their line ``occupation''
\begin{eqnarray}\label{vlo}
\tilde{c}(\bar{\gamma})&=&\sum_{i\in c_1,c_2}(c_{h,i}-n_{h,i})(\bar{\gamma})
+\sum_{i\in c_3}(c_{h,i}-n_{h,i}-2)(\bar{\gamma})\nonumber \\
&&+\sum_{i\in\phi\pi}((1-n_{h,i}-2)+(2-n_{\tilde{c},i}))(\bar{\gamma})\\
&&+(c_{h,V^{(-)}}-n_{h,V^{(-)}})+(c_{c,V^{(-)}}-n_{c,V^{(-)}})-2\ge 0 \nonumber
\end{eqnarray}
The first two lines represent a situation without $V^{(-)}$, hence the estimates
as before apply; these contributions are non-negative. For the third line we
distinguish two cases:\\
(1) $(n_{c,V^{(-)}})_\gamma=(n_{c,V^{(-)}})_{\bar{\gamma}_{i_0}}=1$
(notation: $\bar{\gamma}_{i_0}\equiv\gamma/\gamma_{i_0}$)\\
Here the bracket $c_{c,V^{(-)}}-n_{c,V^{(-)}}$ vanishes. However the first
bracket (referring to the
$h$-lines) contributes at least 2. Hence the total sum is non-negative.\\
(2) $(n_{c,V^{(-)}})_\gamma=(n_{c,V^{(-)}})_{\bar{\gamma}_{i_0}}=0$\\
Now since the $c\bar{c}$-line starting at $V^{(-)}$ goes straight through the
whole diagram $\gamma$, it cannot form a $c\bar{c}$-loop (it carries an FP-charge).
It must meet at least one $\phi\pi$-vertex $V^{(\phi\pi)}_{*}$. If this vertex
belongs to $\gamma_{i_0}$, it is contracted with $V^{(-)}$ to form a new vertex
in $\bar{\gamma}_{i_0}$ which has one negative FP-charge. Then this is the previous
case. If it does not belong to $\gamma_{i_0}$ then this $V^{(\phi\pi)}_{*}$ appears
as an ordinary FP-vertex in $\bar{\gamma}_{i_0}$ and its contribution is
covered by the second line in (\ref{vlo}).
Hence the overall estimate holds true and condition \eqref{c3} is satisfied.\\
The condition \eqref{c4}: $\rho(\gamma)=5 \le \delta(\gamma)+1 =5+1$ is satisfied by
the definition of the subtraction degrees.
In the context of condition \eqref{c4} it is of quite some interest to investigate
whether the upper limit $\rho(\gamma)=\delta (\gamma)+1$ is consistent with
all the other conditions.
We start with condition \eqref{c1}, $\rho(\gamma)\le r(\gamma)$.
For 1PI diagrams $\gamma$ containing the vertex $V^{(-)}$ this
means checking whether
\begin{equation}
\delta(\gamma)+1=6\le r(\gamma)= 4-2V^{(c_3)}-2V^{(\phi\pi)}+2I_{hh}+2I_{c\bar{c}}
+\left\{\begin{array}{l}
-1\\
+1 .
\end{array}\right.
\end{equation}
Rewritten in terms of line balances this means (see (\ref{cgaes}))
\begin{eqnarray}\label{ulrh}
0&\le&-2+\sum_{i\in c_1,c_2}(c_{h,i}-n_{h,i})
+\sum_{i\in c_3}(c_{h,i}-n_{h,i}-2) \\
&&+\sum_{i\in\phi\pi}((1-n_{h,i}-2)+(2-n_{\tilde{c},i}))\nonumber\\
&&+(c_{h,V^{(-)}}-n_{h,V^{(-)}})+(1-n_{c,V^{(-)}})
+\left\{\begin{array}{l}
-1\\
+1
\end{array}\right.\nonumber
\end{eqnarray}
Since the sums in the first and second line are non-negative (see the discussion
above), this boils down to
\begin{equation}
(c_{h,V^{(-)}}-n_{h,V^{(-)}})+(1-n_{c,V^{(-)}})
+\left\{\begin{array}{l}
-3\\
-1
\end{array}\right.\nonumber
\ge0
\end{equation}
(Let us recall: the upper entry $-3$ stands for contributions $\int c(\partial)^3 h\cdots h$,
the lower entry $-1$ for $\int c(\partial)^5 h\cdots h$ to $V^{(-)}$.)
But we only know for sure that
$(c_{h,V^{(-)}}-n_{h,V^{(-)}})+(1-n_{c,V^{(-)}})\ge 2$.
Hence, if this lower bound can indeed be realized, the upper limit for
$\rho(\gamma)$ would not be allowed in the derivation of the ST. It would however
be allowed for the Green functions constructed out of $\lbrack NP \rbrack^4_4\,$
normal products.
If indeed $\rho(\gamma)=\delta(\gamma)+1$ can not be used then the
IR-subtractions within $\tau(\gamma)$ (\ref{sbtr}) are active i.e.\
UV-subtractions alone would not guarantee convergence. In QED
$\rho(\gamma)=\delta(\gamma)+1$ is allowed, hence by (\ref{rmss}) only
UV-subtractions are active. By contrast, in Yang-Mills (YM) theory, as here, it is not.
Of course, at $s=1$ the dependence on $M$ disappears if the LZ-equation
holds (cf. (\ref{lzv})).
Again, as for the Lagrangian vertices, we can refer also in the present case to
Lowenstein's theorem for convergence in the same sense as above.
\subsection{Slavnov-Taylor identity\label{se:stidentity}}
The ST identity which we have to establish to higher orders takes the same form
as in tree approximation, (\ref{fbrst}), supplemented however
by the $m^2$-dependent gauge fixing, (\ref{cmgf}), and Faddeev-Popov-terms,
(\ref{FaPo}), i.e.
\begin{equation}\label{2fbrst}
\mathcal{S}(\Gamma)\equiv
\int\Big(\frac{\delta\Gamma}{\delta{K}}\frac{\delta\Gamma}{\delta h}
+\frac{\delta\Gamma}{\delta L}\frac{\delta\Gamma}{\delta c}
+b\frac{\delta\Gamma}{\delta\bar{c} }\Big)=0
\end{equation}
\begin{eqnarray}
\label{2cmgf}
\Gamma_{\rm gf}&=&-\frac{1}{2\kappa}\int dxdy\, h^{\mu\nu}(x)
(\partial_\mu b_\nu+\partial_\nu b_\mu)(y)\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)
\frac{1}{(x-y)^2}\Big\rbrace \\
&&-\int\frac{\alpha_0}{2}\eta^{\mu\nu} b_\mu b_\nu\\
\label{2FaPo}
\Gamma_{\phi\pi}&=&-\frac{1}{2}\int dxdy\, \mathdutchcal{s}\,h^{\mu\nu}(x)
(\partial_\mu \bar{c}_\nu
+\partial_\nu \bar{c}_\mu)(y)\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace .
\end{eqnarray}
The $b,\bar{c}$-field equations of motion take now the form
\begin{eqnarray}
\label{2beq}
\frac{\delta \Gamma}{\delta b^\rho}&=&\kappa^{-1}\int
dy\,\partial^\mu h_{\mu\rho}(y)
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace-\alpha_0 b_\rho\\
\label{2ghe}
\frac{\delta\Gamma}{\delta \bar{c}_\rho(x)}&=&
-\int dy\, \kappa^{-1} \partial_\lambda\frac{\delta\Gamma}{\delta K_{\lambda\rho}(y)}
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace .
\end{eqnarray}
Again the $b$-field equation can be integrated trivially back to (\ref{2cmgf}) and
therefore the functional $\bar{\Gamma}$ can be introduced as in the tree approximation
\begin{equation}\label{2Gmmbr}
\Gamma = \Gamma_{\rm gf}+\bar{\Gamma} .
\end{equation}
(\ref{rstc}) is changed into
\begin{equation}\label{2rstc}
\kappa^{-1}\int dy\,
\partial_\lambda\frac{\delta\bar{\Gamma}}{\delta K_{\mu\lambda}(y)}
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace
+\frac{\delta\bar{\Gamma}}{\delta\bar{c}_\mu} =0,
\end{equation}
whereas (\ref{sceH}) becomes
\begin{equation}\label{2sceH}
H_{\mu\nu}(x)=K_{\mu\nu}(x)
+\frac{1}{2}\int dy\,(\partial_\mu\bar{c}_\nu+\partial_\nu\bar{c}_\mu)(y)
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace .
\end{equation}
The relations (\ref{brGm}) are unchanged:
\begin{eqnarray}\label{2brGm}
\mathcal{S}(\Gamma)&=&\frac{1}{2}\mathcal{B}_{\bar{\Gamma}}\bar{\Gamma}=0\\
\mathcal{B}_{\bar{\Gamma}}&\equiv&
\int\Big(
\frac{\delta\bar{\Gamma}}{\delta H}\frac{\delta}{\delta h}
+ \frac{\delta\bar{\Gamma}}{\delta h}\frac{\delta}{\delta H}
+ \frac{\delta\bar{\Gamma}}{\delta L}\frac{\delta}{\delta c}
+ \frac{\delta\bar{\Gamma}}{\delta c}\frac{\delta}{\delta L}\Big) .
\end{eqnarray}
In the BPHZL renormalization scheme the starting point for establishing equations
like the above ones to all orders is a $\Gamma_{\rm eff}$ with which one calculates accordingly subtracted Feynman diagrams.
Here we choose
\begin{equation}\label{Gmmff}
\Gamma_{\rm eff}=\Gamma^{\rm class}_{\rm inv} +\Gamma_{\rm gf}+\Gamma_{\phi\pi}
+\Gamma_{\rm e.f.}+\Gamma_{\rm ct} .
\end{equation}
In addition to (\ref{ivc}),(\ref{clssct}),(\ref{2cmgf}), and (\ref{2FaPo})
one has to take into account the changes caused by the auxiliary mass term
in (\ref{gmps1}) and (\ref{gmps2}).
$\Gamma_{\rm ct}$ will collect counterterms as needed. All these
expressions are to be understood as normal products, i.e.\ insertions into Green
functions with power counting degrees $\delta=\rho=4$.
Starting from $Z$, the generating functional for general Green functions,
and from the definition of $\mathcal{S}$ in (\ref{sma}) we
postulate
\begin{equation}\label{Zbrst}
\mathcal{S}Z=0.
\end{equation}
Then the action principle yields
\begin{equation}\label{acZbrst}
\mathcal{S}Z=\Delta_Z\cdot Z= \Delta_Z +O(\hbar \Delta_Z),
\end{equation}
where $\Delta_Z\equiv[\Delta_Z]^5_5$ is an integrated insertion with
$Q_{\phi\pi}(\Delta_Z)=+1$.
Again, by invoking the action principle one can realize the $b$-field
equation of motion (\ref{2beq}), with (\ref{2rstc}), now on the renormalized
level, as a consequence of (\ref{2fbrst}). This admits (\ref{2brGm}) as a
postulate and results in
\begin{eqnarray}\label{rgheq}
\mathcal{S}(\Gamma)&=&\Delta\cdot\Gamma\\
\frac{1}{2}\mathcal{B}_{\bar{\Gamma}}\bar{\Gamma}&=&\Delta+O(\hbar\Delta) .
\end{eqnarray}
Here $\Delta\equiv [\Delta]_5^5$ with $Q_{\phi\pi}(\Delta)=+1$
does not depend on $b$ and $\bar{c}$. These relations admit a cohomological
treatment, since
\begin{equation}\label{cstc}
\mathcal{B}_{\bar{\Gamma}}\mathcal{B}_{\bar{\Gamma}}\bar{\Gamma} =0, \qquad
\mathcal{B}_{\bar{\Gamma}}\mathcal{B}_{\bar{\Gamma}}=0,
\end{equation}
the latter being true as a necessary condition, if (\ref{2brGm}) is to be
satisfied.
Since in the tree approximation (\ref{2brGm}) holds one has
\begin{equation}\label{2cstc}
\mathdutchcal{b}\Delta=0 \quad {\rm for} \quad \mathdutchcal{b}\equiv \mathcal{B}_{\bar{\Gamma}_{\rm class}}
\qquad {\rm with} \quad \mathdutchcal{b}^2=0
\end{equation}
as the final consistency condition to be solved.
The standard way to solve this cohomology problem is to list contributions to $\Delta$ by starting with terms depending on external fields and then those consisting of elementary fields only, i.e.
\begin{equation}\label{chmlg}
\Delta= \int(K_{\mu\nu}\Delta^{\mu\nu}(h,c)+L_\rho\Delta^\rho(h,c))
+\Lambda(h,c) .
\end{equation}
All terms are insertions compatible with $[...]^5_5$
and $Q_{\phi\pi}=+1$. (Recall that $Q_{\phi\pi}(K)=-1$ and
$Q_{\phi\pi}(L)=-2$.)
In \cite{Barnich:1994kj,Barnich:1995ap} it is shown that all these contributions eventually are $\mathdutchcal{b}$-variations.
This is true even for the $\Lambda$-term.
This means that pure gravity has no anomalies; the solution reads
\begin{equation}\label{fchgrv}
\Delta=\mathdutchcal{b} \hat{\Delta}
\end{equation}
with a $\hat{\Delta}$ which can be absorbed into $\Gamma_{\rm eff}$.
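To make the absorption explicit, here is a minimal sketch of the standard
inductive step (in our notation, assuming the breaking first occurs at order
$\hbar^n$):
\begin{equation*}
\mathcal{S}(\Gamma)=\hbar^n\Delta+O(\hbar^{n+1})\,,\quad
\Delta=\mathdutchcal{b}\hat{\Delta}
\qquad\Longrightarrow\qquad
\Gamma_{\rm eff}\rightarrow\Gamma_{\rm eff}-\hbar^n\hat{\Delta}
\quad\Rightarrow\quad
\mathcal{S}(\Gamma)=O(\hbar^{n+1})\,,
\end{equation*}
since to lowest order the counterterm $-\hbar^n\hat{\Delta}$ shifts
$\mathcal{S}(\Gamma)$ by $-\hbar^n\mathdutchcal{b}\hat{\Delta}$; iterating in $n$
removes the breaking to all orders.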
In the quoted references the algebra leading to this result has been performed by using cohomological methods.
Without power counting and convergence and not within a concrete renormalization scheme, this represents a classical consideration.
In the present context we have, however, supplied it with ``analytic''
information, i.e.\ assured the existence of the relevant quantities as
insertions into existing Green functions.
The result is thus that we have indeed a ST-identity which holds as inserted
into general Green's functions of elementary fields, at non-exceptional momenta
and $s=1$.
Along the lines given in the tree approximation one can now establish the unitarity of the $S$-matrix. It is however clear
that such a construction is to a large extent purely formal, because one has to go on-shell and hits physical IR divergences there in many configurations of incoming and outgoing particles.\\
Let us nevertheless sketch some of the required steps. First of all the matrix of residua $z^{-1}$ becomes relevant. Then like in the tree approximation the state space operator $Q^{\rm BRST}$ can be calculated with the same arguments as there: only linear
terms in the functional transformation contribute. They appear however with factors which have to be shown via some tests on the ST to permit a multiplicative renormalization of the tree approximation charge. With this result one can deduce that the $S$-matrix maps physical states onto physical states.
These physical states have to be constructed in two steps: In the first one a state $|{\rm phys}\rangle$ is called ``physical'' if it is annihilated by $Q^{\rm BRST}$, i.e.
\begin{equation}\label{phsst}
Q^{\rm BRST}|{\rm phys}\rangle=0 .
\end{equation}
This requirement defines a linear subspace in the full indefinite metric Fock space and eliminates states with negative norm.
In the second step one forms equivalence classes of physical states which differ only by the number of particles which generate vanishing norm. The completion of this space of equivalence classes then contains only states with non-zero norm.
On this physical Hilbert space the $S$-matrix is unitary.
It is worthwhile to mention that this construction has been shown to exist rigorously e.g.\ in the context of Yang-Mills theory with complete breakdown of internal symmetry to a completely massive theory \cite{Becchi:1985bd}.
Due to on-shell IR-divergences it is only formally valid in the present case. One can however expect that scattering amplitudes which are not affected by IR-divergences are physically meaningful.
Based on the ST one may construct Green functions of BRST-covariant operators which are independent of gauge parameters
and could then serve as building blocks for observables.
But this will not be covered in this work and is left for future research.
\subsection{Normalization conditions II\label{se:nc2}}
The normalization conditions (\ref{trnorm1})-(\ref{trnorm5}) have to be
modified such
that they are compatible with higher orders of perturbation theory: they
have to be taken at values in momentum space which are consistent with
the subtraction procedure.
They read
\begin{eqnarray}\label{highnorm}
\frac{\partial}{\partial p^2}\,\gamma^{(2)}_{\rm TT\,|{\substack{p^2=0 \\ s=1} }}&=
&c_3\kappa^{-2}\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,
\gamma^{(2)}_{\rm TT\,|{\substack{p^2=-\mu^2\\ s=1} }}&=&-2c_1\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,
\gamma^{(0)}_{\rm TT\,|{\substack{p^2=-\mu^2 \\ s=1} }}
&=&2(3c_2+c_1)\\
\Gamma_{h^{\mu\nu}} &=&-\eta_{\mu\nu}c_0=0\\
\frac{\partial}{\partial p_\sigma}
\Gamma_{K^{\mu\nu}c_\rho|{\substack{p^2=-\mu^2 \\ s=1} }}&=&
-i\kappa(\eta^{\mu\sigma}\delta^\nu_\rho
+\eta^{\nu\sigma}\delta^\mu_\rho
-\eta^{\mu\nu} \delta^\sigma_\rho)\label{highnorm1} \\
\frac{\partial}{\partial p^\lambda}
\Gamma_{{L_\rho}c^\sigma c^\tau|{\substack{p^2=-\mu^2 \\ s=1} }}&=&
-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma}).
\end{eqnarray}
Imposing the $b$-equation of motion (\ref{beq}) still fixes $\alpha_0$
and the $b$-amplitude, whereas (\ref{highnorm1}) again
fixes the $h$- and $K$-amplitudes.
\section{Invariant differential operators and invariant insertions\label{se:invdiffop}}
Here we develop the concept of BRST-invariant differential operators
and their one-to-one counterparts, BRST-invariant insertions.
One can essentially follow the paper \cite{Piguet:1984js} and translate from YM to gravity.
Suppose a model satisfies the WI of a linear transformation
\begin{equation}\label{ltWI}
W^a\Gamma\equiv\int\delta^a\phi\frac{\delta\Gamma}{\delta\phi}=0
\end{equation}
and $\lambda$ is a parameter of the theory (e.g.\ coupling, mass,
normalization parameter) on which the WI-operator $W^a$ does not
depend. Then $\lambda\partial_\lambda$ commutes with $W^a$, i.e.
\begin{equation}\label{sc0}
[\lambda\partial_\lambda,W^a]=0.
\end{equation}
Then the action principle tells us that
\begin{equation}\label{srt}
\lambda\partial_\lambda\Gamma=\Delta_\lambda\cdot\Gamma .
\end{equation}
Applying $W^a$ to (\ref{srt}) and using (\ref{sc0}) we find
\begin{equation}\label{sct}
W^a(\Delta_\lambda\cdot\Gamma)= W^a\Delta_\lambda+O(\hbar\Delta_\lambda)=0,
\end{equation}
which expresses the invariance of $\Delta_\lambda$ under the symmetry
transformation $W^a$: $\lambda\partial_\lambda$ and $\Delta_\lambda$ are
called symmetric with respect to the symmetry $W^a$.
For the $\Gamma$-non-linear BRST-symmetry one has to proceed slightly
differently. We shall call an insertion $\Delta$ BRST-symmetric if to first
order in $\epsilon$
\begin{eqnarray}
\mathcal{S} (\Gamma_\epsilon)&=&O(\epsilon^2) \label{scbrst}\\
{\rm for} \qquad \Gamma_\epsilon&=&\Gamma+\epsilon\Delta\cdot\Gamma
\qquad{\rm with}\qquad \mathcal{S} (\Gamma)=0.
\end{eqnarray}
If $\Delta$ is generated by a differential operator $(\ref{srt})$, this
differential operator will be called BRST-symmetric. Writing (\ref{scbrst})
explicitly we have
\begin{equation}\label{scbrst2}
\mathcal{S}(\Gamma)+\epsilon \mathcal{S}_\Gamma\Delta\cdot\Gamma=O(\epsilon^2)
\end{equation}
\begin{equation}\label{scbrst3}
\mathcal{S}_\Gamma\equiv\int\left(
\frac{\delta\Gamma}{\delta K}\frac{\delta}{\delta h}
+\frac{\delta\Gamma}{\delta h}\frac{\delta}{\delta K}
+\frac{\delta\Gamma}{\delta L}\frac{\delta}{\delta c}
+\frac{\delta\Gamma}{\delta c}\frac{\delta}{\delta L}
+b\frac{\delta}{\delta\bar{c}}\right)
+\chi\frac{\partial}{\partial\alpha_0} ,
\end{equation}
i.e.\ the symmetry condition reads
\begin{equation}\label{scbrst4}
\mathcal{S}_\Gamma\Delta\cdot\Gamma=0 .
\end{equation}
A comment is in order. Although later we shall exclusively work in Landau gauge, we carry here the gauge parameter $\alpha_0$ along as
preparation for the general solution with arbitrarily many parameters $z_{nk}$. This
facilitates the formulation of the general version. Actually relevant at the end
are only the formulae with $\alpha_0=\chi=0$.
The explicit form of $\mathcal{S}_\Gamma$ precisely defines how to perform the
variation of the fields.\footnote{This formula shows that it is not the
demand ``linearity in $\Gamma$'' which determines its form, but rather
the demand ``correct transformation of an insertion''.}
The operator $\mathcal{S}_\Gamma$ is helpful for rewriting the gauge fixing and
$\phi\pi$-contributions to the action \eqref{2cmgf}:
\begin{equation}\label{vfrmgffp}
\Gamma_{\rm gf}+\Gamma_{\phi\pi}
= \mathcal{S}_\Gamma\left(-\frac{1}{2\kappa}\int h^{\mu\nu}(x)
(\partial_\mu\bar{c}_\nu+\partial_\nu\bar{c}_\mu)(y)
\Big\{\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\}
-\int\frac{\alpha_0}{2}\eta^{\mu\nu}\bar{c}_\mu b_\nu\right) .
\end{equation}
(Note: the last term creates a contribution which has not been taken into
account in (\ref{2cmgf}), but is in (\ref{varalph}).)
When going over to $Z$, the generating functional for the general Green
functions,
it is clear that the gauge fixing and $\phi\pi$-term vanish between physical
states, because they are a BRST-variation.
A necessary condition for insertions to be BRST-symmetric is obtained
by acting with $\delta/\delta b$ on (\ref{scbrst}):
\begin{equation}\label{snn}
G\Delta\cdot\Gamma=\mathcal{S}_\Gamma\frac{\delta\Delta\cdot\Gamma}{\delta b},
\qquad G^\rho \equiv\frac{\delta}{\delta\bar{c}_\rho(x)}
+\kappa^{-1}\int dy\,\partial_\lambda\frac{\delta\bar{\Gamma}}{\delta K_{\rho\lambda}(y)}
\Big\{\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\} .
\end{equation}
For $b$-independent insertions $\Delta$ one must ensure the homogeneous
ghost equation
\begin{equation}\label{ghD}
G\Delta\cdot\Gamma = 0 .
\end{equation}
Using the gauge condition
\begin{equation}\label{gc}
\frac{\delta\Gamma}{\delta b_\rho}=-\alpha_0 \eta^{\rho\lambda}b_\lambda
+\kappa^{-1}\int dy\,\partial_\mu h^{\mu\rho}(y)\Big\{\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\} ,
\end{equation}
one can reduce (\ref{snn}) to
\begin{equation}\label{rsc}
\mathcal{B}_{\bar{\Gamma}}\Delta\cdot\Gamma=0 .
\end{equation}
In the tree approximation we have called this operator $\mathdutchcal{b}$.
Our next task is to construct a {\it basis} for all symmetric insertions
of dimension 4, $\phi\pi$-charge 0, and independent of $b_\rho$ -- first in the
tree approximation and then to all orders. A systematic way to find them is
to solve the cohomology problem
\begin{equation}\label{cpr}
\mathdutchcal{b}\Delta=0
\end{equation}
for $\Delta$ satisfying
\begin{eqnarray}\label{cpr2}
\frac{\delta\Delta}{\delta b}=&0,&G\Delta=0\\
{\rm dim}(\Delta)=&4,&Q_{\phi\pi}(\Delta)=0 .
\end{eqnarray}
Here $\mathdutchcal{b}=\mathcal{B}_{\bar{\Gamma}_{\rm class}}$, hence
\begin{eqnarray}\label{lsttrs}
\mathdutchcal{b}&=& \mathdutchcal{s} \qquad{\rm on\, all\, elementary\, fields}\\
\mathdutchcal{b} H_{\mu\nu}&=&\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta h^{\mu\nu}}
=\frac{\delta\Gamma^{\rm class}_{\rm inv}}{\delta h^{\mu\nu}}
-\kappa(H_{\lambda\mu}\partial_\nu c^\lambda
+H_{\lambda\nu}\partial_\mu c^\lambda
+\partial_\lambda(H_{\mu\nu}c^\lambda))\label{lsttrsH}\\
\mathdutchcal{b} L_\rho&=&\frac{\delta{\bar\Gamma}_{\rm cl}}{\delta c^\rho}=
\kappa(2\partial^\lambda H_{\lambda\rho}
+2\partial_{\lambda'}(H_{\rho\lambda}h^{\lambda'\lambda}
+H_{\lambda'\lambda}\partial_\rho h^{\lambda\lambda'}))\\
&&\qquad\quad -\kappa(L_\lambda\partial_\rho c^\lambda
+\partial_\lambda(L_\rho c^\lambda)) . \label{lsttrsL}
\end{eqnarray}
In order to proceed we first separate the $\alpha_0$-dependence
\begin{equation}\label{s0d}
\Delta=\chi \Delta_- +\Delta_0 .
\end{equation}
We now define
\begin{equation}\label{bbr}
\bar{\mathdutchcal{b}}=\left\{\begin{array}{l}
\mathdutchcal{b}\qquad{\rm on}\quad h,c,H,L\\
0\qquad{\rm on}\quad \alpha_0
\end{array}
\right.
\end{equation}
and note that
\begin{equation}\label{bbra}
\partial_{\alpha_0}(\mathdutchcal{b}\psi)=0\qquad{\rm for}\quad \psi=h,c,H,L
\end{equation}
with $\bar{\mathdutchcal{b}}^2=0$, since $\bar{\Gamma}_{\rm cl}$ is independent of $\alpha_0$.
(\ref{cpr}) implies
\begin{equation}\label{spr}
\bar{\mathdutchcal{b}}\Delta_- -\partial_{\alpha_0}\Delta_0=0 \qquad \bar{\mathdutchcal{b}}\Delta_0=0,
\end{equation}
hence
\begin{equation}\label{spr2}
\Delta=\mathdutchcal{b}\hat{\Delta}_- +\hat{\Delta}_0.
\end{equation}
Here $\hat{\Delta}_0$ is $\alpha_0$-independent and $\bar{\mathdutchcal{b}}$-invariant.
Since $\bar{c}$ does not occur, a negative $\phi\pi$-charge can only be
generated by external fields, hence
\begin{equation}\label{lfe}
\hat{\Delta}_- = \int(f_H(\alpha_0)H_{\mu\nu}h^{\mu\nu}
+f_L(\alpha_0)L_\rho c^\rho)
\end{equation}
which is the precise analogue of \cite[(4.19)]{Piguet:1984js},
is certainly a solution. However, in the present case the field $h^{\mu\nu}$
has canonical dimension zero, whereas its counterpart in Yang-Mills theory,
the vector field $A_\mu$ has dimension one. So every function
$\mathcal{F}^{\mu\nu}(h)$ is also a solution. For the time being we continue
with (\ref{lfe}) and discuss the general solution
at a later stage (cf. Sect. \ref{se:generalsolutionSTI}).
It is worth solving the subproblem
\begin{equation}\label{spr3}
\partial_{\alpha_0}\hat{\Delta}_0=0 \qquad \bar{\mathdutchcal{b}}\hat{\Delta}_0=0
\end{equation}
explicitly.
We start listing the contributions to $\hat{\Delta}_0$ ordered by their
external field dependence, i.e.
\begin{equation}\label{Ld}
\hat{\Delta}_0=-f_L(0)\kappa\int L_\rho c^\lambda\partial_\lambda c^\rho
+\cdots({\rm indep.\, of}\, L),
\end{equation}
where $f_L(0)$ is an arbitrary number independent of $\alpha_0$.
With (\ref{lsttrsL}) this term can be rewritten as
\begin{equation}\label{Ld2}
\hat{\Delta}_0=f_L(0)\bar{\mathdutchcal{b}}(\int L_\rho c^\rho)
+\cdots({\rm indep.\, of}\, L)
\end{equation}
and, since $f_L(0)$ does not depend on $\alpha_0$, equally as
\begin{equation}\label{Ld3}
\hat{\Delta}_0= \mathdutchcal{b} \int(f_L(0)L_\rho c^\rho)
+\cdots({\rm indep.\, of}\, L) .
\end{equation}
We next make explicit the $H$-dependence
\begin{equation}\label{Hd}
\hat{\Delta}_0= \mathdutchcal{b} \int(f_L(0) L_\rho c^\rho)
+\int H_{\mu\nu}F_{(+)}^{\mu\nu}(h,c)+\cdots(L,H)-{\rm indep.}
\end{equation}
The postulate (\ref{spr3}) reads
\begin{eqnarray}\label{Hd2}
0&=&\bar{\mathdutchcal{b}}\hat{\Delta}_0=
\int\Big(\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta h}F_{(+)}
-H\bar{\mathdutchcal{b}}F_{(+)}\Big)+(L,H)-{\rm indep.}\\
&=:&-\int H\mathcal{C}F_{(+)}+(L,H)-{\rm indep.}
\end{eqnarray}
and defines a transformation $\mathcal{C}$ as the coefficient of $H$ in
(\ref{Hd}):
\begin{equation}\label{mthcC}
\mathcal{C}F_{(+)}=\bar{\mathdutchcal{b}}F_{(+)}
+\kappa(\partial_\lambda c^\mu F^{\nu\lambda}_{(+)}
+\partial_\lambda c^\nu F^{\mu\lambda}_{(+)}
-c^\lambda\partial_\lambda F^{\mu\nu}_{(+)}).
\end{equation}
This transformation is nilpotent and satisfies, due to (\ref{Hd2}),
\begin{equation}\label{mthcC2}
\mathcal{C}F_{(+)}=0 .
\end{equation}
One solution is
\begin{equation}\label{sF}
F^{\mu\nu}_{(+)}=\mathcal{C}(f_H(0)h^{\mu\nu}).
\end{equation}
Since
\begin{equation}\label{sF2}
\mathcal{C}(h^{\mu\nu})=
\kappa(-\partial^\mu c^\nu-\partial^\nu c^\mu) ,
\end{equation}
it fits correctly with the $H$-dependent part of (\ref{lsttrsH}) in
(\ref{Hd2}).
One thus arrives for this solution at
\begin{equation}\label{sHd3}
\bar{\mathdutchcal{b}}\int f_H(0)H_{\mu\nu}h^{\mu\nu}
=\int H_{\mu\nu}\mathcal{C}(f_H(0)h^{\mu\nu}) ,
\end{equation}
i.e.\ the $H$-dependent part in $\hat{\Delta}_0$ is also a variation.
As mentioned above this is not the most general solution, but that will
be treated later with the analogous outcome.
The remaining contributions to $\hat{\Delta}_0$ depend only on $h$ and must not depend on $\alpha_0$.
The only invariants are the terms appearing in $\Gamma^{\rm class}_{\rm inv}$.
They are not variations, but constitute obstruction terms to the $\bar{\mathdutchcal{b}}$-cohomology.
Altogether we thus have
\begin{equation}
\Delta_0= \mathdutchcal{b} \int(f_L(0)L_\rho c^\rho+f_H(0)H_{\mu\nu}h^{\mu\nu})
+\int\,\sqrt{-g}(\hat{c}_3R+\hat{c}_1R^{\mu\nu}R_{\mu\nu}+\hat{c}_2R^2) .
\end{equation}
(The factors $\hat{c}$ are independent of $\alpha_0$.)
In tree approximation we end up with five invariant insertions of dimension 4
and $\phi\pi$-charge 0, which are independent of $b_\rho$ and satisfy the
ghost equation:
\begin{eqnarray}\label{sns5}
\Delta'_L&=&\mathdutchcal{b}\left(f_L(\alpha_0)\int L_\rho c^\rho\right)\\
\Delta'_H&=&\mathdutchcal{b}\left(f_H(\alpha_0)\int H_{\mu\nu} h^{\mu\nu}\right)\\
\Delta_{c_3}&=&c_3\kappa^{-2}\int\sqrt{-g}R
\quad\, \Delta_{c_1}=c_1\int\sqrt{-g}R^{\mu\nu}R_{\mu\nu}
\quad\, \Delta_{c_2}=c_2\int\sqrt{-g}R^2 .
\end{eqnarray}
(Here we
renamed the couplings of the non-variations.)
In higher orders we may easily define invariant insertions for those which are not variations:
\begin{equation}\label{hghcpls}
\Delta_{c_i}:=c_i\frac{\partial}{\partial c_i}\Gamma\quad
(i=1,2,3 \quad{\rm no\,\, sum}),
\end{equation}
however it is clear that the $(s-1)$-dependent normal products
$c_{31}[\kappa^{-1}m\int\sqrt{-g}R\,]^4_4$ and
$c_{32}\frac{1}{2}[m^2\int\sqrt{-g}R\,]^4_4$
also belong to the basis in higher orders and form part of $\Gamma_{\rm eff}$.
Hence we define them also as invariant by the respective derivation with
respect to their coupling
\begin{equation}\label{gnbss}
\Delta_{c_{31}}:=c_{31}\frac{\partial}{\partial c_{31}}\Gamma \qquad
\Delta_{c_{32}}:=c_{32}\frac{\partial}{\partial c_{32}}\Gamma .
\end{equation}
Accordingly we change the notation $c_3\rightarrow c_{30}$.
The other terms we also try to represent as symmetric {\sl differential}
operators acting on $\Gamma$.\\
We rewrite $\Delta'_L$:
\begin{eqnarray}\label{Ldo}
\Delta'_L= \mathdutchcal{b}\left(f_L(\alpha_0)\int L_\rho c^\rho\right)
&=&\chi f'_L\int Lc
+f_L\int\left(\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta c}c
+L\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta L}\right)\\
&=&\chi f'_L\int Lc
+f_L\int\left(-c\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta c}
+L\frac{\delta\bar{\Gamma}_{\rm cl}}{\delta L}\right)\\
&=&-f_L\mathcal{N}_L\Gamma_{\rm cl}
+\chi f'_L\int Lc,
\end{eqnarray}
where $\mathcal{N}$ denotes a leg-counting operator. This suggests defining
$\Delta_L$ to all orders by
\begin{eqnarray}\label{Ldoh}
\Delta_L\cdot\Gamma&=&f_L(\alpha_0)\mathcal{N}_L\Gamma-\chi f'_L\int Lc,\\
\mathcal{N}_L&\equiv&\int\Big(c\frac{\delta}{\delta c}-L\frac{\delta}{\delta L}\Big)
=N_c-N_L .
\end{eqnarray}
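As a simple illustration of how this counting operator acts (a sketch in our
notation), consider a vertex function with one $L$-leg and two $c$-legs:
\begin{equation*}
\mathcal{N}_L\,\Gamma_{L_\rho c^\sigma c^\tau}
=(N_c-N_L)\,\Gamma_{L_\rho c^\sigma c^\tau}
=(2-1)\,\Gamma_{L_\rho c^\sigma c^\tau}
=\Gamma_{L_\rho c^\sigma c^\tau}\,.
\end{equation*}
This is precisely the eigenvalue which appears later when the parametric
differential equations are tested on the normalization condition (\ref{Lcc1}).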
It is to be noted that the $\chi$-dependent term in (\ref{Ldoh}) is well
defined since $L$ is an external field, hence the expression is linear in the
quantized field $c$. $\Delta_L$ obviously does not depend on $b_\rho$; it
satisfies the ghost equation and fulfills (\ref{rsc}), since it can be
written as
\begin{equation}\label{fdL}
\Delta_L\cdot\Gamma=-\mathcal{B}_{\bar{\Gamma}}\left(f_L\int Lc\right),
\end{equation}
and since $\mathcal{B}_{\bar{\Gamma}}$ is nilpotent. Hence it is a
BRST-symmetric operator to all orders.
Finally we have to extend $\Delta_H'$. We first rewrite it in the form
\begin{equation}\label{Hdo}
\Delta_H'= \mathdutchcal{b}\left(f_H(\alpha_0)\int H_{\mu\nu}h^{\mu\nu}\right)
=f_H N_H\bar{\Gamma}_{\rm cl}-f_HN_H\Gamma_{\rm cl}
+\chi f'_H\int H_{\mu\nu}h^{\mu\nu}.
\end{equation}
Next we go over to $\Gamma_{\rm cl}$ in the variables $K$ and $\bar{c}$:
\begin{eqnarray}\label{Hon}
\Delta_H'&=&f_H(N_h-N_K-N_b-N_{\bar{c}}
+2\alpha_0\partial_{\alpha_0}+2\chi\partial_\chi)\Gamma_{\rm cl}\\
&&+\chi f'_H\Big(\int \Big( Kh-\bar{c}\frac{\delta\Gamma_{\rm cl}}{\delta b}\Big)
+2\alpha_0\frac{\partial}{\partial \chi}\Gamma_{\rm cl}\Big) .
\end{eqnarray}
This suggests as definition of $\Delta_H$ to all orders
\begin{eqnarray}\label{fdH}
\Delta_H\cdot\Gamma:&=&f_H\mathcal{N}_H\Gamma
+\chi f'_H \Big(\int \Big( Kh-\bar{c}\frac{\delta\Gamma_{\rm cl}}{\delta b}\Big)
+2\alpha_0\frac{\partial}{\partial \chi}\Gamma\Big)\\
\mathcal{N}_H&\equiv& N_h-N_K-N_b-N_{\bar{c}}
+2\alpha_0\partial_{\alpha_0}+2\chi\partial_\chi.
\end{eqnarray}
Or else
\begin{equation}\label{fdHa}
\Delta_H\cdot\Gamma :=\mathcal{S}_\Gamma\left(f_H(\alpha_0)
\Big(\int \Big(Kh-\bar{c}\frac{\delta\Gamma}{\delta b} \Big)
+2\alpha_0\frac{\partial\Gamma}{\partial\chi}\Big)\right).
\end{equation}
In view of
\begin{equation}\label{npS}
\mathcal{S}_\Gamma \mathcal{S}_\Gamma=0
\end{equation}
for all $\Gamma$ with $\mathcal{S}(\Gamma)=0$, $\Delta_H$ is BRST-symmetric once we
have verified that it is independent of $b_\rho$ and satisfies the ghost
equation.\\
\begin{equation}\label{chbi}
\frac{\delta}{\delta b}(\Delta_H\cdot\Gamma)=0
\end{equation}
is readily checked in the form (\ref{fdH}).
\begin{equation}\label{chgh}
G(\Delta_H\cdot\Gamma)=0
\end{equation}
is best checked in the form (\ref{fdHa}) by observing that
\begin{equation}\label{hpch}
G\left(\int\Big(Kh-\bar{c}\frac{\delta\Gamma}{\delta b}\Big)
+2\alpha_0\frac{\partial\Gamma}{\partial\chi}\right)=0,
\end{equation}
and
\begin{equation}\label{ghsg}
\{G,\mathcal{S}_\Gamma\}=0
\end{equation}
(this latter property being due to $G\Gamma=-1/2 \chi b$).
To summarize, in compact notation we denote the above symmetric differential
operators by
\begin{equation}\label{tns}
\nabla_i\in \{c_1\partial_{c_1}, c_2\partial_{c_2},
c_{30}\partial_{c_{30}}, c_{31}\partial_{c_{31}}, c_{32}\partial_{c_{32}},
\mathcal{N}_H, \mathcal{N}_L \}
\end{equation}
and have with (\ref{hghcpls}),(\ref{gnbss}), (\ref{fdL}), and (\ref{fdHa})
defined a basis of symmetric insertions to all orders by
\begin{equation}\label{gnp}
\nabla_i\Gamma \stackrel{.}{=} \Delta_i\cdot\Gamma.
\end{equation}
The fact that symmetric differential operators and symmetric insertions
are in one-to-one correspondence just means that adding symmetric
counterterms $\Delta_i$ to $\Gamma$ is renormalizing the corresponding
quantity $i$ indicated by $\nabla_i$ of the theory. Fixing the arbitrary
parameters in the symmetric insertions (\ref{sns5}) is again performed by
satisfying normalization conditions and the present analysis shows
that the conditions (\ref{highnorm}) are appropriate. In higher orders
the Euclidean point $-\mu^2$ is relevant. $\alpha_0=0$ and $\chi=0$
are to be chosen now.
Once one has satisfied these normalization conditions the theory is completely fixed.
\section{Removing auxiliary mass dependence via Zimmermann Identities\label{se:removingauxmassZI}}
Above we have introduced among the symmetric insertions several which depend
on the auxiliary mass. Here we study to what extent they can be effectively removed by using ZI's \cite{Zimmermann:1972te}.
\subsection{Shift\label{se:shift}}
In (\ref{ivce}) we replaced $c_3\kappa^{-2}$ within
$\gamma^{(r)}_{KL}$, $r=2$, $K=L=T$, by
\begin{equation}
c_3\kappa^{-2} \rightarrow c_{30}\kappa^{-2}+m\kappa^{-1}c_{31}
+\frac{1}{2}m^2 c_{32}\,,
\end{equation}
where $m\equiv M(s-1)$.
On the level of symmetric insertions this replacement corresponds to enlarging the basis of naively BRST-invariant insertions with $\rho=\delta=4$
by $c_{31}m\kappa^{-1}\int\sqrt{-g}R$ and
$c_{32}\frac{1}{2} m^2\int\sqrt{-g}R$, which are to be taken into account in
$\Gamma_{\rm eff}$.\\
Then the question is whether one can eliminate the $m$-terms via ZI's and
maintain invariance. The sought invariant $[...]^4_4$ insertions are
defined to all orders as symmetric insertions via the invariant derivatives
\begin{eqnarray}\label{smns}
\big[\kappa^{-2}\int\sqrt{-g}R\,\big]^{4}_{4}&=&
\frac{\partial}{\partial c_{30}}\Gamma\\
\big[\kappa^{-1}\int\sqrt{-g}m R\,\big]^{4}_{4}&=&
\frac{\partial}{\partial c_{31}}\Gamma\\
\big[\int\sqrt{-g}\frac{1}{2}m^2R\,\big]^4_4&=&
\frac{\partial}{\partial c_{32}}\Gamma
\end{eqnarray}
and the symmetric counting operators $\mathcal{N}_{H,L}$.
The relevant ZI's have the form
\begin{eqnarray}\label{sZ}
\big[\kappa^{-2}\int\sqrt{-g}R\,\big]^{4}_{4}&=&
\big[\kappa^{-2}\int\sqrt{-g}R\,\big]^{3}_{3}+[...]^4_4\label{sZ0}\\
{\rm with}\quad [...]^4_4&=&[\int\,\sqrt{-g}(\kappa^{-2}u_{0}R
+u_{31}m\kappa^{-1}R+u_{32}\frac{1}{2}m^2R\nonumber\\
&& + u_1 R^{\mu\nu}R_{\mu\nu}+u_2R^2)+u_h\,\mathcal{N}_H+u_c\,\mathcal{N}_L]^4_4\\
\big[\kappa^{-1}\int\sqrt{-g}mR\,\big]^{4}_{4}&=&
m\big[\kappa^{-1}\int\sqrt{-g}R\,\big]^{3}_{3}+[...]^4_4
\label{sZ1}\\
{\rm with}\quad [...]^4_4&=&[\int\,\sqrt{-g}(\kappa^{-2}v_{30}R
+v_0m\kappa^{-1}R+v_{31}\frac{1}{2}m^2R\nonumber\\
&& + v_1 R^{\mu\nu}R_{\mu\nu}+v_2R^2)
+v_h\,\mathcal{N}_H+v_c\,\mathcal{N}_L]^4_4
\end{eqnarray}
and
\begin{eqnarray}
\big[\int\sqrt{-g}\frac{1}{2}m^2 R\,\big]^4_4&=&
m\big[\int\sqrt{-g}\frac{1}{2}mR\,\big]^3_3+[...]^4_4
\label{sZ2}\\
{\rm with}\quad [...]^4_4&=&[\int\,\sqrt{-g}(\kappa^{-2}w_{30}R
+w_{31}m\kappa^{-1}R+w_0\frac{1}{2}m^2R\nonumber\\
&& + w_1 R^{\mu\nu}R_{\mu\nu}+w_2R^2)
+w_h\,\mathcal{N}_H+w_c\,\mathcal{N}_L]^4_4 .
\end{eqnarray}
All coefficients $u,v,w$ are of order $\hbar$. The terms multiplied by
$u_0,v_0,w_0$, respectively, are absorbed on the corresponding lhs,
and the corresponding line is then divided by $1-u_0,1-v_0,1-w_0$, such that the
normal products on the rhs carry the factors $(1-u_0)^{-1}, (1-v_0)^{-1}, (1-w_0)^{-1}$
in the respective line. From this representation it is then obvious that all
$[...]^3_3$ insertions on the rhs are symmetric, because all other insertions
are symmetric. Since the relevant determinant in this linear system of
equations is clearly non-vanishing, one can solve for all hard insertions
$[\int\sqrt{-g}R(\kappa^{-2}, m\kappa^{-1},\frac{1}{2}m^2)]^4_4$ in terms of
the soft ones together with $(c_1,c_2,\mathcal{N}_{H,L})$-terms. But those soft insertions
which contain the factor $m$ vanish at $s=1$, hence all hard $m$-dependent
insertions have been eliminated. And the hard insertion
$[\kappa^{-2}\int\sqrt{-g}R]^4_4$ has been effectively replaced by its soft
counterpart.
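Schematically, and merely as an illustration of this argument (the precise
coefficients are those in (\ref{sZ0})--(\ref{sZ2})), the three identities form
a linear system
\begin{equation*}
\vec{N}_4=\vec{N}_3+U\,\vec{N}_4+\dots\,,\quad U=O(\hbar)
\qquad\Longrightarrow\qquad
\vec{N}_4=(1-U)^{-1}\big(\vec{N}_3+\dots\big)\,,\quad
\det(1-U)=1+O(\hbar)\neq0\,,
\end{equation*}
where $\vec{N}_4$ collects the hard insertions, $\vec{N}_3$ the soft ones, the
dots stand for the $(c_1,c_2,\mathcal{N}_{H,L})$-terms, and the inversion is
understood as a formal power series in $\hbar$.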
These considerations are crucial for deriving the parametric
differential equations in symmetric form and without $m$-dependence at $s=1$.
\subsection{Push\label{se:push}}
Next we consider the problem of removing Push by using appropriate ZI's.
First we treat the contributions of Push to $\Gamma^{\rm class}_{\rm inv}$ (cf. (\ref{mcoffs})).
They occur in the second power of $h$ and have the form (see (\ref{gmps1}), (\ref{gmps2}))
\begin{equation}\label{pshnv}
\Gamma_{(hh)}(m^2)=
\int\,h^{\mu\nu}(m^2\hat{\gamma}^{(2)}_{\rm TT}P^{(2)}_{\rm TT}
+m^2\hat{\gamma}^{(0)}_{\rm TT}P^{(0)}_{\rm TT})_{\mu\nu\rho\sigma}
h^{\rho\sigma}.
\end{equation}
In higher orders we have just the same terms, but now to be understood as
normal products $[...]^4_4$ in $\Gamma_{\rm eff}$. We use the ZI
\begin{multline}\label{pZi}
[\int\,h^{\mu\nu}(m^2\hat{\gamma}^{(2)}_{\rm TT}P^{(2)}_{\rm TT}
+m^2\hat{\gamma}^{(0)}_{\rm TT}P^{(0)}_{\rm TT})_{\mu\nu\rho\sigma}
h^{\rho\sigma}]^4_4\cdot\Gamma_{(hh)}
\phantom{M(s-1)} \\
=M(s-1)[\int\,h^{\mu\nu}(m\hat{\gamma}^{(2)}_{\rm TT}P^{(2)}_{\rm TT}
+m\hat{\gamma}^{(0)}_{\rm TT}P^{(0)}_{\rm TT})_{\mu\nu\rho\sigma}
h^{\rho\sigma}]^3_3\cdot\Gamma_{(hh)}\\
+[{\rm corr.s}]^4_4\cdot\Gamma_{(hh)}.
\end{multline}
Here the $\hat{\gamma}$'s are interpreted as differential operators and
$m\equiv M(s-1)$ is to be recalled.
The corrections comprise first of all the starting term from the lhs with
a coefficient $q=O(\hbar)$. We bring it to the lhs and divide by $1-q$.
This yields
\begin{multline}\label{pZi2}
[\int\,h^{\mu\nu}(m^2\hat{\gamma}^{(2)}_{\rm TT}P^{(2)}_{\rm TT}
+m^2\hat{\gamma}^{(0)}_{\rm TT}P^{(0)}_{\rm TT})_{\mu\nu\rho\sigma}
h^{\rho\sigma}]^4_4\cdot\Gamma_{(hh)}
\phantom{M(s-1)}\\
= \frac{M(s-1)}{1-q}
[\int\,h^{\mu\nu}(m\hat{\gamma}^{(2)}_{\rm TT}P^{(2)}_{\rm TT}
+m\hat{\gamma}^{(0)}_{\rm TT}P^{(0)}_{\rm TT})_{\mu\nu\rho\sigma}
h^{\rho\sigma}]^3_3\cdot\Gamma_{(hh)}\\
+\frac{1}{1-q}[{\rm corr.s}]^4_4\cdot\Gamma_{(hh)}.
\end{multline}
As correction terms the $hh$-vertex functions with all
$[...]^4_4$-insertions appear. We can now demand $\mathdutchcal{s}_0$-invariance because this
is a linear transformation. Among the $\hat{\gamma}^{(r)}_{\rm K,L}$-
contributions precisely those with $r=2,0; K=L=T$ are $\mathdutchcal{s}_0$-invariant (see App.\ B),
hence they have been absorbed already. The other contributions go with the
symmetric differential operators $\mathcal{N}_{\rm H,L}$. These are however
BRST-variations and thus vanish between physical states. Therefore this
part of Push does not contribute to physical quantities at $s=1$.
The second (and last) appearance of Push is within the gauge fixing and
$\phi\pi$-terms:
\begin{eqnarray}\label{pshgf}
(\Gamma_{\rm gf}+\Gamma_{\phi\pi})(m^2)&=&
-\frac{1}{2}\int \Big( \frac{1}{\kappa}h^{\mu\nu}(x)
(\partial_\mu b_\nu+\partial_\nu b_\mu)(y)\frac{m^2}{(x-y)^2}\nonumber\\
&&\qquad+ D^{\mu\nu}_\rho c^\rho(x)(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)(y)\frac{m^2}{(x-y)^2} \Big)\nonumber\\
&=&-\frac{1}{2}\int\, \mathdutchcal{s}_\Gamma \Big(h^{\mu\nu}(x)(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)(y)\frac{m^2}{(x-y)^2} \Big).
\end{eqnarray}
The product in the last line is point split in $(x\leftrightarrow y)$.
Divergences can be
developed at coinciding points in such a way that they can be controlled by a ZI
\begin{eqnarray}\label{dcs}
[h^{\mu\nu}(x)(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)(y)m^2]^4_4\cdot\Gamma&=&
m[h^{\mu\nu}(x)(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)(y)m]^3_3\cdot\Gamma\nonumber\\
&&+\,[{\rm corr.s}]^4_4\cdot\Gamma .
\end{eqnarray}
Among the corrections, again, appears the normal product of the lhs, which can
be absorbed there, such that on the rhs only all other insertions
of dimension 4 and $\phi\pi$-charge $-1$ show up. These are
$K_{\mu\nu}h^{\mu\nu},L_\rho c^\rho$ which are both naively defined because
they are linear in the quantized fields. At $s=1$ they are the only surviving
terms which contribute in (\ref{pshgf}) and then eventually vanish after
integration between physical states.
\section{The invariant parametric differential equations\label{se:invparadiffeq}}
\subsection{The Lowenstein-Zimmermann equation\label{se:LZeq}}
Green functions must be independent of the auxiliary mass $M$ at $s=1$, so
one has to know the action of $M\partial_M$ on them. Since the ST-identity
does not depend on $M$, $M\partial_M$ is a BRST-invariant differential
operator and can be expanded in the basis provided by $(\ref{tns})$.
In fact with the ZI's (\ref{sZ1}) and (\ref{sZ2}) and the discussion
there we can consider the basis of symmetric differential operators to
be given by $c_{30} \partial_{c_{30}}, c_1 \partial_{c_1}, c_2\partial_{c_2}$ complemented
with the symmetric counting operators $\mathcal{N}_{H,L}$.
Furthermore we have shown that the contributions
coming from Push (\ref{pZi2}) and the contributions from Shift
go at most into the symmetric counting operators. Hence
\begin{equation}\label{sMdM}
M\partial_M\Gamma=(-\beta^{\rm LZ}_{30} c_{30} \partial_{c_{30}}
-\beta^{\rm LZ}_{1} c_1 \partial_{c_{1}}
-\beta^{\rm LZ}_{2} c_2 \partial_{c_{2}}
+\gamma^{\rm LZ}_h\mathcal{N}_H+\gamma^{\rm LZ}_c\mathcal{N}_L)\Gamma .
\end{equation}
The coefficient functions $\beta,\gamma$ can be determined by
testing on the normalization conditions.
The test on \eqref{sMdM} involving external fields
\begin{equation}\label{Lcc1}
\frac{\partial}{\partial p^\lambda}\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1} }
=-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma})
\end{equation}
implies
\begin{equation}\label{Lccp}
M\partial_M\,\partial_p \Gamma_{Lcc}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
-\gamma^{\rm LZ}_c(\partial_p \Gamma_{Lcc}\,|_{\substack{ p^2=-\mu^2 \\ s=1}})=0 .
\end{equation}
Since the $M$-derivative in the first term is not in conflict with going
to the argument of $\Gamma$, the first term vanishes and hence
$\gamma^{\rm LZ}_c=0$.
Quite analogously we may proceed for
\begin{equation}\label{Kc1}
\frac{\partial}{\partial p^\sigma}\Gamma_{K^{\mu\nu}c_\rho}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
=-i\kappa(\eta^{\mu\sigma}\delta_\rho^\nu+\eta^{\nu\sigma}\delta_\rho^\mu
-\eta^{\mu\nu}\delta_\rho^\sigma).
\end{equation}
Here this test on (\ref{sMdM}) yields
\begin{equation}\label{Kcp}
M\partial_M\,\partial_p \Gamma_{Kc}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
-\gamma^{\rm LZ}_h (-\partial_p \Gamma_{Kc}\,|_{\substack{ p^2=-\mu^2 \\ s=1}})
-\gamma^{\rm LZ}_c (-\partial_p \Gamma_{Kc}\,|_{\substack{ p^2=-\mu^2 \\ s=1}})=0 .
\end{equation}
With the same argument as before, $\gamma^{\rm LZ}_c=0$ and
$\gamma^{\rm LZ}_h=0$ follow.
For obtaining the $\beta$-functions we use the normalization conditions
(\ref{highnorm}) for $\gamma^{(2)}_{\rm TT}$ and $\gamma^{(0)}_{\rm TT}$.
The test
\begin{equation}\label{hTT3}
\frac{\partial}{\partial p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=0 \\ s=1}}
=c_{30}\kappa^{-2}
\end{equation}
implies
\begin{equation}\label{hTT3p}
M\partial_M\frac{\partial}{\partial p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=0 \\ s=1}}
+c_{30}\kappa^{-2}\beta^{\rm LZ}_{30}=0 .
\end{equation}
Since the normalization does not involve $M$, the first term is zero, hence
$\beta^{\rm LZ}_{30}=0$.
It is clear that the other $\beta$-functions vanish too.
Hence at $s=1$ the LZ-equation
\begin{equation}\label{lzv}
M\partial_M \Gamma|_{s=1} =0
\end{equation}
holds and reveals that the vertex functions are independent of $M$ at $s=1$.
\subsection{The Renormalization Group equation\label{se:RGeq}}
The RG-equation formulates the response of the system to the variation of
the normalization parameter $\mu$, (see (\ref{highnorm})),
where e.g.\ couplings or field amplitudes are defined. Since the ST-operator does not depend on $\mu$, the partial differential operator
$\mu\partial_\mu$ is symmetric and can be expanded in the basis
(\ref{tns}). Quite analogously to the LZ-equation (by removing Push and Shift) we end up with
\begin{equation}\label{rg1}
\mu\partial_\mu\Gamma_{|s=1}=
(-\beta^{\rm RG}_{30} c_{30} \partial_{c_{30}}
-\beta^{\rm RG}_{c_1} c_1 \partial_{c_1}
-\beta^{\rm RG}_{c_2} c_2 \partial_{c_2}
+\gamma^{\rm RG}_h\,\mathcal{N}_H
+\gamma^{\rm RG}_c\,\mathcal{N}_L)\Gamma_{|s=1} \, .
\end{equation}
We observe that some normalization conditions involve $\mu$,
hence performing derivatives wrt $\mu$ does not commute with choosing
arguments for the relevant vertex functions and we expect non-trivial
coefficient functions.
Again we start with those tests which involve external fields, i.e.
\begin{equation}\label{Lcc2}
\frac{\partial}{\partial p^\lambda}\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
=-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma}).
\end{equation}
Now $\mu\partial_\mu$ does not commute with choosing a $\mu$-dependent
argument, hence
\begin{equation}
\mu\partial_\mu\frac{\partial}{\partial p^\lambda}
\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
+i\gamma^{\rm RG}_c\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma})=0
\end{equation}
which determines $\gamma^{\rm RG}_c$.
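Explicitly, this determination amounts to projecting onto the tree-level
tensor structure; as a sketch (with $\Pi$ denoting this projection, a notation
introduced only here):
\begin{equation*}
\gamma^{\rm RG}_c=\frac{i}{\kappa}\,
\Pi\Big(\mu\partial_\mu\frac{\partial}{\partial p^\lambda}
\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}\Big)\,,
\end{equation*}
where $\Pi$ extracts the coefficient of
$\delta^\rho_\sigma\eta_{\lambda\tau}-\delta^\rho_\tau\eta_{\lambda\sigma}$.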
For the normalization condition
\begin{equation}\label{Kc2}
\frac{\partial}{\partial p^\sigma}\Gamma_{K^{\mu\nu}c_\rho}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
=-i\kappa(\eta^{\mu\sigma}\delta_\rho^\nu+\eta^{\nu\sigma}\delta_\rho^\mu
-\eta^{\mu\nu}\delta_\rho^\sigma)
\end{equation}
the structure is exactly the same as in the preceding
example such that the result is
\begin{equation}
\mu\partial_\mu\frac{\partial}{\partial p^\sigma}\Gamma_{K^{\mu\nu}c_\rho}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
+(\gamma^{\rm RG}_c-\gamma^{\rm RG}_h)i\kappa(\eta^{\mu\sigma}\delta_\rho^\nu
+\eta^{\nu\sigma}\delta_\rho^\mu-\eta^{\mu\nu}\delta_\rho^\sigma)=0.
\end{equation}
This equation gives $\gamma^{\rm RG}_h$.
The $\beta$-functions will be determined by the normalization conditions
for the couplings.
The normalization condition
\begin{equation}\label{c3n}
\partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=0 \\ s=1}}
=c_{30}\kappa^{-2}
\end{equation}
is independent of $\mu$, hence it implies
\begin{equation}
\mu\partial_\mu \partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=0 \\ s=1}}
=0=-\beta^{\rm RG}_{30}c_{30}\kappa^{-2}
+2c_{30}\kappa^{-2}\gamma^{\rm RG}_h.
\end{equation}
This determines $\beta^{\rm RG}_{30}=2\gamma^{\rm RG}_h$.
The other normalization conditions, however, depend on $\mu$ and thus result in
\begin{eqnarray}
\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
&=&2c_1\beta^{\rm RG}_1-2c_1\gamma^{\rm RG}_h\\
\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(0)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
&=&-6c_2\beta^{\rm RG}_2+2c_1\beta^{\rm RG}_1
+2(3c_2-c_1)\gamma^{\rm RG}_h.
\end{eqnarray}
These equations determine $\beta^{\rm RG}_1,\beta^{\rm RG}_2$. These coefficient functions depend on the product $\mu\kappa$. Since we work in Landau gauge, they do not depend on a gauge parameter.
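For orientation, solving the last two relations explicitly (a straightforward
rearrangement of the equations above) gives
\begin{eqnarray*}
\beta^{\rm RG}_1&=&\gamma^{\rm RG}_h
+\frac{1}{2c_1}\,\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}\\
\beta^{\rm RG}_2&=&\frac{1}{6c_2}\Big(2c_1\beta^{\rm RG}_1
+2(3c_2-c_1)\gamma^{\rm RG}_h
-\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(0)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}\Big)\,,
\end{eqnarray*}
which also exhibits why non-vanishing $c_1$ and $c_2$ are required for these
expressions to be well defined.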
\subsection{The Callan-Symanzik equation\label{se:CSeq}}
The CS-equation describes the response of the system to
the variation of all parameters carrying the dimension of mass,
here $M$, $\mu$, and $\kappa$. The variation
of $M$ has been covered by the LZ-equation with the result that Green
functions do not depend on it at $s=1$. The variation of $\mu$ has been
treated as well. As far as $\kappa$ is concerned
it is crucial to observe that the ST-identity depends on it, hence it
does not per se give rise to a symmetric differential operator. However
acting with $-\kappa\partial_\kappa$ on $\Gamma^{\rm class}$ we find
\begin{equation}\label{smsch}
-\kappa\partial_\kappa \Gamma^{\rm class}
= (2c_3\partial_{c_3} +(N_b-2\alpha_0\partial_{\alpha_0})
-N_K-N_L)\Gamma^{\rm class}.
\end{equation}
Hence the combination
\begin{equation}\label{skpp}
-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}
-(N_b-2\alpha_0\partial_{\alpha_0})
+N_K+N_L
\end{equation}
is independent of $\kappa$ on $\Gamma^{\rm class}$: the variation of
$\kappa$ is just balanced by the other derivatives. This combination forms a
differential operator which commutes with the ST-identity and thus is
symmetric.
In higher orders we can therefore expand this operator in the basis
$(\ref{tns})$ and obtain
\begin{multline}\label{cs1}
(-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}
-(N_b-2\alpha_0\partial_{\alpha_0})
+N_K+N_L)\Gamma_{|s=1}=\\
(-\beta_{30} c_{30} \partial_{c_{30}}
-\beta_{c_1} c_1 \partial_{c_1}
-\beta_{c_2} c_2 \partial_{c_2}
+\gamma_h\,\mathcal{N}_H
+\gamma_c\,\mathcal{N}_L)\Gamma_{|s=1},
\end{multline}
where the contributions going with the variation of $c_{31},c_{32}$
have been eliminated with the ZI's (\ref{sZ1}) and (\ref{sZ2}).
As for the LZ equation \eqref{sMdM} the coefficient functions vanish, since the normalization conditions and the differential operator are not in conflict with each other, i.e.\
\begin{align}\label{cs2}
(-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}
-(N_b-2\alpha_0\partial_{\alpha_0})
+N_K+N_L)\Gamma_{|s=1}= 0 \, .
\end{align}
We eliminate in the RG-equation (\ref{rg1}) the hard insertion
$c_{30}\partial_{c_{30}}$ and add the result to (\ref{cs2}) obtaining the
CS-equation in its conventional form
\begin{multline}\label{cs3}
(\mu\partial_\mu-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}
-(N_b-2\alpha_0\partial_{\alpha_0})+N_K+N_L
+\beta^{\rm CS}_1 c_1 \partial_{c_1}\\
+\beta^{\rm CS}_2 c_2 \partial_{c_2}
-\gamma^{\rm CS}_h\mathcal{N}_H
-\gamma^{\rm CS}_c\mathcal{N}_L)\Gamma_{|s=1}=
\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3\cdot\,\Gamma_{|s=1}.
\end{multline}
The coefficient functions are of order $O(\hbar)$. Their values have to be
determined by testing on the normalization conditions and taking care of
the soft contribution.
The differential operator can be interpreted as a symmetrized version of the dilatations. The equation then says that in the deep Euclidean region the soft breaking on the rhs becomes negligible, while the hard breaking is parametrized by the functions $\beta$ and $\gamma$.
Between physical states only the $\beta$'s would be relevant.
Before testing on \eqref{cs3}, we have to note that all coefficient functions start with
order $O(\hbar)$. This is clear for $\beta$'s and $\gamma$'s because they
were introduced via the action principle after having applied the
symmetric differential operator to $\Gamma$. But contrary to more conventional models this is here also true for $\alpha^{\rm CS}$, because it was traded against the hard insertion $[\int\sqrt{-g}R]^4_4$.
This has to do with the special character of the symmetric differential
operator and the $\kappa$-dependence of $\Gamma$: The EH action depends on $\kappa$ which carries dimension, but acts as a mass
term only relative to the higher derivative terms.
We test on
\begin{equation}\label{Lcc3}
\frac{\partial}{\partial p^\lambda}\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
=-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma}).
\end{equation}
In order to understand the impact of the symmetric differential operator we start with the tree approximation and find
\begin{equation}\label{cstr}
(-\kappa\partial_\kappa +1)
\frac{\partial}{\partial p^\lambda}\Gamma^{(0)}_{L_\rho c^\sigma c^\tau}=0 ,
\end{equation}
which is correct, since $\mu\partial_\mu-2c_{30}\partial_{c_{30}}$
does not contribute and from counting operators only $N_L$ does.
In higher orders $\mu\partial_\mu$ no longer commutes with going to the
desired value for $p$, whereas $-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}+N_L$ does, hence
\begin{eqnarray}\label{Lcch}
\mu\partial_\mu
\frac{\partial}{\partial p^\lambda}\Gamma_{L_\rho c^\sigma c^\tau}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
&-&\gamma^{\rm CS}_c(-i)\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma})\nonumber\\
&=&\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3\cdot\,
\partial_{p^\lambda}{\Gamma_{L_\rho c^\sigma c^\tau}}_{|{\substack{ p^2=-\mu^2 \\ s=1}}}.
\end{eqnarray}
Herewith $\gamma^{\rm CS}_c$ is determined. (The $\alpha$-term does not contribute before two loops, since we are concerned with
1PI diagrams.)
We test on
\begin{equation}\label{Kc3}
\frac{\partial}{\partial p^\sigma}\Gamma_{K^{\mu\nu}c_\rho}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
=-i\kappa(\eta^{\mu\sigma}\delta_\rho^\nu+\eta^{\nu\sigma}\delta_\rho^\mu
-\eta^{\mu\nu}\delta_\rho^\sigma)
\end{equation}
and, again, because also the term $-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}}+N_K$ commutes with going to a specific value of $p$, we find in higher orders
\begin{eqnarray}\label{cscffK}
\mu\partial_\mu
\frac{\partial}{\partial p^\sigma}\Gamma_{K^{\mu\nu}c_\rho}
\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
&-&(\gamma^{\rm CS}_h-\gamma^{\rm CS}_c)i\kappa(\eta^{\mu\sigma}\delta_\rho^\nu
+\eta^{\nu\sigma}\delta_\rho^\mu-\eta^{\mu\nu}\delta_\rho^\sigma)\nonumber\\
&=&\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3\cdot\,
\frac{\partial}{\partial p^\sigma}{\Gamma_{K^{\mu\nu}c^\rho}}_{|s=1} .
\end{eqnarray}
This yields eventually $\gamma^{\rm CS}_h$.
With the same argument $\alpha^{\rm CS}$ and the $\beta^{\rm CS}_{1,2}$ are given by
\begin{equation}
\mu\partial_\mu
\partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
-2c_{30}\kappa^{-2}\gamma^{\rm CS}_h
=\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3
\cdot\,\mathbb{P}^{(2)}_{30}\Gamma_{|s=1}\label{phccs}
\end{equation}
\begin{multline}
\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(2)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
-2c_1\beta^{\rm CS}_1-2c_1\gamma^{\rm CS}_h
=\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3
\cdot\,\mathbb{P}^{(2)}_1\Gamma_{|s=1}\label{bc1}
\end{multline}
\begin{multline}
\mu\partial_\mu
\partial_{p^2}\partial_{p^2}\gamma^{(0)}_{\rm TT}\,|_{\substack{ p^2=-\mu^2 \\ s=1}}
+6c_2\beta^{\rm CS}_2-2c_1\beta^{\rm CS}_1
+2(3c_2-c_1)\gamma^{\rm CS}_h\\
=\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R\,]^3_3
\cdot\,\mathbb{P}^{(0)}_2\Gamma_{|s=1}.\label{bc2}
\end{multline}
(\ref{phccs}) determines $\alpha^{\rm CS}$ and (\ref{bc1}), (\ref{bc2}) determine
$\beta^{\rm CS}_{1,2}$, respectively.
The symbols $\mathbb{P}$ stand for projectors of $\Gamma_{hh}$ into the components
\begin{equation}
\mathbb{P}_{30}\to\partial_{p^2} \gamma^{(2)}_{\rm TT}(p^2)\qquad
\mathbb{P}_1\to\partial_{p^2}\partial_{p^2} \gamma^{(2)}_{\rm TT}(p^2)\qquad
\mathbb{P}_2\to\partial_{p^2}\partial_{p^2} \gamma^{(0)}_{\rm TT}(p^2).
\end{equation}
These are part of the full vertex functions of higher orders. Clearly
those admit also the expansion in the projector basis as in the
classical approximation.
Also the coefficient functions of the CS-equation depend only on
$\mu\kappa$, besides the parameters $c_1,c_2,c_{30}$.
\section{Traces of the Einstein-Hilbert Theory\label{se:removingregulators}}
It has already been observed in \cite{Stelle} that the introduction of $R^{\mu\nu} R_{\mu\nu}$ and $R^2$ in the classical action leads to a regularization of the $h$-field propagator analogous to the Pauli-Villars regularization (cf.\ \cite{Bogolyubov:1980nc}). This regularization is not sufficient to render
the model finite, but the model becomes power counting renormalizable. This implies
that all standard tools of, say, BPHZL renormalization become available.
Furthermore, the BPHZL renormalization scheme may be formulated with such regularization, but has been shown to be independent of it \cite{Zimmermann:1975gk,Clark:1976ym} provided the regulator-free model is finite.
Unsurprisingly, it can be shown that in our construction the limit $c_1, c_2 \rightarrow 0$ exists up to one-loop diagrams so that the result of \cite{tHooft:1974toh} can be recovered.
For diagrams of higher loop order, new divergencies occur which are not treated by the subtractions in the BPHZL scheme.
Those additional divergencies can be verified by setting the UV-degrees in \eqref{dgpr} and \eqref{dgpr2} equal to $-2$ and subsequently following the argument in Sect. \ref{se:powercounting} with these new degrees.
This just means for our work that beyond one loop we have to take non-vanishing parameters $c_1$ and $c_2$ and have to examine in which sense we find the EH theory in our model.
\subsection{Projection to Einstein-Hilbert\label{se:projEH}}
We still have to check in some detail how the $S$-matrix (\ref{sma}) is affected by this limit. The factor
$K(x-y)$ is the wave operator of the free theory, hence given by
$\Gamma^{(0)}_{\Phi_i\Phi_j}$ (recall that the fields $\Phi$ are the free
$\Phi_{\rm in}$ fields). At $c_1=c_2=0$ the $hh$-submatrix has only $p^2$-contributions, no $(p^2)^2$, hence projects to the pole at $p^2=0$ (for $s=1$), as desired.
The matrix $z^{-1}$, commonly the wavefunction renormalization matrix, is here in fact the matrix $r$ of the residues of the poles, since the $h$-wave function has been fixed in (\ref{highnormG1}) (and the others by the $b$-equation of
motion).
Contributions of the possible second singularity of the propagator are projected to zero because no respective factor in the numerator, coming from $\Phi_{\mathrm{in}}$, is available.
Hence for physical quantities they are always projected to zero, as we have seen for the $S$-matrix.\\
Before the fields $\Phi_{\rm in}$ project to the mass shells one can introduce a
$\Phi_{\underbar{$\scriptstyle \mathrm{in}$}}=z\Phi_{\rm in}$ with the implication
$z^{-1T}K(x-y)z^{-1}=\Gamma_{\Phi(x)\Phi(y)}$ -- here the {\bf full} $\Gamma_{\Phi\Phi}$. Then one can use the results of ST etc.\ and derive in analogy to the tree approximation that the commutator of $:\Sigma:$ with ST generates again $Q^{\rm BRST}$ as needed.\\
A comment is in order.
The reason for going via $c_1,c_2$ from the very beginning can be understood just as a means to avoid ``unnecessary'' even higher derivative counterterms (cf.\ \cite{Goroff:1985th}).
This can be seen as follows:
Starting with $c_3$-terms alone, one realizes at one loop that higher derivative counterterms are required.
Absorbing these and passing to a new propagator, one finds the same power counting as in the $(c_1,c_2,c_3)$-model.
This round-about procedure has been circumvented by starting immediately with all terms guaranteeing power counting renormalizability.
In this context, it is quite natural to consider even higher orders of derivatives of the metric in the classical action, which would render the model super-renormalizable (cf.\ \cite{Asorey:1996hz}).
However, these higher orders do not have a regularizing effect at the order $\hbar$ so that the occurring divergencies have to be treated separately.
Thus the analytic structure of such models is obscured to a certain extent.
\subsection{Parametric differential equations of the $S$-matrix\label{se:paradiffeqS}}
It is of quite some interest to investigate how the $S$-matrix behaves under
RG transformation and under scaling, i.e.\ under action of the CS-operator.\\
First we need the expressions of the symmetric differential operators
$\mathcal{N}_{\rm H,L}$ (cf. (\ref{fdHa}) and (\ref{fdL})) when they act on $Z$:
\begin{equation}\label{fdaHz}
\mathcal{N}_{\rm H}Z\equiv i\int\Big(-J_h\frac{\delta}{\delta J_h}-K\frac{\delta}
{\delta K}+j_{\bar{c}}\frac{\delta }{\delta j_{\bar{c}}}\Big)Z
\qquad
\mathcal{N}_{\rm L}Z\equiv i \int \Big(-j_c\frac{\delta}{\delta j_c}
-L\frac{\delta}{\delta L}\Big)Z .
\end{equation}
Next we introduce
\begin{equation}\label{ffS}
\hat{S}(\underline{J})\equiv :\Sigma:Z(\underline{J}),
\end{equation}
a kind of off-shell $S$-matrix.
In order to see how the $S$-matrix transforms under the RG we look at
\begin{eqnarray}\label{rgsma}
\mu\partial_\mu\hat{S}(\underline{J})&=&:(\mu\partial_\mu Y)e^Y:Z(\underline{J})
+:\Sigma:\mu\partial_\mu Z(\underline{J})\\
Y&\equiv&\int dxdy\Phi_{\rm in}(x)K(x-y)z^{-1}\frac{\delta}{\delta \underline{J}} \,,
\qquad K(x-y)=\Gamma^{(0)}_{\Phi\Phi}\nonumber\\
\mu\partial_\mu Y&=&\int dxdy\, \Phi_{\rm in}(x)K(x-y)(\mu\partial_\mu z^{-1})
\frac{\delta}{\delta \underline{J}} = 0 \nonumber
\end{eqnarray}
with $z^{-1}$ being the residue matrix of the poles at $p^2=0$.
In the $hh$-sector these residues are independent of $\mu$: for the spin-two part directly,
as guaranteed by the subtraction scheme (\ref{highnorm}); for the spin-zero part then indirectly via the ST.
In the $bh$-mixed sector they are
$\mu$-independent because they are directly determined by the gauge fixing which is independent of it.\\
In the second term of (\ref{rgsma}) the operators $\mathcal{N}$ do not contribute,
because they are BRST-variations and therefore mapped to zero by $:\Sigma:$.
The final outcome is
\begin{equation}\label{rgsma2}
\mu\partial_\mu S=(-\beta^{\rm RG}_{30} c_{30} \partial_{c_{30}} - \beta^{\rm RG}_1 c_1 \partial_{c_1} - \beta^{\rm RG}_2 c_2 \partial_{c_2}) S.
\end{equation}
This relation applies to those $S$-matrix elements which exist as far as the infrared is concerned. It is remarkable that (although here it is formal in many cases) this is the analogue of the result which Zimmermann derived axiomatically for massless $\phi^4$-theory \cite{Zimmermann:1979fd}.
With completely analogous arguments one can derive the CS equation for the $S$-operator, i.e.
\begin{align}
(\mu\partial_\mu-\kappa\partial_\kappa-2c_{30}\partial_{c_{30}} + \beta^{\rm CS}_1 c_1 \partial_{c_1} + \beta^{\rm CS}_2 c_2 \partial_{c_2})S & =
\alpha^{\rm CS}[\kappa^{-2}\int\sqrt{-g}R]^3_3\cdot S \nonumber \\
& = \alpha^{\rm CS}([\kappa^{-2}\int\sqrt{-g}R]^3_3)^{\rm Op} \,. \label{cssma}
\end{align}
The qualification is as before: the equation is meaningful only for matrix elements which exist regarding the infrared.
It shows however in those cases how scaling is realized.
\section{General solution of the Slavnov-Taylor identity\label{se:generalsolutionSTI}}
As mentioned at the end of Section \ref{se:propagators} the propagators for the field
$h^{\mu\nu}$ require one to treat it as a field with canonical dimension zero.
It is thus impossible to distinguish via power counting between $h$ and an
arbitrary function $h'(h)$. This is familiar from supersymmetric gauge theories
where, in the linear realization of supersymmetry, the real gauge superfield
$\phi(x,\theta,\bar{\theta})$, known as ``vector superfield'', also has
vanishing canonical dimension \cite{Piguet:1984mv}. One
can take over from there mutatis mutandis the treatment of such fields.
In the present context this means in particular
that for finding the general solution of the Slavnov-Taylor identity one
just chooses a special one, here $h^{\mu\nu}\equiv h_s^{\mu\nu}$,
with its transformation law (\ref{brst}) $\mathdutchcal{s} h_s^{\mu\nu}\equiv Q_s(h_s)$
and replaces it by a general invertible function $\mathcal{F}(h)$
\begin{equation}\label{gsst}
\mathcal{F}^{\mu\nu}(h)=z_1 h^{\mu\nu}
+\sum_{n,k}z_{nk}F_{n,k}^{\mu\nu}(\underbrace{h...h}_{n}).
\end{equation}
Here $n=2,3,\dots$; $k=1,2,\dots,k_{\rm max}(n)$ and $F_{n,k}^{\mu\nu}$ denotes the most
general contravariant two-tensor
in flat Minkowski space which one can form out of $n$ factors of $h$ and which
does not contain terms with $\eta^{\mu\nu}$ as factor. The reason for this
restriction will be explained at the end of this section.\\
The coefficients have been denoted
$z_{nk}$ because the redefinition
$h\rightarrow \mathcal{F}$ is just a generalized wave function
renormalization, the standard one being given by $\mathcal{F}(h)=z_1 h$ leading
to $\hat{H}=z^{-1}_1 H$ in the ST-identity.\\
A remark is in order. That the non-linear redefinition $F^{\mu\nu}_{n,k}(h)$ is not a
formal exercise, but indeed necessary in the course of renormalization,
has been shown explicitly, e.g.\ \cite[formula (1.7)]{vandeVen:1991gw}. It is also to be noted that at every order $n$ in the number of
fields $h$ there are only finitely many free parameters $z_{n,k}$ to be prescribed
by normalization conditions (s.b.).
\subsection{Tree approximation\label{se:generalsolutiontree}}
On the level of the functional $\Gamma^{\rm class}\equiv \Gamma^s$ this change
manifests itself in the form
\begin{equation}\label{gclb}
\bar{\Gamma}(h,c,H,L)=\bar{\Gamma}^s(\hat{h},\hat{c},\hat{H},\hat{L}),
\end{equation}
where $\bar{\Gamma}^s(\hat{h},\hat{c},\hat{H},\hat{L})$ is the special
solution of
(\ref{brGm}) with $h,c,H,L$ replaced by
\begin{eqnarray}\label{gtrf}
\hat{h}^{\mu\nu}=\mathcal{F}^{\mu\nu}(h^{\mu\nu}),
&\hat{H}_{\mu\nu}=\frac{\delta}{\delta \hat{h}^{\mu\nu}}
\int H^{\mu\nu}\mathcal{F}^{-1}_{\mu\nu}(\hat{h})_{|\hat{h}=\mathcal{F}(h)}\\
\hat{c}^\rho=z_c c^\rho&\hat{L}_\rho= \frac{1}{z_c}L_\rho .
\end{eqnarray}
Again inspired by the case of supersymmetry \cite[Sect.~5.4, p.~68 ff]{Piguet:1986ug}
we shall now show that the parameters $z_{nk}, n\ge2$, are of gauge type,
hence unphysical. At the same time this represents a second way to find the
general solution of the ST-identity.
We start from an arbitrary invertible function $M$ and its BRST variation $N$
\begin{equation}\label{gfct}
M^{\mu\nu}(h)= a_1 h^{\mu\nu}+\sum_{n,k}a_{n,k}(\underbrace{h\cdots h}_{n})^{\mu\nu}\,,
\qquad \mathdutchcal{s} M=N,
\end{equation}
where $n=2,3,\dots$ and $k=1,\dots,k_{\rm max}(n)$, with $k_{\rm max}(n)$ the number of two-tensors
which can be formed out of $n$ factors $h$ without $\eta_{\mu\nu}$.
($k_{\rm max}(n)$ is finite for every $n$.)
Both are composite operators, hence we couple them to external fields
$\mathcal{M}$ and $\mathcal{N}$. $M$ will serve as defining a new, non-linear gauge
\begin{equation}\label{nlg}
\Gamma_{\mathrm{gf}}=\frac{1}{2\kappa}\int
(\partial_\mu M^{\mu\nu}b_\nu+\partial_\nu M^{\mu\nu}b_\mu)
-\frac{1}{2}\int\eta^{\mu\nu}b_\mu b_\nu,
\end{equation}
giving rise to the gauge condition
\begin{eqnarray}\label{ngcd}
\frac{\delta\Gamma_{\mathrm{gf}}}{\delta b_\mu}
&=&\frac{1}{\kappa}\partial_\lambda M^{\lambda\mu}-b^\mu\\
\frac{\delta\Gamma}{\delta b_\mu}
&=&\frac{1}{\kappa}\partial_\lambda
\frac{\delta \Gamma}{\delta\mathcal{M}_{\lambda\mu}}-b^\mu.
\end{eqnarray}
To this gauge fixing the $\phi\pi$-term
\begin{equation}\label{nlFP}
\Gamma_{\phi\pi}=-\frac{1}{2}\int N^{\mu\nu}(\partial_\mu\bar{c}_\nu
+\partial_\nu\bar{c}_\mu)
\end{equation}
and the ST
\begin{equation}\label{nlgST}
\mathcal{S}(\Gamma)\equiv \int \Big( \frac{\delta \Gamma}{\delta K}\frac{\delta\Gamma}{\delta h}
+b\frac{\delta \Gamma}{\delta\bar{c}}
-\mathcal{M}\frac{\delta \Gamma}{\delta\mathcal{N}}
+\frac{\delta \Gamma}{\delta L}\frac{\delta \Gamma}{\delta c} \Big)=0
\end{equation}
are suitable.
Gauge condition (\ref{ngcd}) and ST-identity (\ref{nlgST}) lead to the ghost
equation of motion
\begin{equation}\label{nlgh}
\frac{\delta\Gamma}{\delta\bar{c}_\mu}
- \kappa^{-1} \partial_\lambda\frac{\delta \Gamma}{\delta\mathcal{N}_{\lambda\mu}}=0,
\end{equation}
which has the general solution
\begin{eqnarray}\label{gnlsn}
\Gamma&=&\int(-\frac{1}{2}\eta^{\mu\nu}b_\mu b_\nu)
+\bar{\Gamma}(h,c,K,L,\mathcal{M}',\mathcal{N}')\\
&&\mathcal{M}'=\mathcal{M}
-\frac{1}{2\kappa}(\partial_\mu b_\nu+\partial_\nu b_\mu)\\
&&\mathcal{N}'=\mathcal{N}
-\frac{1}{2}(\partial_\mu\bar{c}_\nu+\partial_\nu\bar{c}_\mu)\\
\bar{\Gamma}&=&\Lambda(h)+\int(KO(h,c)+\mathcal{M}'M(h)+\mathcal{N}'N(h,c)
-L_\mu(c^\lambda\partial_\lambda c^\mu)).
\end{eqnarray}
We now demand BRST invariance, i.e.\ (\ref{nlgST}), which provides
the linearized transformation law
\begin{equation}\label{trfl}
\mathcal{B}_{\bar{\Gamma}}h^{\mu\nu}= O^{\mu\nu} \qquad
\mathcal{B}_{\bar{\Gamma}}c^\mu=-\kappa c^\lambda\partial_\lambda c^\mu \qquad
\mathcal{B}_{\bar{\Gamma}}\bar{c}_\mu= b_\mu ,
\end{equation}
calculate the effect on (\ref{gnlsn}) and find the conditions
\begin{eqnarray}
\mathcal{B}_{\bar{\Gamma}}O&=&0 \label{tr1}\\
\mathcal{B}_{\bar{\Gamma}}M&=&N \label{tr2}\\
\mathcal{B}_{\bar{\Gamma}}N&=&0 \label{tr3}\\
\mathcal{B}_{\bar{\Gamma}}\Lambda&=&0 . \label{tr4}
\end{eqnarray}
The solution of (\ref{tr1}) we know from the first part of this section to be
\begin{equation}\label{gnlsh}
O=Q^{\mathcal{F}}(h,c)=
\int\frac{\delta\mathcal{F}^{-1}(\hat{h})}{\delta \hat{h}}
Q_s(\hat{h},c)|_{\hat{h}=\mathcal{F}(h)},
\end{equation}
$\mathcal{F}$ being given by (\ref{gsst}) with $z_1=1$.
Since $\mathcal{B}_{\bar{\Gamma}}$ is nilpotent on functionals $\mathcal{T}(h,c)$
\begin{equation}\label{nlpt}
\mathcal{B}_{\bar{\Gamma}}^2 \mathcal{T}=0,
\end{equation}
(\ref{tr3}) follows from (\ref{tr2}) with
\begin{eqnarray}\label{gtrsfl}
N=\mathcal{B}_{\bar{\Gamma}}M
&=&\int dx O(x)\frac{\delta M}{\delta h(x)}
=\int dxdy\frac{\delta\mathcal{F}^{-1}(\hat{h}(x))}{\delta\hat{h}(y)}
Q_s(\hat{h},c)(y)\frac{\delta M(h)}{\delta h(x)}\\
&=&\int dyQ_s(\hat{h},c)(y)\frac{\delta}{\delta\hat{h}(y)}
M(\mathcal{F}^{-1}(\hat{h}))|_{\hat{h}=\mathcal{F}(h)}.
\end{eqnarray}
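As a consistency check, the chain rule manipulation in (\ref{gtrsfl}) can be verified symbolically in a zero-dimensional toy model, where functional derivatives reduce to ordinary ones. The following Python sketch is purely illustrative: the concrete redefinition $\mathcal{F}$ and the stand-in functions for $M$ and $Q_s$ are our own arbitrary choices, not the gravity expressions.
\begin{verbatim}
import sympy as sp

h, y, z = sp.symbols('h y z')

F    = h/(1 - z*h)               # a concrete invertible redefinition, hhat = F(h)
Finv = y/(1 + z*y)               # its exact inverse: Finv(F(h)) = h

M  = lambda x: sp.exp(x) + x**3  # stand-in gauge function M
Qs = lambda x: sp.sin(x)         # stand-in BRST variation Q_s

# zero-dimensional analogue of O = (dFinv/dhhat)|_{hhat=F(h)} * Q_s(F(h))
O = sp.diff(Finv, y).subs(y, F)*Qs(F)

lhs = O*sp.diff(M(h), h)                      # first line of the chain rule
rhs = (Qs(y)*sp.diff(M(Finv), y)).subs(y, F)  # last line of the chain rule

print(sp.simplify(lhs - rhs))                 # -> 0
\end{verbatim}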
(\ref{tr4}) is solved by
\begin{equation}\label{gnvt}
\Lambda=\Gamma^{\rm class}_{\rm inv}(\mathcal{F}(h)),
\end{equation}
with $\Gamma^{\rm class}_{\rm inv}$ being given by (\ref{ivc}).
Therefore the general solution of the ST-identity (\ref{nlgST}) is given by
\begin{multline}\label{gsSTc}
\Gamma(h,c,K,L,\mathcal{M}',\mathcal{N}')
= \big[
\Gamma^{\rm class}_{\rm inv}(\hat{h})
+\int dxdy K(x)\frac{\delta\mathcal{F}^{-1}(\hat{h}(x))}{\delta\hat{h}(y)}Q_s(\hat{h},c)(y)\\
+\int dxdy\mathcal{N}(x)Q_s(\hat{h},c)(y)\frac{\delta}{\delta\hat{h}(y)}M(\mathcal{F}^{-1}(\hat{h}))(x)
\big]|_{\hat{h}=\mathcal{F}(h)} \\
+\int (-\kappa L_\mu c^\lambda\partial_\lambda c^\mu +\mathcal{M}'M
+\mathcal{N}'N-\frac{1}{2}\eta^{\mu\nu}b_\mu b_\nu).
\end{multline}
In order to compare this general solution with the previous one we define
a new gauge function by
\begin{equation}
\hat{M}=M(\mathcal{F}^{-1}(\hat{h}))
\end{equation}
with associated
\begin{multline}\label{qgsST}
\Gamma(h,c,K,L,\mathcal{M}',\mathcal{N}')
= \big[
\Gamma^{\rm class}_{\rm inv}(\hat{h})
+\int dx\hat{K}(x)Q_s(\hat{h},c)(x) \\
+\int dxdy\mathcal{N}'(x)Q_s(\hat{h},c)(y)\frac{\delta}{\delta\hat{h}(y)}\hat{M}(\hat{h})(x)\\
+\int(\mathcal{M}'\hat{M}(\hat{h})+\frac{1}{4\kappa^2}(\partial_\mu\partial_\nu \hat{h}^{\mu\nu})^2)
\big]|_{\hat{h}=\mathcal{F}(h)}
-\kappa\int(L_\mu c^\lambda\partial_\lambda c^\mu) ,
\end{multline}
where
\begin{equation}
\hat{K}(y)=\int dx\, K(x)\frac{\delta\mathcal{F}^{-1}(\hat{h}(x))}{\delta\hat{h}(y)}|_{\hat{h}=\mathcal{F}(h)}.
\end{equation}
This shows that the solution (\ref{gsSTc}) corresponding to a function
$\mathcal{F}(h)$ and a gauge function $M(h)$ is, modulo the canonical
transformation
$h\rightarrow \hat{h}=\mathcal{F}(h)$ and $K\rightarrow \hat{K}(K,h)$,
equivalent to the solution corresponding to $\mathcal{F}(h)=h$ and gauge
function $\hat{M}=M(\mathcal{F}^{-1}(h))$.
At this stage we are able to explain the restrictions on
$\mathcal{F}(h)$ mentioned
at the beginning of this section. We want the transition
$h\rightarrow \mathcal{F}(h)$ to be a canonical transformation. But then the
one-particle states associated with the two fields must be the same (up to a
numerical factor). Then $\mathcal{F}$ must start with $z_1 h^{\mu\nu}$ and must
not contain $\eta^{\mu\nu}h^\lambda_{\phantom{\lambda}\lambda}$. \\
In \cite{EKKSI,EKKSII} the conformal transformation properties of the energy-momentum
tensor (EMT) in massless $\phi^4$-theory have been studied. In that context
redefinitions of $h^{\mu\nu}$ \cite{EKKSIII}, like the ones considered here, had to be understood because they governed
the renormalization of the EMT. There, admitting an $\eta^{\mu\nu}$-term would have
mixed the renormalization of the EMT as a whole with that of its trace and was
therefore forbidden altogether. Hence here, too, one does not admit it at any
power of $h$.\\
It is worth mentioning that in the same reference the BRST transformations of
$h^{\mu\nu}$ and their algebra had been derived in form of local Ward identities
for translations in spacetime. Their explicit solution, i.e.\ representation on
$h^{\mu\nu}$, turned out to be unstable, namely just admitting the transition
$h^{\mu\nu}\rightarrow \mathcal{F}^{\mu\nu}(h)$. So this represents a welcome, independent
and explicit proof of the considerations here on the general solution of
the ST-identity.\\
As a further interesting byproduct of this redefinition question we would like
to mention that the transition from $h^{\mu\nu}=g^{\mu\nu}-\eta^{\mu\nu}$ to
the Goldberg variable $\tilde{h}^{\mu\nu}=\sqrt{-g}g^{\mu\nu}-\eta^{\mu\nu}$
implies changing one-particle states. This can be seen as follows:
\begin{eqnarray}\label{Gldbrgv}
\sqrt{-g}g^{\mu\nu}&=&\eta^{\mu\nu}+\tilde{h}^{\mu\nu}\\
g^{\mu\nu}&=&\eta^{\mu\nu}+h^{\mu\nu}\\
\tilde{h}^{\mu\nu}-h^{\mu\nu}&=&(\sqrt{-g}-1)(\eta^{\mu\nu}+h^{\mu\nu})\\
\tilde{h}^{\mu\nu}&=&h^{\mu\nu}
-\frac{1}{2}\eta^{\mu\nu}h^\lambda_{\phantom{\lambda}\lambda}
+\eta^{\mu\nu} \Big(\frac{1}{8}(h^\alpha_{\phantom{\alpha}\alpha})^2
+\frac{1}{4}h^{\alpha\beta}h_{\alpha\beta}\Big)
-\frac{1}{2}h^\alpha_{\phantom{\alpha}\alpha}h^{\mu\nu}+O(h^3).
\end{eqnarray}
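The quadratic terms of this expansion can be checked numerically in a few lines. The following Python sketch is our own illustration (the random symmetric $h^{\mu\nu}$ and the mostly-plus signature are arbitrary choices); it compares the exact Goldberg variable with the expansion above and finds agreement up to $O(h^3)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # flat metric eta^{mu nu}
eps = 1e-3
h = eps*rng.standard_normal((4, 4))
h = 0.5*(h + h.T)                          # small symmetric h^{mu nu}

g_up  = eta + h                            # g^{mu nu}
g_low = np.linalg.inv(g_up)                # g_{mu nu}
sqrtg = np.sqrt(-np.linalg.det(g_low))     # sqrt(-g)
t_exact = sqrtg*g_up - eta                 # exact Goldberg variable

tr = np.trace(eta @ h)                     # h^lambda_lambda
hh = np.trace(eta @ h @ eta @ h)           # h^{alpha beta} h_{alpha beta}
t_series = h - 0.5*tr*eta + (tr**2/8.0 + hh/4.0)*eta - 0.5*tr*h

print(np.max(np.abs(t_exact - t_series)))  # of order eps^3: expansion confirmed
\end{verbatim}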
The $h$-linear term proportional to $\eta^{\mu\nu}$ generates new one-particle
poles relative to the original $h^{\mu\nu}$, as can be seen by comparing the
$\langle hh \rangle$-propagators in our approach with those of \cite{Stelle} and \cite{KuOj}. They belong to
the spin 0 part of the full field $h^{\mu\nu}$ and will eventually be
eliminated from the physical spectrum, but they have to be taken care of. Hand
in hand with this goes a change of the BRST transformation,
$\mathdutchcal{s} h^{\mu\nu}\rightarrow \mathdutchcal{s} \tilde{h}^{\mu\nu}$.
\subsection{Gauge parameter independence for the general case\label{se:gpigeneral}}
In the previous subsection we have seen that the field $h^{\mu\nu}$ can be
replaced by a general, invertible function $\mathcal{F}$ of itself, (\ref{gsst}),
and that the parameters $z_{nk}, n=2,3...; k=1,2,...,k_{\rm max}(n)$ are gauge type
parameters. Like for $\alpha_0$ we would like to show that the dependence
of the Green functions on these parameters can be controlled by a suitable
change of the ST-identity (see (\ref{chidouble}) and (\ref{chiZ})).
Hence we introduce anti-commuting parameters $\chi_{nk}$ which form together with
$z_{nk}$ doublets $(z_{nk},\chi_{nk})$ under BRST transformations
\begin{equation}\label{gd}
\mathdutchcal{s} z_{nk}=\chi_{nk} \quad n=2,3,...;k=1,2,...,k_{\rm max}(n) \qquad \mathdutchcal{s}\chi_{nk}=0.
\end{equation}
They contribute to the ST-identity
\begin{equation}\label{gpST}
\mathcal{S}(\Gamma)+\chi_{nk}\partial_{z_{nk}}\Gamma=0 \,,\qquad
\hat{\mathcal{S}}Z\equiv \mathcal{S} Z+\chi_{nk}\partial_{z_{nk}}Z=0.
\end{equation}
If we succeed in proving these generalized ST-identities we know that
the parameters $z_{nk}$ generate unphysical insertions. We just differentiate
(\ref{gpST}) with respect to $\chi_{nk}$ and obtain
\begin{equation}\label{vch}
\partial_{z_{nk}}Z=-\mathcal{S} \partial_{\chi_{nk}}Z=i \mathcal{S}[\Delta^-_{(nk)}Z](J,K,L),
\end{equation}
where $\Delta^-$ is an insertion of dimension 4 and $\phi\pi$-charge -1,
generated by $\partial_{\chi_{nk}}$.
Whereas for the doublet $(\alpha_0,\chi)$ we had to enlarge the gauge fixing,
we can proceed here more directly because the parameters $z_{nk}$ show up
only in the redefinition of $h$. It is readily seen that one has to
change only $\bar{\Gamma}$ into
\begin{equation}\label{pcgb}
\bar{\Gamma}(h,c,H,L,z_{nk},\chi_{nk})
=\bar{\Gamma}^s(\hat{h},\hat{c},\hat{H},\hat{L})
+\sum_{nk}\chi_{nk}\Big[\int K_{\mu\nu}G^{\mu\nu}_{nk}
+r_{nk}\int L_\mu c^\mu\Big]
\end{equation}
with
\begin{eqnarray}\label{ctrgm}
\hat{h}&=&\mathcal{F}(h,z_{nk})
\qquad \hat{H}=\frac{\delta}{\delta\hat{h}}
\int H\mathcal{F}^{-1}(\hat{h},z_{nk})|_{\hat{h}=\mathcal{F}(h,z_{nk})}\\
\hat{c}&=&y(z_{nk})c \qquad\quad \hat{L}=\frac{1}{y(z_{nk})}L\\
G_{nk}(h,z_{nk})&=&-\frac{\partial}{\partial z_{nk}}\mathcal{F}^{-1}(h,z_{nk})|_{\hat{h}=\mathcal{F}(h,z_{nk})} \qquad r_{nk}=-\frac{1}{y(z_{nk})}\frac{\partial}{\partial z_{nk}}y(z_{nk})
\end{eqnarray}
and $y(z_{nk})$ is a general function of its arguments. From the preceding subsection
we know that for $\chi_{nk}=0$ this is the general solution of the ST-identity.
For $\chi_{nk}\not=0$ one has to go through (\ref{gpST}) to convince oneself
that this is the case.
The parameters $z_{nk},y(z_{nk})$ will be fixed by normalization conditions. We choose the following ones.\\
The normalization condition (\ref{trnorm5}) fixes $y(z_{nk})=1$, hence
$r_{nk}=0$ (note: $n\ge 2$).
In order to fix $z_{nk}$ one has to look in the general solution of the ST-identity
at the term
$
\int H_{\mu\nu}\mathdutchcal{s}\mathcal{F}^{\mu\nu}
=\int\sum_{n,k}z_{nk}H_{\mu\nu}\mathdutchcal{s}(h...h)^{\mu\nu},
$
where $\mathdutchcal{s}$ denotes the standard BRST transformation of $h$, and to project such that e.g.\
\begin{equation}
\partial_p\Gamma_{Hc\mathcal{P}(\underbrace{h...h}_{n})}|_{p=0}=z_{nk}
\end{equation}
Here $\mathcal{P}$ denotes a suitable projector. We do not work out the details
of its definition.
\subsection{Gauge parameter independence in higher orders\label{se:GPIHO}}
The aim is now to prove (\ref{chiZ}) and (\ref{gpST}) to all orders of
perturbation theory. Taken together
\begin{equation}\label{cgpd}
\mathcal{S}(\Gamma)+(\chi\partial_{\alpha_0}+\sum_{n,k}(\chi_{nk}\partial_{z_{nk}}))\Gamma=0 \,,
\qquad \mathcal{S} Z+(\chi\partial_{\alpha_0}+\sum_{n,k}(\chi_{nk}\partial_{z_{nk}}))Z=0 .
\end{equation}
We start from
\begin{multline}\label{sggpv}
\Gamma^s(h,c,\bar{c},b,K,L) = \bar{\Gamma}^s(h,c,\bar{c},K,L)
-\frac{1}{2\kappa}\int dxdy\, h^{\mu\nu}(x)
(\partial_\mu b_\nu+\partial_\nu b_\mu)(y) \times \\
\times\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2} \Big\rbrace
-\frac{1}{2}\alpha_0\int b_\mu b_\nu\eta^{\mu\nu}
\end{multline}
\begin{multline}
\bar{\Gamma}^s(h,c,\bar{c},K,L) = \Gamma^{s\,({\rm class})}_{\rm inv}(h)
-\frac{1}{2}\int dxdy\, Q^{s\,\mu\nu}(x)(\partial_\mu \bar{c}_\nu+
\partial_\nu \bar{c}_\mu)(y)\times \\
\times \Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace
+\int(K_{\mu\nu}Q^{s\,\mu\nu}(h,c)-\kappa L_\mu c^\lambda\partial_\lambda c^\mu) \\
-\frac{1}{4}\chi\int(\bar{c}_\mu b_\nu+\bar{c}_\nu b_\mu)\eta^{\mu\nu} .
\end{multline}
The $b$-dependent terms can be trivially regained from the gauge condition
\begin{equation}\label{3beq}
\frac{\delta \Gamma^s}{\delta b^\rho}=
\kappa^{-1}\int dy\,\partial^\mu h_{\mu\rho}(y)
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2} \Big\rbrace-\alpha_0 b_\rho
-\frac{1}{2}\chi\bar{c}_\rho,
\end{equation}
whereas the ghost equation of motion reads
\begin{equation}\label{3ghe}
\frac{\delta\Gamma^s}{\delta \bar{c}_\rho(x)}=
-\int dy\,\partial_\lambda\frac{\delta\Gamma^s}{\delta K_{\lambda\rho}(y)}
\Big\lbrace\big( \frac{\Box}{4\pi^2} + m^2 \big)\frac{1}{(x-y)^2}\Big\rbrace +\frac{1}{2}\chi b^\rho.
\end{equation}
The general solution has been obtained on the classical level, (\ref{pcgb}), as
\begin{equation}\label{2pcgb}
\bar{\Gamma}(h,c,H,L,z_{nk},\chi_{nk})
=\bar{\Gamma}^s(\hat{h},\hat{c},\hat{H},\hat{L})
+\sum_{nk}\chi_{nk}\Big[\int K_{\mu\nu}G^{\mu\nu}_{nk}
+r_{nk}\int L_\mu c^\mu\Big]
\end{equation}
with hatted fields given in (\ref{ctrgm}). Due to the presence of the
parameter doublets the ST-identity has the form
\begin{eqnarray}
\mathcal{S}(\Gamma)&=&\mathcal{B}(\bar{\Gamma})\\
&\equiv&\int\left[
\frac{\delta\bar{\Gamma}}{\delta H}\frac{\delta\bar{\Gamma}}{\delta h}
+\frac{\delta\bar{\Gamma}}{\delta L}\frac{\delta\bar{\Gamma}}{\delta c}
\right]
+\chi\frac{\partial\bar{\Gamma}}{\partial\alpha_0}
+\sum_{n,k}\chi_{n,k}\frac{\partial\bar{\Gamma}}{\partial z_{n,k}}=0 .
\end{eqnarray}
The non-linear operator $\mathcal{B}(\gamma)$ and the linear operator
\begin{equation}
\mathcal{B}_{\gamma}\equiv \int\left[
\frac{\delta\gamma}{\delta H}\frac{\delta}{\delta h}
+\frac{\delta\gamma}{\delta h}\frac{\delta}{\delta H}
+\frac{\delta\gamma}{\delta L}\frac{\delta}{\delta c}
+\frac{\delta\gamma}{\delta c}\frac{\delta}{\delta L}\right]
+\chi\frac{\partial}{\partial \alpha_0}
+\sum_{n,k}\chi_{n,k}\frac{\partial}{\partial z_{n,k}}
\end{equation}
satisfy the identities
\begin{eqnarray}
\mathcal{B}_\gamma\mathcal{B}(\gamma)&=&0 \qquad \forall \gamma \label{1lbG}\\
\mathcal{B}_\gamma\mathcal{B}_\gamma&=&0 \qquad {\rm if}\,\, \mathcal{B}(\gamma)=0 .
\label{2lbG}
\end{eqnarray}
Since the classical action satisfies the ST-identity, we have for the
tree approximation from (\ref{2lbG})
\begin{equation}\label{ccdt}
\mathdutchcal{b}^2=0 \qquad{\rm for} \qquad \mathdutchcal{b}\equiv
\mathcal{B}_{\bar{\Gamma}_{\rm class}},
\end{equation}
i.e. $\mathdutchcal{b}$ is nilpotent.
The action principle now tells us that
\begin{equation}\label{ctp}
\mathcal{S}\Gamma=\left[\Delta\right]^5_5\cdot \Gamma= \Delta+O(\hbar\Delta) ,
\end{equation}
where $\Delta$ is an insertion with UV=IR-degree=5 and $Q_{\phi\pi}=1$; on the rhs we have
separated the trivial diagram contribution (tree diagrams) from higher
orders (loop diagrams).
If we do not admit counterterms depending on $\alpha_0$, which is possible
since the $b$-equation of motion can be integrated trivially, we can
discard in the following the contribution of the doublet ($\alpha_0,\chi)$
and have to discuss only the doublets $(z_{nk},\chi_{nk})$.
(\ref{ccdt}) leads then to the consistency condition
\begin{equation}\label{bccdt}
\mathdutchcal{b} \Delta=0,
\end{equation}
which is a classical equation.
Furthermore gauge condition (\ref{3beq}) and ghost equation of motion (\ref{3ghe})
imply that the local functional $\Delta$ only depends on the fields
$h,c,H,L$.
The general solution of (\ref{bccdt}) is given by
\begin{equation}\label{gsccdt}
\Delta= \mathdutchcal{b}\hat{\Delta}+r\mathcal{A}(h,c),
\end{equation}
where $\hat{\Delta}$ is an integrated local insertion (functional of $h,c,H,L$) with
UV=IR-dimension 4 and $Q_{\phi\pi}=0$. $\mathcal{A}$ represents an anomaly, i.e.\
has the same properties as $\hat{\Delta}$, but is not a $\mathdutchcal{b}$-variation. For
$z_{nk}=0$ we know already (cf. Sect.\ \ref{se:stidentity}) that the decomposition in
(\ref{gsccdt}) is valid and no $\mathcal{A}(h,c)$ exists.
For $z_{nk}\not=0$ no $\mathcal{A}$ can be generated either, but we have to show
that the remaining terms form a $\mathdutchcal{b}$-variation.\\
This part of the proof relies only on the doublet structure of $(z_{nk},\chi_{nk})$
and can therefore be taken over literally from \cite[Appendix D, formulae (D.18)--(D.32)]{Piguet:1984mv},
with the result
that the cohomology is trivial and thus $(\ref{gsccdt})$ is verified
with $\mathcal{A}=0$.
In the context of BRST-invariant differential operators we shall need
a corresponding analysis for insertions with the quantum numbers of the action,
i.e.\ UV=IR-dimension=4 and $Q_{\phi\pi}=0$. The field dependent part was
treated above in Sect.\ \ref{se:paraandgpi}, where we constructed the general solution of the
ST-identity. $\Gamma^{\rm class}_{\rm inv}$ turned out to be the only obstruction
to the cohomology,
whereas all external field dependent terms are $\mathdutchcal{b}$-variations. The gauge parameter
dependence is also covered in \cite[Appendix D]{Piguet:1984mv} with the result that
the terms of $\Gamma^{\rm class}_{\rm inv}$ can only have gauge parameter
independent coefficients, whereas the external field dependent terms are
multiplied by functions of those parameters such that the products are variations even in
the general gauge parameter dependent case. For later use we list them here.
A basis of dimension-4, $\phi\pi$ charge-0 $\mathdutchcal{b}$-invariant insertions is provided
by:
\begin{eqnarray}\label{gnvnsrt}
\Gamma_{\rm inv}&=&\int\sqrt{-g}(c_0+c_1R^{\mu\nu}R_{\mu\nu}+c_2R^2+c_3R)(h,z_{nk})\\
\Delta_1(h,c,H,z_{nk},\chi_{nk})&
=&\mathdutchcal{b}\left[d_1(z_1)\int H_{\mu\nu}h^{\mu\nu} \right] \\
\Delta_{nk}(h,c,H,z_{nk},\chi_{nk})&
=&\mathdutchcal{b}\left[d_{nk}(z_{n,k})\int H_{\mu\nu}(\underbrace{h...h}_{n,k})^{\mu\nu}\right]\\
\Delta_c(h,c,L)&
=&\mathdutchcal{b}\left[e_c\int L_\mu c^\mu\right].
\end{eqnarray}
Recall that counterterms must not depend on
$\alpha_0$; we work in the Landau gauge, $\alpha_0=0$, hence there is also no $\chi$
present.
These $\mathdutchcal{b}$-invariant insertions are in one-to-one correspondence with $\mathdutchcal{b}$-symmetric
differential operators
\begin{eqnarray}\label{bsdffp}
c_0\partial_{c_0}\Gamma&=&\int\sqrt{-g}c_0 \\
c_1\partial_{c_1}\Gamma&=&\int\sqrt{-g}c_1R^{\mu\nu}R_{\mu\nu}\\
c_2\partial_{c_2}\Gamma&=&\int\sqrt{-g}c_2R^2\\
c_3\partial_{c_3}\Gamma&=&\int\sqrt{-g}c_3\kappa^{-2}R\\
\left[d_1(z_1)\mathcal{N}_h+b(d_1)\mathcal{N}^{(-)}_h\right]\Gamma&=&\Delta_1\\
\left[d_{n,k}\partial_{z_{n,k}}+b(d_{n,k})\partial_{\chi_{n,k}}\right]\Gamma&=&
-\Delta_{n,k}+O(h^{n+1})\\
\left[e_c\mathcal{N}_c+b(e_c)\mathcal{N}^{(-)}_c\right]\Gamma&=&-\Delta_c .
\end{eqnarray}
Here we have defined combinations of counting operators
\begin{equation}\label{lcto}
N_\phi\equiv\int\phi\frac{\delta}{\delta\phi}
\end{equation}
for the fields.\\
\begin{eqnarray}\label{slcto}
\mathcal{N}_h\Gamma&\equiv&\left[N_h-N_K-N_{\bar{c}}-N_b+2\alpha_0\partial_{\alpha_0}+2\chi\partial_\chi\right]\Gamma\\
\mathcal{N}^{(-)}_h\Gamma&\equiv&\int Kh-\int\bar{c}\frac{\delta}{\delta b}\Gamma-2\alpha_0\partial_\chi\Gamma\\
\mathcal{N}_c\Gamma&\equiv&\left[N_c-N_L\right]\Gamma\\
\mathcal{N}^{(-)}_c\Gamma&\equiv&-\int L_\rho c^\rho
\end{eqnarray}
and went back from the variable $H_{\mu\nu}$ in (\ref{gnvnsrt}) to the
variables $K_{\mu\nu},\bar{c}$.
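The action of such counting operators is most easily pictured in a zero-dimensional toy model, where $\int\phi\,\delta/\delta\phi$ becomes $\phi\,\partial_\phi$: every monomial is multiplied by the number of its $\phi$-legs. The following Python lines illustrate this with an arbitrary polynomial of our own choosing, not the gravity functional.
\begin{verbatim}
import sympy as sp

phi, K = sp.symbols('phi K')
Gamma = 3*phi**2 + K*phi**3                  # toy "local functional"

N = lambda G, x: sp.expand(x*sp.diff(G, x))  # zero-dimensional analogue of N_x
print(N(Gamma, phi))                         # 6*phi**2 + 3*K*phi**3
print(N(Gamma, phi) - N(Gamma, K))           # 6*phi**2 + 2*K*phi**3
\end{verbatim}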
\subsection{Normalization conditions III\label{se:nc3}}
The normalization conditions (\ref{highnorm})--(\ref{highnorm1}) have to be supplemented by those introducing $z_{nk}$
and read now
\begin{eqnarray}\label{highnormG}
\frac{\partial}{\partial p^2}\,\gamma^{(2)}_{\rm TT\,|{\substack{ p=0 \\ s=1}}}&=
&c_3\kappa^{-2}\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,
\gamma^{(2)}_{\rm TT\,|{\substack{ p^2=-\mu^2 \\ s=1}}}&=&-2c_1\\
\frac{\partial}{\partial p^2}\frac{\partial}{\partial p^2}\,
\gamma^{(0)}_{\rm TT\,|{\substack{ p^2=-\mu^2 \\ s=1}}}
&=&2(3c_2+c_1)\\
\Gamma_{h^{\mu\nu}}&=&c_0=0\\
\frac{\partial}{\partial p_\sigma}
\Gamma_{K^{\mu\nu}c_\rho|{\substack{ p^2=-\mu^2 \\ s=1}}}&=&
-i\kappa(\eta^{\mu\sigma}\delta^\nu_\rho
+\eta^{\nu\sigma}\delta^\mu_\rho
-\eta^{\mu\nu} \delta^\sigma_\rho)\label{highnormG1} \\
\partial_p\Gamma_{Kc\mathcal{P}(\underbrace{h...h}_{n})}|_{\substack{ p^2=-\mu^2 \\ s=1}}&=&z_{nk}\\
\frac{\partial}{\partial p^\lambda}
\Gamma_{{L_\rho}c^\sigma c^\tau|{\substack{ p^2=-\mu^2 \\ s=1}}}&=&
-i\kappa(\delta^\rho_\sigma\eta_{\lambda\tau}
-\delta^\rho_\tau\eta_{\lambda\sigma}).
\end{eqnarray}
Imposing the $b$-equation of motion (\ref{beq}) still fixes $\alpha_0$
and the $b$-amplitude, whereas (\ref{highnorm}) again
fixes the $h$-amplitude. $\mathcal{P}$ projects to the $k^{\rm th}$ independent term in $\sum_{n,k}(\underbrace{h...h}_{n})^{\mu\nu}$.
\section{Discussion and conclusions\label{se:DisCon}}
In the present paper we propose the perturbative quantization of classical
Einstein-Hilbert gravity. The version which we discuss has as
background ordinary Minkowski space, on which the theory deals with a massless spin two field whose interactions are provided by classical EH. The problem of power counting non-renormalizability is overcome in two steps. First we introduce the higher derivative terms $R^2,R^{\mu\nu} R_{\mu\nu}$, which make the model power counting renormalizable,
but create negative norm states, hence can only be considered as a Pauli-Villars type regularization. The model then contains two spin two fields, whose combined propagator yields dynamical dimension $0$ for the combined field $h$.
In a second step we perform momentum space subtractions according to the Bogoliubov-Parasiuk-Hepp-Zimmermann-Lowenstein scheme, treating the $R$-term as an oversubtracted normal product with subtraction degrees $d=r=4$. This takes correctly into account the vanishing naive dimension of the combined field $h$.\\
Since this model is closed under renormalization we have at our disposal the full machinery of the BPHZL scheme, in particular the action principle, which admits the systematic construction and proof of the Slavnov-Taylor identity, i.e.\
formal (pseudo-)unitarity,
and parametric partial differential equations. Among those is the Lowenstein-Zimmermann equation, which says that Green functions are independent of the auxiliary mass term $M$ belonging to the scheme. Further there are the renormalization group and Callan-Symanzik equations. These control completeness of the parametrization and scaling, respectively.\\
The final step of establishing a quantized EH-theory cannot be taken since the regulators cannot be eliminated in a controlled way.
The model has to stand as it is, which suggests that the higher derivative terms in the action constitute an essential part of the theory, from which the traces of the Einstein-Hilbert action have to be extracted.
However physical states for the EH theory can be constructed, according to the standard quartet mechanism of \cite{KuOj}: projecting out states with negative norm and then forming equivalence classes of states with vanishing norm.
The full $S$-matrix, which is derived from ST, is thus restricted to EH theory, but its unitarity is questionable.
Even if the latter held, the dependence on the parameters $c_1$ and $c_2$ would presumably prevail nevertheless.\\
Next we mention a few items in which the present paper differs from previous attempts to solve the quantization problem. First of all we do not rely on an invariant regularization, i.e.\ the regularization employed in dimensional renormalization, which, it seems, has been used exclusively in the past. The BPHZL renormalization scheme requires that power counting is such that convergence results, e.g.\ for Green functions. This we provide here. Then the study of anomalies is constructively possible.
We can thus safely use results obtained in the past in many papers by purely algebraic reasoning (cf.\ \cite{Baulieu:1983tg,Dragon:2012au}).
Those can now be completed with a power counting based, ``analytic'' treatment. This refers not only to anomaly discussions, but also to the so-called Batalin-Vilkovisky formalism (in quantum field theory).
The latter has been invoked for quantum gravity, specifically also for EH, in \cite{Brunetti:2013maa}.
Although many innovative concepts have been introduced therein, the construction suffers from the lack of renormalizability.
In the presumably simplest context we present a solution for this, although it still lacks a proof of unitarity.
The hope then is that this example is fruitful in that wider range. For instance, when invoking the principle of generalized covariance (cf.\ \cite{Brunetti:2001dx}) one always relates two systems of manifold plus metric. One of them
could then just be ours with Minkowski space plus metric, and fluctuations around it.\\
Another aspect concerns the field variable $h^{\mu\nu}$. In the literature most commonly used is the Goldberg variable $h^{\mu\nu}=\sqrt{-g}g^{\mu\nu}-\eta^{\mu\nu}$,
whereas we use $h^{\mu\nu}=g^{\mu\nu}-\eta^{\mu\nu}$. These variables are not equivalent (in the sense of point transformations), but differ by unphysical
degrees of freedom. Our variable has the advantage that two-point functions (1PI and propagator) have fewer components in the spin expansion to be dealt with. \\
Let us also recall that our way of proceeding forced us to treat the fundamental field $h$ as a field of vanishing canonical dimension. It is then mandatory to discuss non-linear field redefinitions. They are quite analogous to those which one has to face in a power counting non-renormalizable formulation, but can here be handled in a completely controlled manner like in supersymmetric Yang-Mills theories when supersymmetry is linearly realized.
In the context of the CS-equation and in view of the RG-equation one comes close to the concept of ``asymptotic safety'' \cite{Reuter:2012id},
where one deals directly with the infinite dimensional space of interactions with
arbitrarily high dimension, which we avoided on purpose. It would be interesting to see where our proposal is to be located there.
Similarly one could repeat the analysis
of \cite{Fradkin:1981iu} under the present auspices. There one worked in Euclidean space and with the full, non-unitary model.
By its very nature our approach differs from the treatment as effective theory \cite{Donoghue:1995cz}, where one tries to find quantum effects of gravity without constructing a fundamental quantized model of it -- as one can formulate a model of mesons and hadrons without recurrence to QCD with its unsolved problem of confinement.
Extension of the present work to include matter seems to be most straightforward for scalar fields.
Then one could contribute to the study of observables \cite{Frob:2017gyj} and spontaneous scale symmetry breaking \cite{Kubo:2020fdd}, having at one's disposal a power-counting renormalizable model.
Adding vector fields of matter would also not require serious changes. Once fermions are introduced
one should employ the vierbein-formalism. In that context it should be particularly rewarding that one can now safely discuss chiral anomalies which are otherwise not easily handled. Also supergravity theories would deserve new interest.
Some new ideas or methods seem to be required, if one wants to go over to curved background. In particular normalization conditions and asymptotic limits pose problems which in the present, flat background case are absent.
A recent study on the formulation of perturbative gravity in presence of a cosmological constant \cite{Anselmi:2019ukt} tackles the challenge of developing new tools and uses a prescription to treat new degrees of freedom, which is described in \cite{Anselmi:2017ygm}.
Another candidate as far as methods are concerned is provided by the fairly recent work of one of the present authors (SP) \cite{Pottel:2017mnc}.
There the BPHZ scheme has been extended to analytic (curved) spacetimes, i.e.\ propagators, power counting and the like are those of curved spacetime.
Massive and massless models can be treated on an equal footing.
For a graviton field details would have to be worked out.
The problem of normalization conditions seems to be linked to asymptotic properties of the spacetime manifold, which, as far as the physics is concerned, is absolutely plausible.
This could be an interesting area of future research.
\section{Introduction}
Hybrid metal-organic perovskite semiconductors such as methylammonium lead iodide CH$_3$NH$_3$PbI$_3$ (MAPbI3) have emerged as promising new materials for photovoltaic devices and optoelectronics. The power conversion efficiency of solar cells based on perovskite materials exceeded 20\% within eight years \cite{Zhou,Yang,Table} and a theoretical limit of 30\% has been proposed \cite{Wei}. Such an impressive performance results from favorable material properties such as a direct band gap of roughly 1.6 eV, a large absorption coefficient and a high charge carrier mobility \cite{Brenner}.
The perovskite materials can be broadly divided into two types: bulk crystals and polycrystalline thin films. Single crystals of MAPbI3 have a long electron-hole diffusion length (up to 175 $\mu$m) and a low density of recombination centers \cite{Cao,Valverde,Tian}. Thin films are more suitable for device applications but display photoexcited carriers with smaller mobility and shorter lifetime \cite{Ponseca,Herz_Reco,Herz_Moby}. Moreover, the fast degradation of thin films in operating conditions poses serious limits to viable applications. Both photoexposure and annealing in atmospheric conditions induce the formation of PbI$_2$ inclusions \cite{Jemli}. The main objective of our work is to show that compositional disorder leads to shallow traps where carriers localize on the picosecond timescale.
The diffusion, localization and recombination of carriers are investigated by Two Photon PhotoEmission (2PPE) experiments on single MAPbI3 crystals. Remarkably, 2PPE monitors the energy distribution of excited electrons with good temporal resolution and high surface sensitivity. Moreover, the clean surface of a single crystal is a model, well controlled system in which to explore the impact of localized states on the electronic motion.
The article is organized as follows: Section II contains the X-ray diffraction, photoluminescence and photoemission characterization of our crystals. Section III describes the methodology and the technical aspects of the 2PPE technique. Section IV discusses the subpicosecond cooling of excited electrons. Section V proposes a diffusion model to explain the observed evolution of the 2PPE signal. Section VI investigates the carrier localization in intentionally degraded crystals. Section VII deals with carrier recombination and possible effects of the surface. Section VIII reports the conclusions and acknowledgments.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig1M.jpg}
\caption{A: Image of a single MAPbI3 crystal acquired by a scanning electron microscope. B: X-ray diffraction of Bragg peaks in the \{0,$k_\bot$,$k_{||}$\} plane of the tetragonal phase. C: Intensity map of photoluminescence as a function of photon energy and sample temperature. The abrupt transition around 160 K is due to the orthorhombic to tetragonal phase transition. D: Photoluminescence spectrum of MAPbI3 measured at 130 K. }
\label{1M}
\end{figure}
\section{Sample characterization}
We investigate single crystals of MAPbI3 grown by inverse crystallization method.
Methylammonium iodide (0.78 g, 5 mmol) and lead iodide (2.30 g, 5 mmol) were dissolved in gamma-butyrolactone (5 mL) at 60 $^\circ$C. The yellow solution (2 mL) was placed in a vial and heated at 120 $^\circ$C for one to four hours depending on the desired crystal size. As shown in the scanning electron microscope image of Fig. \ref{1M}A, the samples are crystals of millimetric size.
X-ray diffraction measurements have been performed at the CRISTAL beamline of synchrotron SOLEIL by means of a four-circle diffractometer.
The Bragg reflections in Fig. \ref{1M}B confirm the high quality of the single crystals \cite{Antonio}.
crystals between 100 K and 200 K. The emission is composed of a single peak arising from electron-hole recombination across the band-gap. Upon cooling, the sudden blueshift of the emission line at $\cong160$ K \cite{Deleporte} is due to the widening of the band gap at the tetragonal to orthorhombic phase transition. Below 160 K, the evolution of photoluminescence with temperature indicates a reduction of the band gap with lattice contraction. From the photoluminescence spectrum recorded at 130 K (see Fig. \ref{1M}D), we extract the band gap energy $\Delta_g=1.66$ eV. In the following, we will refer all spectroscopic measurements to the orthorhombic phase at 130 K.
Single crystals have been mounted on the \{0,1,0\} plane and cleaved in ultra high vacuum at a base pressure below 10$^{-10}$ mbar. Despite the high crystalline quality of our samples, Low Energy Electron Diffraction (LEED) did not display any Bragg spot. We infer that the cleaved surfaces are rough, probably because of the brittle nature of MAPbI3. Figure \ref{2M}A shows Angle Resolved PhotoElectron Spectroscopy (ARPES) measurements performed at the CASSIOPEE beamline of synchrotron SOLEIL. The selected photon beam of 94 eV maximizes the cross section of the valence band and corresponds to a perpendicular wavevector $k_\bot\sim 0$. Variations of spectral intensity with respect to $k_{||}$ are consistent with the electronic band dispersion in the first Brillouin zone \cite{Antonio}. We show in Fig. \ref{2M}B the spectrum obtained by integrating the photoelectron map in the interval [-1.5,1.5] \AA$^{-1}$. The chemical potential $\mu_F$ is located 1.6 eV higher than the top of the valence band, indicating that MAPbI3 crystals are naturally $n$-doped.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig2M.jpg}
\caption{A: Photoelectron intensity map acquired with photon energy of 94 eV in the \{$k_{||}$,$k_\bot\sim 0$,0\} direction of the tetragonal phase. B: Wavevector integrated photoelectron spectrum acquired with photon energy of 94 eV. The maximum of the valence band is located 1.6 eV below the chemical potential.}
\label{2M}
\end{figure}
\begin{figure}
\includegraphics[width=0.7\columnwidth]{Fig3M.jpg}
\caption{A: Band structure of MAPbI3 freely adapted from Filip \textit{et al.} \cite{Filip}. The green arrows stand for direct transitions induced by photons with $h\nu_1= 3.15$ eV. B: Energetics of the 2PPE experiment with pump photon energy $h\nu_1$, probe photon energy $h\nu_2$, band gap $\Delta_g$, chemical potential $\mu_F$ and analyzer work function $\phi$.}
\label{3M}
\end{figure}
\section{Two Photon PhotoEmission}
The temporal evolution of the excited state is measured by means of Two Photon PhotoEmission (2PPE). Our photon source is a Ti:Sapphire laser system delivering 6 $\mu$J pulses with repetition rate of 250 kHz. Part of the fundamental beam ($h\nu_0 = 1.55$ eV) is converted to the second harmonic ($h\nu_1= 3.15$ eV) in a $\beta$-BBO crystal while the rest is employed to generate the third harmonic ($h\nu_2 = 4.7$ eV) \cite{Faure}. We photoexcite the sample at 130 K by 50 fs pump pulses of 20 $\mu$J/cm$^2$, centered at $h\nu_1$. According to the reported value of the absorption coefficient \cite{Green}, this pulse results in an electron-hole density of $8\times10^{18}$ cm$^{-3}$.
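The quoted density follows from elementary photon counting. The short Python check below is our own back-of-the-envelope estimate; it neglects reflection losses and uses the optical penetration depth $\alpha^{-1}=50$ nm quoted in Section V.
\begin{verbatim}
e_charge = 1.602e-19        # J per eV
fluence  = 20e-6            # J/cm^2 (pump fluence of 20 uJ/cm^2)
E_photon = 3.15*e_charge    # J, pump photon energy
alpha    = 1.0/50e-7        # cm^-1, inverse optical penetration depth (50 nm)

n_eh = (fluence/E_photon)*alpha   # absorbed photons per unit volume
print(f"{n_eh:.1e} cm^-3")        # -> about 8e18 cm^-3
\end{verbatim}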
As shown in Fig. \ref{3M}A, the photons of the pump beam generate photoexcited electrons with excess energy up to $h\nu_1-\Delta_g=1.5$ eV, thereby inducing optical transitions in two branches of the conduction band. After a variable delay time, probe pulses centered at $h\nu_2=4.7$ eV promote the excited electrons above the vacuum level (see Fig. \ref{3M}B). Photoelectrons outgoing from the sample are detected by a hemispherical energy analyzer with an acceptance angle of roughly $5\times 1$ degrees$^2$ around normal emission. The overall energy resolution of 60 meV is dominated by the bandwidth of the probe beam. This technique provides a direct mapping of the electronic distribution in the photoexcited sample. Moreover, the electrons that have absorbed the two photons have an inelastic mean free path of a few nanometers \cite{Unal}. Such high surface sensitivity can be exploited to probe the electron dynamics in the topmost layers of the cleaved crystal.
\section{Ultrafast cooling of hot electrons}
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig4M.jpg}
\caption{A: Photoelectron intensity map in the as-grown sample as a function of kinetic energy and pump-probe delay. B: Energy distribution curves acquired at different values of the pump probe delay and normalized to their maximum value. C: Evolution of the average kinetic energy as a function of time. The solid line is an exponential fit with time constant $\tau_1=0.25$ ps.}
\label{4M}
\end{figure}
We show in Fig. \ref{4M}A a color scale plot of the photoelectron intensity acquired as a function of kinetic energy and pump probe delay. The nominal kinetic energy of the conduction band minimum is $E_c= h\nu_2-\phi+\Delta_g-\mu_F+E_v=0.45$ eV, where $\phi=4.3$ eV is the workfunction of our analyzer. As shown in Fig. \ref{4M}B, the spectrum acquired at the maximal overlap between pump and probe pulse (zero delay) displays a peak in proximity of $E_c$, a second peak near to $E'=E_c+0.35$ eV and a third peak at $E''=E_c+0.9$ eV. As sketched in Fig. \ref{3M}A, the structures $E'$ and $E''$ are located at excess energies where the 3.15 eV photons generate high density of optical transitions \cite{Filip}.
Next, we consider the dynamics of the electrons soon after the arrival of the pump pulse.
Figure \ref{4M}B shows that the excited electronic distribution varies strongly within the first picosecond. The electrons with large excess energy thermalize towards the bottom of the conduction band by means of electron-electron scattering and phonon emission. We exclude the occurrence of carriers multiplication because the photon energy of the pump beam is lower than twice the value of the bandgap. The initial thermalization time can be quantified by evaluating the average kinetic energy contained in the spectrum as a function of time. In practice, we calculate $\langle E\rangle_t= \int I(E,t) E dE/\int I(E,t) dE$, where $E$ is the kinetic energy, $I(E,t)$ is the photoelectron intensity, and $t$ is the pump-probe delay.
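Schematically, this analysis amounts to a column-wise weighted average followed by an exponential fit. The minimal Python sketch below uses synthetic spectra standing in for the measured map, so all numbers in it are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

E = np.linspace(0.2, 2.0, 200)                    # kinetic energy grid (eV)
t = np.linspace(0.05, 2.0, 40)                    # pump-probe delays (ps)
E_true = 0.55 + 0.45*np.exp(-t/0.25)              # synthetic <E>_t
I = np.exp(-(E[:, None] - E_true)**2/(2*0.1**2))  # mock 2PPE map I(E,t)

# <E>_t = int I(E,t) E dE / int I(E,t) dE, discretized on the uniform grid
E_mean = (I*E[:, None]).sum(axis=0)/I.sum(axis=0)

cooling = lambda t, E_inf, dE, tau: E_inf + dE*np.exp(-t/tau)
popt, _ = curve_fit(cooling, t, E_mean, p0=(0.5, 0.5, 0.5))
print(popt)                                       # recovers tau ~ 0.25 ps
\end{verbatim}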
Figure \ref{4M}C shows the resulting $\langle E\rangle_t$ together with the fit by an exponential decay with time constant $\tau_1=0.25$ ps. This value is in excellent agreement with recent 2PPE experiments on MAPbI3 thin films \cite{Niesner} and it is faster than the energy relaxation time observed in inorganic semiconductors \cite{Tanimura}. We suggest that the rapid electronic cooling of MAPbI3 arises via the coupling of excited carriers to the internal vibrations of the CH$_3$NH$_3^+$ cations. For example, the stretching modes of C-H and N-H bonds hold quantum energies of $370-400$ meV and can efficiently drain the excess energy of photoexcited electrons \cite{Brivio}. Moreover, the recent observation of optical sidebands in two-dimensional hybrid perovskites indicates a sizable coupling of the exciton to an organic mode with quantum energy of 40 meV \cite{Straus}.
According to Fig. \ref{4M}C, the hot electrons will reach quasi-equilibrium with the coupled phonons after $\cong 3\tau_1=0.75$ ps. Several authors suggested that strongly coupled modes attain an occupation level higher than the thermal one and relax on the slower timescale of anharmonic interaction \cite{Beard_hot,Price}. In this respect, time resolved Raman experiments may be helpful to address the specific vibrational modes where electrons transfer their excess energy \cite{Heinz}.
Note in Fig. \ref{4M}B that the spectra acquired at delay time of 1 ps and 10 ps display a shoulder at excess energy $\sim0.2$ eV. Niesner \emph{et al.} observed a similar shoulder only in the tetragonal phase of MAPbI3 and they ascribed it to an unconventional kind of polaronic dressing \cite{Niesner,Zhu}. This issue requires additional measurements and it is currently under investigation.
\section{Diffusion in the as-grown sample}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig5M.jpg}
\caption{A: The density profile of photoexcited electrons is calculated by the diffusion model of equation \ref{eq1} for selected delay times. B: The integrated intensity of the 2PPE signal (black marks) as a function of time is compared with the diffusion model (red line). The dotted blue line at $t=3\tau_1$ indicates the delay time when electrons have fully thermalized.}
\label{5M}
\end{figure}
Since the 2PPE technique probes carriers within the topmost nanometers, the integrated signal $\int I(E,t) dE$ follows the instantaneous electronic concentration at the surface of the sample times the photoemission cross section. Once the electrons have fully thermalized, the photoemission cross section becomes time independent. Therefore, the progressive decay of the 2PPE signal that can be observed in the middle and right panel of Fig. \ref{4M}A is due to the drift-diffusion of electronic charges from the surface into the bulk. First, we analyze the effect of pure diffusion on the long timescale dynamics. Electrons that are excited within an optical penetration depth \cite{Green} $\alpha^{-1}=50$ nm move into the bulk because of Brownian motion. The electronic concentration at distance $x$ from the surface and pump-probe delay $t$ is given by \cite{Beard_Reco}
\begin{eqnarray}
n(x,t)\propto \frac{1}{2}\exp\left(-\frac{x^2}{4Dt}\right)w\left(\alpha\sqrt{Dt}-\frac{x}{2\sqrt {Dt}}\right)+\nonumber\\
+\frac{1}{2}\exp\left(-\frac{x^2}{4Dt}\right)w\left(\alpha\sqrt{Dt}+\frac{x}{2\sqrt {Dt}}\right),
\label{eq1}
\end{eqnarray}
where $w(z)=\exp(z^2)(1-\mathrm{erf}(z))$ and $D$ is the electron diffusion constant of MAPbI3 crystals at 130 K. We recall that the mobility $\mu$ of MAPbI3 is limited by electron-phonon scattering \cite{Herz_Moby} and scales as $T^{-3/2}$. Therefore, we can invoke the Einstein relation $D=\mu k_BT\propto T^{-1/2}$ and refer to the literature value of $D$ at room temperature \cite{Cao} in order to estimate the electron diffusion constant at $T=130$ K. The resulting $D=3$ cm$^{2}$/s is plugged into equation (\ref{eq1}) for the electronic density. We show in Fig. \ref{5M}A the resulting $n(x,t)$ as a function of $x$ for selected values of $t$.
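Equation (\ref{eq1}) is straightforward to evaluate numerically, since $w(z)$ is the scaled complementary error function available in standard libraries. A minimal Python sketch (units chosen by us: $x$ in nm, $t$ in s, with the values of $D$ and $\alpha$ quoted above) reads:
\begin{verbatim}
import numpy as np
from scipy.special import erfcx      # erfcx(z) = exp(z^2)*erfc(z) = w(z)

D     = 3.0e14                       # nm^2/s (3 cm^2/s at 130 K)
alpha = 1.0/50.0                     # nm^-1 (penetration depth 50 nm)

def n(x, t):
    """Photoexcited electron density of Eq. (1), up to normalization."""
    s = np.sqrt(D*t)
    g = 0.5*np.exp(-x**2/(4.0*D*t))
    return g*(erfcx(alpha*s - x/(2.0*s)) + erfcx(alpha*s + x/(2.0*s)))

t = np.logspace(-12, -10, 5)         # 1 ps ... 100 ps
print(n(0.0, t)/n(0.0, t[0]))        # surface density, ~1/sqrt(t) at long t
\end{verbatim}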
Figure \ref{5M}B compares the calculated $n(0,t)$ with the experimental evolution of the photoemission signal in the temporal window [0.05,400] ps. The measured decay follows the diffusion behavior for delay times larger than 3 ps but proceeds much faster during the first picosecond. Niesner \emph{et al.} ascribed this initial drop to the rapid variation of the photoemission cross section during the thermalization process \cite{Niesner}. On one hand, it is fully plausible that carriers changing energy and wavevector display sudden changes of cross section. On the other hand, the measured intensity deviates from the diffusion model even after the electrons have fully thermalized (namely for $3\tau_1<t<10\tau_1$). Possibly, built-in fields at the sample surface lead to an initial drift of the electrons out of the detection region. Such ultrafast segregation between electrons and holes has been recently observed in other doped semiconductors \cite{Hajlaoui}. In the case of MAPbI3 the built-in field may originate from the residual polarity of the crystal termination \cite{She}.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig6M.jpg}
\caption{A: Photoluminescence spectra acquired on the as-grown sample and on samples annealed at 100 $^\circ$C and 200 $^\circ$C. The dashed line is a guide to the eye showing the development of trapped states upon annealing. Photoelectron intensity maps in samples annealed at 100 $^\circ$C (panel B) and 200 $^\circ$C (panel C). D: Temporal evolution of the integrated 2PPE signal in the pristine and annealed samples. The arrows indicate the characteristic timescale when electronic trapping takes place.}
\label{6M}
\end{figure}
\section{Electronic localization in annealed samples}
\begin{figure}
\includegraphics[width=0.75\columnwidth]{Fig7M.jpg}
\caption{Photoluminescence emitted in the visible spectral range from the MA200 sample at 10 K. The peak centered at 2.45 eV arises from carrier recombination in PbI$_2$ inclusions.}
\label{7M}
\end{figure}
After having discussed the drift-diffusion of the electrons in high quality crystals, we can address the important role that sample degradation has on the carrier motion. The data of the as-grown crystal are compared with the ones of the samples annealed in air at 100 $^\circ$C (MA100) and 200 $^\circ$C (MA200) for 30 minutes. The MAPbI3 single crystal is very sensitive to humidity, illumination and annealing temperature. Figure \ref{6M}A shows the photoluminescence of the as-grown and annealed samples. Upon increasing the annealing temperature, a new photoluminescence peak, arising from trapped states, develops $\cong60$ meV below the bandgap value. We report in Fig. \ref{6M}B,C the photoelectron intensity maps acquired on annealed samples as a function of pump-probe delay. The MA100 and the as-grown sample display similar behavior on the short timescale ($\cong 1$ ps). However, the 2PPE maps of MA100 (Fig. \ref{6M}B) and MA200 (Fig. \ref{6M}C) hold a remnant signal up to 400 ps. We infer that shallow traps introduced by annealing lead to a partial localization of the electrons. Note in Fig. \ref{6M}D that the long timescale evolution of the integrated 2PPE signal deviates from the diffusive $1/\sqrt{t}$ decay only in the case of annealed samples. The onset of the trapping strongly depends on the quality of the crystal. In MA100 the localization takes place on the timescale of a few picoseconds whereas it falls below the picosecond in highly degraded MA200. Probably the defect density of MA200 is so high that localization takes place as soon as the electrons cool down below the mobility edge of the traps. The localization landscape of conduction electrons is related to the compositional disorder of degraded samples \cite{Jemli}. According to Xie \textit{et al.}, annealing at 150 $^\circ$C leads to a shallow distribution of I and Pb components in the MAPbI3 thin film \cite{Xie}. Furthermore, Deretzis \textit{et al.} reported that thermodynamic degradation above 150 $^\circ$C can induce a partial conversion of MAPbI3 into PbI$_2$ \cite{Deretzis}. The low temperature photoluminescence measurements in Fig. \ref{7M} confirm that MA200 contains large inclusions of PbI$_2$.
\section{Radiative and surface recombination}
\begin{figure}
\includegraphics[width=0.68\columnwidth]{Fig8M.jpg}
\caption{Temporal evolution of the integrated 2PPE signal in the sample annealed at $100 ^\circ$C (blue marks) and $200 ^\circ$C (green marks). The red line is a fitting curve with bimolecular recombination rate $\gamma=(4\pm1)\times 10^{-10}$ cm$^3$/s whereas the yellow line (which is almost indistinguishable from the red line) is the fit obtained by equation (\ref{eq2}) with $S=4000$ cm/s and $D_T=0.05$ cm$^2$/s.}
\label{8M}
\end{figure}
Remarkably, the long lasting presence of electrons at the surface of MA100 and MA200 indicates a long recombination time of the photoexcited carriers. It has been shown that direct recombination between electrons and holes \cite{Herz_Reco} becomes dominant at photoexcitation density $\sim 10^8$ cm$^{-3}$. We display in Fig. \ref{8M} the fitting curve obtained by a kinetic equation with bimolecular recombination rate $\gamma=(4\pm1)\times 10^{-10}$ cm$^3$/s. The extracted value is in good agreement with previous reports and defies the Langevin prediction by four orders of magnitude \cite{Herz_Reco}. Wehrenfennig \emph{et al.} proposed that electrons and holes are preferentially localized in spatially distinct regions of the disordered landscape. Accordingly, density functional calculations on MAPbI3 predict that valence band maxima consist of $6s$- and $5p$ orbitals of lead and iodine, respectively, while conduction band minima mostly incorporate $6p$-orbitals of lead \cite{Nakao}.
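The kinetic equation behind this fit is $\dot{n}=-\gamma n^2$, solved by $n(t)=n_0/(1+\gamma n_0 t)$. The short Python check below is a sketch: it reuses the initial density of Section III as $n_0$, whereas the full fit of Fig. \ref{8M} also accounts for diffusion. It shows that appreciable bimolecular decay sets in beyond $\sim 100$ ps.
\begin{verbatim}
import numpy as np

gamma = 4e-10                              # cm^3/s, fitted bimolecular rate
n0    = 8e18                               # cm^-3, initial density (Sec. III)

t = np.array([10e-12, 100e-12, 400e-12])   # s
n = n0/(1.0 + gamma*n0*t)                  # solution of dn/dt = -gamma*n^2
print(n/n0)                                # -> [0.97, 0.76, 0.44]
\end{verbatim}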
Next, we exploit the high surface sensitivity of 2PPE to determine the surface recombination velocity of samples cleaved under ultra high vacuum conditions. Following the literature \cite{Beard_Reco,Hoffman}, we model the density of the electrons at the surface as
\begin{equation}
n(0,t)\propto \frac{\alpha D_T w\left(\alpha\sqrt{D_Tt}\right)-Sw\left(S\sqrt{\frac{t}{D_T}}\right)}{\alpha D_T-S},
\label{eq2}
\end{equation}
where $D_T$ is the diffusion constant of trapped electrons and $S$ is the velocity of surface recombination. Figure \ref{8M} compares the long timescale dynamics measured in the MA100 and MA200 crystals with the fit obtained for $S=4000$ cm/s and $D_T=0.05$ cm$^2$/s. This value of $S$ is strictly an upper bound because: i) we chose the $D_T$ value that provides a good fit and maximizes $S$; ii) we implicitly assume that the observed decay is ruled by surface recombination instead of bimolecular recombination. The upper bound of $S$ is not far from the surface recombination velocity estimated in other hybrid perovskites \cite{Beard_Reco, Wu}. It should be outlined that $S<4000$ cm/s is two or three orders of magnitude lower than the surface recombination velocity of most unpassivated semiconductors \cite{Schmutten, Riffe}. Accordingly, electronic structure calculations do not predict mid-gap states at thermodynamically stable MAPbI3 surfaces\cite{Haruyama}. In operating devices, this asset favors the efficient extraction or injection of carriers at the electrical contacts with the MAPbI3 layer.
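For completeness, equation (\ref{eq2}) is as easy to evaluate as equation (\ref{eq1}). The Python sketch below (our own illustration, with the parameters of the fit) yields the surface density used in Fig. \ref{8M}.
\begin{verbatim}
import numpy as np
from scipy.special import erfcx     # w(z) = exp(z^2)*erfc(z)

alpha = 1.0/50e-7    # cm^-1, inverse optical penetration depth
D_T   = 0.05         # cm^2/s, diffusion constant of trapped electrons
S     = 4000.0       # cm/s, surface recombination velocity (upper bound)

def n_surf(t):
    """Equation (2), up to an overall normalization (t in s)."""
    return (alpha*D_T*erfcx(alpha*np.sqrt(D_T*t))
            - S*erfcx(S*np.sqrt(t/D_T)))/(alpha*D_T - S)

t = np.logspace(-12, -9.4, 5)       # 1 ps ... 400 ps
print(n_surf(t)/n_surf(t[0]))
\end{verbatim}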
\section{Conclusions}
To conclude, we characterized the dynamics of excited electrons at the surface of MAPbI3. Our data provide a direct visualization of the electronic cooling at early delay times. It follows that photoexcited carriers thermalize on a subpicosecond timescale, presumably because of the coupling to the vibrations of organic cations. In the as-grown crystal, the electron dynamics is ruled by diffusion at long timescales. Most likely, an additional drift due to built-in fields sets in at early delays.
We intentionally induce compositional disorder in some MAPbI3 crystals by thermal annealing in atmospheric conditions. As a result, the photoexcited electrons are localized by shallow traps within a few picoseconds. Such a localization mechanism is consistent with the drop of photoconversion efficiency in aged cells. Finally, we estimate the surface recombination velocity of MAPbI3 cleaved in ultra high vacuum. The upper bound obtained by our analysis is consistent with previous results and is several orders of magnitude lower than the values reported in many unpassivated semiconductors.
The project leading to this article has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 687008 (GOTSolar). We also thank dim-nanoK for funding under the project 'PIED', and the DIM-Oximore and the Ecole Polytechnique for funding under the project 'ECOGAN'.
\section{Introduction}\label{sec:intro}
High harmonic generation (HHG) is a nonlinear phenomenon in which atoms interacting
with an intense laser pulse emit photons whose frequencies are integer multiples of
the driving laser frequency. The main motivation is the generation of spatially
and temporally coherent bursts of attosecond pulses with high frequencies covering
a range from vacuum ultraviolet (VUV) to the soft x-ray region~\cite{hentschel}.
Filtering the high-frequency part of a high-harmonic spectrum allows the synthesis
of ultrashort, coherent light pulses with energies in the extreme ultraviolet (XUV)
part of the spectrum. This allows for tracing and controlling electronic processes
in atoms, as well as coupled vibrational and electronic processes in
molecules~\cite{worner,itatani}. Some of the most visible applications of ultrashort
pulses of attosecond duration involve resolving the electronic structure with a high
degree of spatial and temporal resolution~\cite{chen}, controlling the dynamics in
the XUV-pumped excited molecules~\cite{rini}, and exciting and probing inner-shell
electron dynamics with high resolution~\cite{sandberg}. Time-resolved
holography~\cite{tobey}, imaging of molecular orbitals~\cite{itatani}, and attosecond
streaking~\cite{itatani2} are also among the state-of-the-art applications of HHG.
High-order harmonic generation is a process well described within the semiclassical
three-step model (ionization, propagation, and recombination). The plateau
region, where consecutive harmonics have approximately the same intensity, constitutes
the main body of a high-harmonic spectrum. The first step of the three-step model is the
tunneling of the electron through the Coulomb potential barrier suppressed by the laser
field. The second step is laser-driven propagation of the free electron, and the third
step is the rescattering of the electron with its parent ion. During this last step,
the electron can recombine with its parent ion and liberate its excess energy as a
short-wavelength harmonic photon. The three-step model predicts that the highest kinetic
energy that an electron gains during its laser-driven excursion is given by $3.17 U_p$,
where $U_p = F^2/(4\omega_0^2)$ is the quiver energy of the free electron in the laser
field, and $F$ and $\omega_0$ are the laser field amplitude and frequency. The highest
harmonic frequency, $\omega_c$, that can be generated within this
model is $q_{\max}\omega_0 = \left|{E_b}\right| + 3.17 U_p$, where $\left|{E_b}\right|$
is the binding energy of electron in the atom and $q_{\max}$ is the order of the
cut-off harmonic~\cite{corkum}.
A crucial assumption in this physical picture is that the electron tunnels into the
continuum in the first step in a laser field characterized by a small Keldysh parameter.
This liberates the electron with no excess kinetic
energy, and its subsequent excursion is driven by the classical laser field alone.
The Keldysh parameter $\gamma$ is commonly used to distinguish between the two dominant ionization
regimes in strong fields: tunneling and multiphoton ionization~\cite{keldysh}.
It is defined as the time it takes for the electron to tunnel through the
barrier in units of the laser period, i.e., $\gamma \sim \tau/T$. Here
$\tau$ is the tunneling time and $T=2\pi /\omega_0$ is the laser period. If the tunneling
time is much smaller than the laser period, one can expect the electron
to tunnel through the barrier. In contrast, if the tunneling time is much
longer than the laser period, the electron does not have enough time to tunnel through
the depressed Coulomb barrier, and ionization can only occur through photon absorption.
The Keldysh parameter can be expressed as
$\gamma =\omega_0\sqrt{2\left|{E_b}\right|}/F$~\cite{keldysh}.
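Continuing the numerical example above, the sketch below evaluates $\gamma$ for the $n=1$ parameters used in this work; it reproduces the $\gamma\approx 0.755$ used throughout this paper, up to rounding in the conversion constants.
\begin{verbatim}
import math
I_au, w_au = 3.50945e16, 45.5633          # a.u. conversion factors
omega0 = w_au / 800.0                     # 800 nm
F = math.sqrt(2.0e14 / I_au)              # 2e14 W/cm^2
gamma = omega0 * math.sqrt(2 * 0.5) / F   # |E_b| = 0.5 a.u. for H(1s)
print(gamma)                              # ~0.754, i.e. gamma ~ 0.755
\end{verbatim}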
Although the Keldysh parameter is widely used to refer to the underlying dynamics
in strong field ionization, there are studies which suggest that it is an inadequate
parameter for making this assessment~\cite{reiss1,reiss2,topcu12} when a large range of laser
frequencies is considered. Thus, it is natural to ask what happens in the strong field
ionization step of HHG as a function of $n$, as relevant parameters, such as laser
intensity and frequency, are varied while $\gamma$ is kept fixed. In this paper, we
investigate the HHG process from the ground and the Rydberg states of a hydrogen atom
using a one-dimensional $s$-wave model supported by fully three-dimensional quantum
simulations. The central idea is that in a hydrogen atom, both the field strength $F$
and the frequency $\omega_0$ scale in a particular fashion with the principal quantum
number $n$. Scaling the field strength by $1/n^4$ and the frequency by $1/n^3$, it is
evident that $\gamma =\omega_0\sqrt{2\left|{E_b}\right|}/F$ remains unaffected as $n$ is
changed, provided that both $F$ and $\omega_0$ are scaled accordingly while $n$ is
varied.
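A minimal sketch of this scaling argument, continuing the 800 nm, $2\times 10^{14}$ W/cm$^2$ example: with $F \to F/n^4$, $\omega_0 \to \omega_0/n^3$, and $\left|E_b\right| \to \left|E_b\right|/n^2$, the Keldysh parameter is unchanged, while the cut-off harmonic order grows linearly with $n$.
\begin{verbatim}
import math
omega1, F1, Eb1 = 0.0569541, 0.0754910, 0.5   # n = 1 values in a.u.
for n in (1, 2, 4, 8):
    F, omega0, Eb = F1 / n**4, omega1 / n**3, Eb1 / n**2
    gamma = omega0 * math.sqrt(2 * Eb) / F
    Up = F**2 / (4 * omega0**2)
    q_max = (Eb + 3.17 * Up) / omega0
    print(n, round(gamma, 3), round(q_max / n, 1))
# gamma stays ~0.754 for every n, while q_max/n stays ~33,
# i.e. the cut-off harmonic order q_max grows linearly with n
\end{verbatim}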
In the spirit of the Keldysh theory, going beyond the ground state and starting
from higher $n$ as the initial state, scaling $F$ and $\omega_0$ to maintain a fixed value
of $\gamma$ should keep the ionization step of the harmonic generation in the same
dynamical regime. We calculate HHG spectra starting from the ground state of hydrogen
using laser parameters for which $\gamma<1$ (tunneling), and then calculate the
high-harmonic spectra from increasingly larger $n$-states, scaling $F$ and $\omega_0$
from the ground-state simulations and keeping $\gamma$ fixed. If the Keldysh parameter
is indeed adequate in referring to the ionization step properly in HHG, one should
expect that the physics of the three-step process would remain unchanged, as the
remainder of the steps involve only classical propagation of the electron in the
continuum, and the final recombination step, which is governed by the conservation of
energy.
There are a number of studies devoted to HHG from Rydberg atoms. The main motivation
in these efforts is to increase the conversion efficiency of harmonic
generation to obtain higher yields, which in turn would enable the generation of more
intense attosecond pulses. Hu {\it et al.}~\cite{hu} demonstrated that, by stabilization
of excited outer electron of the Rydberg atom in an intense field, a highly efficient
harmonic spectrum could be generated from the more strongly bound inner electrons.
In another recent study, Zhai {\it et al.}~\cite{zhai1,zhai2} proposed that an enhanced
harmonic spectrum is possible if the initial state is prepared as a superposition of
the ground and the first excited state. The idea behind this method is that when coupled
with the ground state, ionization can occur out of the excited state, initiating the harmonic
generation. Since the excited state has lower ionization potential than the ground
state, this in principle can result in higher conversion efficiency if the electron
subsequently recombines into the excited state. In this scenario, the high-harmonic
plateau would still cut-off at the semiclassical limit $I_p+3.17U_p$ with $I_p$ being
that of the excited state. If, however, upon ionization out of the excited state, the
electron recombines into the ground state, the cut-off can be pushed up to higher
harmonics. The same principle is also at play in numerous studies proposing two-color
driving schemes for HHG, with one frequency component serving to excite the ground
state up to an excited level with a lower ionization potential, thus increasing the
ionization yield (see for example~\cite{zhai3}).
In this paper, we report HHG spectra from ground and various Rydberg states with
$n$ up to 40 for the hydrogen atom, where the laser intensity and
the frequency are such that the ionization step occurs predominantly in the
tunneling regime. Starting with $\gamma=0.755$ at $n=1$, we go up in $n$ of the initial
state and scale $F$ by $1/n^4$ and $\omega_0$ by $1/n^3$, keeping $\gamma$ constant.
We discuss the underlying mechanism in terms of field ionization and final
$n$-distributions after the laser pulse. We find that the harmonic order of the cut-off
predicted by the semiclassical three-step model scales as
$n$ when $F$ and $\omega_0$ are scaled as described above and $\gamma$ is kept fixed.
We repeat some of these model simulations by solving the fully three-dimensional time-dependent
Schr\"odinger equation to investigate the effects which may arise due to angular momenta
in high-$n$ manifolds. For select initial $n$ states, we look at momentum distributions
of the ionized electrons, and the wave function extending beyond the
peak of the depressed Coulomb potential at $1/\sqrt{F}$. Unless otherwise stated, we use
atomic units throughout.
\section{One-dimensional calculations}\label{sec:theory_hhg1d}
The time-dependent Schr\"{o}dinger equation of an electron interacting with the
proton and the laser field $F(t)$ in the $s$-wave model in length gauge reads
\begin{eqnarray}\label{schro}
\rm i\; \frac{\partial \psi (r,t)}{\partial t}
= \left[ -\frac{1}{2}\frac{{{d}^{2}}}{d{{r}^{2}}}-\frac{1}{r}+rF(t) \right]
\psi (r,t).
\end{eqnarray}
In our simulations, time runs from $-t_f$ to $t_f$. This choice of time range
centers the carrier envelope of the laser at $t=0$, which simplifies its mathematical
expression. We choose the time-dependence of the electric field $F(t)$ to be
\begin{eqnarray}\label{laser}
F(t)={F_0}\exp (-(4\ln 2){{t}^{2}}/{{\tau }^{2}})\cos ({{\omega }_{0}}t),
\end{eqnarray}
where $F_0$ is the peak field strength, $\omega_0$ is the laser frequency and $\tau $
is the field duration at FWHM. Our one-dimensional model is an $s$-wave
model and is restricted to the half space $r\ge 0$ with a hard wall at $r=0$. Having a hard
wall at $r=0$ when there is no angular momentum can potentially be problematic, because
the electron can absorb energy from the hard wall when the $-1/r$ potential is used. However, we
believe that this model is adequate for the problem at hand, because we are deep in the
tunneling regime. In our calculations, the number of photons required for ionization to occur
through photon absorption is $\sim$9 for $n=1$, approaches 71 by $n=10$, and remains near that
value for higher $n$. As a result, ionization takes place primarily in the tunneling regime. If an extra
photon is absorbed at the hard wall, its effect would mostly concern the lowest harmonics,
which we are not interested in. In Sec.~\ref{sec:theory_hhg3d}, we show that the results
we obtain in this section are consistent with our findings from fully three-dimensional
calculations.
We consider cases in which the electron is initially prepared in an $ns$ state, where $n$
ranges from 1 up to 40. Our pulse duration is 4 cycles at FWHM in each case, and the wavelength
of the laser field is 800 nm for the ground state. This gives a 2.7 fs optical cycle
when the wavelength is 800 nm. Thus, the total pulse duration $\tau$ for the ground state is
$\sim$11 fs and it scales as $n^{3}$. For the 4s state, this results in a pulse duration
of $\sim$704 fs, while it amounts to $\sim$5.6 ps for the 8s state.
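These durations follow directly from the $n^3$ scaling of the optical period; the quick check below reproduces them (the quoted $\sim$704 fs and $\sim$5.6 ps correspond to rounding the $n=1$ duration up to 11 fs).
\begin{verbatim}
c = 2.99792458e8                 # speed of light, m/s
for n in (1, 4, 8):
    lam = 800e-9 * n**3          # wavelength scaled by n^3, meters
    tau = 4 * lam / c            # 4-cycle FWHM duration, seconds
    print(n, tau * 1e15, "fs")
# n=1: ~10.7 fs,  n=4: ~683 fs,  n=8: ~5.5e3 fs (~5.5 ps)
\end{verbatim}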
For the numerical solution of equation~(\ref{schro}), we perform a series of calculations
to make sure that the mesh and box size of the radial grid and the time step we use are
fine enough so that our results are converged to within a few per cent. As we go beyond the
1s state, we increase the radial box size to accommodate the growing size of the initial
state and the interaction region. We propagate Eq.~\eqref{schro} for
excited states using a square-root mesh of the form ${{j}^{2}}\delta r$, where $j$ is the
index of a radial grid point, $\delta r=R/{{N}^{2}}$, $R$ is the box size, and $N$ is the
number of grid points. This type of grid is more efficient than using a uniform mesh in
problems involving Rydberg states~\cite{topcu07}, because it puts roughly the same number
of points between the successive nodes of a Rydberg state. For the ground state, the box
size is $R=750$ a.u. and $N=800$, which gives $\delta r=0.0012$ a.u. For excited states,
the box size grows as $\sim n^{2}$, and with a proper selection of $\delta r$ we make
sure that the dispersion condition $k\delta r=0.5$ holds for each $n$ state, where $k$ is
the maximum electron momentum acquired from the laser field:
$k=\sqrt{2{{E}_{\max }}}$ and ${{E}_{\max }}=3.17{{U}_{p}}$.
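A sketch of this mesh construction for the ground-state box quoted above (numerical values from the text); the local spacing of a square-root mesh grows linearly with the point index, roughly tracking the $\sqrt{r}$ growth of the local de Broglie wavelength in a Coulomb potential.
\begin{verbatim}
import numpy as np
R, N = 750.0, 800        # box size (a.u.) and number of radial points
dr = R / N**2            # ~0.0012 a.u., as quoted in the text
j = np.arange(1, N + 1)
r = j**2 * dr            # square-root mesh: r_j = j^2 * dr
spacing = np.diff(r)     # grows ~linearly in j, i.e. ~sqrt(r)
print(dr, r[-1], spacing.min(), spacing.max())
\end{verbatim}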
The time propagation of the wave-function is carried out using an implicit scheme. For the
temporal grid spacing $\delta t$, we use $n^3$/180 of a Rydberg period, which is small
enough to give converged results. A smooth mask function which varies from 1 to 0 starting
from 2/3 of the way between the origin and the box boundary is multiplied with the solution of
equation~(\ref{schro}) at each time step to avoid spurious reflections from the box boundaries.
The time-dependent solutions of equation~(\ref{schro}) are obtained for each initial $ns$
state, which we then use to calculate the time-dependent dipole acceleration,
$a(t)=\langle \ddot{r} \rangle (t)$:
\begin{eqnarray}\label{inten}
a(t) &=& \langle \psi(r,t) | \left[H, \left[H,r \right] \right] | \psi(r,t) \rangle \;.
\end{eqnarray}
Because the harmonic power spectrum is proportional to the squared modulus of the Fourier
transform of the dipole acceleration, we report $|a(\omega)|^2$ for harmonic spectra.
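Schematically, the reported spectrum is a windowed discrete Fourier transform of the sampled dipole acceleration; the sketch below shows the bookkeeping on a toy $a(t)$ (the Hann window, time step, and signal are illustrative choices, not values from the text).
\begin{verbatim}
import numpy as np
dt = 0.1                                  # time step, a.u. (illustrative)
t = np.arange(40000) * dt
omega0 = 0.057                            # ~800 nm in a.u.
a_t = np.cos(omega0 * t) + 1e-3 * np.cos(15 * omega0 * t)  # toy a(t)
a_w = np.fft.rfft(a_t * np.hanning(len(a_t))) * dt
omega = 2 * np.pi * np.fft.rfftfreq(len(a_t), d=dt)
spectrum = np.abs(a_w)**2                 # |a(omega)|^2
q = omega / omega0                        # harmonic order axis
print(q[spectrum.argmax()])               # peak at the fundamental, q ~ 1
\end{verbatim}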
The initial wave function is normalized to unity, and the time-dependent ionization
probability is calculated as the remaining norm inside the spatial box
at a given time $t$,
\begin{eqnarray}\label{ion}
P(t)=1-\int\limits_{0}^{{R}}{{{\left| \psi (r,t) \right|}^{2}}dr}.
\end{eqnarray}
In evaluation of the ionization probability, we propagate the wavefunction long enough after
the pulse is turned off until $P(t)$ converges to a time-independent value.
\subsection{Results and discussion}\label{sec:results_hhg1d}
In our one-dimensional simulations, we consider cases where the atom is initially
in an $n$s state with $n$ up to 40. The laser parameters are carefully chosen
so that the Keldysh parameter is fixed at $\gamma =0.755$ for each initial $n$,
and the scaled frequency of the laser field is ${{\omega }_{0}}{{n}^{3}}\ll 1$,
{\it i.e.}, the electric field has a slowly varying time-dependence compared with
the Kepler period ${{T}_{K}}=2\pi {n}^{3}$ of the Rydberg electron. For example,
for an 800 nm laser, an optical cycle is $\sim$18 times the Kepler period for $n=1$.
The cut-off frequency ${{\omega }_{c}}$ predicted by the three-step model is
${{\omega }_{c}}=\left| {{E}_{b}} \right|+3.17{{U}_{p}}$~\cite{corkum}, where
${{U}_{p}}=F^{2}/4\omega _{0}^{2}$ is the ponderomotive potential. Since
$\left| {{E}_{b}} \right|$, ${F}$ and ${{\omega }_{0}}$ scale as ${{n}^{-2}}$,
${{n}^{-4}}$ and ${{n}^{-3}}$ respectively, the cut-off frequency ${{\omega }_{c}}$
scales as ${{n}^{-2}}$ and the harmonic order of the cut-off
${{q}_{\max }}={{\omega }_{c}}/{{\omega }_{0}}$ scales as $n$ for fixed $\gamma$.
Harmonic spectra from these simulations are seen in Fig.~\ref{fig_hhg1d} (a)-(d) as
a function of the scaled harmonic order $\widetilde{q}=q/n$, where $q=\omega/\omega_0$
is the harmonic order. In Fig.~\ref{fig_hhg1d} (a), the scaled laser intensity and
the wavelength are $200/{{n}^{8}}$ TW/cm$^2$ and 800${{n}^{3}}$ nm, which correspond to
$\gamma =0.755$. The most prominent feature in these spectra is a clear double plateau
structure, exhibiting one plateau with a higher yield and another following with lower
yield. The second plateau terminates at the usual semiclassical cut-off. These plateaus
are connected with a secondary cut-off, which converges to a fixed scaled harmonic order
$\widetilde{q}=q/n$ as $n$ becomes large.
We also note that the overall size of $|a(\omega)|^2$ drops significantly with increasing
$n$ in Fig.~\ref{fig_hhg1d} (a). For example, going from $n=2$ to $n=4$, $|a(\omega)|^2$
drops about 3 orders of magnitude, and from $n=4$ to $n=8$ it drops roughly 4 orders of
magnitude. The spectrum obtained for $n=8$ is about 9 orders of magnitude lower than that
for $n=1$. Beyond $n=8$, the overall sizes of the spectra are too small and plagued by
numerical errors, which is why we stop at $n=8$ in panel (a). This is because the amplitude
of the wave function component contributing to the three-step process is too small to yield
a meaningful spectrum within our numerical precision.
In order to ensure sizable HHG spectra while climbing up higher in $n$, we adopt the
following procedure: We split the Rydberg series into different groups of initial
$n$-states, which are subject to different laser parameters but have the same $\gamma$
value within themselves. Within each group, we climb up in $n$ by scaling the laser
parameters for the lowest $n$ in the group until $|a(\omega)|^2$ becomes too small.
We then move on to the next group of $n$-states, increasing the laser intensity and
the frequency ($\gamma \propto \omega/F$) for the lowest $n$ in the group
while attaining the same $\gamma$ as in the previous $n$-groups. Scaling this intensity
and frequency, we continue to climb up in $n$ until again $|a(\omega)|^2$ becomes too
small, at which point we terminate the group and move on to the next.
Following this procedure, we are able to achieve HHG spectra for states up to $n=40$ in
Fig.~\ref{fig_hhg1d}. The first $n$-group in panel (a) includes states between $n=1-8$,
and the laser intensity and wavelength are $200/{{n}^{8}}$ TW/cm$^2$ and 800${{n}^{3}}$ nm.
In panel (b) is the second group with $n=10-18$ and the laser parameters $300/{{n}^{8}}$
TW/cm$^2$ and 652${{n}^{3}}$ nm. In panel (c), $n=20-28$ and the laser parameters are
$400/{{n}^{8}}$ TW/cm$^2$ and 566${{n}^{3}}$ nm, and finally in panel (d), $n=30-40$ with
intensity and wavelength $470/{{n}^{8}}$ TW/cm$^2$ and 522${{n}^{3}}$ nm.
The peak field strengths corresponding to these intensities are lower than the critical field
strengths for above-the-barrier ionization for the states we consider, and the ionization
predominantly takes place in the tunneling regime.
The dipole accelerations at the two cut-off harmonics for each $n$-group seen in
Fig.~\ref{fig_hhg1d} (a)-(d) are plotted in the upper two panels of Fig.~\ref{fig_hhg1d_iprob}.
Here, we plot $|a(\omega)|^2$ as a function of $n$. This figure points to a situation in
which $|a(\omega)|^2$ drops with increasing $n$ within each group of $n$. Also, for the first
few $n$-groups, $|a(\omega)|^2$ drops much faster compared to those involving higher $n$.
The reason for the decreasing $|a(\omega)|^2$ within each $n$-group in
Fig.~\ref{fig_hhg1d_iprob} can be understood by calculating the ionization probabilities in
each case and examining how they change as $n$ is varied.
Although completely ionized electrons do not contribute to the HHG process, ionization and
HHG are two competing processes in the tunneling regime. As a result, a decrease in
one accompanies a decrease in the other. The ionization probabilities from the $n$s states
in Fig.~\ref{fig_hhg1d} are plotted against their principal quantum numbers in the lowest
panel of Fig.~\ref{fig_hhg1d_iprob}. It is clear that as we go beyond the ground state, the
ionization probabilities drop significantly as $n$ is increased within each group. This
decrease is rather sharp for the first group and it levels off as we go to successive groups
involving higher $n$. The values of the scaled frequencies $\Omega={\omega}{n}^{3}$ are the
same in each $n$-group, and the laser parameters are chosen so as to make sure the condition
$\Omega \ll 1$ holds. This ensures that the ionization is not hindered by processes such as
dynamic localization. The reason behind the decreasing ionization probabilities within each
$n$-group can be understood using the quasiclassical formula~\cite{keldysh} for the tunneling
ionization rate:
\begin{eqnarray}\label{keldysh}
\Gamma_{K} \propto \left( \left| E_{b} \right| F^{2} \right)^{1/4}
\exp\left( -\frac{2\,(2\left| E_{b} \right|)^{3/2}}{3F} \right) \; .
\end{eqnarray}
The laser field strength and the electron binding energy scale as $\sim 1/n^{4}$ and
$\sim 1/n^{2}$. Thus, the magnitude of the exponent in the exponential factor in $\Gamma_K$ scales
as $n$, which results in ionization probabilities that decrease within each $n$-group when
plotted as a function of $n$ in the lowest panel of Fig.~\ref{fig_hhg1d_iprob}.
This behavior is reflected in the corresponding HHG spectra in Fig.~\ref{fig_hhg1d}
and the upper panels in Fig.~\ref{fig_hhg1d_iprob} as a diminishing HHG yield.
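A quick numerical check of this scaling, using the $n=1$ parameters of the first group: the magnitude of the exponent, $2(2\left|E_b\right|)^{3/2}/(3F)$, grows linearly with $n$ inside a group, so the quasiclassical rate falls off exponentially in $n$.
\begin{verbatim}
import math
F1, Eb1 = 0.0754910, 0.5            # n = 1 field and binding energy, a.u.
for n in (1, 2, 4, 8):
    F, Eb = F1 / n**4, Eb1 / n**2
    expo = 2 * (2 * Eb)**1.5 / (3 * F)
    rate = (Eb * F**2)**0.25 * math.exp(-expo)
    print(n, round(expo, 2), rate)
# expo = 8.83 * n grows linearly with n, so Gamma_K
# drops exponentially within each n-group
\end{verbatim}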
The decrease in the ionization probability also slows down as we successively move on to
groups of higher $n$, as indicated by the decreasing slopes of the ionization probabilities
in Fig.~\ref{fig_hhg1d_iprob} between successive $n$-groups. We find that the ratio of the
ionization probabilities between the 2s and 4s states in Fig.~\ref{fig_hhg1d_iprob} is
$\sim$39, whereas between the 12s and 14s states it is $\sim$7, between 22s and 24s states
$\sim$3, and between 32s and 34s states $\sim$2. This is an artifact of the scheme we employ
in which we divide up the Rydberg series into successive groups of $n$s states to ensure
sizable HHG spectra. The rate of decrease in the ionization probability in each group is
determined by the slope of $\Gamma_K$, {\it i.e.}, ${\rm d}\Gamma_K /{\rm d}n$. This slope
is proportional to the laser intensity we pick for the lowest $n$ in each group in order to
initiate it, and we scale it down by $1/n^8$ inside the group to keep $\gamma$ fixed.
However, although this start-up intensity for each group is larger than what it would have
been if we were to continue up in $n$ in the previous group, it is still smaller than the
initial intensity in the previous group. This results in a decreased slope going through
successive $n$-groups. Hence the decay rates for the ionization probability in successive
groups taper off, which is reflected in the two upper panels in Fig.~\ref{fig_hhg1d_iprob}.
We also calculate the final $n$-distributions for the atom after the laser pulse to see the
extent of $n$ mixing which may have occurred during its evolution in the laser field. This is
done by allowing the wave functions to evolve according to Eq.~\eqref{schro} long enough
after the laser pulse to attain a steady state. We then project them onto the bound eigenstates
of the atom to determine the final probability distributions $P(n)$ to find the atom in a
given bound state. The results are shown in Fig.~\ref{fig_hhg1d_ndist}. It is evident from
the figure that most of the wavefunction resides in the initial state after the laser pulse,
and that there is a small amount of mixing into adjacent $n$ states. The mixing is small
because only a small fraction of the total wavefunction takes part in the HHG process.
However, we cannot deduce from our calculations what fraction of the wavefunction actually
participates in HHG, and hence what fraction of it spreads to higher $n$.
Because the HHG and ionization are competing processes in this regime, the ionization
probabilities seen in the bottom panel of Fig.~\ref{fig_hhg1d_iprob} can be taken to be
an indication of the amplitude that goes into the HHG process. For example,
at $n=4$, the ionization probability is at the $\sim$10\% level in Fig.~\ref{fig_hhg1d_iprob}
and the largest amplitude after the laser pulse is in $n=5$ in Fig.~\ref{fig_hhg1d_ndist}
at 10$^{-5}$ level. This indicates that roughly a part in $10^6$ of the amplitude participating
in the HHG process recombines into higher $n$-states. On the other hand, at $n=20$,
the ionization probability is also at the $\sim$10\% level, but the spreading in $n$ is
between the $\sim$1\% and $\sim$0.1\% levels, suggesting that roughly 1 to 10\% of the
wavefunction participating in the HHG process gets spread over adjacent $n$.
In the recombination step of the HHG process, the probability for recombination back into the
initial state is the largest, chiefly because the electron leaves the atom through
tunneling with no excess kinetic energy. It largely retains the character of the
initial state because its subsequent excursion in the laser field is classical and mainly
serves for the electron wavepacket to acquire kinetic energy before recombination. In the
next section, we discuss how this small spread helps shape the double plateau structure
seen in Fig.~\ref{fig_hhg1d}.
\section{Three-dimensional Calculations} \label{sec:theory_hhg3d}
Three dimensional quantum calculations were carried out by solving the
time-dependent Schr\"odinger equation as described in Ref.~\cite{topcu07}. For
sake of completeness, we briefly outline the theoretical approach below. We
decompose the time-dependent wave function in spherical harmonics
$Y_{\ell,m}(\theta,\phi)$ as
\begin{equation}\label{scheq3d:decomp}
\Psi(\vec{r},t)=\sum_\ell f_\ell(r,t) Y_{\ell,m}(\theta,\phi)
\end{equation}
such that the time-dependence is captured in the coefficient $f_\ell(r,t)$.
For each angular momentum, $f_\ell(r,t)$ is represented radially on a square-root
mesh, which becomes a constant-phase mesh at large distances. This is ideal for
description of Rydberg states on a radial grid since it places roughly the same
number of radial points between the nodes of high-$n$ states. On a square-root
mesh, with a radial extent $R$ over $N$ points, the radial coordinate of points
are $r_j=j^2\delta r$, where $\delta r=R/N^2$.
We regularly perform convergence checks on the number of angular momenta we
need to include in our calculations as we change relevant physical parameters,
such as the laser intensity, and likewise on the radial grid parameters.
For example, $\delta r=4\times10^{-4}$ a.u. in a $R=2000$ a.u. box
gave us converged results for $n=4$, whereas $\delta r=8\times10^{-4}$ a.u. in a $R=2800$ a.u.
box was sufficient at $n=8$. We also found that the number of angular momenta we needed
to converge the harmonic spectra was much larger than $n-1$ for an initial $n$ state
({\it e.g.}, $\sim$120 for the $n=8$ state).
We split the total hamiltonian into an atomic hamiltonian plus the interaction
hamiltonian, such that $H(r,l,t)=H_\text{A}(r,l) + H_\text{L}(r,t)-E_\text{0}$. Note
that we subtract the energy of the initial state from the total hamiltonian to
reduce the phase errors that accumulate over time. The atomic hamiltonian
$H_\text{A}$ and the hamiltonian describing the interaction of the atom with the
laser field in the length gauge are
\begin{eqnarray}\label{scheq3d:split}
H_\text{A}(r,l) &=& -\frac{1}{2}\frac{\text{d}^2}{\text{d}r^2}-\frac{1}{r}
+\frac{l(l+1)}{2r^2} \;, \\
H_\text{L}(r,t) &=& F(t) z \cos(\omega t) \;.
\end{eqnarray}
Contribution of each of these pieces to the time-evolution of the wave function
is accounted for through the lowest-order split-operator technique. In this
technique, each split piece is propagated using an implicit scheme of order
$\delta t^3$. A detailed account of the implicit method and the split operator
technique employed is given in Ref.~\cite{topcu07}.
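As a schematic illustration of this type of propagation, the sketch below advances a radial wave function through one split step with Crank--Nicolson sub-steps for the two pieces. It uses a uniform grid, $l=0$, and a frozen field value purely for brevity, so it is a minimal caricature of the square-root-mesh implementation of Ref.~\cite{topcu07}, not a reproduction of it.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

N, dr, dt = 2000, 0.05, 0.02            # illustrative grid and step (a.u.)
r = (np.arange(N) + 1) * dr
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dr**2
H_A = -0.5 * lap + diags(-1.0 / r)      # atomic piece (l = 0)

def cn_step(psi, H):
    """One Crank-Nicolson step; implicit, with O(dt^3) local error."""
    A = identity(N, format="csc") + 0.5j * dt * H.tocsc()
    B = identity(N, format="csc") - 0.5j * dt * H.tocsc()
    return spsolve(A, B @ psi)

psi = 2.0 * r * np.exp(-r)              # 1s radial function u(r) = r R(r)
H_L = diags(0.01 * r)                   # interaction piece, frozen field value
psi = cn_step(psi, H_A)                 # lowest-order split step:
psi = cn_step(psi, H_L)                 # atomic piece, then laser piece
print(np.sum(np.abs(psi)**2) * dr)      # norm stays ~1 (unitary steps)
\end{verbatim}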
The interaction Hamiltonian, $F(t)r\cos(\theta)$, couples $\ell$ to
$\ell \pm 1$. The laser pulse envelope has the same time-dependence as in the
one-dimensional $s$-wave model calculations (Eq.~\ref{laser}).
The harmonic spectrum is usually described as the squared modulus of the Fourier transform of the
expectation value of the dipole moment ($d_z(t)=\langle z\rangle(t)$), dipole velocity
($v_z(t)=\langle \dot{z}\rangle(t)$), or the dipole acceleration ($a_z(t)=\langle \ddot{z}\rangle(t)$)
(see~\cite{bandrauk} and references therein).
In our three-dimensional calculations, we evaluate all three forms and compare
them for different initial $n$ states:
\begin{eqnarray}
d_z(t) &=& \langle \Psi(\vec{r},t) |z| \Psi(\vec{r},t) \rangle \\
v_z(t) &=& \langle \Psi(\vec{r},t) |\dot{z}| \Psi(\vec{r},t) \rangle \\
a_z(t) &=& \langle \Psi(\vec{r},t) |\ddot{z}| \Psi(\vec{r},t) \rangle \;,
\end{eqnarray}
where $\dot{z}=-\rm i[H,z]$ and $\ddot{z}=-[H,[H,z]]$.
Ref.~\cite{bandrauk} found that the Fourier transforms $d_z(\omega)$, $v_z(\omega)$,
and $a_z(\omega)$
are in good agreement when the pulses are long and ``weak'' in harmonic generation
from the ground state of the H atom, where ``weak'' refers to intensities below the over-the-barrier
ionization limit. As we increase the initial $n$ in our simulations keeping the
Keldysh parameter $\gamma$ constant, we find that the agreement between these three forms
of harmonic spectra gets better. This
observation is in agreement with the findings in Ref.~\cite{bandrauk}, because to keep
$\gamma$ fixed, we scale the pulse duration by $\sim$$n^3$ and the peak
laser field strength by $\sim$$1/n^4$. Although the energy of the initial state
is also scaled by $\sim$$1/n^2$ and the pulse duration is the same in number of optical
cycles, the ionization probability drops within a given
$n$-series in Fig.~\ref{fig_hhg1d_iprob}. This suggests that the pulse is effectively
getting weaker as we increase $n$ for fixed $\gamma$. We report only the
dipole acceleration form $|a_z(\omega)|^2$ to refer to harmonic spectra, chiefly
because it is this form that is directly proportional to the emitted power,
{\it i.e.}, $S(\omega)=2 |a_z(\omega)|^2 /(3\pi c^3)$.
Because high-harmonic generation and ionization are competing processes in the
physical regime we are interested in, it is useful to investigate the momentum
distribution of the ionized part of the wave function to gain further insight
into the HHG process.
In order to evaluate the momentum distributions, we follow the same procedure
outlined in Ref.~\cite{vrakking}. For the sake of completeness, we briefly describe
the method: In all simulations, the ionized part of the wave function is
removed from the box every time step during the time propagation,
in order to prevent unphysical reflections from the radial box edge. This is done
by multiplying the wavefunction by a mask function $m(r)$ at every time step,
where $m(r)$ spans 1/3 of the radial box at the box edge. We
retrieve the removed part of the wave function by evaluating
\begin{equation}\label{scheq3d:masked}
\Delta \psi_{l}(r,t')=[1-m(r)]\;\psi_{l}(r,t')
\end{equation}
at every time step, and Fourier transform it to get the momentum space wave
function
$\Delta \phi(p_{\rho},p_z,t')$,
\begin{align}\label{scheq3d:pmap}
\Delta\phi(p_{\rho},p_z,t') = 2\;&\sum_{l}(-i)^{l}
\;Y_{l,m}(\theta,\varphi) \nonumber \\
&\times\int_0^{\infty} j_{l}(pr) \Delta\psi_{l}(r,t') r^2 \;dr \;.
\end{align}
Here the momentum $p=(p_{\rho}^2+p_z^2)^{1/2}$ is in cylindrical coordinates and
$j_{l}(pr)$ are the spherical Bessel functions. We then time propagate
$\Delta\phi(p_{\rho},p_z,t')$ to a later final time $t$ using the semi-classical
propagator,
\begin{equation}\label{scheq3d:pprop}
\Delta \phi(p_{\rho},p_z,t) = \Delta \phi(p_{\rho},p_z,t')\;e^{-i S}
\end{equation}
where $S$ is the classical action. For the time-dependent laser field $F(t)$,
action $S$ is calculated numerically by integrating $p_z^2$ along the laser
polarization,
\begin{eqnarray}\label{scheq3d:action}
S &=& \frac{1}{2}p_{\rho}^2 (t-t') + \frac{1}{2}\int_{t'}^t p_z^2 dt'' \\
p_z &=& \int_{t'}^t F(t'') dt''
\end{eqnarray}
We are assuming that the ionized electron is freely propagating in the classical
laser field in the absence of the Coulomb field of its parent ion, and this method
is numerically exact under this assumption.
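A sketch of the Fourier--Bessel step in Eq.~(\ref{scheq3d:pmap}) for a single $l$ component, evaluated by direct quadrature; the masked radial function here is a Gaussian placeholder for $[1-m(r)]\psi_l(r)$, and all grids are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

r = np.linspace(1e-3, 200.0, 4000)        # radial grid, a.u.
dr = r[1] - r[0]
dpsi = np.exp(-((r - 80.0) / 10.0)**2)    # placeholder for [1-m(r)] psi_l(r)
l = 2                                     # one partial wave
p = np.linspace(1e-3, 2.0, 200)           # momentum grid, a.u.
# radial integral int j_l(p r) dpsi(r) r^2 dr, evaluated for each p
phi = np.array([np.sum(spherical_jn(l, pk * r) * dpsi * r**2) * dr
                for pk in p])
print(p[np.argmax(np.abs(phi))])          # momentum at which |phi_l| peaks
\end{verbatim}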
\subsection{Results and discussion} \label{sec:results_hhg3d}
The double plateau structure we see in the one-dimensional spectra in
Fig.~\ref{fig_hhg1d} can also be observed in our three-dimensional simulations.
In Fig.~\ref{fig_hhg3d}, the squared dipole acceleration $|a(\omega)|^2$ is plotted
for the initial states 1s (black), 4s (green), and 8s (blue) of the hydrogen atom as
a function of the scaled harmonic order $\omega/(\omega_0 n)\equiv \widetilde{q}$.
In these calculations, we adhere to $\gamma=0.755$ as in the one-dimensional calculations,
and start at $n=1$ with intensity $2\times10^{14}$ W/cm$^2$ and $\lambda=800$ nm. From
this, we use the $n$ scaling discussed in Sec.~\ref{sec:results_hhg1d} to determine the
laser parameters for higher $n$ states. Apart from the double plateau structure, there is a
decrease in the HHG yield with increasing $n$ in Fig.~\ref{fig_hhg3d}, similar to the
one-dimensional case. Again, this suggests that although $\gamma$ is fixed for all three
initial states in Fig.~\ref{fig_hhg3d}, the atom sinks deeper into the tunneling regime as
$n$ is increased, similar to what we have seen in the one-dimensional case in
Sec.~\ref{sec:results_hhg1d}. The main difference in Fig.~\ref{fig_hhg3d} is that the
first plateau is not as flat as in the one-dimensional calculations, as is often the case when
comparing one- and three-dimensional HHG spectra.
In order to clearly identify the first and the second cut-offs seen in Fig.~\ref{fig_hhg1d},
we have smoothed the 4s and 8s spectra by boxcar averaging to reveal their main structure
(solid red curves) in Fig.~\ref{fig_hhg3d}. The usual scaled cut-off from the semiclassical
three-step model is at $q_{\rm max}/n\simeq 35$ in all three spectra, and it is independent
of $n$. A secondary cut-off emerges at the same scaled harmonic as in the one-dimensional case,
which is labeled as $k_2$ in the 4s and the 8s spectra at $\widetilde{q}\simeq 23.45$. It is
clear from Fig.~\ref{fig_hhg3d} that just as the usual cut-off at $q_{\rm max}/n$, $k_2$
is also universal for $n>4$. This secondary cut-off separates the two plateaus, the first
spanning lower frequencies below $k_2$, and the second spanning higher frequencies between
$k_2$ and $q_{\rm max}/n$.
The mechanism behind the formation of the secondary cut-off $k_2$ can be understood in
terms of the ionization and the recombination steps of the semiclassical model. In the first
step, the electron tunnels out of the initial $n$s state into the continuum, and has initially
no kinetic energy. After excursion in the laser field, it recombines with its parent ion. In
this last step, recombination occurs primarily back into the initial state. This is because the
electron was liberated into the continuum with virtually no excess kinetic energy, and the
electron wavepacket mainly retains its original character. When it returns to its parent ion
to recombine, the recombination probability is highest for the bound state with which it overlaps
the most. As a result, recombination into the same initial state is favored. This mechanism is
associated with the usual cut-off since its position depends on the ionization potential:
$q_{\rm max}=(I_p + 3.17 U_p)/\omega_0$.
On the other hand, there is still a probability that the electron can recombine into higher $n$
states. This would result in lower harmonics because less energy than $I_p$ needs to be converted
to harmonics upon recombination. The cut-off for this mechanism would be achieved when the
electron recombines with zero energy near the threshold ($n\rightarrow \infty$). Because
the maximum kinetic energy a free electron can accumulate in the laser field is $3.17 U_p$,
the lower harmonic plateau would cut off at $3.17 U_p$. For the laser parameters used in
Fig.~\ref{fig_hhg3d}, this corresponds to the scaled harmonic $\widetilde{q}=23.45$, which is
marked by the red arrows labeled as $k_2$ on the 4s and the 8s spectra. To reiterate, the
second plateau with higher harmonics includes:
\begin{enumerate}
\item trajectories which recombine to the initial state ($n_1 \rightarrow n_1$) after
accumulating kinetic energy up to $3.17 U_p$,
\item trajectories which recombine to a higher but nearby $n$ state ($n_1 \rightarrow n_2$,
where $n_2 > n_1$) that have acquired kinetic energy up to $3.17 U_p$,
\item trajectories which recombine to much higher $n$ states ($n_1 \rightarrow n_2$, where
$n_2 \gg n_1$) resulting in the cut-off at $\widetilde{q}=23.45$.
\end{enumerate}
The $n$- and $l$-distributions for the 4s and 8s states as a
function of time can be seen in Fig.~\ref{fig_hhg3d_nldist}. Notice that the laser
pulse is centered at $t=0$ o.c. and has 4 cycles at FWHM for both states. It is
clear from the first column that the atom mostly stays in the initial state and
only a small fraction of the wavefunction contributes to the HHG process.
To appreciate how small, we note that the highest contour is at unity, and lowest
contour for both the 4s and the 8s states is at the $\sim$10$^{-10}$ level. At the end of
the pulse, there is a small spread in $n$, which is skewed towards higher $n$ in
both cases. This skew is expected since the energy separation between the adjacent $n$
manifolds drops as $\sim 1/n^3$, and therefore it is easier to spread to the higher $n$
manifolds than to lower $n$. The small amplitude for this spread is a consequence of the
fact that we are not in the $n$-mixing regime. In the second column, we see that the
orbital angular momentum $l$ also spreads to higher $l$ within the initial $n$-manifold,
and the small leakage to higher angular momenta at the end of the pulse is a consequence
of the small probability for spreading to the higher $n$-manifolds.
The second step of the harmonic generation process, involving the free evolution of the
electron in the laser field, can be understood on purely classical grounds.
It was the classical arguments that led to the $3.17 U_p$ limit for the maximum
kinetic energy attainable by a free electron. In the context of this paper, performing
such classical simulations can yield no insight into how the excursion step of the HHG
behaves under the scaling scheme we have employed so far. This is because the classical
equations of motion perfectly scale under the transformations $r\rightarrow r n^2$,
$t\rightarrow t n^3$, $\omega\rightarrow \omega/n^3$, and $E\rightarrow E/n^2$,
where $r$ is distance and $t$ is time.
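This invariance is easy to verify numerically. The sketch below integrates a one-dimensional soft-core version of the classical equation of motion, $\ddot{x}=-x/(x^2+a^2)^{3/2}-F\cos(\omega t)$, for base parameters and for the $n$-scaled parameters (with the soft-core width scaled as $a\to an^2$ so that the regularized potential also scales); the rescaled trajectory reproduces the original to solver tolerance. All parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def trajectory(F, omega, a, x0, T):
    rhs = lambda t, y: [y[1],
                        -y[0] / (y[0]**2 + a**2)**1.5 - F * np.cos(omega * t)]
    sol = solve_ivp(rhs, (0.0, T), [x0, 0.0], rtol=1e-10, atol=1e-12,
                    t_eval=np.linspace(0.0, T, 200))
    return sol.y[0]

F, omega, a, x0, T, n = 0.05, 0.057, 1.0, 3.0, 500.0, 4
x1 = trajectory(F, omega, a, x0, T)                   # base run
xn = trajectory(F / n**4, omega / n**3, a * n**2,     # n-scaled run
                x0 * n**2, T * n**3)
print(np.max(np.abs(xn / n**2 - x1)))                 # ~0 (solver tolerance)
\end{verbatim}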
On the other hand, it is the lack of this perfect scaling property of the Schr\"odinger
equation that accounts for the differences between different initial $n$ states
we have seen from our quantum simulations. One way to examine the excursion step by
itself in our quantum simulations is to look at the momentum distribution of the part of
the wavefunction that contributes to the HHG spectra.
To this end, we calculate the momentum map of the ionized part of the wavefunction when
the atom is initially prepared in the 4s state. The reason we look at the ionized part
of the wavefunction is that harmonic generation and ionization are competing processes.
Therefore one would expect that they should mirror each other in their behavior.
Fig.~\ref{fig_hhg3d_pmap} shows this momentum distribution obtained by Fourier transforming
the ionized part of the wavefunction, which is accumulated over time until after the
laser pulse (see Eq.~\eqref{scheq3d:masked} onward). Since the problem has cylindrical
symmetry, the horizontal axis is labeled $p_{||}$ to refer to the momentum component
parallel to the laser polarization direction (same as $p_z$). The vertical axis
$p_{\perp}$ is the perpendicular component. We have also labeled the $3.17 U_p$ limit
for the maximum kinetic energy attainable, which lies along the dot-dashed semicircle. As expected,
the total momentum of the escaped electrons cuts off at $3.17 U_p$, and the components
which would have contributed to the two different plateaus in Fig.~\ref{fig_hhg3d}
are visible close to the laser polarization direction.
We also look at the momentum map of the wavefunction inside our numerical box that
falls beyond the peak of the depressed Coulomb potential at $r=1/\sqrt{F}$.
Part of the wavefunction in the
region $r < 1/\sqrt{F}$ is removed by multiplying it with a smooth mask
function before the Fourier transformation step described in Sec.~\ref{sec:theory_hhg3d}.
The results when the atom is initially in the 4s and 8s states are seen in
Fig.~\ref{fig_hhg3d_pmap2} at five instances during the laser cycle at
the peak of the pulse (labeled A, B, C, D, and E). We have also labeled three semicircles
corresponding to three momenta $\sqrt{p^2_{||} + p^2_{\perp}}$ of interest:
\begin{enumerate}
\item the $3.17 U_p$ limit, also seen in Fig.~\ref{fig_hhg3d_pmap},
\item $k_1$ corresponding to the kinetic energy $U_p$,
\item $k_2$ corresponding to the kinetic energy necessary to emit the
harmonic $\widetilde{q}=23.45$ at the secondary cut-off in Fig.~\ref{fig_hhg3d},
{\it if the electron recombines into its initial 4s or 8s state upon rescattering}.
\end{enumerate}
The amplitude inside the $k_1$ semicircle contributes to only very low harmonics,
below the scaled harmonic labeled as $k_1$ in Fig.~\ref{fig_hhg3d}. This part of the
spectra is not suitable for the semiclassical three step description of HHG. The annular region
between the semicircles $k_1$ and the $k_2$ contributes to the first low harmonic
plateau in Fig.~\ref{fig_hhg3d}. Finally, the region between $k_2$ and the semiclassical
$3.17 U_p$ limit contributes to the less intense second plateau. The distinction
between the lower harmonics from the inner $k_1$ semicircle and the higher harmonics from
the annular region between $k_1$ and $k_2$ is manifested most clearly in the 4s column,
as longer and shorter wavelengths in the momentum maps inside these regions. As expected,
both momentum maps for the 4s and the 8s initial states show the same structures, the essential
difference being the number of nodes in the momentum space wave functions which scales as $n^2$.
Incidentally, a rescattering event is visible on the laser polarization axis at $k_2$ in panel
D of the 4s column, giving rise to kinetic energy beyond the $3.17 U_p$ limit on the left.
\section{Conclusions}
We have presented results from one- and three-dimensional time-dependent quantum
calculations for higher-order harmonic generation from excited states of H atom
for a fixed Keldysh parameter $\gamma$. Starting from the ground state, we chose
laser intensity and frequency such that we are in the tunneling regime and ionization
probability is well below one per cent. We then scale the intensity by $1/n^8$ and the
frequency by $1/n^3$ to keep $\gamma$ fixed as we increase the principal quantum
number $n$ of the initial state of the atom. Because $\gamma$ is fixed, the common
wisdom is that the dynamical regime which determine the essential physics should
stay unchanged in the HHG process as we go up in $n$ of the initial state.
Our one-dimensional calculations demonstrate that this is indeed the case, and
although the emitted power (HHG yield) drops as we climb up in $n$, the resulting
harmonic spectra display the same {\it universal} features beyond $n\sim 10$. The most
distinctive feature that develops when the atom is initially prepared in a Rydberg state
is the emergence of a secondary cut-off below the semiclassical cut-off $q_{\rm max}$
in the HHG plateau. This secondary cut-off splits the harmonic plateau into two regions:
one spanning low harmonics and terminating at the secondary cut-off, and a second
plateau with lower yield and higher harmonics terminating at the usual semiclassical
cut-off at $q_{\rm max}$.
We have also found that the positions of these cut-off harmonics scale
as $n$, and introduced the concept of the ``scaled harmonic order'',
$\widetilde{q}=\omega/(\omega_0 n)$. When plotted as a function of $\widetilde{q}$,
the harmonic spectra appear universal and, except for the overall yields, the spectra
for high $n$ look essentially identical.
We then carried out fully three-dimensional calculations for three of the $n$ states in the
lowest $n$-group in the one-dimensional calculations to gain further insight into
the scaling properties we have seen in the one-dimensional calculations. This also
serves to investigate possible effects of having angular momentum. We found the same features
as in the one-dimensional spectra, except that the yield from the first plateau is
skewed towards lower harmonics. We associate this with spreading to higher $n$ states
during the tunnel ionization and recombination steps by analyzing the
$n$- and $l$-distributions of the atom after the laser pulse. Momentum distributions
of the ionized electrons and the wave function beyond the peak of the depressed
Coulomb potential at $r=1/\sqrt{F}$ show features which we can relate to the universal features
we see in the HHG spectra at high $n$. We identify the first plateau in this universal
HHG spectrum with features in momentum space between two values of momentum: (1) the momentum
corresponding to kinetic energy $U_p$, and (2) the momentum corresponding to
kinetic energy if the electron emits the secondary cut-off harmonic
upon recombining to its initial state. The latter case also occurs when the electron
recombines to a much higher Rydberg state than the one it tunnels out of, after
accumulating maximum possible kinetic energy of $3.17 U_p$ during its excursion in
the laser field.
\section{Acknowledgments}
IY, EAB and ZA were supported by BAPKO of Marmara University. ZA would like to thank
the National Energy Research Scientific Computing Center (NERSC) in Oakland, CA.
TT was supported by the Office of Basic Energy Sciences, US Department of Energy, and by
the National Science Foundation Grant No. PHY-1212482.
\section{Introduction} \label{Introduction}
Consider a general channel $b$ over which the uniform $X$ source is \emph{directly} communicated within distortion $D$.
This means the following:
Let the source input space be $\mathcal X$ and the source reproduction space be $\mathcal Y$; $\mathcal X$ and $\mathcal Y$ are finite sets. Intuitively, a uniform $X$ source, $U$, puts a uniform distribution on all sequences with type $p_X$. This is as opposed to the i.i.d. $X$ source, which puts ``most of'' its mass on sequences with type ``close to'' $p_X$. See Section \ref{NotAndDefIII} for a precise definition. A general channel is a sequence $<b^n>_1^\infty$ where $b^n$ is a transition probability from $\mathcal X^n$ to $\mathcal Y^n$; a precise definition of a general channel can be found in Section \ref{NotAndDefIII}. When the block-length is $n$, the uniform $X$ source is denoted by $U^n$. With input $U^n$ into the channel, the output is $Y^n$, which is such that
\begin{align}
\lim_{n \to \infty} \Pr \left ( \frac{1}{n} d^{n}(U^{n}, Y^{n}) > D \right ) = 0
\end{align}
where $<d^n>_1^\infty$, $d^n: \mathcal X^n \times \mathcal Y^n \to [0, \infty )$, is a permutation-invariant (a special case is additive) distortion function. The generality of the channel is in the sense of Verdu and Han \cite{VerduHan}. See Section \ref{NotAndDefIII} for precise definitions. See Figure \ref{channelDistortionD}.
\begin{figure}
\begin{center}
\includegraphics[scale = 1.0]{channelDTex.pdf}
\caption{A channel which communicates the uniform $X$ source within distortion $D$ }
\label{channelDistortionD}
\end{center}
\end{figure}
Such a general channel intuitively functions as follows: when the block-length is $n$, a sequence $u^n \in \mathcal U^{n}$ is, with high probability, distorted to within a ball of radius $nD$, and the probability that the distortion exceeds $nD$ tends to $0$ as $n \to \infty$. Note that $u^n \in \mathcal U^n$ but the ball of radius $nD$ exists in the output space $\mathcal Y^n$. See Figure \ref{intuitiveChannel}.
\begin{figure}
\begin{center}
\includegraphics[scale = 1.0]{intuitiveChannelTex.pdf}
\caption{Intuitive action of a channel which directly communicates the uniform $X$ source within a distortion $D$ }
\label{intuitiveChannel}
\end{center}
\end{figure}
\emph{Note that the uniform $X$ source is not defined for all block-lengths; this point will be clarified in Section \ref{NotAndDefIII}.}
Consider the two problems:
\begin{itemize}
\item
Covering problem: the rate-distortion source-coding problem of compressing the source $U$ within distortion $D$, that is, computing the minimum rate needed to compress the source $U$ within a distortion $D$. Denote the rate-distortion function by $R_U(D)$. Intuitively, the question is to find the minimum number of $y^n \in \mathcal Y^n$ such that balls of radii $nD$ circled around $y^n$ \emph{cover} the space $\mathcal U^n$. Note that balls are circled on $y^n \in \mathcal Y^n$ but balls of radius $nD$ exist in $\mathcal U^n$. Since the setting is information-theoretic, the balls should `almost' cover the whole space.
\item
Packing problem: the channel-coding problem of communicating reliably over a general channel $b$ which is known to directly communicate the source $U$ within distortion $D$. Denote the channel capacity by $C$. Intuitively, the question is to find the maximum number of $u^{n} \in \mathcal U^n$ such that balls of radii $nD$ circled around these $u^n$ \emph{pack} the $\mathcal Y^n$ space. Note that $u^n \in \mathcal U^n$ but balls of radii $nD$ circled around these codewords exist in the $\mathcal Y^n$ space. Since the setting is information-theoretic, the balls which pack the space can overlap `a bit'.
\end{itemize}
Clearly, there is a duality in these problem statements. It is unclear how to make this duality precise for these deterministic problems. However, a randomized covering-packing duality can be established between the above two problems, thus also proving that the answer to the first problem is less than or equal to the answer to the second problem, in the following way:
The codebook construction and error analysis for the source-coding problem are roughly the following:
Let the block-length be $n$. Generate $2^{nR}$ codewords $\in \mathcal Y^n$ independently and uniformly from the set of all sequences with type $q$ where $q$ is an achievable type on the output space. \emph{Roughly}, a $u^n \in \mathcal U^n$ is encoded via minimum distance encoding. The main error analysis which needs to be carried out is the probability that a certain codebook sequence does not encode a particular $u^n$, that is,
\begin{align}\label{E1}
\Pr \left ( \frac{1}{n} d^n(u^n, Y^n) > D \right )
\end{align}
where $Y^n$ is a uniform random variable on sequences of type $q$. A best possible $q$ is chosen in order to get an upper bound on the rate-distortion function.
The codebook construction and error analysis for the channel-coding problem are roughly the following:
Let the block-length be $n$. Generate $2^{nR}$ codewords $\in \mathcal U^n$ independently and uniformly. Let $y^n$ be received. The decision of which codeword was transmitted is made \emph{roughly} via minimum distance decoding. As will become clearer later, the main error calculation in the channel-coding problem is the probability of correct decoding, for which the following needs to be calculated:
\begin{align}\label{E2}
\Pr \left ( \frac{1}{n} d^n(U^n, y^n) > D \right )
\end{align}
where $y^n$ has type $q$. Finally, a worst case error analysis is done by taking the worst possible $q$.
By symmetry, (\ref{E1}) and (\ref{E2}) are equal assuming the distortion function is additive (more generally, permutation invariant) and this leads to a proof that $C \geq R_U(D)$. The above steps will be discussed in much detail, later in this paper. \emph{This equality of (\ref{E1}) and (\ref{E2}) is a randomized covering-packing connection, and is a duality between source-coding and channel-coding.} Further, this is an operational view and proof in the sense that only the operational meanings of channel capacity as the maximum rate of reliable communication and the rate-distortion function as the minimum rate needed to compress a source with certain distortion are used. Of course, certain randomized codebook constructions are used. No functional simplifications beyond the equality of (\ref{E1}) and (\ref{E2}) are needed.
This proof is discussed precisely in Section \ref{RCPD} and intuitively in Appendix \ref{AppendixIntuitive}.
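The symmetry behind the equality of (\ref{E1}) and (\ref{E2}) is easy to see numerically. The toy sketch below uses binary alphabets, Hamming distortion, and a fixed block-length: a uniform draw from the type class of $q$ against a fixed $u^n$ of type $p_X$, and a uniform draw from the type class of $p_X$ against a fixed $y^n$ of type $q$; the two Monte Carlo estimates agree up to sampling error. All numerical choices are illustrative.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
n, D = 60, 0.4
u = np.array([0] * 30 + [1] * 30)   # fixed sequence of type p_X = (1/2, 1/2)
y = np.array([0] * 20 + [1] * 40)   # fixed sequence of type q = (1/3, 2/3)

def d_bar(a, b):
    return np.mean(a != b)          # (1/n) d^n under Hamming distortion

# (E1): Y^n uniform on the type class of q, u^n fixed
e1 = np.mean([d_bar(u, rng.permutation(y)) > D for _ in range(20000)])
# (E2): U^n uniform on the type class of p_X, y^n fixed
e2 = np.mean([d_bar(rng.permutation(u), y) > D for _ in range(20000)])
print(e1, e2)                       # equal up to Monte Carlo fluctuations
\end{verbatim}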
Suppose $b$ is the composition of an encoder, channel and decoder, that is, $b^n = e^n \circ k \circ f^n$ for some encoder-decoder $<e^n, f^n>_1^\infty$ and channel $k$, and suppose the uniform $X$ source is communicated over this channel by use of some encoder-decoder $<E^n, F^n>_1^\infty$. Then, it follows that by use of the encoder-decoder $<E^n \circ e^n, f^n \circ F^n>_1^\infty$, reliable communication can be accomplished over the channel $k$ at rates $<R_U(D)$. By use of the argument of source-coding followed by channel-coding, the optimality of source-channel separation for communication of the uniform $X$ source over the channel $k$ follows. \emph{This leads to an operational view of source-channel separation for communication with a fidelity criterion.} Note that both the channel capacity problem and the rate-distortion problem are infinite-dimensional optimization problems. By use of this methodology, the optimality of source-channel separation is proved without reducing the problems to finite-dimensional problems. This is as opposed to the proof of separation, for example, in \cite{Shannon}, which crucially relies on the single-letter maximum mutual information expression for channel capacity and the single-letter minimum mutual information expression for the rate-distortion function.
Since the decoding rule for the channel-coding problem depends only on the end-to-end description that the channel communicates the uniform $X$ source within distortion $D$, in addition to a general channel, and assuming random codes are permitted, duality also holds for a compound channel, that is, where the channel belongs to a set (see for example \cite{CsiszarKorner} for a discussion of compound channels). Note that the channel model is still general. For the same reason, source-channel separation for communication with a fidelity criterion also holds for a general, compound channel, assuming random codes are permitted. This will be discussed in some detail later.
\emph{An operational view, as regards this paper, refers to a view which uses only the operational meanings of quantities: for example, of channel capacity as the maximum rate of reliable communication or the rate-distortion function as the minimum rate needed to code a source with a certain distortion.} It does \emph{not} mean constructive.
The source $U$ is ideal for this purpose because it puts mass only on the set of sequences with a particular type. If one tries to carry out the above argument for the i.i.d. $X$ source, $\epsilon$s and $\delta$s enter the picture. A generalization to the i.i.d. $X$ source can be made via a perturbation argument.
\section{Literature Survey}
Duality between source-coding and channel-coding has been discussed in a number of settings in the information-theory literature.
Shannon \cite{Shannon} discussed, on a high level, a functional duality between source-coding and channel-coding by considering a channel-coding problem where there is a cost associated with different input letters which amounts to finding a source which is just right for the channel and desired cost. Similarly, the rate-distortion source-coding problem corresponds to finding a channel that is just right for the source and the allowed distortion level. Further, Shannon makes the statement, ``This duality can be pursued further and is related to a duality between past and future and notions of control and knowledge. Thus we may have knowledge of the past but cannot control it; we may control the future but have no knowledge of it.''
A general formulation of this functional duality has been posed in \cite{Ramchandran} which considers the channel capacity with cost constraints problem and the rate-distortion problem, defines when the problems are duals of each other, and proves that channel capacity is equal to the rate-distortion function if the problems are dual. The purpose of our paper is not a functional duality or a mathematical programming based duality, but a operational duality where operational is defined in the previous section.
Operational duality, as defined by Ankit et al \cite{Ankit}, refers to the property that optimal encoding/decoding schemes for one problem lead to optimal encoding/decoding schemes for the corresponding dual problem. They show that, if used as a lossy compressor, the maximum-likelihood channel decoder of a randomly chosen capacity-achieving codebook achieves the rate-distortion function almost surely. \emph{Note that the definition of operational used in \cite{Ankit} is different from the definition of operational used in this paper.}
Csiszar and Korner \cite{CsiszarKorner} prove the rate-distortion theorem by first constructing a ``backward'' DMC and codes for this DMC such that source-codes meeting the distortion criterion are obtained from this channel code by using the channel decoder as a source encoder and vice-versa; for this purpose, channel codes with large error probability are needed. The view-point is suggestive of a duality between source and channel coding. There is no backward channel in our paper: there is a forward channel which directly communicates the source $U$ within distortion $D$ and there is the rate-distortion source-coding problem.
Yassaee \cite{Yassaee} have studied the duality between the channel-coding problem and the secret-key agreement problem (in the source-model sense). They show how an achievability proof for each of these problems can be converted into an achievability proof for the other one.
{The decoding rule used in this paper is a variant of a minimum distance decoding rule. For discrete memoryless channels, decoders minimizing a distortion measure have been studied as mis-match decoding and are suboptimal in general though optimal if the distortion measure is matched, that is, equal to the negative log of the channel transition probability; see for example the paper of Csiszar and Narayan \cite{CsiszarNarayan}.}
The results in this paper form a part of the first author's Ph.D. dissertation \cite{MukulPhdThesis}.
Recall the important point that the duality between source-coding and channel-coding, as discussed in this paper, is operational in the sense that it uses only the operational meanings of channel capacity as the maximum rate of reliable communication and the rate-distortion function as the minimum rate needed to code a source within a certain distortion level, and this sense is different from the sense in which duality is discussed in the above-mentioned papers. Major functional simplifications are not used. Random codes are constructed for both problems and a connection is seen between the two problems, which leads to a randomized covering-packing duality.
\section{Notation and definitions} \label{NotAndDefIII}
Superscript $n$ will denote a quantity related to block-length $n$. For example, $x^n$ will be the channel input when the block-length is $n$. As the block-length varies, $x = <x^n>_1^\infty$ will denote the corresponding sequence of quantities.
The source input space is $\mathcal X$ and the source reproduction space is $\mathcal Y$. $\mathcal X$ and $\mathcal Y$ are finite sets. $X$ is a random variable on $\mathcal X$. Let $p_X(x)$ be rational $\forall x$. Let $n_0$ denote the least positive integer for which $n_0p_X(x)$ is an integer $\forall x \in \mathcal X$. Let $\mathcal U^n$ denote the set of sequences in $\mathcal X^n$ with (\emph{exact}) type $p_X$. $\mathcal U^n$ is non-empty if and only if $n_0$ divides $n$. Let $n' \triangleq n_0n$. Let $U^{n'}$ denote a random variable which is uniformly distributed on $\mathcal U^{n'}$. Then, $<U^{n'}>_1^\infty$ is the uniform $X$ source and is denoted by $U$. The uniform $X$ source can be defined only for those $X$ for which $p_X(x)$ is rational $\forall x \in \mathcal X$.
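As a computational aside, the following is a minimal sketch (not part of the formal development; the pmf is a hypothetical example) of how $n_0$ and a uniform draw from the exact type class $\mathcal U^{n'}$ may be obtained: $n_0$ is the least common multiple of the denominators of the reduced probabilities, and a uniform draw from $\mathcal U^{n'}$ is a uniformly random permutation of a base word containing exactly $n'p_X(x)$ copies of each $x$.
\begin{verbatim}
# Minimal sketch: n0 for a rational pmf and a uniform draw from the
# exact type class U^{n'}. The pmf below is a hypothetical example.
from fractions import Fraction
from math import lcm
import random

p_X = {'a': Fraction(1, 2), 'b': Fraction(1, 3), 'c': Fraction(1, 6)}

# n0: least n with n * p_X(x) integer for all x, i.e. the lcm of the
# denominators of the reduced probabilities (here n0 = 6).
n0 = lcm(*(p.denominator for p in p_X.values()))

def sample_exact_type(n_prime, rng=random):
    """Uniform draw from U^{n'}: a uniformly random permutation of a
    base word with exactly n' * p_X(x) copies of each symbol x."""
    assert n_prime % n0 == 0, "exact type class empty unless n0 | n'"
    base = [x for x, p in p_X.items() for _ in range(int(p * n_prime))]
    rng.shuffle(base)
    return tuple(base)

u = sample_exact_type(5 * n0)  # a length-30 sequence of exact type p_X
\end{verbatim}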
{
\emph{Every mathematical entity which had a superscript $n$ in Section \ref{Introduction} will have a superscript $n'$ henceforth. This is because the uniform $X$ source is defined only for block-lengths $n'$. The reader is urged not to be confused by this change of superscript between Section \ref{Introduction} and the rest of this paper. Further, the reader is urged to read Section \ref{Introduction} by replacing $n$ with $n'$ in mathematical entities.}
}
Let $q$ denote a type on the set $\mathcal Y$ which is achievable when the block-length is $n'$. $\mathcal V_q^{n'}$ is the set of all sequences with type $q$. The uniform distribution on $\mathcal V_q^{n'}$ is $V_q^{n'}$.
Since the uniform $X$ source is defined only for block-lengths $n'$, distortion function, channels, encoders and decoders will be defined only for block-lengths $n'$.
$d = <d^{n'}>_1^\infty$ is the distortion function, where $d^{n'}: \mathcal X^{n'} \times \mathcal Y^{n'} \rightarrow [0, \infty )$. Let $\pi^{n'}$ be a permutation (rearrangement) of $(1, 2, \ldots, n')$. That is, $\pi^{n'}(i) \in \{1, 2, \ldots, n' \}$ for $1 \leq i \leq n'$, and the values $\pi^{n'}(i)$, $1 \leq i \leq n'$, are all distinct. For $x^{n'} \in \mathcal X^{n'}$, denote
\begin{align}
\pi^{n'}x^{n'} \triangleq (x^{n'}(\pi^{n'}(1)), x^{n'}(\pi^{n'}(2)), \ldots, x^{n'}(\pi^{n'}(n')))
\end{align}
For $y^{n'} \in \mathcal Y^{n'}$, $\pi^{n'}y^{n'}$ is defined analogously. $<d^{n'}>_1^\infty$ is said to be permutation invariant if $\forall n'$,
\begin{align}
d^{n'}(\pi^{n'}x^{n'}, \pi^{n'}y^{n'}) = d^{n'}(x^{n'}, y^{n'}), \forall x^{n'}\in \mathcal X^{n'}, y^{n'} \in \mathcal Y^{n'}
\end{align}
An additive distortion function is defined as follows. Let $d: \mathcal X \times \mathcal Y \rightarrow [0, \infty)$ be a function. Define
\begin{align}
d^{n'}(x^{n'}, y^{n'}) = \sum_{i=1}^{n'} d(x^{n'}(i), y^{n'}(i))
\end{align}
Then, $<d^{n'}>_1^\infty$ is an additive distortion function.
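As an illustrative aside, additivity immediately yields permutation invariance, which can be checked directly (a minimal sketch assuming a hypothetical Hamming-style per-letter distortion):
\begin{verbatim}
# Minimal sketch: an additive block distortion built from a per-letter
# distortion, and a direct check of permutation invariance.
import random

d = lambda x, y: 0.0 if x == y else 1.0  # per-letter (assumed) distortion

def d_n(xs, ys):
    """Additive block distortion d^{n'}(x^{n'}, y^{n'})."""
    return sum(d(x, y) for x, y in zip(xs, ys))

xs, ys = tuple('abcabc'), tuple('abbbca')
perm = list(range(len(xs)))
random.shuffle(perm)
pxs = tuple(xs[i] for i in perm)
pys = tuple(ys[i] for i in perm)
assert d_n(pxs, pys) == d_n(xs, ys)  # additive => permutation invariant
\end{verbatim}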
Additive distortion functions are special cases of permutation invariant distortion functions. Except at the end of the paper, where conditions are derived for certain technical conditions to hold and where additive distortion functions are required, most of this paper will use permutation invariant distortion functions.
A { general channel} $b = <b^{n'}>_1^\infty$ is defined as follows:
The input space of the channel is $\mathcal X$ and the output space is $\mathcal Y$.
\begin{align}
b^{n'}:\ &\mathcal X^{n'} \rightarrow \mathcal P(\mathcal Y^{n'}) \\
& x^{n'} \mapsto b^{n'}(\cdot|x^{n'}) \nonumber
\end{align}
$b^{n'}(y^{n'}|x^{n'})$ should be thought of as the probability that the output of the channel is $y^{n'}$ given that the input is $x^{n'}$.
{ Note that the channel model is general in the sense of Verdu and Han \cite{VerduHan}. }
Let
\begin{align}
\mathcal M^{n'}_R \triangleq \{1, 2, \ldots, 2^{\lfloor n'R \rfloor} \}
\end{align}
$\mathcal M^{n'}_R$ is the message set. When the block-length is $n'$, a rate $R$ deterministic source encoder is a map $e_s^{n'}: \mathcal X^{n'} \rightarrow \mathcal M_R^{n'}$ and a rate $R$ deterministic source decoder is a map $f_s^{n'}: \mathcal M_R^{n'} \rightarrow \mathcal Y^{n'}$. $(e_s^{n'}, f_s^{n'})$ is the block-length $n'$, rate $R$ deterministic source-code. The source-code is allowed to be random in the sense that the encoder-decoder pair is a joint probability distribution on the space of deterministic encoders and decoders. $<e_s^{n'}, f_s^{n'}>_1^\infty$ is the rate $R$ source-code. The classic argument used in \cite{Shannon} to prove the achievability part of the rate-distortion theorem uses a random source code.
When the block-length is $n'$, a rate $R$ deterministic channel encoder is a map $e_c^{n'}:\mathcal M_R^{n'} \rightarrow \mathcal X^{n'}$ and a rate $R$ deterministic channel decoder is a map $f_c^{n'}: \mathcal Y^{n'} \rightarrow \hat {\mathcal M}_R^{n'}$ where $\hat {\mathcal M}_R^{n'} \triangleq \mathcal M_R^{n'} \cup \{e\}$ is the message reproduction set where `e' denotes error. The encoder and decoder are allowed to be random in the sense discussed previously. $<e_c^{n'}, f_c^{n'}>_1^\infty$ is the rate $R$ channel code. The classic argument used in \cite{ShannonReliable} to derive the achievability of the mutual information expression for channel capacity uses a random channel code.
The source-code $<e_s^{n'}, f_s^{n'}>_1^\infty$ is said to code the source $U$ to within a distortion $D$ if with input $U^{n'}$ to $e_s^{n'} \circ f_s^{n'}$, the output is $Y^{n'}$ such that
\begin{align} \label{SourceCodeD}
\lim_{n' \to \infty} \Pr \left ( \frac{1}{n'} d^{n'}(U^{n'}, Y^{n'}) > D \right ) = 0
\end{align}
(\ref{SourceCodeD}) is the probability of excess distortion criterion.
The infimum of rates needed to code the uniform $X$ source to within the distortion $D$ is the rate-distortion function $R^P_U(D)$. If the $\lim$ in (\ref{SourceCodeD}) is replaced with $\liminf$, the criterion is called the $\inf$ probability of excess distortion criterion, and the corresponding rate-distortion function is denoted by $R^P_U(D, \inf)$.
Denote
\begin{align}
g = <g^{n'}>_1^\infty \triangleq <e_c^{n'} \circ b^{n'} \circ f_c^{n'}>_1^\infty
\end{align}
Then, $g$ is a general channel with input space $\mathcal M_R^{n'}$ and output space $\hat {\mathcal M}_R^{n'}$. Rate $R$ is said to be reliably achievable over $b$ if there exists a rate $R$ channel code $<e_c^{n'}, f_c^{n'}>_1^\infty$ such that
\begin{align}
\lim_{n' \to \infty} \sup_{m^{n'} \in \mathcal M_R^{n'}} g^{n'}(\{ m^{n'}\}^c|m^{n'}) = 0
\end{align}
The supremum of all achievable rates is the capacity of $b$.
The channel $b$ is said to communicate the source $U$ \emph{directly} within distortion $D$ if with input $U^{n'}$ to $b^{n'}$, the output is $Y^{n'}$ such that
\begin{align}
\lim_{n' \to \infty} \Pr \left ( \frac{1}{n'} d^{n'}(U^{n'}, Y^{n'}) > D \right ) = 0
\end{align}
See Figure \ref{channelDistortionD} in Section \ref{Introduction} with $n$ replaced by $n'$.
{
In this paper, only the end-to-end description of a channel $<b^{n'}>_1^\infty$ which communicates the uniform $X$ source directly within distortion $D$ is used, and not the particular $b^{n'}$; for this reason, the { general channel} should be thought of as a \emph{black-box} which communicates the uniform $X$ source within distortion $D$.}
In order to draw the randomized covering-packing duality between source and channel coding, the source-coding problem which will be considered is that of coding the source $U$ within distortion $D$, and the channel-coding problem which will be considered is that of the rates of reliable communication over a channel $b$ which communicates the source $U$ directly within distortion $D$. A relation will be drawn between the rate-distortion function for the uniform $X$ source and the capacity of $b$, and in the process, the randomized covering-packing duality will emerge.
\section{Randomized covering-packing duality} \label{RCPD}
\begin{thm} \label{KKUniUniKK}
Let $b$ directly communicate the source $U$ within distortion $D$ under a permutation invariant distortion function $d$. Assume that $R^P_U(D) = R^P_U(D, \inf)$. Then, reliable communication can be accomplished over $b$ at rates $<R^P_U(D)$. In other words, the capacity $C$ of $b$ satisfies $C \geq R^P_U(D)$.
\end{thm}
Note that the technical condition $R^P_U(D) = R^P_U(D, \inf)$ can be proved for an additive distortion function. See the discussion following the proof of the theorem.
\begin{proof}
This will be done by use of parallel random-coding arguments for two problems:
\begin{itemize}
\item
\emph{Channel-coding problem:}
Rates of reliable communication over $b$.
\item
\emph{Source-coding problem:}
Rates of coding for the uniform $X$ source with a distortion $D$ under the $\inf$ probability of excess distortion criterion.
\end{itemize}
\emph{Codebook generation:}
\begin{itemize}
\item
\emph{Codebook generation for the channel-coding problem:}
Let reliable communication be desired at rate $R$. Generate $2^{\lfloor n'R \rfloor}$ sequences independently and uniformly from $\mathcal U^{n'}$. This is the codebook $\mathcal K^{n'}$.
\item
\emph{Codebook generation for the source-coding problem:}
Let source-coding be desired at rate $R$. Generate $2^{\lfloor n'R \rfloor}$ codewords independently and uniformly from $\mathcal V_q^{n'}$ for some type $q$ on $\mathcal Y$ which is achievable for block-length $n'$. This is the codebook $\mathcal L^{n'}$.
\end{itemize}
\emph{Joint typicality:}
Joint typicality, for both the channel-coding and source-coding problems, is defined as follows:
$(u^{n'}, y^{n'}) \in \mathcal U^{n'} \times \mathcal Y^{n'}$ is said to be jointly typical if
\begin{align}
\frac{1}{n'} d^{n'} (u^{n'}, y^{n'}) \leq D
\end{align}
\emph{Decoding and encoding:}
\begin{itemize}
\item
\emph{Decoding for the channel-coding problem:}
Let $y^{n'}$ be received. If there exists a unique $u^{n'} \in \mathcal K^{n'}$ for which $(u^{n'}, y^{n'})$ is jointly typical, declare that $u^{n'}$ was transmitted; else declare an error.
\item
\emph{Encoding for the source-coding problem:}
Let $u^{n'} \in \mathcal U^{n'}$ need to be source-coded. If there exists some $y^{n'} \in \mathcal L^{n'}$ for which $(u^{n'}, y^{n'})$ is jointly typical, encode $u^{n'}$ to one such $y^{n'}$; else declare an error.
\end{itemize}
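A minimal sketch of the two rules above follows (illustrative only; d_n is the block distortion from the earlier sketch, and K, L stand for the codebooks $\mathcal K^{n'}$ and $\mathcal L^{n'}$ represented as lists of tuples):
\begin{verbatim}
# Minimal sketch of the decoding/encoding rules, with joint typicality
# meaning d^{n'}(u, y) / n' <= D. Returns None to signal an error.
def channel_decode(y, K, D):
    """Declare the unique jointly typical codeword, else error."""
    hits = [u for u in K if d_n(u, y) <= D * len(y)]
    return hits[0] if len(hits) == 1 else None

def source_encode(u, L, D):
    """Encode to some jointly typical codeword, else error."""
    for y in L:
        if d_n(u, y) <= D * len(u):
            return y
    return None
\end{verbatim}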
\emph{Some notation:}
\begin{itemize}
\item
\emph{Notation for the channel-coding problem:}
Let message $m^{n'} \in \mathcal M_R^{n'}$ be transmitted. Codeword corresponding to $m^{n'}$ is $u_c^{n'}$. Non-transmitted codewords are ${u'}_1^{n'}, {u'}_2^{n'}, \ldots, {u'}_{2^{\lfloor n'R \rfloor} - 1}^{n'}$. $u_c^{n'}$ is a realization of $U_c^{n'}$. $U_c^{n'}$ is uniform on $\mathcal U^{n'}$. ${u'}_i^{n'}$ is a realization of ${U'}_i^{n'}$. ${U'}_i^{n'}$ is uniform on $\mathcal U^{n'}$, $1 \leq i \leq 2^{\lfloor n'R \rfloor} - 1$. $U_c^{n'}, {U'_i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor} - 1$ are independent of each other. The channel output is $y^{n'}$. $y^{n'}$ is a realization of $Y^{n'}$. $y^{n'}$ may depend on $u_c^{n'}$ but does not depend on ${u'_i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor} - 1$. As random variables, $Y^{n'}$ and $U_c^{n'}$ might be dependent but $Y^{n'}, {U'_i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor} - 1$ are independent. If the type $q$ of the sequence $y^{n'}$ needs to be explicitly denoted, the sequence is denoted by $y_q^{n'}$. $\mathcal G^{n'}$ is the set of all achievable types $q$ on $\mathcal Y$ for block-length $n'$.
\item
\emph{Notation for the source-coding problem:}
$u_s^{n'}$ is the sequence which needs to be source-coded. $u_s^{n'}$ is a realization of $U_s^{n'}$, which is uniformly distributed on $\mathcal U^{n'}$. The codewords are $y_{q,i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor}$, where $q$ denotes the type. $y_{q, i}^{n'}$ is a realization of $V_{q, i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor}$, where $V_{q, i}^{n'}$ is uniformly distributed on the subset of $\mathcal Y^{n'}$ consisting of all sequences with type $q$. $u_s^{n'}, y_{q, i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor}$ are independently generated; as random variables, $U_s^{n'}, V_{q, i}^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor}$ are independent. $\mathcal G^{n'}$ is the set of all achievable types $q$ on $\mathcal Y$ for block-length $n'$.
\end{itemize}
\emph{Error analysis:}
For the channel-coding problem, the probability of correct decoding is analyzed and for the source-coding problem, the probability of error is analyzed.
\begin{itemize}
\item
\emph{Error analysis for the channel-coding problem:}
From the encoding-decoding rule, it follows that the event of correct decoding given that a particular message is transmitted is
\begin{align}\label{CorrectDecodingEvent}
& \left \{ \frac{1}{n'} d^{n'}(U_c^{n'}, Y^{n'}) \leq D \right \} \cap
\cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \}
\end{align}
\item
\emph{Error analysis for the source-coding problem:}
From the encoding-decoding rule, it follows that the error event given that a particular message needs to be source-coded is
\begin{align}\label{ErrorEvent}
\cap_{i=1}^{2^{\lfloor n'R \rfloor}} \left \{ \frac{1}{n'}d^{n'}(u^{n'}, V_{q, i}^{n'}) > D \right \}
\end{align}
\end{itemize}
Note that there is a choice of $q$ in the codebook generation.
\emph{Calculation:}
\begin{itemize}
\item
\emph{Calculation of the probability of correct decoding for the channel-coding problem:}
A bound for the probability of the event (\ref{CorrectDecodingEvent}) is calculated as follows; here, $\omega_{n'} \triangleq \Pr \left ( \frac{1}{n'} d^{n'}(U_c^{n'}, Y^{n'}) > D \right )$ denotes the probability of excess distortion over the channel, which tends to zero as $n' \to \infty$ since $b$ communicates the source $U$ directly within distortion $D$:
\begin{align}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'}(U_c^{n'}, Y^{n'}) \leq D \right \} \cap
\cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \} \right ) \\
= & \Pr \left ( \left \{ \frac{1}{n'} d^{n'}(U_c^{n'}, Y^{n'}) \leq D \right \} \right ) +
\Pr \left ( \cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \}\right ) - \nonumber \\
& \hspace{2cm} \Pr \left ( \left \{ \frac{1}{n'} d^{n'}(U_c^{n'}, Y^{n'}) \leq D \right \} \cup
\cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \} \right ) \nonumber \\
\geq & (1 - \omega_{n'}) +
\Pr \left ( \cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \}\right ) - 1 \nonumber \\
= & - \omega_{n'} + \Pr \left ( \cap_{i = 1}^{2^{\lfloor n'R \rfloor} - 1} \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \}\right ) \nonumber \\
= & - \omega_{n'} + \prod_{i=1}^{2^{\lfloor n'R \rfloor } - 1} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U'}_i^{n'}, Y^{n'}) > D \right \} \right ) \nonumber \\
& \hspace{1cm} \mbox{ (since ${U'}_i^{n'}, 1 \leq i \leq 2^{\lfloor n'R \rfloor } - 1$, $Y^{n'}$ are independent random variables)} \nonumber \\
= & - \omega_{n'} + \left [ \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, Y^{n'}) > D \right \} \right ) \right ] ^ { 2^{\lfloor n'R \rfloor } - 1} \nonumber \\
& \hspace{1cm} \mbox{(where $U^{n'}$ is uniform on $\mathcal U^{n'}$ and is independent of $Y^{n'}$)} \nonumber
\end{align}
\begin{align}
\nonumber = & - \omega_{n'} + \left [ \sum_{y^{n'} \in \mathcal Y^{n'}} p_{Y^{n'}}(y^{n'}) \Pr \left ( \frac{1}{n'} d^{n'} ({U}^{n'}, Y^{n'}) > D \ \Bigg | \ Y^{n'} = y^{n'} \right ) \right ] ^ { 2^{\lfloor n'R \rfloor } - 1} \nonumber \\
= & - \omega_{n'} + \left [ \sum_{y^{n'} \in \mathcal Y^{n'}} p_{Y^{n'}}(y^{n'}) \Pr \left ( \frac{1}{n'} d^{n'} ({U}^{n'}, y^{n'}) > D \ \Bigg | \ Y^{n'} = y^{n'} \right ) \right ] ^ { 2^{\lfloor n'R \rfloor } - 1} \nonumber \\
= & - \omega_{n'} + \left [ \sum_{y^{n'} \in \mathcal Y^{n'}} p_{Y^{n'}}(y^{n'}) \Pr \left ( \frac{1}{n'} d^{n'} ({U}^{n'}, y^{n'}) > D \right ) \right ] ^ { 2^{\lfloor n'R \rfloor } - 1} \nonumber \\
& \hspace{1cm} \mbox{ (since $U^{n'}$ and $Y^{n'}$ are independent) } \nonumber \\
\geq & -\omega_{n'} + \left [ \inf_{y^{n'} \in \mathcal Y^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y^{n'}) > D \right \} \right ) \right ] ^{ 2^{\lfloor n'R \rfloor } - 1} \nonumber \\
= & -\omega_{n'} + \left [ \inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) \right ] ^{ 2^{\lfloor n'R \rfloor } - 1} \nonumber
\end{align}
The last equality above follows because
\begin{align}
\Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y^{n'}) > D \right \} \right )
\end{align}
depends only on the type of $y^{n'}$; see the symmetry argument later.
Rate $R$ is achievable if
\begin{align}
& -\omega_{n'} +
\left [ \inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'})
> D \right \} \right ) \right ] ^{ 2^{\lfloor n'R \rfloor } - 1}
\to 1 \ \mbox{as} \ n' \to \infty
\end{align}
Since $\omega_{n'} \to 0$ as $n' \to \infty$, rate $R$ is achievable if
\begin{align}\label{JCorrectCalculationJ}
& \left [ \inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) \right ] ^{ 2^{\lfloor n'R \rfloor } - 1}
\to 1 \ \mbox{as} \ n' \to \infty
\end{align}
\item
\emph{Calculation of the probability of error for the source-coding problem:}
A bound for the probability of the event (\ref{ErrorEvent}) is calculated using standard arguments:
\begin{align}
& \Pr \left ( \cap_{i=1}^{2^{\lfloor n'R \rfloor}} \left \{ \frac{1}{n'}d^{n'}(u^{n'}, V_{q, i}^{n'}) > D \right \} \right ) \\
= & \prod_{i=1}^{2^{\lfloor n'R \rfloor}} \Pr \left ( \left \{ \frac{1}{n'}d^{n'}(u^{n'}, V_{q, i}^{n'}) > D \right \} \right ) \nonumber \\
= & \left [ \Pr \left ( \left \{ \frac{1}{n'}d^{n'}(u^{n'}, V_{q}^{n'}) > D \right \} \right )\right ] ^{2^{\lfloor n'R \rfloor }} \nonumber
\end{align}
where $V_q^{n'}$ is uniform on $\mathcal V_q^{n'}$.
There is a choice of $q \in \mathcal G^{n'}$. Thus, a bound for the probability of the error event is
\begin{align}
\left [ \inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'}d^{n'}(u^{n'}, V_q^{n'}) > D \right \} \right )\right ] ^{2^{\lfloor n'R \rfloor }}
\end{align}
Since the $\inf$ probability of excess distortion criterion is used, it follows that rate $R$ is achievable if
\begin{align}\label{JSourceErrorJ}
\left [ \inf_{q \in \mathcal G^{n'_i}} \Pr \left ( \left \{ \frac{1}{n'_i}d^{n'_i}(u^{n'_i}, V_q^{n'_i}) > D \right \} \right )\right ] ^{2^{\lfloor n'_iR \rfloor }} \to 0 \ \mbox{for some sequence} \ n'_i = n_0 n_i, \ n_i \to \infty
\end{align}
\end{itemize}
\emph{Connection/Duality between channel-coding and source-coding:}
The calculation required in the channel-coding problem is
\begin{align} \label{CCCalc}
\inf_{q \in \mathcal G^{n'}}\Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right )
\end{align}
and the calculation required in the source-coding problem is
\begin{align} \label{SCCalc}
\inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
It will be proved that (\ref{CCCalc}) and (\ref{SCCalc}) are equal. It will be proved more generally that
\begin{align}\label{JJMainDualityJJ}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) =
\Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
This is a symmetry argument and requires the assumption of a permutation invariant distortion function. The idea is that the left hand side of (\ref{JJMainDualityJJ}) depends only on the type of $y_q^{n'}$. From this it follows that the left hand side of (\ref{JJMainDualityJJ}) is equal to
\begin{align} \label{SymmetryExpression}
\Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
where $V_q^{n'}$ is independent of $U^{n'}$. Similarly, the right hand side of (\ref{JJMainDualityJJ}) depends only on the type of $u^{n'}$ and from this it follows that the right hand side of (\ref{JJMainDualityJJ}) is also equal to (\ref{SymmetryExpression}). (\ref{JJMainDualityJJ}) follows. Details are as follows:
The first step is to prove that
\begin{align}\label{SymmetryStep1}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) = \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, {y'_q}^{n'}) > D \right \} \right )
\end{align}
for sequences $y_q^{n'}$ and ${y'_q}^{n'}$ with type $q$. Since $U^{n'}$ is uniformly distributed on $\mathcal U^{n'}$, it is sufficient to prove that the sets
\begin{align}
& \left \{ u^{n'} : \frac{1}{n'} d^{n'} ({u}^{n'}, y_q^{n'}) > D \right \} \ \mbox{and} \ \left \{ u^{n'} : \frac{1}{n'} d^{n'} ({u}^{n'}, {y'_q}^{n'}) > D \right \}
\end{align}
have the same cardinality. ${y'_q}^{n'} = \pi^{n'} y_q^{n'}$ for some permutation $\pi^{n'}$, since ${y'_q}^{n'}$ and $y_q^{n'}$ have the same type. Denote
\begin{align}
\mathcal B_{y_q^{n'}} \triangleq \left \{ u^{n'} : \frac{1}{n'} d^{n'} ({u}^{n'}, y_q^{n'}) > D \right \}
\end{align}
The set $\mathcal B_{{y'_q}^{n'}}$ is defined analogously.
Let $u^{n'} \in \mathcal B_{y_q^{n'}}$. Since the distortion function is permutation invariant, $d^{n'}(\pi^{n'}u^{n'},\pi^{n'} y_q^{n'})$ $=$ $d^{n'}(u^{n'}, y_q^{n'})$. Thus, $\pi^{n'}u^{n'} \in \mathcal B_{{y'_q}^{n'}}$. If $u^{n'} \neq u'^{n'}$, $\pi^{n'}u^{n'} \neq \pi^{n'}u'^{n'}$. It follows that $|\mathcal B_{{y'_q}^{n'}}| \geq |\mathcal B_{y_q^{n'}}|$. Interchanging $y_q^{n'}$ and ${y'_q}^{n'}$ in the above argument, $|\mathcal B_{y_q^{n'}}| \geq |\mathcal B_{{y'_q}^{n'}}|$. It follows that $|\mathcal B_{{y_q}^{n'}}| = |\mathcal B_{{y'_q}^{n'}}|$. (\ref{SymmetryStep1}) follows.
Let $V_q^{n'}$ be independent of $U^{n'}$. From (\ref{SymmetryStep1}) it follows that
\begin{align}\label{SymmetryStep1Conclusion}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) = \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
By an argument identical with the one used to prove (\ref{SymmetryStep1}), it follows that
\begin{align} \label{SymmetryStep2}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u}^{n'}, V_q^{n'}) > D \right \} \right ) = \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u'}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
for $u^{n'}, u'^{n'} \in \mathcal U^{n'}$. From (\ref{SymmetryStep2}) it follows that
\begin{align} \label{SymmetryStep2Conclusion}
& \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u}^{n'}, V_q^{n'}) > D \right \} \right ) = \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
From (\ref{SymmetryStep1Conclusion}) and (\ref{SymmetryStep2Conclusion}), (\ref{JJMainDualityJJ}) follows.
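As a numerical aside, the identity (\ref{JJMainDualityJJ}) can be checked by Monte Carlo simulation; the following is a minimal sketch reusing sample_exact_type and d_n from the earlier sketches ($D$, $q$ and the alphabets are hypothetical choices):
\begin{verbatim}
# Minimal Monte Carlo check of the symmetry identity: U uniform on the
# exact type class against a fixed y of type q, versus a fixed u
# against V_q uniform on the type class of q.
import random

def sample_type_q(n_prime, counts, rng=random):
    """Uniform draw from the type class of q, given symbol counts."""
    base = [y for y, c in counts.items() for _ in range(c)]
    rng.shuffle(base)
    return tuple(base)

n_prime, D, trials = 30, 0.6, 20000
q_counts = {'a': 10, 'b': 10, 'c': 10}      # a type achievable at n'=30
y_fixed = sample_type_q(n_prime, q_counts)  # fixed y of type q
u_fixed = sample_exact_type(n_prime)        # fixed u of exact type p_X

lhs = sum(d_n(sample_exact_type(n_prime), y_fixed) > D * n_prime
          for _ in range(trials)) / trials
rhs = sum(d_n(u_fixed, sample_type_q(n_prime, q_counts)) > D * n_prime
          for _ in range(trials)) / trials
# lhs and rhs agree up to Monte Carlo error, as the symmetry argument
# above shows they must.
\end{verbatim}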
\emph{Proof that a channel which is capable of communicating the uniform $X$ source with a certain distortion level is also capable of communicating bits reliably at any rate less than the infimum of the rates needed to code the uniform $X$ source with the same distortion level under the $\inf$ probability of excess distortion criterion:}
Denote
\begin{align}
& A_{n'} \triangleq \inf_{q \in \mathcal G^{n'}}\Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({U}^{n'}, y_q^{n'}) > D \right \} \right ) =
\inf_{q \in \mathcal G^{n'}} \Pr \left ( \left \{ \frac{1}{n'} d^{n'} ({u}^{n'}, V_q^{n'}) > D \right \} \right )
\end{align}
From (\ref{JCorrectCalculationJ}), it follows that rate $R$ is achievable for the channel-coding problem if
\begin{align} \label{ChannelCalculationCriterion}
(A_{n'}) ^{ 2^{\lfloor n' R \rfloor } - 1} \to 1 \ \mbox{as} \ n' \to \infty
\end{align}
From (\ref{JSourceErrorJ}), it follows that rate $R$ is achievable for the source-coding problem if
\begin{align}
& (A_{n'_i})^{2^{\lfloor n'_i R \rfloor }} \to 0 \ \mbox{as} \ n'_i \to \infty \ \mbox{for some} \ n'_i = n_0 n_i\ \mbox{for some}\ n_i \to \infty
\end{align}
Let
\begin{align} \label{JDefAlphaJ}
\alpha \triangleq \sup \{ R \ | \ (\ref{ChannelCalculationCriterion}) \ \mbox{holds} \}
\end{align}
Then, for every $R' > \alpha$, criterion (\ref{ChannelCalculationCriterion}) fails, and hence there exists a sequence $n'_i \to \infty$ (which may depend on $R'$) such that
\begin{align}\label{FinalStretch1}
& \lim_{n'_i \to \infty} (A_{n'_i})^{2^{\lfloor n'_i R' \rfloor} - 1} < 1
\end{align}
Then,
\begin{align}\label{FinalStretch2}
\lim_{n'_i \to \infty} (A_{n'_i})^{2^{\lfloor n'_iR'' \rfloor} - 1} = 0 \ \mbox{for}\ R'' > R'
\end{align}
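To see why (\ref{FinalStretch2}) follows from (\ref{FinalStretch1}), note (a brief justification, added for completeness) that along the sequence $n'_i$ of (\ref{FinalStretch1}), $(A_{n'_i})^{2^{\lfloor n'_i R' \rfloor} - 1} \leq c$ for some $c < 1$ and all large $i$; since $0 \leq A_{n'_i} \leq 1$,
\begin{align*}
(A_{n'_i})^{2^{\lfloor n'_i R'' \rfloor} - 1} =
\left [ (A_{n'_i})^{2^{\lfloor n'_i R' \rfloor} - 1} \right ]^
{\frac{2^{\lfloor n'_i R'' \rfloor} - 1}{2^{\lfloor n'_i R' \rfloor} - 1}}
\leq c^{\frac{2^{\lfloor n'_i R'' \rfloor} - 1}{2^{\lfloor n'_i R' \rfloor} - 1}}
\to 0,
\end{align*}
because, for $R'' > R'$, the exponent grows as $2^{n'_i(R'' - R') + o(n'_i)} \to \infty$.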
(\ref{FinalStretch1}) and (\ref{FinalStretch2}) hold for all $R'' > R' > \alpha$. It follows that rates larger than $\alpha$ are achievable for the source-coding problem.
Thus, a channel which is capable of communicating the uniform $X$ source with a certain distortion level is also capable of communicating bits reliably at any rate less than the infimum of the rates needed to code the uniform $X$ source with the same distortion level under the $\inf$ probability of excess distortion criterion.
\emph{Wrapping up the proof of the theorem:}
It follows that if source $U$ is directly communicated over $b$ within distortion $D$, then reliable communication can be accomplished over $b$ at rates $<R^P_U(D, \inf)$. By use of the assumption $R^P_U(D) = R^P_U(D, \inf)$, it follows that reliable communication can be accomplished over $b$ at rates $<R^P_U(D)$. In other words, the capacity of $b$, $C \geq R^P_U(D)$.
\end{proof}
\section{Discussion and recapitulation}
Randomized code constructions were made for a source-coding problem and a channel-coding problem, and a relation was drawn between the source-coding rates and the channel-coding rates for the two problems. The source-coding problem is a covering problem and the channel-coding problem is a packing problem. For this reason, the connection is a randomized covering-packing connection. This duality between source-coding and channel-coding is captured in (\ref{JJMainDualityJJ}).
Note, { by Berger's lemma or the type covering lemma \cite{CsiszarKorner},} that at least for additive distortion functions, there exist source codes of rates approaching $R^P_U(D)$ such that ``balls'' around codewords cover all sequences of type $p_X$, not only a large fraction of them. Thus, in (\ref{SourceCodeD}), one does not need to take a limit; in other words, in the source-coding problem, the distortion criterion can be met for every block-length. Thus, a deterministic version of the source-coding problem is possible; however, it is unclear how to do the same for the channel-coding problem. For this reason, the randomized versions of the problems are needed.
The technical condition $R^P_U(D) = R^P_U(D, \inf)$ is made on the rate-distortion function. This technical condition holds for additive distortion functions, and an \emph{operational} proof which uses code constructions and various properties and relations between code constructions is provided in Chapter 5 of \cite{MukulPhdThesis}.
A proof of source-channel separation for communication with a fidelity criterion follows: If there exists an encoder-decoder $<e^{n'}, f^{n'}>_1^\infty$ such that, by use of this encoder-decoder, communication of the source $U$ within distortion $D$ happens over a channel $k$, then $b = <e^{n'} \circ k \circ f^{n'}>_1^\infty$ is a channel which communicates the source $U$ directly within distortion $D$. Thus, rates $<R^P_U(D)$ are achievable over $b$ by use of some encoder-decoder $<E^{n'}, F^{n'}>_1^\infty$. For this reason, reliable communication is possible over $k$ at rates $<R^P_U(D)$ by use of the encoder-decoder $<E^{n'} \circ e^{n'}, f^{n'} \circ F^{n'}>_1^\infty$. By the standard argument of source-coding followed by channel-coding, if the capacity of $k$ is $>R^P_U(D)$, the uniform $X$ source can be communicated over $k$ by source coding followed by channel coding. The proof of separation follows. The proof only uses the operational meanings of capacity (maximum rate of reliable communication) and of the rate-distortion function (minimum rate needed to compress a source within a certain distortion level), and randomized code constructions for these problems, instead of using finite-dimensional functional simplifications or finite-dimensional information theoretic definitions, for example, capacity as maximum mutual information and the rate-distortion function as minimum mutual information, unlike in the traditional proof of Shannon \cite{Shannon}. Functional simplifications are carried out only to the extent of (\ref{JJMainDualityJJ}).
Note that whether a view or a proof is operational (in the sense used in this paper) cannot be defined mathematically precisely. However, the same can be sensed intuitively from the context in which it is used.
By use of a perturbation argument, the results can be generalized to the i.i.d. $X$ source (general $p_X$, not necessarily those for which $p_X(x)$ is rational) for additive distortion functions as discussed in Chapter 5 of \cite{MukulPhdThesis}.
{
Finally, note that the argument to prove Theorem \ref{KKUniUniKK} uses random codes. However, if the channel is a single channel, the existence of a random code implies the existence of a deterministic code. Note further that, in the decoding rule in Theorem \ref{KKUniUniKK}, only the end-to-end description that the channel communicates the uniform $X$ source within distortion $D$ is used, and not the particular $<b^{n'}>_1^\infty$. For this reason, even if the channel belongs to a set, that is, the channel is compound in the sense of \cite{CsiszarKorner}, Theorem \ref{KKUniUniKK} still holds. However, random codes would be needed, since the argument to go from a random code to a deterministic code does not hold for a compound channel. For the same reason, a universal source-channel separation theorem for communication with a fidelity criterion, where universality is over the channel (the channel is compound), holds if random codes are permitted. Precise details of a general, compound channel, of what it means for a general, compound channel to communicate the uniform $X$ source within distortion $D$, and of the capacity of a general, compound channel, are omitted.}
\bibliographystyle{IEEEtran}
A magnetic obstacle is a region in the flow of an electrically
conducting fluid, e.g. liquid metal, where an external
inhomogeneous magnetic field, ${\bf B}$, is applied as shown in
Fig.~\ref{Fig:IntroductoryMO}$a$. The region of the magnetic
obstacle manifests itself through the braking Lorentz force, ${\bf
F_L=j\times B}$, originating from the interaction of ${\bf B}$
with electrical currents ${\bf j}$. The electrical currents are
induced because of the electromotive force arising when the
conducting liquid moves through the region of magnetic field. The
net effect is that the core of the magnetic obstacle is
impenetrable to the flow, much like a foreign solid insertion.
Characteristics of the flow influenced by a magnetic obstacle are
of considerable fundamental and practical interest. On the
fundamental side, such a system possesses a rich variety of
dynamical states \cite{Votyakov:PRL:2007}. On the practical side,
spatially localized magnetic fields enjoy a variety of industrial
applications in metallurgy, e.g. \cite{Davidson:Review:1999},
including stirring of melts by a moving magnetic obstacle (called
electromagnetic stirring), removing undesired turbulent
fluctuations during steel casting using steady magnetic obstacles
(called electromagnetic brake) and non-contact flow measurement
using a magnetic obstacle (called Lorentz force velocimetry, e.g.
\cite{Thess:Votyakov:Kolesnikov:2006}).
In this paper, the magnetic Reynolds number $R_m=\mu^*\sigma u_0
H$ is taken to be much less than one where $\mu^*$ is the magnetic
permeability, $\sigma$ is fluid electric conductivity, and $u_0$
and $H$ are the characteristic scales for velocity and length.
Therefore the induced magnetic field is expected to be much less
than the imposed external magnetic field
\cite{Shercliff:book:1962}, \cite{Roberts:1967},
\cite{Moreau:book:1990}, \cite{Davidson:book:2001}. Under this
constraint, the external magnetic field has the following twofold
effect on a turbulent magnetohydrodynamic (MHD) flow. Firstly, the
turbulent velocity pulsations are suppressed in the direction
parallel to the direction of the external field, that is the
turbulence tends to be more and more two-dimensional when the
external field becomes stronger and stronger \cite{Moffatt:1967},
\cite{Sommeria:Moreau:MHDturblulence:1982}, \cite{Davidson:1997}.
This is true for the system subject to a homogeneous magnetic
field where the mean velocity is constant over the flow except for
boundary layers. Secondly, when the external field is local in
space, as it must be in the case of the magnetic obstacle, then
the decelerating Lorentz force is higher in the center of the
obstacle compared to its periphery. This creates a shear gradient
in the mean flow velocity which generates an additional vorticity
which then diffuses downstream and contributes to the turbulence.
Therefore it is important to understand for practical applications
whether the useful turbulence-damping effect of a magnetic brake
is not obliterated by excessive vorticity generation in the wake
of the magnetic obstacle.
In order to deal with turbulent phenomena one needs to know the
averaged parameters of the flow. However, properly defining, for
instance, the mean velocity is not a trivial task in the case of a
local magnetic field because various recirculation patterns are
possible both inside and in the vicinity of the magnetic obstacle
as shown below. So the first obvious step is to study a laminar
flow around the magnetic obstacle before attacking turbulence
subject to the local external magnetic field. On the other hand,
as shown in this paper and elsewhere \cite{Votyakov:PoF:2009},
there is a similarity concept between a hydrodynamic flow around a
solid cylinder and a MHD flow around a strong magnetic obstacle.
This gives hope that numerous results for turbulent flows
initiated by an obstacle can be projected onto turbulent MHD flows
influenced by the local magnetic field. For instance, the
vorticity generation by shear layer of a solid cylinder can be
roughly perceived as similar to those in the shear layer alongside
the magnetic obstacle. Thus, this paper is aimed to attract
attention of researchers working on ordinary hydrodynamic
turbulence to problems appearing in a MHD flow subject to a
heterogeneous external magnetic field.
Studies of the effects of the magnetic obstacle on a liquid metal
flow had been initiated in 1970s in the former Soviet Union by
Gelfgat \textit{et al.} \cite{Gelfgat:Peterson:Sherbinin:1978},
\cite{Gelfgat:Olshanskii:1978}, and have been recently revived in
the West by Cuevas \textit{et al.}
\cite{Cuevas:Smolentsev:Abdou:Pamir:2005},
\cite{Cuevas:Smolentsev:Abdou:2006},
\cite{Cuevas:Smolentsev:Abdou:PRE:2006}. Among the above citations
there were 2D numerical works related to creeping MHD flow, where
a possible recirculation induced by the local magnetic field was shown,
see for example \cite{Gelfgat:Peterson:Sherbinin:1978},
\cite{Cuevas:Smolentsev:Abdou:PRE:2006}, but where the physical
explanation of the recirculation, as well as the generic scenario
for the MHD flow around the magnetic obstacle, were obscured.
New results about the wake of a magnetic obstacle have been
reported by \cite{Votyakov:PRL:2007}, \cite{Votyakov:JFM:2007} and
the generic scenario has been elaborated in
\cite{Votyakov:PoF:2009}. It has been found that a liquid metal
flow subject to a local magnetic field shows different
recirculation patterns: (1) no vortices, when the viscous forces
prevail at small Lorentz force, (2) one pair of \textit{inner
magnetic} vortices between the magnetic poles, when Lorentz force
is high and inertia small, and (3) three pairs, namely, magnetic
as above, \textit{connecting} and \textit{attached} vortices, when
Lorentz and inertial forces are high. The latter six-vortex
ensemble is shown in Fig.~\ref{Fig:IntroductoryMO}$b$.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=13.5cm, angle=0, clip]{IntroductoryMO.eps}
\end{center}
\caption{\label{Fig:IntroductoryMO}$a$ - scheme of the magnetic obstacle
created by two permanent magnets which are located on the top and bottom of the channel where
an electrically conducting liquid flows. $b$ - structure of the wake of the magnetic
obstacle consisting of inner magnetic (first pair), connecting (second) and attached
vortices (third pair). Dashed bold lines on $b$ mark borders of the
magnets. }
\end{figure*}
The goal of the current paper is to highlight effects taking place
in the laminar flow around the magnetic obstacle when the
interaction parameter $N$, which is a ratio of Lorentz force to
the inertial force, increases. When $N$ is very large, both mass
transfer and electric field vanish in the region between magnetic
poles. This region, hereinafter called the core of the magnetic
obstacle, appears as if frozen by the external magnetic field, so
that the upstream flow and crosswise electric currents can not
penetrate inside it. Thus, the core of the magnetic obstacle is
similar to a solid insulated obstacle inside an ordinary
hydrodynamical flow with crosswise electric currents and
\textit{without} an external magnetic field. (This concerns
hydrodynamics because there is no magnetic field, and the
crosswise electric currents go around the insulated insertion
without changing the mass flow.) Magnetic vortices are located beside the core and compensate shear stresses like a ball-bearing between the impenetrable region and the upstream flow.
It is worthwhile to notice that the core of a magnetic obstacle is
not similar to a stagnant region appearing in the MHD flow subject
to a fringing transverse magnetic field. The fringing field is
nonuniform in the streamwise direction and uniform in the spanwise
direction, while in the case of the magnetic obstacle, the
external local magnetic field is nonuniform in all the directions.
Effects of the fringing field on a duct liquid metal flow have
been intensively studied before, see for example
\cite{Molokov:Reed:Fusion:2003}, \cite{Alboussiere:2004},
\cite{Kumamaru:etal:2004}, \cite{Kumamaru:etal:2007},
\cite{ni_current_2007}, while those of the magnetic obstacle is a
new research field \cite{Votyakov:PoF:2009}. It is trivial that in
both cases the breaking Lorentz force is responsible for the
phenomenon that the liquid metal flow vanishes in the space of the
strong magnetic field. However, in the case of the fringing field
the side flow jets are caused by a geometrical heterogeneity
imposed by the sidewalls of the duct, and the stagnant region of
vanishing flow tends to spread completely between the sidewalls.
In the case of the magnetic obstacle, maxima of streamwise
velocity appear in an originally free flow around the region where
the magnetic field is of highest intensity, and the core roughly
corresponds to the region where the magnetic field is imposed. The
uniformity of the fringing field in the spanwise directions makes
impossible recirculation in the M-shaped velocity profile, while
magnetic vortices alongside a magnetic obstacle easily appear at
moderate $N$ \cite{Votyakov:JFM:2007}.
The structure of the present paper is as follows. First, we
present technical details of the simulations: model, equations and
3D numerical solver. Then, we report results for the core of the
magnetic obstacle obtained in a series of 3D simulations. As an
extension of the presentation given in TSFP-6
\cite{Votyakov:TSFP6:2009}, we discuss also a 2D MHD flow and show
the differences between 2D and 3D cases. A summary of the main
conclusions ends the paper.
\section{3D Model: equations and numerical method}
The equations governing the motion of an electrically conductive
and incompressible fluid are derived from the Navier-Stokes
equations coupled with the Maxwell equations for a moving medium
and Ohm's law. By assuming that the induced magnetic field is
infinitely small in comparison to the external magnetic field, the
equations in dimensionless form are:
\begin{eqnarray}
\label{eq:NSE:momentum}
\frac{{\partial\textbf{u}}}{{\partial t}} + (\textbf{u} \cdot \nabla ) \textbf{u}
&=& - \nabla p + \frac{1}{Re}\triangle \textbf{u} + N (\textbf{j}\times\textbf{B}),
\\ \label{eq:NSE:Ohm} \textbf{j} &=& -\nabla\phi + \textbf{u} \times \textbf{B},
\\ \label{eq:NSE:Poisson} \nabla \cdot \textbf{j} &=& 0,
\\ \label{eq:NSE:continuity} \nabla \cdot \textbf{u} &=& 0,
\end{eqnarray}where $\textbf{u}$ is velocity field, $\bf{B}$ is an
external magnetic field, $\textbf{j}$ is electric current density,
$p$ is pressure, $\phi$ is electric potential. The Reynolds
number, $Re=u_0H/\nu$, expresses a ratio between the inertia force
and the viscous force and the interaction parameter,
$N=B_0^2H\sigma/(u_0\rho)$, expresses the ratio between the
Lorentz force and the inertia force. $Re$ and $N$ are linked with
each other by means of the Hartmann number, $Re\,N=Ha^2$,
$Ha=HB_0(\sigma/\rho\nu)^{1/2}$, which determines the thickness
$\delta$ of Hartmann boundary layers, $\delta/H \sim Ha^{-1}$,
formed near the walls perpendicular to the direction of the
magnetic field in the flow under constant magnetic field. Here,
$H$ is the characteristic length (size), $u_0$ is the
characteristic flow velocity, $B_0$ the characteristic magnitude
of the magnetic field intensity, $\nu$ is the kinematic viscosity
of the fluid, $\sigma$ is the electric conductivity of the fluid,
and $\rho$ is its density.
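For orientation, these dimensionless groups can be evaluated as in the following minimal sketch (the material properties are assumed values typical of a GaInSn alloy, and the scales $H$, $u_0$, $B_0$ are hypothetical):
\begin{verbatim}
# Minimal sketch: Re, Ha and N for assumed liquid-metal properties.
from math import sqrt

sigma, rho, nu = 3.3e6, 6.4e3, 3.4e-7   # S/m, kg/m^3, m^2/s (assumed)
H, u0, B0 = 0.05, 0.01, 0.1             # m, m/s, T (assumed scales)

Re = u0 * H / nu                        # inertia / viscosity
Ha = H * B0 * sqrt(sigma / (rho * nu))  # Hartmann number
N = Ha**2 / Re                          # Lorentz / inertia
print(f"Re = {Re:.0f}, Ha = {Ha:.0f}, N = {N:.1f}")
\end{verbatim}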
At given $Re$, $N$ and ${\bf B}(x,y,z)$, the system of the partial
differential equations shown above is solved in a 3D computational
domain to obtain the unknown ${\bf u}(x,y,z)$, $p(x,y,z)$ and
$\phi(x,y,z)$. The computational domain has periodic boundary
conditions in the spanwise direction, and no-slip and insulating
top and bottom walls (transverse direction). The electric
potential at the inlet and outlet planes is taken equal to zero.
For the velocity, the outlet boundary is force free, and a laminar
parabolic velocity profile is imposed at the inlet boundary. We
are interested in a stationary laminar solution, hence, the
initial conditions play no role.
The origin of the right-handed coordinate system, $x=y=z=0$, is
taken in the center of the magnetic gap. The size of the
computational domain is: $-L_{x}\le x \le L_{x}$, $-L_{y}\le y \le
L_{y}$, $-H \le z \le H$, where $L_{x}=25$, $L_{y}=25$, $H=1$ and
$x, y, z$ are respectively the streamwise, crosswise, and
transverse directions.
The characteristic dimensions for the Reynolds number $Re$, and
the interaction parameter $N$ are the half-height of the duct $H$,
the mean flow rate $u_0$, and the magnetic field intensity $B_0$
taken at the center of the magnetic gap, $x\!=\!y\!=\!z\!=\!0$.
The range of the studied parameters is: $Re=0.1, 1, 10, 100$ and
$0 \leq N \leq 1000$.
The external magnetic field is modelled as a field from two
permanent magnets occupying a space $\Omega=\{|x|\leq M_x, |y|\leq
M_y, |z|\geq h\}$, where $M_x=1.5$ ($M_y=2$) is the streamwise
(spanwise) width of the magnet, and $2\times h$ is the distance
between magnetic poles, $h=1.5$. The magnets are supposed to be
composed of magnetic dipoles oriented along the $z$-direction,
therefore the total magnetic field ${\mathbf
B(x,y,z)}=\int_{\Omega}{\bf B_d(r,r')} d{\bf r'}$, where ${\bf
B_d(r,r')}=\nabla\left[\partial_{z}(1/|{\bf r}-{\bf r'}|)\right]$
is a field, at the point $\mathbf{r}=(x,y,z)$ created by the
single magnetic dipole located in the point
$\mathbf{r'}=(x',y',z')$. The integration can be performed
analytically, see \cite{Votyakov:JFM:2007}, and after cumbersome
algebraic calculations one obtains:
\begin{eqnarray*} \label{eq:MF:final}
B_{x}(\mathbf{r})&=& \frac{1}{B_{0}} \sum_{k=\pm 1} \sum_{j=\pm 1}\sum_{i=\pm 1}
(ijk)\,\mbox{arctanh}\left[\frac{\delta_j}{\delta_{ijk}}\right], \label{eq:MF:Bx}\\
B_{y}(\mathbf{r})&=& \frac{1}{B_{0}} \sum_{k=\pm 1} \sum_{j=\pm 1}\sum_{i=\pm 1}
(ijk)\,\mbox{arctanh}\left[\frac{\delta_i}{\delta_{ijk}}\right], \label{eq:MF:By}\\
B_{z}(\mathbf{r})&=& -\frac{1}{B_{0}} \sum_{k=\pm 1} \sum_{j=\pm 1}\sum_{i=\pm
1}(ijk)\mbox{arctan}\left[\frac{\delta_i \delta_j}{\delta_k\delta_{ijk}}\right], \label{eq:MF:Bz}
\end{eqnarray*} where $\delta_i=(x-iM_x)$, $\delta_j=(y-jM_y)$,
$\delta_k=(z-kh)$, and
$\delta_{ijk}=[(x-iM_x)^2+(y-jM_y)^2+(z-kh)^2]^{1/2}$. The
normalization factor $B_{0}$ is selected in such a way to have the
intensity of the $z$-component equal one, $B_z(0,0,0)=1$, in the
center of the magnetic gap. Three-fold summation with the
sign-alternating factor $(ijk)$ reflects the fact that these
equations are obtained by integrating over the 3D box $\Omega$.
Different cuts of the intensity ${\bf B(r)}$ are plotted in Fig.~3
and Fig.~4($b$) in the paper of \cite{Votyakov:JFM:2007}.
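A minimal computational sketch of these formulas follows (illustrative; it assumes the triple sums are implemented verbatim with $M_x=1.5$, $M_y=2$, $h=1.5$ from the text, and normalizes so that $B_z(0,0,0)=1$):
\begin{verbatim}
# Minimal sketch: the analytic field of the two permanent magnets.
from math import atan, atanh, sqrt
from itertools import product

Mx, My, h = 1.5, 2.0, 1.5

def B_raw(x, y, z):
    """Unnormalized (B_x, B_y, B_z) from the sign-alternating sums."""
    bx = by = bz = 0.0
    for i, j, k in product((-1, 1), repeat=3):
        di, dj, dk = x - i * Mx, y - j * My, z - k * h
        dijk = sqrt(di * di + dj * dj + dk * dk)
        s = i * j * k
        bx += s * atanh(dj / dijk)
        by += s * atanh(di / dijk)
        bz -= s * atan(di * dj / (dk * dijk))
    return bx, by, bz

B0 = B_raw(0.0, 0.0, 0.0)[2]     # normalization: makes B_z(0,0,0) = 1

def B(x, y, z):
    bx, by, bz = B_raw(x, y, z)
    return bx / B0, by / B0, bz / B0
\end{verbatim}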
The 3D numerical solver has been explained in detail earlier, see
\cite{Votyakov:Zienicke:FDMP:2006}. It was developed from a free
hydrodynamic solver originally created in the research group of
Prof.~M.~Griebel (\cite{Griebel:book:1995}). The solver employs
the Chorin-type projection algorithm and finite differences on an
inhomogeneous staggered regular grid. Time integration is done by
the explicit Adams-Bashforth method that has second order
accuracy. Convective and diffusive terms are implemented by means
of the VONOS (variable-order non-oscillatory scheme) method. The
3D Poisson equations are solved for pressure and electric
potential at each time step by using the bi-conjugate gradient
stabilized method (BiCGStab).
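As an illustration of the electric-potential step (a minimal 2D model, not the actual 3D solver): taking the divergence of Ohm's law (\ref{eq:NSE:Ohm}) and using (\ref{eq:NSE:Poisson}) gives $\triangle \phi = \nabla \cdot (\textbf{u} \times \textbf{B})$, a Poisson equation that can be solved with BiCGStab; the grid size, spacing and right-hand side below are stand-in assumptions.
\begin{verbatim}
# Minimal 2D model: solve lap(phi) = div(u x B) with BiCGStab.
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import bicgstab

n, dx = 64, 0.1
off = np.ones(n - 1)
# 1D second-difference operator, homogeneous Dirichlet ends
D2 = diags([-2.0 * np.ones(n), off, off], [0, -1, 1]) / dx**2
Lap = kron(D2, eye(n)) + kron(eye(n), D2)  # 2D Laplacian (Kronecker sum)

rhs = np.random.rand(n * n) - 0.5          # stands in for div(u x B)
phi, info = bicgstab(Lap.tocsr(), rhs, maxiter=5000)
print("converged" if info == 0 else f"bicgstab flag: {info}")
phi = phi.reshape(n, n)                    # potential on the grid
\end{verbatim}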
The numerical grid was regular and inhomogeneous, $N_x\times N_y
\times N_z=64^3$. The minimal horizontal step size in the region
of the magnetic gap was $\Delta x \simeq \Delta y \simeq 0.3$,
which means that a few dozens points were used for resolving the
inner vortices in the core of the magnetic obstacle. The minimal
vertical step size near the top and bottom (Hartmann) walls was
$\Delta z=0.005$. This corresponds to using three to five
($=(1/Ha)/\Delta z)$ points to resolve Hartmann layer at
$Ha=40-70$. To ascertain that the numerical resolution was
adequate, a few runs were performed with double the resolution and
no differences have been found.
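A minimal sketch of one common way to construct such an inhomogeneous transverse grid follows (illustrative only; the paper's exact stretching law is not specified, and the clustering parameter beta is hypothetical):
\begin{verbatim}
# Minimal sketch: tanh-stretched transverse grid clustering points
# near the Hartmann walls z = -H and z = +H.
import numpy as np

def stretched_z(nz=64, H=1.0, beta=2.2):
    s = np.linspace(-1.0, 1.0, nz + 1)       # uniform parameter
    return H * np.tanh(beta * s) / np.tanh(beta)

z = stretched_z()
dz_wall = z[1] - z[0]                        # smallest step, at the wall
dz_mid = z[len(z)//2 + 1] - z[len(z)//2]     # largest step, at mid-plane
\end{verbatim}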
\section{3D results\cite{Votyakov:PoF:2009}}
The goal of the simulations is to focus on the flow around a
magnetic obstacle at large interaction parameter $N$. In order to
achieve large $N=Ha^2/Re$, the simulations were started at a small
interaction parameter and $Ha$ was smoothly increased, while
keeping $Re$ constant. Several values of the Reynolds number were
studied, $Re=0.1, 1, 10, 100$, and no principal differences were
found at the same $N$. These low values of $Re$ imply low inertial
forces, therefore, only two-vortex patterns were produced, without
connecting and attached vortices.
In this Section, results are shown for the mid central plane,
where all vortex peculiarities can be distinctively visualized.
Nevertheless, it is necessary to note that the flow in the mid
plane is not two-dimensional. There is a secondary flow from and
into the mid plane towards and from the top and bottom walls. This
secondary flow is caused by the process of creation and
destruction of the Hartmann layers. 3D pictures of the vortices
have been drawn before \cite{Votyakov:JFM:2007} and will not be
considered here.
\begin{figure*}[]
\begin{center}
\includegraphics[width=13.5cm, angle=0, clip]{StreamSlices.eps}
\end{center}
\caption{\label{Fig:StreamSlices}Streamlines in the central plane, $Re=10$,
$N=$5($a$), 10($b$), 640($c$). Dashed bold lines mark borders of the
magnets. As $N$ rises, the magnetic vortices move away from each other, forming in between the core of the magnetic obstacle.}
\end{figure*}
The natural way to visualize the core of the magnetic obstacle is
to plot streamlines of the flow in the central horizontal plane as
shown in Fig.~\ref{Fig:StreamSlices} at different interaction
parameters $N$. Because $N$ is the ratio of the Lorentz force to
the inertial force, the larger $N$ is, the stronger the retarding
effect of the Lorentz force becomes. So, one observes no vortices
at $N=5$, Fig.~\ref{Fig:StreamSlices}$a$; weak circular magnetic
vortices first appear at slightly below $N=10$, as shown in
Fig.~\ref{Fig:StreamSlices}$b$; and finally these vortices become
well developed and strongly deformed at very large $N=640$,
Fig.~\ref{Fig:StreamSlices}$c$. In the latter case, the vortex
streamlines envelop the bold dashed rectangle. This rectangle
denotes the borders of the external magnet; inside the rectangle
at large $N$ one can see an island -- the core of the magnetic
obstacle. The observed deformation of the vortices and their drift
from the center of the magnetic gap are due to the tendency of the
flow to reduce the friction caused by retarding Lorentz force. The
vortices are cambered and located in the shear layer alongside the
magnetic gap in such a way that their rotation looks like the
rotation of a ball-bearing inside the wheel.
\begin{figure*}[]
\begin{center}
\includegraphics[width=13.5cm, angle=0, clip=on]{u_phi_dist.eps}
\end{center}
\caption{\label{Fig:u_phi_dist}Streamwise velocity ($a$) and electric
potential ($b$) along crosswise cuts of middle horizontal plane
$x=z=0$. $Re=10$, $N=$0.1(solid 1), 1.6(dot-dashed 2), 4.9(solid 3),
40(dashed 4), 250(solid 5), and 490(dot-dashed 6). Insertion shows
magnified plots for curves 5 and 6.}
\end{figure*}
\begin{figure*}[]
\begin{center}
\includegraphics[width=13.5cm, angle=0, clip=yes]{u_phi_summ.eps}
\end{center}
\caption{\label{Fig:u_phi_summ}Central streamwise velocity $u_{center}$ ($a$)
and central spanwise electric field $E_{y,center}$ ($b$) as a function
of the interaction parameter $N$. $N_{c,m}$ is a critical
value where the streamwise velocity is equal to zero.
Insertion shows the definition of $u_{center}$ and $E_{y,center}$.}
\end{figure*}
The quantitative analysis of the core was performed by crosswise
cuts through the center of the magnetic gap at different values of the interaction parameter $N$. These cuts are shown in
Fig.~\ref{Fig:u_phi_dist}a for the streamwise velocity $u_x(y)$
and in Fig.~\ref{Fig:u_phi_dist}b for the electric potential
$\phi(y)$. First, we discuss how the streamwise velocity changes
as $N$ increases.
As shown in the Fig.~\ref{Fig:u_phi_dist}$a$ (curve 1), for small
$N=0.1$, the velocity profile is only slightly disturbed with
respect to a uniform distribution. As $N$ increases, the curves
$u_x(y)$ pull further down in the central part $u_{center} \equiv
u_x(0)$, see for example curves 2 and 3. At $N$ higher than a
critical value $N_{c,m}$, i.e. for curve 4, the central velocities
$u_{center}$ are negative. This means that there appears a reverse
flow causing magnetic vortices in the magnetic gap. When $N$ rises
even more (see curves 5 and 6) the magnetic vortices become
stronger and simultaneously shift away from the center to the side
along the $y$ direction, see insertion in
Fig.~\ref{Fig:u_phi_dist}$a$ for curves 5 and 6.
Fig.~\ref{Fig:u_phi_dist}($b$) shows how the electric potential
$\phi(y)$ varies along the central crosswise cut through the
magnetic gap. The slope at the central point gives the crosswise
electric field, $E_{y,center}=-d\phi/dy|_{y=0}$. One can see that
$E_{y,center}$ changes its sign: it is positive at small $N$ and
negative at high $N$. To explain why this is so, one can reason as follows. Any free flow tends to pass over an
obstacle in such a way so as to perform the lowest possible
mechanical work, i.e. flow streamlines are the lines of least
resistance to the transfer of mass. The resistance of the flow
subject to an external magnetic field is caused by the retarding
Lorentz force $F_x\approx j_y B_z$, so the flow tends to produce a
crosswise electric current, $j_y$, as low as possible while
preserving the divergence-free condition $\nabla\cdot{\bf j}=0$.
To satisfy the latter requirement, an electric field ${\bf E}$
must appear, which is directed in such a way, so as to compensate
the currents produced by the electromotive force ${\bf
u}\times{\bf B}$. Next, we analyze the crosswise electric current
$j_y = E_y + (u_z B_x - u_x B_z)$. Due to symmetry in the center
of the magnetic gap $B_y=B_x=u_y=u_z=j_x=j_z=0$, so $j_y = E_y -
u_x B_z$. This means that $E_y$ tends to have the same sign as
$u_x$ in order to make $j_y$ smaller. At small $N$, the streamwise
velocity $u_x$ is large and positive, so the electric field $E_y$
is positive too. When the magnetic vortices appear, there is a
reverse flow in the center. Therefore, the central velocity is
negative now, and the central electric field $E_{y,center}$ is
also negative.
The change of the electric field in the magnetic gap can be
explained in terms of the Poisson equation and the concurrence
between external and internal vorticity, see
\cite{Votyakov:JFM:2007}. Those arguments are also valid here,
however, in contrast to the previous study, we have no side walls
now, so the external vorticity in the present case plays only a
minor role. As a result, the reversal of the electric field
appears at a small $N$ (approximately equal to five), which is
close to $\kappa=0.4$ given in \cite{Votyakov:JFM:2007}.
\begin{figure*}
\begin{center}
\includegraphics[width=13.5cm, angle=0, clip=yes]{phi_p_u_jRe0010Ha070.eps}
\end{center}
\caption{\label{Fig:phi_p_u_jRe0010Ha070}
Middle horizontal plane, $z=0$:
streamlines of the mass ($u_x, u_y$) ($a$) and electric charge
($j_x, y_y$) ($b$) flow. Contour lines for the electric potential $\phi(x,y)$ ($c$)
and pressure $p(x,y)$ ($d$) resemble the streamlines given above.
$Re=10$, $N=490$. Contours of the electric potential are given with step 0.01,
and contours of the pressure are given with the step 0.4.
Dashed bold rectangle shows borders of the external magnet.}
\end{figure*}
The overall data about $u_{center}$ and $E_{y,center}$ in the
whole range of $N$ studied are shown in Fig.~\ref{Fig:u_phi_summ}.
One can see that both characteristics start from positive values,
then, they cross the zeroth level, reach a minimum, go up again,
and finally vanish in the limit of high $N$. With respect to the
streamwise velocity, this means that, at high $N$, there is no
mass flow in the center of the magnetic gap; the other velocity
components are equal to zero due to symmetry. With respect to the
crosswise electric field, this means that there are no electric
currents. This occurs because there is no mass flow, therefore,
the electromotive force vanishes, $E_y$ goes to zero, and the
other electric field components are equal to zero due to symmetry.
Thus, one can say that the center of the magnetic gap is frozen by
the strong external magnetic field, so that both mass flow and
electric currents tend to bypass the center. In other words, this
means that a strong magnetic obstacle has a core, and such a core
is like a solid insulated body, being impenetrable for the
external mass and electric charge flow.
When the inertia and viscous forces are negligible compared to the
Lorentz force and pressure gradients, then mass flow streamlines
must be governed by the electric potential distribution, while the
trajectories of the induced electric current must be governed by
pressure distribution. This is derived straightforwardly from
equations (\ref{eq:NSE:momentum} -- \ref{eq:NSE:Ohm}). Because
inertia and viscosity are vanishing, equations
(\ref{eq:NSE:momentum} -- \ref{eq:NSE:Ohm}), in the core and
nearest periphery of the magnetic obstacle, become:
\begin{eqnarray}
\nabla p = {\bf j \times B}, \quad \nabla \phi = -{\bf j}+{\bf
u\times B} \approx {\bf u\times B}\,. \label{eq:Kulikovskii}
\end{eqnarray}
In the latter formula, ${\bf j}$ is negligible compared to $\nabla\phi$, and ${\bf u\times B}$ is the dominating term. In the core of the obstacle,
${\bf B}=(0,0,B_z)\approx (0,0,1)$, hence, the pressure (electric
potential) is a streamline function for the electric current
(velocity), see Fig.~\ref{Fig:phi_p_u_jRe0010Ha070}. These
relationships for the flow under the strong external magnetic
field had been discussed earlier by Kulikovskii in 1968
\cite{Kulikovskii:1968}.
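To make the streamline property explicit (a short component-wise restatement, added for clarity): with ${\bf B} \approx (0,0,1)$ in the core, the components of (\ref{eq:Kulikovskii}) read
\begin{align*}
\partial_x p = j_y, \quad \partial_y p = -j_x, \qquad
\partial_x \phi \approx u_y, \quad \partial_y \phi \approx -u_x,
\end{align*}
so that ${\bf j} \cdot \nabla p = 0$ and ${\bf u} \cdot \nabla \phi \approx 0$: the in-plane electric current follows the isolines of $p$, and the in-plane velocity follows the isolines of $\phi$, exactly as seen in Fig.~\ref{Fig:phi_p_u_jRe0010Ha070}.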
Kulikovskii's theory is linear, therefore, it must work well in
the stagnant core of the magnetic obstacle. The traditional
approach in the context of this theory is to introduce the
so-called characteristic surfaces, and then, to impose Hartmann
layers as boundary conditions for further integration along the
characteristic surfaces. Such an approach has been used before for
slowly varying fringing magnetic fields \cite{Alboussiere:2004},
where Hartmann layers and the inertialess assumption are
reasonable. However, it is an open issue whether the concept of
the characteristic surfaces is valid for the case of the magnetic
obstacle. For perfectly electrically conductive liquids this
concept forces mass and electric streamlines to flow along the
surfaces of constant $\mathbf{B}$. The latter is a consequence of (\ref{eq:Kulikovskii}) and is actually observed in
Fig.~\ref{Fig:phi_p_u_jRe0010Ha070}$a,b$ far from the core of the
obstacle. Nevertheless, the concept of the characteristic
surfaces does not allow for any recirculation in shear layers,
what is the most remarkable effect here.
A magnetic field in a rotational flow requires more sophisticated
boundary conditions than just the Hartmann layer. A solution is known
for the Ekman-Hartmann layers \cite{Debnath:1973}, where
both constant rotation and a constant magnetic field are taken
jointly into account. This probably does not fit the present case
either, because the vorticity is not constant along the transverse
direction, and the shape of the vortices is not circular. Moreover,
inclusion of the non-constant vorticity destroys the linearity of
Kulikovskii's theory. Therefore, Kulikovskii's theory cannot be
used as it stands to predict recirculation \emph{a priori}.
Indeed, this explains why the theory has not been applied to
magnetic vortices, even though it has been known for a while.
Nevertheless, Kulikovskii's theory is useful and must be mentioned
because it explains \emph{a posteriori} the shape of vortices and
their matching to electric potential lines.
\section{2D creeping flow around magnetic obstacle}
It is interesting to compare our results with those reported
earlier by Cuevas \textit{et al.}
\cite{Cuevas:Smolentsev:Abdou:PRE:2006} for a creeping 2D MHD
flow. In particular, these authors observed not only two magnetic
vortices but also four vortices aligned in the spanwise
direction at high Hartmann numbers, see Fig.~9 of
\cite{Cuevas:Smolentsev:Abdou:PRE:2006}. The four-vortex aligned
structure was not observed in our 3D simulations; therefore, to gain
insight into it, we have performed our own 2D simulations with
the same parameters for the magnetic field $M_x=0.5$, $M_y=0.5$,
$h=1$ (our formula for the field (\ref{eq:MF:Bz}) coincides with
Eq.~(31) of \cite{Cuevas:Smolentsev:Abdou:PRE:2006}) and
$Re=0.05$. To gather more information, we explored the larger $Ha$
range, $0\le Ha \le 150$. A 2D finite element method\footnote{In
the 2D simulations, the numerical mesh varied from $64^2$ to
$256^2$. The quality of the mesh was checked by repeating the
runs with doubled resolution.} was employed to solve
the vorticity-stream function formulation of steady-state
Navier-Stokes equation with the Lorentz force. Electric currents
were calculated from the magnetic induction equation. Generally,
the obtained 2D results coincided with those from
\cite{Cuevas:Smolentsev:Abdou:PRE:2006} and also extended them as
reported below.
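To convey the structure of such a computation, a deliberately
simplified finite-difference (rather than finite-element) sketch of
the vorticity-stream function iteration is given below. The Gaussian
shape of $B_z$, the boundary conditions, the grid and the parameter
values are illustrative assumptions only; they are not the scheme and
settings actually used for the results reported here.
\begin{verbatim}
# Simplified 2D vorticity-stream function MHD sketch (illustration).
import numpy as np

N, L = 65, 8.0                      # grid points per side, half-width
h = 2.0 * L / (N - 1)
Re, Ha = 0.05, 5.0                  # illustrative values
xs = np.linspace(-L, L, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")
Bz = np.exp(-(X**2 + Y**2))         # localized field (stand-in shape)

def ddx(f):
    g = np.zeros_like(f); g[1:-1, :] = (f[2:, :] - f[:-2, :])/(2*h)
    return g

def ddy(f):
    g = np.zeros_like(f); g[:, 1:-1] = (f[:, 2:] - f[:, :-2])/(2*h)
    return g

def lap(f):
    g = np.zeros_like(f)
    g[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:]
                     + f[1:-1, :-2] - 4.0*f[1:-1, 1:-1])/h**2
    return g

def poisson(rhs, f0, sweeps=100):   # Jacobi sweeps, borders held fixed
    f = f0.copy()
    for _ in range(sweeps):
        f[1:-1, 1:-1] = 0.25*(f[2:, 1:-1] + f[:-2, 1:-1]
                              + f[1:-1, 2:] + f[1:-1, :-2]
                              - h**2 * rhs[1:-1, 1:-1])
    return f

psi, omega, phi = Y.copy(), np.zeros_like(X), np.zeros_like(X)
dt = 0.1 * Re * h**2                # explicit diffusive stability limit

for step in range(500):             # pseudo-time march to steady state
    u, v = ddy(psi), -ddx(psi)
    # charge conservation div j = 0, Ohm's law j = -grad(phi) + u x B
    phi = poisson(ddx(v*Bz) - ddy(u*Bz), phi)
    jx, jy = -ddx(phi) + v*Bz, -ddy(phi) - u*Bz
    fx, fy = jy*Bz, -jx*Bz          # Lorentz force j x B, B along z
    omega = omega + dt*(-(u*ddx(omega) + v*ddy(omega)) + lap(omega)/Re
                        + (Ha**2/Re)*(ddx(fy) - ddy(fx)))
    psi = poisson(-omega, psi)      # omega = -lap(psi)

print("streamwise velocity at the center:", ddy(psi)[N//2, N//2])
\end{verbatim}
The printed value mimics the quantity $u_{center}$ discussed above;
with a finer grid and enough pseudo-time steps, tracking it while
scanning $Ha$ gives a qualitative analogue of the curve shown in
Fig.~\ref{Fig:u_summ_2DRe005}.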
\begin{figure*}[]
\begin{center}
\includegraphics[width=12cm, angle=0, clip=on]{u_summ_2DRe005.eps}
\end{center}
\caption{\label{Fig:u_summ_2DRe005}
Central streamwise velocity $u_{center}$ as a function of $Ha$ for 2D MHD
flow, $Re=0.05$, $M_x=0.5$, $M_y=0.5$, $h=1$.
Insertion is the zoom for $20\le Ha\le 150$.}
\end{figure*}
The main curve of our 2D results is presented in
Fig.~\ref{Fig:u_summ_2DRe005}. It shows the dependence of the
velocity in the center of the magnetic obstacle as a function of
$Ha$ number. (This is similar to Fig.~\ref{Fig:u_phi_summ}($a$)
with the difference that the $x$-axis corresponds to $Ha$ instead
of $N$.) One can see that $u_{center}$ vanishes as $Ha$ increases,
i.e., the mass flow disappears in the core of the obstacle as the
intensity of the magnetic field increases. Therefore, the
principal conclusion derived from 3D results remains valid: there
develops a frozen core of the magnetic obstacle when $Ha$ is
large. However, the details of the approach of $u_{center}$ to the
zero level are different in the 2D case relative to the 3D case.
To stress this fact, we zoomed the curve in
Fig.~\ref{Fig:u_summ_2DRe005} and put it as an insertion. One can
see that in the 2D case $u_{center}$ vanishes with oscillations,
while in the 3D case, see Fig.~\ref{Fig:u_phi_summ}($a$), it was
always negative as it approached the zero level.
To consider in detail the different flow regimes shown
cumulatively in Fig.~\ref{Fig:u_summ_2DRe005}, we have labelled
the turning points, where $u_{center}$ changes its sign, with the
letters A, B, C. Figures~\ref{Fig:Ha015_2D}-\ref{Fig:Ha125_2D}
illustrate below three nontrivial flow regimes through two plots:
the streamwise velocity profile along the central spanwise line,
$x=0$, and flow streamlines.
\begin{figure*}[]
\begin{center}
\includegraphics[width=12.5cm, angle=0, clip=on]{Ha015_2D.eps}
\end{center}
\caption{\label{Fig:Ha015_2D}
Streamwise velocity $u_x$ along spanwise direction at $x=0$ ($a$) and
flow streamlines ($b$) at $Ha=15$ for 2D MHD flow.
On ($b$) one distinguishes two magnetic vortices.
Bold lines denote borders of the magnet.}
\end{figure*}
\begin{figure*}[]
\begin{center}
\includegraphics[width=12.5cm, angle=0, clip=on]{Ha050_2D.eps}
\end{center}
\caption{\label{Fig:Ha050_2D}
Streamwise velocity $u_x$ along spanwise direction at $x=0$ ($a$) and
flow streamlines ($b$) at $Ha=50$ for 2D MHD flow.
Insertion on ($a$) is the zoom for $-1.5\le y\le 1.5$. On
($b$) one distinguishes four vortices aligned along the $y$
direction at $x=0$.
Bold lines denote borders of the magnet.}
\end{figure*}
\begin{figure*}[]
\begin{center}
\includegraphics[width=12.5cm, angle=0, clip=on]{Ha125_2D.eps}
\end{center}
\caption{\label{Fig:Ha125_2D}
Streamwise velocity $u_x$ along spanwise direction at $x=0$ ($a$) and
flow streamlines ($b$) at $Ha=125$ for 2D MHD flow.
Insertion on ($a$) is the zoom for $-1.5\le y\le 1.5$. On
($b$) one distinguishes six vortices aligned along the $y$
direction at $x=0$.
Bold lines denote borders of the magnet.}
\end{figure*}
The first regime, corresponding to the $Ha$ range from zero up to
the first turning point, A, is characterized by a weak braking
Lorentz force. There is no recirculation, so we have considered
this regime as trivial, and did not illustrate it further by any
figure. It is marked as regime I and represented by Fig.~7 in the
paper by Cuevas \textit{et al.}
\cite{Cuevas:Smolentsev:Abdou:PRE:2006}.
The second regime is for the $Ha$ interval between the first
turning point, A, and the second, B. It is characterized by a
Lorentz force that is already strong enough to reverse the flow
between the magnetic poles. This results in two magnetic vortices
as illustrated by Fig.~\ref{Fig:Ha015_2D} of the current paper and
Fig.~8 of \cite{Cuevas:Smolentsev:Abdou:PRE:2006}. These vortices
are analogous to those in 3D results, with the difference that in
the 3D simulation the flow particles captured by the magnetic
vortices move helically towards the top and bottom walls to
dissipate upstream kinetic energy \cite{Votyakov:JFM:2007}, while
in the 2D system they simply rotate along closed contour
lines of the stream function.
In the third regime, for the $Ha$ range between turning points B
and C, the reversal of the upstream flow occurs in the shear layer
rather than in the center of the magnetic obstacle. This can be
explained by the fact that, at these $Ha$, the flow in the core
is hindered, and the shear layer between the core and the
bypassing flows is large to accommodate magnetic vortices. When
this happens, the requirement of flow continuity, $\nabla
\mathbf{u}_\perp=0$, turns the velocity in the center of the
obstacle to positive values. As a result, the four vortex, aligned
along the central spanwise line, appear as shown in
Fig.~\ref{Fig:Ha050_2D} (see also Fig.~9 of
\cite{Cuevas:Smolentsev:Abdou:PRE:2006}).
When $Ha$ increases further, after the turning point C, the shear
layer is extended further as well. So the whole elongated vortex
structure increases in length, and the two vortices that were closest
to the center of the stagnant core are pushed out of the core.
Again, to satisfy flow continuity, this initiates weaker
counter-rotating vortices in the core. In this regime, there are
altogether six aligned vortices shown in Fig.~\ref{Fig:Ha125_2D}.
This recirculation pattern is new and has not been reported
earlier.
Now, we can offer the following qualitative description of the
creeping 2D MHD flow around a magnetic obstacle. As $Ha$ increases,
additional vortices appear in the expanded shear
layer. These vortices have decaying intensity towards the
magnetic gap and alternate in sign, starting from the very
first vortex, counted from the bypassing flow, where the flow is
reversed. This results in vanishing velocity oscillations in the
center of the magnetic gap, Fig.~\ref{Fig:u_summ_2DRe005}.
The question arises whether the aligned multi-vortices appear in a
creeping 3D MHD flow. In the latter case, the largest $N$ under
consideration was 1000, and only two magnetic vortices were
observed. The four-vortex aligned structure that was clearly present
in our 2D results occurred at $Ha \approx 50$, which (at $Re=0.05$)
corresponds to $N=Ha^2/Re=50\,000$, much larger than the maximum
$N=1000$ used in the 3D simulations. On the other hand, the
velocities are of magnitude $10^{-3}-10^{-2}$ at the center of the
magnetic gap, so one would have to employ a very accurate 3D
numerical code to verify the effect discussed.
Physically, the crucial difference between 2D and 3D vortices is
the secondary flow in the transverse direction. This dissipates
kinetic energy of the upstream flow inside viscous boundary layers
located on no-slip top/bottom walls of the duct. On the other
hand, the secondary flow variation, $\partial_{z}u_z$, is a mass
source/sink adjusted to keep flow continuity, $\nabla_\perp \cdot
\mathbf{u}_\perp=-\partial_{z}u_z$. Obviously, in the 2D case, the
dissipation by the secondary flow is absent and the flow is
obliged to satisfy 2D continuity, $\nabla_\perp \cdot
\mathbf{u}_\perp=0$. Then, a possible way to dissipate energy
while keeping the flow continuous in 2D is to create a weaker
counter-rotating vortex near a strong vortex, the phenomenon
observed in Fig.~\ref{Fig:Ha050_2D} and \ref{Fig:Ha125_2D}.
\section{Core of the magnetic obstacle and controlled
production of vorticity.}
It is worth discussing how a magnetic obstacle can be used to
control vorticity and turbulence. On the one hand, the magnetic
field damps turbulent pulsations in the core of the obstacle by
freezing any kind of mass and electric transfer. On the other
hand, the shear layer between the core of the obstacle and the
rest of the flow generates a transverse vorticity. The results
presented herein were obtained at low $Re$ numbers, and all the
vortices were confined in the shear layer. However, it is possible
to imagine the following two scenarios. The first situation is to
destroy the initial core by increasing the $Re$ number, that is by
turbulating the flow, at fixed magnetic field parameters; and the
second situation is to increase the magnetic field strength so as
to manifest the core, that is to laminarize the core, while the
$Re$ number remains high and the rest of the flow is turbulent.
In the trivial turbulating scenario, the interaction parameter
$N=Ha^2/Re$ is initially high because of the $Re$ number being low
while the $Ha$ number is moderate.\footnote{A moderate $Ha$ number
range was also used in this paper in order to avoid numerical
difficulties related to the resolution of Hartmann layers.} Then,
one increases the $Re$, which could be equivalent to increasing
the inlet flow rate. Obviously, since $Ha$ is fixed, a higher $Re$
implies a lower $N$, hence, the core of the magnetic obstacle
becomes more penetrable, so the lateral magnetic vortices will
shift toward each other due to the unfreezing of the core. At
some threshold $Re$ number, when the inertia of the flow becomes
sufficiently high to produce a stagnant region behind the core of
the obstacle, one should observe the six-vortex pattern
shown schematically in Fig.~\ref{Fig:IntroductoryMO}. Then, an
even higher $Re$ number will result in the detachment of the
attached vortices (the third pair of vortices in
Fig.~\ref{Fig:IntroductoryMO}) which is analogous to the classical
process of the detachment of the attached vortices past a solid
cylinder \cite{Votyakov:PoF:2009}. Finally, at the highest $Re$,
the interaction parameter $N$ will become so small that the
internal recirculation structure of the magnetic obstacle will
disappear, and the flow will become turbulent as in ordinary
hydrodynamics.
In the laminarizing scenario, by taking initially a high $Re$
number and keeping it fixed, one increases the $Ha$ number by
imposing, for instance, an external local magnetic field with
higher and higher intensity. This increases the interaction
parameter $N$, so the core of the obstacle must manifest itself by
first showing the same six-vortex recirculation as described
above. In practice, however, it might be difficult for all
six vortices to be stable at very high $Re$, because in this case
the rest of the flow is turbulent, so the wake of the obstacle
oscillates and can potentially destroy the stability of the
connecting and attached vortices (the second and the third pair of
the vortices in Fig.~\ref{Fig:IntroductoryMO}). Nevertheless, the
lateral magnetic vortices (the first pair of the vortices in
Fig.~\ref{Fig:IntroductoryMO}) must manifest themselves clearly at
any $Re$ provided that $N$ is large. These vortices will be
distinctly seen at the beginning of the turbulent wake. A further
increase of $Ha$ laminarizes and freezes the core of the
obstacle. The magnetic vortices move away from each other and
adjust their position in the lateral shear layers. Owing to the
high $Re$ number, the recirculation confined in the shear layer
possesses an excess amount of kinetic energy that is dissipated
downstream in the wake of the obstacle.
In the case of a solid obstacle one has little control over the
shear layer at high $Re$ because there is only one parameter, i.e.,
$Re$. Also, the solid obstacle is impenetrable and cannot adjust
itself to the flow. In the case of the magnetic obstacle one has
an opportunity to govern the detachment process because there are
two additional parameters at a given $Re$: the $Ha$ number, for
instance, the magnetic field intensity controlling the degree of
permeability of the magnetic obstacle; and the degree of lateral
heterogeneity of the external magnetic field controlling the width
of the lateral shear layer. Thus, a magnetic obstacle with controlled
intensity and shape is a model laboratory for producing and studying
vorticity generation and turbulent phenomena.
\section{Conclusions.}
3D numerical simulations are reported for a liquid metal flow
subject to a strong external heterogeneous magnetic field. The
simulations shed light on the process of formation of the core of
the magnetic obstacle when the interaction parameter $N$ is large.
The core is surrounded by deformed magnetic vortices located in
the shear layer. Inside the core, there is no mass or electric charge
transfer, i.e., at high $N$, the magnetic obstacle is analogous to
a solid hydrodynamical obstacle.
The series of 2D simulations for a creeping MHD flow demonstrated
that there still exists a weak recirculation in the stagnant core
in 2D cases even at very high $Ha$. The flow regime can be
represented as an elongated recirculation composed of an even
number of sign-alternating vortices aligned along the line crossing
the center of the magnetic gap in the spanwise direction. The
intensity of the vortices vanishes on approaching the core of the
magnetic obstacle; therefore, the principal conclusion derived
from 3D results remains valid: there develops a frozen core of the
magnetic obstacle when $Ha$ grows toward infinity. By studying
the characteristics of magnetic obstacles in laminar flow, we have
been able to provide conjectures on the formation and destruction
of magnetic obstacles in turbulent flow.
\section{ACKNOWLEDGEMENTS}
This work has been performed under the UCY-CompSci project, a
Marie Curie Transfer of Knowledge (TOK-DEV) grant (Contract No.
MTKD-CT-2004-014199). This work was also partially funded under a
Center of Excellence grant from the Norwegian Research Council to
the Center of Biomedical Computing.
The Ginzburg-Landau (GL) theory of superconductivity, originating in~\cite{GL}, provides a phenomenological, macroscopic, description of the response of a superconductor to an applied magnetic field. Several years after it was introduced, it turned out that it could be derived from the microscopic BCS theory~\cite{BCS,Gor} and should thus be seen as a mean-field/semiclassical approximation of many-body quantum mechanics. A mathematically rigorous derivation starting from BCS theory has been provided recently~\cite{FHSS}.
Within GL theory, the state of a superconductor is described by an order parameter $\Psi:~\mathbb{R} ^2\to\mathbb{C}$ and an induced magnetic vector potential $\kappa \sigma \mathbf{A}:\mathbb{R} ^2 \to \mathbb{R} ^2 $ generating an induced magnetic field
$$h=\kappa \sigma \: \mbox{curl} \, \mathbf{A}.$$
The ground state of the theory is found by minimizing the energy functional\footnote{Here we use the units of~\cite{FH-book}, other choices are possible, see, e.g., \cite{SS2}.}
\begin{equation}\label{eq:gl func}
\mathcal{G}_{\kappa,\sigma}^{\mathrm{GL}}[\Psi,\mathbf{A}] = \int_{\Omega} \mathrm{d} \mathbf{r} \: \left\{ \left| \left( \nabla + i \kappa\sigma \mathbf{A} \right) \Psi \right|^2 - \kappa^2 |\Psi|^2 + \mbox{$\frac{1}{2}$} \kappa^2 |\Psi|^4 + \left(\kappa \sigma \right)^2 \left| \mbox{curl} \mathbf{A} - 1 \right|^2 \right\},
\end{equation}
where $ \kappa >0 $ is a physical parameter (penetration depth) characteristic of the material, and $\kappa \sigma $ measures the intensity of the external magnetic field, that we assume to be constant throughout the sample. We consider a model for an infinitely long cylinder of cross-section $\Omega \subset \mathbb{R} ^2$, a compact simply connected set with regular boundary.
Note the invariance of the functional under the gauge transformation
\begin{equation}\label{eq:gauge inv}
\Psi \to \Psi e^{-i \kappa \sigma \varphi}, \qquad \mathbf{A} \to \mathbf{A} + \nabla \varphi,
\end{equation}
which implies that the only physically relevant quantities are the gauge invariant ones such as the induced magnetic field $h$ and the density $|\Psi| ^2$. The latter gives the local relative density of electrons bound in Cooper pairs. It is well-known that a minimizing $\Psi$ must satisfy $|\Psi| ^2 \leq 1$. A~value $|\Psi| = 1$ (respectively, $|\Psi| = 0$) corresponds to the superconducting (respectively, normal) phase where all (respectively, none) of the electrons form Cooper pairs. The perfectly superconducting state with $|\Psi| = 1$ everywhere is an approximate ground state of the functional for small applied field and the normal state where $\Psi$ vanishes identically is the ground state for large magnetic field. In between these two extremes, different mixed phases can occur, with normal and superconducting regions varying in proportion and organization.
A vast mathematical literature has been devoted to the study of these mixed phases in type-II superconductors (characterized by $\kappa > 1/\sqrt{2}$), in particular in the limit $\kappa \to \infty$ (extreme type-II). Reviews and extensive lists of references may be found in~\cite{FH-book,SS2,Sig}. Two main phenomena attracted much attention:
\begin{itemize}
\item The formation of hexagonal vortex lattices when the applied magnetic field varies between the first and second critical field, first predicted by Abrikosov~\cite{Abr}, and later experimentally observed (see, e.g., \cite{Hetal}). In this phase, vortices (zeros of the order parameter with quantized phase circulation) sit in small normal regions included in the superconducting phase and form regular patterns.
\item The occurrence of a surface superconductivity regime when the applied magnetic fields varies between the second and third critical fields. In this case, superconductivity is completely destroyed in the bulk of the sample and survives only at the boundary, as predicted in~\cite{SJdG}. We refer to~\cite{NSG} for experimental observations.
\end{itemize}
We refer to~\cite{CR2} for a more thorough discussion of the context. We shall be concerned with the surface superconductivity regime, which in the above units translates into the assumption
\begin{equation}
\label{eq:external field}
\sigma = b \kappa
\end{equation}
for some fixed parameter $b$ satisfying the conditions
\begin{equation}
\label{eq:b condition}
1 < b < \Theta_0^{-1}
\end{equation}
where $\Theta_0$ is a spectral parameter (minimal ground state energy of the shifted harmonic oscillator on the half-line, see~\cite[Chapter 3]{FH-book}):
\begin{equation}\label{eq:theo}
\Theta_0 := \inf_{\alpha \in \mathbb{R}} \inf \left\{ \int_{\mathbb{R} ^+} \mathrm{d} t \left( |\partial_t u| ^2 + (t+\alpha) ^2 |u| ^2 \right), \: \left\Vert u \right\Vert_{L^2 (\mathbb{R} ^+)} = 1 \right\}.
\end{equation}
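The constant $\Theta_0 \simeq 0.59$, with optimal $\alpha_0 = -\sqrt{\Theta_0}$, can easily be evaluated numerically. Purely as an illustration (this plays no role in our proofs; the truncation length, grid and scanning range below are ad hoc choices), one may discretize the quadratic form in~\eqref{eq:theo} by finite differences, with the Neumann condition at the origin arising as the natural boundary condition, and scan over $\alpha$:
\begin{verbatim}
# Illustrative computation of Theta_0 (not used in the proofs).
import numpy as np
from scipy.linalg import eigh_tridiagonal

T, M = 12.0, 2400          # truncated half-line [0, T], grid points
t = np.linspace(0.0, T, M)
dt = t[1] - t[0]

def mu(alpha):
    """Lowest eigenvalue of -d^2/dt^2 + (t+alpha)^2 on [0, T],
    Neumann at t = 0 (natural BC), Dirichlet at t = T."""
    diag = 2.0/dt**2 + (t + alpha)**2
    diag[0] = 1.0/dt**2 + (t[0] + alpha)**2   # Neumann corner entry
    off = -np.ones(M - 1)/dt**2
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select="i", select_range=(0, 0))[0]

alphas = np.linspace(-1.2, -0.4, 81)
vals = [mu(a) for a in alphas]
i = int(np.argmin(vals))
print("Theta_0 ~", vals[i], "at alpha ~", alphas[i])
# Known values: Theta_0 = 0.5901..., alpha_0 = -sqrt(Theta_0) = -0.768...
\end{verbatim}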
From now on we introduce more convenient units to deal with the surface superconductivity phenomenon: we define the small parameter
\begin{equation}
\label{eq:eps}
\varepsilon = \frac{1}{\sqrt{\sigma \kappa}} = \frac{1}{b^{1/2}\kappa} \ll 1
\end{equation}
and study the asymptotics $ \varepsilon \to 0 $ of the minimization of the functional~\eqref{eq:gl func}, which in the new units reads
\begin{equation}\label{eq:GL func eps}
\mathcal{G}_{\eps}^{\mathrm{GL}}[\Psi,\mathbf{A}] = \int_{\Omega} \mathrm{d} \mathbf{r} \: \bigg\{ \bigg| \bigg( \nabla + i \frac{\mathbf{A}}{\varepsilon^2} \bigg) \Psi \bigg|^2 - \frac{1}{2 b \varepsilon^2} \left( 2|\Psi|^2 - |\Psi|^4 \right) + \frac{b}{\varepsilon^4} \left| \mbox{curl} \mathbf{A} - 1 \right|^2 \bigg\}.
\end{equation}
We shall denote
\begin{equation}
E_{\eps}^{\mathrm{GL}} : = \min_{(\Psi, \mathbf{A}) \in \mathscr{D}^{\mathrm{GL}}} \mathcal{G}_{\eps}^{\mathrm{GL}}[\Psi,\mathbf{A}],
\end{equation}
with
\begin{equation}
\mathscr{D}^{\mathrm{GL}} : = \left\{ (\Psi,\mathbf{A}) \in H^1(\Omega;\mathbb{C}) \times H^1(\Omega;\mathbb{R}^2) \right\},
\end{equation}
and denote by $ (\Psi^{\mathrm{GL}},\mathbf{A}^{\mathrm{GL}}) $ a minimizing pair (known to exist by standard methods~\cite{FH-book,SS2}).
\medskip
The salient features of the surface superconductivity phase are as follows:
\begin{itemize}
\item The GL order parameter is concentrated in a thin boundary layer of thickness $\sim \varepsilon = (\kappa \sigma) ^{-1/2}$. It decays exponentially to zero as a function of the distance from the boundary.
\item The applied magnetic field is very close to the induced magnetic field, $\mbox{curl} \, \mathbf{A} \approx 1$.
\item Up to an appropriate choice of gauge and a mapping to boundary coordinates, the ground state of the theory is essentially governed by the minimization of a 1D energy functional in the direction perpendicular to the boundary.
\end{itemize}
A review of rigorous statements corresponding to these physical facts may be found in~\cite{FH-book}. One of their consequences is the energy asymptotics
\begin{equation}
\label{eq:FH energy asympt 2}
E_{\eps}^{\mathrm{GL}} = \frac{|\partial \Omega| E ^{\rm 1D}_{0}}{\varepsilon} + \mathcal{O}(1),
\end{equation}
where $|\partial\Omega|$ is the length of the boundary of $\Omega$, and $E ^{\rm 1D}_{0}$ is obtained by minimizing the functional
\begin{equation}\label{eq:intro 1D func}
\E^{\mathrm{1D}}_{0,\alpha}[f] : = \int_0^{+\infty} \mathrm{d} t \left\{ \left| \partial_t f \right|^2 + (t + \alpha )^2 f^2 - \frac{1}{2b} \left(2 f^2 - f^4 \right) \right\},
\end{equation}
both with respect to the function $f$ and the real number $\alpha$. We proved recently~\cite{CR2} that~\eqref{eq:FH energy asympt 2} holds in the full surface superconductivity regime, i.e. for $1<b<\Theta_0 ^{-1}$. This followed a series of partial results due to several authors~\cite{Alm,AH,FH1,FH2,FHP,LP,Pan}, summarized in~\cite[Theorem 14.1.1]{FH-book}. Some of these also concern the limiting regime $b \nearrow \Theta_0 ^{-1}$. The other limiting case $b\searrow 1$ where the transition from boundary to bulk behavior occurs is studied in~\cite{FK,Kac}, whereas results in the regime $b\nearrow 1$ may be found in~\cite{AS,Alm2,SS1}.
The rationale behind~\eqref{eq:FH energy asympt 2} is that, up to a suitable choice of gauge, any minimizing order parameter $\Psi^{\mathrm{GL}}$ for~\eqref{eq:gl func} has the structure
\begin{equation}\label{eq:GLm structure formal}
\Psi^{\mathrm{GL}}(\mathbf{r}) \approx f_0 \left(\textstyle \frac{\tau}{\varepsilon} \right) \exp \left( - i \alpha_0 \textstyle \frac{s}{\varepsilon}\right) \exp \left\{ i \phi_{\varepsilon}(s,t) \right\}
\end{equation}
where $(f_0,\alpha_0)$ is a minimizing pair for~\eqref{eq:intro 1D func}, $(s,\tau)= $ (tangent coordinate, normal coordinate) are boundary coordinates defined in a tubular neighborhood of $\partial \Omega$ with $ \tau = \mathrm{dist}(\mathbf{r},\partial \Omega) $ for any point $ \mathbf{r} $ there and $ \phi_{\varepsilon} $ is a gauge phase factor (see \eqref{eq: gauge phase}), which plays a role in the change to boundary coordinates. Results in the direction of~\eqref{eq:GLm structure formal} may be found in the {following references:}
\begin{itemize}
\item {\cite{Pan} contains a result of uniform distribution of the energy density at the domain's boundary for any $ 1 \leq b < \Theta_0^{-1} $};
\item {\cite{FH1} gives fine energy estimates compatible with~\eqref{eq:GLm structure formal} when $ b \nearrow \Theta_0^{-1} $};
\item {\cite{AH} and then~\cite{FHP} prove that~\eqref{eq:GLm structure formal} holds at the level of the density, in the $L ^2$ sense, for $1.25\leq b < \Theta_0 ^{-1}$;}
\item {\cite{FK} and then~\cite{Kac} investigate the concentration of the energy density when $b$ is close to $1$;}
\item {\cite{FKP} contains results about the energy concentration phenomenon in the 3D case.}
\end{itemize}
In~\cite[Theorem~2.1]{CR2} we proved that
\begin{equation}\label{eq:recall density generic}
\left\Vert |\Psi^{\mathrm{GL}}| ^2 - f_0 ^2 \left(\textstyle\frac{\tau}{\varepsilon} \right) \right\Vert_{L ^2 (\Omega)} \leq C \varepsilon \ll \left\Vert f_0 ^2 \left(\textstyle\frac{\tau}{\varepsilon}\right) \right\Vert_{L ^2 (\Omega)}
\end{equation}
for any $1<b<\Theta_0 ^{-1}$ in the limit $\varepsilon \to 0$. A very natural question is whether the above estimate may be improved to a uniform control (in $L ^{\infty}$ norm) of the local discrepancy between the modulus of the true GL minimizer and the simplified normal profile $f_0 \left(\textstyle\frac{\tau}{\varepsilon} \right)$. Indeed,~\eqref{eq:recall density generic} is still compatible with the vanishing of $\Psi^{\mathrm{GL}}$ in small regions, e.g., vortices, inside of the boundary layer. Proving that such local deviations from the normal profile do not occur would explain the observed uniformity of the surface superconducting layer (see again~\cite{NSG} for experimental pictures). Interest in this problem (stated as Open Problem number 4 in the list in \cite[Page 267]{FH-book}) originates from a conjecture of X.B. Pan~\cite[Conjecture 1]{Pan} and an affirmative solution has been provided in~\cite{CR2} for the particular case of a disc sample. The purpose of this paper is to extend the result to general samples with regular
boundary (the case with corners is known to require a different analysis~\cite[Chapter 15]{FH-book}).
Local variations (on a scale $\mathcal{O} (\varepsilon)$) in the tangential variable are compatible with the energy estimate~\eqref{eq:FH energy asympt 2}, and thus the uniform estimate obtained for disc samples in~\cite{CR2} is based on an expansion of the energy to the next order:
\begin{equation}
\label{eq:recall energy disc}
E_{\eps}^{\mathrm{GL}} = \frac{2\pi E ^{\rm 1D}_{\star} (k)}{\varepsilon} + \mathcal{O}(\varepsilon |\log\varepsilon|),
\end{equation}
where $E ^{\rm 1D}_{\star} (k)$ is the minimum (with respect to both the real number $\alpha$ and the function $f$) of the $\varepsilon$-dependent functional
\begin{equation}
\label{eq:intro 1D func disc}
\E^{\mathrm{1D}}_{k,\alpha}[f] : = \int_0^{c_0|\log\varepsilon|} \mathrm{d} t \: (1-\varepsilon k t )\left\{ \left| \partial_t f \right|^2 + \frac{(t + \alpha - \frac12 \varepsilon k t ^2 )^2}{(1-\varepsilon k t ) ^2} f^2 - \frac{1}{2b} \left(2 f^2 - f^4 \right) \right\},
\end{equation}
where the constant $ c_0 $ has to be chosen large enough and $k=R ^{-1}$ is the curvature of the disc under consideration, whose radius we denote by $R$. Of course, \eqref{eq:intro 1D func} is simply the above functional where one sets $k=0$, $\varepsilon=0$, which amounts to neglecting the curvature of the boundary. When the curvature is constant,~\eqref{eq:recall energy disc} in fact follows from a next order expansion of the GL order parameter beyond~\eqref{eq:GLm structure formal}:
\begin{equation}\label{eq:GLm structure formal disc}
\Psi^{\mathrm{GL}}(\mathbf{r}) \approx f_{k} \left(\textstyle \frac{\tau}{\varepsilon} \right) \exp \left( - i \alpha(k) \textstyle \frac{s}{\varepsilon}\right) \exp \left\{ i \phi_{\varepsilon}(s,t) \right\}
\end{equation}
where $(\alpha(k),f_{k})$ is a minimizing pair for~\eqref{eq:intro 1D func disc}. Note that for any fixed $k$
\begin{equation}
\label{eq:point est 0 profile}
f_{k} = f_0 (1+\mathcal{O}(\varepsilon)),\qquad \alpha(k) = \alpha_0 (1+\mathcal{O}(\varepsilon)),
\end{equation}
so that~\eqref{eq:GLm structure formal disc} is a slight refinement of~\eqref{eq:GLm structure formal} but the $\mathcal{O}(\varepsilon)$ correction corresponds to a contribution of order $1$ beyond~\eqref{eq:FH energy asympt 2} in~\eqref{eq:recall energy disc}, which turns out to be the order that controls local density variations.
As suggested by the previous results in the disc case, the corrections to the energy asymptotics~\eqref{eq:FH energy asympt 2} must be curvature-dependent. The case of a general sample where the curvature of the boundary is not constant is then obviously harder to treat than the case of a disc, where one obtains~\eqref{eq:recall energy disc} by a simple variant of the proof of~\eqref{eq:FH energy asympt 2}, as explained in our previous paper~\cite{CR2}.
In fact, we shall obtain below the desired uniformity result for the order parameter in general domains as a corollary of the energy expansion ($\gamma$ is a fixed constant)
\begin{equation}\label{eq:intro energy GL}
\boxed{E_{\eps}^{\mathrm{GL}} = \frac{1}{\varepsilon} \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) + \mathcal{O} (\varepsilon |\log \varepsilon| ^\gamma) }
\end{equation}
where the integral runs over the boundary of the sample, $k(s)$ being the curvature of the boundary as a function of the tangential coordinate $s$. Just as the particular case~\eqref{eq:recall energy disc},~\eqref{eq:intro energy GL} contains the leading order~\eqref{eq:FH energy asympt 2}, but $\mathcal{O}(1)$ corrections are also evaluated precisely. As suggested by the energy formula, the GL order parameter has in fact small but fast variations in the tangential variable which contribute to the subleading order of the energy. More precisely, one should think of the order parameter as having the approximate form
\begin{equation}\label{eq:intro GLm formal refined}
\boxed{\Psi^{\mathrm{GL}}(\mathbf{r}) = \Psi^{\mathrm{GL}} (s,\tau) \approx f_{k(s)} \left(\textstyle \frac{\tau}{\varepsilon} \right) \exp \left( - i \alpha (k(s)) \textstyle \frac{s}{\varepsilon}\right) \exp\left\{i \phi_{\varepsilon}(s,t) \right\} }
\end{equation}
with $f_{k(s)},\alpha(k(s))$ a minimizing pair for the energy functional~\eqref{eq:intro 1D func disc} at curvature $k = k(s)$. The main difficulty we encounter in the present paper is to precisely capture the subtle curvature-dependent variations encoded in~\eqref{eq:intro GLm formal refined}. What our new result~\eqref{eq:intro GLm formal refined} (we give a rigorous statement below) shows is that curvature-dependent deviations to~\eqref{eq:GLm structure formal} do exist but are of limited amplitude and can be completely understood via the minimization of the family of 1D functionals~\eqref{eq:intro 1D func disc}. A crucial input of our analysis is therefore a detailed inspection of the $k$-dependence of the ground state of~\eqref{eq:intro 1D func disc}.
We can deduce from~\eqref{eq:intro energy GL} a uniform density estimate settling the general case of~\cite[Conjecture 1]{Pan} and~\cite[Open Problem 4, page 267]{FH-book}. We believe that the energy estimate~\eqref{eq:intro energy GL} is of independent interest since it helps in clarifying the role of domain curvature in surface superconductivity physics. It was previously known (see \cite[Chapters 8 and 13]{FH-book} and references therein) that corrections to the value of the third critical field depend on the domain's curvature, but applications of these results are limited to the regime where $b\to \Theta_0 ^{-1}$ when $\varepsilon \to 0$. The present paper seems to contain the first results indicating the role of the curvature in the regime~$1<b<\Theta_0 ^{-1}$. This role may seem rather limited since it only concerns the second order in the energy asymptotics but it is in fact crucial in controlling local variations of the order parameter and in allowing us to prove a strong form of uniformity for the surface
superconductivity layer.
\medskip
Our main results are rigorously stated and further discussed in Section~\ref{sec:main results}, their proofs occupy the rest of the paper. Some material from~\cite{CR2} is recalled in Appendix~\ref{sec:app} for convenience.
\medskip
\noindent\textbf{Notation.} In the whole paper, $C$ denotes a generic fixed positive constant independent of $\varepsilon$ whose value changes from formula to formula. A $\mathcal{O} (\delta)$ is always meant to be a quantity whose absolute value is bounded by $\delta = \delta (\varepsilon)$ in the limit $\varepsilon \to 0$. We use $\mathcal{O} (\varepsilon ^{\infty})$ to denote a quantity (like $\exp(- \varepsilon ^{-1})$) going to $0$ faster than any power of $\varepsilon$ and $|\log \eps| ^{\infty}$ to denote $|\log \varepsilon|^a$ where $a>0$ is some unspecified, fixed but possibly large constant. Such quantities will always appear multiplied by a power of $\varepsilon$, e.g., $\varepsilon |\log \eps| ^{\infty}$ which is a $\mathcal{O} (\varepsilon ^{1-c})$ for any $0<c<1$, and hence we usually do not specify the precise power $a$.
\medskip
\noindent\textbf{Acknowledgments.} M.C. acknowledges the support of MIUR through the FIR grant 2013 ``Condensed Matter in Mathematical Physics (Cond-Math)'' (code RBFR13WAET). N.R. acknowledges the support of the ANR project Mathostaq (ANR-13-JS01-0005-01). We also acknowledge the hospitality of the \emph{Institut Henri Poincar\'e}, Paris. {We are indebted to one of the anonymous referees for the content of Remarks~ 2.\ref{rem:b1} and 2.\ref{rem:curvature}.}
\section{Main Results}\label{sec:main results}
\subsection{Statements}\label{sec:statements}
We first state the refined energy and density estimates that reveal the contributions of the domain's boundary. As suggested by~\eqref{eq:intro GLm formal refined}, we now introduce a reference profile that includes these variations. A piecewise constant function in the tangential direction is sufficient for our purpose and we thus first introduce a decomposition of the superconducting boundary layer that will be used in all the paper. The thickness of this layer in the normal direction should roughly be of order $\varepsilon$, but to fully capture the phenomenon at hand we need to consider a layer of size $c_0 \varepsilon |\log \varepsilon|$ where $c_0$ is a fixed, large enough constant. By a passage to boundary coordinates and dilation of the normal variable on scale~$\varepsilon$ (see~\cite[Appendix F]{FH-book} or Section~\ref{sec:up bound} below), the surface superconducting layer
\begin{equation}
\label{eq:intro ann}
\tilde{\mathcal{A}}_{\eps} : = \left\{ \mathbf{r} \in \Omega \: | \: \tau \leq c_0 \varepsilon |\log\varepsilon| \right\},
\end{equation}
where
\begin{equation}
\tau : = \mathrm{dist}(\mathbf{r}, \partial \Omega),
\end{equation}
can be mapped to
\begin{equation}\label{eq:intro def ann rescale}
\mathcal{A}_{\eps}:= \left\{ (s,t) \in \left[0, |\partial \Omega| \right] \times \left[0,c_0 |\log\varepsilon|\right] \right\}.
\end{equation}
We split this domain into $ N_{\eps} = \mathcal{O}(\varepsilon ^{-1})$ rectangular cells $ \{ \mathcal{C}_n \}_{n=1, \ldots, N_{\eps}}$ of constant side length $ \ell_{\eps} \propto \varepsilon$ in the $s$ direction. We denote $s_n $, $s_{n+1} = s_n + \ell_{\eps} $ the $s$ coordinates of the boundaries of the cell $ \mathcal{C}_n $:
$$ \mathcal{C}_n = [s_n,s_{n+1}] \times [0,c_0 |\log \varepsilon|]$$
and we may clearly choose
$$ \ell_{\eps} = \varepsilon |\partial \Omega|\left(1 + \mathcal{O}(\varepsilon)\right)$$
for definiteness. We will approximate the curvature $k(s)$ by its mean value $k_n$ in each cell:
$$ k_n := \ell_{\eps}^{-1} \int_{s_n} ^{s_{n+1}} \mathrm{d} s \, k(s).$$
We also denote
$$ f_n := f_{k_n}, \qquad \alpha_n := \alpha(k_n)$$
respectively the optimal profile and phase associated to $k_n$, obtained by minimizing~\eqref{eq:intro 1D func disc} first with respect to\footnote{We are free to impose $f_n \geq 0$, which we always do in the sequel.} $f$ and then to $\alpha$.
The reference profile is then the piecewise continuous function
\begin{equation}\label{eq:ref profile}
g_{\rm ref} (s,t) := f_n (t), \qquad \mbox{for } s\in [s_n,s_{n+1}] \mbox{ and } (s,t) \in \mathcal{A}_{\eps},
\end{equation}
that can be extended to the whole domain $ \Omega $ by setting it equal to $ 0 $ for $ \mathrm{dist}(\mathbf{r}, \partial \Omega) \geq c_0 \varepsilon |\log\varepsilon| $. We compare the density of the full GL order parameter to $g_{\rm ref} $ in the next theorem. Note that because of the gauge invariance of the energy functional, the phase of the order parameter is not an observable quantity, so the next statement is only about the density $|\Psi^{\mathrm{GL}}| ^2$.
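Before stating our main estimates, we note, purely for illustration (this plays no role in the proofs), that the construction of the cell-averaged curvatures $k_n$ is easily mimicked numerically. In the sketch below the sample shape is an ellipse, an assumption made only for the example; the profiles $ f_n $ and phases $ \alpha_n $ would then be obtained by minimizing~\eqref{eq:intro 1D func disc} at $ k = k_n $, e.g., by a finite-difference scheme in the spirit of the one sketched after~\eqref{eq:theo}:
\begin{verbatim}
# Piecewise-constant curvature data k_n on cells of width ~ eps
# (illustrative; the ellipse and the parameters are assumptions).
import numpy as np

eps = 0.02
a_ax, b_ax = 2.0, 1.0                   # ellipse semi-axes

theta = np.linspace(0.0, 2*np.pi, 20001)
dx, dy = -a_ax*np.sin(theta), b_ax*np.cos(theta)      # r'(theta)
ddx, ddy = -a_ax*np.cos(theta), -b_ax*np.sin(theta)   # r''(theta)
speed = np.hypot(dx, dy)
k = (dx*ddy - dy*ddx)/speed**3                        # curvature k(s)
s = np.concatenate([[0.0],
                    np.cumsum(0.5*(speed[1:] + speed[:-1])
                              * np.diff(theta))])     # arc length
perimeter = s[-1]

N_cells = int(np.ceil(1.0/eps))         # ~ 1/eps cells
edges = np.linspace(0.0, perimeter, N_cells + 1)
k_n = np.array([k[(s >= edges[n]) & (s < edges[n+1])].mean()
                for n in range(N_cells)])
print(N_cells, "cells; k_n ranges over", (k_n.min(), k_n.max()))
\end{verbatim}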
\begin{teo}[\textbf{Refined energy and density asymptotics}]\label{theo:energy}\mbox{}\\
Let $\Omega\subset \mathbb{R} ^2$ be any smooth, bounded and simply connected domain. For any fixed $1<b<\Theta_0 ^{-1}$, in the limit $ \varepsilon \to 0$, it holds
\begin{equation}\label{eq:energy GL}
E_{\eps}^{\mathrm{GL}} = \frac{1}{\varepsilon} \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) + \mathcal{O} (\varepsilon |\log \eps| ^{\infty}).
\end{equation}
and
\begin{equation}\label{eq:main density}
\left\Vert |\Psi^{\mathrm{GL}}| ^2 - g_{\rm ref} ^2 \left(s,\varepsilon ^{-1} \tau \right) \right\Vert_{L ^2 (\Omega)} = \mathcal{O}(\varepsilon ^{3/2} |\log \eps| ^{\infty}) \ll \left\Vert g_{\rm ref} ^2 \left(s,\varepsilon ^{-1} \tau \right) \right\Vert_{L ^2 (\Omega)}.
\end{equation}
\end{teo}
\begin{rem}[The energy to subleading order]\label{rem:energy sublead}\mbox{}\\
The most precise result prior to the above is~\cite[Theorem 2.1]{CR2} where the leading order is computed and the remainder is shown to be at most of order $1$. Such a result had been obtained before in~\cite{FHP} for a smaller range of parameters, namely for $1.25\leq b <\Theta_0 ^{-1}$, see also~\cite[Chapter~14]{FH-book} and references therein. The above theorem evaluates precisely the $\mathcal{O}(1)$ term, which is better appreciated in light of the following comments:
\begin{enumerate}
\item In the effective 1D functional~\eqref{eq:intro 1D func disc}, the parameter $k$ that corresponds to the local curvature of the sample appears with an $\varepsilon$ prefactor. As a consequence, one may show (see Section~\ref{sec:eff func} below) that for all $s\in [0,|\partial \Omega|]$
\begin{equation}
\label{eq:eone asympt}
E^{\mathrm{1D}}_\star \left(k(s)\right) = E^{\mathrm{1D}}_\star (0) + \mathcal{O}(\varepsilon)
\end{equation}
so that~\eqref{eq:energy GL} contains the previously known results. More generally we prove below that
$$\left|E^{\mathrm{1D}}_\star \left(k(s)\right) - E^{\mathrm{1D}}_\star \left(k(s')\right)\right| \leq C \varepsilon |s-s'| $$
so that $E^{\mathrm{1D}}_\star \left(k(s)\right)$ has variations of order $\varepsilon$ on the scale of the boundary layer. These contribute to a term of order $1$ that is included in~\eqref{eq:energy GL}. Actually one could investigate the asymptotics \eqref{eq:eone asympt} further, aiming at evaluating explicitly the error $ \mathcal{O}(\varepsilon) $ and therefore the curvature contribution to the energy. This would in particular be crucial in the analysis described in Remark 2.\ref{rem:curvature} below, but we do not pursue it here for the sake of brevity.
\item Undoing the mapping to boundary coordinates, one should note that $g_{\rm ref}(s,\varepsilon ^{-1} t)$ has fast variations (at scale $\varepsilon$) in both the $t$ and $s$ directions. The latter are of limited amplitude, however, which explains why they enter the energy only at subleading order, and why a piecewise constant profile is sufficient to capture the physics.
\item We had previously proved the density estimate~\eqref{eq:recall density generic}, which is less precise than~\eqref{eq:main density}. Note in particular that~\eqref{eq:main density} does not hold at this level of precision if one replaces $g_{\rm ref} ^2 \left(s,\varepsilon ^{-1} t\right)$ by the simpler profile $f_0 ^2 (\varepsilon^{-1} t)$.
\item Strictly speaking, the function $ g_{\rm ref} $ is defined only in the boundary layer $ \tilde{\mathcal{A}}_{\eps} $, so that \eqref{eq:main density} should be interpreted as if $ g_{\rm ref} $ vanished outside $ \tilde{\mathcal{A}}_{\eps} $. However, the estimate there is obviously true thanks to the exponential decay of $ \Psi^{\mathrm{GL}} $.
\end{enumerate}
\end{rem}
\begin{rem}[Regime $ b \to 1 $]
\label{rem:b1}
\mbox{} \\
A simple inspection of the proof reveals that some of the crucial estimates still hold true even if $ b \to 1 $, where surface superconductivity is also present (see~\cite{Alm,Pan,FK}). The main reason for assuming $b>1$ is that we rely on some well-known decay estimates for the order parameter (Agmon estimates), which hold only in this case. When $ b \to 1 $ one can indeed find suitable adaptations of those estimates (see, e.g., \cite[Chapter 12]{FH-book}), which however make the analysis much more delicate. In particular, the positivity of the cost function (Lemma \ref{lem:K positive} in Section \ref{sec:app cost}) heavily relies on the assumption $ b > 1 $ and, although it is expected to be true even if $ b \to 1 $, its proof requires some non-trivial modifications. Moreover, while for $ b \geq 1 $ only surface superconductivity is present and our strategy has good chances of working, when $ b \nearrow 1 $, on the contrary, a bulk term appears in the energy asymptotics \cite{FK} and the problem becomes much more subtle.
\end{rem}
We now turn to the uniform density estimates that follow from the above theorem. Here we can be less precise than before. Indeed, as suggested by the previous discussion, a density deviation of order $\varepsilon$ on a length scale of order $\varepsilon$ only produces a $\mathcal{O}(\varepsilon ^2)$ error in the energy. Thus, using~\eqref{eq:energy GL} we may only rule out local variations of a smaller order than the tangential variations included in~\eqref{eq:ref profile}, and for this reason we will compare $|\Psi^{\mathrm{GL}}|$ in $L^{\infty}$ norm only to the simplified profile $f_0 (\varepsilon ^{-1}\tau)$, since by \eqref{eq:point est 0 profile} $ f_0(t) - f_k(t) = \mathcal{O}(\varepsilon) $. Also, the result may be proved only in a region where the density is relatively large\footnote{Recall that it decays exponentially far from the boundary.}, namely in
\begin{equation}
\label{eq:annd}
\mathcal{A}_{\rm bl} : = \left\{ \mathbf{r} \in \Omega \: : \: f_0 \left(\varepsilon ^{-1} \tau \right) \geq \gamma_{\eps} \right\} \subset \left\{ \mathrm{dist}(\mathbf{r},\partial \Omega) \leq \mbox{$\frac{1}{2}$} \varepsilon \sqrt{|\log\gamma_{\eps}|} \right\},
\end{equation}
where $\rm{bl}$ stands for ``boundary layer'' and $ 0 < \gamma_{\eps} \ll 1 $ is any quantity such that
\begin{equation}
\label{eq:game}
\gamma_{\eps} \gg \varepsilon^{1/6}|\log \varepsilon| ^{a},
\end{equation}
where $ a > 0 $ is a suitably large constant related\footnote{Assuming that \eqref{eq:energy GL} holds true with an error of order $ \varepsilon |\log\varepsilon|^{\gamma} $, for some given $ \gamma > 0 $, the constant $ a $ can be any number satisfying $ a > \frac{1}{6} (\gamma+3) $.} to the power of $ |\log\varepsilon| $ appearing in \eqref{eq:energy GL}.
The inclusion in \eqref{eq:annd} follows from \eqref{eq:fal point l u b} below and ensures we are really considering a significant boundary layer: recall that the physically relevant region has a thickness roughly of order $\varepsilon|\log\varepsilon|$.
\begin{teo}[\textbf{Uniform density estimates and Pan's conjecture}]\label{theo:Pan}\mbox{}\\
Under the assumptions of the previous theorem, it holds
\begin{equation}
\label{eq:Pan plus}
\left\| \left|\Psi^{\mathrm{GL}}(\mathbf{r})\right| - f_0 \left(\varepsilon ^{-1} \tau \right) \right\|_{L^{\infty}(\mathcal{A}_{\rm bl})} \leq C \gamma_{\eps}^{-3/2} \varepsilon^{1/4} |\log \eps| ^{\infty} \ll 1.
\end{equation}
In particular for any $ \mathbf{r} \in \partial \Omega $ we have
\begin{equation}\label{eq:Pan}
\left| \left| \Psi^{\mathrm{GL}}(\mathbf{r}) \right| - f_0 (0) \right| \leq C \varepsilon^{1/4} |\log \eps| ^{\infty} \ll 1,
\end{equation}
where $C$ does not depend on $\mathbf{r}$.
\end{teo}
Estimate~\eqref{eq:Pan} solves the original form of Pan's conjecture~\cite[Conjecture 1]{Pan}. In addition, since $f_0$ is strictly positive, the stronger estimate~\eqref{eq:Pan plus} ensures that $\Psi^{\mathrm{GL}}$ does not vanish in the boundary layer~\eqref{eq:annd}. A physical consequence of the theorem is thus that normal inclusions such as vortices in the surface superconductivity phase cannot occur. This is very natural in view of the existing knowledge on type-II superconductors but had not been proved previously.
\medskip
We now return to the question of the phase of the order parameter. Of course, the full phase cannot be estimated because of gauge invariance but gauge invariant quantities linked to the phase can. One such quantity is the winding number (a.k.a. phase circulation or topological degree) of $\Psi^{\mathrm{GL}}$ around the boundary $\partial \Omega$ defined as
\begin{equation}\label{eq:GL degree}
2 \pi \deg\left(\Psi, \partial \Omega\right) : = - i \int_{\partial \Omega} \mathrm{d} s \: \frac{|\Psi|}{\Psi} \partial_{s} \left( \frac{\Psi}{|\Psi|} \right),
\end{equation}
$ \partial_{s} $ standing for the tangential derivative. Theorem~\ref{theo:Pan} ensures that $\deg\left(\Psi^{\mathrm{GL}}, \partial \Omega\right)\in \mathbb{Z}$ is well-defined. Roughly, this quantity measures the number of quantized phase singularities (vortices) that $\Psi^{\mathrm{GL}}$ has inside $\Omega$. Our estimate is as follows:
\begin{teo}[{\bf Winding number of $ \Psi^{\mathrm{GL}} $ on the boundary}]
\label{theo:circulation}
\mbox{} \\
Under the previous assumptions, any GL minimizer $ \Psi^{\mathrm{GL}} $ satisfies
\begin{equation}\label{eq:GL degree result}
\deg\left(\Psi^{\mathrm{GL}}, \partial \Omega\right) = \frac{|\Omega|}{\varepsilon^2} + \frac{|\alpha_0|}{\varepsilon} + \mathcal{O}(\varepsilon^{-3/4}|\log\varepsilon|^{\infty})
\end{equation}
in the limit $\varepsilon \to 0$.
\end{teo}
Note that the remainder term in~\eqref{eq:GL degree result} is much larger than $\varepsilon ^{-1}|\alpha(k) -\alpha_0| = \mathcal{O} (1)$ so that the above result does not allow us to estimate corrections due to curvature. We believe that, just as we had to expand the energy to second order to obtain the refined first-order results of Theorems~\ref{theo:Pan} and~\ref{theo:circulation}, obtaining uniform density estimates and degree estimates at the second order would require expanding the energy to the third order, which goes beyond the scope of the present paper.
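As an aside, the degree~\eqref{eq:GL degree} of a sampled, non-vanishing field along a closed curve is easily evaluated numerically by summing phase increments. In the minimal sketch below the test field is synthetic (a pure phase winding), since no actual GL minimizer is computed in this paper:
\begin{verbatim}
# Winding number of a complex field sampled along a closed contour.
import numpy as np

def winding_number(psi):
    """Degree of a nonvanishing field sampled along a closed curve."""
    ph = np.angle(psi)
    dph = np.diff(np.concatenate([ph, ph[:1]]))       # wrap around
    dph = (dph + np.pi) % (2*np.pi) - np.pi           # branch-cut safe
    return int(round(dph.sum()/(2*np.pi)))

s = np.linspace(0.0, 2*np.pi, 1000, endpoint=False)
d = 7                                                 # prescribed degree
print(winding_number(np.exp(1j*d*s)))                 # prints 7
\end{verbatim}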
We had proved Theorems~\ref{theo:Pan} and~\ref{theo:circulation} before in the particular, significantly easier, case where $\Omega$ is a disc. The next subsection contains a sketch of the proof of the general case, where new ingredients enter, due to the necessity to take into account the non-trivial curvature of the boundary. {Before proceeding, we make a last remark in this direction:}
\begin{rem}[Curvature dependence of the order parameter]\label{rem:curvature}\mbox{}\\
{In view of previous results~\cite{FH1} in the regime $b \nearrow\Theta_0 ^{-1}$, a larger curvature should imply a larger local value of the order parameter. In the regime of interest to this paper, this will only be a subleading order effect, but it would be interesting to capture it by a rigorous asymptotic estimate.}
{It has been proved before~\cite{Pan,FK} that in the surface superconductivity regime~\eqref{eq:b condition}
\begin{equation}\label{eq:curv dep psi}
\frac{1}{b ^{1/2}\varepsilon} |\Psi^{\mathrm{GL}}| ^4 \mathrm{d} \mathbf{r} \underset{\varepsilon \to 0}{\longrightarrow} C(b) \mathrm{d} s(\mathbf{r})
\end{equation}
as measures, with $ \mathrm{d} \mathbf{r} $ the Lebesgue measure and $\mathrm{d} s (\mathbf{r})$ the 1D Hausdorff measure along the boundary. Here $C(b)>0$ is a constant which depends only on $b$. A natural conjecture is that one can derive a result revealing the next-order behavior, of the form
\begin{equation}\label{eq:curv dep psi 2}
\frac{1}{\varepsilon}\left( \frac{1}{b ^{1/2}\varepsilon} |\Psi^{\mathrm{GL}}| ^4 \mathrm{d} \mathbf{r} - C(b) \mathrm{d} s(\mathbf{r})\right) \underset{\varepsilon \to 0}{\longrightarrow} C_2 (b) k(s) \mathrm{d} s(\mathbf{r})
\end{equation}
with $C_2 (b) >0$ depending only on $b$. The form of the right-hand side is motivated by two considerations: }
\begin{itemize}
\item {In view of~\cite{FH1} we should expect that increasing $k$ increases the local value of $|\Psi^{\mathrm{GL}}|$, whence the sign of the correction;}
\item {Since the curvature appears only at subleading order in this regime, perturbation theory suggests that the correction should be linear in the curvature.}
\end{itemize}
{We plan to substantiate this picture further in a later work.}
\end{rem}
\subsection{Sketch of proof}\label{sec:sketch}
In the regime of interest to this paper, the GL order parameter is concentrated along the boundary of the sample and the induced magnetic field is extremely close to the applied one. The tools allowing to prove these facts are well-known and described at length in the monograph~\cite{FH-book}. We shall thus not elaborate on this and the formal considerations presented in this subsection take as starting point the following effective functional
\begin{multline}\label{eq:intro GL func bound}
\mathcal{G}_{\mathcal{A}_{\eps}}[\psi] : = \int_0^{|\partial \Omega|} \mathrm{d} s \int_0^{c_0 |\log\varepsilon|} \mathrm{d} t \left(1 - \varepsilon k(s) t \right) \left\{ \left| \partial_t \psi \right|^2 + \frac{1}{(1- \varepsilon k(s) t)^2} \left| \left( \varepsilon \partial_s + i a_{\varepsilon}(s,t) \right) \psi \right|^2 \right. \\
\left. - \frac{1}{2 b} \left[ 2|\psi|^2 - |\psi|^4 \right] \right\},
\end{multline}
where $(s,t)$ represent boundary coordinates in the original domain $\Omega$, the normal coordinate $t$ having been dilated on scale $\varepsilon$, and $ \psi $ can be thought of as $ \Psi^{\mathrm{GL}}(\mathbf{r}(s,\varepsilon t)) $, i.e., the order parameter restricted to the boundary layer. We denote $k(s)$ the curvature of the original domain and have set
\begin{equation}\label{eq:intro vect pot bound}
a_{\varepsilon}(s,t) : =- t + \mbox{$\frac{1}{2}$} \varepsilon k(s) t^2 + \varepsilon \delta_{\varepsilon} ,
\end{equation}
with
\begin{equation}\label{eq:intro deps}
\delta_{\varepsilon} : = \frac{\gamma_0}{\varepsilon^2} - \left\lfloor \frac{\gamma_0}{\varepsilon^2} \right\rfloor, \qquad \gamma_0 : = \frac{1}{|\partial \Omega|} \int_{\Omega} \mathrm{d} \mathbf{r} \: \mbox{curl} \, \mathbf{A}^{\mathrm{GL}},
\end{equation}
$ \left\lfloor \: \cdot \: \right\rfloor $ standing for the integer part. Note that a specific choice of gauge has been made to obtain~\eqref{eq:intro GL func bound}.
Thanks to the methods exposed in~\cite{FH-book}, one can show that the minimization of the above functional gives the full GL energy in units of $\varepsilon ^{-1}$, up to extremely small remainder terms, provided $c_0$ is chosen large enough. To keep track of the fact that the domain $\mathcal{A}_{\eps} = [0, |\partial \Omega|] \times [ 0, c_0 |\log\varepsilon| ]$ corresponds to the unfolded boundary layer of the original domain and $\psi$ to the GL order parameter in boundary coordinates, one should impose periodicity of $\psi$ in the $s$ direction.
Here we shall informally explain the main steps of the proof that
\begin{equation}\label{eq:intro anne}
G_{\mathcal{A}_{\eps}} = \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) + \mathcal{O} (\varepsilon ^2 |\log \eps| ^{\infty}).
\end{equation}
where $G_{\mathcal{A}_{\eps}}$ is the ground state energy associated to~\eqref{eq:intro GL func bound}.
When $k(s)\equiv k$ is constant (the disc case), one may use the ansatz
\begin{equation}\label{eq:intro ansatz 1D}
\psi (s,t) = f(t) e^{- i \left( \varepsilon ^{-1} \alpha s - \varepsilon \delta_{\varepsilon} s \right) }.
\end{equation}
and recover the functional~\eqref{eq:intro 1D func disc}. It is then shown in~\cite{CR2} that the above ansatz is essentially optimal if one chooses $\alpha = \alpha(k)$ and $f=f_{k}$. An informal sketch of the proof in the case $k=0$ is given in Section~3.2 therein. The main insight in the general case is to realize that the above ansatz stays valid \emph{locally in $s$}. Indeed, since the terms involving $k(s)$ in~\eqref{eq:intro GL func bound} come multiplied by an $\varepsilon$ factor, it is natural to expect variations in $s$ to be weak and the state of the system to be roughly of the form~\eqref{eq:intro GLm formal refined}, directly inspired by~\eqref{eq:intro ansatz 1D}.
As usual the upper and lower bound inequalities in~\eqref{eq:intro anne} are proved separately.
\paragraph*{Upper bound.} To recover the integral in the energy estimate~\eqref{eq:intro anne}, we use a Riemann sum over the cell decomposition $\mathcal{A}_{\eps} = \bigcup_{n=1} ^{N_{\eps}} \mathcal{C}_n $ introduced at the beginning of Section~\ref{sec:statements}. Indeed, as already suggested in~\eqref{eq:ref profile}, a piecewise constant approximation in the $s$-direction will be sufficient. Our trial state roughly has the form
\begin{equation}\label{eq:rough trial state}
\psi (s,t) = f_n (t) e^{- i \left( \varepsilon ^{-1} \alpha_n s - \varepsilon \delta_{\varepsilon} s \right) }, \quad \mbox{ for } s_n\leq s \leq s_{n+1}.
\end{equation}
Of course, we need to make this function continuous to obtain an admissible trial state, and we do so by small local corrections, described in more detail in Section~\ref{sec:trial state}. We may then approximate the curvature by its mean value in each cell, making a relative error of order $\varepsilon ^2$ per cell. Evaluating the energy of the trial state in this way, we obtain an upper bound of the form
\begin{equation}\label{eq:up bound Riemann}
G_{\mathcal{A}_{\eps}} \leq \sum_{n=1} ^{N_{\eps}} |s_{n+1} - s_n|E^{\mathrm{1D}}_\star (k_n) (1 + o(1))+ \mathcal{O} (\varepsilon ^2)
\end{equation}
where the $o(1)$ error is due to the necessary modifications to~\eqref{eq:rough trial state} to make it continuous. The crucial point is to be able to control this error by showing that the modification need not be a large one. This requires a detailed analysis of the $k$-dependence of the relevant quantities $E ^{\rm 1D}_{\star} (k)$, $\alpha(k)$ and $f_{k}$ obtained by minimizing~\eqref{eq:intro 1D func disc}. Indeed, we prove in Section~\ref{sec:eff func} below that
$$ \left| E^{\mathrm{1D}}_{\star} (k) -E^{\mathrm{1D}}_{\star} (k') \right| \leq C \varepsilon|\log \eps| ^{\infty} |k-k'|, \qquad \left|\alpha (k) - \alpha (k') \right| \leq C \varepsilon^{1/2} |\log \eps| ^{\infty} |k-k'|^{1/2} $$
and, in a suitable norm,
$$ f_{k'} = f_{k} + \mathcal{O} \left( \varepsilon^{1/2} |\log \eps| ^{\infty} |k-k'|^{1/2} \right),$$
which will allow us to obtain the desired control of the $o(1)$ in~\eqref{eq:up bound Riemann} and to conclude the proof by a Riemann sum argument.
\paragraph*{Lower bound.} In view of the argument we use for the upper bound, the natural idea to obtain the corresponding lower bound is to use the strategy for the disc case we developed in~\cite{CR2} locally in each cell. In the disc case, a classical method of energy decoupling and Stokes' formula lead to the lower bound\footnote{We simplify the argument for pedagogical purposes.}
\begin{equation}\label{eq:intro low bound disc}
\mathcal{G}_{\mathcal{A}_{\eps}} [\psi] \gtrapprox E ^{\rm 1D}_{\star} (k) + \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k t\right) K_k (t) \left( |\partial_t v| ^2 + \textstyle\frac{\varepsilon ^2}{\left(1 - \varepsilon k t\right) ^2} |\partial_s v| ^2 \right)
\end{equation}
where we have used the strict positivity of $f_{k}$ to write
\begin{equation}\label{eq:intro decouple}
\psi (s,t)= f_{k} (t) e^{- i \left( \varepsilon ^{-1} \alpha(k) s - \varepsilon \delta_{\varepsilon} s \right) } v(s,t)
\end{equation}
and the ``cost function'' is
\begin{align*}
K_k (t) &= f_{k} ^2 (t) + F_k (t),\\
F_k(t) &= 2 \int_0 ^t d\eta \: \frac{\eta + \alpha(k) - \textstyle\frac{1}{2} \varepsilon k \eta ^2}{1-\varepsilon k \eta} f_{k} ^2 (\eta).
\end{align*}
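Note, for later reference, that differentiating the above definition gives
$$ \partial_t F_k (t) = 2 \, \frac{t + \alpha (k) - \frac{1}{2} \varepsilon k t ^2}{1 - \varepsilon k t} \, f_{k} ^2 (t), $$
the identity that allows one to trade the momentum-like terms of the functional for terms involving $F_k$ through an integration by parts; this mechanism is used repeatedly below.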
This method is inspired by our previous works on the related Gross-Pitaevskii theory of rotating Bose-Einstein condensates~\cite{CRY,CPRY1,CPRY2} (informal summaries may be found in~\cite{CPRY3,CPRY4}). Some of the steps leading to~\eqref{eq:intro low bound disc} have also been used before in this context~\cite{AH}. The desired lower bound in the disc case follows from~\eqref{eq:intro low bound disc} and the fact that $K_k$ is essentially {\it positive}\footnote{More precisely it is positive except possibly for large $t$, a region that can be handled using the exponential decay of GL minimizers (Agmon estimates).} for any $k$. This is proved by carefully exploiting special properties of $f_{k}$ and~$\alpha(k)$.
To deal with the general case where the curvature is not constant, we again split the domain $\mathcal{A}_{\eps}$ into small cells, approximate the curvature by a constant in each cell and use the above strategy locally. A serious new difficulty however comes from the use of Stokes' formula in the derivation of~\eqref{eq:intro low bound disc}. We need to reduce the terms produced by Stokes' formula to expressions involving only first order derivatives of the order parameter, using further integration by parts. In the disc case, boundary terms associated with this operation vanish due to the periodicity of $\psi$ in the $s$ variable. When doing the integrations by parts in each cell, using different $f_{k}$ and $\alpha(k)$ in~\eqref{eq:intro decouple}, the boundary terms do not vanish since we artificially introduce some (small) discontinuity by choosing a cell-dependent profile $f_{k_n}$ as reference.
To estimate these boundary terms we proceed as follows: the term at $s=s_{n+1}$, made of one part coming from the cell $ \mathcal{C}_n $ and one from the cell $\mathcal{C}_{n+1}$, is integrated by parts back to become a bulk term in the cell $ \mathcal{C}_n $. In this sketch we ignore a rather large amount of technical complications and state what is essentially the conclusion of this procedure:
\begin{equation}\label{eq:intro low bound gen}
\mathcal{G}_{\mathcal{A}_{\eps}} [\psi] \gtrapprox \sum_{n=1} ^{N_{\eps}} \bigg[ |s_{n+1} - s_n|E^{\mathrm{1D}}_\star (k_n) + \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \left(1 - \varepsilon k_n t\right) \tilde{K}_{n} \bigg( |\partial_t u_n| ^2 + \textstyle\frac{\varepsilon ^2}{\left(1 - \varepsilon k_n t\right) ^2} |\partial_s u_n| ^2 \bigg) \bigg]
\end{equation}
where
\begin{equation}\label{eq:intro decouple bis}
u_n (s,t)= f_{k_n} ^{-1} (t) e^{ i \left( \varepsilon ^{-1} \alpha(k_n) s - \varepsilon \delta_{\varepsilon} s \right) } \psi (s,t)
\end{equation}
and the ``modified cost function'' is
\begin{align*}
\tilde{K}_{n} (s,t) &= K_{k_n} (t) - |\partial_s \chi_n (s)| |I_{n,n+1} (t)| - |\chi_n(s)| \left| \partial_t I_{n,n+1} (t)\right|,\\
I_{n,n+1} (t) &= F_{k_n} (t)- F_{k_{n+1}} (t) \frac{f^2_{k_{n}}(t)}{f^2_{k_{n+1}}(t)},
\end{align*}
and $\chi_n$ is a suitable localization function supported in $\mathcal{C}_n$ with $\chi_n(s_{n+1}) = 1$ that we use to perform the integration by parts in $\mathcal{C}_n$. Note that the dependence of the new cost function on both $k_n$ and $k_{n+1}$ is due to the fact that the original boundary terms at $s_{n+1}$ that we transform into bulk terms in $ \mathcal{C}_n $ involved both $u_n$ and $u_{n+1}$.
The last step is to prove a bound of the form
\begin{equation}\label{eq:intro bound L}
|I_{n,n+1}(t)| + \left| \partial_t I_{n,n+1} (t)\right| \leq C \varepsilon |\log \eps| ^{\infty} f_{k_n} ^2 (t)
\end{equation}
on the ``correction function'' $I_{n,n+1}$, so that
$$ \tilde{K}_{n} (t) \geq \left( 1- C \varepsilon |\log \eps| ^{\infty} \right)f_{k_n} ^2 (t) + F_{k_n} (t).$$
This allows us to conclude that (essentially) $\tilde{K}_{n} \geq 0$ by perturbing the argument applied to $K_{k_n}$ in~\cite{CR2}, which completes the lower bound proof modulo the same Riemann sum argument as in the upper bound part. Note the important fact that the quantity on the l.h.s. of \eqref{eq:intro bound L} is proved to be small \emph{relatively to} $f_{k_n} ^2 (t)$, including in a region where the latter function is exponentially decaying. This bound requires a thorough analysis of auxiliary functions linked to~\eqref{eq:intro 1D func disc} and is in fact a rather strong manifestation of the continuity of this minimization problem as a function of~$k$.
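Schematically, the perturbation argument goes as follows: the analysis of~\cite{CR2} provides, on the relevant region, a bound of the form $K_{k_n} (t) \geq d_{\varepsilon} f_{k_n} ^2 (t)$ with $d_{\varepsilon}$ a suitable inverse power of $|\log \eps|$ (see Step~2 in the proof of Lemma~\ref{lem:1D curv pre} below for a typical use of this fact), whence
$$ \tilde{K}_{n} \geq \left( d_{\varepsilon} - C \varepsilon |\log \eps| ^{\infty} \right) f_{k_n} ^2 \geq 0 $$
for $\varepsilon$ small enough, since $\varepsilon |\log \eps| ^{\infty} \ll d_{\varepsilon}$.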
\medskip
The rest of the paper is organized as follows: Section~\ref{sec:functionals} contains the detailed analysis of the effective, curvature-dependent, 1D problem. The necessary continuity properties as function of the curvature are given in Subsection~\ref{sec:eff func} and the analysis of the associated auxiliary functions in Subsection~\ref{sec:aux func}. The details of the energy upper bound are then presented in Section~\ref{sec:up bound} and the energy lower bound is proved in Section~\ref{sec:low bound}. We deduce our other main results in Section~\ref{sec:density degree}. Appendix~\ref{sec:app} recalls for the convenience of the reader some material from~\cite{CR2} that we use throughout the paper.
\section{Effective Problems and Auxiliary Functions}\label{sec:functionals}
This section is devoted to the analysis of the 1D curvature-dependent reduced functionals whose minimization allows us to reconstruct the leading and sub-leading order of the full GL energy. We shall prove results in two directions:
\begin{itemize}
\item We carefully analyse the dependence of the 1D variational problems on the curvature in Subsection~\ref{sec:eff func}. Our analysis, in particular the estimate of the subleading order of the GL energy, requires some quantitative control on the variations of the optimal 1D energy, phase and density when the curvature parameter is varied, that is, when we move along the boundary layer of the original sample in the tangential direction.
\item In our previous paper~\cite{CR2} we have proved the positivity property of the cost function which is the main ingredient in the proof of the energy lower bound in the case of a disc (constant curvature). As mentioned above, the study of general domains with smooth curvature that we perform here will require estimating more auxiliary functions, which is the subject of Subsection~\ref{sec:aux func}.
\end{itemize}
We shall use as input some key properties of the 1D problem at fixed $k$ that we proved in~\cite{CR2}. These are recalled in Appendix~\ref{sec:app} below for the convenience of the reader.
\subsection{Effective 1D functionals}\label{sec:eff func}
We take for granted the three crucial but standard steps of reduction to the boundary layer, replacement of the vector potential and mapping to boundary coordinates. Our considerations thus start from the following reduced GL functional giving the original energy in units of $\varepsilon ^{-1}$, up to negligible remainders:
\begin{multline}\label{eq:GL func bound}
\mathcal{G}_{\mathcal{A}_{\eps}}[\psi] : = \int_0^{|\partial \Omega|} \mathrm{d} s \int_0^{c_0 |\log\varepsilon|} \mathrm{d} t \left(1 - \varepsilon k(s) t \right) \left\{ \left| \partial_t \psi \right|^2 + \frac{1}{(1- \varepsilon k(s) t)^2} \left| \left( \varepsilon \partial_s + i a_{\varepsilon}(s,t) \right) \psi \right|^2 \right. \\
\left. - \frac{1}{2 b} \left[ 2|\psi|^2 - |\psi|^4 \right] \right\},
\end{multline}
where $k(s)$ is the curvature of the original domain. We have set
\begin{equation}\label{eq:vect pot bound}
a_{\varepsilon}(s,t) : =- t + \mbox{$\frac{1}{2}$} \varepsilon k(s) t^2 + \varepsilon \delta_{\varepsilon} ,
\end{equation}
and
\begin{equation}\label{eq:deps}
\delta_{\varepsilon} : = \frac{\gamma_0}{\varepsilon^2} - \left\lfloor \frac{\gamma_0}{\varepsilon^2} \right\rfloor, \qquad \gamma_0 : = \frac{1}{|\partial \Omega|} \int_{\Omega} \mathrm{d} \mathbf{r} \: \mbox{curl} \, \mathbf{A}^{\mathrm{GL}},
\end{equation}
$ \left\lfloor \: \cdot \: \right\rfloor $ standing for the integer part. The boundary layer in rescaled coordinates is denoted by
\begin{equation}
\label{eq:boundary layer}
\mathcal{A}_{\eps} : = \left\{ \mathbf{r} \in \Omega \: | \: \mathrm{dist}(\mathbf{r}, \partial \Omega) \leq c_0 \varepsilon |\log\varepsilon| \right\}.
\end{equation}
The effective functionals that we shall be concerned with in this section are obtained by computing the energy \eqref{eq:GL func bound} of certain special states. In particular we have to go beyond the simple ans\"atze considered so far in the literature, e.g., in~\cite{FH-book,CR2}, and obtain the following effective energies:
\begin{itemize}
\item {\it 2D functional with definite phase}. Inserting the ansatz
\begin{equation}\label{eq:ansatz 2D}
\psi (s,t) = g(s,t) e^{- i \left( \varepsilon ^{-1} S (s) - \varepsilon \delta_{\varepsilon} s \right) }
\end{equation}
in~\eqref{eq:GL func bound}, with $g$ and $S$ real valued functions playing the role of density and phase respectively, we obtain
\bml{\label{eq:2D func}
\mathcal{E}^{\mathrm{2D}}_S[g] : = \int_{0}^{c_0 |\log\varepsilon|} \mathrm{d} t \int_0^{|\partial \Omega|} \mathrm{d} s \left( 1 - \varepsilon k(s) t \right) \left\{ \left| \partial_t g \right|^2 + \frac{\varepsilon^2}{(1 - \varepsilon k(s) t)^2} \left| \partial_s g \right|^2 \right. \\
+ \frac{\left(t + \partial_s S - \frac{1}{2}\varepsilon t ^2 k (s) \right) ^2}{(1-\varepsilon t k(s)) ^2} g^2 - \frac{1}{2b} \left(2 g^2 - g^4 \right) \bigg\}.
}
In the particular case where $\partial_s S = \alpha \in 2\pi \mathbb{Z}$ we may obtain a simpler functional of the density alone
\bml{\label{eq:2D func bis}
\mathcal{E}^{\mathrm{2D}}_\alpha [g] : = \int_{0}^{c_0 |\log\varepsilon|} \mathrm{d} t \int_0^{|\partial \Omega|} \mathrm{d} s \left( 1 - \varepsilon k(s) t \right) \left\{ \left| \partial_t g \right|^2 + \frac{\varepsilon^2}{(1 - \varepsilon k(s) t)^2} \left| \partial_s g \right|^2 \right. \\
\left. + W_{\alpha}(s,t) g^2 - \frac{1}{2b} \left(2 g^2 - g^4 \right) \right\},
}
where
\begin{equation}\label{eq:pots}
W_{\alpha} (s,t) = \frac{\left(t + \alpha - \frac{1}{2} k (s) \varepsilon t ^2 \right) ^2}{(1- k(s)\varepsilon t) ^2}.
\end{equation}
However, to capture the next-to-leading order of~\eqref{eq:GL func bound}, we do consider a non-constant $\partial_s S$ to accommodate curvature variations, which is in some sense the main novelty of the present paper. In particular,~\eqref{eq:2D func bis} does \emph{not} provide the $ \mathcal{O}(\varepsilon)$ correction to the full GL energy. By contrast, \eqref{eq:2D func} does, once minimized over the phase factor $S$ as well as the density $g$. We will not prove this directly although it follows rather easily from our analysis.
\item {\it 1D functional with given curvature and phase}. If the curvature $k(s)\equiv k$ is constant (the disc case), the minimization of~\eqref{eq:2D func bis} reduces to the 1D problem
\begin{equation}
\label{eq:1D func}
\E^{\mathrm{1D}}_{k,\alpha}[f] : = \int_0^{c_0|\log\varepsilon|} \mathrm{d} t (1-\varepsilon k t )\left\{ \left| \partial_t f \right|^2 + V_{k,\alpha}(t) f^2 - \textstyle\frac{1}{2b} \left(2 f^2 - f^4 \right) \right\},
\end{equation}
with
\begin{equation}
\label{eq:pot}
V_{k,\alpha}(t) : = \frac{(t + \alpha - \frac12 \varepsilon k t ^2 )^2}{(1-\varepsilon k t ) ^2}.
\end{equation}
In the sequel we shall denote
\begin{equation}\label{eq:def interval}
I_{\eps} = \left[0,c_0 |\log \varepsilon|\right] = : [0,t_{\eps}].
\end{equation}
Note that~\eqref{eq:1D func} includes $ \mathcal{O}(\varepsilon)$ corrections due to curvature. As explained above, our approach is to approximate the curvature of the domain by a piecewise constant function and hence an important ingredient is to study the above 1D problem for different values of $k$, and prove some continuity properties when $k$ is varied. For $k=0$ (the half-plane case, sometimes referred to as the half-cylinder case) we recover the familiar
\begin{equation}\label{eq:1D func bis}
\E^{\mathrm{1D}}_{0,\alpha}[f] : = \int_0^{c_0|\log\varepsilon|} \mathrm{d} t \left\{ \left| \partial_t f \right|^2 + (t + \alpha )^2 f^2 - \textstyle\frac{1}{2b} \left(2 f^2 - f^4 \right) \right\},
\end{equation}
which has been known to play a crucial role in surface superconductivity physics for a long time (see~\cite[Chapter 14]{FH-book} and references therein).
\end{itemize}
In this section we provide details about the minimization of~\eqref{eq:1D func} that go beyond our previous study~\cite[Section 3.1]{CR2}. We will use the following notation:
\begin{itemize}
\item Minimizing~\eqref{eq:1D func} with respect to $f$ at fixed $\alpha$ we get a minimizer $f_{k,\alpha}$ and an energy $E^{\mathrm{1D}} (k,\alpha)$.
\item Minimizing the latter with respect to $\alpha$ we get some $\alpha(k)$ and some energy $E^{\mathrm{1D}}_\star (k)$. It follows from~\eqref{eq:vari 1D phase} below that $\alpha(k)$ is uniquely defined.
\item Corresponding to $E^{\mathrm{1D}}_\star (k) : = E^{\mathrm{1D}} (k,\alpha(k)) $ we have an optimal density $f_{k}$, which minimizes $\E^{\mathrm{1D}}_{k,\alpha(k)}$, and a potential
$$ V_{k}(t) : = V_{k,\alpha(k)}(t).$$
\end{itemize}
The following Proposition contains the crucial continuity properties (as a function of $k$) of these objects:
\begin{pro}[\textbf{Dependence on curvature of the 1D minimization problem}]\label{pro:1D curv}\mbox{}\\
Let $k,k' \in \mathbb{R}$ be bounded independently of $\varepsilon$ and let $1<b<\Theta_0 ^{-1}$. Then the following holds:
\begin{equation}\label{eq:vari 1D energy}
\left| E^{\mathrm{1D}}_{\star} (k) -E^{\mathrm{1D}}_{\star} (k') \right| \leq C \varepsilon |k-k'| |\log \eps| ^{\infty}
\end{equation}
and
\begin{equation}\label{eq:vari 1D phase}
|\alpha (k) - \alpha (k')| \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
\end{equation}
Finally, for all $n\in \mathbb{N}$,
\begin{equation}\label{eq:vari 1D opt density}
\left\Vert f_{k} ^{(n)}- f_{k'} ^{(n)} \right\Vert_{L^{\infty} (I_{\eps})} \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
\end{equation}
\end{pro}
We first prove~\eqref{eq:vari 1D energy} and~\eqref{eq:vari 1D phase} and explain that these estimates imply the following lemma:
\begin{lem}[\textbf{Preliminary estimate on density variations}]\label{lem:1D curv pre}\mbox{}\\
Under the assumptions of Proposition~\ref{pro:1D curv} it holds
\begin{equation}\label{eq:vari 1D opt density L 2}
\left\Vert f_{k} ^2 - f_{k'} ^2 \right\Vert_{L ^2 (I_{\eps})} \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
\end{equation}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lem:1D curv pre}] We proceed in three steps:
\paragraph*{Step 1. Energy decoupling.} We use the strict positivity of $f_{k}$ recalled in the appendix to write any function $f$ on $I_{\eps}$ as
$$ f = f_{k} v.$$
We can then use the variational equation \eqref{eq:var eq fal} satisfied by $ f_{k} $ to decouple the $\alpha ', k'$ functional in the usual way, originating in~\cite{LM}. Namely, we integrate by parts and use the fact that $f_{k}$ satisfies Neumann boundary conditions to write
\begin{multline*}
\int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t ) (\partial_t f)^2 = \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t ) \left[v^2 (\partial_t f_{k}) ^2 + f_{k}^2 (\partial_t v) ^2 + 2 f_{k} \partial_t f_{k} v \partial_t v \right]\\
= \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left[ f_{k}^2 (\partial_t v) ^2 + \left( \textstyle\frac{\varepsilon k'}{1-\varepsilon k' t} - \textstyle\frac{\varepsilon k}{1-\varepsilon k t} \right) v^2 f_{k} \partial_t f_{k} - f_{k} ^2 v^2 \left(V_{k} + \textstyle\frac{1}{b} (f_{k} ^2 - 1 )\right)\right].
\end{multline*}
Inserting this into the definition of $\E^{\mathrm{1D}}_{k',\alpha '}$ and using~\eqref{eq:eone explicit}, we obtain for any $f$
\begin{align}\label{eq:decouple 1D}
\E^{\mathrm{1D}}_{k',\alpha'} [f] &= E ^{\rm 1D}_{\star} (k) + \mathcal{F}_{\rm red} [v] \nonumber
\\ &+ \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left(V_{k',\alpha'}(t) - V_{k}(t) \right) f_{k} ^2 v ^2 \nonumber
\\ &+ \frac{1}{b} \varepsilon(k' -k ) \displaystyle\int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: t f_{k} ^4 + \varepsilon \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: \left( k' - k \textstyle\frac{1-\varepsilon k ' t}{1-\varepsilon k t}\right) |v| ^2 f_{k} \partial_t f_{k}
\end{align}
with
\begin{equation}\label{eq:f red 1D}
\mathcal{F}_{\rm red} [v] = \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left\{ f_{k} ^2 (\partial_t v ) ^2 + \textstyle\frac{1}{ 2b} f_{k} ^4 \left( 1 - v^2\right) ^2 \right\}.
\end{equation}
In the case $\alpha'=\alpha(k)$ we can insert the trial state $v\equiv 1$ in the above, which gives
\begin{equation}\label{eq:up bound ener 1D}
E^{\mathrm{1D}}_\star (k') \leq E^{\mathrm{1D}} (k',\alpha(k)) \leq E ^{\rm 1D}_{\star} (k) + C \varepsilon |k-k'||\log \varepsilon| ^{\infty}
\end{equation}
in view of the bounds on $f_{k}$ recalled in Appendix~\ref{sec:app} and the easy estimate
$$ \left| V_{k',\alpha(k)} (t)- V_{k} (t)\right| \leq C \varepsilon |k-k'||\log \varepsilon| ^{\infty}$$
for any $t\in I_{\eps}$. Exchanging the roles of $k$ and $k'$ in \eqref{eq:up bound ener 1D} we obtain the reverse inequality
$$E ^{\rm 1D}_{\star} (k) \leq E^{\mathrm{1D}}_\star (k') + C \varepsilon |k-k'||\log \varepsilon| ^{\infty}, $$
and hence \eqref{eq:vari 1D energy} is proved.
\paragraph*{Step 2. Use of the cost function.}
We now consider the case $\alpha' = \alpha (k'), f=f_{k'}$ and bound from below the term on the second line of~\eqref{eq:decouple 1D}. A simple computation gives
\begin{multline}\label{eq:1D diff pot}
\int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left(V_{k',\alpha(k')} - V_{k,\alpha(k)} \right) f_{k} ^2 v ^2 \\
= \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k t) ^{-1} \left( \alpha(k') - \alpha(k) \right) \left( 2t + \alpha (k) + \alpha(k') - \varepsilon k t ^2 \right) f_{k} ^2 v ^2 + \mathcal{O}(\varepsilon |k-k'|)
\\= \left( \alpha(k') - \alpha(k) \right) ^2 \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) ^{-1} f_{k} ^2 v^2
\\ + 2 ( \alpha(k') - \alpha(k)) \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: \frac{t + \alpha(k) - \frac{1}{2}\varepsilon k t ^2 }{1-\varepsilon k t} f_{k} ^2 v ^2 + \mathcal{O}(\varepsilon |k-k'|).
\end{multline}
We may now follow closely the procedure of~\cite[Section 5.2]{CR2}: with the potential function $F_k$ defined in~\eqref{eq:Fk} below we have
$$2\frac{t + \alpha(k) - \frac{1}{2}\varepsilon k t ^2 }{1-\varepsilon k t} f_{k} ^2 = \partial_t F_k (t)$$
and hence an integration by parts yields (boundary terms vanish thanks to Lemma~\ref{lem:F prop})
\begin{equation}\label{eq:1D mom term}
2 \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: \frac{t + \alpha(k) - \frac{1}{2}\varepsilon k t ^2 }{1-\varepsilon k t} f_{k} ^2 v^2 = - 2 \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: F_k v\partial_t v.
\end{equation}
We now split the integral into one part running from $0$ to $\bar{t}_{k,\eps}$ and a boundary part running from $\bar{t}_{k,\eps}$ to $c_0 |\log \varepsilon|$, where $\bar{t}_{k,\eps}$ is defined in \eqref{eq:annb} and~\eqref{eq:annb bis} below. For the second part, it follows from the decay estimates of Lemma~\ref{lem:point est fal}
that
\begin{equation}\label{eq:1D bound region}
\int_{\bar{t}_{k,\eps}} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: F_k v\partial_t v = \mathcal{O}(\varepsilon ^{\infty}).
\end{equation}
To see this, one can simply adapt the procedure in~\cite[Eqs. (5.21) -- (5.28)]{CR2}. The bound~\eqref{eq:1D bound region} is in fact easier to derive than the corresponding estimate in~\cite{CR2} because the decay estimates in Lemma~\ref{lem:point est fal} are stronger than the Agmon estimates we had to use in that case. Details are thus omitted.
We turn to the main part of the integral~\eqref{eq:1D mom term}, which lives in $[0,\bar{t}_{k,\eps}].$ Since $F_k$ is negative we have, using Lemma~\ref{lem:K positive} and Cauchy-Schwarz,
\begin{multline*}
\bigg| 2 (\alpha(k') - \alpha(k)) \int_{0} ^{\bar{t}_{k,\eps}} \mathrm{d} t \: F_k v\partial_t v \bigg| \\ \leq
(\alpha(k') - \alpha(k)) ^2 \int_{0} ^{\bar{t}_{k,\eps}} \mathrm{d} t \: (1-\varepsilon k' t) ^{-1}\left|F_k\right| v^2 + \int_{0} ^{\bar{t}_{k,\eps}} \mathrm{d} t \: (1-\varepsilon k' t) \left|F_k\right| (\partial_t v )^2
\\\leq (1-d_{\varepsilon}) (\alpha(k') - \alpha(k)) ^2 \int_{0} ^{\bar{t}_{k,\eps}} \mathrm{d} t \: (1-\varepsilon k' t) ^{-1} f_{k} ^2 v^2 + (1-d_{\varepsilon}) \int_{0} ^{\bar{t}_{k,\eps}} \mathrm{d} t (1-\varepsilon k' t) f_{k} ^2 (\partial_t v )^2
\end{multline*}
for any $0< d_{\varepsilon} \leq C |\log \varepsilon| ^{-4}$. Inserting this bound and~\eqref{eq:1D bound region} in~\eqref{eq:decouple 1D}, using~\eqref{eq:1D diff pot} and~\eqref{eq:1D mom term}, yields the lower bound
\begin{align}\label{eq:1D low pre}
E^{\mathrm{1D}}_\star (k') &\geq E ^{\rm 1D}_{\star} (k) + \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon t k') \left\{ d_{\varepsilon} f_{k}^2 (\partial_t v )^2 + d_{\varepsilon} \frac{(\alpha' - \alpha(k)) ^2}{(1-\varepsilon t k') ^2} f_{k} ^2 v^2 + \frac{f_{k} ^4}{ 2b} \left( 1 - v^2\right) ^2 \right\} \nonumber
\\ &+ \varepsilon \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v^2 f_{k} \partial_t f_{k} \left( k' - k\frac{1-\varepsilon tk '}{1-\varepsilon t k}\right) - C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}
\end{align}
where $v = f_{k'}/f_{k}$ and we also used the uniform bound~\eqref{eq:fal estimate} to estimate the fourth term of the r.h.s. of~\eqref{eq:decouple 1D}.
\paragraph*{Step 3. Conclusion.} We still have to bound the first term in the second line of~\eqref{eq:1D low pre}:
\begin{multline*}
\varepsilon \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v^2 f_{k} \partial_t f_{k} \left( k' - k \frac{1-\varepsilon k ' t}{1-\varepsilon k t}\right)
= \frac{1}{2} \left[ v^2 f_{k} ^2 \left( \varepsilon k' - \varepsilon k \frac{1-\varepsilon k' t }{1-\varepsilon k t}\right)\right]_0 ^{c_0 |\log \varepsilon|}
\\+ \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v^2 f_{k} ^2 \frac{\varepsilon k (k'-k)}{(1-\varepsilon k t)^2} - \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v \partial_t v f_{k} ^2 \left( \varepsilon k' - \varepsilon k \frac{1-\varepsilon k' t }{1-\varepsilon k t}\right).
\end{multline*}
The first two terms are both $\mathcal{O} (\varepsilon |k-k'| |\log \varepsilon| ^{\infty})$ thanks to~\eqref{eq:fal estimate} applied to $f_{k'} ^2 = f_{k} ^2 v^2$. For the third one we write
\bmln{
\bigg|\int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v \partial_t v f_{k} ^2 \left( \varepsilon k' - \varepsilon k \frac{1-\varepsilon k' t }{1-\varepsilon k t}\right)\bigg| \leq C \varepsilon |k-k'| |\log \varepsilon| ^{\infty} \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: v |\partial_t v| f_{k} ^2
\\ \leq C \varepsilon |k-k'| |\log \varepsilon| ^{\infty} \bigg[ \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: f_{k} ^2 v^2 + \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: f_{k} ^2 (\partial_t v) ^2 \bigg].
}
Inserting this in~\eqref{eq:1D low pre}, using again~\eqref{eq:fal estimate} and dropping a positive term, we finally get
\begin{align}\label{eq:1D low fin}
E^{\mathrm{1D}}_\star (k') &\geq E ^{\rm 1D}_{\star} (k) + |\log\varepsilon|^{-5} (\alpha(k') - \alpha(k)) ^2 \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) f_{k'} ^2 \nonumber
\\&+ \frac{1}{ 2b} \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left(f_{k} ^2 - f_{k'}^2 \right) ^2 - C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}
\end{align}
where we have chosen $d_{\varepsilon} = |\log \varepsilon | ^{-5}$, which is compatible with the requirement $0 < d_{\varepsilon} \leq C|\log \varepsilon| ^{-4}$. Combining this with the estimate~\eqref{eq:vari 1D energy} proved in Step 1 concludes the proof of~\eqref{eq:vari 1D phase}. To get \eqref{eq:vari 1D opt density L 2} one has to use in addition \eqref{eq:fal point l u b}, which guarantees that under the assumption $1<b<\Theta_0 ^{-1}$
$$ \left\| f_{k'} \right\|_{L^2(I_{\eps})} \geq C > 0 $$
for some constant $C$ independent of $\varepsilon$.
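Explicitly, since both integrals on the r.h.s. of~\eqref{eq:1D low fin} are nonnegative, combining~\eqref{eq:1D low fin} with~\eqref{eq:vari 1D energy} gives
$$ |\log\varepsilon|^{-5} \left(\alpha(k') - \alpha(k)\right) ^2 \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) f_{k'} ^2 + \frac{1}{ 2b} \int_{0} ^{c_0 |\log \varepsilon|} \mathrm{d} t \: (1-\varepsilon k' t) \left(f_{k} ^2 - f_{k'}^2 \right) ^2 \leq C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}, $$
and the two claimed bounds follow by estimating each term separately, the normalization lower bound above being used for the first one.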
\end{proof}
To conclude the proof of Proposition~\ref{pro:1D curv} it only remains to prove~\eqref{eq:vari 1D opt density}. We shall upgrade the estimate~\eqref{eq:vari 1D opt density L 2} to better norms, taking advantage of the 1D nature of the problem and using a standard bootstrap argument.
\begin{proof}[Proof of Proposition~\ref{pro:1D curv}]
We write $f_{k} = f_{k'} + (f_{k}-f_{k'})$ and expand the energy $E ^{\rm 1D}_{\star} (k) = \E ^{\rm 1D}_{k} [f_{k}]$, using the variational equation~\eqref{eq:var eq fal} for $f_{k'}$:
\begin{align*}
E ^{\rm 1D}_{\star} (k) &\geq E^{\mathrm{1D}}_\star (k') + \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t)|\partial_t (f_{k} - f_{k'})| ^2 + \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) V_{k} (f_{k} - f_{k'}) ^2
\\&+ \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) ( V_{k}- V_{k'}) f_{k'} ^2 + 2 \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) f_{k'} (f_{k}-f_{k'}) (V_{k} -V_{k'})
\\&+\frac{1}{2b} \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) \left[ 6 f_{k'} ^2 (f_{k}-f_{k'}) ^2 + 4 f_{k'} (f_{k} - f_{k'}) ^3 + (f_{k} - f_{k'}) ^4 - 2 (f_{k} - f_{k'}) ^2 \right]
\\&-C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}
\end{align*}
where the $\mathcal{O} (\varepsilon |k-k'| |\log \varepsilon| ^{\infty})$ is as before due to the replacement of the curvature $k\leftrightarrow k'$. Using the same procedure to expand $E^{\mathrm{1D}}_\star (k') = \E^{\mathrm{1D}}_{k'} [f_{k'}]$ and combining the result with the above we obtain
\begin{align*}
E ^{\rm 1D}_{\star} (k) &\geq E ^{\rm 1D}_{\star} (k) + 2 \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t)|\partial_t (f_{k} - f_{k'})| ^2 + \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) (V_{k} + V_{k'}) (f_{k} - f_{k'}) ^2
\\&+ \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) ( V_{k}- V_{k'}) (f_{k'} ^2 - f_{k} ^2)
\\&+ 2 \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) ( f_{k'} (f_{k}-f_{k'}) - f_{k} (f_{k'} - f_{k})) (V_{k} -V_{k'})
\\&+\frac{1}{2b} \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) (f_{k}-f_{k'}) ^2 \left[ 4 f_{k'} ^2 + 4 f_{k} ^2 + 4 f_{k'} f_{k} - 4 \right]
\\&- C\varepsilon |k-k'| |\log \varepsilon| ^{\infty}.
\end{align*}
Hence it holds
\begin{align}\label{eq:1D improve bounds}
C \varepsilon |k-k'| |\log \varepsilon| ^{\infty} &\geq 2 \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t)|\partial_t (f_{k} - f_{k'})| ^2 \nonumber
\\&+ \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) ( V_{k}- V_{k'}) (f_{k} ^2 - f_{k'} ^2 ) \nonumber
\\&+ \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) (f_{k}-f_{k'}) ^2 \left[ V_{k} + V_{k'} + \frac{2}{b} \left( f_{k'} ^2 + f_{k} ^2 + f_{k'} f_{k} - 2 \right) \right].
\end{align}
Next we note that thanks to~\eqref{eq:vari 1D phase}
$$ \sup_{I_{\eps}} \left| V_{k}- V_{k'}\right| \leq C \left(|\alpha(k) - \alpha(k')| + \varepsilon |k-k'|\right) |\log \varepsilon| ^{\infty} \leq C \left(\varepsilon |k-k'|\right) ^{1/2} |\log \varepsilon| ^{\infty}$$
as revealed by an easy computation starting from the expression~\eqref{eq:pot}. Thus, using~\eqref{eq:vari 1D opt density L 2} and the Cauchy-Schwarz inequality,
\begin{multline}\label{eq:1D improve diff pot diff f} \left| \int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) ( V_{k}- V_{k'}) (f_{k} ^2 - f_{k'} ^2 )\right|\leq \\ C |\log \varepsilon | ^{1/2} \sup_{I_{\eps}} \left| V_{k}- V_{k'}\right| \left\Vert f_{k} ^2 - f_{k'} ^2 \right\Vert_{L^2 (I_{\eps})} \leq C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}.
\end{multline}
For the term on the third line of~\eqref{eq:1D improve bounds} we notice that, using the growth of the potentials $V_{k}$ and $V_{k'}$ for large $t$, the integrand is positive in
$$\tilde{I}_{\eps}:=\left[c_1(\log |\log \varepsilon|)^{1/2}, c_0 |\log \varepsilon|\right]$$
for any constant $c_1$ and $\varepsilon$ small enough. On the other hand, combining~\eqref{eq:vari 1D opt density L 2} and the pointwise lower bound in~\eqref{eq:fal point l u b} we have
$$ \left\Vert f_{k} - f_{k'} \right\Vert_{L^2 (\tilde{I}_{\eps})}\leq C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \varepsilon| ^{\infty}.$$
Splitting the integral into two pieces we thus have
$$\int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t) (f_{k}-f_{k'}) ^2 \left[ V_{k} + V_{k'} + \textstyle\frac{2}{b} \left( f_{k'} ^2 + f_{k} ^2 + f_{k'} f_{k} - 2 \right) \right] \geq - C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}.$$
Using this and~\eqref{eq:1D improve diff pot diff f} we deduce from~\eqref{eq:1D improve bounds} that
\begin{equation}\label{eq:1D H1 bound}
\int_{I_{\eps}} \mathrm{d} t (1-\varepsilon k t)|\partial_t (f_{k} - f_{k'})| ^2 \leq C \varepsilon |k-k'| |\log \varepsilon| ^{\infty}
\end{equation}
and combining with the previous $L ^2$ bound this gives
$$ \left\Vert f_{k} - f_{k'} \right\Vert_{H^1 (\tilde{I}_{\eps})}\leq C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \varepsilon| ^{\infty}.$$
Since we work on a 1D interval, the Sobolev inequality implies
\begin{equation}\label{eq:1D L inf bound pre} \left\Vert f_{k} - f_{k'} \right\Vert_{L^{\infty} (\tilde{I}_{\eps})}\leq C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \varepsilon| ^{\infty}.
\end{equation}
In particular
$$
\left| f_{k} (c_1(\log |\log \varepsilon|)^{1/2}) - f_{k'}(c_1(\log |\log \varepsilon|)^{1/2}) \right|\leq C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \varepsilon| ^{\infty}.
$$
Then, integrating the bound~\eqref{eq:1D H1 bound} from $c_1(\log |\log \varepsilon|)^{1/2}$ to $c_0 |\log \varepsilon|$ we can extend~\eqref{eq:1D L inf bound pre} to the whole interval $I_{\eps}$:
$$ \left\Vert f_{k} - f_{k'} \right\Vert_{L^{\infty} (I_{\eps})}\leq C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \varepsilon| ^{\infty},$$
which is~\eqref{eq:vari 1D opt density} for $n=0$. The bounds on the derivatives follow by a standard bootstrap argument, inserting the $L ^{\infty}$ bound in the variational equations.
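Let us sketch, for the sake of completeness, a schematic version of the first step of this bootstrap (the details depend on the precise form of~\eqref{eq:var eq fal}): taking the difference of the variational equations satisfied by $f_{k}$ and $f_{k'}$ and inserting the bounds already at our disposal yields, on $I_{\eps}$,
$$ \partial_t ^2 \left( f_{k} - f_{k'} \right) = \mathcal{O}(\varepsilon) \, \partial_t \left( f_{k} - f_{k'} \right) + \left( V_{k'} + \mathcal{O} (1) \right) \left( f_{k} - f_{k'} \right) + \mathcal{O} \left( \left(\varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty} \right), $$
so that, absorbing the first term on the r.h.s. by interpolation, $\partial_t ^2 (f_{k} - f_{k'})$ is controlled in $L ^{\infty} (I_{\eps})$ by $C \left(\varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}$, the potential contributing only an additional $|\log \eps| ^{\infty}$ factor. The estimate for $\partial_t (f_{k} - f_{k'})$ then follows by interpolating between the $n=0$ and $n=2$ bounds, and higher derivatives are handled by differentiating the equation.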
\end{proof}
\subsection{Estimates on auxiliary functions}\label{sec:aux func}
In this Section we collect some useful estimates of other quantities involving the 1D densities as well as the optimal phases. It turns out that we need an estimate of the $k$-dependence of $\partial_t \log (f_k)$, provided in the following
\begin{pro}[\textbf{Estimate of logarithmic derivatives}]
\label{pro:est log der}
\mbox{} \\
Let $k,k' \in \mathbb{R}$ be bounded independently of $\varepsilon$ and let $1<b<\Theta_0 ^{-1}$. Then the following holds:
\begin{equation}
\label{eq:est log der}
\left\| \frac{f^{\prime}_k}{f_k} - \frac{f^{\prime}_{k'}}{f_{k'}} \right\|_{L^{\infty}(I_{\eps})} \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
\end{equation}
\end{pro}
\begin{proof}
Let us denote for short
\begin{equation}
g(t) : = \frac{f^{\prime}_k (t)}{f_k (t)}- \frac{f^{\prime}_{k'} (t)}{f_{k'} (t)}.
\end{equation}
We first notice that the estimate is obviously true in the region where $ f_k \geq |\log\varepsilon|^{-M} $ for any $ M > 0 $ finite, thanks to \eqref{eq:vari 1D opt density} and \eqref{eq:fal derivative}:
\bmln{
|g(t)| \leq \frac{\left| f^{\prime}_k - f^{\prime}_{k'}\right|}{f_k} + \frac{\left|f^{\prime}_{k'} \right| \left| f_k - f_{k'} \right|}{f_k f_{k'}} \leq |\log\varepsilon|^M \left| f^{\prime}_k - f^{\prime}_{k'}\right| + |\log\varepsilon|^{M+3} \left| f_k - f_{k'} \right| \\
\leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
}
Let $ t_* $ be the unique solution to $ f_k(t_*) = |\log\varepsilon|^{-M} $ (uniqueness follows from the properties of $ f_k $ discussed in Proposition \ref{pro:min fone}). To complete the proof it thus suffices to prove the estimate in the region $ [t_*, c_0 |\log\varepsilon|] $. Notice also that thanks to \eqref{eq:fal point l u b}, it must be that $ t_* \to \infty $ when $\varepsilon \to 0$.
At the boundary of the interval $ [t_*, t_{\eps}] $ (recall \eqref{eq:def interval}), one has
\begin{equation}
g(t_*) = \mathcal{O}\left(\left( \varepsilon |k-k'| \right) ^{1/2} |\log\varepsilon|^{M}\right), \qquad g(t_{\eps}) = 0,
\end{equation}
because of Neumann boundary conditions. Hence if the supremum of $ |g| $ is reached at the boundary there is nothing to prove. Let us then assume that $ \sup_{t \in [t_*,t_{\eps}]} |g| = |g(t_0)| $, for some $ t_* < t_0 < t_{\eps} $, such that $ g'(t_0) = 0 $, i.e.,
\begin{equation}
\label{log der step 0}
\frac{f^{\prime\prime}_k(t_0)}{f_k(t_0)} - \frac{f^{\prime\prime}_{k'}(t_0)}{f_{k'}(t_0)} + \frac{\left(f^{\prime}_k(t_0)\right)^2}{f_k^2(t_0)} - \frac{\left(f^{\prime}_{k'}(t_0)\right)^2}{f^2_{k'}(t_0)} = 0.
\end{equation}
Since $ f_k $ and $ f_{k'} $ are both decreasing in $ [t_*,t_{\eps}] $ (see again Proposition \ref{pro:min fone}) we also have
\begin{equation}
\label{log der step 1}
\frac{\left(f^{\prime}_k(t_0)\right)^2}{f_k^2(t_0)} - \frac{\left(f^{\prime}_{k'}(t_0)\right)^2}{f^2_{k'}(t_0)} = \left[ \frac{\left|f^{\prime}_k(t_0)\right|}{f_k(t_0)} + \frac{\left|f^{\prime}_{k'}(t_0)\right|}{f_{k'}(t_0)} \right] g(t_0).
\end{equation}
The variational equations satisfied by $ f_k$ and $ f_{k'} $ on the other hand imply
\bml{
\label{log der step 2}
\bigg| \frac{f^{\prime\prime}_k(t_0)}{f_k(t_0)} - \frac{f^{\prime\prime}_{k'}(t_0)}{f_{k'}(t_0)} \bigg| = \bigg| \frac{\varepsilon k f^{\prime}_k(t_0)}{(1 - \varepsilon k t) f_k(t_0)} - \frac{\varepsilon k' f^{\prime}_{k'}(t_0)}{(1 - \varepsilon k' t) f_{k'}(t_0)} + V_k(t_0) - V_{k'}(t_0) \\
- \frac{1}{b} \left( f_k^2(t_0) - f_{k'}^2(t_0) \right) \bigg| \leq C \left[\left(\varepsilon|k - k'|\right)^{1/2} |\log \eps| ^{\infty} + \varepsilon |g(t_0)| \right],
}
thanks to \eqref{eq:vari 1D phase} and \eqref{eq:vari 1D opt density}. For the first two terms the estimate \eqref{eq:fal derivative} has also been used for the derivatives $ f_k^{\prime} $ and $ f_{k'}^{\prime} $:
\bmln{
\frac{\varepsilon k f^{\prime}_k(t_0)}{(1 - \varepsilon k t) f_k(t_0)} - \frac{\varepsilon k' f^{\prime}_{k'}(t_0)}{(1 - \varepsilon k' t) f_{k'}(t_0)} = \mathcal{O}(\varepsilon) g(t_0) + \frac{f^{\prime}_{k'}(t_0)}{f_{k'}(t_0)} \left( \frac{\varepsilon k}{1 - \varepsilon k t} - \frac{\varepsilon k'}{1 - \varepsilon k' t} \right) \\
= \mathcal{O}(\varepsilon) g(t_0) + \mathcal{O}(\varepsilon|k - k'|).
}
Plugging \eqref{log der step 1} and \eqref{log der step 2} into \eqref{log der step 0}, we get the estimate
\begin{equation}
\left[ \frac{\left|f^{\prime}_k(t_0)\right|}{f_k(t_0)} + \frac{\left|f^{\prime}_{k'}(t_0)\right|}{f_{k'}(t_0)} + \mathcal{O}(\varepsilon) \right] g(t_0) = \mathcal{O}\left(\left(\varepsilon|k - k'|\right)^{1/2}|\log \eps| ^{\infty}\right).
\end{equation}
Now if
\begin{displaymath}
\frac{\left|f^{\prime}_k(t_0)\right|}{f_k(t_0)} + \frac{\left|f^{\prime}_{k'}(t_0)\right|}{f_{k'}(t_0)} \geq |\log\varepsilon|^{-2},
\end{displaymath}
the result follows immediately. Therefore we can assume that
\begin{equation}
\frac{\left|f^{\prime}_k(t_0)\right|}{f_k(t_0)} + \frac{\left|f^{\prime}_{k'}(t_0)\right|}{f_{k'}(t_0)} \leq |\log\varepsilon|^{-2},
\end{equation}
but we claim that this also implies
\begin{equation}
\label{log der step 3}
\frac{\left|f^{\prime}_k(t)\right|}{f_k(t)} + \frac{\left|f^{\prime}_{k'}(t)\right|}{f_{k'}(t)} \leq |\log\varepsilon|^{-2} \mbox{ for any } t \in [t_0,t_{\eps}].
\end{equation}
Indeed, setting
$$ h_k(t) : = - f^\prime_k(t)/f_k(t),$$
a simple computation involving the variational equation \eqref{eq:var eq fal} yields
\begin{displaymath}
h_k^{\prime}(t) = - \frac{\varepsilon k f^{\prime}_k(t)}{(1 - \varepsilon k t) f_k(t)} - V_k(t) + \frac{1}{b} \left(1 - f_k^2(t) \right) + h_k^2(t) = - V_k(t) + h_k^2(t) + \mathcal{O}(1),
\end{displaymath}
using~\eqref{eq:fal derivative} again. Hence $ h_k^{\prime}(t_0) < 0 $, since $ V_k(t_0) \gg 1 $ (which follows from $ t_0 > t_* \gg 1 $) while $ h_k^2(t_0) \leq |\log\varepsilon|^{-4} $; the same computation shows that $ h_k^{\prime}(t) < 0 $ at any $ t \in [t_0,t_{\eps}] $ where $ h_k(t) \leq |\log\varepsilon|^{-2} $, so that $ h_k $ cannot cross this threshold from below and therefore~\eqref{log der step 3} holds. An identical argument applies to $ h_{k'} $ and thus to the sum
$$ h_k + h_{k'} = : h.$$
Finally, the explicit expression of $ g^{\prime}(t) $ in combination with~\eqref{log der step 3} gives for $ t \geq t_0 $
\bml{
|g(t)| = \bigg| \int_{t}^{t_{\eps}} \mathrm{d} \eta \: g^{\prime}(\eta) \bigg| \leq \int_{t}^{t_{\eps}} \mathrm{d} \eta \: \left[ \left(h(\eta) + \mathcal{O}(\varepsilon) \right) \left| g(\eta) \right| + \mathcal{O}\left(\left(\varepsilon|k - k'|\right)^{1/2}|\log \eps| ^{\infty}\right) \right] \\
\leq C |\log\varepsilon|^{-1} \sup_{t \in [t_0,t_{\eps}]} |g(t)| + \mathcal{O}\left(\left(\varepsilon|k - k'|\right)^{1/2}|\log \eps| ^{\infty}\right),
}
which implies the result.
\end{proof}
The above estimate is mainly useful in providing bounds on quantities of the form
\begin{equation}
\label{eq:def ijj first}
I_{k,k'}(t) : = F_k(t) - F_{k'}(t) \frac{f_k ^2 (t)}{f_{k'} ^2(t)},
\end{equation}
alluded to in Subsection~\ref{sec:sketch}. As announced there, the main difficulty is that we need to show that $I_{k,k'}$ is small \emph{relatively to} $f_k ^2$, which is the content of the following corollary. We need the following notation
\begin{equation}
[0,\bar{t}_{k,\varepsilon}] : = \left\{ t : f_k(t) \geq |\log\varepsilon|^3 f_k(t_{\eps}) \right\}.
\end{equation}
Note that the monotonicity for large $ t $ of $ f_k $ guarantees that the above set is indeed an interval and that
\begin{equation}
\label{eq:tkk}
\bar{t}_{k,\varepsilon} = t_{\eps} + \mathcal{O}(\log|\log\varepsilon|).
\end{equation}
\begin{cor}[\textbf{Estimates on the correction function}]
\label{cor:est log cost}
\mbox{} \\
Under the assumptions of Proposition \ref{pro:est log der}, it holds
\begin{equation}
\label{eq:est log cost}
\sup_{t \in [0,t_{\eps}]} \bigg| \frac{I_{k,k'}}{f_k^2} \bigg| \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}
\end{equation}
and, setting $ \bar{t}_{\varepsilon} : = \min\left\{ \bar{t}_{k,\varepsilon}, \bar{t}_{k',\varepsilon} \right\} $,
\begin{equation}
\label{eq:sup est der ijj}
\sup_{t \in [0,\bar{t}_{\varepsilon}]} \bigg| \frac{\partial_t I_{k,k'}}{f_k^2} \bigg| \leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty}.
\end{equation}
\end{cor}
\begin{proof}
We write
$$
\frac{I_{k,k'}(t)}{f_k ^2 (t)} = \frac{F_k (t)}{f_k ^2 (t)} - \frac{F_{k'} (t)}{f_{k'} ^2 (t)}.
$$
Using the definition of the potential function \eqref{eq:Fk} and its properties \eqref{F prop}, we can rewrite
\bml{
\label{F ratio step 1}
\frac{F_k(t)}{f^2_k(t)} - \frac{F_{k'}(t)}{f^2_{k'}(t)} = - \int_t^{t_{\eps}} \mathrm{d} \eta \bigg[ b_k(\eta) \frac{f_k^2(\eta)}{f_k^2(t)} - b_{k'}(\eta) \frac{f_{k'}^2(\eta)}{f_{k'}^2(t)} \bigg] \\
= \int_t^{t_{\eps}} \mathrm{d} \eta \bigg[ b_k(\eta) \bigg(\frac{f_{k'}^2(\eta)}{f_{k'}^2(t)} - \frac{f_k^2(\eta)}{f_k^2(t)} \bigg) + \left(b_{k'}(\eta) - b_{k}(\eta)\right) \frac{f_{k'}^2(\eta)}{f_{k'}^2(t)} \bigg].
}
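In the above, $b_k$ denotes the auxiliary function of Appendix~\ref{sec:app} for which $\partial_t F_k = b_k f_k ^2$ (up to immaterial $(1 - \varepsilon k t)$ factors), and the first equality uses that
$$ F_k (t) = F_k (t_{\eps}) - \int_t ^{t_{\eps}} \mathrm{d} \eta \: b_k (\eta) f_k ^2 (\eta), \qquad F_k (t_{\eps}) = 0, $$
the vanishing of $F_k$ at $t_{\eps}$ being the property in~\eqref{F prop} that expresses the optimality of $\alpha (k)$.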
We first observe that for any $ \eta \geq t $
\begin{equation}
\label{F ratio step 2}
\frac{f_{k'}(\eta)}{f_{k'}(t)} \leq C,
\end{equation}
as follows easily by combining the monotonicity of $ f_{k'} $ for $ t $ large with its strict positivity close to the origin (see Proposition \ref{pro:min fone} and Lemma \ref{lem:point est fal} for the details). Hence we can bound the last term on the r.h.s. of \eqref{F ratio step 1} as
\begin{equation}
\label{F ratio step 3}
\bigg| \int_t^{t_{\eps}} \mathrm{d} \eta \: \left(b_{k'}(\eta) - b_{k}(\eta)\right) \frac{f_{k'}^2(\eta)}{f_{k'}^2(t)} \bigg| \leq C |\log\varepsilon| \left\| b_{k'}- b_{k} \right\|_{L^{\infty}(I_{\eps})} = \mathcal{O} \left( \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty} \right),
\end{equation}
since by \eqref{eq:vari 1D phase}
\begin{displaymath}
b_{k'}(t) - b_{k}(t) = \left(1 + \mathcal{O}(\varepsilon) \right) \left( \mathcal{O}(\varepsilon|k - k'|t^2) + \alpha(k) - \alpha(k') \right) = \mathcal{O} \left( \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty} \right).
\end{displaymath}
For the first term on the r.h.s. of \eqref{F ratio step 1} we exploit the estimate
\begin{displaymath}
\frac{f_{k'}(\eta)}{f_{k'}(t)} - \frac{f_k(\eta)}{f_k(t)} = \mathcal{O} \left( \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty} \right),
\end{displaymath}
which can be proven by writing
\begin{displaymath}
\frac{f_k(\eta)}{f_k(t)} = \exp\left\{ \int_t^{\eta} \mathrm{d} \tau \: \frac{f_k^{\prime}(\tau)}{f_k(\tau)} \right\},
\end{displaymath}
which implies
\bml{
\label{F ratio step 4}
\left| \frac{f_{k'}(\eta)}{f_{k'}(t)} - \frac{f_k(\eta)}{f_k(t)} \right| = \frac{f_{k'}(\eta)}{f_{k'}(t)} \left| 1 - \exp\left\{ \int_t^{\eta} \mathrm{d} \tau \: \bigg[ \frac{f_k^{\prime}(\tau)}{f_k(\tau)} - \frac{f_{k'}^{\prime}(\tau)}{f_{k'}(\tau)} \bigg] \right\} \right| \\
\leq C \int_t^{\eta} \mathrm{d} \tau \: \bigg| \frac{f_k^{\prime}(\tau)}{f_k(\tau)} - \frac{f_{k'}^{\prime}(\tau)}{f_{k'}(\tau)} \bigg| \exp\left\{ \int_t^{\eta} \mathrm{d} \tau \bigg| \frac{f_k^{\prime}(\tau)}{f_k(\tau)} - \frac{f_{k'}^{\prime}(\tau)}{f_{k'}(\tau)} \bigg| \right\}
\leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty},
}
where we have used \eqref{F ratio step 2}, the estimate $ |1 - e^{\delta}| \leq |\delta| e^{|\delta|} $, $ \delta \in \mathbb{R} $, and \eqref{eq:est log der}. Putting together \eqref{F ratio step 1} with \eqref{F ratio step 3} and \eqref{F ratio step 4}, we conclude the proof of~\eqref{eq:est log cost}.
To obtain~\eqref{eq:sup est der ijj} we first note that since $F_k (t) \leq 0$, the positivity of $ K_{k} $ in $ [0, \bar{t}_{k,\varepsilon}] $ recalled in Lemma~\ref{lem:K positive} ensures that
$$ \left|\frac{F_{k} (t)}{f_k ^2 (t)} \right|\leq 1$$
in $ [0, \bar{t}_{k,\varepsilon}] $. Then we may use~\eqref{eq:est log der} again to estimate
\begin{multline*}
\sup_{t \in [0,\bar{t}_{\varepsilon}]} \bigg| \frac{\partial_t I_{k,k'}}{f_k^2} \bigg| \leq \sup_{t \in [0,\bar{t}_{\varepsilon}]} \bigg[ \left| \left(1 - \varepsilon k t \right) b_k - \left(1 - \varepsilon k' t \right) b_{k'} \right| + 2 \bigg| \frac{F_{k'}}{f_{k'}^2} \bigg| \bigg| \frac{f_k^{\prime}}{ f_{k}} - \frac{f_{k'}^{\prime}}{f_{k'}} \bigg| \bigg] \\
\leq C \left( \varepsilon |k-k'| \right) ^{1/2} |\log \eps| ^{\infty},
\end{multline*}
and the proof is complete.
\end{proof}
\section{Energy Upper Bound}\label{sec:up bound}
We now turn to the proof of the energy upper bound corresponding to~\eqref{eq:energy GL}, namely we prove the following:
\begin{pro}[\textbf{Upper bound to the full GL energy}]\label{pro:up bound}\mbox{}\\
Let $1<b<\Theta_0 ^{-1}$ and $\varepsilon$ be small enough. Then it holds
\begin{equation}\label{eq:up bound GL}
E_{\eps}^{\mathrm{GL}} \leq \frac{1}{\varepsilon} \int_{0} ^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star (k(s)) + C \varepsilon |\log \varepsilon| ^{\infty}
\end{equation}
where $s\mapsto k(s)$ is the curvature function of the boundary $\partial \Omega$ as a function of the tangential coordinate.
\end{pro}
This result is proven as usual by evaluating the GL energy of a trial state having the expected physical features. As is well-known~\cite{FH-book}, such a trial state should be concentrated along the boundary of the sample, and the induced magnetic field should be chosen close to the applied one. Before entering the heart of the proof, we briefly explain how these considerations allow us to reduce to the proof of an upper bound to the reduced functional~\eqref{eq:GL func bound}. We define
\begin{equation}\label{eq:GL func bound inf}
G_{\mathcal{A}_{\eps}} := \inf\left\{ \mathcal{G}_{\mathcal{A}_{\eps}} [\psi], \psi (0,t) = \psi (|\partial \Omega|,t) \right\},
\end{equation}
the infimum of the reduced functional under periodic boundary conditions in the tangential direction and prove
\begin{lem}[\textbf{Reduction to the boundary functional}]\label{lem:up bound}\mbox{}\\
Under the assumptions of Proposition~\ref{pro:up bound}, it holds
\begin{equation}\label{eq:up GL-GL bound}
E_{\eps}^{\mathrm{GL}} \leq \frac{1}{\varepsilon} G_{\mathcal{A}_{\eps}} + C \varepsilon ^{\infty}.
\end{equation}
\end{lem}
\begin{proof}
This is a standard reduction for which more details may be found in~\cite[Section 14.4.2]{FH-book} and references therein. See also~\cite[Sections 4.1 and 5.1]{CR2}. We provide a sketch of the proof for completeness.
We first pick the trial vector potential as
$$ \mathbf{A}_{\rm trial} = \mathbf{F} $$
where $\mathbf{F}$ is the induced vector potential written in a gauge where $\mathrm{div} \, \mathbf{F} = 0$, namely the unique solution of
$$ \begin{cases}
\mathrm{div} \,\mathbf{F} = 0, & \mbox{ in } \Omega,\\
\mbox{curl} \,\mathbf{F} = 1, & \mbox{ in } \Omega, \\
\mathbf{F}\cdot \bm{\nu} = 0, & \mbox{ on } \partial \Omega.
\end{cases}
$$
Next we introduce boundary coordinates as described in~\cite[Appendix F]{FH-book}: let
$$\bm{\gamma}(\xi): \mathbb{R} / (|\partial \Omega| \mathbb{Z}) \to \partial \Omega $$
be a counterclockwise parametrization of the boundary $ \partial \Omega $ such that $ |\bm{\gamma}^{\prime}(\xi)| = 1 $. The unit vector directed along the inward normal to the boundary at a point $ \bm{\gamma}(\xi) $ will be denoted by $ \bm{\nu}(\xi) $. The curvature $ k(\xi) $ is then defined through the identity
$$ \bm{\gamma}^{\prime\prime}(\xi) = k(\xi) \bm{\nu}(\xi). $$
Our trial state will essentially live in the region
\begin{equation}
\label{ann}
\tilde{\mathcal{A}}_{\eps} : = \left\{ \mathbf{r} \in \Omega \: | \: \mathrm{dist}(\mathbf{r}, \partial \Omega) \leq c_0 \varepsilon |\log\varepsilon| \right\},
\end{equation}
and in such a region we can introduce tubular coordinates $ ( s,\varepsilon t) $ (note the rescaling of the normal variable) such that, for any given $ \mathbf{r} \in \tilde{\mathcal{A}}_{\eps} $, $ \varepsilon t = \mathrm{dist}(\mathbf{r}, \partial \Omega) $, i.e.,
\begin{equation}
\label{eq:tubular coordinates}
\mathbf{r}(s,\varepsilon t) = \bm{\gamma}( s) + \varepsilon t \bm{\nu}( s),
\end{equation}
which can obviously be realized as a diffeomorphism for $\varepsilon$ small enough. Hence the boundary layer becomes in the new coordinates $ (s,t) $
\begin{equation}\label{eq:def ann rescale}
\mathcal{A}_{\eps}:= \left[0, |\partial \Omega| \right] \times \left[0,c_0 |\log\varepsilon|\right].
\end{equation}
We now pick a function $\psi (s,t)$ defined on $\mathcal{A}_{\eps}$, satisfying periodic boundary conditions in the $s$ variable. Using a smooth cut-off function $\chi (t) $ with $\chi (t) \equiv 1$ for $t\in [0,c_0 |\log \varepsilon|]$ and $\chi (t)$ exponentially decreasing for $t > c_0 |\log \varepsilon|$, we associate to $\psi$ the GL trial state
$$ \Psi_{\rm trial} (\mathbf{r}) := \psi(s,t) \chi ( t) \exp \left\{ i \phi_{\rm trial}(s,t) \right\},$$
where $ \phi_{\rm trial} $ is a gauge phase (analogue of \eqref{eq: gauge phase}) depending on $ \mathbf{A}_{\rm trial} $, i.e.,
\bml{
\label{eq: gauge phase trial}
\phi_{\rm trial}(s,t) : = - \frac{1}{\varepsilon} \int_{0}^{t} \mathrm{d} \eta \: \bm{\nu}(s) \cdot \mathbf{A}_{\rm trial}(\mathbf{r}(s, \varepsilon \eta)) + \frac{1}{\varepsilon^2} \int_{0}^s \mathrm{d} \xi \: \bm{\gamma}^{\prime}(\xi) \cdot \mathbf{A}_{\rm trial}(\mathbf{r}(\xi,0)) \\
- \left( \frac{|\Omega|}{|\partial \Omega| \varepsilon^2 } - \left\lfloor \frac{|\Omega|}{|\partial \Omega| \varepsilon^2 } \right\rfloor \right) s.
}
Then, with the definition of $\mathcal{G}_{\mathcal{A}_{\eps}}$ as in~\eqref{eq:GL func bound}, a relatively straightforward computation gives
$$ E^{\mathrm{GL}} \left[\Psi_{\rm trial}, \mathbf{A}_{\rm trial} \right] \leq \frac{1}{\varepsilon} \mathcal{G}_{\mathcal{A}_{\eps}} [\psi] + C \varepsilon ^{\infty},$$
and the desired result follows immediately. Note that this computation uses the gauge invariance of the GL functional, e.g., through~\cite[Lemma F.1.1]{FH-book}.
\end{proof}
The problem is now reduced to the construction of a proper trial state for $\mathcal{G}_{\mathcal{A}_{\eps}}$. To capture the $\mathcal{O}(\varepsilon)$ correction (which depends on curvature) to the leading order of the GL energy (which does not depend explicitly on curvature), we need a more elaborate trial state than those considered so far. The construction is detailed in Subsection~\ref{sec:trial state} and the computation completing the proof of Proposition~\ref{pro:up bound} is given in Subsection~\ref{sec:trial ener}.
\subsection{The trial state in boundary coordinates}\label{sec:trial state}
We start by recalling the splitting of the domain $ \mathcal{A}_{\eps} $ defined in \eqref{eq:boundary layer} into $ N_{\eps} \propto \varepsilon ^{-1} $ rectangular cells $ \left\{ \mathcal{C}_n \right\}_{n=1 \ldots N_{\eps}}$ with boundaries $s_n,s_{n+1}$ in the $ s$-coordinate such that
$$s_{n+1} - s_n = \ell_{\eps} \propto \varepsilon, $$
so that
$$ N_{\eps} = \frac{|\partial \Omega|}{\ell_{\eps}}.$$
We denote
\begin{equation}
\label{eq: cell}
\mathcal{C}_n = [s_n,s_{n+1}] \times [0,c_0 |\log \varepsilon|],
\end{equation}
with the convention that $ s_1 = 0 $, for simplicity. We will approximate the curvature $k(s)$ inside each cell by its mean value and set
\begin{equation}
\label{eq:mean curvature}
k_n := \ell_{\eps}^{-1} \int_{s_n} ^{s_{n+1}} \mathrm{d} s \, k(s).
\end{equation}
We also denote by
\begin{equation}
\label{eq: opt phase cell}
\alpha_n = \alpha(k_n)
\end{equation}
the optimal phase associated to $k_n$, obtained by minimizing $E^{\mathrm{1D}} (k_n,\alpha)$ with respect to $\alpha$ as in Section~\ref{sec:eff func}.
The assumption about the smoothness of the boundary guarantees that
\begin{equation}
\label{eq:curv diff}
k_n - k_{n+1} = \mathcal{O}(\varepsilon).
\end{equation}
Indeed, if we assume that $ \sup_{s \in [0,|\partial \Omega|]} \left| \partial_s k (s) \right| \leq C < \infty $ (independent of $ \varepsilon $), one gets
\bmln{
\ell_{\eps}^{-1} \bigg| \int_{s_n}^{s_{n+1}} \mathrm{d} s \: k(s) - \int_{s_{n+1}}^{s_{n+2}} \mathrm{d} s \: k(s) \bigg| = \ell_{\eps}^{-1} \bigg| \int_{s_n}^{s_{n+1}} \mathrm{d} s \int_{s}^{s_{n+1}} \mathrm{d} \eta \: \partial_\eta k (\eta) + \int_{s_{n+1}}^{s_{n+2}} \mathrm{d} s \int_{s_{n+1}}^s \mathrm{d} \eta \: \partial_\eta k(\eta) \bigg| \\
\leq C \ell_{\eps} = \mathcal{O}(\varepsilon).
}
We can then apply Proposition \ref{pro:1D curv} to obtain
\begin{equation}
\label{eq:phase diff}
\alpha_n - \alpha_{n+1} = \mathcal{O}(\varepsilon|\log\varepsilon|^{\infty}),
\end{equation}
\begin{equation}
\label{eq:density diff}
\left\|f^{(m)}_n - f^{(m)}_{n+1}\right\|_{L^{\infty}(I_{\eps})} = \mathcal{O}(\varepsilon|\log\varepsilon|^{\infty}),
\end{equation}
for any finite $ m \in \mathbb{N} $.
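Indeed, it suffices to insert $|k_n - k_{n+1}| = \mathcal{O} (\varepsilon)$ into the estimates of Proposition~\ref{pro:1D curv}: for instance~\eqref{eq:vari 1D phase} gives
$$ \left| \alpha_n - \alpha_{n+1} \right| \leq C \left( \varepsilon \cdot \varepsilon \right) ^{1/2} |\log \eps| ^{\infty} = C \varepsilon |\log\varepsilon| ^{\infty}, $$
and~\eqref{eq:vari 1D opt density} yields~\eqref{eq:density diff} in exactly the same way (here and below $f_n$ is a shorthand for $f_{k_n}$).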
Our trial state has the form
\begin{equation}\label{eq:trial state}
\psi_{\rm trial} (s,t) = g(s,t) \exp \left\{ -i\left(\varepsilon^{-1} S(s) - \varepsilon \delta_{\varepsilon} s\right) \right\}
\end{equation}
where $\delta_{\varepsilon}$ is the number~\eqref{eq:deps}. The density $g$ and phase factor $S$ are defined as follows:
\begin{itemize}
\item \underline{The density.} The modulus of our wave function is constructed to be essentially piecewise constant in the $s$-direction, with the form $f_{k_n} (t)$ in the cell $ \mathcal{C}_n$. The admissibility of the trial state requires that $g$ be continuous and we thus set:
\begin{equation}\label{eq:trial density}
g(s,t):= f_{k_n} + \chi_n,
\end{equation}
where the function $\chi_n$ satisfies
\begin{equation}
\chi_n(s,t) =
\begin{cases}
0, & \mbox{at } s = s_{n},\\
f_{k_{n+1}}(t) - f_{k_n}(t), & \mbox{at } s = s_{n+1},
\end{cases}
\end{equation}
the continuity at the $s_n$ boundary being ensured by $\chi_{n-1}$. A simple choice is given by
\begin{equation}\label{eq:choice chi}
\chi_n (s,t)= \left(f_{k_{n+1}} (t) - f_{k_n} (t) \right) \left( 1 - \frac{s-s_{n+1}}{s_{n} - s_{n+1}}\right).
\end{equation}
Note that $|k_n-k_{n+1}| \leq C |s_n-s_{n+1}|\leq C\varepsilon$ since the curvature is assumed to be a smooth function of $s$. Clearly, in view of Proposition~\ref{pro:1D curv} we can impose the following bounds on $\chi_n$:
\begin{equation}\label{eq:control smoothing}
|\chi_n| \leq C \varepsilon |\log \eps| ^{\infty}, \qquad |\partial_t \chi_n| \leq C \varepsilon |\log \eps| ^{\infty}, \qquad |\partial_s \chi_n| \leq C |\log \eps| ^{\infty},
\end{equation}
so that $\chi_n$ is indeed only a small correction to the desired density $f_{k_n}$ in $\mathcal{C}_n$ (an explicit verification for the choice~\eqref{eq:choice chi} is provided right after this list).
\item \underline{The phase.} The phase of the trial function is dictated by the refined ansatz \eqref{eq:intro GLm formal refined}: within the cell $ \mathcal{C}_n $ it must be approximately linear with slope dictated by $ \alpha_n $, and globally it must define an admissible phase factor, i.e., vary by a multiple of $ 2\pi $ after one loop. We then let
$$S = S (s) = S_{\rm loc}(s) + S_{\rm glo} (s)$$
where $S_{\rm loc}$ varies locally (on the scale of a cell) and $S_{\rm glo}$ varies globally (on the scale of the full interval $[0,|\partial \Omega|]$) and is chosen to enforce the periodicity on the boundary of the trial state. The term $S_{\rm loc}$ is the main one, and its $s$ derivative should be equal to $\alpha_n$ in each cell $\mathcal{C}_n$ in order that the evaluation of the energy be naturally connected to the 1D functional we studied before, as explained in Subsection~\ref{sec:eff func}. We define $S_{\rm loc}$ recursively by setting:
\begin{equation}\label{eq:trial phase 1}
S_{\rm loc} (s)=
\begin{cases}
\alpha_1 s, & \mbox{in } \mathcal{C}_1,\\
\alpha_{n} (s-s_{n}) + S_{\rm loc}(s_{n}), & \mbox{in } \mathcal{C}_n, n\geq 2,
\end{cases}
\end{equation}
which in particular guarantees the continuity of $S_{\rm loc}$ on $[s_1,s_{N_{\eps}+1}[$. Moreover we easily compute (recall that $ s_1 = 0 $)
\begin{equation}
\label{eq:Sloc boundary}
S_{\rm loc}(s_n) = \sum_{m = 2}^{n-1} \alpha_m \left(s_{m+1} - s_m \right) + \alpha_1 s_2 = \int_0^{s_{n}} \mathrm{d} s \: \alpha(k(s)) + \mathcal{O}(\varepsilon|\log \eps| ^{\infty}).
\end{equation}
The factor $S_{\rm glo}$ ensures that
$$S(s_{N_{\eps}+1}) - S (s_{1}) = S(s_{N_{\eps}+1}) \in 2 \pi \varepsilon \mathbb{Z},$$
which is required for~\eqref{eq:trial state} to be periodic in the $s$-direction and hence to correspond to a single-valued wave function in the original variables. The conditions we impose on $S_{\rm glo}$ are thus
\begin{align}\label{eq:trial phase 2}
S_{\rm glo} (s_1) &= 0
\\ S_{\rm glo}(s_{N_{\eps}+1}) &= 2\pi \varepsilon \left( \alpha_{N_{\eps}} \left(s_{N_{\eps}+1} - s_{N_{\eps}} \right) + S_{\rm loc}(s_{N_{\eps}}) - \left\lfloor \alpha_{N_{\eps}} \left(s_{N_{\eps}+1} - s_{N_{\eps}}\right) + S_{\rm loc}(s_{N_{\eps}}) \right\rfloor \right) \nonumber
\end{align}
with $\lfloor\,.\,\rfloor$ standing for the integer part. Thanks to \eqref{eq:Sloc boundary}, we have
$$ \alpha_{N_{\eps}} \left(s_{N_{\eps}+1} - s_{N_{\eps}}\right) + S_{\rm loc}(s_{N_{\eps}}) = \mathcal{O} (1)$$
and we can thus clearly impose that $S_{\rm glo}$ be regular and
\begin{equation}\label{eq:control S2}
|S_{\rm glo}| \leq C \varepsilon, \qquad |\partial_s S_{\rm glo}| \leq C \varepsilon.
\end{equation}
\begin{rem}($s$-dependence of the trial state)
\mbox{} \\
The main novelty here is the fact that the density and phase of the trial state have (small) variations on the scale of the cells which are of size $\mathcal{O}(\varepsilon)$ in the $s$-variable. A noteworthy point is that the phase need not have a $t$-dependence to evaluate the energy at the level of precision we require. Basically this is associated with the fact that the $t^2$ term in~\eqref{eq:vect pot bound} comes multiplied with an $\varepsilon$ factor. The main point that renders the computation of the energy doable is~\eqref{eq:control smoothing} and this is where the analysis of Subsection~\ref{sec:eff func} enters heavily.
\end{rem}
\end{itemize}
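As an explicit verification of~\eqref{eq:control smoothing} (announced in the construction of the density above), consider the choice~\eqref{eq:choice chi}: the interpolation factor there is bounded by $1$ and its $s$-derivative by $\ell_{\eps} ^{-1} \propto \varepsilon ^{-1}$, so that~\eqref{eq:density diff} with $m = 0,1$ gives
$$ |\chi_n| + |\partial_t \chi_n| \leq C \varepsilon |\log \eps| ^{\infty}, \qquad |\partial_s \chi_n| \leq \ell_{\eps} ^{-1} \left\Vert f_{k_{n+1}} - f_{k_n} \right\Vert_{L^{\infty} (I_{\eps})} \leq C |\log \eps| ^{\infty}. $$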
\subsection{The energy of the trial state}\label{sec:trial ener}
We may now complete the proof of Proposition~\ref{pro:up bound} by proving
\begin{lem}[\textbf{Upper bound for the boundary functional}]\label{lem:up bound bound}\mbox{}\\
With $\psi_{\rm trial}$ given by the preceding construction, it holds
\begin{equation}\label{eq:up bound bound}
\mathcal{G}_{\mathcal{A}_{\eps}} [\psi_{\rm trial}] \leq \int_{0} ^{|\partial \Omega|} \mathrm{d} s E^{\mathrm{1D}}_\star (k(s)) + \mathcal{O} (\varepsilon ^2 |\log \varepsilon| ^{\infty}).
\end{equation}
\end{lem}
The upper bound~\eqref{eq:up bound GL} follows from Lemmas~\ref{lem:up bound} and~\ref{lem:up bound bound} since $\psi_{\rm trial}$ is periodic in the $s$-variable and hence an admissible trial state for $G_{\mathcal{A}_{\eps}}$.
\begin{proof} As explained in Subsection~\ref{sec:eff func}, inserting~\eqref{eq:trial state} into~\eqref{eq:GL func bound} yields
\begin{equation}\label{eq:ener trial}
\mathcal{G}_{\mathcal{A}_{\eps}} [\psi_{\rm trial}] = \mathcal{E}^{\mathrm{2D}}_S [g]
\end{equation}
where $\mathcal{E}^{\mathrm{2D}}_S [g]$ is defined in~\eqref{eq:2D func}. For clarity we split the estimate of the r.h.s. of the above equation into several steps. We use the shorter notation $f_n$ for $f_{k_n}$ when this generates no confusion.
\paragraph{Step 1. Approximating the curvature.} In view of the continuity of the trial function, the energy is the sum of the energies restricted to each cell. We approximate $k(s)$ by $k_n$ in $\mathcal{C}_n$ as announced, and note that since $k$ is regular we have $|k(s) - k(s_n) |\leq C \varepsilon$ in each cell, with a constant $C$ independent of $n$. We thus have
\begin{multline}\label{eq:estim 1}
\mathcal{E}^{\mathrm{2D}}_S [g] \leq \sum_{n = 1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \left( 1 - \varepsilon k_n t \right) \left\{ \left| \partial_t g \right|^2 + \frac{\varepsilon^2}{(1 - \varepsilon k_n t)^2} \left| \partial_s g \right|^2 \right.\\
\left. + \frac{\left(t + \partial_s S - \frac{1}{2}\varepsilon t ^2 k_n \right) ^2}{(1-\varepsilon k_n t) ^2} g^2 - \frac{1}{2b} \left(2 g^2 - g^4 \right) \right\} \left( 1 + \mathcal{O}(\varepsilon ^2)\right)
\end{multline}
since each $k$-dependent term comes multiplied by an $\varepsilon$ factor.
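A quick bookkeeping justifies this: since $ |k(s) - k_n| \leq C \varepsilon $ in $ \mathcal{C}_n $ and $ |t| \leq t_{\eps} $, we have, pointwise in the cell,
$$ \left| \left(1 - \varepsilon k(s) t\right) - \left(1 - \varepsilon k_n t \right) \right| = \varepsilon |k(s) - k_n| |t| \leq C \varepsilon^2 |\log \eps|, $$
and similarly for the factors $ (1 - \varepsilon k t)^{-1} $ and $ (1 - \varepsilon k t)^{-2} $, which are uniformly bounded above and below in the boundary layer. All the relative errors are thus $ \mathcal{O}(\varepsilon^2 |\log \eps|) $, which is absorbed in the factor $ \left(1 + \mathcal{O}(\varepsilon^2)\right) $ up to the $ |\log \eps| ^{\infty} $ corrections we track anyway.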
\paragraph{Step 2. Approximating the phase.} In $\mathcal{C}_n$ we have
$$\partial_s S = \alpha_n + \partial_s S_{\rm glo} = \alpha_n + \mathcal{O} (\varepsilon).$$
We can thus expand the potential term:
\begin{multline}\label{eq:change phase}
\int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(t + \partial_s S - \frac{1}{2}\varepsilon t ^2 k_n\right) ^2}{1-\varepsilon k_n t} g^2 = \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n \right) ^2}{1-\varepsilon k_n t} g^2
\\
+ 2 \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \partial_s S_{\rm glo} \frac{t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n}{1-\varepsilon k_n t} g^2
+ \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(\partial_s S_{\rm glo} \right)^2}{1-\varepsilon k_n t} g^2
\end{multline}
and obviously
$$ \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(\partial_s S_{\rm glo} \right)^2}{1-\varepsilon k_n t} g^2 \leq C \varepsilon ^3 |\log \eps| ^{\infty}, $$
because of~\eqref{eq:control S2} and the size of $ \mathcal{C}_n $ in the $s$ direction. Next we note that in $ \mathcal{C}_n $
$$ g ^2 = f_n ^2 + 2 f_n \chi_n + \chi_n ^2$$
so that, using~\eqref{eq:FH nonlinear} and the fact that $\partial_s S_{\rm glo}$ only depends on $s$ we have
$$ \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \partial_s S_{\rm glo} \frac{t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n}{1-\varepsilon k_n t} g^2 = \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \partial_s S_{\rm glo} \frac{t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n}{1-\varepsilon k_n t} \left( 2 f_n\chi_n + \chi_n ^2 \right),$$
which is easily bounded by $C\varepsilon ^3 |\log \eps| ^{\infty}$ using~\eqref{eq:control smoothing},~\eqref{eq:control S2} and the fact that $|s_{n+1}-s_n|\leq C \varepsilon$. All in all:
\begin{equation}\label{eq:change phase 2}
\int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(t + \partial_s S - \frac{1}{2}\varepsilon t ^2 k_n \right) ^2}{1-\varepsilon k_n t} g^2 = \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n \right) ^2}{1-\varepsilon k_n t} g ^2 + \mathcal{O} (\varepsilon ^3 |\log \eps| ^{\infty}).
\end{equation}
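As a sanity check of the orders of magnitude involved: $ |\mathcal{C}_n| = \ell_{\eps} t_{\eps} = \mathcal{O}(\varepsilon |\log \eps|) $, so that, provided the trial profile $ g $ stays uniformly bounded (as its construction suggests), the bound \eqref{eq:control S2} alone already gives
$$ \int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \frac{\left(\partial_s S_{\rm glo} \right)^2}{1-\varepsilon k_n t}\, g^2 \leq C \varepsilon^2 \, |\mathcal{C}_n| = \mathcal{O} (\varepsilon^3 |\log \eps|), $$
while the cross term is small because $ \chi_n $ itself is, by \eqref{eq:control smoothing}.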
\paragraph{Step 3. The 1D functional inside each cell.} We now have to estimate an essentially 1D functional in each cell, closely related to~\eqref{eq:1D func}:
\begin{equation}\label{eq:change density}
\int_{\mathcal{C}_n} \mathrm{d} t \, \mathrm{d} s \: \left( 1 - \varepsilon k_n t \right) \bigg\{ \left| \partial_t g \right|^2 + \frac{\varepsilon^2}{(1 - \varepsilon k_n t)^2} \left| \partial_s g \right|^2 + \frac{\left(t + \alpha_n - \frac{1}{2}\varepsilon t ^2 k_n \right) ^2}{(1-\varepsilon k_n t)^2} g^2 - \frac{1}{2b} \left(2 g^2 - g^4 \right) \bigg\}.
\end{equation}
We may now expand $g$ according to~\eqref{eq:trial density} in the above expression and use the variational equation~\eqref{eq:var eq fal} to cancel the first order terms in $\chi_n$. This yields
\begin{multline}\label{eq:change density 2}
\int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left( 1 - \varepsilon k_n t \right) \left\{ \left| \partial_t g \right|^2 + \textstyle\frac{\varepsilon^2}{(1 - \varepsilon k_n t)^2} \left| \partial_s g \right|^2 + V_{k_n}(t) g^2 - \frac{1}{2b} \left(2 g^2 - g^4 \right) \right\} = \ell_{\eps} E^{\mathrm{1D}}_{\star} (k_n)
\\ + \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left( 1 - \varepsilon k_n t \right) \left\{ |\partial_t \chi_n| ^2 + \textstyle\frac{\varepsilon ^2}{(1-\varepsilon t k_n) ^2} |\partial_s \chi_n| ^2 + V_{k_n} \chi_n ^2 + \frac{1}{2b} \left( 6 \chi_n ^2 f_n^2 + 4 \chi_n ^3 f_n + \chi_n ^4 - 2 \chi_n ^2 \right)\right\}
\\ = \ell_{\eps} E^{\mathrm{1D}}_{\star} (k_n) + O(\varepsilon ^3 |\log \eps| ^{\infty}),
\end{multline}
where we only have to use~\eqref{eq:control smoothing} to obtain the final estimate.
\paragraph{Step 4. Riemann sum approximation.} Gathering all the above estimates we obtain
\begin{multline}\label{eq:up bound pre final}
\mathcal{E}^{\mathrm{2D}}_S [g] \leq
\ell_{\eps} \sum _{n=1}^{N_{\eps}} E^{\mathrm{1D}}_{\star} (k_n) \left( 1+ \mathcal{O} (\varepsilon ^2)\right) + \mathcal{O}(\varepsilon ^2 |\log \eps| ^{\infty})
\\ = \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_{\star} (k(s)) + \mathcal{O}(\varepsilon ^2 |\log \eps| ^{\infty}).
\end{multline}
Indeed,~\eqref{eq:vari 1D energy} implies that inside $\mathcal{C}_n$
\begin{equation}
\label{eq:1D energy continuous} \left| E^{\mathrm{1D}}_\star (k_n) - E^{\mathrm{1D}}_\star (k (s))\right| \leq C \varepsilon \ell_{\eps} |\log \eps| ^{\infty} \leq C \varepsilon ^2 |\log \eps| ^{\infty}.
\end{equation}
Recognizing a Riemann sum of $ N_{\eps} \propto \varepsilon ^{-1} $ terms in~\eqref{eq:up bound pre final} and recalling that $E^{\mathrm{1D}}_{\star} (k_n)$ is of order~$ 1 $, irrespective of $n$, thus leads to the second equality in~\eqref{eq:up bound pre final}. Combining~\eqref{eq:ener trial} and~\eqref{eq:up bound pre final} we obtain~\eqref{eq:up bound bound}, which concludes the proof of Lemma~\ref{lem:up bound bound} and hence that of Proposition~\ref{pro:up bound}, via Lemma \ref{lem:up bound}.
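For the reader's convenience, the Riemann-sum estimate used above reads explicitly
$$ \bigg| \ell_{\eps} \sum_{n=1}^{N_{\eps}} E^{\mathrm{1D}}_{\star} (k_n) - \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_{\star} (k(s)) \bigg| \leq \sum_{n=1}^{N_{\eps}} \int_{s_n}^{s_{n+1}} \mathrm{d} s \: \left| E^{\mathrm{1D}}_{\star} (k_n) - E^{\mathrm{1D}}_{\star} (k(s)) \right| \leq C |\partial \Omega| \, \varepsilon^2 |\log \eps| ^{\infty}, $$
while the multiplicative correction costs at most $ \ell_{\eps} \sum_n E^{\mathrm{1D}}_{\star}(k_n) \, \mathcal{O}(\varepsilon^2) = \mathcal{O}(\varepsilon^2) $, since $ N_{\eps} \ell_{\eps} = \mathcal{O}(1) $.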
\end{proof}
\section{Energy Lower Bound}
\label{sec:low bound}
The main result proven in this section is the following
\begin{pro}[\textbf{Energy lower bound}]
\label{pro:energy lb}
\mbox{}\\
Let $\Omega\subset \mathbb{R} ^2$ be any smooth simply connected domain. For any fixed $1<b<\Theta_0 ^{-1}$, in the limit $ \varepsilon \to 0$, it holds
\begin{equation}\label{eq:energy lb}
E_{\eps}^{\mathrm{GL}} \geq \frac{1}{\varepsilon} \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) - C \varepsilon |\log\varepsilon|^{\infty}.
\end{equation}
\end{pro}
We first reduce the problem to the study of decoupled functionals in the boundary layer in Subsection~\ref{sec:low bound pre} and then provide lower bounds to these in Subsection~\ref{sec:low bound main}, which contains the main new ideas of our proof.
\subsection{Preliminary reductions}\label{sec:low bound pre}
As in Section~\ref{sec:up bound}, the starting point is a restriction to the boundary layer together with a replacement of the vector potential. We refer to the proof of Lemma \ref{lem:up bound} and in particular \eqref{eq:tubular coordinates} for the definition of the boundary coordinates.
\begin{lem}[\textbf{Reduction to the boundary functional}]
\label{lem:low bound}
\mbox{}\\
Under the assumptions of Proposition \ref{pro:energy lb}, it holds
\begin{equation}
\label{eq:energy lb ann}
E_{\eps}^{\mathrm{GL}} \geq \frac{1}{\varepsilon} \mathcal{G}_{\mathcal{A}_{\eps}}[\psi] -C \varepsilon^2 |\log\varepsilon|^2,
\end{equation}
with $ \psi(s,t) = \Psi^{\mathrm{GL}}(\mathbf{r}(s,\varepsilon t)) e^{-i \phi_{\varepsilon}(s,t)} $ in $ \mathcal{A}_{\eps} $, where $ \phi_{\varepsilon}(s,t) $ is a global phase defined in \eqref{eq: gauge phase} below and $\mathcal{G}_{\mathcal{A}_{\eps}}$ is the boundary functional defined in~\eqref{eq:GL func bound}.
\end{lem}
\begin{proof}
A simplified version of the result for disc samples is proven in \cite[Proposition 4.1]{CR2}, where a rougher lower bound is also derived for general domains. This latter result is obtained by dropping the curvature dependent terms from the energy, which was sufficient for the analysis contained there. Here we need more precision in order to obtain a remainder term of order $ o(\varepsilon) $. We highlight here the main steps and skip most of the technical details.
A suitable partition of unity together with the standard Agmon estimates (see~\cite[Section 14.4]{FH1}) allows us to restrict the integration to the boundary layer:
\begin{equation}
E_{\eps}^{\mathrm{GL}} \geq \int_{\tilde{\mathcal{A}}_{\eps}} \mathrm{d} \mathbf{r} \left\{ \left| \left( \nabla + i \textstyle\frac{\mathbf{A}^{\mathrm{GL}}}{\varepsilon^2} \right) \Psi_1 \right|^2 - \textstyle\frac{1}{2 b \varepsilon^2} \left[ 2|\Psi_1|^2 - |\Psi_1|^4 \right] \right\} + \mathcal{O}( \varepsilon^{\infty}),
\end{equation}
where $ \Psi_1 $ is given in terms of $ \Psi^{\mathrm{GL}} $ in the form $ \Psi_1 = f_1 \Psi^{\mathrm{GL}} $ for some function $ 0 \leq f_1 \leq 1 $, depending only on the normal coordinate $t$, with support containing the set $ \tilde{\mathcal{A}}_{\eps} $ defined by \eqref{ann} and contained in
\begin{displaymath}
\{ \mathbf{r} \in \Omega \: | \: \mathrm{dist}(\mathbf{r}, \partial \Omega) \leq C \varepsilon|\log\varepsilon| \}
\end{displaymath}
for a possibly large constant $ C $. The constant $ c_0 $ in the definition \eqref{ann} of the boundary layer has to be chosen large enough, but the choice of the support of $ f_1 $ is otherwise arbitrary, and one can clearly pick $ f_1 $ in such a way that $ f_1 = 1 $ in $ \tilde{\mathcal{A}}_{\eps} $ and $ f_1 $ goes smoothly to $ 0 $ outside of it.
The second ingredient of the proof is the replacement of the magnetic potential $ \mathbf{A}^{\mathrm{GL}} $, but this can be done following the same strategy applied to disc samples in \cite[Eqs. (4.18) -- (4.26)]{CR2}, whose estimates are not affected by the dependence of the curvature on $ s $. The crucial properties used there are indeed provided by the Agmon estimates, see below. The phase factor involved in the gauge transformation is explicitly given by
\begin{equation}
\label{eq: gauge phase}
\phi_{\varepsilon}(s,t) : = - \frac{1}{\varepsilon} \int_{0}^{t} \mathrm{d} \eta \: \bm{\nu}(s) \cdot \mathbf{A}^{\mathrm{GL}}(\mathbf{r}(s, \varepsilon \eta)) + \frac{1}{\varepsilon^2} \int_{0}^s \mathrm{d} \xi \: \bm{\gamma}^{\prime}(\xi) \cdot \mathbf{A}^{\mathrm{GL}}(\mathbf{r}(\xi,0)) - \delta_{\varepsilon} s.
\end{equation}
The overall prefactor $ \varepsilon^{-1} $ in the energy is then inherited from the rescaling of the normal coordinate $ \tau = \varepsilon t $ in the tubular neighborhood of the boundary. Note here the use of a different convention with respect to both \cite{CR2,FH1}, where the tangential coordinate $ s $ was rescaled too.
\end{proof}
We need to rephrase some well-known decay estimates in a form suited to our needs. The Agmon estimates proven in \cite[Eq. (12.9)]{FH2} can be translated into analogous bounds applying to $ \psi(s,t) = \Psi^{\mathrm{GL}}(\mathbf{r}(s,\varepsilon t)) e^{-i \phi_{\varepsilon}(s,t)} $ in $ \mathcal{A}_{\eps} $: for some constant $A>0$ it holds
\begin{equation}
\label{eq:Agmon est}
\int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k(s) t) \: e^{ A t } \left\{ \left| \psi(s,t) \right|^2 + \left| \left( \left(\varepsilon\partial_s,\partial_t\right) + i \textstyle\frac{\tilde{\mathbf{A}}(s,t)}{\varepsilon} \right) \psi(s,t) \right|^2 \right\} = \mathcal{O}(1),
\end{equation}
with (see, e.g., \cite[Eqs. (4.19) -- (4.20)]{CR2})
\begin{equation}
\label{eq: fake magnp}
\tilde{\mathbf{A}}(s,t) : = \left( (1 - \varepsilon k(s) t) \bm{\gamma}^{\prime}(s) \cdot \mathbf{A}^{\mathrm{GL}}(\mathbf{r}(s,\varepsilon t)) + \varepsilon^2 \partial_s \phi_{\varepsilon} \right) {\bf e}_s.
\end{equation}
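A typical use of \eqref{eq:Agmon est} is the following: for any $ 0 \leq t_0 \leq t_{\eps} $,
$$ \int_{\mathcal{A}_{\eps} \cap \{t \geq t_0\}} \mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k(s) t) \left| \psi \right|^2 \leq e^{-A t_0} \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k(s) t) \, e^{A t} \left| \psi \right|^2 = \mathcal{O}\left(e^{-A t_0}\right), $$
which for $ t_0 $ of order $ c_0 |\log\varepsilon| $ is $ \mathcal{O}(\varepsilon^{A c_0}) $, i.e., smaller than any power of $ \varepsilon $ if $ c_0 $ is large enough. We will repeatedly exploit variants of this mechanism below.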
In addition we are going to use two further bounds proven in \cite[Eqs. (10.21) and (11.50)]{FH2}:
\begin{equation}
\label{eq:sup est glm}
\left\| \psi \right\|_{L^{\infty}(\mathcal{A}_{\eps})} \leq 1, \qquad \left\| (\varepsilon \partial_s, \partial_t) \psi \right\|_{L^{\infty}(\mathcal{A}_{\eps})} \leq C.
\end{equation}
These bounds imply the following
\begin{lem}[\textbf{Useful consequences of Agmon estimates}]
\label{lem:Agmon est alternative}
\mbox{} \\
Let $ \bar{t} = c_0|\log\varepsilon| (1 + o(1)) $, with $ c_0 $ large enough. Then, for any $ a, b, s_0 \in [0, 2\pi) $,
\begin{equation}
\label{eq:Agmon decay}
\int_a^b \mathrm{d} s \: |\psi(s,\bar{t})| = \mathcal{O}(\varepsilon^{\infty}), \qquad \int_{\bar t}^{c_0|\log\varepsilon|} \mathrm{d} t \: |\psi(s_0,t)| = \mathcal{O}(\varepsilon^{\infty}).
\end{equation}
\end{lem}
\begin{proof}
We start by considering the first estimate: let $ \chi(t) $ be a suitable smooth function with support in $ [t_1,\bar{t}] $, with $ t_1 = \bar{t} - c $, for some $ c > 0 $, and such that $ 0 \leq \chi \leq 1$, $ \chi(\bar{t}) = 1 $ and $ |\partial_t \chi| \leq C $. Then one has
\bml{
\int_a^b \mathrm{d} s \: |\psi(s,\bar{t})| = \int_a^b \mathrm{d} s \: \chi(\bar{t}) |\psi(s,\bar{t})| = \int_a^b \mathrm{d} s \int_{t_1}^{\bar{t}} \mathrm{d} t \: \left[ \chi(t) \partial_t|\psi(s,t)| + |\psi(s,t)| \partial_t\chi(t) \right] \\
\leq C e^{-\frac{1}{2}A t_1} \bigg\{ \bigg[ \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: e^{At} \left| \partial_t|\psi(s,t)| \right|^2 \bigg]^{1/2} + \bigg[ \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: e^{At} \left| \psi(s,t) \right|^2 \bigg]^{1/2} \bigg\} = \mathcal{O}(\varepsilon^{\infty}),
}
by \eqref{eq:Agmon est}, the diamagnetic inequality and the assumption on $ t_1 $ and $ \bar{t} $. Indeed the factor $ e^{-\frac{1}{2}A t_1} = \varepsilon^{\frac{1}{2}A c_0(1+o(1))} $ can be made smaller than any power of $ \varepsilon $ by taking $ c_0 $ large enough.
For the second estimate we use a tangential cut-off function, i.e., a smooth monotone function $ \chi(s) $ with support\footnote{Let us assume that $ 2\pi - s_0 > C > 0 $, otherwise one can take as a support for $ \chi $ the complement set, i.e., $ [0, s_0] $.} in $ [s_0, 2\pi] $, such that $ 0 \leq \chi \leq 1$, $ \chi(s_0) = 1 $, $ \chi(2\pi) = 0 $, and $ |\partial_s \chi| \leq C $. Then as in the estimate above (recall that $ t_{\eps} : = c_0|\log\varepsilon| $)
\bml{
\int_{\bar{t}}^{t_{\eps}} \mathrm{d} t \: |\psi(s_0,t)| = \int_{\bar{t}}^{t_{\eps}} \mathrm{d} t \: \chi(s_0) |\psi(s_0,t)| = - \int_{s_0}^{2\pi} \mathrm{d} s \int_{\bar{t}}^{t_{\eps}} \mathrm{d} t \: \left[ \chi(s) \partial_s|\psi(s,t)| + |\psi(s,t)| \partial_s\chi(s) \right] \\
\leq C e^{-\frac{1}{2}A \bar{t}} \bigg\{ \varepsilon^{-1} \bigg[ \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: e^{At} \left| \varepsilon \partial_s|\psi(s,t)| \right|^2 \bigg]^{1/2} + \bigg[ \int_{\mathcal{A}_{\eps}} \mathrm{d} s \mathrm{d} t \: e^{At} \left| \psi(s,t) \right|^2 \bigg]^{1/2} \bigg\} = \mathcal{O}(\varepsilon^{\infty}),
}
where the main ingredients are again \eqref{eq:Agmon est}, the diamagnetic inequality and the assumption on~$\bar{t}$.
\end{proof}
We now introduce some reduced energy functionals defined over the cells we have introduced before, see Subsection~\ref{sec:trial state} for the notation.
We are going to perform an energy decoupling \`a la Lassoued-Mironescu~\cite{LM} in each cell: we write
\begin{equation}
\label{eq:splitting psi}
\psi(s,t) = : u_n(s,t) f_n(t) \exp \left\{-i \textstyle\left(\frac{\alpha_n}{\varepsilon} + \delta_{\varepsilon}\right)s \right\},
\end{equation}
and introduce the reduced functionals
\begin{equation}
\label{eq:Ej}
\mathcal{E}_n [u] : = \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \left(1 - \varepsilon k_n t \right) f_n^2 \left\{ \left| \partial_t u \right|^2 + \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u \right|^2 - 2 \varepsilon b_n(t) J_s[u] + \textstyle\frac{1}{2 b} f_n^2 \left(1 - |u|^2 \right)^2 \right\},
\end{equation}
with
\begin{equation}
b_n(t) : = \frac{t + \alpha_n - \mbox{$\frac{1}{2}$} \varepsilon k_n t^2}{(1 - \varepsilon k_n t)^2},
\end{equation}
and
\begin{equation}
J_s[u] : = (i u, \partial_s u) = \textstyle\frac{i}{2} \left(u^* \partial_s u - u \partial_s u^*\right).
\end{equation}
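To fix ideas, if $ u = \varrho e^{i\varphi} $ locally, with $ \varrho, \varphi $ real-valued and smooth, then a direct computation from the above definition gives
$$ J_s[u] = \textstyle\frac{i}{2} \left( 2 i \varrho^2 \partial_s \varphi \right) = - \varrho^2 \partial_s \varphi, $$
i.e., $ J_s $ is, up to sign, the usual (super)current associated with the phase $ \varphi $.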
Note that in~\eqref{eq:Ej} the curvature is approximated by its mean value in the cell $\mathcal{C}_n$. These objects play a crucial role in the sequel, as shown by the following
\begin{lem}[\textbf{Lower bound in terms of the reduced functionals}]\label{lem:reduc func}\mbox{}\\
With the previous notation
\begin{equation}\label{eq:reduc func}
\mathcal{G}_{\mathcal{A}_{\eps}} [\psi] \geq \int_{0}^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_{\star}(k(s)) + \sum_{n=1} ^{N_{\eps}} \mathcal{E}_n[u_n] - C\varepsilon^2 |\log \eps| ^{\infty}
\end{equation}
\end{lem}
\begin{proof}
With the above cell decomposition, we can estimate
\begin{equation}
\label{step 0}
\mathcal{G}_{\mathcal{A}_{\eps}}[\psi] \geq \sum_{n=1}^{N_{\eps}} \mathcal{E}^{\mathrm{GL}}_{n}[\psi] -C \varepsilon ^2 |\log\varepsilon|^{\infty},
\end{equation}
where
\begin{equation}
\mathcal{E}^{\mathrm{GL}}_{n}[\psi] : = \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \left(1 - \varepsilon k_n t \right) \left\{ \left| \partial_t \psi \right|^2 + \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \left(\varepsilon \partial_s + i a_n(t) \right) \psi \right|^2 - \textstyle\frac{1}{2 b} \left[ 2|\psi|^2 - |\psi|^4 \right] \right\},
\end{equation}
and
\begin{equation}
a_n(t) : = - t + \mbox{$\frac{1}{2}$} \varepsilon k_n t^2 + \varepsilon \delta_{\varepsilon}.
\end{equation}
The remainder term has been estimated as follows: the replacement of $ k(s) $ by $ k_n $ produces two different remainder terms which can be estimated separately, i.e.,
\begin{equation}
\label{eq:lb remainder 1}
\int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(k(s) - k_n \right) t \left\{ \left| \partial_t \psi \right|^2 - \textstyle\frac{1}{2 b} \left[ 2|\psi|^2 - |\psi|^4 \right] \right\} = \mathcal{O}(\varepsilon^2|\log\varepsilon|^{\infty}),
\end{equation}
\begin{equation}
\label{eq:lb remainder 2}
\frac{1}{\varepsilon} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left\{ \textstyle\frac{1}{1- \varepsilon k(s) t} \left| \left(\varepsilon \partial_s + i a_{\varepsilon}(s,t) \right) \psi \right|^2 - \textstyle\frac{1}{1- \varepsilon k_n t} \left| \left(\varepsilon \partial_s + i a_n(t) \right) \psi \right|^2 \right\} = \mathcal{O}(\varepsilon^2|\log\varepsilon|^{\infty}).
\end{equation}
In estimating the first error term \eqref{eq:lb remainder 1}, we use the fact that
$$ k(s) - k_n = \mathcal{O}(\varepsilon)$$
and the bounds \eqref{eq:sup est glm} together with the cell size. For the second estimate the same ingredients are sufficient as well, in addition to the simple bound
\begin{displaymath}
\sup_{(s,t) \in \mathcal{C}_n} \left| a_{\varepsilon}(s,t) - a_n(t) \right| \leq C \varepsilon \sup_{(s,t) \in \mathcal{C}_n} \left|k(s) - k_n \right| |\log\varepsilon|^2 = \mathcal{O}(\varepsilon^2 |\log\varepsilon|^2).
\end{displaymath}
Inside any given cell $ \mathcal{C}_n $ we can then decouple the functional in the usual way (see~\cite[Lemma~5.2]{CR2} for a statement in this context) to obtain
\begin{equation}
\label{eq:splitting}
\mathcal{E}^{\mathrm{GL}}_{n}[\psi] = E^{\mathrm{1D}}_{\star}(k_n) \ell_{\eps} + \mathcal{E}_n[u_n].
\end{equation}
The first term in~\eqref{eq:splitting} is a Riemann sum approximation of the leading order term in~\eqref{eq:energy lb}: using \eqref{eq:1D energy continuous}, we immediately get
\bml{
\label{eq:Riemann sum approx}
\sum_{n = 1}^{N_{\eps}} E^{\mathrm{1D}}_{\star}(k_n) \ell_{\eps} = \sum_{n = 1}^{N_{\eps}} E^{\mathrm{1D}}_{\star}(k_n) (s_{n+1} - s_n) \\
= \sum_{n = 1}^{N_{\eps}}\int_{s_n}^{s_{n+1}} \mathrm{d} s \left[ E^{\mathrm{1D}}_{\star}(k(s)) + \mathcal{O}(\varepsilon^2 |\log \eps| ^{\infty}) \right] = \int_{0}^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_{\star}(k(s)) + \mathcal{O}(\varepsilon^2 |\log \eps| ^{\infty}),
}
which concludes the proof.
\end{proof}
\subsection{Lower bounds to reduced functionals}\label{sec:low bound main}
In view of our previous reductions in Lemma~\ref{lem:reduc func}, the final lower bound~\eqref{eq:energy lb} is a consequence of the following lemma.
\begin{lem}[\textbf{Lower bound on the reduced functionals}]\label{lem:bound reduc func}\mbox{}\\
With the previous notation, we have
\begin{multline}\label{eq:bound reduc func}
\sum_{n=1} ^{N_{\eps}} \mathcal{E}_n [u_n] \geq |\log \varepsilon| ^{-4} \sum_{n = 1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n^2 \left[ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right] \\
+ \displaystyle\frac{1}{2 b}\sum_{n = 1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n ^4 \left( 1 - |u_n|^2 \right)^2 - C \varepsilon ^2 |\log \eps| ^{\infty}.
\end{multline}
\end{lem}
Proposition~\ref{pro:energy lb} now follows by a combination of Lemmas~\ref{lem:low bound},~\ref{lem:reduc func} and~\ref{lem:bound reduc func} because the two sums on the right-hand side of~\eqref{eq:bound reduc func} are positive. These terms will prove useful to obtain our density and degree estimates in Section~\ref{sec:density degree}.
We can now focus on the proof of Lemma~\ref{lem:bound reduc func}, which is the core argument of the proof of Proposition~\ref{pro:energy lb}.
\begin{proof}[Proof of Lemma \ref{lem:bound reduc func}]
The proof is split into two rather different steps. In the first one we essentially follow the strategy of~\cite[Section 5.2]{CR2} to control the main part of the only potentially negative term in~\eqref{eq:Ej}. This is done locally inside each cell and uses mainly the positivity of the cost function, Lemma~\ref{lem:K positive}. This strategy however involves an application of Stokes' formula and subsequent further integrations by parts to put the resulting terms in a form (involving only first order derivatives, see \eqref{step 1}) that can be compared with the kinetic terms. This produces unphysical surface terms located on the boundaries of the (rather artificial) cells we have introduced. The second step of the proof consists in controlling these, which requires summing them all and reorganizing the sum in a convenient manner. It is only in this step that we cease to work locally inside each cell.
\paragraph{Step 1. Lower bound inside each cell.} First, we split the integration into two regions, one where a suitable lower bound on the density $ f_n $ holds true and another one yielding only a very small contribution. More precisely we set
\begin{equation}
\mathcal{R}_n : = \left\{ (s,t) \in \mathcal{C}_n: f_n(t) \leq |\log\varepsilon|^3 f_n(t_{\eps}) \right\}.
\end{equation}
Note that the monotonicity of $ f_n $ for large $ t $ (see Proposition \ref{pro:min fone}) guarantees that
\begin{equation}
\label{tj}
\mathcal{R}_n = [s_n, s_{n+1}] \times [\bar{t}_{n,\varepsilon}, t_{\eps}], \qquad \bar{t}_{n,\varepsilon} = t_{\eps} + \mathcal{O}(\log|\log\varepsilon|).
\end{equation}
Now we use the potential function $ F_n(t) $ defined as
\begin{equation}
F_n(t) : = 2 \int_0^t \mathrm{d} \eta \: (1 - \varepsilon k_n \eta) f_n^2(\eta) b_n(\eta) = 2 \int_0^t \mathrm{d} \eta \: f_n^2(\eta) \frac{\eta + \alpha_n - \mbox{$\frac{1}{2}$} \varepsilon k_n \eta^2}{1 - \varepsilon k_n \eta},
\end{equation}
and compute
\begin{equation}
\label{eq:step 0}
- 2 \varepsilon \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n^2(t) b_{n}(t) J_s[u_n] = \varepsilon \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: F_n (t) \partial_t J_s[u_n],
\end{equation}
where we have used the identity $ 2 (1 - \varepsilon k_n t) f_n^2(t) b_n(t) = F_n^{\prime}(t) $, an integration by parts in $ t $, and the vanishing of $ F_n $ at $ t = 0 $ and $ t = t_{\eps} $. Now we split the r.h.s. of the above expression into an integral over $ \mathcal{D}_n : = \mathcal{C}_n \setminus \mathcal{R}_n $ and a remainder. In order to compare the first part with the kinetic energy and show that the sum is positive, we have to perform another integration by parts:
\bml{
\label{step 1}
\varepsilon \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: F_n(t) \partial_t J_s[u_n] = 2 \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{s_{n+1}} \mathrm{d} s \: F_n(t) \left( i \partial_t u_n, \partial_s u_n \right) \\ + \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right].
}
The first term in~\eqref{step 1} can be bounded by using some kinetic energy:
\bml{
\label{step 3}
2 \varepsilon \int_{\mathcal{D}_n} \mathrm{d} t \mathrm{d} s \: F_n(t) \left( i \partial_t u_n, \partial_s u_n \right) \geq - 2 \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: \left| F_n (t) \right| \left| \partial_t u_n \right| \left| \varepsilon \partial_s u_n \right| \\
\geq \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k_n t) F_n (t) \left[ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1 -
\varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right],
}
where we have used the inequality $ab\leq \frac{1}{2} (\delta a^2 + \delta ^{-1} b^2)$ with $ a = |\partial_t u_n| $, $ b = |\varepsilon \partial_s u_n| $ and $ \delta = 1 - \varepsilon k_n t $, together with the negativity of $ F_n(t) $ (see Lemma~\ref{lem:F prop}), which gives $ - 2 |F_n| a b \geq F_n \left( \delta a^2 + \delta^{-1} b^2 \right) $. Combining the above lower bound with~\eqref{eq:Ej},~\eqref{eq:step 0} and~\eqref{step 1}, and dropping the part of the kinetic energy located in $ \mathcal{R}_n $, we get
\bml{
\label{step 4}
\mathcal{E}_n[u_n] \geq \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) K_n(t) \bigg[ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \bigg] \\
+ \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right] + \varepsilon \displaystyle\int_{\mathcal{R}_n} \mathrm{d} s \mathrm{d} t \: F_n (t) \partial_t J_s[u_n] \\
+ d_{\varepsilon} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n^2 \left[ \left| \partial_t u_n \right|^2
+ \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right] \\+ \displaystyle\frac{1}{2 b} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n ^4 \left( 1 - |u_n|^2 \right)^2,
}
where
\begin{equation}
\label{eq:Kj}
K_n (t) : = K_{k_n}(t),
\end{equation}
is the cost function defined in \eqref{eq:Kk}, for some given $d_{\varepsilon}$ satisfying \eqref{eq:de}. The third term in \eqref{step 4} is bounded from below by a quantity smaller than any power of $ \varepsilon $, provided $ c_0 $ is chosen large enough. This is shown using the same strategy as in \cite[Eq. (5.21) and the following discussion]{CR2}, and we skip the details for the sake of brevity. For the first term we use the positivity of $K_n$ provided by Lemma~\ref{lem:K positive}. We then conclude
\bml{
\label{eq:step 4}
\mathcal{E}_n[u_n] \geq \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right] \\
+ d_{\varepsilon} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n^2 \left[ \left| \partial_t u_n \right|^2
+ \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right] \\+ \displaystyle\frac{1}{2 b} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n ^4 \left( 1 - |u_n|^2 \right)^2 + \mathcal{O} (\varepsilon ^{\infty}),
}
and it only remains to bound the first term on the r.h.s. from below. We are not actually able to bound the term coming from cell $ n $ separately, so in the next step we put back the sum over cells.
\paragraph{Step 2. Summing and controlling boundary terms.} We now conclude the proof of~\eqref{eq:bound reduc func} by proving the following inequality:
\begin{multline}\label{eq:control boundary terms}
\varepsilon \sum_{n=1} ^{N_{\eps}} \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right] \geq \\ -C|\log \varepsilon| ^{-5} \sum_{n=1} ^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t \right) f_n^2 \left[ \left| \partial_t u_n \right|^2
+ \textstyle\frac{1}{(1- \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right] -C \varepsilon ^2 |\log \eps| ^{\infty}.
\end{multline}
Grouping~\eqref{eq:step 4} and~\eqref{eq:control boundary terms} and choosing $d_{\varepsilon} = 2 |\log \varepsilon| ^{-4}$ (which we are free to do) concludes the proof.
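Indeed, with this choice the kinetic terms generated by \eqref{eq:step 4} and \eqref{eq:control boundary terms} combine with a prefactor
$$ d_{\varepsilon} - C |\log\varepsilon|^{-5} = 2 |\log\varepsilon|^{-4} - C |\log\varepsilon|^{-5} \geq |\log\varepsilon|^{-4} \qquad \mbox{as soon as } |\log\varepsilon| \geq C, $$
which is the coefficient appearing in \eqref{eq:bound reduc func}.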
We turn to our claim~\eqref{eq:control boundary terms}. Once we have put back the sum over all cells, the idea is to pair the two terms evaluated on the same cell boundary, which come from two adjacent cells and therefore contain two different densities:
\bml{
\label{step 5}
\varepsilon \sum_{n=1}^{N_{\eps}} \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right] \\
= \varepsilon \sum_{n=1}^{N_{\eps}} \bigg[ \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \left[ F_n(t) J_t[u_n](s_n,t) - F_{n+1}(t) J_t[u_{n+1}](s_n,t) \right] + R_n \bigg],
}
where, assuming without loss of generality that $ \bar{t}_{n,\varepsilon} < \bar{t}_{n+1,\varepsilon} $,
\begin{equation}
R_n : = - \int_{\bar{t}_{n,\varepsilon}}^{\bar{t}_{n+1,\varepsilon}} \mathrm{d} t \: F_{n+1}(t) J_t[u_{n+1}](s_{n+1},t).
\end{equation}
If on the other hand $ \bar{t}_{n,\varepsilon} > \bar{t}_{n+1,\varepsilon} $, in~\eqref{step 5} $ \bar{t}_{n,\varepsilon} $ should be replaced with $ \bar{t}_{n+1,\varepsilon} $ and in place of $ R_n $ one would find
\begin{displaymath}
\int_{\bar{t}_{n+1,\varepsilon}}^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: F_{n}(t) J_t[u_{n}](s_{n+1},t).
\end{displaymath}
In other words the remainder $ R_n $ is inherited from the fact that the decomposition $ \mathcal{C}_n = \mathcal{D}_n \cup \mathcal{R}_n $ clearly depends on $ n $ and the boundary terms in~\eqref{step 5} do not compensate exactly. However it is clear from what follows that the estimate of such a boundary term is the same in both cases and essentially relies on the second inequality in~\eqref{eq:Agmon decay}: recalling that
$$ f_{n+1}^2(t) + F_{n+1}(t) \geq 0 $$
for any $ t \leq \bar{t}_{n+1,\varepsilon} $, we have
\bml{
|R_n| = \int_{\bar{t}_{n,\varepsilon}}^{\bar{t}_{n+1,\varepsilon}} \mathrm{d} t \: \left| F_{n+1}(t) \right| \left| J_t[u_{n+1}](s_{n+1},t) \right| \leq \int_{\bar{t}_{n,\varepsilon}}^{\bar{t}_{n+1,\varepsilon}} \mathrm{d} t \: f_{n+1}^2(t) \left|u_{n+1}(s_{n+1},t) \right| \left| \partial_t u_{n+1}(s_{n+1},t) \right| \\
\leq \int_{\bar{t}_{n,\varepsilon}}^{\bar{t}_{n+1,\varepsilon}} \mathrm{d} t \: \left|\psi(s_{n+1},t) \right| \left[ \left| \partial_t \psi (s_{n+1},t) \right| + \left|u_{n+1}(s_{n+1},t) \right| \left| \partial_t f_{n+1} (t) \right| \right] \\
\leq C |\log\varepsilon|^3 \int_{\bar{t}_{n,\varepsilon}}^{\bar{t}_{n+1,\varepsilon}} \mathrm{d} t \: \left|\psi(s_{n+1},t) \right| = \mathcal{O}(\varepsilon^{\infty}),
}
where we have used the bounds \eqref{eq:sup est glm} and \eqref{eq:fal derivative}, i.e., $ |f_{n+1}^{\prime}| \leq |\log\varepsilon|^3 f_{n+1}(t) $. The identity \eqref{step 5} hence yields
\bml{
\label{step 6}
\varepsilon \sum_{n=1}^{N_{\eps}} \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t F_n(t) \left[ J_t[u_n](s_{n+1}, t) - J_t[u_n](s_{n}, t) \right] \\
= \varepsilon \sum_{n=1}^{N_{\eps}} \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \left[ F_n(t) J_t[u_n](s_n,t) - F_{n+1}(t) J_t[u_{n+1}](s_n,t) \right] + \mathcal{O}(\varepsilon^{\infty}).
}
Using now the definitions~\eqref{eq:splitting psi} of $ u_n $ and $ u_{n+1} $, we get
\begin{equation}
u_{n+1}(s,t) = \frac{f_n(t)}{f_{n+1}(t)} e^{i(\alpha_{n+1} - \alpha_n) s/\varepsilon} u_n(s,t),
\end{equation}
so that
\begin{equation}
J_t[u_{n+1}](s_n,t) = i G_{n,n+1}(t) G_{n,n+1}^{\prime}(t) \left| u_n(s_n,t) \right|^2 + G_{n,n+1}^2(t) J_t[u_n](s_n,t),
\end{equation}
where we have set
\begin{equation}
G_{n,n+1}(t) : = \frac{f_n(t)}{f_{n+1}(t)}.
\end{equation}
Then we can compute
\begin{multline*}
\varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \left[ F_n J_t[u_n](s_n,t) - F_{n+1} J_t[u_{n+1}](s_n,t) \right]
\\ = \varepsilon\int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: \left[ F_n(t) - F_{n+1}(t) G_{n,n+1}^2(t) \right] J_t[u_n](s_n,t) \\
- \frac{i\varepsilon}{2} \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: F_{n+1}(t) \partial_t \left( G_{n,n+1}^2(t) \right) \left| u_n(s_n,t) \right|^2,
%
\end{multline*}
but we know that the l.h.s. of the above expression is real, so that we can take the real part of the identity above, obtaining
\begin{equation}
\varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \left[ F_n J_t[u_n](s_n,t) - F_{n+1} J_t[u_{n+1}](s_n,t) \right] = \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: \left[ F_n - F_{n+1} G_{n,n+1}^2 \right] J_t[u_n](s_n,t).
\end{equation}
To estimate the r.h.s., we undo the integration by parts by introducing a suitable cut-off function. Let, for any given $ n = 1, \ldots, N_{\eps} $, $ \chi_n(s) $ be a suitable smooth function, such that
$$ \chi_n(s_n) = 1, \qquad \chi_n\left(\textstyle\frac12 ( s_{n} + s_{n+1})\right) = 0 $$
and
\begin{equation}
\left[ s_n, \mbox{$\frac{1}{2}$} \left( s_{n} + s_{n+1} \right) \right)\subset \mathrm{supp}(\chi_n), \qquad \left| \partial_s \chi_n \right| \leq C{\varepsilon^{-1}}.
\end{equation}
We can rewrite
\bml{
\label{step 7}
\varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: \left[ F_n - F_{n+1} G_{n,n+1}^2 \right] J_t[u_n](s_n,t) = \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: \chi_n(s_n) \left[ F_n - F_{n+1} G_{n,n+1}^2 \right] J_t[u_n](s_n,t) \\
= \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \Big\{ \chi_n(s) I_{n,n+1}(t) \partial_s \left( J_t[u_n] \right) + \partial_s \left( \chi_n(s) \right) I_{n,n+1}(t) J_t[u_n] \Big\},
}
where we have set for short (compare with~\eqref{eq:def ijj first})
\begin{equation}
\label{eq:def ijj}
I_{n,n+1}(t) : = F_n(t) - F_{n+1}(t) G_{n,n+1}^2(t) = F_n(t) - F_{n+1}(t) \frac{f_n ^2 (t)}{f_{n+1}^2(t)}.
\end{equation}
The first contribution to~\eqref{step 7} can be cast in a form analogous to~\eqref{step 3}:
\bml{
\label{step 8}
\varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \chi_n(s) I_{n,n+1}(t) \partial_s \left( J_t[u_n] \right) \\
= \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \chi_n(s) \left\{ 2 I_{n,n+1}(t) \left(i \partial_s u_n, \partial_t u_n \right) - \partial_t \left(I_{n,n+1}(t)\right) J_s[u_n] \right\} \\
+ \varepsilon \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \chi_n(s) I_{n,n+1}(\bar{t}_{n,\varepsilon}) J_s[u_n](s,\bar{t}_{n,\varepsilon}).
}
The first term on the r.h.s. can be handled as we did for~\eqref{step 3}:
\bml{
\label{step 9}
2 \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \chi_n(s) I_{n,n+1}(t) \left(i \partial_s u_n, \partial_t u_n \right)
\geq - 2 \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: \left| I_{n,n+1}(t) \right| \left| \varepsilon \partial_s u_n\right| \left| \partial_t u_n \right| \\
\geq - C \varepsilon|\log \eps| ^{\infty} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k_n t) f_n^2 \left[ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1 -
\varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right],
}
where we have used~\eqref{eq:est log cost} with $k= k_n$, $k' = k_{n+1}$ and recalled that $|k_n-k_{n+1}| \leq C \varepsilon$ to bound $I_{n,n+1}$.
The last term in~\eqref{step 8} can be easily shown to provide a small correction: using~\eqref{eq:est log cost} again yields
$$ |I_{n,n+1}(\bar{t}_{n,\varepsilon})| \leq C \varepsilon |\log \eps| ^{\infty} f_n^2(\bar{t}_{n,\varepsilon}) ,$$
so that by~\eqref{eq:Agmon decay} and \eqref{tj}
\bml{
\label{eq:step 10}
\bigg| \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \chi_n(s) I_{n,n+1}(\bar{t}_{n,\varepsilon}) J_s[u_n](s,\bar{t}_{n,\varepsilon}) \bigg| \leq \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: |I_{n,n+1}(\bar{t}_{n,\varepsilon})| \left| J_s[u_n] \right| \\
\leq C \varepsilon |\log \eps| ^{\infty} \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: f_n^2(\bar{t}_{n,\varepsilon}) \left|u_n(s,\bar{t}_{n,\varepsilon})\right| \left|\partial_s u_n(s,\bar{t}_{n,\varepsilon}) \right| \\
\leq C |\log \eps| ^{\infty} \left\| \varepsilon \partial_s \psi \right\|_{\infty} \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)}\mathrm{d} s \: \left|\psi(s,\bar{t}_{n,\varepsilon})\right| = \mathcal{O}(\varepsilon^{\infty}),
}
where we have estimated the $s$-derivative of $ \psi $ by means of~\eqref{eq:sup est glm}.
Hence, combining \eqref{step 8} with \eqref{step 9} and \eqref{eq:step 10}, we can bound~\eqref{step 7} from below as
\bml{
\label{step 10}
\varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \: \left[ F_n - F_{n+1} G_{n,n+1}^2 \right] J_t[u_n](s_n,t) \\
\geq \varepsilon \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \:\left\{ - \partial_t I_{n,n+1} J_s[u_n] + \partial_s \chi_n I_{n,n+1} J_t[u_n] \right\} \\
- C \varepsilon |\log \eps| ^{\infty} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k_n t) f_n^2 \left[ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1 -
\varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right] + \mathcal{O}(\varepsilon^{\infty}).
}
To complete the proof it only remains to estimate the first two terms on the r.h.s. of the expression above, which again requires borrowing a bit of the kinetic energy. Using~\eqref{eq:est log der} we have
$$
\sup_{t \in [0,\bar{t}_{n,\varepsilon}]} \bigg| \frac{\partial_t I_{n,n+1}}{f_n^2} \bigg| \leq C \varepsilon|\log \eps| ^{\infty},
$$
so that
\bml{
\label{step 11}
\varepsilon \bigg| \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \partial_t I_{n,n+1} J_s[u_n] \bigg| \leq C \varepsilon |\log \eps| ^{\infty} \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: f_n^2 \left| u_n \right| \left| \varepsilon \partial_s u_n \right| \\
\leq C \varepsilon |\log \eps| ^{\infty} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: \left[ \textstyle\frac{1}{\delta} \textstyle\frac{1}{1 -
\varepsilon k_n t} f_n^2 \left| \varepsilon \partial_s u_n \right|^2 + \delta |\psi|^2 \right] \\
\leq C |\log\varepsilon|^{-5} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: \textstyle\frac{1}{1 -
\varepsilon k_n t} f_n^2 \left| \varepsilon \partial_s u_n \right|^2 + \mathcal{O}(\varepsilon^3|\log \eps| ^{\infty}),
}
where we have chosen $ \delta = \varepsilon |\log\varepsilon|^a $, for some suitably large $ a > 0 $ to compensate the $ |\log\varepsilon| $ prefactor (this generates the coefficient $ |\log\varepsilon|^{-5} $), and used \eqref{eq:sup est glm} to estimate the remaining term.
For the second term on the r.h.s. of~\eqref{step 10} we proceed in the same way, using first \eqref{eq:est log cost} and the assumption $ \left| \partial_s \chi_n \right| \leq C \varepsilon^{-1} $, to get
\bml{
\label{step 12}
\varepsilon \bigg| \int_0^{\bar{t}_{n,\varepsilon}} \mathrm{d} t \int_{s_n}^{\frac12 \left( s_{n} + s_{n+1}\right)} \mathrm{d} s \: \partial_s \chi_n I_{n,n+1} J_t[u_n] \bigg| \leq C {\varepsilon} |\log \eps| ^{\infty} \int_{\mathcal{D}_n} \mathrm{d} s \mathrm{d} t \: f_n^2 \left| u_n \right| \left| \partial_t u_n \right| \\
\leq C \varepsilon |\log \eps| ^{\infty} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: \left[ \textstyle\frac{1}{\delta} f_n^2 \left|\partial_t u_n \right|^2 + \delta |\psi|^2 \right] \\
\leq C |\log\varepsilon|^{-5} \int_{\mathcal{D}_n}\mathrm{d} s \mathrm{d} t \: f_n^2 \left| \partial_t u_n \right|^2 + \mathcal{O}(\varepsilon^3|\log \eps| ^{\infty}),
}
where we have made the same choice of $ \delta $ as in \eqref{step 11}.
Collecting all the previous estimates yields our claim~\eqref{eq:control boundary terms} (recall that there are $N_{\eps} \propto \varepsilon ^{-1}$ terms to be summed, whence the final error of order $\varepsilon ^2 |\log \eps| ^{\infty}$).
\end{proof}
\section{Density and Degree Estimates}\label{sec:density degree}
In this section we prove the main results about the behavior of $ |\Psi^{\mathrm{GL}}| $ close to the boundary of the sample $ \partial\Omega $ and an estimate of its degree at $ \partial \Omega $.
We first notice that the $L^2$ estimate stated in \eqref{eq:main density} is in fact a trivial consequence of the energy asymptotics \eqref{eq:energy GL}: putting together the lower bounds~\eqref{eq:energy lb ann},~\eqref{eq:reduc func} and~\eqref{eq:bound reduc func} with the upper bound \eqref{eq:up bound GL}, we obtain
\begin{equation}
\label{eq:upper bound nonlinear}
\frac{1}{2\varepsilon b} \sum_{n=1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t\right) f_n^4 \left(1 - |u_n|^2 \right)^2 \leq C \varepsilon |\log\varepsilon|^{\gamma},
\end{equation}
for some power $\gamma$ large enough (recall the meaning of the notation $|\log \eps| ^{\infty}$). Now, using the fact that $ k_n = k(s) \left(1 + \mathcal{O}(\varepsilon) \right) $ inside $ \mathcal{C}_n $, we can easily reconstruct \eqref{eq:main density}, once everything has been expressed in the original unscaled variables and the definitions \eqref{eq:ref profile} and \eqref{eq:splitting psi} have been exploited (recall also that $ |\psi(s,t)| = |\Psi^{\mathrm{GL}}(\mathbf{r}(s,\varepsilon t))| $). See also \cite[Section 4.2]{CR2} for further details.
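Spelled out, and dropping the (positive) kinetic sums in \eqref{eq:bound reduc func}, the comparison of bounds behind \eqref{eq:upper bound nonlinear} reads
\bml{
\frac{1}{\varepsilon} \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) + \frac{1}{2\varepsilon b} \sum_{n=1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t\right) f_n^4 \left(1 - |u_n|^2 \right)^2 - C \varepsilon |\log\varepsilon|^{\infty} \\
\leq E_{\eps}^{\mathrm{GL}} \leq \frac{1}{\varepsilon} \int_0^{|\partial \Omega|} \mathrm{d} s \: E^{\mathrm{1D}}_\star \left(k(s)\right) + C \varepsilon |\log\varepsilon|^{\infty},
}
from which \eqref{eq:upper bound nonlinear} follows at once.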
\medskip
We now focus on the refined density estimate discussed in Theorem \ref{theo:Pan} and the proof of Pan's conjecture. The result is obtained via an adaptation of the arguments used in \cite[Section 5.3]{CR2}, originating in~\cite{BBH1}. The general idea is now rather standard so we will mainly comment on the changes needed to make those arguments work in the present setting.
\begin{proof}[Proof of Theorem \ref{theo:Pan}] The two main ingredients of the proof are the above estimate~\eqref{eq:upper bound nonlinear} and a pointwise bound on the gradient of $ u_n $. Once combined, the two estimates imply that the function $ |u_n| $ cannot be too far from $ 1 $ anywhere in the boundary layer $ \mathcal{A}_{\rm bl} $ (see \eqref{eq:annd} for its precise definition).
\medskip
\emph{Step 1. Gradient estimate.} A minor difference with the setting in \cite[Section 5.3]{CR2} is due to the convention we used to avoid a scaling of the tangential coordinate $ s $. This is just a matter of notation and by following \cite[Proof of Lemma 5.3]{CR2}, we can show that, for any $ n = 1, \ldots, N_{\eps} $,
\begin{equation}
\label{eq:point est grad u}
\left| \partial_t |u_n| \right| \leq C f_n^{-1}(t) |\log\varepsilon|^{3}, \qquad \left| \partial_s |u_n| \right| \leq C f_n^{-1}(t) \varepsilon^{-1}.
\end{equation}
Notice the second estimate above, which is a consequence of not scaling the coordinate $ s $.
We now prove~\eqref{eq:point est grad u}. From the definitions of $\psi$ and $u_n$ we immediately have
\bml{
\left| \partial_t |u_n|(s,t) \right| \leq f_n^{-2}(t) \left| f_n^{\prime}(t) \right| \left| \psi(s,t) \right| + f_n^{-1}(t) \left| \partial_t \left| \psi(s,t) \right| \right| \\
\leq C f_n^{-1}(t) \left[ |\log\varepsilon|^3 + \left| \partial_{t} \left| \psi(s,t) \right| \right| \right],
}
and
$$
\left| \partial_s |u_n|(s,t) \right| \leq f_n^{-1}(t) \left| \partial_s \left| \psi(s,t) \right| \right|
$$
where we have used~\cite[Equation (A.28)]{CR2}. The result is then a consequence of~\cite[Theorem~2.1]{Alm} or~\cite[Equation~(4.9)]{AH} in combination with the diamagnetic inequality (see \cite{LL}), which yield
\begin{equation}
\left| \nabla \left| \Psi^{\mathrm{GL}} \right| \right| \leq \left| \left( \nabla + i \textstyle\frac{\mathbf{A}^{\mathrm{GL}}}{\varepsilon^2} \right) \Psi^{\mathrm{GL}} \right|\leq C \varepsilon ^{-1}
\quad \Longrightarrow \quad \left| \partial_{t} \left| \psi(s,t) \right| \right| + \varepsilon \left| \partial_{s} \left| \psi(s,t) \right| \right|\leq C.
\end{equation}
\medskip
\emph{Step 2. Uniform bound on $u_n$.} We first observe that the estimate $ \left\| f_n - f_0 \right\|_{\infty} = \mathcal{O}(\varepsilon)$ proven in~\eqref{eq:vari 1D opt density} guarantees that
\begin{equation}
f_n (t) \geq \gamma_{\eps}, \qquad \mbox{for any } (s,t) \in \mathcal{C}_n \cap \mathcal{A}_{\rm bl} \mbox{ and any } n = 1, \ldots, N_{\eps}.
\end{equation}
Now we can apply a standard argument to show that $ |u_n| $ cannot differ too much from $ 1 $ in $\mathcal{A}_{\rm bl}$. The proof is done by contradiction. We choose some $ 0 < c < \frac{3}{2} a $ and define
\begin{equation}
\sigma_{\eps} : = \varepsilon^{1/4} \gamma_{\eps}^{-3/2} |\log\varepsilon|^c \ll |\log\varepsilon|^{c-3a/2} \ll 1.
\end{equation}
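Note that the first relation above amounts to a lower bound on $ \gamma_{\eps} $: indeed
$$ \varepsilon^{1/4} \gamma_{\eps}^{-3/2} |\log\varepsilon|^c \ll |\log\varepsilon|^{c - 3a/2} \iff \gamma_{\eps} \gg \varepsilon^{1/6} |\log\varepsilon|^{a}, $$
which is how we read the assumption \eqref{eq:game}, while the second relation is exactly the condition $ c < \frac{3}{2} a $ imposed above.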
Suppose for contradiction that there exists a point $ (s_0,t_0) $ in $ \mathcal{C}_n \cap \mathcal{A}_{\rm bl} $ such that
\begin{displaymath}
\left|1 - |u_n(s_0,t_0)| \right| \geq \sigma_{\eps}.
\end{displaymath}
Then by \eqref{eq:point est grad u} we can construct a rectangle-like region $ R_{\varepsilon} \subset \mathcal{C}_n \cap \mathcal{A}_{\rm bl} $ of tangential length $ \frac{1}{2} \varepsilon \gamma_{\eps} \sigma_{\eps} \ll \varepsilon $ and normal length $ \varrho_{\eps} $ with
\begin{equation}
\varrho_{\eps} : = \gamma_{\eps} \sigma_{\eps} |\log\varepsilon|^{-3} \ll \varepsilon^{1/6} |\log\varepsilon|^{c-3-a/2} \ll |\log\varepsilon|^{1/2},
\end{equation}
on which
\begin{displaymath}
\left|1 - |u_n(s,t)| \right| \geq \textstyle\frac{1}{2} \sigma_{\eps}.
\end{displaymath}
To complete the proof it suffices to estimate from below
\bml{
\frac{1}{\varepsilon} \sum_{n=1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t\right) f_n^4 \left(1 - |u_n|^2\right)^2 \geq \frac{1}{\varepsilon} \int_{R_{\varepsilon}} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t\right) f_n^4 \left(1 - |u_n|^2\right)^2 \\
\geq \gamma_{\eps}^5 \sigma_{\eps}^3 \varrho_{\eps} = \gamma_{\eps}^6 \sigma_{\eps}^4 |\log\varepsilon|^{-3} = \varepsilon |\log\varepsilon|^{4c-3} \gg \varepsilon |\log\varepsilon|^{\gamma},
}
where $ \gamma $ is the power of $ |\log\varepsilon |$ appearing in the r.h.s. of \eqref{eq:upper bound nonlinear} and we have chosen $ c $ so that $ c \geq \frac{1}{4}(\gamma+3) $. Recalling the condition $ a > \frac{2}{3} c $, we also have $ a > \frac{1}{6}(\gamma+3) $, which coincides with the assumption on $ \gamma_{\eps} $ (see \eqref{eq:game}). Under such conditions the estimate above contradicts the upper bound \eqref{eq:upper bound nonlinear} and the result is proven.
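For the reader's convenience, the powers in the chain leading to the contradiction follow by pure bookkeeping from the definitions of $ \sigma_{\eps} $ and $ \varrho_{\eps} $:
$$ \gamma_{\eps}^5 \sigma_{\eps}^3 \varrho_{\eps} = \gamma_{\eps}^6 \sigma_{\eps}^4 |\log\varepsilon|^{-3}, \qquad \gamma_{\eps}^6 \sigma_{\eps}^4 = \gamma_{\eps}^6 \cdot \varepsilon \, \gamma_{\eps}^{-6} |\log\varepsilon|^{4c} = \varepsilon |\log\varepsilon|^{4c}, $$
and $ 4c - 3 \geq \gamma $ is precisely the requirement $ c \geq \frac{1}{4}(\gamma + 3) $.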
\medskip
\emph{Step 3. Conclusion.} Now we know that in $\mathcal{A}_{\rm bl} \cap \mathcal{C}_n$
$$ \left||u_n| - 1\right| \leq \sigma_{\eps}, $$
and it is easy to translate this estimate into an analogous one for $ |\psi(s,t)| $ and therefore $ |\Psi^{\mathrm{GL}}| $. Indeed, in the cell $\mathcal{C}_n$
$$ |\psi| = |\Psi^{\mathrm{GL}}| = f_n |u_n|$$
modulo a change of variables. The final estimate on $|\Psi^{\mathrm{GL}}|$ then involves the reference profile $ g_{\rm ref} $, but the bound $ \left\| f_n - f_0 \right\|_{\infty} = \mathcal{O}(\varepsilon)$ again allows the replacement of $ g_{\rm ref} $ with $ f_0 $.
\end{proof}
We can now turn to the proof of the estimate of the winding number of $ \Psi^{\mathrm{GL}} $ along $ \partial \Omega $.
\begin{proof}[Proof of Theorem \ref{theo:circulation}] Thanks to the positivity of $ g_{\rm ref} $ at $ t = 0 $ (see Lemma \ref{lem:point est fal}) and the result discussed above, $ \Psi^{\mathrm{GL}} $ never vanishes on $ \partial \Omega $ and therefore its winding number is well defined. The rest of the proof follows the lines of \cite[Proof of Theorem 2.4]{CR2}.
The first part is the estimate of the winding number contribution of the phase $ \phi_{\varepsilon}$ involved in the change of gauge $ \psi(s,t) = \Psi^{\mathrm{GL}}(\mathbf{r}(s, \varepsilon t)) e^{-i \phi_{\varepsilon}(s,t)}$, but this can be done exactly as in \cite[Proof of Lemma 5.4]{CR2}:
\bml{
\label{gauge phase}
2 \pi \deg\left(\Psi^{\mathrm{GL}}, \partial \Omega\right) - 2\pi \deg\left(\psi, \partial \Omega \right) = \int_{0}^{|\partial \Omega|} \mathrm{d} s \: \bm{\gamma}^{\prime}(s) \cdot \nabla \phi_{\varepsilon}(s,0) = \int_{0}^{|\partial \Omega|} \mathrm{d} s \: \partial_{s} \phi_{\varepsilon}(s,0) \\
= \phi_{\varepsilon}(|\partial \Omega|,0) - \phi_{\varepsilon}(0,0) = \frac{1}{\varepsilon^2} \int_{0}^{|\partial \Omega|} \mathrm{d} s \: \bm{\gamma}^{\prime}(s) \cdot \mathbf{A}^{\mathrm{GL}}(\mathbf{r}(s,0)) - |\partial \Omega| \delta_{\varepsilon} \\
= \frac{1}{\varepsilon^2} \int_{\Omega} \mathrm{d} \mathbf{r} \: \mbox{curl} \mathbf{A}^{\mathrm{GL}} - |\partial \Omega| \delta_{\varepsilon}.
}
Now by the elliptic estimate \cite[Eq. (11.51)]{FH-book}
\begin{displaymath}
\left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{C^{1}(\Omega)} = \mathcal{O}(\varepsilon),
\end{displaymath}
and the Agmon estimate \cite[Eq. (12.10)]{FH1}
\begin{displaymath}
\left\| \nabla (\mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1) \right\|_{L^1(\Omega\setminus \mathcal{A}_{\eps})} = \mathcal{O}(\varepsilon^{\infty}),
\end{displaymath}
we get
\begin{eqnarray}
\left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{L^{1}(\mathcal{A}_{\eps})} & \leq & C \varepsilon |\log\varepsilon| \left\| \nabla \left( \mbox{curl} \mathbf{A}^{\mathrm{GL}} -1\right) \right\|_{L^{\infty}(\Omega)} = \mathcal{O}(\varepsilon^2 |\log\varepsilon|), \nonumber \\
\left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{L^{1}(\Omega\setminus \mathcal{A}_{\eps})} & \leq & C \left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{L^{2}(\Omega\setminus \mathcal{A}_{\eps})} \nonumber \\
&& \leq C \left\| \nabla (\mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1) \right\|_{L^1(\Omega\setminus \mathcal{A}_{\eps})} = \mathcal{O}(\varepsilon^{\infty}),
\end{eqnarray}
via the Sobolev inequality. Altogether we can thus replace $ \mbox{curl} \mathbf{A}^{\mathrm{GL}} $ with $ 1 $ in \eqref{gauge phase}, obtaining
\begin{equation}
2 \pi \deg\left(\Psi^{\mathrm{GL}}, \partial \Omega\right) - 2\pi \deg\left(\psi, \partial \Omega \right) = \frac{|\Omega|}{\varepsilon^2} + \mathcal{O}(|\log\varepsilon|).
\end{equation}
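Quantitatively, the replacement costs
$$ \frac{1}{\varepsilon^2} \bigg| \int_{\Omega} \mathrm{d} \mathbf{r} \: \left( \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right) \bigg| \leq \frac{1}{\varepsilon^2} \left( \left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{L^{1}(\mathcal{A}_{\eps})} + \left\| \mbox{curl} \mathbf{A}^{\mathrm{GL}} - 1 \right\|_{L^{1}(\Omega \setminus \mathcal{A}_{\eps})} \right) = \mathcal{O}(|\log\varepsilon|), $$
by the two bounds above.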
A minor modification in the proof is then due to the cell decomposition and the use of a different decoupling in each cell: the analogue of \cite[Lemma~5.4]{CR2} is the following
\begin{equation}
\label{eq:circulation u}
\sum_{n = 1}^{N_{\eps}} \int_{s_n}^{s_{n+1}} \mathrm{d} s \: J_s[u_n](s,0) = \mathcal{O}(|\log\varepsilon|^{\infty}).
\end{equation}
To see that, we introduce a normal cut-off function $ \chi(t) $ with support contained in $ [0,|\log\varepsilon|^{-1}] $ and such that $ 0 \leq \chi \leq 1 $, $ \chi(0) = 1 $ and $ |\partial_t \chi| = \mathcal{O}(|\log\varepsilon|) $. Then we compute
\bml{
\int_{s_n}^{s_{n+1}} \mathrm{d} s \: J_s[u_n](s,0) = \int_{s_n}^{s_{n+1}} \mathrm{d} s \int_0^{\frac{1}{|\log\varepsilon|}} \mathrm{d} t \: \left[ \partial_t \chi J_s[u_n](s,t) + \chi \partial_t J_s[u_n](s,t) \right] = \\
\int_0^{\frac{1}{|\log\varepsilon|}} \mathrm{d} t \bigg\{ \int_{s_n}^{s_{n+1}} \mathrm{d} s \: \left[ \partial_t \chi J_s[u_n](s,t) + 2 \chi \left(i \partial_t u_n, \partial_s u_n \right) \right] + J_t[u_n](s_{n+1},t) - J_t[u_n](s_{n},t) \bigg\}
}
and after a rearrangement of the boundary terms
\bml{
\label{circulation est 1}
\sum_{n =1}^{N_{\eps}} \int_{s_n}^{s_{n+1}} \mathrm{d} s \: J_s[u_n](s,0) = \sum_{n =1}^{N_{\eps}} \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \int_{s_n}^{s_{n+1}} \mathrm{d} s \: \left[ \left( \partial_t \chi \right) J_s[u_n](s,t) + 2 \chi \left(i \partial_t u_n, \partial_s u_n \right) \right] \\
- \sum_{n =1}^{N_{\eps}} \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \: \left[ J_t[u_{n+1}](s_{n+1},t) - J_t[u_n](s_{n+1},t) \right].
}
The three terms on the r.h.s. of the above expression are going to be bounded independently. We first observe that, exactly as we derived \eqref{eq:upper bound nonlinear}, one can also extract from the comparison between the energy upper and lower bounds (see~\eqref{eq:bound reduc func}) the following estimate:
\begin{equation}
\label{eq:upper bound kinetic}
\sum_{n=1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: \left(1 - \varepsilon k_n t\right) f_n^2 \left\{ \left| \partial_t u_n \right|^2 + \textstyle\frac{1}{(1 - \varepsilon k_n t)^2} \left| \varepsilon \partial_s u_n \right|^2 \right\} \leq C \varepsilon^2 |\log\varepsilon|^{\infty}.
\end{equation}
Then we can estimate the absolute value of the first two terms on the r.h.s. of \eqref{circulation est 1} by using the Cauchy-Schwarz inequality
\bml{
\sum_{n =1}^{N_{\eps}} \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \int_{s_n}^{s_{n+1}} \mathrm{d} s \: \left[ C |\log\varepsilon| |u_n| \left| \partial_s u_n \right| + 2 \left| \partial_t u_n \right| \left| \partial_s u_n \right| \right] \\
\leq C \sum_{n =1}^{N_{\eps}} \int_{\mathcal{C}_n} \mathrm{d} s \mathrm{d} t \: (1 - \varepsilon k_n t) f_n^2 \left[|\log\varepsilon| \left| u_n \right|^2 + \textstyle\frac{2}{(1 - \varepsilon k_n t)^2} \left| \partial_s u_n \right|^2 + \left| \partial_t u_n \right|^2 \right],
}
where we have exploited the pointwise lower bound \eqref{eq:fal point l u b}, which implies $ f_n(t) \geq C > 0 $ for any $ t \in [0,|\log\varepsilon|^{-1}] $ and $ n = 1, \ldots, N_{\eps} $, to put back the density $ f_n^2 $ in the expression. Now the bound
$$ f_n |u_n| = |\psi| \leq 1 $$
together with \eqref{eq:upper bound kinetic} yield
\begin{equation}
\sum_{n =1}^{N_{\eps}} \bigg| \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \int_{s_n}^{s_{n+1}} \mathrm{d} s \: \left[ \left( \partial_t \chi \right) J_s[u_n](s,t) + 2 \chi \left(i \partial_t u_n, \partial_s u_n \right) \right] \bigg| \leq C |\log\varepsilon|^{\infty}.
\end{equation}
On the other hand the definition \eqref{eq:splitting psi} of $ u_n $ implies that
\bml{
\bigg| \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \: J_t[u_{n+1}](s_{n+1},t) - J_t[u_n](s_{n+1},t) \bigg| = \bigg| \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \: \bigg( \frac{1}{f_{n+1}^2} - \frac{1}{f_n^2} \bigg) J_t[\psi](s_{n+1},t) \bigg| \\
\leq C \varepsilon |\log\varepsilon|^{\infty} \int_0^{|\log\varepsilon|^{-1}} \mathrm{d} t \: |\psi| \left| \partial_t \psi \right| \leq C |\log\varepsilon|^{\infty},
}
thanks to \eqref{eq:vari 1D opt density}, the already mentioned lower bound on $ f_n $ in $ [0, |\log\varepsilon|^{-1}] $ and the standard bound $ \left\| \nabla \psi \right\|_{\infty} \leq C \varepsilon^{-1} $ (see, e.g., \cite[Eq. (11.50)]{FH-book}).
Hence \eqref{eq:circulation u} is proven and the rest of the proof is just a repetition of the estimates in \cite[Proof of Theorem 2.4]{CR2}. Note that, as already anticipated in the comments after Theorem \ref{theo:circulation}, $ \alpha_n = \alpha_0 (1 + \mathcal{O}(\varepsilon)) $, so that the optimal phases $ \alpha_n $ can all be replaced with $ \alpha_0 $.
\end{proof}
\label{sec:introduction}
Since early 2020, the world has been grappling with the COVID-19 pandemic caused by the new SARS-CoV-2 coronavirus. At the time of writing, there have been more than 250 million confirmed infections while more than five million have succumbed to the virus or related complications~\cite{WHO-COVID}. The main vector of disease transmission is exposure to respiratory particles resulting from direct or close physical contact with infected individuals. Transmission can also occur from the transfer of viral particles from contaminated surfaces or objects to the eyes, nose or mouth~\cite{WHO-COVID}.
Various preventive measures have been adopted worldwide to help curb the spread of the virus by reducing the risk of new infections. These include local, national and international travel restrictions, the banning of large gatherings and the encouragement of physical distancing, remote working and education, and strict quarantine policies, see e.g.~\cite{EU-Measures}. Two of the most broadly adopted measures are the (sometimes mandatory) use of protective facial coverings or masks~\cite{MasksCountries-2020} and enhanced hand hygiene (handwashing or disinfection using hydroalcoholic gel). Facial masks, such as those illustrated in Fig.~\ref{fig:face_masks}, can reduce viral transmission through respiratory particles~\cite{Peeples-FaceMasks-2020}, while enhanced hand hygiene can reduce the rate of new infections through contact with contaminated surfaces or objects. Preventive measures, as well as the virus itself, have necessitated consequential shifts and disruption to daily life, with potentially long-lasting repercussions impacting individuals, social and professional practices and processes, businesses both small and large, as well as the global economy.
\begin{figure}[t]
\centering
\subfloat[Surgical mask~\protect \cite{Damer-Masks-2020}]{\includegraphics[height=0.150\textwidth]{figures/ID1223_d2_i2_80.jpg}} \hfill
\subfloat[Cloth mask~\protect \cite{Damer-Masks-2020}]{\includegraphics[height=0.15\textwidth]{figures/ID5345_d3_i2_50.jpg}} \hfill
\subfloat[Filter mask\protect\footnotemark]{\includegraphics[height=0.15\textwidth]{figures/filterMask.jpg}}
\subfloat[Printed mask\protect\footnotemark]{\includegraphics[height=0.15\textwidth]{figures/printedMask.png}}
\caption{Examples of typical protective face masks}\vspace*{-0.5cm}
\label{fig:face_masks}
\end{figure}
Such measures have had a considerable impact on our daily lives. For instance, the use of facial masks covering the mouth and nose in public spaces can decrease the usefulness of surveillance systems or prevent us from unlocking our smartphones using face recognition technologies. In this context, this article focuses on the impact of the COVID-19 pandemic on \textbf{biometric recognition}.
Biometric technologies can be used for automated identity verification and to distinguish individuals based on their personal biological and behavioural characteristics (e.g.\ face and voice). Biometric solutions frequently supplement or replace traditional knowledge- and token-based security systems since, as opposed to passwords and access cards, biometric characteristics cannot be forgotten or lost. Furthermore, biometrics inherently and seamlessly enable diverse application scenarios which are either difficult or infeasible using more traditional methods, e.g.\ continuous authentication~\cite{Patel-ContinuousAuthentication-2016,Mondal-ContinuousAuthentication-2017}, forensics~\cite{Tistarelli-HandbookBiometricForsensics-2017}, and surveillance~\cite{HandbookRemote}.
\footnotetext[1]{Source: \url{www.ikatehouse.com}}
\footnotetext[2]{Source: \url{www.thenationalnews.com}}
\begin{table*}[t]
\centering
\caption{Overview of commonly used biometric characteristics in the context of COVID-19.}
\label{table:overview}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
\textbf{Biometric} & \textbf{Data acquisition} & \multicolumn{4}{c}{\textbf{Application area}} & \textbf{Operational} & \textbf{Impact of} \\ \cmidrule{3-6}
\textbf{characteristic} & \textbf{hardware} & \textbf{mobile devices} & \textbf{access control} & \textbf{forensics} & \textbf{surveillance} & \textbf{prevalence} & \textbf{COVID-19} \\ \midrule
Face & commodity hardware & \ding{51} & \ding{51} & \ding{51} & \ding{51} & wide & high\\ \midrule
NIR Iris & special sensor & (\ding{51}) & \ding{51} & & & wide & low\\
VIS Iris & commodity hardware & \ding{51} & (\ding{51}) & & & low & low\\ \midrule
Touch-based Fingerprint & special sensor & \ding{51} & \ding{51} & \ding{51} & & wide & high\\
Touchless Fingerprint & commodity hardware & \ding{51} & \ding{51} & & & low & low\\ \midrule
Touch-based Hand Vein & special sensor & & \ding{51} & & & low & low\\
Touchless Hand Vein & special sensor & (\ding{51}) & \ding{51} & & & low & low \\ \midrule
Voice & commodity hardware & \ding{51} & \ding{51} & \ding{51} & \ding{51} & wide & medium\\
\bottomrule
\end{tabular}
}
\end{table*}
Biometric technologies have come to play an integral role in society, e.g., for identity management, surveillance, access control, social and welfare management, and automatic border control, with these applications alone being used either directly or indirectly by billions of individuals~\cite{AlRaisi-UAEIris-Elsevier-2008,Dalwai-Aadhaar-2014,SmartBorders-EU-2018,Thales-AFIS-2020}. While reliance upon biometric technologies has reached a profound scale, health-related measures introduced in response to the COVID-19 pandemic have been shown to impact either directly or indirectly upon their reliability \cite{Carlaw-BTT-BiometricsCovid-2020}. It should, however, be noted that the new measures have a limited impact on other biometric characteristics such as the ear \cite{Emersivic-UnconstrainedEarChallenge-ICB-2019}. Even though this fact will also lead to renewed efforts directed at such biometric characteristics in order to achieve accurate and deployable systems in the near future, we limit the scope of this article to those biometric characteristics affected by health-related measures.
Table~\ref{table:overview} provides a brief overview of the operational prevalence and COVID-19-related impacts and technological challenges in the context of the biometric characteristics most widely used in operational systems. They are reviewed and discussed in further detail in the remainder of this article, including a short introduction and description of each characteristic for non-expert readers. This work represents a narrative/integrated review: it selectively assesses relevant works in the field of biometrics that (in)directly tackle challenges caused by the COVID-19 pandemic, aiming to offer guidance on future research directions and to enable new perspectives to emerge.
The rest of the article is organised as follows. The impact of facial masks on biometrics technologies is discussed in Section~\ref{sec:influence}. Section~\ref{sec:remote_biometric_authentication} addresses impacts upon mobile and remote biometric authentication. Section~\ref{sec:emerging_technologies} describes new opportunities and applications that have emerged as a result of the COVID-19 pandemic. The societal impact of these changes is discussed in Section~\ref{sec:societal} and concluding remarks are presented in Section~\ref{sec:conclusions}.
\vspace{0.5cm}
\section{Influence of facial coverings on biometric recognition}
\label{sec:influence}
The use of facial coverings, such as masks, occludes a substantial part of the lower face. Such occlusions or obstructions dramatically change the operational conditions for numerous biometric recognition technologies and can make biometric recognition especially challenging. A review of the impacts of facial coverings is presented in this section, with a focus upon facial, periocular, iris, and voice biometrics.
\subsection{Face recognition}
\label{subsec:influence_face}
The natural variation among individuals yields a good inter-class separation and thus makes the use of facial characteristics for biometric recognition especially appealing. Traditional solutions rely upon handcrafted features based on texture, keypoints, and other descriptors for face recognition~\cite{Li-HandbookFace-2011}. More recently, the use of deep learning and massive training datasets has led to breakthrough advances. The best systems perform reliably even with highly unconstrained and low-quality data samples~\cite{Masi-DeepFaceSurvey-2018,Guo-DeepFaceSurvey-2019}. Relevant to the study presented here is a large body of research on occluded face detection~\cite{Opitz-OccludedFaceDetection-2016} and recognition~\cite{Zeng-OccludedFaceSurvey-2020}, though occlusion-invariant face recognition remains challenging~\cite{Song-OcclusionRobustFaceRecognition-2019}. Most work prior to the COVID-19 pandemic addresses occlusions from, e.g., sunglasses, partial captures, or shadows which typify unconstrained, `in-the-wild' scenarios. The use of facial masks therefore presents a new and significant challenge to face recognition systems, especially considering the stringent operating requirements for application scenarios in which face recognition technology is often used, e.g. automated border control. Meeting the requirement for extremely low error rates typically depends on the acquisition of unoccluded images of reasonable quality.
The most significant evaluation of the impact of masks upon face recognition solutions was conducted by the National Institute of Standards and Technology (NIST)~\cite{Ngan-NIST-PreCovid-2020,Ngan-NIST-PostCovid-2020}. The evaluation was performed using a large dataset of facial images with superimposed, digitally generated masks of varying size, shape, and colour.
The evaluation tested the face recognition performance of algorithms submitted to the ongoing Face Recognition Vendor Test (FRVT) benchmark in terms of biometric verification performance (i.e., one-to-one comparisons). The false-negative error rates (i.e., false non-match rates) for algorithms submitted prior to the pandemic~\cite{Ngan-NIST-PreCovid-2020} were observed to increase by an order of magnitude, even for the most reliable algorithms. Even some of the best-performing algorithms (as judged from evaluation with unmasked faces) failed almost completely, with false-negative error rates of up to 50\%.
Of course, these results may not be entirely surprising given that systems designed prior to the pandemic are unlikely to have been optimised for masked face data. The study itself also had some limitations, e.g.\ instead of using genuine images collected from mask-wearing individuals, it used synthetically generated images where masks were superimposed using automatically derived facial landmarks. Despite these shortcomings, the study nonetheless highlights the general challenges to biometric face recognition from face coverings and masks. The general observations are that: 1) the degradation in verification reliability increases when the mask covers a larger proportion of the face including the nose; 2) reliability degrades more for mated biometric comparisons than for non-mated comparisons, i.e.\ masks increase the false non-match rate more than the false match rate; 3) different mask shapes and colours lead to differences in the impact upon verification reliability, a finding which emphasises the need for evaluation using genuine masked face data; 4) in many cases, masked faces are not even detected.
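As a flavour of how such synthetic-mask data can be produced, the following Python sketch superimposes a grey polygon over the lower face using the 68-point landmark model of the dlib library. This is an illustrative approximation rather than the NIST augmentation pipeline; the landmark subset, mask colour and file names are our own assumptions.
\begin{verbatim}
# Illustrative sketch: superimpose a synthetic "mask" polygon on a face
# image using dlib's 68-point landmarks. Landmark indices and mask colour
# are illustrative choices, not the protocol of any cited study.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    pts = np.array([[shape.part(i).x, shape.part(i).y]
                    for i in range(68)], dtype=np.int32)
    # Jawline points (0-16) plus the top of the nose bridge (27) give a
    # rough nose-and-mouth covering region.
    region = np.vstack([pts[0:17], pts[27:28]])
    cv2.fillConvexPoly(img, cv2.convexHull(region), color=(200, 200, 200))

cv2.imwrite("face_masked.jpg", img)
\end{verbatim}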
A follow-up study~\cite{Ngan-NIST-PostCovid-2020}, also conducted by NIST, evaluated systems that were updated with enhancements designed to improve reliability for masked faces. In addition to greater variability in mask designs, the study also considered masked probe as well as masked reference face images. While reliability was observed to improve for masked faces, it remained substantially degraded compared to unmasked faces (approximately an order of magnitude lower), on a par with what state-of-the-art systems from 2017 achieved on unmasked faces. Increases in false-match rates were also observed when both reference and probe faces are masked. Full details and results are available from the NIST FRVT Face Mask Effects website~\cite{FRVT-Masks-2020}.
Results from the related DHS Biometric Rally show similar trends~\cite{DHS-Rally-2020}. The DHS study was conducted in a setup simulating real operational conditions using systems submitted by commercial vendors. Significant difficulties in image acquisition as well as general degradation in biometric performance were observed for masked faces. Like the NIST study, the DHS study found that, even with masked faces, today's systems perform as well as state-of-the-art systems from only a few years ago tested with unmasked face images~\cite{DHS-Rally-2020}.
These US-based studies are complemented by a number of academic studies.
Two datasets~\cite{Damer-Masks-2020,Wang-MaskedFaceDB-2020} of masked face images have been collected in Europe and China to support research efforts. While~\cite{Wang-MaskedFaceDB-2020} provides data, it does not provide a formal evaluation of the effect of masks on face recognition performance; moreover, it did not address a specific use-case scenario, e.g.\ collaborative face verification.
Damer \textit{et al.}~\cite{Damer-Masks-2020,DBLP:journals/iet-bmt/DamerBSKK21,https://doi.org/10.1049/bme2.12077} released a database of real masked face images that were collected in three collaborative sessions. They include realistic variation in the capture environment, masks, and illumination. Evaluation results show trends similar to those exposed by the NIST study~\cite{Ngan-NIST-PreCovid-2020}: difficulties in face detection and greater impacts upon mated comparisons than non-mated comparisons.
While significantly smaller than the NIST dataset in the number of data subjects and images, the use of real instead of synthetically generated masked face images increases confidence in the results.
From a technical perspective, face masks can be considered a subset of general face occlusions, and thus previous works on this issue are relevant. A number of works have proposed to automatically detect, and synthetically in-paint, occluded face areas, aiming both to generate realistic, occlusion-free face images and to enable more accurate face recognition. Most of the better performing face completion solutions are based on deep generative models \cite{DBLP:conf/cvpr/LiLY017, DBLP:conf/accv/ZhangZS018}.
A recent study by Mathai \textit{et al.}~\cite{DBLP:conf/icb/MathaiMA19} has shown that face completion can be beneficial for occluded face recognition accuracy, provided that the occlusions are detected accurately. The authors also pointed out that the completion of occlusions on the face boundaries did not have a significant effect, which is not the case for face mask occlusions. Thus, these results indicate that face image completion solutions are possible candidates to enhance masked face recognition performance.
\begin{figure}[t]
\centering
\subfloat[Transparent mask\protect\footnotemark]{\includegraphics[width=0.3\textwidth]{figures/transparent_mask}} \hfil
\subfloat[Face shield\protect\footnotemark]{\includegraphics[width=0.3\textwidth]{figures/face_shield}}
\caption{Examples of alternative protective masks}\vspace*{-0.5cm}
\label{fig:alternative_masks}
\end{figure}
\footnotetext[3]{Source: \url{https://www.theclearmask.com/product}}
\footnotetext[4]{Source: \url{https://3dk.berlin/en/covid-19/474-kit-for-face-shield-mask-with-two-transparent-sheets.html}}
The use of transparent masks or shields may combat to some extent the impact of opaque masks upon face recognition systems. Transparent masks, such as those shown in Fig.~\ref{fig:alternative_masks}, allow some portion of the masked face to remain visible but even their impact is likely non-trivial. Transparent masks can cause light reflections, visual distortions and/or blurring. Both opaque and transparent masks, as well as strategies to counter their impact, may increase the threat of presentation attacks. For example, it is conceivable that masks with specific patterns could be used to launch concealment or impersonation attacks, e.g.\ using concepts similar to those in~\cite{Sharif-AccessoriesImpersonationConcealment-2016}.
Regardless of the exact type of face mask, wearing one can have an effect on face image quality. Most biometric systems estimate the quality of a detected face image prior to feature extraction \cite{DBLP:journals/corr/abs-2009-01103}. This quality estimate indicates the suitability of the image for recognition purposes \cite{DBLP:conf/cvpr/TerhorstKDKK20,DBLP:journals/corr/abs-2112-06592}. For existing systems, the quality threshold configurations might lead to disregarding samples with face masks and thus increase the failure-to-extract rate. This link between face occlusions and face image quality has been probed in previous works, though not exclusively for mask occlusions. One of these works, presented by Lin and Tang \cite{DBLP:conf/cvpr/LinT07}, built on the assumption that occlusions negatively affect face image quality in order to detect such occlusions. A recent study by Zhang \textit{et al.}~\cite{DBLP:conf/icct/ZhangSYDZS19} has demonstrated the effect of occlusion on the estimated face image quality, along with presenting an efficient multi-branch face quality assessment algorithm. The authors pointed out that images with alignment distortion, occlusion, pose or blur tend to obtain lower quality scores.
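A minimal illustration of such a quality gate is sketched below. The sharpness proxy (variance of the Laplacian) and the rejection threshold are illustrative stand-ins of our own choosing; operational systems rely on dedicated, standardised quality metrics rather than a single cue.
\begin{verbatim}
# Minimal sketch of a quality gate before feature extraction: blurry or
# heavily occluded crops tend to score low on this simple sharpness
# proxy. The threshold is an arbitrary illustrative value.
import cv2

def sharpness_score(face_crop_bgr):
    gray = cv2.cvtColor(face_crop_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

img = cv2.imread("face_crop.jpg")
if sharpness_score(img) < 100.0:   # illustrative threshold
    print("sample rejected: low quality / failure to extract")
else:
    print("sample forwarded to feature extraction")
\end{verbatim}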
The studies conducted thus far highlight the challenges to face recognition systems in the COVID-19 era and raise numerous open questions. These include, but are not limited to, large-scale tests using images with real rather than digitally generated masks, identification (i.e.\ one-to-many search), demographic differentials, the presence of additional occlusions such as glasses, the effect on face image quality \cite{DBLP:conf/biosig/FuSCD21}, unconstrained data acquisition in general, as well as effects on the accuracy of human examiners~\cite{Ngan-NIST-PostCovid-2020,https://doi.org/10.1049/bme2.12077}. In addition, new areas of research have been opened, such as the automatic detection of whether a subject is wearing the mask correctly (i.e., covering mouth and nose) \cite{Batagelj-CorrectUseMasks-AS-2021}.
To foster research on the aforementioned issues, the Masked Face Recognition Competition (MFR) \cite{Boutros-MFRC-IJCB-2021} was organised in 2021. The main goals of this competition were not only the enhancement of recognition performance in the presence of masks, but also the analysis of the deployability of the proposed solutions. A private dataset representing a collaborative multi-session real masked capture scenario was used to evaluate the submitted solutions. In comparison to one of the top performing academic face recognition solutions, 10 out of the 18 submitted solutions achieved a higher masked face verification accuracy, thereby showing the way for future face recognition approaches. This was followed by a series of works that targeted enhancing the accuracy of masked face recognition, either by training task-specific models \cite{DBLP:conf/fgr/HuberBKD21} or processing face templates extracted by existing models \cite{DBLP:journals/pr/BoutrosDKK22}.
\subsection{Iris recognition}
\label{subsec:influence_iris}
The human iris, an externally visible structure in the human eye, exhibits highly complex patterns which vary among individuals. The phenotypic distinctiveness of these patterns allows their use for biometric recognition~\cite{Daugman-HowIrisRecognitionWorks-IEEE-2004}. The acquisition of iris images typically requires a camera with near-infrared (NIR) illumination so that sufficient detail can be extracted for even darkly pigmented irides. Recent advances support acquisition in semi-controlled environments at a distance even from only reasonably cooperative data subjects on the move (e.g.\ while walking)~\cite{Matey2009,Nguyen-Iris-2017}.
Solutions to iris recognition which use mobile devices and which operate using only visible wavelength illumination have been proposed in recent years~\cite{Proenca-TPAMI-IrisVIS-2009,Raja-PRL-SmartphoneIrisVIS-2015,Rattani-IVC-OcularVISSurbvey-2017}. Attempts to use image super-resolution, a technique of generating high-resolution images from low-resolution counterparts, have also shown some success by increasing image quality~\cite{Tapia-WIFS-SRIris-2020}. However, iris recognition solutions seem more dependent than face recognition solutions upon the use of constrained scenarios that lead to the acquisition of high quality images~\cite{Masi-DeepFaceSurvey-2018,Guo-DeepFaceSurvey-2019}. Nevertheless, iris recognition systems have now been in operation worldwide for around two decades. Near-infrared iris recognition has been adopted in huge deployments of biometrics technology, e.g.\ in the context of the Indian Aadhaar programme through which more than 1 billion citizens have been enrolled using iris images~\cite{UIDAI-Dashboard} in addition to other biometric data. Due to their high computational efficiency and reliability~\cite{Daugman-Doppelgangers-2016}, iris recognition systems are used successfully within the Aadhaar programme for intensive identification (1-$N$ search) and de-duplication ($N$-$N$ search)~\cite{Dalwai-Aadhaar-2014}.
The success of automated border control systems used in the United Arab Emirates~\cite{AlRaisi-UAEIris-Elsevier-2008}, where it is common for individuals to conceal a substantial part of their face on account of religious beliefs, serves to demonstrate the robustness of iris recognition systems to face coverings. In these scenarios, such as that illustrated in Fig.~\ref{fig:iris_uae}, whereas face recognition systems generally fail completely, iris recognition systems may still perform reliably so long as the iris remains visible. They are also among the least intrusive of all approaches to biometric recognition. This would suggest that, at least compared to face recognition counterparts, the reliability of iris recognition systems should be relatively unaffected as a consequence of mask wearing in the COVID-19 era.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/IrisGuard-UAE}
\caption{IrisGuard Inc. UAE enrolment station\protect\footnotemark}\vspace*{-0.5cm}
\label{fig:iris_uae}
\end{figure}
\footnotetext[5]{Source: \url{https://en.wikipedia.org/wiki/File:IrisGuard-UAE.JPG}}
It is worth mentioning that the usefulness of the anatomy of the human eye with regard to biometrics is not limited to the irides. For example, the retinal blood vessels are suitable for the purposes of biometric recognition. However, retinal imaging requires close proximity of a highly cooperative data subject to the specialised acquisition device which sends a beam of light inside the eye to fully illuminate the retina (see e.g. \cite{Lajevardi-Retina-2013}). Although retinal structures exhibit a high degree of distinctiveness and hence good biometric performance, the need for a specialised sensor and the perceived intrusiveness of the acquisition process have been considered as obstacles to adoption of this biometric characteristic. The blood vessels present in the ocular surface have also been shown to exhibit some discriminative power and hence suitability for biometric recognition \cite{Rot-ScleraVein-2020}. The acquisition process for those, albeit less arduous than for retinal images, still requires a high-resolution camera and subject cooperation in gazing in the required directions. Thus far, however, biometric recognition with ocular vasculature has received relatively little attention beyond academic studies.
\subsection{Periocular recognition and soft-biometrics}
\label{subsec:influence_periocular}
Periocular recognition, namely recognition based on biometric characteristics from the area surrounding the eye~\cite{Alonso-PRL-PeriocularSurvey-2016}, offers potential for a compromise between the respective strengths and weaknesses of face and iris recognition systems. Unlike face recognition, periocular recognition can be reliable even when substantial portions of the face are occluded (opaque masks) or distorted (transparent masks). Unlike iris recognition, periocular recognition can be reliable in relatively unconstrained acquisition scenarios. Compared to alternative ocular biometrics, periocular recognition systems are also less demanding in terms of subject cooperation.
Due to those and other properties, periocular recognition has been explored extensively during the last decade. Similarly to work in iris recognition, much of this work has direct relevance to biometrics in the COVID-19 era, in particular with regard to the wearing of face masks. In fact, one of the most popular use cases thus far for periocular recognition involves consumer mobile devices~\cite{Raja-BIOSIG-SmartphonePeriocular-2014,deFreitas-BTAS-PeriocularMobile-2015} which can readily capture high quality images of the periocular region with onboard cameras. This approach to biometric recognition, e.g.\ to unlock a personal device, is of obvious appeal in the COVID-19 era when masks must be worn in public spaces and where tactile interactions, e.g.\ to enter a password or code, must preferably be avoided.
In most works, reliable verification rates can be achieved by extracting features from the periocular region. However, the error rates are not yet as good as those yielded by face verification schemes under controlled scenarios. Nevertheless, the periocular features can be used to improve the performance of unconstrained facial images as shown in \cite{deFreitas-BTAS-PeriocularMobile-2015}. Similarly, Park~\textit{et al.}~showed in~\cite{Park-TIFS-periocularVIS-2010} how the rank-1 accuracy was multiplied by a factor of two in a similar scenario using a synthetic dataset of face images treated artificially to occlude all but the face region above the nose. In other words, the chances of correctly identifying a person within a group are doubled when the periocular information is analysed in parallel to the global face image. Some newer works have also explored the fairness of these methods across gender \cite{Krishnan-GenderFairnessPeriocular-ICPR-2021}, reporting an equivalent performance for males and females for ocular-based mobile user-authentication at lower false match rates.
In addition to the aforementioned works, some multimodal approaches combining face, iris, and the periocular region have been proposed for mobile devices \cite{Raja-ICB-SmartphonePeriocularFaceIrs-2015}, also incorporating template protection in order to comply with the newest data privacy regulations such as the European GDPR \cite{Stokkenes-IPTA-FacePeriocularBFSmartphone-2016}.
As pointed out in Sect.~\ref{subsec:influence_iris}, in uncontrolled conditions the iris cannot always be used due to the low quality or resolution of the samples; that lack of quality of the acquired biometric information can be addressed using super-resolution. Even though some approaches have already been proposed for the periocular region, based mostly on deep learning models \cite{Ipe-ICACN-PeriocularSRCNN-2019,Tapia-WIFS-SRIris-2020}, there is still a long way to go before they are deployed in practical applications.
In addition to providing identity information, facial images can also be used to extract other soft biometric information, such as age range, gender, or ethnicity. Alonso-Fernandez \textit{et al.}~benchmarked the performance of six different CNNs for soft-biometrics. Also for this purpose, the results obtained indicate the possibility of performing soft-biometrics classification using images containing only the ocular or mouth regions, without a significant drop in performance in comparison to using the entire face. Furthermore, it can be observed in their study how different CNN models perform better for different population groups in terms of age or ethnicity. Therefore, the authors indicated that the fusion of information stemming from different architectures may improve the performance of the periocular region, making it eventually similar to that of unoccluded facial images. Similarly, the periocular region can also be utilised to estimate emotions using handcrafted textural features \cite{Alonso-Fernandez-SITIS-PeriocularExpression-2018} or deep learning
\cite{Reddy-IJCNN-DeepPeriocularExpression-2020}.
\subsection{Voice recognition}
Progress in voice recognition has been rapid in recent years~\cite{kinnunen2010overview,hansen2015,Todisco-ASVIS-2016,nagrani2017voxceleb,snyder2018x}.
Being among the most convenient of all biometrics technologies, voice recognition is now also among the most ubiquitous, being used for verification across a broad range of different services and devices, e.g.\ telephone banking services and devices such as smart phones, speakers, and watches that either contain or provide access to personal or sensitive data.
The consequences of COVID-19 upon voice recognition systems depend largely on the effect of face masks on the production of speech. Face masks obstruct the lower parts of the face and present an obstacle to the usual transmission of speech sounds; they interfere with the air pressure variations emanating from the mouth and nose. The effect is similar to acoustic filters such as sound absorbing fabrics used for soundproofing or automobile exhaust mufflers~\cite{automobile}. Since masks are designed to hinder the propagation of viral particles of sub-micron size, they typically consist of particularly dense fabric layers. The effect on speech is an often-substantial attenuation and damping. Studies on the impact of fabrics on sound are reported in~\cite{seddeq2013investigation,TANG2017360}; they show how acoustic effects are influenced by the particular textile and its thickness, density and porosity. Denser structures tend to absorb sound at frequencies above 2~{kHz}, while thicker structures absorb sound of frequencies below 500~{Hz}. With these bands overlapping that of human speech, masks attenuate and distort speech signals and hence degrade the reliability of voice biometric systems that are trained with normal (unmasked) speech.
Masks can also have a negative impact on presentation attack detection (PAD) systems, which provide countermeasures to discriminate bona fide from spoofed speech. These systems are based on spectral features obtained from the two classes, so any modification or deviation of the bona fide spectrum makes it more difficult to detect. Moreover, other countermeasure systems are based on the detection of pop noise~\cite{pop}: a bona fide user emits pop noise which is naturally incurred while speaking close to the microphone. This noise is attenuated by the mask and, consequently, PAD performance decreases.
Fig.~\ref{fig:speech_fig} shows speech waveforms and corresponding spectrograms derived using the short-time Fourier transform (STFT) for four different recordings of read speech. The text content is identical for all four recordings: \textit{allow each child to have an ice pop}. The first is for a regular, mask-free recording while the other three are for the same speaker wearing a surgical mask, a thin or light cloth mask and a dense cloth mask. Note that the word \textit{pop} pronounced at the end of the sentence becomes less and less noticeable as heavier masks are worn. Another notable effect concerns the attenuation of high frequencies for heavier masks, which affects not only recognition performance but also speech intelligibility~\cite{Mac2019}.
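The band-limited attenuation described above is straightforward to quantify. The sketch below, which assumes two WAV recordings of the same utterance (the file names are placeholders), uses the STFT from scipy to compare the fraction of spectral energy above 2~kHz with and without a mask.
\begin{verbatim}
# Sketch: quantify high-frequency attenuation by comparing the energy
# above and below 2 kHz in two recordings of the same sentence.
# File names are placeholders for the reader's own recordings.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def band_energy_ratio(path, split_hz=2000.0):
    rate, x = wavfile.read(path)
    if x.ndim > 1:                  # mix stereo down to mono
        x = x.mean(axis=1)
    f, _, Z = stft(x.astype(float), fs=rate, nperseg=1024)
    power = np.abs(Z) ** 2
    return power[f >= split_hz].sum() / power[f < split_hz].sum()

print("mask-free :", band_energy_ratio("mask_free.wav"))
print("dense mask:", band_energy_ratio("dense_cloth.wav"))
\end{verbatim}
A markedly lower ratio for the masked recording would reflect the loss of energy above 2~kHz visible in Fig.~\ref{fig:speech_fig}.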
\begin{figure}[t]
\centering
\subfloat[mask-free]{\includegraphics[width=0.7\linewidth]{figures/free-min.png}} \hfill
\subfloat[surgical mask]{\includegraphics[width=0.7\linewidth]{figures/surgical-min.png}} \hfill \\
\subfloat[cloth mask]{\includegraphics[width=0.7\linewidth]{figures/ffp2-min.png}} \hfill
\subfloat[dense cloth mask]{\includegraphics[width=0.7\linewidth]{figures/ffp3-min.png}}
\caption{Examples of four spectrograms of the utterance: \textit{allow each child to have an ice pop}, pronounced by the same speaker wearing different types of masks: (a) mask-free, (b) surgical, (c) cloth and (d) dense cloth mask.}\vspace*{-0.5cm}
\label{fig:speech_fig}
\end{figure}
Related to the aforementioned issues, a study of the impact of face coverings upon voice biometrics is reported in~\cite{Saeidi+2016}. It assessed and analysed the acoustic properties of four coverings (motorcycle helmet, rubber mask, surgical mask and scarf). The impact of all four coverings was found to be negligible for frequencies below 1~kHz, while substantial levels of attenuation were observed for frequencies above 4~kHz; 4~kHz is not a universal mark, however, since attenuation peaks at 1.8~kHz are reported for some masks.
Face coverings were shown to degrade the accuracy of an i-vector/PLDA speaker recognition system. However, the treatment of speech data with inverted mask transfer functions was shown to improve accuracy to a level closer to the original.
Similarly, face masks distort speech data above 4 kHz. The degradation to performance, however, is modest since the substantial effects are at higher frequencies where speech energy (and discriminative biometric information) is typically lower than it is at lower frequencies where the effects are much milder.
To reflect the current issues in the voice biometrics community, the $12^{\mbox{th}}$ Computational Paralinguistics Challenge (COMPARE), held in 2020, included a mask detection sub-challenge.
System fusion results for the challenge baselines show that the task is far from being solved. Speech signals, in this context, are relevant not only to voice biometrics but can also be used to detect such signal distortions.
The existing work stands to show that facial masks do affect voice-based technologies, and that there is potential to compensate for these effects. The relevance of speaker recognition thus increases at this time, since it is unintrusive and touchless, that is, it can be performed at a distance, without any physical interaction (e.g.\ over the phone).
\section{Remote and mobile biometric recognition}
\label{sec:remote_biometric_authentication}
The COVID-19 pandemic has caused disruptions to many aspects of life. As a result of physical interactions being necessarily limited or even forbidden, many have had no alternative but to work remotely or to receive education online. With authentication being needed to access many services and resources, and without the possibility of physical means of identification, the deployment of biometric solutions for remote authentication has soared in recent times~\cite{Burt-COVID-2020}.
Remote biometric authentication has attracted significant attention~\cite{HandbookRemote,Guo-Mobile-2017} and is already being exploited for, e.g., eBanking, eLearning, and eBorders. With an increasing percentage of personal mobile devices now incorporating fingerprint, microphone and imaging sensors, remote biometric authentication is deployable even without the need for costly, specialist or shared equipment. The latter is of obvious appeal in a pandemic, where the use of touchless, personal biometric sensors and devices can help reduce the spread of the virus.
Some specific biometric characteristics lend themselves more naturally to remote authentication than others; suitability is dictated by the level of required user cooperation and the need for specialist sensors.
Face, voice, and keystroke/mouse dynamics are among the most popular
characteristics for remote biometric authentication~\cite{Kaur16,FENU201883}. These characteristics can be captured with sensors which are likely to be embedded in the subjects' devices, e.g. camera, microphone, keyboard and mouse.
As discussed in the following, remote biometric authentication entails a number of specific challenges related to
mobile biometrics, remote education, as well as security and privacy.
\subsection{Mobile biometrics}
\label{sec:mobile}
The ever-increasing number of smartphones in use today has fueled research in mobile biometric recognition solutions, e.g.\ mobile face recognition~\cite{RATTANI-mobileface-2018} and mobile voice recognition~\cite{Khoury-Mobile-Speaker-2013,gomar2015system,bisio2018smart}. Numerous biometric algorithms specifically designed or adapted to the mobile environment have been proposed in the literature~\cite{Rattani-Selfie-2019}. Additionally, commercial solutions for mobile biometric recognition based on inbuilt smartphone sensors or hardware/software co-design are already available.
Proposed solutions can be categorised depending on where the comparison of biometric data takes place:
\begin{itemize}
\item Biometric comparison is performed on the client side,
as proposed by the
Fast IDentity Online (FIDO) Alliance~\cite{fido}.
An advantage of this scheme is that biometric data is kept on the user device, leading to improved privacy protection. On the other hand,
users may require specific sensors and
installed software to enable
authentication.
\item Biometric comparison is performed on the server side. These comparisons depend upon the secure transmission of biometric data (see Section~\ref{sec:remote_secpriv}), with relatively little
specific software being required on the user device.
\end{itemize}
One limiting factor of mobile biometrics stems from processing complexity and memory footprints. Whereas server-side computation capacity and memory resources are typically abundant, the resources of battery-powered mobile devices are relatively limited. Many state-of-the-art biometric recognition algorithms are based on large (deep) neural networks which require a large amount of data storage and are computationally expensive, thereby prohibiting their deployment on mobile devices.
This has spurred research in efficient, low-footprint approaches to biometric computation, e.g.\ using smaller, more shallow neural networks~\cite{Ba-2014}. A number of different approaches to compress neural networks have been proposed, e.g.\ based on student-teacher networks~\cite{Luo-2016} or pruning~\cite{Molchanov-pruning-2017}. These approaches trade model size and inference time against system performance. However, this trade-off still has to be optimised for mobile systems, while the implications of limited resources extend to other biometric sub-processes too, e.g.\ PAD.
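As an illustration of one such compression route, the sketch below applies PyTorch's built-in magnitude pruning to a stand-in embedding network. The architecture and the 30\% pruning amount are arbitrary illustrative choices; a real deployment would fine-tune after pruning and typically combine pruning with quantisation or distillation.
\begin{verbatim}
# Sketch: shrink a face-embedding network for mobile deployment using
# magnitude-based (L1) pruning. Model and pruning amount are illustrative.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(              # stand-in for a real embedding net
    nn.Conv2d(3, 16, 3), nn.ReLU(),
    nn.Conv2d(16, 32, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 28 * 28, 128),
)

for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"fraction of zeroed weights: {zeros / total:.2f}")
\end{verbatim}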
In summary, mobile biometric authentication clearly has a role to play in the COVID-19 era.
Touchless, personal mobile biometrics solutions can help to deliver reliable authentication while also meeting strict hygiene requirements, even if the efficient integration of biometric recognition technologies into mobile device platforms remains challenging.
\subsection{Biometrics in remote education}\label{sec:elearning}
The use of learning management systems has increased dramatically in recent years, not least due to the promotion of home-schooling and eLearning during the COVID-19 pandemic. Learning management systems deliver remote education via electronic media. eLearning systems often require some form of identity management for the authentication of remote students. Biometrics solutions have proved extremely popular, with
a number of strategies to integrate biometric recognition in eLearning environments having been proposed in recent years~\cite{eLearniingsurvey,Sanna-2017}.
In the eLearning arena, biometric technologies are used for user login, user monitoring, attention or emotion estimation, and authorship verification. Fig.~\ref{fig:eLearning} shows an example for user login to an eLearning platform.
Both one-time authentication (biometric verification at a single point in time) and continuous authentication (periodic over time) have utility in eLearning scenarios. Whereas one-time authentication might
be suitable to authenticate students submitting homework,
continuous authentication may be preferred to prevent students cheating while sitting remote examinations~\cite{Flior10}.
In order to minimise inconvenience, continuous biometric authentication calls for
the use of biometric characteristics which require little to no user cooperation~\cite{eLearniingsurvey}, e.g.\ text-independent keystroke dynamics~\cite{Morales_Keystroke,Bours17}.
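A minimal sketch of the idea behind continuous keystroke authentication follows: inter-key latencies from the current session are compared against an enrolled profile. The timings, distance measure and threshold are synthetic placeholders; deployed systems use richer features (e.g.\ dwell times and digraph latencies) and statistical models.
\begin{verbatim}
# Toy sketch of continuous keystroke authentication: compare the mean
# absolute deviation of recent inter-key latencies against an enrolled
# profile. All timings and the threshold are synthetic placeholders.
import numpy as np

def flight_times(key_down_times):
    return np.diff(np.asarray(key_down_times))

enrolled = flight_times([0.00, 0.18, 0.33, 0.52, 0.70, 0.91])
session  = flight_times([0.00, 0.21, 0.35, 0.55, 0.74, 0.93])

score = float(np.mean(np.abs(enrolled - session)))
print("match" if score < 0.05 else "possible impostor",
      f"(score={score:.3f})")
\end{verbatim}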
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/identity-proofing.jpg}
\caption{BioID\textsuperscript{\tiny\textregistered}
Identity Proofing for e-learning platforms \cite{bioid-2020}}\vspace*{-0.5cm}
\label{fig:eLearning}
\end{figure}
Presentation attacks can pose a substantial threat to biometric technologies deployed in such scenarios (see Section~\ref{sec:remote_secpriv}).
This might be why, despite significant research interest, only a few biometric recognition systems have been deployed in operational eLearning scenarios~\cite{eLearniingsurvey}.
Even so,
eLearning systems will likely become more popular while the pandemic continues and, once operational, their use will likely be maintained
in the future.
\subsection{Security and privacy in remote biometrics}\label{sec:remote_secpriv}
The remote collection of biometric information gives rise to obvious security and privacy concerns; the trustworthiness of the collection environment cannot be guaranteed.
One of the potentially gravest threats in this case, especially given the absence of any human supervision (e.g.\ in contrast to the automatic border control use case), is that of presentation attacks or `spoofing'~\cite{Ratha-EnhancingSecurityAndPrivacy-IBM2001,Marcel-HandbookPAD-ACVPR-2019,Raghavendra-FacePAD-Survey-2017}.
Presentation attacks involve the presentation of false, manipulated or synthesized samples to a biometric system made by an attacker to masquerade as another individual.
Diverse presentation attack instruments, ranging from face masks to gummy fingers, have all been proven to pose a threat.
The detection of presentation attacks in a remote setting can be more challenging than in a local setting, depending on whether detection countermeasures are implemented on the client side or the server side. If PAD is performed on the client side, hardware-based detection approaches can be employed, though these require specific, additional equipment beyond that used purely for recognition. Even these approaches might still be vulnerable to presentation attacks, as demonstrated for Apple's Face ID system~\cite{iPhone-2017}. If PAD is implemented on the server side, then software-based attack detection mechanisms represent the only solution. Such software-based PAD for remote face and voice recognition was explored in the EU-H2020 TeSLA project~\cite{Bhattacharjee18}. It is expected that more research will be devoted to this topic in the future \cite{DBLP:conf/fgr/FangBKD21,DBLP:journals/pr/FangDKK22}.
In addition to the threat of direct attacks performed at the sensor level, there is also the possibility of indirect attacks performed at the system level.
The storage of personal biometric information on mobile devices as well as the transmission of this information from the client to a cloud-based server calls for strong data protection mechanisms.
While traditional encryption and cryptographic protocols can obviously be applied to the protection of biometric data, any processing applied to the data requires prior decryption, which still leaves biometric information vulnerable to interception.
Encryption mechanisms designed specifically for biometric recognition in the form of template protection~\cite{Rathgeb-BTP-Survey-EURASIP-2011} overcome this vulnerability by enabling comparison of biometric data in the encrypted domain. Specific communication architectures that ensure privacy protection in remote biometric authentication scenarios where biometric data is transmitted between a client and a server have already been introduced, e.g.\ the Biometric Open Protocol Standard (BOPS)~\cite{BOPS} which supports the homomorphic encryption~\cite{Moore-HE-Survey-ISCAS-2014} of biometric data.
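To illustrate what comparison in the encrypted domain can look like, the toy sketch below uses the additively homomorphic Paillier scheme, as implemented in the open-source \texttt{phe} Python package, to let a server accumulate a similarity score over an encrypted reference template without ever decrypting it. This is a didactic reduction, not the BOPS protocol; the template values are fabricated for illustration.
\begin{verbatim}
# Toy illustration of server-side comparison on encrypted data with the
# additively homomorphic Paillier scheme ("phe" package). Deployed
# template-protection schemes are considerably more involved.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

reference = [0.12, -0.40, 0.33, 0.05]        # enrolled template (client)
enc_ref = [public_key.encrypt(v) for v in reference]

# Server side: inner product with a plaintext probe, never decrypting.
probe = [0.10, -0.38, 0.31, 0.07]
enc_score = sum(e * p for e, p in zip(enc_ref, probe))

# Only the private-key holder can read the similarity score.
print("similarity:", private_key.decrypt(enc_score))
\end{verbatim}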
As described in this section, the use of remote biometric authentication in the times of COVID-19 provides many advantages. However, in order to achieve trustworthy identity management, it also requires appropriate mechanisms to protect privacy. Countermeasures to prevent or detect presentation attacks are also essential; this is usually more challenging in a remote authentication scenario, where means of detecting attacks may be more limited compared to conventional (accessible) biometric systems.
\section{Emerging technologies}
\label{sec:emerging_technologies}
As discussed in the previous sections, the COVID-19 pandemic poses specific challenges to biometric technologies.
However, it is also expected to foster research and development in emerging biometric characteristics which stand to meet new requirements relating to the pandemic, as well as the use of biometric information directly for virus detection and the monitoring of infected individuals. Such emerging biometric technologies are described in the following.
\subsection{Touchless, hand-based biometrics}
Hydro-alcoholic gel, strongly advocated as a convenient means of disinfection during the COVID era, can be used to protect the users of touch-based sensors such as those used for fingerprint recognition~\cite{Okereafor-JMIRBE-FingerSensorCOVID-2020}.
While they serve to reduce sensor contamination and pathogen transmission, hydro-alcoholic gels tend to dry the skin.
The sensitivity of fingerprint sensors to variability in skin hydration is well known. It can degrade the quality of acquired fingerprints and hence also recognition reliability~\cite{Olson-Moisture-2015}.
Severe dryness can even prevent successful acquisition as illustrated in Fig.~\ref{fig:dryfp}, thereby resulting in failures to acquire.
\begin{figure}[t]
\centering
\subfloat[dry]{\includegraphics[height=3.75cm]{figures/fp_dry}} \hfil
\subfloat[normally moist]{\includegraphics[height=4cm]{figures/fp_normal}}
\caption{Example of a dry fingerprint and the same fingerprint with normal moisture (taken from \cite{Olson-Moisture-2015}).}
\label{fig:dryfp}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[Stationary touchless\protect\footnotemark]{\includegraphics[width=0.45\linewidth]{figures/idemia-morphowave-desktop_0}} \hfil
\subfloat[Mobile touchless]{\includegraphics[width=0.45\linewidth]{figures/mobile}}
\caption{Touchless capturing of fingerprints}\vspace*{-0.5cm}
\label{fig:touchless}
\end{figure}
\footnotetext[7]{Source: \url{https://pbs.twimg.com/media/DyCFi_AWsAMN8MK.jpg}}
Hygiene concerns
have increased societal resistance to the use of touch-based sensors.
These concerns have in turn fueled research efforts in 2D or 3D touchless fingerprint recognition systems~\cite{Priesnitz-2dfinger-Survey-2020,Ajay-3Dfinger-2018} such as those illustrated in
Fig.~\ref{fig:touchless}.
Touchless fingerprint sensors are generally either
prototype hardware designs~\cite{raghavendra2014low-cost,Galbally20173D-touchless} or general-purpose devices adapted to
touchless fingerprint recognition~\cite{kumar2011contactless,stein2012fingerphoto}.
Both the capture and processing of fingerprints must usually be adapted to touchless acquisition~\cite{Priesnitz-2dfinger-Survey-2020}.
The majority of touchless finger image acquisition sensors deliver colour images for which general image processing techniques are employed to improve contrast and sharpness. Traditional minutiae extractors and comparators may then be employed.
The interoperability of both touch-based and touchless devices is naturally desirable, e.g.\ to avoid the need for enrolment in two different systems.
Interoperability has proven to be
non-trivial~\cite{Libert-NIST-ContactlessFingerComp-2020,Orandi-NIST-ContactlessFingerMatcher-2020}. While some differences between the two systems, e.g. mirroring, colour-to-grayscale conversion or inverted back- and foreground, can be readily compensated for without degrading accuracy, others, e.g.\ the aspect ratio or deformation estimation, prove more challenging~\cite{Salum17touchlesstouchbased,Lin18touchlesstouchbased} and can degrade reliability. Note that fingerprint images acquired using touchless sensors do not exhibit the deformations caused by pressing the finger onto a surface that characterise images acquired from touch-based sensors. Moreover, DPI alignment and ridge frequency estimation are required to enable a meaningful comparison of fingerprints acquired from touch-based and touchless sensors.
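One concrete interoperability step, rescaling a touchless capture to an equivalent resolution, can be sketched as follows. The snippet assumes the rule of thumb of roughly nine pixels per ridge period at 500~ppi and uses a deliberately crude, global FFT-based period estimator; production systems estimate ridge frequency locally and far more robustly.
\begin{verbatim}
# Sketch: rescale a touchless finger photo so that its estimated ridge
# period matches the ~9 px/ridge expected at 500 ppi. The global FFT
# period estimate below is deliberately crude and purely illustrative.
import cv2
import numpy as np

TARGET_PERIOD_PX = 9.0      # rule-of-thumb ridge period at 500 ppi

def estimate_ridge_period(gray):
    s = min(gray.shape)
    patch = gray[:s, :s].astype(float)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    c = s // 2
    spec[c - 2:c + 3, c - 2:c + 3] = 0   # suppress the DC neighbourhood
    py, px = np.unravel_index(np.argmax(spec), spec.shape)
    return s / np.hypot(py - c, px - c)  # approx. pixels per ridge period

img = cv2.imread("touchless_finger.png", cv2.IMREAD_GRAYSCALE)
scale = TARGET_PERIOD_PX / estimate_ridge_period(img)
resized = cv2.resize(img, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)
cv2.imwrite("finger_500ppi_equiv.png", resized)
\end{verbatim}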
As an alternative to fingerprint recognition, some ATMs already incorporate fingervein-based recognition sensors which are robust to variability in skin hydration as well as presentation attacks.
Images of the finger or hand are captured with NIR illumination, since light at NIR frequencies is absorbed differently by hemoglobin and the skin,
thereby allowing for the detection of vein patterns.
Touchless fingervein and palmvein sensors have been developed~\cite{Kauba-MDPISensors-ContactlessFingerprintVein-2019,Ma-IPT-ContactlessFingervein-2019,Marattukalam-ANZCC-PalmVeinContactless-2019}, though the lack of any control in the collection process typically causes
significant rotation and translation variation.
The quality of the capturing device as well as strategies to compensate for nuisance variation are hence key to the collection of high quality images and reliable performance. Touchless capturing device designs have been presented by various researchers, e.g. in \cite{Kauba-MDPISensors-ContactlessFingerprintVein-2019}.
This work showed that the degradation in recognition performance resulting from touchless acquisition
can be addressed using finger misplacement corrections. On the other hand, the approach presented in~\cite{Ma-IPT-ContactlessFingervein-2019} extracts a region of interest from captured samples and uses an oriented element feature extraction scheme to improve robustness.
The use of finger vein recognition for mobile devices is also emerging.
Debiasi \textit{et al.}~developed an auxiliary NIR illumination device for smartphones
which supports the capture of hand vascular patterns~\cite{Debiasi-BTAS-NIRVeinMobile-2018}. The device is connected and controlled via
Bluetooth and can be adapted to different smartphones. The authors also presented a challenge response protocol in order to prevent replay and presentation attacks and showed that acceptable verification performance can be achieved using standard finger vein recognition algorithms.
The VeinSeek Pro app\footnote{\url{https://www.veinseek.com/}} is able to capture vein images from the hand without the need for extra hardware. This approach is based on the fact that different colours of light penetrate to different depths within the skin. By removing the signal from superficial layers of the skin, the authors argue that they can more easily see deeper structures. However, to the best of our knowledge there is no analysis so far of the feasibility of using these images for vein-based biometric recognition.
In summary, in the era of the COVID-19 pandemic, touchless hand-based biometric recognition seems to be a viable alternative to conventional touch-based systems.
These technologies achieve similar levels of performance as touch-based technologies~\cite{Priesnitz-2dfinger-Survey-2020,Ajay-3Dfinger-2018,Kauba-MDPISensors-ContactlessFingerprintVein-2019}. Some commercial products based on prototypical hardware design and general purpose devices, e.g.\ smartphones, are already available on the market. Nonetheless, touchless recognition remains an active field of research where several challenges need to be tackled, in particular recognition in challenging environmental conditions, e.g.\ uncontrolled background or varying illumination~\cite{malhotra_fingerphoto_2017,Priesnitz-2dfinger-Survey-2020}.
\subsection{COVID detection with biometric-related technologies}
COVID-19 attacks the human body at many levels, but the damage to the respiratory system is what often proves fatal.
The production of human speech starts with air in the lungs being forced through the vocal tract.
Diminished lung capacity or disease hence impacts upon speech production, and there have been attempts to characterise the effects of COVID-19 upon speech as a means to detect and diagnose infection~\cite{schuller2020covid,deshpande2020audio,bartl2020voice}.
Initial efforts involved the collection and annotation of databases of speech as well as non-speech sounds recorded from healthy speakers and those infected with the COVID-19 virus~\cite{shuja2020covid}. The data typically includes recordings of coughs~\cite{imran2020ai4covid,brown2020exploring,sharma2020coswara}, breathing sounds~\cite{faezipour2020smartphone,trivedy2020design} as well as speech excerpts~\cite{Han2020}.
The database described in~\cite{Han2020} contains recordings of five spoken sentences and in-the-wild speech, all recorded using the Wechat App from 52 COVID-confirmed and hospitalised patients in Wuhan, China, who also rated their sleep quality, fatigue, and anxiety (low, mid, and high). After data pre-processing, 260 audio samples were obtained.
While these early works highlight the potential of biometrics and related technology to help in the fight against the COVID-19 pandemic,
they also highlight the need for homogenised and balanced databases which can then be used to identify more reliable and consistent biomarkers indicative of COVID-19 infection. Outcomes of these studies are very encouraging: the detection of COVID-19 through voice, but also through coughing or the sound of breathing, has an accuracy comparable to that of the antigen or saliva test~\cite{kamble21_interspeech, kamble22, das21_interspeech, avila21_interspeech}.
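Most of the cited detection studies follow a similar recipe: per-recording acoustic features, often MFCCs, feeding a standard classifier. The sketch below reproduces that recipe with \texttt{librosa} and \texttt{scikit-learn}; the file names and labels are placeholders, and the tiny training set is purely illustrative.
\begin{verbatim}
# Sketch of the typical pipeline: MFCC statistics per recording, fed to
# a standard classifier. Files and labels are placeholders.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_stats(path):
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

files  = ["cough_001.wav", "cough_002.wav",
          "cough_003.wav", "cough_004.wav"]
labels = [0, 1, 0, 1]            # 1 = COVID-positive (placeholder)

X = np.stack([mfcc_stats(f) for f in files])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
prob = clf.predict_proba(mfcc_stats("cough_new.wav")[None, :])[0, 1]
print("P(positive) for new recording:", prob)
\end{verbatim}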
Thermal face imaging has also come to play a major role during the pandemic, especially for the rapid surveillance of potential infections among groups of travellers on the move, e.g.\ in airports~\cite{TFM_Airports} and shopping centres~\cite{TFM_Malls}. Thermal face images can be used to detect individuals with fever~\cite{DBLP:conf/iccvw/LinLL19}, a possible symptom of COVID-19 infection.
Similar face captures can also be used as an alternative capture spectrum for face recognition \cite{DBLP:conf/icb/MallatDBKD19,DBLP:conf/icb/IranmaneshN19,DBLP:journals/corr/abs-1910-09524}, however, with verification performance inferior to that in the visible spectrum \cite{DBLP:conf/cvpr/DengGXZ19,DBLP:journals/csr/FarokhiFS16}.
Despite the ease with which thermal monitoring can be deployed, it is argued in~\cite{ThermalScanningEffec} that body temperature monitoring will be insufficient on its own to prevent
the spread of COVID-19 into previously uninfected countries or regions and the seeding of local transmission. The European Union Aviation Safety Agency (EASA) concludes that thermal screening equipment, including thermal scanners, will miss between 1\% and 20\% of passengers carrying a fever~\cite{EASAreport}.
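The decision logic behind such fever-screening deployments is simple enough to sketch, which also makes its limitations apparent. The snippet below assumes the camera delivers a radiometric frame in degrees Celsius and that a face detector has already supplied a bounding box; the $38.0\,^{\circ}$C cut-off is a typical but arbitrary choice.
\begin{verbatim}
# Sketch of the screening decision: flag a face region whose maximum
# radiometric temperature exceeds a cut-off. The threshold and the
# array input (a frame in degrees Celsius) are illustrative assumptions.
import numpy as np

def flag_fever(thermal_frame_degc, face_box, threshold=38.0):
    x, y, w, h = face_box
    roi = thermal_frame_degc[y:y + h, x:x + w]
    return float(roi.max()) >= threshold

frame = 36.5 + np.random.rand(240, 320)      # synthetic frame
print("fever flagged:", flag_fever(frame, face_box=(100, 80, 60, 60)))
\end{verbatim}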
\section{Societal Impact}
\label{sec:societal}
Like any other technology used by a large population, biometric recognition systems affect society. So far, the positive aspects of such systems (e.g., faster authentication for border crossing or convenience for smartphone unlocking) have outweighed their disadvantages, mostly related to privacy and security issues \cite{Kindt-PrivacyIssues-Springer-2016,Tanwar-EthicsBiometrics-Springer-2019}. Such issues have been thoroughly analysed and (partially) dealt with, thereby increasing the acceptance of the users and boosting the deployment of biometric systems. Nevertheless, in recent years, new concerns have arisen related to the fairness of biometric recognition algorithms \cite{FreitasPereira-Fairness-arXiv-2021} and their trustworthiness \cite{Jain-TrustVerify-arxiv-2021,Rathgeb-DemographicFairness-arxiv-2021}. In addition, societal and ethical aspects of presentation attack detection methods have also been analysed \cite{Rebera-SocioEthicPAD-SEE-2014}.
In the context of the COVID-19 pandemic, the use of contact-based biometric systems has similarly led to health-related concerns. Systems where contact with the capture device is necessary could still be employed in a private scenario (e.g., for unlocking your own smartphone or for remote authentication from your own laptop), but contact-less approaches will be preferred for global applications (e.g., building access control) in order to prevent the spread of viruses. In fact, it can be argued that the use of contact-less biometrics can even reduce the transmission of pathogens in some scenarios such as airports \cite{BI-Covid19biometrics-2020}. This trend will probably persist even after the COVID-19 pandemic is considered to be over.
On the other hand, the need for further digitalisation at almost all societal levels, including sensitive applications such as online exams or eHealth systems, where subject identification is of the utmost importance, has increased the acceptance of biometric technologies as a convenient and reliable means of authentication. Thus, more research is being done in this area \cite{Faundez-EHealthSignature-CC-2020,Vizitiu-IoTeHealthBiometrics-2021}, together with socioeconomic analyses of the success and failure of large-scale implementations of such systems \cite{Effah-GhanaBiometricsSocioeconomic-ISM-2020}.
However, further digitalisation also brings some disadvantages. In general, and not only regarding biometric recognition, the tracking activities and health checks implemented worldwide in order to prevent the spread of COVID-19 have had deep implications for the privacy and freedom of individuals. For instance, free travel within the Schengen Area was suspended for months, with certain criteria in terms of negative COVID-19 tests, vaccination status, or registration forms needing to be fulfilled to enter a country\footnote{\url{https://reopen.europa.eu/en/}}. In addition, facial recognition systems have been used in countries such as Poland, China, or Russia to ensure that individuals in quarantine remain at home. In spite of the benefits for collective health, \lq\lq \textit{the use of biometrics (including facial recognition) in response to COVID-19 raises a number of privacy and security concerns, particularly when these technologies are being used in the absence of specific guidance or fully informed and explicit consent. Individuals may also have problems exercising a wide range of fundamental rights, including the right of access to their personal data, the right to erasure, and the right to be informed as to the purposes of processing and who that data is shared with}'', as the Organisation for Economic Co-operation and Development (OECD) states in its policy response to Coronavirus (COVID-19) \cite{OECD-Covid19Privacy-2020}. Thus, the OECD gives a number of recommendations, including the use of privacy-by-design approaches, such as the ones described in Sect.~\ref{sec:remote_secpriv}, and the limitation of the time for which sensitive data can be stored.
The added societal concerns due to the exploitation of sensitive biometric data have also been addressed by The British Academy \cite{BA-Covid19LongSocietal-2021}. As the Academy points out, \lq\lq \textit{Sharing data is crucial for furthering research and maximising its potential to help overcome the current pandemic and better prepare for future health crises}''. However, bias or errors derived from the use of biometric technologies for authentication can result in negative impacts such as discrimination, and diminish the trust in COVID-19 related technologies. Therefore, the Academy recommends maintaining a human element in the loop. In addition, existing digital inequalities might also limit the potential benefits of health technologies and increase the social disadvantages of some groups. The report also includes some numbers: \lq\lq \textit{6 million people in the UK cannot turn on a device and up to 50\% of those are aged under 65}''. Furthermore, in order to minimise the potential discrimination caused by biometric technologies, the diversity of users' characteristics should be considered: for instance, apps which rely on voice recognition software may not work effectively for those with a speech impairment, even though they can be beneficial for those with reduced sight.
In March 2022, the European Data Protection Supervisor (EDPS) published a report on COVID-19 related processing by the Union institutions, bodies, offices and agencies (EUIs) \cite{EDPS-Covid19ProcessingEUIs-2022}. In this survey, the EDPS reviews body temperature checks, contact tracing, COVID testing and handling of results, monitoring presence within the premises, vaccination campaigns, access control, and the use of IT-tools in telework. Regarding access control, where biometric recognition systems can be in place, the EUIs correctly informed the individuals about the processing activities carried out and specified a time limit for data retention, as recommended by the OECD. However, as the report points out, the lawful grounds for this identification requirement may not be given, since \lq\lq \textit{staff members [...] cannot provide freely given, specific, informed and unambiguous as well as explicit consent}''. Similarly, \lq\lq \textit{consent would also not be appropriate for visitors, who are in most cases obliged to come to the EUI premises for work purposes}''. Also, some EUIs had not indicated that they process health data even though they were doing so. In view of this negative impact on the privacy rights of individuals, the EDPS recommends that the EUIs check the lawfulness and regularly reassess the necessity and proportionality of the existing COVID-related processing activities.
From those reports we can conclude that biometrics and other technologies have not only provided subjects with additional advantages to access digital services, but have also had a negative impact on their right to privacy. Thus, we would like to urge the community to assess the necessity of identity checks before implementing them, and to use all the available tools to minimise the negative impact of such controls: biometric template protection schemes to prevent sensitive data leakage, or presentation attack detection modules to minimise the chances of successful identity theft.
\section{Conclusions}
\label{sec:conclusions}
This article has summarised the main challenges posed by the pandemic to biometric recognition, as well as the new opportunities for existing biometrics to be harnessed or adapted to the COVID-19 era, or where biometric technology itself has the potential to help in the fight against the virus. The use of hygienic masks covering the nose and mouth, as well as the secondary impacts of strict hygiene measures implemented to control the spread of pathogens, all have the potential to impact upon biometric technology, thereby calling for new research to maintain reliable recognition performance.
Facial biometrics are among the most impacted characteristics; masks occlude a considerable part of the face, leading to degraded recognition performance.
This is the case not only for opaque masks but also for transparent face shields, since reflections cause variation that is non-trivial to model. Opportunities to overcome these difficulties are found by focusing on parts of the face that remain uncovered, namely the iris and the wider periocular region.
Whereas solutions to iris recognition that use the NIR spectrum are well studied, numerous efforts in recent years have focused on less constrained approaches to iris recognition that use mobile devices and the visible spectrum. Given the lower quality of such images, image super-resolution techniques have been proposed to improve image quality. Such techniques can also be applied to the full periocular region. To date, the adoption of such systems is low, but likely to increase in the future.
Hand-based biometric systems are also affected by the new hygiene practices, which typically result in drier skin, lower quality fingerprint images and degraded recognition performance. Both touch-based and touch-less systems are affected. Vein-based recognition systems are more robust to variations in skin condition. In contrast to traditional touch-based vein sensors, touch-less capture devices introduced in the last two years can reduce the risk of infection from contact with a contaminated surface. Further research is nonetheless needed to bridge the gap between the performance of less constrained, touch-less systems and their better constrained touch-based counterparts.
Like facial biometrics, voice biometric systems are also impacted by the wearing of facial masks, which can interfere with speech production. Like many other forms of illness, COVID-19 infections can also interfere with the human speech production system and thereby degrade recognition performance.
These same effects upon the speech production mechanism, however, offer potential for the detection of pulmonary complications such as those associated with serious COVID-19 infections.
Still, the challenges in ensuring reliable biometric recognition performance have grown considerably during the COVID-19 era and call for renewed research efforts. With many now working or receiving education at home, some of the greatest challenges relate to the use of biometric technology in remote, unsupervised verification scenarios.
This in turn gives greater importance to continuous authentication, presentation attack detection, or biometric template protection to ensure security and privacy in such settings which have come to so define the COVID-19 era.
\bibliographystyle{IEEEtran}
\label{sec1}
Quantum computing has the properties of superposition and entanglement, which grant quantum speedup for certain problems \cite{alexeev2021quantum, ding2020quantum, gokhale2020optimization, niu2020hardware}. Compared with classical computing, exponential speedup has been demonstrated in areas such as quantum chemistry \cite{peruzzo2014variational,cao2019quantum}, finance \cite{ghosh2018identifying}, and machine learning \cite{jiang2021co, wang2021exploration, jiang2021machine}. Therefore, quantum computing has been considered among the best candidates for solving some complex computational problems that cannot be solved by classical computers. In recent decades, technologies such as the superconducting transmon that support quantum computers have evolved substantially.
Google released Sycamore \cite{arute2019quantum} in 2019 and claimed to achieve quantum supremacy by sampling a 53-qubit circuit whose output distribution is difficult for a classical computer to simulate in principle. IBM also released a 127-qubit quantum computer at the end of 2021. It is probable that quantum computers with around 1,000 qubits can be manufactured in the coming decade.
The workflow to run programs on quantum hardware can be divided into two subsequent steps. Firstly, the quantum programs are synthesized and compiled into quantum gates. When the programs are sent to the real quantum hardware, the quantum gates will be transformed into quantum pulses with a look-up table. For currently available superconducting quantum computers, the final control signals are in the form of microwave pulses.
A major focus of quantum programs today is the variational quantum algorithm, especially for machine learning tasks, such as RobustQNN \cite{wang2021roqnn} and QuantumFlow \cite{jiang2021co}, which use a classical optimizer to train a parametrized quantum circuit.
Most variational quantum algorithms manipulate the parameters at the gate level. For example, quantum neural networks (QNN) encode the input data \cite{wang2021quantumnas, tacchino2020variational, wang2022chip} and perform machine learning tasks on a quantum computer by building and training parametric quantum gate networks. Some state-of-the-art QNNs obtain high accuracy on several classical datasets, demonstrating a potential advantage \cite{wang2021quantumnas}. Robust QNNs have also been proposed to mitigate the high noise of NISQ quantum computers \cite{liang2021can}, and decent accuracy has been demonstrated even in the presence of noise \cite{wang2021roqnn}. Some recent work tackles quantum problems with machine learning; for example, reinforcement learning has been adopted to find optimal parameters at the circuit level \cite{ostaszewski2021reinforcement}.
However, variational quantum gates have limited flexibility: a controlled rotation gate comes with only one trainable parameter. As pointed out by \cite{sim2019expressibility}, the expressibility and entangling capability of parameterized quantum circuits are mainly affected by the number of parameters. Pulse-level operations, being lower than the gate level in the quantum computing stack, can provide finer control and thus grant more flexible manipulation of internal parameters. Therefore, we contemplate that the increased number of parameters at the pulse level will give us advantages during the training process while maintaining the circuit latency.
We term such a novel paradigm {\em variational quantum pulse (VQP) learning}, which, though intuitively appealing, is uncharted territory.
Note that a seemingly relevant line of study concerns algorithms to generate pulses that manipulate qubits \cite{leung2017speedup, khaneja2005optimal, caneva2011chopped, peng2021deep, porotti2022gradient, sivak2021model}. Our goal is fundamentally different: we are not interested in effective ways to generate pulses, but rather in making various pulse parameters learnable for machine learning tasks.
The architecture of VQP learning can be divided into three parts. The first part splits the traditional QNN into an encoding circuit and a trainable circuit (VQC). The encoding circuit is converted into a pulse schedule and saved for later use. The trainable circuit is also converted into a pulse schedule, from which the amplitudes are extracted; these amplitudes are the parameters that will be updated in the training process. The second part uses an optimization framework to iteratively update the amplitudes with the goal of minimizing the error of classification tasks. The third part reconstructs the pulses with the group of amplitudes obtained from optimization. The trained VQP can then be used for inference tasks. The major contributions of this work are summarized as follows:
\begin{itemize}
\item This is the first work that verifies the trainability of VQP for machine learning tasks. We train VQP for binary classification tasks and obtain stable results in noisy environments.
\item Comparing the results of VQC learning and VQP learning on the same tasks demonstrates the advantages of VQP and sheds light on why VQP learning is more promising.
\end{itemize}
The paper is organized as follows. Section \ref{sec1} briefly introduces the background for VQP. Section \ref{sec2} introduces the motivation of this work, gives a definition of VQP, and discusses the possible advantages of VQP learning. Section \ref{sec3} proposes a framework for VQP learning, describing the components of the framework and the optimization methods. Section \ref{sec4} presents the experimental results that validate the idea of VQP. Section \ref{sec5} discusses the purpose of extracting pulse amplitudes from the pulses, and Section \ref{sec6} concludes the paper with a perspective on the future of VQP learning.
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/VQP.pdf}}
\caption{Conceptual illustration of VQP for QML tasks.}
\label{fig:VQP}
\end{figure}
\section{Background and Motivation}
\label{sec2}
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/VQC.pdf}}
\caption{An example QNN that uses VQC for QML tasks.}
\label{fig:VQC}
\end{figure}
Our idea mainly stems from the use of the VQC for machine learning tasks. A VQC parameterizes quantum gates \cite{kandala2017hardware}, e.g., by assigning a value to the angle of a rotation gate, thus making the quantum circuit trainable. The training parameter is actually the angle of the rotation gate, which can also be understood as phase-shift training. More specifically, a variational quantum state $|\psi(x, \theta)\rangle = \Phi(x, \theta) |0......0\rangle$ is prepared by parameterizing the quantum circuit, where $x$ is the input data, $\theta$ is a set of free variables for training, and $\Phi(\cdot)$ is the parameterized quantum circuit. Usually, the VQC is trained by quantum-classical co-optimization to find the optimal set of parameters for the circuit. A quantum neural network (QNN) is a quantum machine learning model that uses a VQC to encode features of the input data and then performs complex-valued linear transformations. For image classification tasks, it is necessary to first form a quantum encoding circuit by encoding pixels as angles of rotation gates, and then use a VQC to compute and process the information passed to it. After the computation, we measure the qubits along the Z-axis to obtain expectation values, and then calculate the corresponding probabilities, which are already classical numbers because the measurement carries out the transformation from quantum states to classical data. VQC and QNN have great potential for applications in quantum machine learning (QML) \cite{biamonte2017quantum, ahsan2022quantum} and quantum simulation and optimization \cite{moll2018quantum, cheng2020accqoc}.
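As a concrete illustration, the following minimal sketch builds a toy two-qubit VQC of this kind with Qiskit; the specific gate choice and rotation encoding are simplified placeholders rather than the exact baseline circuit of Figure \ref{fig:VQC}.
\begin{verbatim}
# Toy 2-qubit VQC: feature encoding + trainable gates (illustrative only).
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

x0, x1 = Parameter("x0"), Parameter("x1")                 # input features
t0, t1, t2 = (Parameter(f"theta{i}") for i in range(3))   # trainable angles

qc = QuantumCircuit(2)
qc.ry(x0, 0)           # encoding circuit: features as rotation angles
qc.ry(x1, 1)
qc.u(t0, t1, t2, 0)    # trainable single-qubit rotation
qc.cx(0, 1)            # entangling gate
qc.measure_all()       # Z-basis measurement for expectation values

# Binding concrete values yields an executable circuit:
bound = qc.assign_parameters({x0: 0.2, x1: 0.6, t0: 0.1, t1: 0.3, t2: 0.5})
\end{verbatim}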
\subsection{Variational Quantum Pulse}
Inspired by the VQC, we propose the concept of the VQP. First of all, a general understanding is needed: at the software level, the workflow of quantum computing starts from the logical quantum circuits of high-level programming. The quantum circuits are then mapped to the physical qubits by the transpiler, and the quantum gates are further translated into quantum pulses through a look-up table. These pulses are what the quantum machine actually processes. In this paper, we define a VQP as a set of trainable quantum pulses that are parameterized, where a trainable pulse is a pulse described by specific parameters such as frequency, duration, amplitude, and shape (e.g., Gaussian, square wave, etc.).
\subsection{Variational Quantum Pulse Learning}
Also inspired by the fact that the VQC can be trained, we believe that the VQP has the potential to be trainable. However, the wide variety of VQP parameters can make the search space for training so large that the whole training becomes infeasible.
In this case, we narrow the trainable parameters down to the amplitudes. As mentioned in \cite{wright2022deep}, a physical neural network can be constructed within a physical system in nature as long as parameters and an executable simulator exist, which also means that the parameters are trainable. The results of \cite{meitei2021gate} also suggest that pulses should be trainable.
Moreover, training the amplitudes actually pulls and pushes the set of pulses in the pulse model without changing the shape of the pulses (the nature of a quantum gate is mainly determined by the shape; e.g., the entangling function of the CX gate is not affected by modifying amplitudes within a limited range). With the support of Qiskit's OpenPulse \cite{mckay2018qiskit}, we can include the amplitudes of both the drive channels and the control channels in VQP training, thereby training parameters that cannot be considered in a parameterized quantum circuit. The control channels are device-dependent: their role is described by the Hamiltonian returned by the device, with enough detail to allow its operation. Based on this property, making the amplitudes in control channels learnable may provide tolerance to noise on each specific device.
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/motivation.pdf}}
\caption{Comparison between VQC and VQP on a binary classification task from mini-MNIST datasets on qiskit pulse simulator with system model from ibmq\_quito.}
\label{fig:motivation}
\end{figure}
Furthermore, given the limited variety of quantum gates available in quantum circuits, we should not limit our vision to the existing quantum gates. The training of VQP can be viewed as a black-box optimization problem in which the quantum gates corresponding to the optimized pulses in the black box are unknown, but these unknown gates may yield better results for our QML tasks. Figure \ref{fig:VQP} illustrates the concept of using VQP for classification tasks. The process is quite similar to that of a QNN using a VQC. We first use an encoding circuit to encode the image pixels by putting them as angles in rotation gates, and transform it into the pulse format. Then we build the trainable pulses, either transformed from a circuit (e.g., a VQC) or constructed directly with parameters in the pulse builder, and perform VQP training to process the pulses, which can run on a pulse simulator or a real quantum machine. After that, we measure all the qubits, store the information in the acquire channels for calculating the expectation values, and then obtain the probabilities from the expectation values using Softmax.
We set up a simple experiment on MNIST two-class classification to see whether VQP learning can show advantages over VQC learning. We use the qiskit pulse simulator with the system model from ibmq\_quito as the backend to process the quantum circuits and pulse schedules. Due to the limited resources on the quantum computer, we configured the experiments using 10, 20, and 100 images for training and the same numbers of images for testing, respectively. The baseline VQC is randomly generated by interleaving U3 and CU3 gates, as shown in Figure \ref{fig:VQC}. The results for the three settings are shown in Figure \ref{fig:motivation}. In all cases, VQP learning achieves much better accuracy than VQC learning.
Apparently, the performance of VQP learning, like that of all other learning frameworks, heavily depends on how well the training is carried out. Yet a major and unique challenge for VQP learning is that back-propagation cannot be computed through the backend of qiskit OpenPulse. Therefore, a non-gradient-based optimization framework is needed to enable VQP learning. This will be discussed in the following sections.
\begin{table}[]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{5.5pt}
\footnotesize
\begin{tabular}{cccc}
\toprule
\midrule
\multirow{3}{*}{Form of CX gate} & \multicolumn{3}{c}{Time Duration} \\
& Pulse simulator & Pulse simulator & Pulse simulator \\
& (Quito) & (Belem) & (Jakarta)\\\midrule
CRX($\pi$) gate & 26832.0dt & 32016.0dt & 26832.0dt \\ \midrule
CX gate & 25136.0dt & 27728.0dt & 25136.0dt \\ \midrule
\bottomrule
\end{tabular}
\caption{Time duration for CX gate in different forms on pulse simulator with system models of ibmq\_quito, ibmq\_belem, and ibmq\_jakarta.}
\vspace{6pt}
\label{CXgate}
\vspace{-15pt}
\end{table}
\subsection{Advantages of VQP learning}
For quantum computers in the NISQ era, noise is a very big problem. VQC training architectures always need tens or even hundreds of gates to support learning. For VQP learning, pulses have more parameters, so the number of gates needed is greatly reduced. The reduction in the number of gates directly leads to lower latency, which means that the decoherence error can be reduced significantly. In this case, the process of VQP learning can be more robust to noise than VQC learning.
Moreover, the U gate is composed of some RZ gates as well as two fixed RX gates, RX($-\pi/2$) and RX($\pi/2$). This means that for circuit-level training of the U gate, the angle changes are only applied through the virtual Z rotations and have no real impact on the physical amplitudes. On the other hand, VQP learning, with amplitude as a parameter, actually changes the physical amplitudes of the pulses.
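For reference, this structure corresponds to the standard decomposition of the U gate into virtual Z rotations and two fixed $\pi/2$ X rotations, which holds up to a global phase:
\begin{equation*}
U(\theta, \varphi, \lambda) = R_Z(\varphi)\, R_X(-\pi/2)\, R_Z(\theta)\, R_X(\pi/2)\, R_Z(\lambda).
\end{equation*}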
Finally, VQP can also reduce the time duration of some special quantum gates. For example, in gate-level training, a CX gate needs to be treated as a special CRX($\pi$) gate, whose time duration is shown in Table \ref{CXgate} for the pulse simulators with system models of ibmq\_quito, ibmq\_belem, and ibmq\_jakarta. At the pulse level, in contrast, we can train directly on the pulse of the CX gate, whose time duration is shorter on the same machine.
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/workflo.pdf}}
\caption{Overview of the VQP learning framework.}
\label{fig:workflo}
\end{figure}
\section{VQP Learning Framework}
\subsection{Overview}
\label{sec3}
In this section, we present an overview of the VQP learning framework. As shown in Figure \ref{fig:workflo}, we use an optimizer that continuously communicates with the environment. We first take a parameterized quantum circuit as a baseline and transform it into a pulse schedule. Then we feed this set of pulses into the pulse optimization framework and give the amplitude list of the pulses and the corresponding accuracies to the optimizer as the initial parameters. After finding the optimized amplitude list with the lowest error rate, we reconstruct the pulses based on this amplitude list. The reconstructed pulses are finally deployed and executed on the pulse simulator in the qiskit test mock or on a real quantum device. Subsection B below describes the circuit-to-pulse transformation process. Subsection C proposes the optimization framework for VQP learning. Subsection D illustrates the process of pulse reconstruction based on the optimized amplitudes.
\subsection{Quantum Circuit \& Pulse Transform}
The baseline for this work is a VQC that can also be used for QML tasks; an example is shown in Figure \ref{fig:VQC}. Since there is no existing method for pulse encoding, we choose to continue using the circuit encoding approach to encode classical data into quantum states. We then transform the encoding circuit part, which already contains the input data, into a pulse schedule; this can be achieved with qiskit's OpenPulse. After successfully encoding the data, we transform the trainable circuit part into a pulse schedule by the same method, in order to ensure the fairness of the comparison in the experiments. After the transformation, we obtain two parts of the pulse schedule: the encoding pulses and the variational quantum pulses, which can also be considered trainable pulses. We then fix the encoding pulses, and only the variational quantum pulses are considered in the following training process.
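As an illustrative sketch (not our exact implementation), the transformation and amplitude extraction can be written with Qiskit's pulse module as follows, assuming a backend that supports OpenPulse; the amp attribute follows Qiskit's parametric pulse library and may differ between versions.
\begin{verbatim}
# Hedged sketch: transpile a circuit, build its pulse schedule, and
# collect the complex amplitude of every Play instruction.
from qiskit import transpile, schedule as build_schedule
from qiskit.pulse import Play

def extract_amplitudes(circuit, backend):
    sched = build_schedule(transpile(circuit, backend), backend)
    amps = []
    for _, instr in sched.instructions:   # (start_time, instruction) pairs
        if isinstance(instr, Play) and hasattr(instr.pulse, "amp"):
            amps.append(instr.pulse.amp)  # complex pulse amplitude
    return sched, amps
\end{verbatim}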
\subsection{Pulse Optimization Framework}
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/Bovqp.pdf}}
\caption{The schematic of pulse optimization framework.}
\label{fig:BO}
\end{figure}
A wide range of algorithms are possible candidates for this framework, such as reinforcement learning, evolutionary and genetic algorithms~\cite{kennedy1995particle,Ingber1993SA}, Bayesian optimization~\cite{gao2022bayesopt,gao2019bayesopt2,shahriari2016bayesopt}, etc. For this optimization framework, we extract the amplitudes of the pulses obtained as described in the previous subsection. At this point, the amplitudes are complex numbers, which the pulse optimization framework cannot handle directly. However, from each amplitude we can compute the magnitude and angle of the complex number. We then combine the computed magnitudes and angles into a single array, which can be trained and optimized by the pulse optimization framework. It is worth noting that we need to revert this array back to complex numbers in the objective function to obtain the amplitudes and then reconstruct the pulses, because this objective function is evaluated on a qiskit pulse simulator or a real quantum machine. Doing so allows our pulse optimization framework to tolerate random noise, which we believe is beneficial on NISQ-era quantum devices.
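A minimal numpy sketch of this complex-to-real encoding and its inversion is given below; the amplitude values are illustrative.
\begin{verbatim}
# Split complex amplitudes into (magnitude, angle) pairs for a
# real-valued optimizer, then recombine them inside the objective.
import numpy as np

amps = np.array([0.1 + 0.2j, -0.3 + 0.05j])            # example amplitudes
flat = np.concatenate([np.abs(amps), np.angle(amps)])  # optimizer variables

n = len(amps)
amps_rec = flat[:n] * np.exp(1j * flat[n:])            # revert to complex
assert np.allclose(amps_rec, amps)
\end{verbatim}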
Here, we use Bayesian optimization (BO)~\cite{gao2022bayesopt,gao2019bayesopt2,shahriari2016bayesopt} to instantiate the optimization framework: combining the properties of pulses, we propose a pulse BO framework. As shown in Figure \ref{fig:BO}, we first take the amplitude list extracted from the pulses as the initial input for the BO. After the quantum machine executes, we obtain an error rate, which serves as the initial objective value and is also part of the input to the BO. At the same time, we need to set the search space for the BO: for an executable pulse, its norm needs to be no larger than one.
We build a Gaussian Process (GP) regression model with an RBF kernel based on the initial amplitude list, yielding a surrogate model for the real simulation. The model calculates the mean and variance of the function values at each point. With the GP model accessible, we construct and optimize an acquisition function, which is used to decide where to sample next. We choose the lower confidence bound (LCB) as the acquisition function in this work, with the aim of finding the minimum error rate. After finding the minimum error rate, we take the currently best searched list, denormalize it, and recombine the magnitudes and angles of this list into amplitudes, obtaining the optimized amplitude list.
\begin{algorithm}[t]
\caption{Variational Pulse BO Learning}\label{alg:one}
\KwData{$\rho$, $\chi$, $M$, $D$}
// $\rho$ is the amplitude list, $\chi$ is the search bound, $D$ consists of $x_i$ and $y_i$, $M$ is the Gaussian Process Regression model. \\
$D \gets $InitPulses$(\rho, \chi)$\;
\For{$i \gets |D|$ to $N\_total$}{ // iterative optimization
$p(y|x,D) \gets FitModel(M, D)$\;
// Acquisition function actively searches for the next optimized amplitudes.\\
$x_i$ $\gets$ $argmin_{x \in \chi}S(x, p(y|x,D))$\;
// Calculate corresponding error rate by processing in quantum machine.\\
$y_i$ $\gets$ $f(x_i)$\;
$D$ $\gets$ $D \bigcup (x_i, y_i)$\;
}
\end{algorithm}
Algorithm \ref{alg:one} presents simplified pseudo-code of the proposed variational pulse BO learning framework. $\rho$ is the amplitude list from the initial pulses for optimization; $\chi$ is the search bound, constraining the norm of the pulse amplitudes to be less than or equal to one; $D$ consists of $x_i$ and $y_i$, where $x_i$ is the hyper-parameter (the amplitude list during optimization in this case) and $y_i$ is the error rate obtained by executing $x_i$ on the quantum machine; $M$ is the Gaussian Process regression model. By running inference with $\rho$ on the quantum device backend, we obtain the initial $(x_i, y_i)$ pairs. We set $N\_total$ as the number of optimization iterations, set up the Gaussian process regression model with its kernel over the normalized $y_i$, and fit the normalized $x_i$ to the Gaussian form. Then, the acquisition function $S$ selects parameters subject to the constraints, aiming to minimize $y$. Finally, we compute the updated $y_i$ with the updated $x_i$.
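The following Python sketch mirrors Algorithm \ref{alg:one} using scikit-learn's GP regression with an RBF kernel and an LCB acquisition function; the box-shaped candidate sampling and the evaluate objective (pulse reconstruction plus execution, returning an error rate) are simplified placeholders rather than our exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def bo_search(evaluate, dim, n_init=5, n_total=30, kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_init, dim))  # init within bound chi
    y = np.array([evaluate(x) for x in X])          # initial error rates
    for _ in range(n_init, n_total):
        gp = GaussianProcessRegressor(kernel=RBF()).fit(X, y)
        cand = rng.uniform(-1.0, 1.0, size=(1024, dim))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmin(mu - kappa * sigma)]  # LCB: minimize error
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate(x_next))
    return X[np.argmin(y)], y.min()
\end{verbatim}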
\subsection{Pulse Reconstruction}
After obtaining the optimized amplitude list, we design a waveform reconstruction function. This function first stores the optimized amplitudes in a modified list. It then extracts the amplitude values from the initial pulses and overwrites the initial amplitude values one by one with the optimized amplitudes from the modified list. The other parameters of the initial pulses are kept unchanged, so that new pulses are built with only the amplitudes changed.
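A hedged sketch of this reconstruction, as the counterpart of the extraction sketch above, is shown below; it assumes Qiskit parametric pulses whose constructor arguments are exposed via their parameters dictionary, which may differ between versions.
\begin{verbatim}
# Rebuild a schedule with optimized amplitudes; all other pulse
# parameters (duration, sigma, shape, channel, timing) are unchanged.
from qiskit.pulse import Play, Schedule

def reconstruct(sched, new_amps):
    new_sched, k = Schedule(), 0
    for t0, instr in sched.instructions:
        if isinstance(instr, Play) and hasattr(instr.pulse, "amp"):
            params = dict(instr.pulse.parameters)
            params["amp"] = new_amps[k]; k += 1
            instr = Play(type(instr.pulse)(**params), instr.channel)
        new_sched = new_sched.insert(t0, instr)
    return new_sched
\end{verbatim}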
\section{Experiments}
\label{sec4}
\subsection{Experiments Setup}
\textbf{Dataset:} We evaluate the proposed method using the binary classification task from \cite{jiang2021co}.
In generating the dataset, we associate 1 of 2 classes with two input values ($x$ and $y$).
For example, we associate class 0 with inputs $x = 0.2$ and $y = 0.6$, and class 1 with $x = 0.8$ and $y = 0.8$.
On top of the created dataset, we divide the data into a train set and a test set. We also use the MNIST dataset: we center-crop the images to 24$\times$24 and then down-sample them to 4$\times$4 for the two-class task. We also ensure that the numbers of images drawn from the different classes are the same.
\textbf{VQP Framework and Baseline:}
We apply a gate-based QNN as the baseline, implemented with TorchQuantum, an existing library for quantum machine learning.
Figure \ref{fig:VQC} shows the detailed QNN structure of the baseline.
In the VQP framework, we obtain the initial pulses by converting the baseline QNN, and then learn the parameters in the pulses by iteratively tuning them on the pulse simulator/real quantum devices.
The number of iterations is set to 30. With a trained VQP, we stream the test set to obtain the accuracy.
In the evaluation, we apply both a gradient-based method and the same BO framework for VQC training.
The optimization objective is to minimize the error rate (i.e., $1 - accuracy$). Limited by the properties of the optimizer when training parameters in a high-dimensional space, the optimization results are unstable, so we run five seeds for every benchmark and report the average in this paper.
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/Coupling.pdf}}
\caption{The coupling map of all quantum devices we use.}
\label{fig:coupling}
\end{figure}
\textbf{Measurement of VQP Learning: }After running the pulse schedule on the quantum machine, we obtain the quantum state $\phi$ of the computational qubit $|q\rangle$.
By setting the number of shots to $x$, say $x=256$, we obtain the counts of results in state $|0\rangle$ (corresponding to an output of +1) and state $|1\rangle$ (corresponding to an output of -1).
For example, if we get 156 counts of $|0\rangle$ and 100 counts of $|1\rangle$, we can calculate the expectation value as $(156 \cdot 1 + 100 \cdot (-1))/256 = 0.21875$.
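This computation can be written as a one-line function over a Qiskit-style counts dictionary:
\begin{verbatim}
# Z-expectation from single-qubit counts (shots = 256 in the example).
def expectation(counts, shots):
    return (counts.get("0", 0) - counts.get("1", 0)) / shots

assert expectation({"0": 156, "1": 100}, 256) == 0.21875
\end{verbatim}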
\textbf{Quantum Hardware and Backend Setting: }We use the pulse simulators and quantum computers provided by IBM Quantum.
First, we employ pulse simulators with the system models of ibmq\_quito, ibmq\_belem, and ibmq\_lima; these simulators can be imported from the qiskit test mock and pull real calibration data from the real machines. Note that the pulse simulator does not consider the physical errors of the qubits, but algorithmic errors occur during Hamiltonian estimation. To keep circuit learning and pulse learning in a fair environment, we run both on the pulse simulator: at the pulse level, we train on pulses and execute them on the pulse simulator in every optimization iteration; at the circuit level, we train on the circuit, transform the optimized circuit into pulses, and then execute them on the pulse simulator in every optimization iteration.
Second, we also conduct the evaluation on ibmq\_jakarta, a real quantum machine that can process pulse schedules.
Kindly note that noise is considered both in the pulse simulator (algorithmic error) and on the real machine (physical error from the qubits), meaning that we evaluate the proposed method in a noisy environment.
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{5pt}
\footnotesize
\begin{tabular}{ccccccc}
\toprule
\hline
\multirow{3}{*}{Task Data} & \multicolumn{3}{c}{Accuracy} \\
& Pulse simulator & Pulse simulator & Pulse simulator \\
&(Quito) & (Belem) & (Lima) \\ \midrule
Initial 20 & 0.45 & 0.45 & 0.5 \\
\textbf{+ VQP learning} & \textbf{0.65} & \textbf{0.6} & \textbf{0.55}\\ \midrule
Initial 100 & 0.51 & 0.5 & 0.5 \\
\textbf{+ VQP learning} & \textbf{0.57} & \textbf{0.63} & \textbf{0.61} \\ \midrule
Initial MNIST 20 & 0.5 & 0.5 & 0.45 \\
\textbf{+ VQP learning} & \textbf{0.55} & \textbf{0.66} & \textbf{0.6} \\ \midrule
Initial MNIST 100 & 0.5 & 0.5 & 0.51 \\
\textbf{+ VQP learning} & \textbf{0.57} & \textbf{0.61} & \textbf{0.61} \\ \midrule
\bottomrule
\end{tabular}
\caption{Binary classification task results for different dataset sizes on pulse simulators with system models from ibmq\_quito, ibmq\_belem, and ibmq\_lima, respectively.}
\vspace{6pt}
\label{result1}
\vspace{-15pt}
\end{table}
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{6.5pt}
\footnotesize
\begin{tabular}{ccccl}
\toprule
\midrule
\multirow{2}{*}{Model} & \multicolumn{2}{c}{Accuracy} \\
& Pulse simulator (Belem) & ibmq\_jakarta \\ \midrule
VQC learning 20 & 0.57 & 0.58 \\
\textbf{VQP learning 20} & \textbf{0.6} & \textbf{0.69} \\ \midrule
VQC learning 100 & 0.61 & 0.59 \\
\textbf{VQP learning 100} & \textbf{0.63} & \textbf{0.64} \\ \midrule
VQC learning MNIST 20 & 0.6 & 0.56 \\
\textbf{VQP learning MNIST 20} & \textbf{0.66} & \textbf{0.62} \\ \midrule
VQC learning MNIST 100 & 0.57 & 0.62 \\
\textbf{VQP learning MNIST 100} & \textbf{0.61} & \textbf{0.71} \\ \midrule
\bottomrule
\end{tabular}
\caption{Classification task results of VQP learning vs. VQC learning with the same BO framework on the pulse simulator with the system model from ibmq\_belem and on ibmq\_jakarta.}
\vspace{6pt}
\label{result2}
\vspace{-15pt}
\end{table}
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{25.5pt}
\footnotesize
\begin{tabular}{ccc}
\toprule
\midrule
Model & \# of Gates & Accuracy \\ \midrule
VQC\_base & 9 & 0.62 \\ \midrule
\textbf{VQP} & \textbf{9} & \textbf{0.71} \\ \midrule
VQC* & 12 & 0.68 \\ \midrule
\bottomrule
\end{tabular}
\caption{Comparison on the MNIST two-class classification task with 100 images on ibmq\_jakarta between VQC\_base, VQP, and VQC* with different numbers of gates, all using the same BO framework.}
\label{result3}
\end{table}
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{20pt}
\footnotesize
\begin{tabular}{ccc}
\toprule
\midrule
Model & \# of Gates & Accuracy \\ \midrule
\textbf{VQP} & \textbf{9} & \textbf{0.71} \\ \midrule
VQC with gradient & 9 & 0.73 \\ \midrule
VQC* with gradient & 12 & 0.77 \\ \midrule
\bottomrule
\end{tabular}
\caption{Results on ibmq\_jakarta for VQC with different numbers of gates trained with a gradient-based method, and VQP trained with the BO framework.}
\vspace{6pt}
\label{WithBO}
\vspace{-15pt}
\end{table}
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{5pt}
\footnotesize
\begin{tabular}{cccc}
\toprule
\midrule
\multirow{2}{*}{Model} & \multirow{2}{*}{\# of Gate} & \multicolumn{2}{c}{Time Duration} \\
& & ibmq\_jakarta & Pulse simulator (Belem) \\ \midrule
\textbf{VQP} & \textbf{9} & \textbf{40816.0dt} & \textbf{45168.0dt} \\ \midrule
VQC* & 12 & 58896.0dt & 58768.0dt \\ \midrule
\textbf{VQP\_transpiled} & \textbf{11} & \textbf{32368.0dt} & \textbf{32816.0dt} \\ \midrule
VQC*\_transpiled & 17 & 53008.0dt & 46192.0dt \\ \midrule
\bottomrule
\end{tabular}
\caption{Time duration of VQP, VQC*, VQP\_transpiled, and VQC*\_transpiled on ibmq\_jakarta and the pulse simulator with the system model of ibmq\_belem.}
\vspace{10pt}
\label{models}
\vspace{-15pt}
\end{table}
\begin{table}[t]
\centering
\renewcommand*{\arraystretch}{1}
\setlength{\tabcolsep}{2.6pt}
\footnotesize
\begin{tabular}{lccc}
\toprule
\midrule
\multicolumn{1}{r}{\small \underline{Use system model of $\rightarrow$}} & \multirow{2}{*}{Pulse simulator (Lima)} & \multirow{2}{*}{Pulse simulator (Quito)} \\
{\small \underline{Inference on $\downarrow$}} & & \\
\midrule
Pulse simulator (Lima) & \cellcolor{blue!20} \textbf{0.65} & \cellcolor{red!20} 0.4 \\
Pulse simulator (Quito) & \cellcolor{red!20} 0.35 & \cellcolor{blue!20} \textbf{0.55} \\
\midrule
\bottomrule
\end{tabular}
\caption{Accuracy when training and testing across different system models.}
\vspace{6pt}
\label{dependence}
\vspace{-15pt}
\end{table}
\subsection{Main Results}
\textbf{VQP Learning Result: }
We take the result of running the initial pulses on the quantum machine as the initial result, and we set up tasks of different sizes: (1) 20 train and 20 test data, described as 'Initial 20'; (2) 'Initial 100' for 100 train and 100 test data; (3) 'Initial MNIST 20' for 20 train and 20 test images of MNIST two-class; (4) 'Initial MNIST 100' for 100 train and 100 test images of MNIST two-class.
Table \ref{result1} reports the experimental results. VQP learning improves accuracy in all settings.
Specifically, compared with 'Initial 100', VQP learning achieves an average improvement of 10\% in accuracy, and the improvement of VQP over 'Initial MNIST 20' is 12\%.
\textbf{Comparison between VQP and VQC: }We also compare the results of VQC and VQP on the same tasks. VQC uses the nine-gate learnable circuit.
For a fair comparison, we performed the same 30-iteration BO on two backends, the pulse simulator with the system model of ibmq\_belem and the real quantum machine ibmq\_jakarta, executing 256 shots each.
As can be seen from Table \ref{result2}, VQP learning achieves 60\% and 69\% accuracy on the 20-data task, whereas VQC learning obtains 57\% and 58\% accuracy on the pulse simulator (Belem) and ibmq\_jakarta, respectively. VQP learning shows better performance in all 8 task groups and achieves up to 71\% accuracy on the MNIST two-class 100-image classification task; VQC learning also attains its best performance on this task, but with 62\% accuracy.
One observation is that VQP learning can obtain up to 94\% accuracy in a noisy environment on a real quantum machine, but sometimes only yields marginal gains. This is because non-gradient-based optimizers sometimes cannot handle hyperparameters in high-dimensional spaces. Thus, a differentiable simulator is in high demand for pulse learning. However, from another perspective, we can say that 94\% is at least a near-optimal solution achievable by VQP learning.
\begin{figure*}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/visulizepulse.pdf}}
\caption{Pulse visualization of CX gate before and after amplitude tuning in pulse simulator (Belem).}
\label{fig:vis}
\end{figure*}
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figure/vary.pdf}}
\caption{Schematic of varying the amplitude of pulse.}
\label{fig:vary}
\end{figure}
Overall, from the above results, VQP learning, which optimizes amplitude parameters at the pulse level, shows an advantage over VQC learning trained on angle parameters at the circuit level under the same conditions.
\textbf{Further Experiments on VQP and VQC: }To verify the conjecture about the advantage of VQP learning made in Section \ref{sec2}, we conduct another set of experiments comparing VQP and VQC learning.
As can be seen from Table \ref{result3}, VQC* with 12 gates achieves 68\% accuracy on MNIST two-class classification, while VQC\_base with nine gates achieves only 62\% accuracy.
This indicates that the training results improve as the number of gates in the VQC increases (i.e., with more parameters); however, VQP learning with only 9 gates still outperforms VQC learning with 12 gates.
This observation illustrates that VQP learning has more trainable parameters than the gate-based VQC, enabling better learning.
In Table \ref{WithBO}, we report the results of VQC learning with a gradient-based method on ibmq\_jakarta.
It is obvious that training VQC with a gradient-based method brings benefits; with gradients, VQC learning with 9 gates already performs better than VQP learning in terms of accuracy.
\textbf{Device Dependence of VQP: }VQP is device-dependent. As shown in Table \ref{dependence}, if we train on the pulse simulator (Lima) and test on the pulse simulator (Lima), we obtain 65\% accuracy, but if we train on the pulse simulator (Lima) and test on the pulse simulator (Quito), the accuracy is only 35\%, which shows that device specificity is important. This is because the pulses for the same gate can vary from model to model. Moreover, following \cite{proctor2022measuring}, we believe that the device dependence of VQP in quantum machines also results from the noise being structure-dependent.
\section{Discussion}
\label{sec5}
We further discuss the purpose of extracting pulse amplitudes from the pulse schedules. Here, we provide a visualization of the change of the variational quantum pulses during optimization. Based on Equation \ref{eq1}, the Hamiltonian of the control pulses is updated accordingly in the training process.
Moreover, the latency of 12 gates is necessarily larger than that of 9 gates.
On ibmq\_jakarta and the pulse simulator (Belem), we tested the time duration of the 9-gate VQP against the 12-gate VQC*, which reach similar accuracy when both use the BO framework.
From Table \ref{models}, we can see that the time duration of the 9-gate VQP is much shorter than that of the 12-gate VQC* in all cases.
From this result, we can intuitively see the advantage of VQP learning over VQC learning in terms of latency, which means that the robustness of VQP learning to noise comes not only from the optimization framework but also from the property that its decoherence error is smaller.
\begin{equation}
\begin{aligned}
H = \sum_{i=0}^{1} (U_i(t)+D_i(t)) \sigma_i^{X} +
\sum_{i=0}^{1} 2\pi \nu_i (1-\sigma_i^{Z})/2 \\+
\omega_B a_B a^{\dagger}_B +
\sum_{i=0}^{1} g_i \sigma_i^{X} (a_B + a_B^{\dagger})
\label{eq1}
\end{aligned}
\end{equation}
\textbf{Variation of Pulse Amplitudes: }
As shown in Figure \ref{fig:vary}, the process of VQP learning is equivalent to pulling or pushing the initial pulses. During iterative optimization, multiple attempts are made to vary the amplitudes until the error rate is minimized.
Figure \ref{fig:vis} shows the visualization of the pulse of the CX gate on the pulse simulator (Belem). The plot on the left is the pulse of the CX gate before amplitude tuning, while the plot on the right is the new pulse after amplitude tuning.
This shows how the proposed framework directly tunes the physical amplitudes.
\textbf{Analytical Understanding of Amplitude Tuning: }In the process of VQP learning, we can analyze the physical quantities affected by amplitude tuning according to Equation \ref{eq1}. This equation describes the drive Hamiltonian, where $D_i(t)$ is mixed from the signal on the drive channel of qubit $i$ and the local oscillator (LO) at the corresponding frequency, and $U_i(t)$ is mixed from the signal on the control channel of qubit $i$ and a combination of qubit LOs specified by the device. $\sigma^{X}, \sigma^{Y}$ and $\sigma^{Z}$ are Pauli operators, $\nu_i$ is the estimated frequency of qubit $i$, $g_i$ is the coupling strength between qubits, $\omega_B$ is the frequency of the bus, and $a_B$ and $a^{\dagger}_B$ are the ladder operators of the bus. The quantities actually affected by amplitude tuning are $D_i(t)$ and $U_i(t)$, which can be obtained from Equation \ref{eq2}:
\begin{equation}
\begin{aligned}
D_i(t) &= Re\,(d_i(t)e^{i w_{d_i} t}),\\
U_i(t) &= Re\,[u_i(t)e^{i(w_{d_i} - w_{d_j}) t}],
\label{eq2}
\end{aligned}
\end{equation}
where $d_i(t)$ and $u_i(t)$ are the signals of qubit $i$ on the drive channel and the control channel, respectively. The amplitude is the intensity of the signal, which means that when we tune the amplitudes, we change the intensity of the signal and thereby vary $d_i(t)$ and $u_i(t)$. It follows from Equation \ref{eq2} that the changes in $d_i(t)$ and $u_i(t)$ affect $D_i(t)$ and $U_i(t)$ in Equation \ref{eq1}.
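The following numpy-only sketch (independent of Qiskit) illustrates this relation numerically: scaling the amplitude of the envelope $d_i(t)$ scales the resulting drive $D_i(t)$; the envelope shape and frequency are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

t = np.linspace(0, 1, 256)
envelope = np.exp(-((t - 0.5) ** 2) / (2 * 0.1 ** 2))  # Gaussian shape
w_d = 2 * np.pi * 5.0                                  # drive freq. (a.u.)

for amp in (0.2, 0.4):                                 # amplitude tuning
    D = np.real(amp * envelope * np.exp(1j * w_d * t)) # D(t)=Re(d(t)e^{iwt})
    print(amp, np.abs(D).max())                        # drive strength scales
\end{verbatim}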
\section{Conclusion and Prospective}
\label{sec6}
For a QNN, its potential advantage over classical neural networks is that the search space of the unitary matrix can grow with the number of qubits, so that the neural network can learn more and perform better. For VQP learning, the pulses have more parameters than the circuit, which means pulse learning may obtain better expressibility and entangling capability. The focus of this work is to propose a novel paradigm that uses VQP for quantum learning. We demonstrate the advantages of VQP learning over VQC learning for ML tasks. Also, the reduced decoherence error, due to the lower latency resulting from the small number of gates required in VQP learning, is very important for quantum machines in the NISQ era, since noise is one of the major problems of this era.
The potential of VQP is huge: it allows more flexible tuning and processing of the parameters inside the circuit. As we discussed in the experimental section, it currently achieves its results based only on an unoptimized VQC architecture, and we expect better results when applying it on top of an optimized circuit-level architecture.
The current VQP learning has pitfalls: one is the limited resources of real quantum machines, and the other is that we do not have an effective simulator for pulses. The simulator that supports pulses in the OpenPulse interface provided by Qiskit is slow to execute, which makes it difficult to experiment with larger and more complex tasks at the moment. Moreover, this pulse simulator is not differentiable, which for now restricts VQP learning to non-gradient-based optimizers, which tend to be more stochastic on parametric tasks in high-dimensional spaces. Therefore, an efficient and differentiable pulse simulator is urgently needed.
In addition, training and optimization methods for VQP are worth investigating, and we plan to implement a more efficient optimization and training framework in the future. Gradient-based machine learning for VQP learning may still be possible with advances in the supporting platforms.
\section*{Acknowledgment}
We thank Thomas Alexander for patient guidance on qiskit OpenPulse, and Dr. Xiangliang Zhang for valuable discussions about the optimization framework. We acknowledge the use of IBM Quantum services for this work.
\printbibliography
\end{document}
\section{INTRODUCTION}\label{sec:intro}
Density-Based Spatial Clustering of Applications with Noise (DBSCAN)~\cite{ester1996dbscan} is a typical density-based clustering method that determines the cluster structure according to the tightness of the sample distribution.
It automatically determines the number of final clusters according to the nature of the data, has low sensitivity to abnormal points, and is compatible with any cluster shape.
In terms of application areas, benefiting from its strong adaptability to datasets of unknown distribution, DBSCAN is the preferred solution for many clustering problems, and has achieved robust performance in fields such as financial analysis~\cite{huang2019time_series,yang2014suspicious_transactions}, commercial research~\cite{fan2021consumer,wei2019milan}, urban planning~\cite{li2007public_facility,pavlis2018retail_center}, seismic research~\cite{fan2019seismic_data,kazemi2017iran,vijay2019catalogs}, recommender system~\cite{guan2018social_tagging,kuzelewska2015music}, genetic engineering~\cite{francis2011dna_damage,mohammed2018genes_patterns}, etc.
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{figures/intro.pdf}
\vspace{-1.2mm}
\caption{Markov process of parameter search. The agent uses the data as the environment, and determines the action to search by observing the clustering state and reward.}\label{fig:intro}
\vspace{-3.2mm}
\end{figure}
However, the two global parameters of DBSCAN, the cluster-formation distance $Eps$ and the minimum number of data objects required inside a cluster $MinPts$, need to be manually specified, which brings \textbf{three} challenges to its clustering process.
\textbf{First, parameters free challenge.}
$Eps$ and $MinPts$ have considerable influence on the clustering effect, but they need to be determined a priori.
Methods based on the $k$-distance~\cite{lu2007vdbscan,Mitra2011kddclus} estimate possible values of $Eps$ through significant changes in the curve, but they still need the $MinPts$ parameter to be formulated manually in advance.
Although some improved DBSCAN methods avoid the simultaneous adjustment of $Eps$ and $MinPts$ to varying degrees, they still need to pre-determine the cutoff distance parameter~\cite{diao2018lpdbscsan}, grid parameter~\cite{darong2012grid}, Gaussian distribution parameter~\cite{smiti2012dbscangm}, or a fixed $MinPts$ parameter~\cite{hou2016dsets,akbari2016outlier}.
Therefore, the first challenge is how to perform DBSCAN clustering without tuning parameters based on expert knowledge.
\textbf{Second, adaptive policy challenge.}
Due to the different data distributions and cluster characteristics across clustering tasks, traditional DBSCAN parameter search methods based on fixed patterns~\cite{lu2007vdbscan, Mitra2011kddclus} encounter bottlenecks in the face of unconventional data.
Moreover, hyperparameter optimization methods~\cite{karami2014bdedbscan, bergstra2011tpe, lessmann2005ga} that use external clustering evaluation indices based on label information as objective functions are not effective in the absence of labels.
The methods~\cite{lai2019mvodbscan,zhou2014silhouette_coefficient} that only use internal clustering evaluation indices as objective functions do not require label information, but are limited in accuracy.
In addition, for streaming data that needs to be clustered continuously while the data distribution keeps changing, existing DBSCAN parameter search methods do not focus on how to use past experience to adaptively formulate search policies for newly arrived data.
Thus, how to effectively and adaptively adjust the parameter search policy for the data while being compatible with the lack of label information is regarded as the second challenge.
\textbf{Third, computational complexity challenge.}
Furthermore, parameter search is limited by a parameter space that cannot be estimated in advance.
Searching too many invalid parameters increases the search cost~\cite{Anssi2020large_continuous}, and an overly large search space introduces noise that interferes with clustering accuracy~\cite{dulacarnold2016large_discrete}.
Hence, how to quickly search for the optimal parameters while securing clustering accuracy is the third challenge that needs to be addressed.
\textcolor{black}{In recent years, Deep Reinforcement Learning (DRL) \cite{scott2018td3,lillicrap2015ddpg} has been widely used for tasks lacking training data due to its ability to learn by receiving feedback from the environment \cite{peng2022reinforced}}.
In this paper, to handle the problem of missing optimal DBSCAN parameter labels and the challenges above, we propose \textbf{DRL-DBSCAN}, a novel, adaptive, recursive \textbf{D}eep \textbf{R}einforcement \textbf{L}earning \textbf{DBSCAN} parameter search framework, to stably obtain optimal parameters across multiple scenarios and tasks.
We first take the cluster changes after each step of clustering as the observable state and the parameter adjustment direction as the action, and transform the parameter search process into a Markov decision process in which DRL agents autonomously perceive the environment to make decisions (Fig. \ref{fig:intro}).
Then, through weak supervision, we construct a reward based on a small number of external clustering indices, and fuse the global state and the local states of multiple clusters via an attention mechanism, to prompt the agents to learn how to adaptively perform the parameter search for different data.
In addition, to improve the learning efficiency of the policy network, we optimize the base framework through a recursive mechanism based on agents with different search precisions, to achieve the highest clustering accuracy stably and controllably.
Finally, considering that DBSCAN clustering scenarios may involve no labels, few labels, initial data, or incremental data, we design four working modes for compatibility: a retraining mode, a continuous training mode, a pre-training test mode, and a maintenance test mode.
We extensively evaluate the parameter search performance of {DRL-DBSCAN} with four modes for the offline and online tasks on the public Pathbased, Compound, Aggregation, D31 and Sensor datasets.
The results show that {DRL-DBSCAN} can dispense with manually defined parameters, automatically and efficiently discover suitable DBSCAN clustering parameters, and remain stable across multiple downstream tasks.
The contributions of this paper are summarized as follows:
\textbf{(1)} The first DBSCAN parameter search framework guided by DRL is proposed to automatically select the parameter search directions.
\textbf{(2)} A weakly-supervised reward mechanism and a local cluster state attention mechanism are established to encourage DRL agents to adaptively formulate optimal parameter search policies based on historical experience in the absence of annotations/labels, so as to adapt to data distribution fluctuations.
\textbf{(3)} A recursive DRL parameter search mechanism is designed to provide a fast and stable solution for the large-scale parameter space problem.
\textbf{(4)} Extensive experiments in both offline and online tasks are conducted to demonstrate that the four modes of {DRL-DBSCAN} have advantages in improving DBSCAN accuracy, stability, and efficiency.
\section{PROBLEM DEFINITION}
In this section, we give the definitions of DBSCAN clustering, parameter search of DBSCAN clustering, and parameter search in data stream clustering.
\begin{define}
\textbf{(DBSCAN clustering).}
The DBSCAN clustering is the process of obtaining clusters $\mathcal{C}=\{c_{1}, ..., c_{n}, c_{n+1}, ...\}$ for all data objects $\{v_{1}, ..., v_{j}, v_{j+1}, ...\}$ in a data block $\mathcal{V}$ based on the parameter combination $\boldsymbol{P}=\{Eps, MinPts\}$.
Here, $Eps$ is the maximum distance within which two adjacent objects can form a cluster, and $MinPts$ is the minimum number of adjacent objects within $Eps$ required for an object to be a core point.
The formation of the clusters can be understood as the process of connecting the core points to their adjacent objects within $Eps$ \cite{ester1996dbscan} (as shown in Fig. \ref{fig:intro}).
\end{define}
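For illustration, this clustering step corresponds to the following minimal scikit-learn call, with arbitrary example parameter values:
\begin{verbatim}
# DBSCAN on a feature set X under P = {Eps, MinPts}; -1 marks noise.
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 2)  # feature set of a data block V
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)
\end{verbatim}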
\begin{define}
\textbf{(Parameter search in offline DBSCAN clustering).}
Given the data block $\mathcal{V}=\{v_{1}, ..., v_{j}, v_{j+1}, ...\}$, the parameter search of DBSCAN is the process of finding the optimal parameter combination $\boldsymbol{P}=\{Eps, MinPts\}$ for clustering in all possible parameter spaces.
Here, the feature set $\mathcal{X}$ of data objects in block $\mathcal{V}$ is $\{x_{1}, ..., x_{j},$ $x_{j+1}, ...\}$.
\end{define}
\begin{define}
\textbf{(Parameter search in online DBSCAN clustering).}
Given continuous and temporal $T$ data blocks $\{\mathcal{V}_{1}, ..., \mathcal{V}_{t},$ $\mathcal{V}_{t+1}, ...\}$ as the online data stream, we define the parameter search in online clustering as the process of obtaining the parameter combination $\boldsymbol{P}_{t}=\{Eps_{t}, MinPts_{t}\}$ of the data block $\mathcal{V}_t=\{v_{t,1}, ..., v_{t,j},$ $v_{t,j+1}, ...\}$ at each time $t \in T$.
\end{define}
\section{{DRL-DBSCAN} FRAMEWORK}
\begin{figure*}[h]
\centering
\includegraphics[width=0.99\textwidth]{figures/framework4.pdf}
\vspace{-1.2mm}
\caption{{The core model of {DRL-DBSCAN}}. (a) The recursive mechanism, taking a $3$-layer $6\times6$ parameter space as an example, with a layerwise decreasing parameter space. (b) One layer of {DRL-DBSCAN}, taking the search process in the first layer of the recursive mechanism as an example, which aims to obtain the optimal parameter combination in the parameter space of layer $1$.}
\vspace{-1.2mm}
\label{fig:framework}
\end{figure*}
\textcolor{black}{The proposed {DRL-DBSCAN} has a core model (Fig.~\ref{fig:framework}) and four working modes (Fig.~\ref{fig:task}) that can be extended to downstream tasks.}
We first describe the basic Markov decision process for parameter search (Sec.~\ref{sec:parameter_search}) and the definition of the clustering parameter space and the recursion mechanism (Sec.~\ref{sec:recursion}).
Then, we explain the four {DRL-DBSCAN} working modes (Sec.~\ref{sec:mode}).
\subsection{Parameter Search with DRL}\label{sec:parameter_search}
A fixed DBSCAN parameter search policy lacks the flexibility to handle the variety of clustering tasks.
We propose an automatic parameter search framework {DRL-DBSCAN} based on Deep Reinforcement Learning (DRL), in which the core model can be expressed as a Markov Decision Process $MDP(\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P})$ including state set, action space, reward function and policy optimization algorithm~\cite{mundhenk2000mdp}.
This process transforms the DBSCAN parameter search process into a maze game problem~\cite{zheng2012maze,bom2013pac_man} in the parameter space, aiming to train an agent to search for the end point parameters step by step from the start point parameters by interacting with the environment, and take the end point (parameters in the last step) as the final search result of an episode of the game (as shown in Fig.~\ref{fig:framework}).
Specifically, the agent regards the parameter space and DBSCAN clustering algorithm as the environment, the search position and clustering result as the state, and the parameter adjustment direction as the action.
In addition, a small number of samples are used to reward the agent with exceptional behavior in a weakly supervised manner.
We optimize the policy of the agent based on the Actor-Critic \cite{konda2000actor_critic} architecture.
Specifically, the search process for episode $e$ $(e=1,2,...)$ is formulated as follows:
\emph{\textbf{$\bullet$ State:}}
Since the state needs to represent the search environment at each step as accurately and completely as possible, we consider building the representation of the state from two aspects.
Firstly, for the overall searching and clustering situation, we use a 7-tuple to describe the global state of the $i$-th step $(i=1,2,...)$:
\begin{equation}\label{eq:global_state}
\boldsymbol{s}_{global}^{(e)(i)}= \boldsymbol{P}^{(e)(i)} \ {\cup}\ \mathcal{D}_{b}^{(e)(i)} \ {\cup}\ \big\{{R}_{cn}^{(e)(i)}\big\}.
\end{equation}
Here, $ \boldsymbol{P}^{(e)(i)}=\{Eps^{(e)(i)}$, $MinPts^{(e)(i)}\}$ is the current parameter combination.
$\mathcal{D}_{b}^{(e)(i)}$ is the set of four boundary distances: the distances of $Eps^{(e)(i)}$ from its space boundaries ${B}_{Eps,1}$ and ${B}_{Eps,2}$, and the distances of $MinPts^{(e)(i)}$ from its boundaries ${B}_{MinPts,1}$ and ${B}_{MinPts,2}$.
${R}_{cn}^{(e)(i)}=\frac{|\mathcal{C}^{(e)(i)}|}{|\mathcal{V}|}$ is the ratio of the number of clusters $|\mathcal{C}^{(e)(i)}|$ to the total number of objects $|\mathcal{V}|$ in the data block.
The specific boundaries of the parameters are defined in Sec.~\ref{sec:recursion}.
Secondly, to describe the situation of each cluster, we define the $(d+2)$-tuple local state of cluster $\boldsymbol{c}_{n} \in \mathcal{C}$ at the $i$-th step as:
\begin{equation}\label{eq:local_state}
\boldsymbol{s}_{local, n}^{(e)(i)}= \mathcal{X}_{cent,n}^{(e)(i)} \ {\cup}\ \big\{{D}^{(e)(i)}_{cent,n},\ |\boldsymbol{c}_{n}^{(e)(i)}|\big\}.
\end{equation}
Here, $\mathcal{X}_{cent,n}^{(e)(i)}$ is the feature of the central object of cluster $\boldsymbol{c}_{n}$, and $d$ is its feature dimension.
${D}^{(e)(i)}_{cent,n}$ is the Euclidean distance from the cluster center object to the center object of the entire data block.
$|\boldsymbol{c}_{n}^{(e)(i)}|$ means the number of objects contained in cluster $\boldsymbol{c}_{n}$.
Considering the change of the number of clusters at different steps in the parameter search process, we use the Attention Mechanism \cite{vaswani2017attention} to encode the global state and multiple local states into a fixed-length state representation:
\begin{equation}\label{eq:state}
\boldsymbol{s}^{(e)(i)}=
\sigma \Big(\boldsymbol{F}_{G}(\boldsymbol{s}_{global}^{(e)(i)}) \ {\mathbin\Vert}\
\sum_{\boldsymbol{c}_n \in \mathcal{C}} \boldsymbol{\alpha}_{att,n} \cdot
\boldsymbol{F}_{L}(\boldsymbol{s}_{local, n}^{(e)(i)})\Big),
\end{equation}
where $\boldsymbol{F}_{G}$ and $\boldsymbol{F}_{L}$ are the Fully-Connected Networks (FCNs) for the global state and the local states, respectively.
$\sigma$ represents the \texttt{ReLU} activation function, and $\mathbin{\Vert}$ denotes the concatenation operation.
$\boldsymbol{\alpha}_{att,n}$ is the attention weight of cluster $\boldsymbol{c}_{n}$, which is formalized as follows:
\begin{equation}
\boldsymbol{\alpha}_{att,n} = \frac{ \sigma \Big(
\boldsymbol{F}_{S}\big(\boldsymbol{F}_{G}(\boldsymbol{s}_{global}^{(e)(i)}) \ {\mathbin\Vert}\ \boldsymbol{F}_{L}(\boldsymbol{s}_{local,n}^{(e)(i)})\big)\Big)}
{\sum_{\boldsymbol{c}_n \in \mathcal{C}} \sigma \Big(\boldsymbol{F}_{S} \big(\boldsymbol{F}_{G}(\boldsymbol{s}_{global}^{(e)(i)}) \ {\mathbin\Vert}\ \boldsymbol{F}_{L}(\boldsymbol{s}_{local,n}^{(e)(i)})\big)\Big)}.
\label{equation:att}
\end{equation}
We concatenate the global state with the local state of each cluster separately, feed the result into a fully connected network $\boldsymbol{F}_{S}$ for scoring, and use the normalized score of each cluster as its attention coefficient.
This approach lets the representation of each local cluster attend to the global search situation.
At the same time, it gives different types of cluster information different weights in the final state representation, which increases the influence of important clusters on the state.
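To make the fusion concrete, the following is a minimal PyTorch sketch of Eq.~(\ref{eq:state}) and Eq.~(\ref{equation:att}); the class name, hidden size, and tensor shapes are illustrative assumptions rather than the exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateEncoder(nn.Module):
    def __init__(self, global_dim=7, local_dim=4, hidden=32):
        super().__init__()
        self.f_g = nn.Linear(global_dim, hidden)  # F_G: global state
        self.f_l = nn.Linear(local_dim, hidden)   # F_L: local states
        self.f_s = nn.Linear(2 * hidden, 1)       # F_S: scoring

    def forward(self, s_global, s_locals):
        # s_global: (7,); s_locals: (num_clusters, d + 2)
        g = self.f_g(s_global)
        l = self.f_l(s_locals)
        # score every (global, local) pair, then normalize
        pairs = torch.cat([g.expand_as(l), l], dim=-1)
        scores = F.relu(self.f_s(pairs))
        alpha = scores / scores.sum().clamp_min(1e-8)
        fused = (alpha * l).sum(dim=0)  # weighted sum of locals
        return F.relu(torch.cat([g, fused], dim=-1))

# usage: StateEncoder()(torch.rand(7), torch.rand(3, 4))
\end{verbatim}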
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/task.pdf}
\vspace{-1.2mm}
\caption{{Four working modes. Dark orange square means partially labeled data, light orange square means unlabeled data.}} \label{fig:task}
\vspace{-1.2mm}
\end{figure*}
\emph{\textbf{$\bullet$ Action:}}
The action $\boldsymbol{a}^{(e)(i)}$ for the $i$-th step is the parameter search direction.
We define the action space $\mathcal{A}$ as $\{left,right,$ $down,up,stop\}$, where $left$ and $right$ mean decreasing or increasing the $Eps$ parameter, $down$ and $up$ mean decreasing or increasing the $MinPts$ parameter, and $stop$ means stopping the search.
Specifically, we build an Actor \cite{konda2000actor_critic} as the policy network to decide action $\boldsymbol{a}^{(e)(i)}$ based on the current state $\boldsymbol{s}^{(e)(i)}$:
\begin{equation}\label{eq:action}
\boldsymbol{a}^{(e)(i)}= Actor(\boldsymbol{s}^{(e)(i)}).
\end{equation}
Here, the $Actor$ is a three-layer Multi-Layer Perceptron (MLP).
In addition, the action-parameter conversion process from the $i$-th step to the $(i+1)$-th step is defined as follows:
\begin{equation}\label{eq:action2parameter}
\boldsymbol{P}^{(e)(i)} \xrightarrow[]{\boldsymbol{a}^{(e)(i)},\ \boldsymbol{\theta}} \boldsymbol{P}^{(e)(i+1)}.
\end{equation}
Here, $\boldsymbol{P}^{(e)(i)}$ and $\boldsymbol{P}^{(e)(i+1)}$ are the parameter combinations $\{Eps^{(e)(i)},$ $MinPts^{(e)(i)}\}$ and $\{Eps^{(e)(i+1)}, MinPts^{(e)(i+1)}\}$ of the $i$-th step and the $(i+1)$-th step, respectively.
$\boldsymbol{\theta}$ is the step size by which an action increases or decreases a parameter.
We discuss the step size in detail in Sec.~\ref{sec:recursion}.
Note that when an action causes a parameter to go out of bounds, the parameter is set to the boundary value and the corresponding boundary distance is set to $-1$ in the next step.
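A minimal sketch of this action-to-parameter conversion, including the boundary clipping just described (the function and argument names are our assumptions):
\begin{verbatim}
ACTIONS = ("left", "right", "down", "up", "stop")

def apply_action(eps, minpts, action, theta_eps, theta_minpts,
                 eps_bounds, minpts_bounds):
    """Move (Eps, MinPts) one step in the chosen direction."""
    if action == "left":
        eps -= theta_eps
    elif action == "right":
        eps += theta_eps
    elif action == "down":
        minpts -= theta_minpts
    elif action == "up":
        minpts += theta_minpts
    # an out-of-bounds parameter is clipped to the boundary value
    eps = min(max(eps, eps_bounds[0]), eps_bounds[1])
    minpts = min(max(minpts, minpts_bounds[0]), minpts_bounds[1])
    return eps, minpts
\end{verbatim}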
\emph{\textbf{$\bullet$ Reward:}}
Considering that the exact end point parameters are unknown, but rewards are needed to motivate the agent to learn a better parameter search policy, we use external metrics computed on a small number of labeled samples as the basis for rewards.
We define the immediate reward function of the $i$-th step as:
\begin{equation}\label{eq:reward_function}
\mathcal{R}(\boldsymbol{s}^{(e)(i)},\boldsymbol{a}^{(e)(i)}) = {NMI}\big(DBSCAN(\mathcal{X},\boldsymbol{P}^{(e)(i+1)}),\ \mathcal{Y}'\big).
\end{equation}
Here, ${NMI}(\cdot,\cdot)$ stands for the external metric function, the Normalized Mutual Information (NMI) \cite{estevez2009nmi} of the $DBSCAN$ clustering result.
$\mathcal{X}$ is the feature set.
$\mathcal{Y}'$ is a set of partial labels of the data block.
Note that the labels are only used in the training process, and the testing process performs search on unseen data blocks without labels.
In addition, an excellent parameter search action sequence for one episode adjusts the parameters in the direction of the optimal parameters and stops the search at the optimal parameters.
Therefore, we use both the maximum immediate reward over subsequent steps and the end point immediate reward as the reward for the $i$-th step:
\begin{equation}\label{eq:reward}
\boldsymbol{r}^{(e)(i)}=\beta \cdot \max \big\{\mathcal{R}(\boldsymbol{s}^{(e)(m)},\boldsymbol{a}^{(e)(m)})\big\}|_{m=i}^{I}
+\delta \cdot \mathcal{R}(\boldsymbol{s}^{(e)(I)},\boldsymbol{a}^{(e)(I)}),
\end{equation}
where $\mathcal{R}(\boldsymbol{s}^{(e)(I)},\boldsymbol{a}^{(e)(I)})$ is the immediate reward for the end point parameters at the $I$-th step,
and $\max$ computes the maximum future immediate reward before the search stops in the current episode $e$.
$\beta$ and $\delta$ are the impact factors of reward, where $\beta=1-\delta$.
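A minimal sketch of the reward computation in Eq.~(\ref{eq:reward_function}) and Eq.~(\ref{eq:reward}); it assumes the partial labels cover the objects indexed by \texttt{labeled\_idx}, which is our naming:
\begin{verbatim}
from sklearn.cluster import DBSCAN
from sklearn.metrics import normalized_mutual_info_score

def immediate_reward(X, eps, minpts, labeled_idx, y_partial):
    # cluster with the new parameters, score on the partial labels
    pred = DBSCAN(eps=eps, min_samples=minpts).fit_predict(X)
    return normalized_mutual_info_score(y_partial, pred[labeled_idx])

def final_reward(immediate, i, I, beta, delta):
    # mix the best future immediate reward with the end point one
    return beta * max(immediate[i:I + 1]) + delta * immediate[I]
\end{verbatim}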
\emph{\textbf{$\bullet$ Termination:}}
We determine the termination conditions for a complete episode search process as follows:
\begin{equation}\label{eq:termination}
\left\{
\begin{array}{ll}
\min(\mathcal{D}_{b}^{(e)(i)})<0, & \text{Out of bounds stop,}\\
i \ge I_{max}, & \text{Timeout stop,}\\
\boldsymbol{a}^{(e)(i)}=stop \ \text{and} \ i \ge 2, & \text{Active stop.}\\
\end{array}
\right.
\end{equation}
Here, $I_{max}$ is the maximum number of search steps in an episode.
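The three conditions translate directly into a short check (a sketch with our variable names):
\begin{verbatim}
def should_stop(boundary_dists, action, i, i_max):
    if min(boundary_dists) < 0:       # out of bounds stop
        return True
    if i >= i_max:                    # timeout stop
        return True
    if action == "stop" and i >= 2:   # active stop
        return True
    return False
\end{verbatim}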
\emph{\textbf{$\bullet$ Optimization:}}
The parameter search process in the episode $e$ is expressed as:
$1)$ observe the current state $\boldsymbol{s}^{(e)(i)}$ of DBSCAN clustering;
$2)$ predict the action $\boldsymbol{a}^{(e)(i)}$ of the parameter adjustment direction based on $\boldsymbol{s}^{(e)(i)}$ through the $Actor$;
$3)$ obtain the new state $\boldsymbol{s}^{(e)(i+1)}$;
$4)$ repeat the above process until the end of the episode, and get the reward $\boldsymbol{r}^{(e)(i)}$ for each step.
The core element of the $i$-th step is extracted as:
\begin{equation}\label{eq:core_element}
\mathcal{T}^{(e)(i)}=(\boldsymbol{s}^{(e)(i)},\boldsymbol{a}^{(e)(i)},\boldsymbol{s}^{(e)(i+1)},\boldsymbol{r}^{(e)(i)}).
\end{equation}
We put $\mathcal{T}$ of each step into the memory buffer and sample $M$ core elements to optimize the policy network $Actor$, and define the key loss functions as follows:
\begin{equation}\label{eq:loss_critic}
\begin{split}
\mathcal{L}_{c} = {\sum}_{\mathcal{T} \in buffer}^{M} \big(&\boldsymbol{r}^{(e)(i)}+\gamma \cdot Critic(\boldsymbol{s}^{(e)(i+1)},\boldsymbol{a}^{(e)(i+1)})\\
&- Critic(\boldsymbol{s}^{(e)(i)},\boldsymbol{a}^{(e)(i)})\big)^{2},
\end{split}
\end{equation}
\begin{equation}\label{eq:loss_actor}
\mathcal{L}_{a} = -\frac{{\sum}_{\mathcal{T} \in buffer}^{M} Critic\big(\boldsymbol{s}^{(e)(i)},Actor(\boldsymbol{s}^{(e)(i)})\big)}{M}.
\end{equation}
Here, we define a three-layer MLP as the $Critic$ to learn the action value of a state \cite{konda2000actor_critic}, which is used to optimize the $Actor$, and $\gamma$ is the reward decay factor.
Note that we use the policy optimization algorithm named Twin Delayed Deep Deterministic strategy gradient algorithm (TD3)~\cite{scott2018td3} in our framework, and it can be replaced with other DRL policy optimization algorithms \cite{lillicrap2015ddpg, konda2000actor_critic}.
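For concreteness, the following is a minimal single-critic update sketch for Eq.~(\ref{eq:loss_critic}) and Eq.~(\ref{eq:loss_actor}); the actual framework uses TD3, which additionally maintains twin critics, target networks, and delayed policy updates, so this is a simplified version under our naming assumptions:
\begin{verbatim}
import torch

def update(actor, critic, optim_a, optim_c, batch, gamma=0.1):
    s, a, s_next, r = batch        # M sampled transitions T
    with torch.no_grad():
        target = r + gamma * critic(s_next, actor(s_next))
    # critic loss: squared temporal-difference error
    loss_c = ((target - critic(s, a)) ** 2).sum()
    optim_c.zero_grad(); loss_c.backward(); optim_c.step()
    # actor loss: maximize the critic's value of chosen actions
    loss_a = -critic(s, actor(s)).mean()
    optim_a.zero_grad(); loss_a.backward(); optim_a.step()
\end{verbatim}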
\subsection{Parameter Space and Recursion Mechanism}\label{sec:recursion}
In this section, we define the parameter space of the agent proposed in the previous section and the recursive search mechanism built on parameter spaces of different precisions.
Firstly, to deal with the fluctuation of the parameter range caused by different data distributions, we normalize the data features, thereby restricting the maximum $Eps$ parameter search range to $(0,\sqrt{d}]$.
Unlike $Eps$, the $MinPts$ parameter must be an integer greater than $0$.
Therefore, we propose to delineate a coarse-grained $MinPts$ maximum parameter search range according to the size or dimension of the dataset.
Subsequently, considering that a large parameter space hurts efficiency when performing a high-precision parameter search, we use a recursive mechanism to perform a progressive search.
The recursive process is shown in Fig.~\ref{fig:framework}.
We narrow the search range and increase the search precision layer by layer, and assign a parameter search agent $agent^{(l)}$ defined in Sec.~\ref{sec:parameter_search} to each layer $l$ to search for the optimal parameter combination $\boldsymbol{P}_{o}^{(l)}=\{Eps_{o}^{(l)}, MinPts_{o}^{(l)}\}$ under the requirements of the search precision and range of the corresponding layer.
The minimum search boundary ${B}_{p,1}^{(l)}$ and maximum search boundary ${B}_{p,2}^{(l)}$ of parameter $p \in \{Eps, MinPts\}$ in the $l$-th layer $(l=1,2,...)$ are defined as:
\begin{equation}\label{eq:boundary}
\begin{split}
& {B}_{p,1}^{(l)} : \max \big\{ {B}_{p,1}^{(0)}, \ p_{o}^{(l-1)} - \frac{\pi_{p}}{2} \cdot \theta_{p}^{(l)} \big\},\\
& {B}_{p,2}^{(l)} : \min \big\{ p_{o}^{(l-1)} + \frac{\pi_{p}}{2} \cdot \theta_{p}^{(l)}, {B}_{p,2}^{(0)} \big\}. \\
\end{split}
\end{equation}
Here, $\pi_{p}$ is the number of searchable parameters in the parameter space of parameter $p$ in each layer.
${B}_{p,1}^{(0)}$ and ${B}_{p,2}^{(0)}$ are the space boundaries of parameter $p$ in the $0$-th layer, which define the maximum parameter search range.
$p_{o}^{(l-1)} \in \boldsymbol{P}_{o}^{(l-1)}$ is the optimal parameter searched by the previous layer, and $p_{o}^{(0)} \in \boldsymbol{P}_{o}^{(0)}$ is the midpoint of ${B}_{p,1}^{(0)}$ and ${B}_{p,2}^{(0)}$.
In addition, $\theta_{p}^{(l)}$ is the search step size, that is, the search precision of the parameter $p$ in the $l$-th layer, which is defined as follows:
\begin{equation}\label{eq:theta}
\theta_{p}^{(l)} =
\left\{
\begin{array}{ll}
\frac{\theta_{p}^{(l-1)}}{\pi_{p}}, & {if \ p \ = \ Eps;}\\
\max \big\{\lfloor \frac{\theta_{p}^{(l-1)}}{\pi_{p}} + \frac{1}{2} \rfloor, \ 1 \big\}, & {otherwise.}\\
\end{array}
\right.
\end{equation}
Here, $\theta_{p}^{(l-1)}$ is the step size of the previous layer and $\theta_{p}^{(0)}$ is the size of the parameter maximum search range.
$\lfloor \cdot \rfloor$ denotes rounding down.
\vspace{0.5mm}
\noindent \emph{\textcolor{black}{\textbf{Complexity Discussion:}}}
The minimum search step size of the recursive structure with $L$ layers is $\theta_{p}^{(L)}$.
The computational complexity without the recursive structure is then $O(N)$, where $N=\theta_{p}^{(0)} / \theta_{p}^{(L)}=(\pi_{p})^{L}$ is the size of the parameter space.
In contrast, {DRL-DBSCAN} with an $L$-layer recursive structure only takes $L \cdot \pi_{p}$ rounds, reducing the complexity from $O(N)$ to $O(\log N)$.
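A minimal sketch of the layerwise space update in Eq.~(\ref{eq:boundary}) and Eq.~(\ref{eq:theta}); the function name is ours, and the integer branch corresponds to $MinPts$:
\begin{verbatim}
def layer_space(p_prev, theta_prev, pi, b_min0, b_max0,
                integer=False):
    theta = theta_prev / pi             # refine the step size
    if integer:                         # MinPts: round half up, >= 1
        theta = max(int(theta + 0.5), 1)
    half = (pi / 2) * theta
    b_min = max(b_min0, p_prev - half)  # shrink the space around the
    b_max = min(p_prev + half, b_max0)  # previous optimal parameter
    return b_min, b_max, theta
\end{verbatim}
Iterating this update for $L$ layers shrinks the step size geometrically, which is where the reduction to $L \cdot \pi_{p}$ rounds comes from.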
\subsection{Proposed {DRL-DBSCAN}}\label{sec:mode}
Algorithm~\ref{algorithm:train} shows the process of the proposed {DRL-DBSCAN} core model.
Given a data block $\mathcal{V}$ with partial label $\mathcal{Y}'$, the training process repeats the parameter search process (Lines~\ref{traincode:observe_state}-\ref{traincode:termination}) for multiple episodes at each layer to optimize the agent (Line~\ref{traincode:loss}).
In this process, we update the optimal parameter combination (Line~\ref{traincode:optimal1} and Line~\ref{traincode:optimal2}) based on the immediate reward (Eq. (\ref{eq:reward_function})).
In order to improve efficiency, we build a hash table to record DBSCAN clustering results of searched parameter combinations.
In addition, we establish early stopping mechanisms to speed up the training process when the optimal parameter combination no longer changes (Line~\ref{traincode:stop1} and Line~\ref{traincode:stop2}).
It is worth noting that the testing process uses the trained agents to search directly within one episode and does not use early stopping.
Furthermore, the testing process does not need labels, and the end point parameters of the single episode of the last layer are used as the final optimal parameter combination.
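A minimal sketch of the hash table mentioned above, which avoids re-clustering for repeated parameter combinations (our naming):
\begin{verbatim}
from sklearn.cluster import DBSCAN

cache = {}

def cached_cluster(X, eps, minpts):
    key = (round(eps, 6), minpts)   # hashable parameter combination
    if key not in cache:
        cache[key] = DBSCAN(eps=eps,
                            min_samples=minpts).fit_predict(X)
    return cache[key]
\end{verbatim}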
In order to better adapt to various task scenarios, we define four working modes of {DRL-DBSCAN} as shown in Fig.~\ref{fig:task}. Their corresponding definitions are as follows:
\textbf{(1) Retraining Mode ($DRL_{re}$).}
The optimal parameters are searched based on the training process. When the dataset changes, the agent at each layer is reinitialized.
\textbf{(2) Continuous Training Mode ($DRL_{con}$).}
The agents are pre-trained in advance. When the dataset changes, the search continues based on the training process, using the already trained agents.
\textbf{(3) Pretraining Testing Mode ($DRL_{all}$).}
The agents are pre-trained in advance. When the dataset changes, the search is performed directly based on the testing process, without labels.
\textbf{(4) Maintenance Testing Mode ($DRL_{one}$).}
The agents are pre-trained in advance. When the dataset changes, the search is performed directly based on the testing process, without labels. After pre-training, regular maintenance training is performed with labeled historical data.
\begin{algorithm}[t]
\SetAlgoRefName{1}
\SetAlgoVlined
\KwIn{The features $\mathcal{X}$ and partial labels $\mathcal{Y}'$ of block $\mathcal{V}$;
Agents for each layer: $\{agent^{(l)}\}|_{l=1}^{L_{max}}$;
}
\KwOut{Optimal parameter combination: $\boldsymbol{P}_{o}$;}
\For{$l = 1, ... , L_{max}$} {
Initialize parameter space via Eq.~(\ref{eq:boundary}) and Eq.~(\ref{eq:theta}); \label{traincode:boundary}\\
\For{$e = 1, ... , E_{max}$} {
Initialize $\boldsymbol{P}^{(e)(0)}$ by $\boldsymbol{P}_{o}^{(l-1)}$;\\
\For{$i = 1, ... , I_{max}$} {
Obtain the current state $\boldsymbol{s}^{(e)(i)}$ via Eq. (\ref{eq:state}); \label{traincode:observe_state}\\
Choose the action $\boldsymbol{a}^{(e)(i)}$ via Eq. (\ref{eq:action}); \label{traincode:choose_action}\\
Get new parameters $\boldsymbol{P}^{(e)(i)}$ via Eq. (\ref{eq:action2parameter}); \label{traincode:get_parameter}\\
Clustering using the current parameters; \label{traincode:dbsacn_cluster}\\
Termination judgment via Eq. (\ref{eq:termination}); \label{traincode:termination}\\
}
\If{is TRAIN}{
Get rewards $\boldsymbol{r}^{(e)(i)}$ via Eq. (\ref{eq:reward}), $\forall i \in \{1, ..., I\}$; \label{traincode:reward}\\
Store $\mathcal{T}^{(e)(i)}$ in buffer \label{traincode:store_t} via Eq. (\ref{eq:core_element}), $\forall i \in \{1, ..., I\}$;\\
Sampling and learning via Eq. (\ref{eq:loss_actor}) and Eq. (\ref{eq:loss_critic}); \label{traincode:loss}\\
}
Update optimal parameter combination $\boldsymbol{P}_{o}^{(l)}$; \label{traincode:optimal1}\\
Early stop judgment; \label{traincode:stop1}\\
}
Update optimal parameter combination $\boldsymbol{P}_{o}$; \label{traincode:optimal2}\\
Early stop judgment; \label{traincode:stop2}\\
}
\caption{The core model of {DRL-DBSCAN}}
\label{algorithm:train}
\end{algorithm}
\section{EXPERIMENTS}
In this section, we conduct experiments mainly including the following: \textbf{(1)} performance comparison between {DRL-DBSCAN} and the baselines in offline tasks, and an explanation of the Reinforcement Learning (RL) search process (Sec. \ref{sec:offline});
\textbf{(2)} performance analysis of {DRL-DBSCAN} and its variants, and advantage comparison between four working modes in online tasks (Sec. \ref{sec:online});
\textbf{(3)} sensitivity analysis of hyperparameters and their impact on the model (Sec. \ref{sec:hyperparameter}).
\subsection{Experiment Setup}\label{sec:setup}
\vspace{0.5mm}
\noindent \emph{\textbf{Datasets}}.\label{sec:dataset}
To fully analyze our framework, the experimental dataset consists of $4$ artificial clustering benchmark datasets and $1$ public real-world streaming dataset (Table \ref{tab:dataset}).
The benchmark datasets \cite{Pasi2018benchmark} are the $2D$ shape sets, including: \textbf{Aggregation} \cite{gionis2007aggregation}, \textbf{Compound} \cite{zahn1971compound}, \textbf{Pathbased} \cite{chang2008pathbased}, and \textbf{D31} \cite{veenman2002d31}.
They involve multiple density types such as clusters within clusters, multi-density, multi-shape, closed loops, etc., and have various data scales.
Furthermore, the real-world streaming dataset \textbf{Sensor} \cite{zhu2010stream_dataset} comes from consecutive information (temperature, humidity, light, and sensor voltage) collected from $54$ sensors deployed by the Intel Berkeley Research Lab.
We use a subset of $80,864$ objects for the experiments and divide them into $16$ data blocks ($\mathcal{V}_1, ..., \mathcal{V}_{16}$) as an online dataset.
\vspace{0.5mm}
\noindent \emph{\textbf{Baseline and Variants}}.\label{sec:baseline_variants}
We compare proposed {DRL-DBSCAN} with three types of baselines: (1) traditional hyperparameter search schemes: random search algorithm \textbf{Rand} \cite{bergstra2012rand}, Bayesian optimization based on Tree-structured Parzen estimator algorithm \textbf{BO-TPE} \cite{bergstra2011tpe}; (2) meta-heuristic optimization algorithms: the simulated annealing optimization \textbf{Anneal} \cite{kirkpatrick1983anneal}, particle swarm optimization \textbf{PSO} \cite{shi1998pso}, genetic algorithm \textbf{GA} \cite{lessmann2005ga}, and differential evolution algorithm \textbf{DE} \cite{qin2008de}; (3) existing DBSCAN parameter search methods: \textbf{KDist} (V-DBSCAN) \cite{lu2007vdbscan} and \textbf{BDE}-DBSCAN \cite{karami2014bdedbscan}.
The detailed introduction to the above methods are given in Sec. \ref{sec:related_work}.
We also implement four variants of {DRL-DBSCAN} to analyze the state, reward, and recursive mechanism settings of Sec. \ref{sec:parameter_search}.
Compared with {DRL-DBSCAN}, $DRL_{no-att}$ does not incorporate the local states based on the attention mechanism, $DRL_{only-max}$ only uses the maximum future immediate reward as the final reward, $DRL_{recu}$ has no early stop mechanism, and $DRL_{no-recu}$ has no recursion mechanism.
\vspace{0.5mm}
\noindent \emph{\textbf{Implementation Details}}.
For all baselines, we use open-source implementations from the benchmark library Hyperopt \cite{bergstra2013hyperopt} and Scikit-opt \cite{_2022scikitopt}, or provided by the author.
All experiments are conducted on Python 3.7, $36$ core $3.00$GHz Intel Core $i9$ CPU, and NVIDIA RTX $A6000$ GPUs.
\vspace{0.5mm}
\noindent \emph{\textbf{Experimental Setting}}.
The evaluation of {DRL-DBSCAN} is based on the four working modes proposed in Sec.~\ref{sec:mode}.
Considering the randomness of most algorithms, all reported experimental results are the means or variances of $10$ runs with different seeds (except for KDist, which is heuristic and involves no randomness).
Specifically, for the pre-training and maintenance training processes of {DRL-DBSCAN}, we set the maximum number of episodes $E_{max}$ to $50$ and do not use the early stop mechanism.
For the training process for searching, we set the maximum number of episodes $E_{max}$ to $15$.
In offline tasks and online tasks, the maximum number of recursive layers $L_{max}$ is $3$ and $6$, respectively, and the maximum search boundary in the $0$-th layer of $MinPts$ ${B}_{MinPts,2}^{(0)}$ is $0.25$ and $0.0025$ times the size of block, respectively.
In addition, we use a unified label proportion of $0.2$ for training, an $Eps$ parameter space size $\pi_{Eps}$ of $5$, a $MinPts$ parameter space size $\pi_{MinPts}$ of $4$, a maximum number of search steps $I_{max}$ of $30$, and a reward factor $\delta$ of $0.2$.
\textcolor{black}{The FCN and MLP dimensions are uniformly set to $32$ and $256$, the reward decay factor $\gamma$ of Critic is $0.1$, and the number of samples $M$ from the buffer is $16$.}
Furthermore, all baselines use the same objective function (Eq. (\ref{eq:reward_function})), parameter search space, and parameter minimum step size as {DRL-DBSCAN} if they support the settings.
\vspace{0.5mm}
\noindent \emph{\textbf{Evaluation Metrics}}.
We evaluate the experiments in terms of accuracy and efficiency.
Specifically, we measure the clustering accuracy based on normalized mutual information (NMI) \cite{estevez2009nmi} and adjusted rand index (ARI) \cite{vinh2010ari}.
For the efficiency, we use the consumed DBSCAN clustering rounds as the measurement.
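Both accuracy metrics are available in scikit-learn under these names; a toy sketch:
\begin{verbatim}
from sklearn.metrics import (normalized_mutual_info_score,
                             adjusted_rand_score)

y_true = [0, 0, 1, 1, 2, 2]   # toy ground-truth labels
y_pred = [0, 0, 1, 2, 2, 2]   # toy clustering result
print(normalized_mutual_info_score(y_true, y_pred))
print(adjusted_rand_score(y_true, y_pred))
\end{verbatim}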
\subsection{Offline Evaluation}\label{sec:offline}
Offline evaluation is based on four artificial benchmark datasets.
Since there is no data for pre-training in offline scenarios, we only compare the parameter search performance of {DRL-DBSCAN} using the retraining mode $DRL_{re}$ with baselines.
\vspace{0.5mm}
\noindent \emph{\textbf{Accuracy and Stability Analysis. }}
In Table \ref{tab:offline_all}, we summarize the means and variances of the NMI and ARI corresponding to the optimal DBSCAN parameter combinations that can be searched by $DRL_{re}$ and baselines within $30$ clustering rounds.
It can be seen from the mean accuracy over the ten runs that, on the Pathbased, Compound, Aggregation, and D31 datasets, $DRL_{re}$ improves performance by $4\%$ \& $6\%$, $3\%$ \& $3\%$, $20\%$ \& $26\%$, and $5\%$ \& $26\%$ on NMI and ARI, respectively, relative to the best-performing baselines.
At the same time, as the dataset size increases, the advantage of $DRL_{re}$ compared to other baselines in accuracy gradually increases.
Furthermore, the experimental variances show that $DRL_{re}$ improves stability by $4\%$ \& $6\%$, $1\%$ \& $1\%$, $9\%$ \& $13\%$, and $2\%$ \& $17\%$ on NMI and ARI, relative to the best-performing baselines.
The obvious advantages in terms of accuracy and stability indicate that $DRL_{re}$ can stably find excellent parameter combinations in multiple rounds of parameter search, compared with other hyperparameter optimization baselines under the same objective function.
Besides, $DRL_{re}$ is not affected by the size of the dataset.
Among all the baselines, PSO and DE are relatively worse in terms of accuracy, because their search in the parameter space is biased towards continuous parameters, requiring more rounds to achieve optimal results.
BO-TPE learns previously searched parameter combinations through a probabilistic surrogate model and strikes a balance between exploration and exploitation, with significant advantages over other baselines.
The proposed {DRL-DBSCAN} not only narrows the search space of parameters of each layer progressively through a recursive structure, but also learns historical experience, which is more suitable for searching DBSCAN clustering parameter combinations.
\begin{table}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Characteristics of Datasets.}\label{tab:dataset}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|ccc|c}
\hline
Type & Dataset & Classes & Size & Dim. & Time\\
\hline
\multirow{4}*{Offline} & Pathbased & $3$ & $300$ & $2$ & $\times$\\
& Compound & $6$ & $399$ & $2$ & $\times$\\
& Aggregation & $7$ & $788$ & $2$ & $\times$\\
& D31 & $31$ & $3100$ & $2$ & $\times$\\
\hline
Online & Sensor & $54$ & $80640$ & $5$ & $\checkmark$\\
\hline
\end{tabular}
}
\vspace{-4.5mm}
\end{table}
\vspace{0.5mm}
\noindent \emph{\textbf{Efficiency Analysis. }}
We present the average historical maximum NMI results for $DRL_{re}$ and all baselines when consuming a different number of clustering rounds in Fig. \ref{fig:offline_efficiency}.
The shading in the figure represents the fluctuation range (variance) of NMI over multiple runs (among the baselines, only BO-TPE is shown with shading, as a representative).
The results suggest that, on the four datasets, $DRL_{re}$ finds better parameters faster than the baselines and fully surpasses all of them after the $5$-th, $12$-th, $17$-th, and $16$-th rounds, respectively.
In the Pathbased dataset, the clustering rounds of $DRL_{re}$ is $2.49$ times faster than that of BO-TPE when the NMI of the parameter combination searched by $DRL_{re}$ reaches $0.72$.
Besides, the results show that, as the number of clustering rounds increases, the shaded area of the $DRL_{re}$ curve gradually decreases, while the shaded range of BO-TPE stays roughly constant.
This observation also demonstrates the good stability of the search once {DRL-DBSCAN} has run for a certain number of rounds.
\vspace{0.5mm}
\noindent \emph{\textbf{DRL-DBSCAN Variants.}}
We compare $DRL_{recu}$ with $DRL_{no-recu}$, which has no recursion mechanism, in Fig. \ref{fig:recur_eff}.
Note that, for $DRL_{recu}$, we turn off the early stop mechanism so that it can search longer, for a fairer comparison with $DRL_{no-recu}$.
The results show that the first $100$ episodes of the recursive mechanism bring the maximum search speedup ratio of $6.5$, which effectively proves the contribution of the recursive mechanism in terms of efficiency.
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Offline evaluation performance. \textmd{The best results are bolded and second-best are underlined.}}\label{tab:offline_all}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|cc|cccc|cc|ccc}
\hline
\multirow{2}*{\textbf{Dataset}} & \multirow{2}*{\textbf{Metrics}} & \multicolumn{2}{c|}{\textbf{Traditional}} & \multicolumn{4}{c|}{\textbf{Meta-heuristic}} & \multicolumn{5}{c}{\textbf{Dedicated}} \\
\cline{3-13}
& & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multicolumn{1}{c}{\textbf{DRL$_{re}$}} & (Mean) & (Var.)\\
\hline
\hline
\rowcolor{gray!15} & NMI & .66$\pm$.23 & .\underline{78$\pm$.07} & .65$\pm$.24 & .60$\pm$.28 & .68$\pm$.19 & .22$\pm$.28 & .40$\pm$.- - & .51$\pm$.33 & \textbf{.82$\pm$.03} & $\uparrow$ .04 & $\downarrow$ .04\\
\rowcolor{gray!15} \multirow{-2}*{Pathbased} & ARI & .63$\pm$.21 & \underline{.79$\pm$.10} & .66$\pm$.25 & .55$\pm$.38 & .67$\pm$.26 & .18$\pm$.28 & .38$\pm$.- - & .48$\pm$.40 & \textbf{.85$\pm$.04} & $\uparrow$ .06 & $\downarrow$ .06\\
\rowcolor{white!15} & NMI & \underline{.75$\pm$.05} & .70$\pm$.24 & .52$\pm$.36 & .46$\pm$.34 & .70$\pm$.25 & .33$\pm$.35 & .39$\pm$.- - & .72$\pm$.25 & \textbf{.78$\pm$.04} & $\uparrow$ .03 & $\downarrow$ .01\\
\rowcolor{white!15} \multirow{-2}*{Compound} & ARI & \underline{.73$\pm$.04} & .68$\pm$.24 & .51$\pm$.35 & .42$\pm$.36 & .68$\pm$.24 & .31$\pm$.34 & .39$\pm$.- - & .70$\pm$.25 & \textbf{.76$\pm$.03} & $\uparrow$ .03 & $\downarrow$ .01\\
\rowcolor{gray!15} & NMI & \underline{.76$\pm$.11} & .72$\pm$.14 & .75$\pm$.27 & .59$\pm$.35 & .75$\pm$.15 & .28$\pm$.37 & .60$\pm$.- - & .63$\pm$.28 & \textbf{.96$\pm$.02} & $\uparrow$ .20 & $\downarrow$ .09\\
\rowcolor{gray!15} \multirow{-2}*{Aggregation} & ARI & .68$\pm$.16 & .63$\pm$.19 & \underline{.70$\pm$.27} & .51$\pm$.37 & .68$\pm$.19 & .25$\pm$.35 & .52$\pm$.- - & .54$\pm$.28 & \textbf{.96$\pm$.03} & $\uparrow$ .26 & $\downarrow$ .13\\
\rowcolor{white!15} & NMI & .31$\pm$.33 & .23$\pm$.24 & .17$\pm$.19 & .36$\pm$.33 & .23$\pm$.20 & .24$\pm$.26 & .07$\pm$.- - & \underline{.41$\pm$.36} & \textbf{.67$\pm$.02} & $\uparrow$ .26 & $\downarrow$ .17\\
\rowcolor{white!15} \multirow{-2}*{D31} & ARI & .14$\pm$.26 & .04$\pm$.05 & .03$\pm$.04 & .09$\pm$.22 & .04$\pm$.04 & .06$\pm$.09 & .00$\pm$.- - & \underline{.21$\pm$.28} & \textbf{.26$\pm$.02} & $\uparrow$ .05 & $\downarrow$ .02\\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/Epoch-NMI-offline.jpg}\vspace{-1em}
\centering
\caption{Offline clustering efficiency comparison.}\label{fig:offline_efficiency}
\vspace{-1.2mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/Epoch-NMI-layer.jpg}\vspace{-1em}
\centering
\caption{Efficiency comparison of recursive mechanism.}\label{fig:recur_eff}
\vspace{-3.2mm}
\end{figure}
\vspace{0.5mm}
\noindent \emph{\textbf{Label Proportion Comparison. }}
Considering the influence of the proportion of labels participating in training on the {DRL-DBSCAN} training process's reward and the objective function of other baselines, we conduct experiments with different label proportions in Pathbased, and the results are shown in Fig. \ref{fig:mix}(b) (the vertical lines above the histograms are the result variances).
It can be seen that under different label proportions, the average NMI scores of $DRL_{re}$ are better than baselines, and the variance is also smaller.
Additionally, as the proportion of labels decreases, the NMI scores of most baselines drop sharply, while that of $DRL_{re}$ remains roughly constant.
These stable performance results demonstrate the adaptability of {DRL-DBSCAN} to label proportion changes.
\vspace{0.5mm}
\noindent \emph{\textbf{Case Study. }}\label{sec:case}
To better illustrate the parameter search process based on RL, we take $3$ episodes in the $3$-rd recursive layer of the Pathbased dataset as an example for case study (Table \ref{tab:case}).
The columns in Table \ref{tab:case} are the action sequences made by the agent $agent^{(l)}$ in different episodes, the termination types of the episodes, the parameter combinations and NMI scores of the end point.
We can observe that {DRL-DBSCAN} aims to obtain the optimal path from the initial parameter combination to the optimal parameter combination.
The path-based form of search can use the optimal path learned in the past while retaining the ability to explore the unknown search direction for each parameter combination along the path.
Since we add the $stop$ action to the action space, {DRL-DBSCAN} can also learn how to stop at the optimal position, which helps extend {DRL-DBSCAN} to online clustering situations where there are no labels at all.
Note that we save the clustering information of previously visited parameter combinations in a hash table, so that no additional operations are required for repeated paths.
\begin{table}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Case study.}\label{tab:case}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|c|c}
\hline
Action Sequence & Stop Type & $Eps$ / $MinPts$ & NMI\\
\hline
$stop \rightarrow left \rightarrow up \rightarrow$ & \multirow{3}*{Out} & \multirow{3}*{$0.155$ / $41$} & \multirow{3}*{.64}\\
$down \rightarrow down \rightarrow right \rightarrow $ & & & \\
$right \rightarrow right$ & & & \\
\hline
$down \rightarrow right \rightarrow up \rightarrow$ & \multirow{2}*{Active} & \multirow{2}*{$0.139$ / $41$} & \multirow{2}*{.74}\\
$left \rightarrow down \rightarrow stop$ & & & \\
\hline
$left \rightarrow left$ & Out & $0.123$ / $43$ & .81\\
\hline
\end{tabular}
}
\end{table}
\subsection{Online Evaluation}\label{sec:online}
The learnability of RL enables {DRL-DBSCAN} to better utilize past experience in online tasks.
To this end, we comprehensively evaluate four working modes of {DRL-DBSCAN} on a streaming dataset, Sensor.
Specifically, the first eight blocks of the Sensor are used for the pre-training of $DRL_{con}$, $DRL_{all}$ and $DRL_{one}$, and the last eight blocks are used to compare the results with baselines.
Since the baselines cannot perform incremental learning for online tasks, we initialize them before each block starts, as in $DRL_{re}$.
In addition, both experiments of $DRL_{all}$ and $DRL_{one}$ use unseen data for unlabeled testing, whereas $DRL_{one}$ uses the labeled historical data for model maintenance training after each block ends testing.
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Online evaluation NMI for training-based modes. \textmd{The best results are bolded and second-best are underlined.}}\label{tab:online_train}
\centering
\scalebox{1.0}{
\begin{tabular}{c|cc|cccc|cc|cccc}
\hline
Blocks & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL$_{re}$}} & \multirow{1}*{\textbf{DRL$_{con}$}} & (Mean) & (Var.)\\
\hline
\hline
\rowcolor{white!15} $\mathcal{V}_{9}$ & .67$\pm$.24 & .83$\pm$.03 & .53$\pm$.37 & .74$\pm$.10 & .65$\pm$.29 & .19$\pm$.31 & .30$\pm$.- - & .70$\pm$.21 & \underline{.86$\pm$.01} & \textbf{.87$\pm$.00} & $\uparrow$ .04 & $\downarrow$ .03\\
\rowcolor{white!15} $\mathcal{V}_{10}$ & .36$\pm$.15 & \underline{.50$\pm$.07} & .45$\pm$.17 & \underline{.50$\pm$.20} & .43$\pm$.15 & .15$\pm$.17 & .20$\pm$.- - & .37$\pm$.20 & \underline{.50$\pm$.27} & \textbf{.64$\pm$.06} & $\uparrow$ .14 & $\downarrow$ .01\\
\rowcolor{white!15} $\mathcal{V}_{11}$ & .40$\pm$.06 & .43$\pm$.10 & .32$\pm$.26 & .55$\pm$.16 & .43$\pm$.08 & .09$\pm$.12 & .12$\pm$.- - & .47$\pm$.16 & \underline{.60$\pm$.16} & \textbf{.68$\pm$.02} & $\uparrow$ .13 & $\downarrow$ .04\\
\rowcolor{white!15} $\mathcal{V}_{12}$ & .44$\pm$.23 & .62$\pm$.16 & .27$\pm$.35 & .66$\pm$.07 & .50$\pm$.24 & .19$\pm$.28 & .11$\pm$.- - & .41$\pm$.31 & \textbf{.75$\pm$.01} & \underline{.72$\pm$.10} & $\uparrow$ .09 & $\downarrow$ .06\\
\rowcolor{white!15} $\mathcal{V}_{13}$ & .84$\pm$.06 & .87$\pm$.04 & .72$\pm$.38 & .68$\pm$.26 & .76$\pm$.17 & .38$\pm$.38 & .62$\pm$.- - & .68$\pm$.23 & \textbf{.92$\pm$.02} & \textbf{.92$\pm$.02} & $\uparrow$ .08 & $\downarrow$ .02\\
\rowcolor{white!15} $\mathcal{V}_{14}$ & .74$\pm$.12 & \underline{.82$\pm$.04} & .54$\pm$.37 & .63$\pm$.24 & .54$\pm$.24 & .25$\pm$.25 & .55$\pm$.- - & .56$\pm$.25 & .76$\pm$.25 & \textbf{.85$\pm$.00} & $\uparrow$ .03 & $\downarrow$ .04\\
\rowcolor{white!15} $\mathcal{V}_{15}$ & .68$\pm$.24 & .76$\pm$.04 & .66$\pm$.34 & .55$\pm$.25 & .62$\pm$.27 & .28$\pm$.32 & .36$\pm$.- - & .72$\pm$.14 & \textbf{.85$\pm$.07} & \underline{.83$\pm$.13} & $\uparrow$ .17 & -\\
\rowcolor{white!15} $\mathcal{V}_{16}$ & .73$\pm$.13 & .77$\pm$.09 & .77$\pm$.10 & .40$\pm$.35 & .67$\pm$.22 & .49$\pm$.31 & .11$\pm$.- - & .67$\pm$.19 & \textbf{.86$\pm$.01} & \textbf{.86$\pm$.00} & $\uparrow$ .09 & $\downarrow$ .09\\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Online evaluation NMI for testing-based modes. \textmd{The best results are bolded and second-best are underlined.}}\label{tab:online_test}
\centering
\scalebox{1.0}{
\begin{tabular}{c|cc|cccc|cc|cccc}
\hline
Blocks & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL$_{all}$}} & \multirow{1}*{\textbf{DRL$_{one}$}} & (Mean) & (Var.) \\
\hline
\hline
\rowcolor{white!15} $\mathcal{V}_{9}$ & .34$\pm$.31 & .49$\pm$.33 & .22$\pm$.34 & .14$\pm$.29 & .27$\pm$.37 & .10$\pm$.26 & .30$\pm$.- - & .54$\pm$.36 & \textbf{.68$\pm$.30} & \textbf{.68$\pm$.30} & $\uparrow$ .19 & -\\
\rowcolor{white!15} $\mathcal{V}_{10}$ & .11$\pm$.14 & .28$\pm$.17 & .17$\pm$.21 & .24$\pm$.01 & .20$\pm$.21 & .12$\pm$.18 & .20$\pm$.- - & .28$\pm$.24 & \textbf{.33$\pm$.16} & \textbf{.33$\pm$.15} & $\uparrow$ .05 & -\\
\rowcolor{white!15} $\mathcal{V}_{11}$ & .16$\pm$.15 & .29$\pm$.24 & .23$\pm$.18 & \textbf{.33$\pm$.29} & .23$\pm$.23 & .02$\pm$.05 & .12$\pm$.- - & .21$\pm$.22 & .30$\pm$.13 & \underline{.32$\pm$.08} & - & -\\
\rowcolor{white!15} $\mathcal{V}_{12}$ & .23$\pm$.25 & .19$\pm$.24 & .10$\pm$.22 & \underline{.38$\pm$.26} & .34$\pm$.27 & .03$\pm$.06 & .11$\pm$.- - & .29$\pm$.27 & \underline{.38$\pm$.17} & \textbf{.46$\pm$.09} & $\uparrow$ .08 & -\\
\rowcolor{white!15} $\mathcal{V}_{13}$ & .58$\pm$.35 & \underline{.70$\pm$.24} & .47$\pm$.40 & .44$\pm$.31 & .36$\pm$.28 & .08$\pm$.14 & .62$\pm$.- - & .32$\pm$.26 & .68$\pm$.34 & \textbf{.70$\pm$.27} & - & -\\
\rowcolor{white!15} $\mathcal{V}_{14}$ & .36$\pm$.19 & .34$\pm$.28 & .47$\pm$.35 & .37$\pm$.33 & .27$\pm$.25 & .11$\pm$.24 & .55$\pm$.- - & .43$\pm$.28 & \underline{.60$\pm$.27} & \textbf{.62$\pm$.16} & $\uparrow$ .15 & $\downarrow$ .03\\
\rowcolor{white!15} $\mathcal{V}_{15}$ & .45$\pm$.35 & .38$\pm$.36 & .37$\pm$.33 & .30$\pm$.34 & .36$\pm$.32 & .09$\pm$.18 & .36$\pm$.- - & .42$\pm$.31 & \underline{.64$\pm$.28} & \textbf{.70$\pm$.03} & $\uparrow$ .25 & $\downarrow$ .15\\
\rowcolor{white!15} $\mathcal{V}_{16}$ & .22$\pm$.32 & .45$\pm$.24 & .32$\pm$.29 & .19$\pm$.27 & .36$\pm$.27 & .12$\pm$.20 & .11$\pm$.- - & \underline{.59$\pm$.23} & \textbf{.60$\pm$.27} & .53$\pm$.20 & $\uparrow$ .01 & -\\
\hline
\hline
\end{tabular}
}
\end{table*}
\vspace{0.5mm}
\noindent \emph{\textbf{Accuracy and Stability Analysis.}}
We give the performance comparison of training-based modes ($DRL_{re}$ and $DRL_{con}$) and testing-based modes ($DRL_{all}$ and $DRL_{one}$) of {DRL-DBSCAN} with the baselines in Table \ref{tab:online_train} and Table \ref{tab:online_test}, respectively.
Since the action space of {DRL-DBSCAN} includes the $stop$ action, it can terminate the search automatically.
To keep the experimental conditions of the baselines consistent, we use the average number of clustering rounds consumed when {DRL-DBSCAN} terminates automatically as the maximum round budget of the baselines for the corresponding task ($30$ for Table \ref{tab:online_train} and $16$ for Table \ref{tab:online_test}).
The results show that the means of NMI scores of the training-based and testing-based search modes are improved by about $9\%$ and $9\%$ on average over multiple blocks, respectively, and the variances of performance are reduced by about $4\%$ and $2\%$, respectively.
Specifically, Table \ref{tab:online_train} first shows that, similar to the offline tasks, $DRL_{re}$, which is based on re-training, still retains a significant advantage in online tasks.
Secondly, comparing $DRL_{con}$, which is capable of continual incremental learning, with $DRL_{re}$ shows a performance improvement of up to $14\%$ with a significant decrease in variance.
Third, Table \ref{tab:online_test} shows that, in the testing-based modes, $DRL_{all}$ and $DRL_{one}$, which work without labels (and hence without the reward function), can significantly exceed the baselines that require labels to establish the objective function.
Fourth, $DRL_{one}$, which is regularly maintained, has higher performance and smaller variance than $DRL_{all}$.
These results demonstrate the capability of {DRL-DBSCAN} to retain historical experience and the advantages of learnable DBSCAN parameter search for accuracy and stability.
In addition, although KDist can determine parameters without labels and iterations, its accuracy is relatively low.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/mix.jpg}\vspace{-1em}
\centering
\caption{Comparison in online and offline tasks.}\label{fig:mix}
\vspace{-1.2mm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{figures/results/Epoch-NMI-HP.jpg}\vspace{-1em}
\centering
\caption{Parameter sensitivity.}\label{fig:hyperparameter}
\vspace{-3.2mm}
\end{figure}
\vspace{0.5mm}
\noindent \emph{\textbf{Efficiency Analysis.}}
$DRL_{all}$ and $DRL_{one}$ automatically search for the optimal DBSCAN parameters (end point parameters) without labels.
To better analyze these two testing-based parameter search modes, we compare the number of clustering rounds required by the other methods to reach the NMI scores of the $DRL_{all}$ end point parameters in the online tasks (Fig. \ref{fig:mix}(a)).
In the figure, the short vertical lines are the result variances.
We can see that $DRL_{all}$ reaches its optimal results within $11$-$14$ rounds, while the other baselines require more rounds to reach the corresponding NMI.
Moreover, the round consumption of many baselines fluctuates significantly across blocks, and the variance within the same block is also large.
The above observation suggests that the parameter search efficiency of {DRL-DBSCAN}'s testing-based modes without labels exceeds that of the baselines which require labels.
Additionally, $DRL_{con}$ consumes fewer rounds than $DRL_{re}$ when reaching the same NMI, which also proves the advantage of {DRL-DBSCAN}'s learning ability in terms of efficiency.
\vspace{0.5mm}
\noindent \emph{\textbf{{DRL-DBSCAN} Variants.}}
To better evaluate the design of states and rewards in Sec. \ref{sec:parameter_search}, we compare two variants with $DRL_{all}$ in the online tasks, namely $DRL_{no-att}$ (state has no attention mechanism) and $DRL_{only-max}$ (reward only based on future maximum immediate reward).
The results in Fig. \ref{fig:mix}(c) show that the full structure of $DRL_{all}$ achieves better NMI scores than the variants, with a maximum performance gain of $0.16$, which demonstrates the necessity of the local state and the end point immediate reward.
\subsection{Hyperparameter Sensitivity}\label{sec:hyperparameter}
Fig. \ref{fig:hyperparameter} shows the results of the offline evaluation of $DRL_{re}$ on the Pathbased for four hyperparameters.
Fig. \ref{fig:hyperparameter}(a) and Fig. \ref{fig:hyperparameter}(b) compare a set of parameter space sizes of $Eps$ and $MinPts$ involved in the Eq. (\ref{eq:boundary}), respectively.
It can be found that a parameter space that is too large or too small for $Eps$ causes performance loss and a decrease in search efficiency, while $MinPts$ is less sensitive to changes in the size of the parameter space.
Fig. \ref{fig:hyperparameter}(c) analyzes the effect of different numbers of recursive layers on the search results.
The results show that a suitable number of recursive layers helps to obtain stable performance.
It is worth noting that the number of layers does not require much tuning, since the early-stop mechanism described in Sec. \ref{sec:mode} avoids an excessive number of layers.
Fig. \ref{fig:hyperparameter}(d) compares different influence weights of the end point immediate reward and the future maximum immediate reward on the final reward (Eq. (\ref{eq:reward})).
The results show that equalizing the contribution of the two immediate rewards to the final reward can help improve the performance of the {DRL-DBSCAN}.
\section{RELATED WORK}\label{sec:related_work}
\vspace{0.5mm}
\noindent \emph{\textbf{Automatic DBSCAN parameter determination.}}
\textcolor{black}{DBSCAN is heavily dependent on two sensitive parameters ($Eps$ and $MinPts$) requiring prior knowledge for tuning.}
Numerous works propose different solutions for tuning these parameters.
OPTICS \cite{ankerst1999optics} is an extension of DBSCAN, which establishes cluster sorting based on reachability to obtain the $Eps$.
However, it needs to pre-determine an appropriate value of $MinPts$, and obtaining $Eps$ requires interaction with the user.
V-DBSCAN \cite{lu2007vdbscan} and KDDClus \cite{Mitra2011kddclus} plot the curve by using the sorted distance of any object to its $k$-th nearest object, and use the significant change on the curve as a series of candidate values for the $Eps$ parameter.
Similar methods include DSets-DBSCAN \cite{hou2016dsets}, Outlier \cite{akbari2016outlier} and RNN-DBSCAN \cite{bryant2017rnn_dbscan}, all of which require a fixed $MinPts$ value or a predetermined number of nearest neighbors $k$, and the obtained candidate $Eps$ parameters may not be unique.
Besides the above works, some methods \cite{darong2012grid, diao2018lpdbscsan} combine DBSCAN with grid clustering to judge the density trend of the raw samples according to the size and shape of each data region, through pre-determined grid partition parameters.
Although these methods reduce the difficulty of parameter selection to a certain extent, they still require the user to decide at least one parameter heuristically, making them inflexible in changing data.
\vspace{0.5mm}
\noindent \emph{\textbf{Hyperparameter Optimization.}}
For the parameters of DBSCAN, another feasible parameter decision method is based on the Hyperparameter Optimization (HO) algorithm.
\textcolor{black}{The classic HO methods are model-free methods, including grid search \cite{darong2012grid} that searches for all possible parameters, and random search \cite{bergstra2012rand} etc.}
\textcolor{black}{Another approach is Bayesian optimization methods such as BO-TPE \cite{bergstra2011tpe}, SMAC \cite{hutter2011smac}, which optimize search efficiency using prior experience.}
In addition, meta-heuristic optimization methods, such as simulated annealing \cite{kirkpatrick1983anneal}, genetic \cite{lessmann2005ga}, particle swarm \cite{shi1998pso} and differential evolution \cite{qin2008de}, can solve non-convex, non-continuous and non-smooth optimization problems by simulating physical, biological and other processes to search \textcolor{black}{\cite{yang2020hyperparameter}}.
Based on meta-heuristic optimization algorithms, some works propose HO methods for DBSCAN.
BDE-DBSCAN \cite{karami2014bdedbscan} targets an external purity index, selects $MinPts$ parameters based on a binary differential evolution algorithm, and selects $Eps$ parameters using a tournament selection algorithm.
MOGA-DBSCAN \cite{falahiazar2021moga_dbscan} proposes the outlier-index as a new internal index method for the objective function and selects parameters based on a multi-objective genetic algorithm.
Although HO methods avoid handcrafted heuristic parameter decisions, they require an accurate objective function (an external/internal clustering metric) and cannot cope with unlabeled data or with the error of internal metrics.
In contrast, {DRL-DBSCAN} can not only perform a clustering-state-aware parameter search guided by the objective function, but can also retain the learned search experience and conduct searches without the objective function.
\vspace{0.5mm}
\noindent \emph{\textbf{Reinforcement Learning Clustering.}}
Recently, some works that intersect Reinforcement Learning (RL) and clustering algorithms have been proposed.
For example, MCTS Clustering \cite{brehmer2020rl_physics} in particle physics task builds high-quality hierarchical clusters through Monte Carlo tree search to reconstruct primitive elementary particles from observed final-state particles.
\cite{grua2018rl_health}, which targets the health and medical domain, combines two clustering algorithms with RL to cluster users who exhibit similar behaviors.
Both of these works are field-specific RL clustering methods: compared with {DRL-DBSCAN}, the Markov processes they construct are only applicable to fixed tasks, so they are not general clustering methods.
Besides the above work, \cite{bagherjeiran2005rl_kmeans} proposes an improved K-Means clustering algorithm that selects the weights of distance metrics in different dimensions through RL.
Although this method effectively improves the performance of traditional K-Means, it needs to pre-determine the number of clusters $k$, which has limitations.
\section{CONCLUSION}
In this paper, we propose an adaptive DBSCAN parameter search framework based on Deep Reinforcement Learning.
In the proposed {DRL-DBSCAN} framework, the agents that modulate the parameter search direction by sensing the clustering environment are used to interact with the DBSCAN algorithm.
A recursive search mechanism is devised to avoid the search performance decline caused by a large parameter space.
The experimental results of the four working modes demonstrate that the proposed framework not only has high accuracy, stability and efficiency in searching parameters based on the objective function, but also maintains an effective performance when searching parameters without external incentives.
\section{METHODS}\label{sec:appendix1}
\begin{figure}[t]
\centering
\includegraphics[width=9.0cm]{figures/results/dataset.jpg}\vspace{-1em}
\centering
\caption{Cluster distribution on online dataset.}\label{fig:dataset}
\vspace{-1.2mm}
\end{figure}
\begin{table}[t]
\caption{Glossary of Notations.}\label{table:notations}\vspace{-1em}
\resizebox{\linewidth}{!}{%
\begin{tabular}{r|p{6.7cm}}
\toprule
\textbf{Notation} & \textbf{Description}\\
\hline
$v_{j}$; $\mathcal{V}$ & The $j$-th object in data block; Data block\\
$x_{j}$; $\mathcal{X}$; $d$& Feature of the $j$-th object of data block; Feature set; Feature dimension\\
$y_{j}$; $\mathcal{Y}'$ & Label of the $j$-th object of data block; Partial label set\\
$c_{n}$; $\mathcal{C}$ & The $n$-th cluster; Cluster set\\
\hline
$i$; $I$; $I_{max}$ & Step; End step; Maximum step\\
$e$; $E$; $E_{max}$ & Episode; End episode; Maximum episode\\
$l$; $L$; $L_{max}$ & Layer; End layer; Maximum layer\\
$a^{(i)(e)}$; $\mathcal{A}$ & Action of the $i$-th step at episode $e$; Action space\\
$s^{(i)(e)}$; $\mathcal{S}$ & State of the $i$-th step at episode $e$; State set\\
$r^{(i)(e)}$; $R$ & Reward of the $i$-th step at episode $e$; Reward function\\
$\mathcal{P}$; $\mathcal{P}_{o}$ & Parameter combination; Optimal parameter combination\\
$B_{p}$ & Parameter space boundary of the parameter $p$\\
\bottomrule
\end{tabular}}
\vspace{-2.5mm}
\end{table}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Online evaluation and comparison in Predictive Mode.}\label{tab:predictive_blocks}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|cc|cccc|cc|ccc}
\hline
\multirow{2}*{\textbf{Blocks}} & \multirow{2}*{\textbf{Metrics}} & \multicolumn{2}{c|}{\textbf{Traditional}} & \multicolumn{4}{c|}{\textbf{Evolutionary}} & \multicolumn{4}{c}{\textbf{Dedicated}} \\
\cline{3-13}
& & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL$_{all}$}} & \multirow{1}*{\textbf{DRL$_{one}$}} & \\
\hline
\hline
\rowcolor{gray!15} $\mathcal{V}_{9}$ & NMI & .34±.31 & .49±.33 & .22±.34 & .14±.29 & .27±.37 & .10±.26 & .00±.00 & .00±.00 & \textbf{.68±.30} & \textbf{.68±.30} & $\uparrow$ .19\\
\rowcolor{white!15} & ARI & .10±.16 & .18±.20 & .07±.12 & .03±.11 & .11±.18 & .05±.15 & .00±.00 & .00±.00 & \textbf{.36±.20} & \textbf{.36±.20} & $\uparrow$ .18\\
\rowcolor{gray!15} $\mathcal{V}_{10}$ & NMI & .11±.14 & .28±.17 & .17±.21 & .24±.01 & .20±.21 & .12±.18 & .00±.00 & .00±.00 & \textbf{.33±.16} & \textbf{.33±.15} & $\uparrow$ .05\\
\rowcolor{white!15} & ARI & .00±.01 & .02±.02 & .01±.02 & .01±.02 & .01±.01 & .01±.02 & .00±.00 & .00±.00 & \textbf{.03±.02} & \textbf{.03±.02} & $\uparrow$ .01\\
\rowcolor{gray!15} $\mathcal{V}_{11}$ & NMI & .16±.15 & .29±.24 & .23±.18 & \textbf{.33±.29} & .23±.23 & .02±.05 & .00±.00 & .00±.00 & .30±.13 & \underline{.32±.08} & - \\
\rowcolor{white!15} & ARI & .01±.01 & \textbf{.02±.02} & .01±.01 & .00±.01 & .01±.02 & .00±.00 & .00±.00 & .00±.00 & .01±.01 & \textbf{.02±.00} & - \\
\rowcolor{gray!15} $\mathcal{V}_{12}$ & NMI & .23±.25 & .19±.24 & .10±.22 & \underline{.38±.26} & .34±.27 & .03±.06 & .00±.00 & .00±.00 & \underline{.38±.17} & \textbf{.46±.09} & $\uparrow$ .08\\
\rowcolor{white!15} & ARI & .01±.02 & .02±.04 & .01±.02 & .01±.03 & \textbf{.04±.06} & .00±.00 & .00±.00 & .00±.00 & .02±.01 & \underline{.03±.01} & - \\
\rowcolor{gray!15} $\mathcal{V}_{13}$ & NMI & .58±.35 & \underline{.70±.24} & .47±.40 & .44±.31 & .36±.28 & .08±.14 & .00±.00 & .00±.00 & .68±.34 & \textbf{.70±.27} & - \\
\rowcolor{white!15} & ARI & .31±.35 & .33±.28 & .21±.25 & .07±.16 & .08±.16 & .00±.00 & .00±.00 & .00±.00 & \textbf{.39±.29} & \underline{.37±.25} & $\uparrow$ .06\\
\rowcolor{gray!15} $\mathcal{V}_{14}$ & NMI & .36±.19 & .34±.28 & .47±.35 & .37±.33 & .27±.25 & .11±.24 & .00±.00 & .00±.00 & \underline{.60±.27} & \textbf{.62±.16} & $\uparrow$ .15\\
\rowcolor{white!15} & ARI & .03±.04 & .07±.15 & \textbf{.14±.17} & .06±.09 & .06±.12 & .03±.11 & .00±.00 & .00±.00 & \textbf{.14±.08} & .11±.07 & - \\
\rowcolor{gray!15} $\mathcal{V}_{15}$ & NMI & .45±.35 & .38±.36 & .37±.33 & .30±.34 & .36±.32 & .09±.18 & .00±.00 & .00±.00 & \underline{.64±.28} & \textbf{.70±.03} & $\uparrow$ .25\\
\rowcolor{white!15} & ARI & .13±.14 & .17±.26 & .14±.24 & .06±.13 & .10±.17 & .00±.01 & .00±.00 & .00±.00 & \textbf{.21±.19} & \underline{.15±.05} & $\uparrow$ .04\\
\rowcolor{gray!15} $\mathcal{V}_{16}$ & NMI & .22±.32 & .45±.24 & .32±.29 & .19±.27 & .36±.27 & .12±.20 & .00±.00 & .00±.00 & \textbf{.60±.27} & \underline{.53±.20} & $\uparrow$ .15\\
\rowcolor{white!15} & ARI & .07±.15 & \underline{.10±.19} & .06±.12 & .00±.01 & .05±.09 & .00±.01 & .00±.00 & .00±.00 & \textbf{.17±.13} & .07±.06 & $\uparrow$ .07\\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{table}[h]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Time Complexity Analysis.}\label{tab:complexity}
\centering
\scalebox{1.0}{
\begin{tabular}{c|cc}
\hline
Method & Type & Complexity\\
\hline
GA & Genetic Algorithm &\\
PSO & Particle Swarm Optimization &\\
ACO & Ant Colony Optimization &\\
\hline
SMAC & Bayesian Optimization & $O(n\log n)$\\
Random & Random Search &\\
\hline
V-DBSCAN & K Distance &\\
BDE-DBSCAN & Binary Differential Evolution & $O(n\log n)$\\
\hline
{DRL-DBSCAN} & Reinforcement Learning & $O(\log)$\\
\hline
\end{tabular}
}
\end{table}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{The comparison results of different parameter search methods.}\label{tab:offline_full}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|cc|cccc|cc|c}
\hline
\multirow{2}*{\textbf{Dataset}} & \multirow{2}*{\textbf{Metrics}} & \multicolumn{2}{c|}{\textbf{Traditional}} & \multicolumn{4}{c|}{\textbf{Evolutionary}} & \multicolumn{3}{c}{\textbf{Dedicated}} \\
\cline{3-11}
& & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL}} \\
\hline
\hline
\multirow{8}*{Pathbased} & \rowcolor{gray!15} NMI (\%) & 59±24 & 75±14 & 35±33 & 47±28 & 44±32 & 11±21 & 0 & 0 & \textbf{75±21} \\
& \rowcolor{gray!15} AMI (\%) & 59±24 & 75±14 & 35±33 & 41±33 & 43±32 & 11±20 & 0 & 0 & \textbf{75±21} \\
& \rowcolor{gray!15} ARI (\%) & 57±31 & 78±16 & 29±36 & 37±36 & 38±38 & 7±18 & 0 & 0 & \textbf{77±26} \\
& $Eps$ & .15±.08 & .14±.04 & .41±.23 & .26±.30 & .28±.28 & .45±.29 & 0 & 0 & \textbf{.11±.03} \\
& $MinPts$ & 47±44 & 50±28 & 151±83 & 88±94 & 98±74 & 164±78 & 0 & 0 & \textbf{28±10} \\
& No. Cluster & 3±0 & 3±0 & 2±1 & 51±106 & 2±1 & 2±1 & 0 & 0 & \textbf{3±0} \\
& First Iteration & 19±14 & 25±12 & 27±17 & 24±17 & 20±14 & 10±20 & 0 & 0 & \textbf{19±7} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{48±8} \\
\hline
\hline
\multirow{8}*{Compound} & \rowcolor{gray!15} NMI (\%) & 63±27 & 72±7 & 58±31 & 53±37 & 56±27 & 21±33 & 0 & 0 & \textbf{79±4} \\
& \rowcolor{gray!15} AMI (\%) & 62±27 & 72±7 & 57±31 & 52±36 & 54±28 & 20±33 & 0 & 0 & \textbf{79±4} \\
& \rowcolor{gray!15} ARI (\%) & 59±29 & 69±8 & 54±31 & 52±38 & 49±33 & 19±32 & 0 & 0 & \textbf{77±5} \\
& $Eps$ & .16±.05 & .16±.08 & .26±.24 & .36±.40 & .22±.14 & .42±.28 & 0 & 0 & \textbf{.12±.04} \\
& $MinPts$ & 73±88 & 64±41 & 121±101 & 69±127 & 99±103 & 183±129 & 0 & 0 & \textbf{33±19} \\
& No. Cluster & 16±41 & 3±1 & 3±2 & 15±27 & 8±15 & 2±1 & 0 & 0 & \textbf{4±1} \\
& First Iteration & 24±10 & 39±11 & 29±23 & 24±18 & 22±14 & 11±19 & 0 & 0 & \textbf{23±9} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{58±8} \\
\hline
\hline
\multirow{8}*{Aggregation} & \rowcolor{gray!15} NMI (\%) & 65±17 & 59±34 & 45±32 & 52±25 & 64±30 & 45±26 & 0 & 0 & \textbf{96±2} \\
& \rowcolor{gray!15} AMI (\%) & 64±17 & 59±34 & 45±32 & 40±36 & 63±30 & 44±26 & 0 & 0 & \textbf{96±2} \\
& \rowcolor{gray!15} ARI (\%) & 51±21 & 54±34 & 39±32 & 33±34 & 56±31 & 35±26 & 0 & 0 & \textbf{96±3} \\
& $Eps$ & .17±.06 & .17±.08 & .34±.25 & .18±.20 & .17±.12 & .34±.25 & 0 & 0 & \textbf{.09±.02} \\
& $MinPts$ & 115±73 & 182±184 & 281±211 & 167±221 & 175±209 & 232±161 & 0 & 0 & \textbf{32±12} \\
& No. Cluster & 4±2 & 4±2 & 3±1 & 238±379 & 4±2 & 3±1 & 0 & 0 & \textbf{8±1} \\
& First Iteration & 20±11 & 22±15 & 30±17 & 26±17 & 26±15 & 24±21 & 0 & 0 & \textbf{24±6} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{44±9} \\
\hline
\hline
\multirow{8}*{D31} & \rowcolor{gray!15} NMI (\%) & 17±16 & 16±15 & 32±28 & 21±19 & 24±23 & 3±6 & 0 & 0 & \textbf{59±21} \\
& \rowcolor{gray!15} AMI (\%) & 17±16 & 16±15 & 31±28 & 15±15 & 24±23 & 3±6 & 0 & 0 & \textbf{59±21} \\
& \rowcolor{gray!15} ARI (\%) & 2±4 & 2±3 & 11±18 & 2±3 & 6±10 & 0±0 & 0 & 0 & \textbf{23±9} \\
& $Eps$ & .27±.14 & .41±.18 & .33±.23 & .31±.22 & .30±.15 & .47±.34 & 0 & 0 & \textbf{.14±.12} \\
& $MinPts$ & 1269±911 & 1600±676 & 845±639 & 1599±928 & 1213±851 & 1290±814 & 0 & 0 & \textbf{191±83} \\
& No. Cluster & 2±1 & 2±1 & 4±6 & 311±979 & 3±4 & 1±1 & 0 & 0 & \textbf{7±3} \\
& First Iteration & 19±15 & 23±18 & 25±21 & 25±20 & 34±19 & 4±9 & 0 & 0 & \textbf{27±12} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{46±12} \\
\hline
\hline
\multirow{8}*{IRIS} & \rowcolor{gray!15} NMI (\%) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} AMI (\%) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} ARI (\%) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& $Eps$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& $MinPts$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& No. Cluster & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& First Iteration & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{0} \\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{figure*}[t]
\centering
\subfigure[Pathbased.]{\label{fig:eff-pathbased}
\begin{minipage}[t]{0.22\linewidth}
\centering
\includegraphics[width=4.2cm]{figures/results/Epoch-NMI(Pathbased).pdf}
\end{minipage}%
}%
\subfigure[Compound.]{\label{fig:eff-compound}
\begin{minipage}[t]{0.22\linewidth}
\centering
\includegraphics[width=4.2cm]{figures/results/Epoch-NMI(Compound).pdf}
\end{minipage}%
}%
\centering
\subfigure[Aggregation.]{\label{fig:eff-aggregation}
\begin{minipage}[t]{0.22\linewidth}
\centering
\includegraphics[width=4.2cm]{figures/results/Epoch-NMI(Aggregation).pdf}
\end{minipage}%
}%
\centering
\subfigure[D31.]{\label{fig:eff-d31}
\begin{minipage}[t]{0.22\linewidth}
\centering
\includegraphics[width=4.2cm]{figures/results/Epoch-NMI(D31).pdf}
\end{minipage}%
}%
\centering
\caption{Online clustering efficiency comparison.}\label{fig:efficiency}
\end{figure*}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{The comparison results of different parameter search methods.}\label{tab:offline_groups}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|cc|cccc|cc|cc}
\hline
\multirow{2}*{\textbf{Dataset}} & \multirow{2}*{\textbf{Metrics}} & \multicolumn{2}{c|}{\textbf{Traditional}} & \multicolumn{4}{c|}{\textbf{Evolutionary}} & \multicolumn{4}{c}{\textbf{Dedicated}} \\
\cline{3-12}
& & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL$_{s}$}} & \multirow{1}*{\textbf{DRL$_{m}$}} \\
\hline
\hline
\multirow{8}*{ALL} & \rowcolor{gray!15} NMI (\%) & 61±24 & 80±5 & 57±39 & 52±41 & 12±9 & 6±8 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} AMI (\%) & 17±24 & 30±37 & 49±34 & 30±35 & 11±8 & 5±7 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} ARI (\%) & 2±5 & 19±27 & 20±22 & 22±29 & 1±1 & 1±1 & 0 & 0 & 0 & \textbf{0} \\
& $Eps$ & .72±.36 & .63±.37 & 3.4±4.3 & 3.4±4.6 & 3.3±3.5 & 5.9±4.3 & 0 & 0 & 0 & \textbf{0} \\
& $MinPts$ & 2±1 & 1±1 & 2±1 & 106±149 & 260±152 & 274±113 & 0 & 0 & 0 & \textbf{0} \\
& No. of Cluster & 835±801 & 1117±618 & 165±203 & 552±615 & 2±1 & 2±1 & 0 & 0 & 0 & \textbf{0} \\
& First Iteration & 25±14 & 26±11 & 24±19 & 24±21 & 20±17 & 13±18 & 0 & 0 & \textbf{0} & \textbf{0} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{0} & \textbf{0}\\
\hline
\hline
\multirow{8}*{Shape} & \rowcolor{gray!15} NMI (\%) & 35±29 & 57±21 & 14±19 & 53±29 & 19±2 & 7±6 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} AMI (\%) & 14±13 & 15±13 & 9±12 & 7±9 & 18±2 & 6±5 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} ARI (\%) & 2±3 & 2±2 & 1±1 & 1±1 & 1±0 & 0±0 & 0 & 0 & 0 & \textbf{0} \\
& $Eps$ & .24±.21 & .09±.05 & 3.3±3.1 & .19±.25 & .46±.10 & 1.7±2.2 & 0 & 0 & 0 & \textbf{0} \\
& $MinPts$ & 2±1 & 2±1 & 3±1 & 159±204 & 295±134 & 201±117 & 0 & 0 & 0 & \textbf{0} \\
& No. of Cluster & 378±577 & 790±795 & 26±38 & 960±825 & 2±0 & 2±0 & 0 & 0 & 0 & \textbf{0} \\
& First Iteration & 23±14 & 28±9 & 13±19 & 20±13 & 26±15 & 22±18 & 0 & 0 & \textbf{0} & \textbf{0} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{0} & \textbf{0} \\
\hline
\hline
\multirow{8}*{Texture} & \rowcolor{gray!15} NMI (\%) & 64±24 & 73±8 & 30±34 & 34±37 & 12±14 & 3±5 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} AMI (\%) & 14±14 & 17±17 & 15±17 & 4±5 & 9±7 & 2±5 & 0 & 0 & 0 & \textbf{0} \\
& \rowcolor{gray!15} ARI (\%) & 4±5 & 6±9 & 5±9 & 1±1 & 1±0 & 0±0 & 0 & 0 & 0 & \textbf{0} \\
& $Eps$ & .34±.22 & .28±.14 & 3.5±3.3 & 2.0±2.6 & .93±.51 & 3.7±2.8 & 0 & 0 & 0 & \textbf{0} \\
& $MinPts$ & 2±1 & 1±0 & 2±1 & 112±143 & 297±126 & 159±118 & 0 & 0 & 0 & \textbf{0} \\
& No. of Cluster & 1042±689 & 1165±574 & 252±454 & 628±810 & 31±95 & 1±1 & 0 & 0 & 0 & \textbf{0} \\
& First Iteration & 21±17 & 27±12 & 19±21 & 20±18 & 30±18 & 4±6 & 0 & 0 & \textbf{0} & \textbf{0} \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{0} & \textbf{0} \\
\hline
\hline
\multirow{8}*{Margin} & \rowcolor{gray!15} NMI (\%) & 51±25 & 70±14 & 42±31 & 16±22 & 17±6 & 10±14 & 0 & 0 & 0 & 0 \\
& \rowcolor{gray!15} AMI (\%) & 22±16 & 6±13 & 14±18 & 8±8 & 16±6 & 9±12 & 0 & 0 & 0 & 0 \\
& \rowcolor{gray!15} ARI (\%) & 2±3 & 0±1 & 1±1 & 1±1 & 1±1 & 1±1 & 0 & 0 & 0 & 0 \\
& $Eps$ & .60±.16 & .27±.21 & 1.9±2.4 & 2.0±1.7 & 1.1±.16 & 2.7±1.9 & 0 & 0 & 0 & 0 \\
& $MinPts$ & 2±1 & 1±1 & 2±1 & 173±124 & 258±152 & 121±61 & 0 & 0 & 0 & 0 \\
& No. of Cluster & 537±672 & 1291±648 & 503±744 & 161±505 & 2±1 & 3±5 & 0 & 0 & 0 & 0 \\
& First Iteration & 23±12 & 23±12 & 19±17 & 23±23 & 19±11 & 9±19 & 0 & 0 & \textbf{0} & 0 \\
& All Iteration & 50±0 & 50±0 & 50±0 & 52±0 & 53±0 & 57±0 & 0 & 0 & \textbf{0} & 0 \\
\hline
\hline
\end{tabular}
}
\end{table*}
\begin{table}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Multi-dimensional results.}\label{tab:multi-dimensional}
\centering
\scalebox{1.0}{
\begin{tabular}{c|cc|cc}
\hline
\multirow{2}*{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Reuters}} & \multicolumn{2}{c}{\textbf{100 leaves}}\\
\cline{2-5}
& NMI (\%) & Iteration & NMI (\%) & Iteration\\
\hline
GA & 0 & 0 & 0 & 0 \\
PSO & 0 & 0 & 0 & 0 \\
ACO & 0 & 0 & 0 & 0 \\
\hline
SMAC & 0 & 0 & 0 & 0 \\
Randomly & 0 & 0 & 0 & 0 \\
\hline
V-DBSCAN & 0 & 0 & 0 & 0 \\
BDE-DBSCAN & 0 & 0 & 0 & 0 \\
\hline
{DRL-DBSCAN}* & 0 & 0 & 0 & 0\\
{DRL-DBSCAN} & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}[t]
\centering
\subfigure[Data 1 Block 1.]{\label{fig:date1-b1}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{figures/results/online-sample.png}
\end{minipage}%
}%
\subfigure[Data 1 Block 3.]{\label{fig:date1-b2}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{}
\end{minipage}%
}%
\centering
\subfigure[Data 1 Block 5.]{\label{fig:date1-b3}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{}
\end{minipage}%
}%
\centering
\subfigure[Data 2 Block 1.]{\label{fig:date2-b1}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{}
\end{minipage}%
}%
\subfigure[Data 2 Block 3.]{\label{fig:date2-b2}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{}
\end{minipage}%
}%
\centering
\subfigure[Data 2 Block 5.]{\label{fig:date2-b3}
\begin{minipage}[t]{0.333\linewidth}
\centering
\includegraphics[width=5.3cm]{}
\end{minipage}%
}%
\centering
\caption{Online clustering efficiency comparison.}\label{fig:online_blocks}
\end{figure*}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\caption{Online evaluation and comparison in Predictive Mode.}\label{tab:predictive_chunks}
\centering
\scalebox{1.0}{
\begin{tabular}{c|c|cc|cccc|cc|cc}
\hline
\multirow{2}*{\textbf{Chunks}} & \multirow{2}*{\textbf{Metrics}} & \multicolumn{2}{c|}{\textbf{Traditional}} & \multicolumn{4}{c|}{\textbf{Evolutionary}} & \multicolumn{4}{c}{\textbf{Dedicated}} \\
\cline{3-12}
& & \multirow{1}*{\textbf{Rand}} & \multirow{1}*{\textbf{BO-TPE}} & \multirow{1}*{\textbf{Anneal}} & \multirow{1}*{\textbf{PSO}} & \multirow{1}*{\textbf{GA}} & \multirow{1}*{\textbf{DE}} & \multirow{1}*{\textbf{KDist}} & \multirow{1}*{\textbf{BDE}} & \multirow{1}*{\textbf{DRL$_{p}$}} & \multirow{1}*{\textbf{DRL$_{t}$}}\\
\hline
\hline
\rowcolor{gray!15} $\mathcal{V}_{1}$ & NMI & .73±.19 & .86±.03 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.09±.27} & \textbf{.91±.02} \\
\rowcolor{white!15} & ARI & .39±.33 & .59±.15 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.06±.19} & \textbf{.76±.03} \\
\rowcolor{gray!15} $\mathcal{V}_{2}$ & NMI & .88±.04 & .91±.02 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.85±.12} & \textbf{.94±.00} \\
\rowcolor{white!15} & ARI & .61±.18 & .74±.12 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.55±.26} & \textbf{.85±.00} \\
\rowcolor{gray!15} $\mathcal{V}_{3}$ & NMI & .90±.06 & .94±.01 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.87±.07} & \textbf{.96±.00} \\
\rowcolor{white!15} & ARI & .68±.25 & .83±.03 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.58±.23} & \textbf{.85±.00} \\
\rowcolor{gray!15} $\mathcal{V}_{4}$ & NMI & .88±.08 & .92±.02 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.90±.04} & \textbf{.93±.00} \\
\rowcolor{white!15} & ARI & .64±.22 & .72±.07 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.69±.16} & \textbf{.76±.02} \\
\rowcolor{gray!15} $\mathcal{V}_{5}$ & NMI & .36±.16 & .53±.16 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.13±.05} & \textbf{.00±.00} \\
\rowcolor{white!15} & ARI & .02±.02 & .08±.06 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.00±.00} & \textbf{.00±.00} \\
\rowcolor{gray!15} $\mathcal{V}_{6}$ & NMI & .83±.08 & .89±.01 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.83±.05} & \textbf{.91±.00} \\
\rowcolor{white!15} & ARI & .50±.20 & .66±.07 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.43±.16} & \textbf{.71±.00} \\
\rowcolor{gray!15} $\mathcal{V}_{7}$ & NMI & .84±.05 & .86±.01 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.84±.02} & \textbf{.01±.00} \\
\rowcolor{white!15} & ARI & .48±.11 & .51±.02 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.45±.07} & \textbf{.00±.00} \\
\rowcolor{gray!15} $\mathcal{V}_{8}$ & NMI & .85±.04 & .87±.01 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.00±.00} & \textbf{.88±.00} \\
\rowcolor{white!15} & ARI & .46±.10 & .50±.07 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & .00±.00 & \textbf{.00±.00} & \textbf{.48±.04} \\
\hline
\hline
\end{tabular}
}
\end{table*}
\subsection{Multi-dimension DBSCAN Clustering Algorithm}\label{sec:multi-dimension}
In practical applications, a globally fixed density parameter is often only suitable for \textcolor{black}{low-dimensional \cite{}} or \textcolor{black}{uniform-density \cite{}} data.
In high-dimensional data, the curse of dimensionality makes the sample space sparse, which degrades DBSCAN.
In addition, when there are significant density differences between dimensions, different (groups of) dimensions require individualized density parameters.
To this end, we propose an improved DBSCAN algorithm with personalized parameters.
Specifically, we improve the definition of Direct Density-reachable in DBSCAN as follows:
\begin{define}
\textbf{(Directly Density-reachable in Multi-dimensional Clustering).}
An object $v_{j}$ is directly density-reachable from another object $v_{i}$ in a multi-dimensional data chunk $\mathcal{V}$ if, $\forall\, \mathcal{X}_p \in \{\mathcal{X}_{1},\mathcal{X}_{2},\ldots\}$:
\begin{enumerate}
\item [1)] $v_{j} \in N_{Eps_p}(v_{i})$,
\item [2)] $v_{i}$ is a core point.
\end{enumerate}
Here, $N_{Eps_p}(v_{i})$ represents the set of points in chunk $\mathcal{V}$ that are less than distance $Eps_p$ from $v_{i}$.
Note that the distance here is the Euclidean distance calculated with the feature group $\mathcal{X}_p$.
And the condition for an object $v_{i}$ to be a core point in the feature group $\mathcal{X}_p$ is $|N_{Eps_p}(v_{i})| \geq {MinPts}_{p}$.
\end{define}
The other clustering steps (Density-reachable, Density-connected, and Cluster) of our improved DBSCAN algorithm suitable for multi-dimensional clustering are the same as those of the original DBSCAN algorithm, which will not be repeated here.
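As a purely illustrative sketch, this per-group neighborhood test can be written in a few lines of Python; the feature-group split, the per-group parameters $Eps_p$ and ${MinPts}_p$, and the toy data below are placeholder assumptions for the sketch, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def directly_density_reachable(X_groups, eps, minpts):
    # reach[i, j] is True iff v_j is directly density-reachable from v_i:
    # in every feature group p, v_j lies in the Eps_p-neighborhood of v_i
    # and v_i is a core point there (|N_{Eps_p}(v_i)| >= MinPts_p)
    n = X_groups[0].shape[0]
    reach = np.ones((n, n), dtype=bool)
    for Xp, eps_p, mp in zip(X_groups, eps, minpts):
        diff = Xp[:, None, :] - Xp[None, :, :]
        mask = np.sqrt((diff ** 2).sum(-1)) < eps_p  # N_{Eps_p} membership
        core = mask.sum(axis=1) >= mp                # core points in group p
        reach &= mask & core[:, None]
    return reach

# toy example: two feature groups with individualized parameters
rng = np.random.default_rng(0)
X_groups = [rng.normal(size=(30, 2)), rng.normal(size=(30, 3))]
reach = directly_density_reachable(X_groups, eps=[0.8, 1.2], minpts=[4, 4])
print(reach.sum(), "directly density-reachable pairs")
\end{verbatim}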
\begin{comment}
\begin{enumerate}
\item [(1)] \textbf{Directly Density-reachable.}
An object $v_{i}$ is directly density reachable from another object $v_{j}$ in data chunk $\mathcal{V}$ if
\item [(2)] \textbf{Density-reachable.}
\item [(3)] \textbf{Density-connected.}
\end{enumerate}
\begin{define}
\textbf{(Multi-dimension DBSCAN Algorithm).}
We define the multi-dimension DBSCAN clustering as the process of obtaining clusters $\mathcal{C}=\{c_{1}, ..., c_{n}, c_{n+1}, ...\}$ for all data objects $\{v_{1}, ..., v_{i}, v_{i+1}, ...\}$ in data chunk $\mathcal{V}$ based on the parameter combinations $\{Eps, MinPts\}$ for all feature group.
And $v_{i}$ is defined as core point if $N_{Eps}(v_{i})>MinPts$ holds in all feature groups.
\end{define}
\end{comment}
\begin{comment}
\begin{algorithm}[t]
\SetAlgoVlined
\KwIn{The $d$-dimensional feature set $\mathcal{X}=\{x_{1}, ..., x_{i}, x_{i+1}, ...\}$ of chunk $\mathcal{V}$; Partial label $\mathcal{Y}$ for chunk $\mathcal{V}$}
\KwOut{Optimal parameter combination $\{Eps, MinPts\}$}
\For{$episode = 1, ... E, ...$} {
Initialize parameter space for $Eps$ and $MinPts$
\For{$epoch = 1, ... e, ...$} {
Obatin the current state $s^{(e)}$ via Eq. (\ref{eq:state}) \label{code:observe_state}\\
Choose the action $a^{(e)}$ for the current state $s^{(e)}$ \label{code:choose_action}\\
Get new parameter combinations $\{Eps^{(e)}, MinPts^{(e)}\}$ \label{code:get_parameter}\\
\If{Parameter combinations have been explored}{
Load new clustering state $s^{(e+1)}$ and current rewards $r^{(e)}$ from hash table \\
}
\else{
DBSCAN clustering using parameters combinations \label{code:dbsacn_cluster}\\
Get new clustering state $s^{(e+1)}$ and current rewards $r^{(e)}$ via Eq. (\ref{eq:state}) and Eq. (\ref{eq:reward} \label{code:observe_state}\\
Update the hash table \\
}
Store $\mathcal{T}(s^{(e)},a^{(e)},s^{(e+1)},r^{(e)})$ in buffer \label{code:store_t}\\
Update optimal parameter combinations
}
}
\caption{Deep Reinforcement Learning guided automatic DBSCAN parameters search framework}
\label{algorithm:FinEvent}
\end{algorithm}
\end{comment}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/Ratio-NMI-Label.jpg}\vspace{-1em}
\centering
\caption{Label percentage.}\label{fig:percentage}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/Block-Epoch-online.jpg}\vspace{-1em}
\centering
\caption{Comparison of online task round consumption.}\label{fig:online}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{figures/results/Block-NMI-var.jpg}\vspace{-1em}
\centering
\caption{Performance of model variants in online evaluation.}\label{fig:variants}
\end{figure} |
1,116,691,497,669 | arxiv | \section{Introduction}
The location and nature of the QCD phase transition has been extensively
studied using lattice techniques with various different fermion actions
\cite{Cheng:2006qk,Bernard:2004je,Aoki:2006br,Chen:2000zu}.
Recently, the most detailed studies of the transition temperature have been
performed with different variants of the staggered fermion action
\cite{Cheng:2006qk,Bernard:2004je,Aoki:2006br}, which do not preserve the
full chiral symmetry of QCD at finite lattice spacing.
Domain Wall Fermions (DWF),
on the other hand, allow the realization of exact chiral symmetry on the lattice,
at the cost of introducing an auxiliary fifth dimension.
A study of the transition temperature with domain wall fermions
was done for $N_t = 4, 6$ \cite{Chen:2000zu}. However, it was found
that at such coarse lattice spacings, the DWF formulation begins to break
down, with unphysical effects that prevent the extraction
of a reliable estimate of $T_c$.
In this work, we present a study by the RBC Collaboration of the pseudo-critical
temperature, $T_c$, with domain wall fermions with $N_t = 8$ and fifth-dimensional
extent $L_s = 32$. It is hoped that, at the finer lattice spacings needed for $N_t = 8$,
the large lattice artifacts that appear at $N_t = 4, 6$ are under better control. We also
present preliminary results from the HotQCD Collaboration with domain wall fermions at
$N_t = 8$ with $L_s = 96$.
\vspace{-0.4cm}
\section{Simulation Details at $L_s = 32$}
For our study we utilize the standard DWF action with an Iwasaki gauge action.
The behavior of DWF at zero temperature has been extensively studied for
this combination of actions; see refs. \cite{Antonio:2007tr, Allton:2008pn}
for more details.
The Rational Hybrid Monte Carlo (RHMC) algorithm
\cite{Clark:2004cp,Clark:2006fx}, an exact algorithm that satisfies
detailed balance and reversibility, is used to generate the gauge configurations.
A three-level nested Omelyan integrator
is used in the molecular dynamics evolution, with $\lambda = 0.22$.
The length of the molecular dynamics trajectories between Metropolis steps is
chosen to be $\tau = 1$. The step size is tuned to achieve an acceptance rate of
approximately $75\%$. A spatial volume of $16^3$ is used for the finite temperature ensembles
with $N_t = 8$. For each value of the gauge coupling, we use fixed values for the bare
light and strange quark masses ($m_l a = 0.003$ and $m_s a = 0.037$).
1200 molecular dynamics trajectories were also generated
at $\beta = 2.025$ with a volume of $16^3 \times 32$ and $L_s = 32$,
at the same quark masses used for the finite temperature
ensembles. These configurations were used to calculate the static quark potential, as
well as the meson spectrum.
From the static quark potential, we obtain a value for the Sommer parameter, $r_0/a = 3.08(9)$.
Using $r_0 = 0.469(7)$ fm, this indicates a lattice scale of $a^{-1} \approx 1.3$ GeV at
$\beta = 2.025$. The meson spectrum measurements give a pion mass $m_\pi \approx 310$ MeV, while
the kaon mass is within 10\% of the physical value.
\begin{comment}
\begin{table}[bt]
\begin{center}
\begin{tabular}{c|ccccccccc}
\hline
\hline
$\beta$ & 1.95 & 1.975 & 2.00 & 2.0125 & 2.025 & 2.0375 & 2.05 & 2.0625 & 2.08$^\dagger$ \\
\hline
Trajectories & ~745 & 1100 & 1275 & 2150 & 2210 & 2690 & 3015 & 2105 & 1655\\
Acceptance Rate & ~0.778 & 0.769 & 0.760 & 0.776 & 0.745 & 0.746 & 0.754 & 0.753 & 0.852\\
$\sqrt{\left<\Delta H^2\right>}$ & ~0.603 & 0.583 & 0.647 & 0.687 & 0.824 & 1.072 & 1.248 & 1.599 & 0.478 \\
$\left<\exp(-\Delta H)\right>$ & ~1.026 & 1.022 & 0.969 & 1.017 & 0.987 & 0.995 & 0.987 & 1.051 & 1.002 \\
\end{tabular}
\caption{$\beta$ values, trajectories accumulated, and various RHMC statistics. $^\dagger$: $\delta t = 0.167$.}
\label{tab:parameters}
\end{center}
\end{table}
\end{comment}
\vspace{-0.4cm}
\section{Finite Temperature Observables}
The observables that we use to probe the chiral properties at a given temperature
are the light and strange quark chiral condensates
($\left<\bar \psi \psi_l\right>,\left<\bar \psi \psi_s\right>$), and
the disconnected part of the chiral susceptibility ($\chi_l, \chi_s$).
They are defined as:
\begin{eqnarray}
\left<\bar \psi \psi_q\right> & = &\frac{\partial \ln Z}{\partial m_q} = \frac{1}{N_s^3 N_t}\left<Tr(M_q^{-1})\right>\\
\chi_q & = & \left<(\bar \psi \psi_q)^2\right> - \left<\bar \psi \psi_q\right>^2
\end{eqnarray}
On all of our finite temperature configurations, we measure both the light
and strange chiral condensates every fifth trajectory, using 5
random sources per configuration to estimate $Tr(M_q^{-1})$.
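As an aside, the noisy-source trace estimate can be illustrated with a short Python sketch; the dense toy matrix and direct solve below merely stand in for the Dirac operator and its inverter, and the $Z_2$ noise is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def stochastic_trace_inv(M, n_src=5, rng=None):
    # E[eta^T M^{-1} eta] = Tr(M^{-1}) for Z_2 noise, since E[eta eta^T] = 1
    rng = rng or np.random.default_rng(0)
    acc = 0.0
    for _ in range(n_src):
        eta = rng.choice([-1.0, 1.0], size=M.shape[0])  # Z_2 noise source
        acc += eta @ np.linalg.solve(M, eta)  # stands in for a Dirac solve
    return acc / n_src

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
M = A @ A.T + 200.0 * np.eye(200)  # well-conditioned toy "operator"
print(stochastic_trace_inv(M), np.trace(np.linalg.inv(M)))  # close values
\end{verbatim}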
Figure \ref{fig:pbp} shows the chiral condensate and
the disconnected part of the chiral susceptibility, respectively. Examining
the light and strange chiral condensate, it is difficult to precisely locate an
inflection point, which is the signal for a thermal crossover.
However, we can use the disconnected chiral susceptibility, a measure
of the fluctuations in the chiral condensate. As seen in figure
\ref{fig:pbp}, there is a clear peak in the light disconnected susceptibility.
The results for the chiral condensates and the associated susceptibilities are
summarized in table \ref{tab:ls32_results}. Fits to the peak region indicate
that $\beta_c = 2.041(5)$.
We also measure the observables that probe confinement, i.e. the Wilson line and its
associated susceptibility. These are defined as:
\begin{equation}
\left<W\right> = \frac{1}{N_s^3}\sum_{\mathbf{x}}Tr \left(\prod_{t=0}^{N_t-1} U_{\mathbf{x}, t}\right);~~ \chi_W = \left<W^2\right> - \left<W\right>^2.
\end{equation}
The results for the Wilson line and Wilson line susceptibility are also given in table \ref{tab:ls32_results}.
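As a toy illustration of the first definition, the sketch below evaluates $\left<W\right>$ for random U(1) temporal links; for SU(3) the product would be ordered and traced, and the disordered configuration is a placeholder.
\begin{verbatim}
import numpy as np

Ns, Nt = 16 ** 3, 8
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(Ns, Nt))
U_t = np.exp(1j * theta)  # U(1) temporal link at each (x, t)

W = np.prod(U_t, axis=1)  # Wilson line: product of the Nt links at x
print(abs(W.mean()))      # ~0 for a disordered ("confined") configuration
\end{verbatim}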
\begin{figure}[t]
\begin{minipage}[c]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/pbp.eps}
\end{minipage}
\begin{minipage}[c]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/pbp_sus.eps}
\end{minipage}
\caption{On the left, $\left<\bar{\psi}\psi\right>$ for $L_s = 32, 64, 96$. On the right, the disconnected
chiral susceptibility for $L_s = 32, 64, 96$.}
\label{fig:pbp}
\vspace{-0.5cm}
\end{figure}
\begin{table}[b]
\begin{tabular}{r@{.}l|c|cc|cc|cc}
\hline
\hline
\multicolumn{2}{c|}{$\beta$} & Traj. & $\left<\bar{\psi}\psi_l\right> ~(10^{-3})$ & $\chi_l ~(10^{-8})$ & $\left<\bar{\psi}\psi_s\right> ~(10^{-3})$ & $\chi_s ~(10^{-8})$ & $\left<W\right> ~ (10^{-3})$ & $\chi_W ~ (10^{-4})$\\
\hline
1&95 & 745 & 3.71(3) & 2.13(57) & 6.66(2) & 1.15(25) & 4.40(62) & 1.15(10)\\
1&975 & 1100 & 2.92(3) & 2.73(46) & 5.99(2) & 1.37(23) & 5.44(42) & 1.42(10)\\
2&00 & 1275 & 2.19(3) & 3.12(89) & 5.40(1) & 0.90(22) & 6.52(47) & 1.32(10)\\
2&0125 & 2150 & 1.89(3) & 5.43(68) & 5.14(2) & 1.88(23) & 9.02(53) & 1.46(5)\\
2&025 & 2210 & 1.62(3) & 5.91(87) & 4.92(2) & 1.56(21) & 10.18(61)& 1.45(8)\\
2&0375 & 2690 & 1.33(3) & 9.35(82) & 4.71(2) & 1.76(18) & 13.61(55)& 1.43(6)\\
2&05 & 3015 & 0.98(3) & 6.80(61) & 4.46(2) & 1.48(26) & 16.77(71)& 1.56(7)\\
2&0625 & 2105 & 0.84(3) & 6.85(90) & 4.34(2) & 1.38(18) & 18.22(86)& 1.72(9)\\
2&08 & 1655 & 0.58(3) & 3.77(65) & 4.10(3) & 1.00(21) & 25.91(129)&1.78(11)\\
\end{tabular}
\caption{Summary of finite temperature observables, as well as the number of trajectories generated.}
\label{tab:ls32_results}
\end{table}
\vspace{-0.4cm}
\section{Residual Mass}
One of the primary drawbacks of the current calculation is the rather
large residual chiral symmetry breaking for the parameters that we employ.
This manifests itself in a value for the residual mass, $m_{res}$,
which is larger than the input light quark mass, $m_l a = 0.003$, over
almost the entire range of parameters in our calculation.
We have measured $m_{res}$ at $\beta = 2.025$ on the $16^3 \times 32$
zero temperature ensemble, which gives $m_{res} = .00665(8)$. We have also
measured the residual mass on the finite temperature lattices. These measurements
agree well with measurements on zero temperature lattices at nearby $\beta$.
Figure \ref{fig:mres} shows how $m_{res}$ varies with $\beta$.
As we can see, $m_{res}$ has a strong, exponential dependence on $\beta$.
While we have chosen the input light quark mass $m_l = 0.003$ to be fixed
at the different $\beta$, the exponential dependence
of $m_{res}$ means that the effective light quark mass,
$m_q = m_l + m_{res}$ changes significantly in the crossover region,
from $m_q \approx .0075$ at $\beta = 2.05$ up to $m_q \approx .013$ at
$\beta = 2.00$.
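The exponential trend can be quantified by a log-linear fit, sketched below in Python; only $m_{res}(2.025)=0.00665$ is our measured value, while the remaining $(\beta, m_{res})$ pairs are hypothetical numbers chosen to reproduce the quoted $m_q$ at $\beta=2.00$ and $\beta=2.05$.
\begin{verbatim}
import numpy as np

beta  = np.array([2.000, 2.0125, 2.025, 2.0375, 2.050])
m_res = np.array([0.0100, 0.0082, 0.00665, 0.0055, 0.0045])  # mostly hypothetical

slope, intercept = np.polyfit(beta, np.log(m_res), 1)
print(slope)          # ~ -16: m_res is roughly exp(intercept + slope * beta)
print(0.003 + m_res)  # effective light quark mass m_q = m_l + m_res
\end{verbatim}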
\begin{figure}[t]
\begin{minipage}[c]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/mres.eps}
\end{minipage}
\begin{minipage}[c]{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/pbp_b2.0375.eps}
\end{minipage}
\caption{On the left, the dependence of $m_{res}$ on $\beta$. On the right, the dependence of
$\left<\bar{\psi}\psi\right>$ on $L_s$ at fixed input quark mass $m_q a = 0.003$ at $\beta = 2.0375$.}
\label{fig:mres}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.4cm}
\section{Chiral observables at varying $L_s$}
The shifting of the quark mass with $\beta$ results in a distortion
of the shape of the susceptibility
curves that we use to locate the crossover transition. In order to
understand how this varying mass affects our results, we have measured
the partially quenched chiral condensate with different $L_s$ and $m_l$
at various $\beta$. In particular, we choose
$L_s = 64$ with the same input light quark mass, $m_l = 0.003$, while for
$L_s = 96$, we vary the input quark mass so that the total effective
quark mass $m_q$ approximately matches that for $L_s = 32$ at the chosen value
of $\beta$. For one value of the gauge
coupling ($\beta = 2.0375$), we measure $\bar{\psi}\psi$ with many
choices of valence $L_s$ and fixed input quark masses
$(m_l,m_s) = (0.003, 0.037)$.
Table \ref{tab:pbpLs} gives the results of these measurements.
Figure \ref{fig:pbp} shows the results with
valence $L_s = 64$ and $L_s = 96$ in context with the $L_s = 32$ results.
\begin{table}[hbt]
\centering
\begin{tabular}{cc|ccc|ccc}
\hline
\hline
$L_s$ & $\beta$ & $m_l$ & $\left<\bar{\psi}\psi_l\right> ~(10^{-3})$ & $\chi_l ~(10^{-8})$ & $m_s$ & $\left<\bar{\psi}\psi_s\right> ~(10^{-3})$ & $\chi_s ~(10^{-8})$\\
\hline
8 & 2.0375 ~& ~0.003 & 4.33(2) & 2.39(22) & ~0.037 & 7.40(1) & 1.43(15)\\
16 & & & 1.75(2) & 4.05(40) & & 5.05(1) & 1.46(16)\\
24 & & & 1.40(2) & 5.89(63) & & 4.78(1) & 1.51(18)\\
48 & & & 1.28(3) & 11.4(14) & & 4.65(1) & 1.58(20)\\
\hline
64 & 2.0125 ~& ~0.003 & 1.83(4) & 10.7(9) & ~0.037 & 5.05(2) & 2.15(21)\\
& 2.025 & & 1.58(3) & 10.8(11) & & 4.84(2) & 1.67(25)\\
& 2.0375 & & 1.31(4) & 15.3(25) & & 4.63(1) & 1.63(20)\\
& 2.05 & & 0.96(3) & 12.9(14) & & 4.41(2) & 1.74(25)\\
\hline
96 & 2.025 ~& ~0.0078 & 2.03(2) & 5.59(92) & ~0.0418 & 5.29(1) & 1.27(20)\\
& 2.0375 ~& ~0.0063 & 1.59(2) & 8.22(75) & ~0.0403 & 4.95(1) & 1.61(19)\\
& 2.05 ~& ~0.0070 & 1.36(3) & 6.70(62) & ~0.0410 & 4.79(2) & 1.35(20)\\
\hline
\end{tabular}
\caption{Partially quenched measurements of $\left<\bar{\psi}\psi\right>$ at
different $L_s$, $\beta$, $m_l$.}
\label{tab:pbpLs}
\end{table}
From Figure \ref{fig:pbp}, we see that holding the input quark mass fixed
at $m_l = 0.003$ while reducing $m_{res}$ by setting $L_s = 64$ does not have a
significant effect on the chiral condensate. On the other hand, with a larger input quark
mass and $L_s = 96$, the chiral condensate shifts appreciably. Figure
\ref{fig:mres} shows the dependence of $\left<\bar{\psi}\psi\right>$
on $L_s$ at $\beta = 2.0375$. For small values of $L_s$, there is a strong
dependence, but the chiral condensate quickly plateaus to an approximately
constant value for $L_s = 32, 64$, even though $m_{res}$ is still changing
significantly in this region.
In contrast to the chiral condensate, the disconnected part of the chiral
susceptibility depends on the total effective quark mass,
$m_q = m_l + m_{res}$.
As seen in figure \ref{fig:pbp}, when the total quark mass is changed at $L_s = 64$,
the chiral susceptibility differs significantly from the measurements at $L_s = 32$.
However, when we keep the total quark mass $m_l + m_{res}$ fixed at $L_s = 96$,
the resulting chiral susceptibility agrees with $L_s = 32$.
Thus, while the chiral condensate is sensitive to the relative contributions from the
input quark mass and the residual mass, the chiral susceptibility is a function of
only the total quark mass, $m_q = m_l + m_{res}$.
\vspace{-0.4cm}
\section{Determining $T_c$ at $L_s = 32$}
\label{sec:Tc}
While the peak in the light chiral susceptibility is well-determined to be
$\beta_c = 2.041(5)$, there are several issues that need to be addressed in extracting
a physical value for the pseudo-critical temperature, $T_c$.
Since $m_{res}$ changes so drastically as a function of $\beta$, the chiral
susceptibility curve is distorted by the changing light quark mass. Taking a
simple ansatz $\chi_l \sim 1/m_q$, we can adjust our data so that the bare quark
mass is fixed as $\beta$ varies. This adjustment shifts $\beta_c$ to stronger
coupling, $\beta_c = 2.031(5)$.
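Concretely, the adjustment multiplies each measured $\chi_l$ by $m_q(\beta)/m_q(\beta_{\rm ref})$ before locating the peak. The sketch below applies it to values from table \ref{tab:ls32_results}, reusing the hypothetical $m_{res}(\beta)$ interpolation from the previous section; it only illustrates the rescaling, not the actual peak fit.
\begin{verbatim}
import numpy as np

beta  = np.array([2.000, 2.0125, 2.025, 2.0375, 2.050])
chi_l = np.array([3.12, 5.43, 5.91, 9.35, 6.80])  # 1e-8, from the table
m_res = np.array([0.0100, 0.0082, 0.00665, 0.0055, 0.0045])  # hypothetical

m_q = 0.003 + m_res
chi_fixed = chi_l * m_q / m_q[2]  # chi ~ 1/m_q  =>  chi * m_q = const
print(np.round(chi_fixed, 2))    # weak-coupling side suppressed
\end{verbatim}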
We have determined the lattice scale at $\beta = 2.025$, which differs slightly from
$\beta_c = 2.031(5)$. Using a simple interpolation between our result at $\beta = 2.025$
and the results at weaker coupling\cite{Li:2006gra}, we obtain $r_0/a = 3.12(13)$ at
$\beta = \beta_c$. The results in ref. \cite{Li:2006gra} also indicate that chiral extrapolation and
finite-volume effects add 4\% to this value, giving $r_0/a = 3.25(18)$, where the error bar has
been artificially inflated to include this 4\% in the uncertainty.
Since this calculation is done only at $N_t = 8$ and at one set of quark masses, we cannot
perform either the chiral or continuum extrapolation needed to obtain a value for $T_c$ at physical
quark masses in the continuum.
From ref. \cite{Cheng:2006qk}, we can estimate the effect of the chiral extrapolation to the
physical quark masses to be
about 5\%. Ref. \cite{Cheng:2006qk} also found that the effect of the continuum extrapolation
is approximately 5\%, although it is obtained using the p4 action.
For our purposes, we estimate the error from the lack of continuum and chiral extrapolations
to be 10\%.
Taking these errors into account, we obtain a value of $T_c r_0 = .406(23)(41)$, or $T_c = 171(10)(17)$ MeV,
where the first error bar takes into account all the systematic errors outlined above except for the chiral
and continuum extrapolation, which are reflected in the second error.
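For clarity, since the temperature in lattice units is $T = 1/(N_t a)$, these numbers follow from
$$
T_c r_0 = \frac{r_0/a}{N_t} = \frac{3.25(18)}{8} = 0.406(23), \qquad
T_c = \frac{T_c r_0}{r_0}\,\hbar c = \frac{0.406}{0.469~{\rm fm}}\times 197.3~{\rm MeV\,fm}
\approx 171~{\rm MeV},
$$
with the remaining systematic uncertainties propagated as described above.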
\vspace{-0.4cm}
\section{Results at $L_s = 96$}
\begin{table}[t]
\begin{tabular}{r@{.}l|c|cc|cc|cc}
\hline
\hline
\multicolumn{2}{c|}{$\beta$} & Trajectories & $m_l a$ & $m_s a$ & $\left<\bar{\psi}\psi_l\right> ~(10^{-3})$ & $\chi_l ~(10^{-8})$ & $\left<W\right> ~ (10^{-3})$ & $\chi_W ~ (10^{-4})$\\
\hline
1&9875 & 1395 & 0.00250 & 0.0407 & 2.15(3) & 8.7(14) & 6.1(5) & 1.34(9)\\
2&00 & 1485 & 0.00325 & 0.0415 & 1.97(3) & 8.2(12) & 5.4(6) & 1.29(9)\\
2&0125 & 1425 & 0.00395 & 0.0422 & 1.68(3) & 10.1(17) & 9.4(6) & 1.59(11)\\
2&025 & 1730 & 0.00435 & 0.0426 & 1.64(2) & 8.3(11) & 9.6(6) & 1.45(8)\\
2&0375 & 1630 & 0.00485 & 0.0431 & 1.33(3) & 9.5(10) & 13.6(6) & 1.45(9)\\
2&05 & 1565 & 0.00525 & 0.0435 & 1.21(2) & 6.7(10) & 14.4(6) & 1.43(9)\\
\end{tabular}
\caption{Finite temperature observables for $L_s = 96$}
\label{tab:ls96_results}
\end{table}
The primary drawback of the RBC Collaboration's calculation just described
is the rapidly changing residual mass as a function of $\beta$ in the
transition region. As we have discussed, this means that the effective
quark masses in the strong coupling side of $\beta_c$ are significantly larger
than those on the weak coupling side. This has the effect of distorting the
shape of the chiral susceptibility peak in the $L_s = 32$ calculation, making the
peak appear sharper and at weaker coupling than if we worked at fixed quark mass.
The HotQCD Collaboration's calculation seeks to address this flaw by working at
$L_s = 96$. Utilizing $L_s = 96$ reduces $m_{res}$ approximately by a factor of 3.
It also allows us to choose the input quark masses ($m_l a$ and $m_s a$) so that
the total effective quark mass ($(m_l + m_{res}) a$ and $(m_s + m_{res}) a$) are
the same at each value of $\beta$ that is used. For this calculation,
the light quark mass is chosen to be $1/6$ the strange quark mass at each value of
$\beta$, where the strange quark is chosen to be approximately physical. The corresponding
bare quark masses are given in table \ref{tab:ls96_results}. Otherwise, the lattice actions,
the spatial volume, and the molecular dynamics algorithm used are all identical to
those used at $L_s = 32$.
Table \ref{tab:ls96_results} gives preliminary results for the chiral condensates, Wilson line, and their
susceptibilities. These results are also presented in figure \ref{fig:ls96_result}. As expected, working
at fixed quark mass results in a chiral susceptibility where the peak is broader and
much harder to resolve. In addition, there are some indications that $\beta_c$ is
at stronger coupling at $L_s = 96$ compared to $L_s = 32$, as expected, but the
statistical error makes this far from a certain conclusion.
\begin{figure}[b]
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/dwfLs96_pbp_compLs32.eps}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/dwfLs96_wline_compLs32.eps}
\end{minipage}
\caption{On the left, a comparison between $L_s = 32$ and $L_s = 96$ of the light quark chiral condensate and chiral
susceptibility. On the right, the same comparison for the Wilson line and its susceptibility.}
\label{fig:ls96_result}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.4cm}
\section{Conclusions and Outlook}
\label{sec:conclusion}
We have presented two studies of the critical region of finite temperature QCD
using domain wall fermions at $N_t = 8$.
The calculation at $L_s = 32$ by the RBC Collaboration uses observables related
to chiral symmetry (i.e. chiral condensate and disconnected chiral
susceptibility), to calculate the crossover transition temperature, giving
the result $T_c r_0 = .406(23)(41)$, or $T_c = 171(10)(17)$ MeV.
The second calculation, by the HotQCD Collaboration, improves upon the RBC
calculation by using $L_s = 96$ to further reduce the residual chiral symmetry breaking,
while tuning the input quark masses so that the total bare quark mass (including $m_{res}$)
is fixed at each different value of $\beta$. However, preliminary results
show that the peak is less sharply resolved compared to $L_s = 32$, so no reliable
estimate for $\beta_c$ can yet be obtained.
Data at a few additional $\beta$ at both stronger and weaker coupling are needed
to better resolve the shoulders of the chiral susceptibility peak (if
indeed such a peak exists). In addition, a zero temperature calculation to determine
the lattice scale needs to be done in order to obtain a physical value for $T_c$ in MeV.
\vspace{-0.2cm}
|
1,116,691,497,670 | arxiv | \section{Introduction}
In this paper we propose a sequential optimality condition, associated to the weak
stationarity condition presented in our previous work \cite{KrulikovskiRibeiroSachine20aX},
designed to deal with {\em Mathematical Programs with Cardinality Constraints} (MPCaC)
given by
\begin{equation}
\label{aw_prob:mpcac}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize } & f(x) \\
{\rm subject\ to } & x \in X, \\
& \|x\|_0\leq \alpha,
\end{array}
\end{equation}
where $f:\R^n\to\R$ is a continuously differentiable function, $X\subset\R^n$ is a set given
by equality and/or inequality constraints, $\alpha>0$ is a given natural number and
$\|x\|_0$ denotes the cardinality of the vector $x\in\R^n$, that is, the number of nonzero
components of $x$. We assume that $\alpha<n$, since otherwise the cardinality constraint
would be innocuous. Note, however, that if $\alpha$ is too small, the cardinality constraint
may be too restrictive, possibly leading to an empty feasible set.
Furthermore, the main difference between the problem (\ref{aw_prob:mpcac}) and a standard nonlinear
programming problem is that the cardinality constraint, despite the notation, does not involve
a norm: the function $\|\cdot\|_0$ is neither continuous nor convex.
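For instance, in $\R^2$ one has $\|(1,0)\|_0=\|(0,1)\|_0=1$ but $\|\frac{1}{2}(1,0)+\frac{1}{2}(0,1)\|_0=2$, so convexity fails; $\|2(1,0)\|_0=1\neq 2\|(1,0)\|_0$, so homogeneity fails; and $\|(t,0)\|_0=1$ for all $t\neq 0$ while $\|(0,0)\|_0=0$, so continuity fails at the origin.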
One reformulation to deal with this difficult cardinality constraint consists of addressing
its continuous counterpart \cite{BurdakovKanzowSchwartz16}
\begin{equation}
\label{aw_prob:relax}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y} & f(x) \\
{\rm subject\ to } & x \in X, \\
& e^Ty\geq n-\alpha, \\
& x_iy_i=0,\; i=1,\ldots,n, \\
& 0\leq y_i \leq 1, \; i=1,\ldots,n,
\end{array}
\end{equation}
which will be referred to as relaxed problem and, with some abuse of terminology,
will be indicated as MPCaC as well.
It can be seen that these problems are equivalent in the sense that global
solutions of (\ref{aw_prob:mpcac}) correspond, in a natural way, to global solutions
of (\ref{aw_prob:relax}) and, if $x^*\in\R^n$ is a local minimizer of (\ref{aw_prob:mpcac}),
then every feasible pair $(x^*,y^*)$ is a local minimizer of (\ref{aw_prob:relax}).
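To make the reformulation concrete, the Python sketch below poses a small instance of (\ref{aw_prob:relax}) to a generic NLP solver; the quadratic objective, the box set $X$ and the solver choice are illustrative assumptions only, and the degenerate complementarity constraints discussed next may of course defeat such a solver.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

n, alpha = 5, 2
rng = np.random.default_rng(0)
c = rng.normal(size=n)
f = lambda z: 0.5 * z[:n] @ z[:n] + c @ z[:n]  # objective depends on x only

cons = [
    {"type": "ineq", "fun": lambda z: z[n:].sum() - (n - alpha)},  # e^T y >= n - alpha
    {"type": "eq",   "fun": lambda z: z[:n] * z[n:]},              # x_i y_i = 0
]
bounds = [(-1.0, 1.0)] * n + [(0.0, 1.0)] * n  # x in X = [-1, 1]^n, 0 <= y <= 1

z0 = np.concatenate([np.zeros(n), np.ones(n)])  # feasible starting pair (x, y)
res = minimize(f, z0, constraints=cons, bounds=bounds, method="SLSQP")
x = res.x[:n]
print(np.round(x, 3), "nonzeros:", int((np.abs(x) > 1e-6).sum()))
\end{verbatim}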
In \cite{KrulikovskiRibeiroSachine20aX} we proposed new and weaker stationarity
conditions for this class of problems, by means of a unified approach that goes
from the weakest to the strongest stationarity. Indeed, one cannot rely on
KKT points for MPCaC problems, since some standard constraint qualifications
are violated. This occurs in view of the complementarity constraints $x_iy_i=0$,
$i=1,\ldots,n$.
However, the weaker condition proposed in \cite{KrulikovskiRibeiroSachine20aX},
called $W_{I}$-stationarity, despite being weaker than KKT, is not a necessary
optimality condition. Therefore, we
propose in this work an Approximate Weak stationarity ($AW$-stationarity) concept,
which will be proved to be a legitimate optimality condition, independently of
any constraint qualification.
In the last few years, special attention has been paid to the so-called sequential
optimality conditions for nonlinear constrained optimization
\cite{AndreaniFazzioSchuverdtSecchin,AndreaniHaeserMartinez,AndreaniMartinezRamosSilva16,AndreaniMartinezRamosSilva18,AndreaniMartinezSvaiter,MartinezSvaiter,RibeiroSachineSantos18}.
Sequential optimality conditions are intrinsically related to the stopping criteria
of numerical algorithms, and their study aims at unifying the theoretical convergence
analysis associated with the corresponding algorithm. Within this context, for
instance, the augmented Lagrangian method (see \cite{BirginMartinez} and references therein)
has been extensively analyzed, being shown to satisfy weak sequential conditions,
thus giving rise to strong convergence results.
Sequential optimality conditions are necessary for optimality, i.e., a local
minimizer of the problem under consideration verifies such a condition, independently
of the fulfillment of any constraint qualification (CQ).
The \emph{approximate Karush-Kuhn-Tucker} (AKKT) is one of the most popular of these
conditions, and it was defined in \cite{AndreaniHaeserMartinez}
and \cite{QiWei}. Another two sequential optimality conditions for standard nonlinear
programming, both stronger than AKKT, are
\emph{positive approximate KKT} (PAKKT) \cite{AndreaniFazzioSchuverdtSecchin}
and \emph{complementary approximate KKT} (CAKKT) \cite{AndreaniMartinezSvaiter}.
Whenever it is proved that an AKKT (or CAKKT or PAKKT) point is indeed a
Karush-Kuhn-Tucker (KKT) point under a certain CQ,
any algorithm that reaches AKKT (or CAKKT or PAKKT) points
(e.g. augmented Lagrangian-type methods) automatically has the theoretical convergence
established assuming the same CQ. This paves the grounds for the aforementioned unification.
Sequential optimality conditions have also been proposed for nonstandard optimization
\cite{AndreaniHaeserSecchinSilva,HelouSantosSimoes20a,HelouSantosSimoes20b,Ramos}.
In the context of {\em Mathematical Programs with Equilibrium Constraints} (MPECs),
and motivated by AKKT, the MPEC-AKKT condition, which has a geometric appeal, was
introduced in \cite{Ramos}; in \cite{AndreaniHaeserSecchinSilva}, new conditions
were established for {\em Mathematical Problems with Complementarity Constraints}
(MPCCs), namely $AW$-, $AC$- and $AM$-stationarity. The latter one was compared
with the sequential condition presented in \cite{Ramos}.
Even though there is a considerable literature devoted to sequential conditions for
standard nonlinear optimization and even for specific problems (MPCC and MPEC),
to the best of our knowledge, no sequential optimality condition has been
proposed for MPCaC problems. Such problems are very degenerate because of the problematic
complementarity constraints $x_iy_i=0$ and therefore the known sequential optimality
conditions may not be suitable to deal with them. Thereby, we propose a sequential
optimality condition, namely $AW$-stationarity, associated to $W_{I}$-stationarity
and designed to deal with MPCaC problems. This condition is based on the one
proposed in \cite{AndreaniHaeserSecchinSilva} for MPCC problems.
The main contribution of this paper is that $AW$-stationarity is indeed a necessary
optimality condition, without any constraint qualification assumption.
We also establish some relationships between our $AW$-stationarity and other well known
sequential optimality conditions. In particular, and surprisingly, we prove that
AKKT fails to detect good candidates for optimality
for every MPCaC problem.
We stress that, despite the algorithmic appeal of the sequential optimality
conditions, in this work we are neither concerned with
applications nor with computational aspects or algorithmic consequences.
Our aim is to discuss theoretical aspects of such conditions for MPCaC problems.
The paper is organized as follows: in Section \ref{aw_sec:prelim} we establish the
notation, some definitions and results concerning standard nonlinear programming
and recall the weak stationarity concept proposed in our previous
work \cite{KrulikovskiRibeiroSachine20aX}. Section \ref{aw_sec:aw} presents
the main results of this paper, concerning sequential optimality conditions for MPCaC.
In Section \ref{aw_sec:rel_akkt_cakkt} we provide some relationships between approximate
stationarity for standard nonlinear optimization and $AW$-stationarity.
Concluding remarks are presented in Section~\ref{aw_sec:concl}.
\medskip
{\noindent\bf Notation.} Throughout this paper, for vectors $x,y\in\R^n$, $x*y$ denotes
the Hadamard product between $x$ and $y$, that is, the vector obtained by the
componentwise product of $x$ and $y$. In the same way, the ``min'' in the vector
$\min\{x,y\}$ is taken componentwise. We also use the following sets of indices:
$I_{00}(x,y)=\{i\mid x_i=0,y_i=0\}$,
$I_{\pm 0}(x,y)=\{i\mid x_i\neq 0,y_i=0\}$,
$I_{0+}(x,y)=\{i\mid x_i=0,y_i\in(0,1)\}$,
$I_{01}(x,y)=\{i\mid x_i=0,y_i=1\}$,
$I_{0\,>}(x,y)=\{i\mid x_i=0,y_i>0\}$ and
$I_0(x)=\{i\mid x_i=0\}$. For a vector-valued function $\xi:\R^n\to\R^s$, denote
$I_\xi(x)=\{i\mid \xi_i(x)=0\}$, the set of active indices, and
$\nabla\xi= (\nabla\xi_1\ldots\nabla\xi_s)$, the transpose of the Jacobian of $\xi$.
\section{Preliminaries}
\label{aw_sec:prelim}
In this section we recall some basic definitions and results related to standard
nonlinear programming (NLP), as well as the weak stationarity concept proposed in
our previous work \cite{KrulikovskiRibeiroSachine20aX}.
Consider first the problem
\begin{equation}
\label{aw_prob:nlp}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize } & f(x)\\
{\rm subject\ to } & g(x)\leq 0, \\
& h(x)=0,
\end{array}
\end{equation}
where $f:\R^n\to\R$, $g:\R^n\to\R^m$ and $h:\R^n\to\R^p$ are continuously differentiable
functions. The feasible set of the problem (\ref{aw_prob:nlp}) is denoted by
\begin{equation}
\label{aw_feas_set}
\Omega=\{x\in\R^n\mid g(x)\leq 0, h(x)=0\}.
\end{equation}
\begin{definition}
We say that $x^*\in\Omega$ is a global solution of the problem (\ref{aw_prob:nlp}), that is,
a global minimizer of $f$ in $\Omega$, when $f(x^*)\leq f(x)$ for all $x\in\Omega$. If
$f(x^*)\leq f(x)$ for all $x\in\Omega$ such that $\|x-x^*\|\leq\delta$, for some constant
$\delta>0$, $x^*$ is said to be a local solution of the problem.
\end{definition}
A feasible point $x^*\in\Omega$ is said to be {\em stationary} for the problem (\ref{aw_prob:nlp})
if there exists a vector $\lambda=(\lambda^g,\lambda^h)\in\R_+^m\times\R^p$
(Lagrange multipliers) such that
\begin{subequations}
\begin{align}
\nabla f(x^*)+\sum_{i=1}^{m}\lambda_i^g\nabla g_i(x^*)
+\sum_{i=1}^{p}\lambda_i^h\nabla h_i(x^*)=0, & \label{aw_kkt_grad} \\
(\lambda^g)^Tg(x^*)=0. \label{aw_kkt_compl}
\end{align}
\end{subequations}
The function $L:\R^n\times\R^m\times\R^p\to\R$ given by
\begin{equation}
\label{aw_lagr:nlp}
L(x,\lambda^g,\lambda^h)=f(x)+(\lambda^g)^Tg(x)+(\lambda^h)^Th(x)
\end{equation}
is the \textit{Lagrangian} function associated with the problem (\ref{aw_prob:nlp}).
The conditions (\ref{aw_kkt_grad})--(\ref{aw_kkt_compl}) are known as Karush-Kuhn-Tucker (KKT)
conditions and, under certain qualification assumptions, are satisfied at a minimizer.
\subsection{Constraint qualifications}
\label{aw_sec:cq}
There are many constraint qualifications, that is, conditions under which every minimizer
satisfies KKT. In order to discuss some of them, let us recall the definition of cone, which
plays an important role in this context.
We say that a nonempty set $C\subset\R^n$ is a {\em cone} if $td\in{C}$ for all $t\geq 0$
and $d\in{C}$. Given a set $S\subset\R^n$, its {\em polar} is the cone
$$
S^\circ=\{p\in\R^n\mid p^Tx\leq 0,\ \forall x\in S\}.
$$
Associated with the feasible set of the problem (\ref{aw_prob:nlp}), we have
the {\em tangent cone}
$$
T_{\Omega}(\bar{x})=\left\{d\in\R^n\mid\exists(x^k)\subset\Omega\mbox{, }
(t_k)\subset\R_+ \mbox{ : }t_k\to 0 \mbox{ and }
\dfrac{x^k-\bar{x}}{t_k}\to d\right\}
$$
and the {\em linearized cone}
$$
D_{\Omega}(\bar{x})=\left\{d\in\R^n\mid\nabla g_i(\bar{x})^Td \leq 0, \;
i\in I_g(\bar{x})
\mbox{ and }\nabla h(\bar{x})^Td=0 \right\}.
$$
The following basic result says that we may ignore inactive constraints when
dealing with the tangent and linearized cones.
\begin{lemma}
\label{aw_lm:inactive}
Consider a feasible point $\bar{x}\in\Omega$, an index set $J\supset I_g(\bar{x})$
and
$$
\Omega'=\{x\in\R^n\mid g_{i}(x)\leq 0,\, i\in J, \; h(x)=0\}.
$$
Then, $T_{\Omega}(\bar{x})=T_{\Omega'}(\bar{x})$ and
$D_{\Omega}(\bar{x})=D_{\Omega'}(\bar{x})$.
\end{lemma}
\beginproof
Note first that $\bar{x}\in\Omega'$ since $\Omega\subset\Omega'$. Moreover, since
$g_i(\bar{x})<0$ for $i\notin J$, there exists $\delta>0$ such that
$B(\bar{x},\delta)\cap\Omega'=B(\bar{x},\delta)\cap\Omega$. Thus,
$T_{\Omega'}(\bar{x})=T_{\Omega}(\bar{x})$ because the conditions $t_k\to 0$ and
$(x^k-\bar{x})/t_k\to d$ imply that $x^k\to\bar{x}$.
The equality between the linearized cones is straightforward, as the active
indices corresponding to $\Omega$ and $\Omega'$ coincide.
\endproof
Now we relate the cones of feasible sets when some variables do not appear in the constraints.
\begin{lemma}
\label{aw_lm:withoutx}
Consider the general feasible set $\Omega$, defined in (\ref{aw_feas_set}), and the set
$$
\Omega'=\{(x,y)\in\R^n\times\R^m\mid g(x)\leq 0, h(x)=0\}.
$$
Given a feasible point $(\bar{x},\bar{y})\in\Omega'$, we have
$$
T_{\Omega'}(\bar{x},\bar{y})=T_{\Omega}(\bar{x})\times\R^m
\quad\mbox{and}\quad
D_{\Omega'}(\bar{x},\bar{y})=D_{\Omega}(\bar{x})\times\R^m.
$$
As a consequence,
$$
T_{\Omega'}^\circ(\bar{x},\bar{y})=T_{\Omega}^\circ(\bar{x})\times\{0\}
\quad\mbox{and}\quad
D_{\Omega'}^\circ(\bar{x},\bar{y})=D_{\Omega}^\circ(\bar{x})\times\{0\}.
$$
\end{lemma}
\beginproof
The relation between the tangent cones follows directly from the definition.
Moreover, if $\zeta(x,y)=g(x)$ and $\xi(x,y)=h(x)$ represent the constraints that
define $\Omega'$ and $d=(\alpha,\beta)$, then
$$
\nabla\zeta_i(x,y)^Td=\nabla g_i(x)^T\alpha\quad\mbox{and}\quad
\nabla\xi_j(x,y)^Td=\nabla h_j(x)^T\alpha,
$$
which gives the second claim. Finally, the last statement of the lemma follows from
the fact that $(S\times\R^m)^\circ=S^\circ\times\{0\}$ for any set $S\subset\R^n$.
\endproof
Two of the weakest constraint qualifications are defined below.
\begin{definition}
\label{aw_acq_gcq}
We say that Abadie constraint qualification (ACQ) holds at $\bar{x}\in\Omega$ if
$T_{\Omega}(\bar{x})=D_{\Omega}(\bar{x})$. If
$T_{\Omega}^\circ(\bar{x})=D_{\Omega}^\circ(\bar{x})$, we say that Guignard constraint
qualification (GCQ) holds at $\bar{x}$.
\end{definition}
In the following lemma we analyze GCQ for simple complementarity constraints.
\begin{lemma}
\label{aw_lm:compl1}
Consider the set
$$
\Omega=\{(x,y)\in\R^n\times\R^n\mid y\geq 0, x*y=0\}.
$$
Given $(\bar{x},\bar{y})\in\Omega$, there holds
$T_{\Omega}^\circ(\bar{x},\bar{y})=D_{\Omega}^\circ(\bar{x},\bar{y})$.
\end{lemma}
\beginproof
Denote the constraints that define $\Omega$ by
$\zeta(x,y)=-y$ and $\xi(x,y)=x*y$. Given $d=(u,v)\in D_{\Omega}(\bar{x},\bar{y})$,
we claim that the vectors $d_1=(u,0)$ and $d_2=(0,v)$ belong to
$T_{\Omega}(\bar{x},\bar{y})$. Indeed, since
$$
\bar{y}_iu_i+\bar{x}_iv_i=
\nabla\xi_i(\bar{x},\bar{y})^Td=0
$$
for all $i=1,\ldots,n$, we have $u_{I_{0>}}=0$ and $v_{I_{\pm0}}=0$, where we used
the simplified notation $I_{0>}=I_{0>}(\bar{x},\bar{y})$ and
$I_{\pm 0}=I_{\pm 0}(\bar{x},\bar{y})$. Thus, the sequences
$t_k={1}/{k}$ and $(x^k,y^k)=(\bar{x}+t_ku,\bar{y})$ satisfy $y^k\geq 0$, $x^k_{I_{0>}}=0$,
$y^k_{I_{\pm0}\cup I_{00}}=0$, which means that $(x^k,y^k)\subset\Omega$, and
$$
\dfrac{(x^k,y^k)-(\bar{x},\bar{y})}{t_k}\to{d_1},
$$
proving that $d_1\in T_{\Omega}(\bar{x},\bar{y})$.
Now, defining $(z^k,w^k)=(\bar{x},\bar{y}+t_kv)$, we have
$$
\dfrac{(z^k,w^k)-(\bar{x},\bar{y})}{t_k}\to{d_2}\quad\mbox{and}\quad z^k*w^k=0.
$$
Furthermore, for $i\in I_{0>}$ we have $w_i^k>0$ for all sufficiently large $k$,
and for $i\in I_{\pm 0}$ we have $w_i^k=t_kv_i=0$, since $v_{I_{\pm 0}}=0$.
On the other hand, if $i\in I_{00}$, then the constraint $\zeta_i$ is active and hence,
$
-v_i=\nabla\zeta_i(\bar{x},\bar{y})^Td\leq 0,
$
giving $w_i^k=\bar{y}_i+t_kv_i=t_kv_i\geq 0$. Thus, $(z^k,w^k)\subset\Omega$,
which yields ${d_2}\in T_{\Omega}(\bar{x},\bar{y})$, proving the claim.
Finally, given $p\in T_{\Omega}^\circ(\bar{x},\bar{y})$ we conclude
that $p^Td=p^Td_1+p^Td_2\leq 0$, proving that $p\in D_{\Omega}^\circ(\bar{x},\bar{y})$.
\endproof
\subsection{Sequential optimality conditions for standard NLP}
\label{aw_sec:akkt}
The goal of this section is to present some well known approximate optimality conditions
for nonlinear constrained optimization
\cite{AndreaniHaeserMartinez,AndreaniMartinezRamosSilva16,AndreaniMartinezRamosSilva18,AndreaniMartinezSvaiter,BirginMartinez,MartinezSvaiter}.
\begin{definition}
\label{aw_def:akkt}
Let $\bar{x}\in\R^n$ be a feasible point for the problem (\ref{aw_prob:nlp}). We say that
$\bar{x}$ is an \textnormal{Approximate KKT} (AKKT) point if
there exist sequences $(x^k)\subset{\mathbb R}^n$ and
$(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k}\big)\subset\R_+^m\times\R^p$
such that $x^k\to \bar{x}$,
\begin{subequations}
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})\to 0, \label{aw_grad1xxx} \\
\min\{-g(x^k),\lambda^{g,k}\}\to 0. \label{aw_compl1xxx}
\end{align}
\end{subequations}
\end{definition}
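To make Definition \ref{aw_def:akkt} concrete, recall the classical instance of
minimizing $f(x)=x$ subject to $g(x)=x^2\leq 0$: its unique feasible point
$\bar{x}=0$ is a global minimizer that is not KKT, since $\nabla f(0)=1$ cannot be
cancelled by $\lambda\nabla g(0)=0$, but it is AKKT with $x^k=-1/k$ and
$\lambda^{g,k}=k/2$. The short Python script below, a numerical sketch added here
for illustration (it is not part of the original development), evaluates the AKKT
residuals (\ref{aw_grad1xxx})--(\ref{aw_compl1xxx}) along these sequences.
\begin{verbatim}
# Toy NLP: minimize f(x) = x  subject to  g(x) = x^2 <= 0.
for k in [10, 100, 1000]:
    x = -1.0 / k                   # x^k -> 0 (feasibility is not required)
    lam = k / 2.0                  # lambda^{g,k} >= 0
    grad_L = 1.0 + lam * 2.0 * x   # gradient residual: here exactly 0
    compl = min(-x**2, lam)        # min{-g(x^k), lambda^{g,k}} = -1/k^2 -> 0
    print(k, grad_L, compl)
\end{verbatim}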
Below we present two conditions stronger than AKKT.
\begin{definition}
\label{aw_def:cakkt}
Let $\bar{x}\in\R^n$ be a feasible point for the problem (\ref{aw_prob:nlp}). We say that
$\bar{x}$ is a \textnormal{Complementary Approximate KKT} (CAKKT) point if there
exist sequences $(x^k)\subset{\mathbb R}^n$ and
$(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k}\big)\subset\R_+^m\times\R^p$ such
that $x^k\to \bar{x}$,
\begin{subequations}
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})\to 0,
\label{aw_gradcakkt} \\ \lambda^{g,k}*g(x^k)\to 0\quad\mbox{and}\quad
\lambda^{h,k}*h(x^k)\to 0. \label{aw_complcakkt}
\end{align}
\end{subequations}
\end{definition}
\begin{remark}
\label{aw_rm:compl4_compl1}
Note that if $(\alpha^k)\subset{\mathbb R_+}$ and $(\beta^k)\subset{\mathbb R}$ are
sequences satisfying $\alpha^k\beta^k\to 0$ and $\beta^k\to\bar\beta\leq 0$, then
$\min\{-\beta^k,\alpha^k\}\to 0$. Indeed, if $\bar\beta<0$, we have $\alpha^k\to 0$ and
hence $\alpha^k<-\beta^k$ for all $k$ sufficiently large, giving
$\min\{-\beta^k,\alpha^k\}=\alpha^k\to 0$. On the other hand, if $\bar\beta=0$, we also
conclude that $\min\{-\beta^k,\alpha^k\}\to 0$, since $\alpha^k\geq 0$. This means that
condition (\ref{aw_complcakkt}) implies (\ref{aw_compl1xxx}), and thus CAKKT implies AKKT.
\end{remark}
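The argument in Remark \ref{aw_rm:compl4_compl1} is easy to check numerically. The
Python sketch below (an illustration added here; the particular sequences are our
own choice) evaluates $\min\{-\beta^k,\alpha^k\}$ in the two cases $\bar\beta<0$
and $\bar\beta=0$, the latter with an unbounded sequence $(\alpha^k)$.
\begin{verbatim}
import numpy as np

k = np.arange(1.0, 1001.0)
# Case bar(beta) < 0: alpha^k * beta^k -> 0 forces alpha^k -> 0.
alpha, beta = 1.0 / k, -1.0 + 1.0 / k
print(np.minimum(-beta, alpha)[-1])   # = alpha^k -> 0
# Case bar(beta) = 0: the min vanishes even though alpha^k -> infinity.
alpha, beta = k, -1.0 / k**3
print(np.minimum(-beta, alpha)[-1])   # = -beta^k = 1/k^3 -> 0
\end{verbatim}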
Another known sequential optimality condition relates to the sign of the multipliers.
\begin{definition}
\label{aw_def:pakkt}
Let $\bar{x}\in\R^n$ be a feasible point for the problem (\ref{aw_prob:nlp}). We say that
$\bar{x}$ is a \textnormal{Positive Approximate KKT} (PAKKT) point if there
exist sequences $(x^k)\subset{\mathbb R}^n$ and
$(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k}\big)\subset\R_+^m\times\R^p$ such
that $x^k\to \bar{x}$,
\begin{subequations}
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})\to 0,
\label{aw_gradpakkt} \\
\min\{-g(x^k),\lambda^{g,k}\}\to 0, \label{aw_complpakkt} \\
\lambda_i^{g,k}g_i(x^k)>0 \mbox{ if } \displaystyle\mathop{\rm lim\, sup}_{k\to\infty}
\frac{\lambda_i^{g,k}}{\delta_k}>0, \label{aw_pospakkt1} \\
\lambda_j^{h,k}h_j(x^k)>0 \mbox{ if } \displaystyle\mathop{\rm lim\, sup}_{k\to\infty}
\frac{|\lambda_j^{h,k}|}{\delta_k}>0, \label{aw_pospakkt2}
\end{align}
\end{subequations}
where $\delta_k= \|(1,\lambda^k)\|_\infty$.
\end{definition}
As is well known in the literature, all three of the sequential conditions above are
necessary optimality conditions without any constraint qualification.
\subsection{Weak stationarity for MPCaC}
\label{aw_sec:sequential}
In this section we recall the weaker stationarity concept and some related results,
established in \cite{KrulikovskiRibeiroSachine20aX}, for MPCaC. As we have seen in that work,
except in special cases, e.g., when $X$ is given by linear constraints, constraint
qualifications fail to hold for the relaxed problem (\ref{aw_prob:relax}).
So, the standard KKT conditions are not necessary optimality conditions, a fact that in turn
justifies the study of weaker conditions.
For ease of presentation consider the functions $\theta:\R^n\to\R$,
$G,H,\tilde{H}:\R^n\to\R^n$ given by
$$
\theta(y)=n-\alpha-e^Ty\,,\quad G(x)=x\,,\quad H(y)=-y\quad\mbox{and}\quad\tilde{H}(y)=y-e.
$$
Then we can rewrite the relaxed problem (\ref{aw_prob:relax}) as
\begin{equation}
\label{aw_prob:relax1}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y} & f(x) \\
{\rm subject\ to } & g(x)\leq 0, h(x)=0, \\
& \theta(y)\leq 0, \\
& H(y)\leq 0, \tilde{H}(y)\leq 0, \\
& G(x)*H(y)=0.
\end{array}
\end{equation}
Given a feasible point $(\bar{x},\bar{y})$ for the problem (\ref{aw_prob:relax1}) and a set of
indices $I$ such that
\begin{equation}
\label{aw_indexI}
I_{0+}(\bar{x},\bar{y})\cup I_{01}(\bar{x},\bar{y})\subset I\subset I_0(\bar{x}),
\end{equation}
we have that $i\in I$ or $i\in I_{00}(\bar{x},\bar{y})\cup I_{\pm 0}(\bar{x},\bar{y})$
for all $i\in\{1,\ldots,n\}$. Thus, $G_i(\bar{x})=0$ or $H_i(\bar{y})=0$.
This suggests considering an auxiliary problem, obtained by removing the problematic constraint
$G(x)*H(y)=0$ and including other constraints that ensure the null product. We then define
the $I$-{\em Tightened} Nonlinear Problem at $(\bar{x},\bar{y})$ by
\begin{equation}
\label{aw_prob:tight}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y} & f(x) \\
{\rm subject\ to } & g(x)\leq 0, h(x)=0, \\
& \theta(y)\leq 0, \\
& G_i(x)=0,\; i \in I, \\
& H_i(y)\leq 0,\; i\in I_{0+}(\bar{x},\bar{y})\cup I_{01}(\bar{x},\bar{y}), \\
& H_i(y)=0,\; i\in I_{00}(\bar{x},\bar{y})\cup I_{\pm0}(\bar{x},\bar{y}), \\
& \tilde{H}(y)\leq 0.
\end{array}
\end{equation}
This problem will also be denoted by TNLP$_{I}(\bar{x},\bar{y})$
and, when there is no risk of ambiguity, it will be referred to simply
as the {\em tightened problem}. Note that we tighten only
those constraints that are involved in the complementarity constraint $G(x)*H(y)=0$, by
incorporating the equality constraints $G_i$'s and converting the active inequalities
$H_i$'s into equalities.
The following lemma is a straightforward consequence of the definition of the tightened
problem TNLP$_{I}(\bar{x},\bar{y})$.
\begin{lemma}
\label{aw_lm:tnlp1}
Consider the tightened problem (\ref{aw_prob:tight}). Then,
\begin{enumerate}
\item\label{aw_inactiveH} the inequalities defined by $H_i$,
$i\in I_{0+}(\bar{x},\bar{y})\cup I_{01}(\bar{x},\bar{y})$, are inactive at $(\bar{x},\bar{y})$;
\item\label{aw_xybar_tnlp} $(\bar{x},\bar{y})$ is feasible for TNLP$_{I}(\bar{x},\bar{y})$;
\item every feasible point of (\ref{aw_prob:tight}) is feasible for (\ref{aw_prob:relax1});
\item if $(\bar{x},\bar{y})$ is a global (local) minimizer of (\ref{aw_prob:relax1}),
then it is also a global (local) minimizer of TNLP$_{I}(\bar{x},\bar{y})$.
\end{enumerate}
\end{lemma}
The Lagrangian function associated with TNLP$_{I}(\bar{x},\bar{y})$ is the function
$$
\mathcal{L}_{I}:\R^n\times\R^n\times\R^m\times\R^p\times\R\times\R^{|I|}
\times\R^n\times\R^n\to\R
$$
given by
\begin{align*}
\mathcal{L}_{I}(x,y,\lambda^g,\lambda^h,\lambda^{\theta},\lambda_I^G,\lambda^H,
\lambda^{\tilde{H}}) = & \, L(x,\lambda^g,\lambda^h)+
\lambda^{\theta}\theta(y)+(\lambda_I^G)^TG_{I}(x) \\
& +(\lambda^H)^TH(y)+(\lambda^{\tilde{H}})^T\tilde{H}(y),
\end{align*}
where $L$ is the Lagrangian defined in (\ref{aw_lagr:nlp}).
Note that the tightened problem, and hence its Lagrangian, depends on the index set $I$, which
in turn depends on the point $(\bar{x},\bar{y})$. It should be also noted that
\begin{equation}
\label{aw_nablaLIx}
\nabla_{x,y}\mathcal{L}_{I}(x,y,\lambda)=
\left(\begin{array}{c} \nabla_{x}L(x,\lambda^g,\lambda^h)+\sum_{i\in I}\lambda_i^Ge_i
\vspace{3pt} \\ -\lambda^{\theta}e-\lambda^H+\lambda^{\tilde{H}} \end{array}\right).
\end{equation}
The weaker stationarity concept proposed in \cite{KrulikovskiRibeiroSachine20aX}
is presented below.
\begin{definition}
\label{aw_def:wstat_xy}
Consider a feasible point $(\bar{x},\bar{y})$ of the relaxed problem (\ref{aw_prob:relax1})
and a set of indices $I$ satisfying (\ref{aw_indexI}). We say that $(\bar{x},\bar{y})$
is $I$-weakly stationary ($W_{I}$-stationary) for this problem if there exists a vector
$$
\lambda=(\lambda^g,\lambda^h,\lambda^{\theta},\lambda_I^G,\lambda^H,\lambda^{\tilde{H}})\in
\R_+^m\times\R^p\times\R_+\times\R^{|I|}\times\R^n\times\R_+^n
$$
such that
\begin{enumerate}
\item\label{aw_lagrangian0} $\nabla_{x,y}{\cal{L}}_{I}(\bar{x},\bar{y},\lambda)=0$;
\item\label{aw_g0} $(\lambda^g)^Tg(\bar{x})=0$;
\item\label{aw_theta0} $\lambda^{\theta}\theta(\bar{y})=0$;
\item\label{aw_H0} $\lambda^H_i=0$ for all $i\in I_{0+}(\bar{x},\bar{y})
\cup I_{01}(\bar{x},\bar{y})$;
\item\label{aw_htilde0} $(\lambda^{\tilde{H}})^T\tilde{H}(\bar{y})=0$.
\end{enumerate}
\end{definition}
\begin{remark}
\label{aw_rm:wstat}
The first item of Definition \ref{aw_def:wstat_xy} is precisely the gradient
KKT condition for the tightened problem (\ref{aw_prob:tight}). Items (\ref{aw_g0}), (\ref{aw_theta0})
and (\ref{aw_htilde0}) represent the standard KKT complementarity conditions for the
inequality constraints $g(x)\leq 0$, $\theta(y)\leq 0$ and $\tilde{H}(y)\leq 0$,
respectively, of the tightened problem. Item (\ref{aw_H0}) also represents KKT
complementarity conditions for the constraints $H_i(y)\leq 0$,
$i\in I_{0+}(\bar{x},\bar{y})\cup I_{01}(\bar{x},\bar{y})$, in view of
Lemma \ref{aw_lm:tnlp1}(\ref{aw_inactiveH}).
\end{remark}
As an immediate consequence of Remark \ref{aw_rm:wstat} we have the following characterization
of $W_{I}$-stationarity for the relaxed problem in terms of stationarity for the
tightened problem.
\begin{proposition}
\label{aw_prop:wstat_kkttnlp}
Let $(\bar{x},\bar{y})$ be a feasible point of the relaxed problem (\ref{aw_prob:relax1}). Then,
$(\bar{x},\bar{y})$ is $W_{I}$-stationary if and only if it is a KKT point for the tightened
problem (\ref{aw_prob:tight}).
\end{proposition}
Note that in view of Proposition \ref{aw_prop:wstat_kkttnlp} we could have defined
$W_{I}$-stationarity simply as KKT for the tightened problem (\ref{aw_prob:tight}). Nevertheless,
we prefer the formulation of Definition \ref{aw_def:wstat_xy} in order to state condition (\ref{aw_H0})
explicitly, instead of leaving it hidden in the complementarity condition. This way of stating weak
stationarity is also similar to that used in the MPCC setting, see
\cite{AndreaniHaeserSecchinSilva,FlegelKanzow}.
In the next result we justify why Definition \ref{aw_def:wstat_xy} is considered a weaker
stationarity concept for the relaxed problem.
\begin{theorem}
\label{aw_th:kkt_wstat}
Suppose that $(\bar{x},\bar{y})$ is a KKT point for the relaxed problem (\ref{aw_prob:relax1}).
Then $(\bar{x},\bar{y})$ is $W_{I}$-stationary.
\end{theorem}
At this point we could ask if $W_{I}$-stationarity, being weaker than KKT, is a
necessary optimality condition. That is, can we ensure that a minimizer of the relaxed
problem is $W_{I}$-stationary for some index set $I$ satisfying (\ref{aw_indexI})?
The answer is no, as illustrated in the following example.
\begin{example}
\label{aw_ex:minnotWI}
Consider the MPCaC and the associated relaxed problem given below.
$$
\begin{array}{lr}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x\in\R^3} & x_1 \\
{\rm subject\ to } & (1-x_1)^3+x_3^2\leq 0, \\
& \|x\|_0\leq 2, \\
& \\
&
\end{array}
\hspace{.5cm}
&
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y\in\R^3} & x_1 \\
{\rm subject\ to } & (1-x_1)^3+x_3^2\leq 0, \\
& y_1+y_2+y_3\geq 1, \\
& x_iy_i=0,\; i=1,2,3, \\
& 0\leq y_i \leq 1, \; i=1,2,3.
\end{array}
\end{array}
$$
The point $x^*=(1,0,0)$ is a global solution of MPCaC and $(x^*,y^*)$, with
$y^*=(0,1,0)$, is a global solution of the relaxed problem.
For the points $x^*$ and $(x^*,y^*)$ we have
$$
I_{0}=\{2,3\},\;\; I_{01}=\{2\},\;\; I_{\pm 0}=\{1\},\;\; I_{00}=\{3\}
\quad\mbox{and}\quad I_{0+}=\emptyset.
$$
So, there are two choices for $I$ that satisfy (\ref{aw_indexI}): $I'=\{2\}$ or $I''=\{2,3\}$.
Let us analyze each one of them.
For $I=I'$ we have
\begin{align*}
\nabla_{x}L(x^*,\lambda^g)+\sum_{i\in I}\lambda_i^Ge_i=
\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right)+
\left(\begin{array}{c} 0 \\ \lambda_2^G \\ 0 \end{array}\right).
\end{align*}
Since this expression cannot vanish (its first component equals 1 for any choice of
multipliers), taking into account (\ref{aw_nablaLIx}), we see that
the pair $(x^*,y^*)$ is not $W_{I}$-stationary.
Now, for $I=I''$ we have
\begin{align*}
\nabla_{x}L(x^*,\lambda^g)+\sum_{i\in I}\lambda_i^Ge_i=
\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right)+
\left(\begin{array}{c} 0 \\ \lambda_2^G \\ \lambda_3^G \end{array}\right).
\end{align*}
Again, the above expression does not vanish and therefore $(x^*,y^*)$ is not $W_{I}$-stationary.
\end{example}
In view of Example \ref{aw_ex:minnotWI} and motivated to find a necessary optimality
condition for MPCaC problems, we propose in the next section the concept of
approximate weak stationarity, which will be satisfied at every minimizer,
independently of any constraint qualification.
\section{Sequential optimality conditions for MPCaC}
\label{aw_sec:aw}
In order to define our sequential optimality condition, consider the function
$$
\mathcal{L}:\R^n\times\R^n\times\R^m\times\R^p\times\R\times\R^n
\times\R^n\times\R^n\to\R
$$
defined by
\begin{align*}
\mathcal{L}(x,y,\lambda^g,\lambda^h,\lambda^{\theta},\lambda^G,\lambda^H,
\lambda^{\tilde{H}}) = & \, L(x,\lambda^g,\lambda^h)+
\lambda^{\theta}\theta(y)+(\lambda^G)^TG(x) \\
& +(\lambda^H)^TH(y)+(\lambda^{\tilde{H}})^T\tilde{H}(y),
\end{align*}
where $L$ is the Lagrangian defined in (\ref{aw_lagr:nlp}).
Note that ${\cal L}$ resembles the Lagrangian ${\cal L}_I$, associated with
TNLP$_I(\bar{x},\bar{y})$. The only difference is that the term
$(\lambda_I^G)^TG_{I}(x)$ was replaced by $(\lambda^G)^TG(x)$. Here it will be
convenient to see that
\begin{align}
\nabla_{x,y}\mathcal{L}(x,y,\lambda)=
\left(\begin{array}{c} \nabla_{x}L(x,\lambda^g,\lambda^h)+
\displaystyle\sum_{i=1}^{n}\lambda_i^G\nabla G_i(x) \\ \lambda^{\theta}\nabla\theta(y)
+\displaystyle\sum_{i=1}^{n}\lambda_i^H \nabla H_i(y)
+\sum_{i=1}^{n}\lambda_i^{\tilde{H}}\nabla \tilde{H}_i(y)
\end{array}\right). \label{aw_nablaL0}
\end{align}
\begin{definition}
\label{aw_def:aw}
Let $(\bar{x},\bar{y})$ be a feasible point of the relaxed problem (\ref{aw_prob:relax1}).
We say that $(\bar{x},\bar{y})$ is Approximately Weakly stationary ($AW$-stationary)
for this problem if there exist sequences $(x^k,y^k)\subset\R^n\times\R^n$ and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},\lambda^{G,k},
\lambda^{H,k},\lambda^{\tilde{H},k}\big)\subset\R_+^m\times\R^p\times\R_+
\times\R^n\times\R^n\times\R_+^n
$$
such that
\begin{enumerate}
\item\label{aw_xkyk} $(x^k,y^k)\to(\bar{x},\bar{y})$; \vspace{2pt}
\item\label{aw_gradLtnlp} $\nabla_{x,y}{\cal{L}}(x^k,y^k,\lambda^k)\to 0$; \vspace{2pt}
\item\label{aw_glbdg} $\min\{-g(x^k),\lambda^{g,k}\}\to 0$; \vspace{2pt}
\item\label{aw_thlbdth} $\min\{-\theta(y^k),\lambda^{\theta,k}\}\to 0$;
\vspace{2pt}
\item\label{aw_GlbdG} $\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$ for all
$i=1,\ldots,n$; \vspace{2pt}
\item\label{aw_HlbdH} $\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0$ for all
$i=1,\ldots,n$; \vspace{2pt}
\item\label{aw_HtillbdHtil} $\min\{-\tilde{H}(y^k),\lambda^{\tilde{H},k}\}\to 0$.
\end{enumerate}
\end{definition}
\begin{remark}
\label{aw_rm:awstat1}
Definition~\ref{aw_def:aw} resembles the AKKT condition, where (\ref{aw_glbdg}), (\ref{aw_thlbdth})
and (\ref{aw_HtillbdHtil}) represent the approximate complementarity conditions for
the inequality constraints $g(x)\leq 0$, $\theta(y)\leq 0$ and $\tilde{H}(y)\leq 0$,
respectively, and (\ref{aw_HlbdH}) is related to the last complementarity condition in
W$_I$-stationarity. As a matter of fact, $AW$-stationarity is equivalent to AKKT for
TNLP$_{I_0}$, as we shall see ahead in Theorem \ref{aw_th:aw_akkt}.
\end{remark}
Let us review Example \ref{aw_ex:minnotWI} in light of the above definition. We have seen
that the minimizer is not $W_{I}$-stationary, but now we can see that it is
$AW$-stationary.
\begin{example}
\label{aw_ex:minnotWI_AW}
Consider the problem given in Example \ref{aw_ex:minnotWI}.
We claim that the global solution of the relaxed problem, $(x^*,y^*)$, is
$AW$-stationary. Indeed,
consider the sequences $(x^k,y^k)\subset\R^3\times\R^3$ and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{\theta,k},\lambda^{G,k},
\lambda^{H,k},\lambda^{\tilde{H},k}\big)\subset\R_+^3\times\R_+\times\R^3\times\R^3\times\R_+^3
$$
defined by $x^k=(1+1/k,0,0)$, $y^k=(0,1,0)$, $\lambda^{g,k}=k^2/3$, $\lambda^{\theta,k}=0$ and
$\lambda^{G,k}=\lambda^{H,k}=\lambda^{\tilde{H},k}=0$. Then, we have $(x^k,y^k)\to(x^*,y^*)$
and
\begin{align*}
\nabla_{x}L(x^k,\lambda^{g,k})+\sum_{i=1}^{n}\lambda_i^{G,k}\nabla G_i(x^k)=
\left(\begin{array}{c} 1-3\lambda^{g,k}(1-x_1^k)^2 \\ 0 \\ 2\lambda^{g,k}x_3^k
\end{array}\right)=0.
\end{align*}
So, in view of (\ref{aw_nablaL0}), we obtain the first two items of Definition \ref{aw_def:aw}.
Now, note that
$g(x^k)\to g(x^*)=0$ and $\theta(y^k)\to\theta(y^*)=0$,
which in turn imply that
$$
\min\{-g(x^k),\lambda^{g,k}\}\to 0\quad\mbox{and}\quad\min\{-\theta(y^k),\lambda^{\theta,k}\}\to 0,
$$
giving items (\ref{aw_glbdg}) and
(\ref{aw_thlbdth}). The relation $\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$ is immediate.
Besides, since $\tilde{H}(y^k)\to\tilde{H}(y^*)\leq 0$, $\lambda^{\tilde{H},k}=0$,
$H(y^k)\to H(y^*)\leq 0$ and $\lambda^{H,k}=0$,
we have $\min\{-\tilde{H}(y^k),\lambda^{\tilde{H},k}\}\to 0$ and
$\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0$, obtaining items
(\ref{aw_GlbdG}), (\ref{aw_HlbdH}) and (\ref{aw_HtillbdHtil}).
\end{example}
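These computations are easily verified numerically. The Python sketch below (added
for illustration, using the data of Example \ref{aw_ex:minnotWI}) evaluates the
$x$-part of the gradient in item (\ref{aw_gradLtnlp}) and the complementarity
residual of item (\ref{aw_glbdg}) along the proposed sequences; the remaining items
hold trivially, since the corresponding multipliers are identically zero.
\begin{verbatim}
import numpy as np

def g(x):                            # g(x) = (1 - x1)^3 + x3^2
    return (1.0 - x[0])**3 + x[2]**2

for k in [10, 100, 1000]:
    x = np.array([1.0 + 1.0 / k, 0.0, 0.0])
    lam_g = k**2 / 3.0
    grad_g = np.array([-3.0 * (1.0 - x[0])**2, 0.0, 2.0 * x[2]])
    grad_x = np.array([1.0, 0.0, 0.0]) + lam_g * grad_g    # = 0
    print(k, np.linalg.norm(grad_x), min(-g(x), lam_g))    # 0 and 1/k^3
\end{verbatim}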
Now we shall prove that the above example reflects a general result, that is,
every minimizer of an MPCaC problem is $AW$-stationary.
We start the theoretical analysis with two simple facts.
The first one says that the expression $\sum_{i=1}^{n}\lambda_i^{G,k}\nabla G_i(x^k)$
could be replaced by
$\sum_{i\in I_0}\lambda_i^{G,k}\nabla G_i(x^k)$. The second fact states
that $AW$-stationarity is weaker than $W_{I}$-stationarity, and consequently weaker
than KKT, in view of Theorem~\ref{aw_th:kkt_wstat}.
\begin{lemma}
\label{aw_lm:awI0}
Let $(\bar{x},\bar{y})$ be an $AW$-stationary point for the relaxed
problem (\ref{aw_prob:relax1}), with corresponding sequences $(x^k,y^k)$ and
$(\lambda^k)$. Then,
$$
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})+
\sum_{i\in I_0}\lambda_i^{G,k}\nabla G_i(x^k)\to 0.
$$
\end{lemma}
\beginproof
In view of (\ref{aw_nablaL0}), we have, in particular,
\begin{equation}
\label{aw_eq_awI01}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})+
\sum_{i=1}^{n}\lambda_i^{G,k}\nabla G_i(x^k)\to 0.
\end{equation}
For $i\notin I_0$, we have $\lim_{k\to\infty}G_i(x^k)=G_i(\bar{x})=\bar{x}_i\neq 0$.
Therefore, we can assume without loss of generality that there exists $\epsilon>0$
such that $|G_i(x^k)|\geq\epsilon$ for all $k$. Since
$\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$, we obtain
$|\lambda_i^{G,k}|\to 0$ and hence,
$$
\sum_{i\notin I_0}\lambda_i^{G,k}\nabla G_i(x^k)\to 0.
$$
By subtracting this from (\ref{aw_eq_awI01}), we conclude the proof.
\endproof
\begin{lemma}
\label{aw_lm:ws_aw}
Let $(\bar{x},\bar{y})$ be a $W_{I}$-stationary point for the relaxed
problem (\ref{aw_prob:relax1}), in the sense of Definition \ref{aw_def:wstat_xy}. Then
$(\bar{x},\bar{y})$ is $AW$-stationary for this problem.
\end{lemma}
\beginproof
Consider a vector
$$
\lambda=(\lambda^g,\lambda^h,\lambda^{\theta},\lambda_I^G,\lambda^H,
\lambda^{\tilde{H}})\in\R_+^m\times\R^p\times\R_+\times\R^{|I|}\times\R^n\times\R_+^n
$$
satisfying Definition \ref{aw_def:wstat_xy}. Then, the (constant) sequences
$(x^k,y^k)\subset\R^n\times\R^n$ and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},\lambda^{G,k},
\lambda^{H,k},\lambda^{\tilde{H},k}\big)\subset\R_+^m\times\R^p\times\R_+
\times\R^n\times\R^n\times\R_+^n,
$$
defined by
$$
(x^k,y^k)=(\bar{x},\bar{y})\,,\ \big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},
\lambda_I^{G,k},\lambda^{H,k},\lambda^{\tilde{H},k}\big)=(\lambda^g,\lambda^h,
\lambda^{\theta},\lambda_I^G,\lambda^H,\lambda^{\tilde{H}})
$$
and $\lambda_i^{G,k}=0$ for $i\notin I$ and $k\in\mathbb{N}$, satisfy
Definition \ref{aw_def:aw}.
\endproof
\begin{remark}
\label{aw_rm:awstat2}
We point out here that, in contrast to W$_I$-stationarity, which depends on a
conveniently chosen set $I$, our sequential optimality condition is independent of any such set.
This is a desirable feature since $AW$-stationarity has a certain amount of algorithmic
appeal. In practice, one is able to use such conditions as a stopping criterion for
an algorithm designed to solve the MPCaC problem.
\end{remark}
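A minimal Python sketch of such a stopping test is given below. It is an
illustration added here: the function name and the way the iterate, constraint
values and multipliers are passed are our own choices, not prescribed by the
theory. It returns the largest residual among items
(\ref{aw_gradLtnlp})--(\ref{aw_HtillbdHtil}) of Definition \ref{aw_def:aw}; the
convergence of the iterates themselves (item (\ref{aw_xkyk})) is monitored
separately by the algorithm.
\begin{verbatim}
import numpy as np

def aw_residual(grad_L_xy, g_x, theta_y, G_x, H_y, Htil_y,
                lam_g, lam_theta, lam_G, lam_H, lam_Htil):
    # Residuals of items 2-7 of the AW-stationarity definition,
    # evaluated at the current iterate and multiplier estimates.
    r = [np.linalg.norm(grad_L_xy, np.inf),
         np.linalg.norm(np.minimum(-g_x, lam_g), np.inf),
         abs(min(-theta_y, lam_theta)),
         np.linalg.norm(np.minimum(np.abs(G_x), np.abs(lam_G)), np.inf),
         np.linalg.norm(np.minimum(-H_y, np.abs(lam_H)), np.inf),
         np.linalg.norm(np.minimum(-Htil_y, lam_Htil), np.inf)]
    return max(r)

# A solver may declare approximate AW-stationarity once
# aw_residual(...) <= tol, for a prescribed tolerance tol.
\end{verbatim}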
Before proving our main sequential optimality results, let us see some preliminary
lemmas. To this end, consider the augmented problem
\begin{equation}
\label{aw_prob:augm}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y,w} & f(x) \\
{\rm subject\ to } & g(x)\leq 0, h(x)=0, \\
& \theta(y)\leq 0, \\
& w^G-G(x)=0,\, w^H+H(y)=0, \\
& \tilde{H}(y)\leq 0, \\
& w\in W,
\end{array}
\end{equation}
where $W=\{w=\big(w^G,w^H\big)\in\R^n\times\R_+^n\mid w^G*w^H=0\}$.
This problem will be crucial in the analysis. In the next two lemmas we establish
the equivalence between the relaxed problem (\ref{aw_prob:relax1}) and this augmented
problem. Moreover, there is a suitable reason to write the constraints
$H(y)\leq 0$ and $G(x)*H(y)=0$ of (\ref{aw_prob:relax1}) in the format
$w\in W$. Such a strategy will enable us to apply Lemma \ref{aw_lm:compl1}
to obtain Guignard constraint qualification for an auxiliary problem ahead.
\begin{lemma}
\label{aw_lm:sol_relax_sol_augm}
Let $(x^*,y^*)$ be a local (global) minimizer of the relaxed problem (\ref{aw_prob:relax1}).
Given $w^*\in\R^n\times\R^n$, if the point $(x^*,y^*,w^*)$ is
feasible for the augmented problem~(\ref{aw_prob:augm}), then it is a local
(global) minimizer of this problem. In particular, this holds for
$w^*=\big(G(x^*),-H(y^*)\big)$.
\end{lemma}
\beginproof
First, let us establish the relation between local minimizers. In view of the equivalence of norms,
we consider $\|\cdot\|_{\infty}$, for convenience.
By hypothesis, there exists $\delta>0$ such that if $(x,y)$ is feasible
for (\ref{aw_prob:relax1}) and $\|(x,y)-(x^*,y^*)\|_{\infty}\leq\delta$,
then $f(x^*)\leq f(x)$. Suppose that $(x^*,y^*,w^*)$ is feasible for the
problem (\ref{aw_prob:augm}) and consider an arbitrary feasible point $(x,y,w)$ for
this problem such that $\|(x,y,w)-(x^*,y^*,w^*)\|_{\infty}\leq\delta$.
Then, the pair $(x,y)$ is feasible for (\ref{aw_prob:relax1}) and
$\|(x,y)-(x^*,y^*)\|_{\infty}\leq\delta$. Hence, $f(x^*)\leq f(x)$ and,
therefore, $(x^*,y^*,w^*)$ is a local minimizer of (\ref{aw_prob:augm}). Note
that $(x^*,y^*,w^*)$, with $w^*=\big(G(x^*),-H(y^*)\big)$, is trivially feasible.
Finally, if we ignore the neighborhoods in the argument above, we obtain the
relation between global minimizers.
\endproof
For the sake of completeness we prove below the converse of
Lemma \ref{aw_lm:sol_relax_sol_augm}.
\begin{lemma}
\label{aw_lm:sol_augm_sol_relax}
Let $(x^*,y^*,w^*)$ be a local (global) minimizer of (\ref{aw_prob:augm}).
Then $(x^*,y^*)$ is a local (global) minimizer of (\ref{aw_prob:relax1}).
\end{lemma}
\beginproof
By the feasibility of $(x^*,y^*,w^*)$ we have that $(x^*,y^*)$ is feasible for
(\ref{aw_prob:relax1}),
\begin{equation}
\label{aw_eq:wGH1}
(w^*)^G=G(x^*) \quad\mbox{and}\quad (w^*)^H=-H(y^*).
\end{equation}
Consider $\delta_1>0$ such that $f(x^*)\leq f(x)$ for every feasible point $(x,y,w)$ of
(\ref{aw_prob:augm}), satisfying $\|(x,y,w)-(x^*,y^*,w^*)\|_{\infty}\leq\delta_1$.
Let $\delta_2>0$ be such that
\begin{equation}
\label{aw_eq:wGH2}
\|G(x)-G(x^*)\|_{\infty}\leq\delta_1 \quad\mbox{and}\quad
\|H(y)-H(y^*)\|_{\infty}\leq\delta_1
\end{equation}
for all $(x,y)\in\R^n\times\R^n$ with $\|(x,y)-(x^*,y^*)\|_{\infty}\leq\delta_2$. Define
$\delta=\min\{\delta_1,\delta_2\}$ and take $(x,y)$, feasible for (\ref{aw_prob:relax1}),
such that $\|(x,y)-(x^*,y^*)\|_{\infty}\leq\delta$. Thus we have (\ref{aw_eq:wGH2}), which in
view of (\ref{aw_eq:wGH1}) can be rewritten as $\|w-w^*\|_{\infty}\leq\delta_1$, with
$w=\big(G(x),-H(y)\big)$. Therefore, $(x,y,w)$ is feasible for (\ref{aw_prob:augm})
and
$$
\|(x,y,w)-(x^*,y^*,w^*)\|_{\infty}\leq\delta_1,
$$
implying that $f(x^*)\leq f(x)$.
Now, let us see the global optimality. So, assume that $(x^*,y^*,w^*)$ is a global minimizer
of (\ref{aw_prob:augm}). Then $(x^*,y^*)$ is feasible for (\ref{aw_prob:relax1}).
Furthermore, given an arbitrary feasible point $(x,y)$, we have that
$(x,y,w)$, with $w=\big(G(x),-H(y)\big)$, is feasible for (\ref{aw_prob:augm}).
Therefore, $f(x^*)\leq f(x)$.
\endproof
\begin{lemma}
\label{aw_lm:sol_relax_sol_prox}
Suppose that $(x^*,y^*)$ is a local minimizer of the relaxed problem (\ref{aw_prob:relax1}).
Then, given an arbitrary norm $\|\cdot\|$, there exists $\delta>0$ such that
$(x^*,y^*,w^*)$, with $w^*=\big(G(x^*),-H(y^*)\big)$, is the unique global minimizer
of the problem
\begin{equation}
\label{aw_prob:prox}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y,w} & f(x)+\dfrac{1}{2}\|(x,y)-(x^*,y^*)\|_2^2 \\
{\rm subject\ to } & g(x)\leq 0, h(x)=0, \\
& \theta(y)\leq 0, \\
& w^G-G(x)=0,\, w^H+H(y)=0, \\
& \tilde{H}(y)\leq 0, \\
& w\in W, \\
& \|(x,y,w)-(x^*,y^*,w^*)\|\leq\delta.
\end{array}
\end{equation}
\end{lemma}
\beginproof
By Lemma \ref{aw_lm:sol_relax_sol_augm}, we have that $(x^*,y^*,w^*)$ is a local minimizer
of (\ref{aw_prob:augm}).
Consider $\delta>0$ such that if $(x,y,w)$ is feasible for (\ref{aw_prob:augm}) and
\begin{equation}
\label{aw_eq:xywlocal}
\|(x,y,w)-(x^*,y^*,w^*)\|\leq\delta,
\end{equation}
then $f(x^*)\leq f(x)$.
Note that $(x^*,y^*,w^*)$ is feasible for (\ref{aw_prob:prox}). Moreover, any
feasible point $(x,y,w)$ of (\ref{aw_prob:prox}) is also feasible for
(\ref{aw_prob:augm}) and satisfies (\ref{aw_eq:xywlocal}). Hence,
$$
f(x^*)+\dfrac{1}{2}\|(x^*,y^*)-(x^*,y^*)\|_2^2=f(x^*)\leq f(x)\leq f(x)+
\dfrac{1}{2}\|(x,y)-(x^*,y^*)\|_2^2,
$$
proving that $(x^*,y^*,w^*)$ is a global minimizer of (\ref{aw_prob:prox}).
Now, suppose that $(\bar{x},\bar{y},\bar{w})$ is also a global minimizer
of (\ref{aw_prob:prox}). Then,
$$
f(\bar{x})+\dfrac{1}{2}\|(\bar{x},\bar{y})-(x^*,y^*)\|_2^2\leq f(x^*)+
\dfrac{1}{2}\|(x^*,y^*)-(x^*,y^*)\|_2^2=f(x^*)\leq f(\bar{x}),
$$
where the last inequality follows from the fact that $(\bar{x},\bar{y},\bar{w})$ is
feasible for (\ref{aw_prob:augm}) and satisfies (\ref{aw_eq:xywlocal}). Therefore,
$(\bar{x},\bar{y})=(x^*,y^*)$, and hence
$$
\bar{w}=\big(G(\bar{x}),-H(\bar{y})\big)=
\big(G(x^*),-H(y^*)\big)=w^*,
$$
proving the uniqueness.
\endproof
The next result shows that our stationarity concept, given in Definition \ref{aw_def:aw},
is a legitimate optimality condition, independently of any constraint qualification.
This is a requirement for it to be useful in the analysis of algorithms.
\begin{theorem}
\label{aw_th:aw}
If $(x^*,y^*)$ is a local minimizer of the relaxed problem (\ref{aw_prob:relax1}), then
it is an $AW$-stationary point, in the sense of Definition \ref{aw_def:aw}.
\end{theorem}
\beginproof
Defining $w^*=\big(G(x^*),-H(y^*)\big)$, we conclude from
Lemma \ref{aw_lm:sol_relax_sol_prox} that there exists $\delta>0$ such that the point
$(x^*,y^*,w^*)$ is the unique global minimizer of the problem~(\ref{aw_prob:prox}),
with $\|\cdot\|_2$ in the last constraint. Define the (partial) infeasibility
measure associated with this problem as
\begin{equation*}
\begin{array}{rcl}
\varphi(x,y,w) & = & \dfrac{1}{2}\Big(\|g^+(x)\|_2^2+\|h(x)\|_2^2+\|\theta^+(y)\|_2^2+
\|w^G-G(x)\|_2^2 \vspace{3pt} \\
& & +\|w^H+H(y)\|_2^2+\|\tilde H^+(y)\|_2^2\Big),
\end{array}
\end{equation*}
consider a sequence $\rho_k\to\infty$ and let $(x^k,y^k,w^k)$ be a global minimizer
of the penalized problem
\begin{equation}
\label{aw_prob:penalized}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y,w} & f(x)+\dfrac{1}{2}\|(x,y)-(x^*,y^*)\|_2^2 +
\rho_k\varphi(x,y,w) \\
{\rm subject\ to } & w\in W, \\
& \|(x,y,w)-(x^*,y^*,w^*)\|_2^2\leq\delta^2,
\end{array}
\end{equation}
which is well defined because the objective function is continuous and the feasible
set is compact. Since $\|(x^k,y^k,w^k)-(x^*,y^*,w^*)\|_2\leq\delta$, we can assume
without loss of generality that the sequence $(x^k,y^k,w^k)$ converges to some point
$(\bar{x},\bar{y},\bar{w})$. We claim that $(\bar{x},\bar{y},\bar{w})=(x^*,y^*,w^*)$.
Note first that $(x^*,y^*,w^*)$ is feasible for (\ref{aw_prob:penalized}) and
$\varphi(x^*,y^*,w^*)=0$. So, by the optimality of $(x^k,y^k,w^k)$ we have
\begin{equation}
\label{aw_eq:penaliz}
f(x^k)+\dfrac{1}{2}\|(x^k,y^k)-(x^*,y^*)\|_2^2+
\rho_k\varphi(x^k,y^k,w^k)\leq f(x^*),
\end{equation}
implying that $\varphi(x^k,y^k,w^k)\to 0$, because $\rho_k\to\infty$.
This in turn implies that $\varphi(\bar{x},\bar{y},\bar{w})=0$, giving
$g^+(\bar{x})=0$, $h(\bar{x})=0$, $\theta^+(\bar{y})=0$,
$\bar{w}^G=G(\bar{x})$, $\bar{w}^H=-H(\bar{y})$ and $\tilde{H}^+(\bar{y})=0$.
Moreover, as the sequence $(x^k,y^k,w^k)$ is feasible for (\ref{aw_prob:penalized}),
its limit point $(\bar{x},\bar{y},\bar{w})$ satisfies $\bar{w}\in W$, because $W$
is a closed set, and $\|(\bar{x},\bar{y},\bar{w})-(x^*,y^*,w^*)\|\leq\delta$.
Therefore, $(\bar{x},\bar{y},\bar{w})$ is feasible for (\ref{aw_prob:prox}).
Furthermore, from (\ref{aw_eq:penaliz}) we obtain
$$
f(x^k)+\dfrac{1}{2}\|(x^k,y^k)-(x^*,y^*)\|_2^2\leq f(x^*).
$$
Taking the limit it follows that
$$
f(\bar{x})+\dfrac{1}{2}\|(\bar{x},\bar{y})-(x^*,y^*)\|_2^2\leq f(x^*),
$$
which means that $(\bar{x},\bar{y},\bar{w})$ is optimal for (\ref{aw_prob:prox}).
By the uniqueness of the optimal solution of this problem, we conclude that
$(\bar{x},\bar{y},\bar{w})=(x^*,y^*,w^*)$, proving the claim. As a consequence,
we have the first item of Definition \ref{aw_def:aw}.
In order to prove the next item, let us see first that a constraint
qualification holds at the minimizer $(x^k,y^k,w^k)$. Since
$(x^k,y^k,w^k)\to(x^*,y^*,w^*)$, we may assume without loss of generality that
$\|(x^k,y^k,w^k)-(x^*,y^*,w^*)\|_2<\delta$ for all $k$. That is, the inequality
constraint in the problem (\ref{aw_prob:penalized}) is inactive at the minimizer.
By Lemma \ref{aw_lm:inactive}, the tangent and linearized cones at
this point are the ones taking into account only the constraints in $w\in W$, namely,
\begin{equation}
\label{aw_eq:wW}
-w^H\leq 0\quad\mbox{and}\quad w^G*w^H=0.
\end{equation}
Thus, in view of Lemmas \ref{aw_lm:withoutx} and \ref{aw_lm:compl1}, Guignard constraint
qualification holds at $(x^k,y^k,w^k)$.
This implies that it satisfies the KKT conditions, which means that there exist
multipliers $\mu^{H,k}\in\R_+^n$ and $\mu^{0,k}\in\R^n$, associated with the
constraints in $w\in W$, such that
\begin{subequations}
\begin{align}
\nabla f(x^k)+(x^k-x^*)+\rho_k\nabla_{x}\varphi(x^k,y^k,w^k)=0 \label{aw_gradLpx} \\
(y^k-y^*)+\rho_k\nabla_{y}\varphi(x^k,y^k,w^k)=0 \label{aw_gradLpy} \\
\rho_k\nabla_{w^G}\varphi(x^k,y^k,w^k)+\mu^{0,k}*w^{H,k}=0 \label{aw_gradLpwG} \\
\rho_k\nabla_{w^H}\varphi(x^k,y^k,w^k)-\mu^{H,k}+\mu^{0,k}*w^{G,k}=0 \label{aw_gradLpwH} \\
\mu^{H,k}*w^{H,k}=0. \label{aw_complw}
\end{align}
\end{subequations}
Noting that the partial gradients of $\varphi$ are given by
\begin{subequations}
\begin{align}
\nabla_{x}\varphi(x,y,w)=\nabla g(x)g^+(x)+\nabla h(x)h(x)+
\nabla G(x)\big(G(x)-w^G\big), \label{aw_gradphix} \\
\nabla_{y}\varphi(x,y,w)=\theta^+(y)\nabla\theta(y)+\nabla\tilde H(y)\tilde H^+(y)+
\nabla H(y)\big(w^H+H(y)\big), \label{aw_gradphiy} \\
\nabla_{w^G}\varphi(x,y,w)=w^G-G(x)\quad\mbox{and}\quad
\nabla_{w^H}\varphi(x,y,w)=w^H+H(y) \label{aw_gradphiw}
\end{align}
\end{subequations}
and defining $\lambda^k$ as
$$
\begin{array}{c}
\lambda^{g,k}=\rho_kg^+(x^k),\ \lambda^{h,k}=\rho_kh(x^k),\ \lambda^{\theta,k}=
\rho_k\theta^+(y^k),
\vspace{6pt} \\ {\lambda}^{G,k}=\rho_k\big(G(x^k)-w^{G,k}\big),
\ \lambda^{H,k}=\rho_k\big(w^{H,k}+H(y^k)\big),
\ \lambda^{\tilde{H},k}=\rho_k\tilde{H}^+(y^k),
\end{array}
$$
we see immediately that $\lambda^{g,k}\geq 0$, $\lambda^{\theta,k}\geq 0$ and
$\lambda^{\tilde{H},k}\geq 0$. Moreover, using (\ref{aw_gradLpx}) and (\ref{aw_gradphix}),
we obtain
$$
\nabla_{x}{\cal L}(x^k,y^k,\lambda^k)=
\nabla f(x^k)+\rho_k\nabla_{x}\varphi(x^k,y^k,w^k)=x^*-x^k\to 0.
$$
Furthermore, from (\ref{aw_gradLpy}) and (\ref{aw_gradphiy}), we have
$$
\nabla_{y}{\cal L}(x^k,y^k,\lambda^k)=
\rho_k\nabla_{y}\varphi(x^k,y^k,w^k)=y^*-y^k\to 0,
$$
proving item (\ref{aw_gradLtnlp}).
Let us prove item (\ref{aw_glbdg}). By the feasibility of $(x^*,y^*)$ we have
$g_i(x^*)\leq 0$ for all $i=1,\ldots,m$. If $g_i(x^*)=0$, then
$\min\{-g_i(x^k),\lambda_i^{g,k}\}\to 0$ since $g_i(x^k)\to 0$ and
$\lambda_i^{g,k}\geq 0$. On the other hand, if $g_i(x^*)<0$, we may assume
that $g_i(x^k)<0$ for all $k$. Thus, $g_i^+(x^k)=0$, yielding
$\lambda_i^{g,k}=\rho_kg_i^+(x^k)=0$. Therefore,
$\min\{-g_i(x^k),\lambda_i^{g,k}\}=0$. Items (\ref{aw_thlbdth}) and
(\ref{aw_HtillbdHtil}) can be proved by the same reasoning.
Now, note that by (\ref{aw_gradLpwG}), (\ref{aw_gradLpwH}) and (\ref{aw_gradphiw}) we have
\begin{equation}
\label{aw_lbdGH}
\lambda^{G,k}=\mu^{0,k}*w^{H,k}\quad\mbox{and}\quad
\lambda^{H,k}=\mu^{H,k}-\mu^{0,k}*w^{G,k}.
\end{equation}
Therefore, using the fact that $w^k\in W$, we obtain
$$
\lambda_i^{G,k}w_i^{G,k}=\mu_i^{0,k}w_i^{H,k}w_i^{G,k}=0
$$
for all $i=1,\ldots,n$. Furthermore, given $i\notin I_0(x^*)$, we have
$$
w_i^{G,k}\to(w_i^*)^G=G_i(x^*)=x_i^*\neq 0,
$$
implying that $\lambda_i^{G,k}=0$ for all $k$ large enough. So,
$\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}=0$. On the other hand, if
$i\in I_0(x^*)$, we have $G_i(x^k)\to G_i(x^*)=x_i^*=0$, and hence,
$\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$, proving item (\ref{aw_GlbdG}).
To prove the next item, note that using (\ref{aw_lbdGH}), (\ref{aw_complw})
and the fact that $w^k\in W$,
\begin{equation}
\label{aw_lbdHwH}
\lambda^{H,k}*w^{H,k}=\mu^{H,k}*w^{H,k}-
\mu^{0,k}*w^{G,k}*w^{H,k}=0.
\end{equation}
By the feasibility of $(x^*,y^*)$, we have $H(y^*)\leq 0$. In the
case $H_i(y^*)<0$, there holds
$$
w_i^{H,k}\to(w_i^*)^H=-H_i(y^*)>0,
$$
giving $\lambda_i^{H,k}=0$ for all $k$ large enough. Thus,
$\min\{-H_i(y^k),|\lambda_i^{H,k}|\}=0$. On the other hand, if
$H_i(y^*)=0$, we have $H_i(y^k)\to H_i(y^*)=0$, and consequently,
$\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0$, proving item (\ref{aw_HlbdH}) and
completing the proof.
\endproof
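The proof above is constructive: given a solution of the penalized
subproblem (\ref{aw_prob:penalized}), the candidate multipliers are obtained by
scaling the (partial) constraint residuals by $\rho_k$. A minimal Python sketch of
this multiplier recovery is given below; it is an illustration added here, and it
assumes that the constraint values at the current iterate are available as NumPy
arrays.
\begin{verbatim}
import numpy as np

def recover_multipliers(rho, g_x, h_x, theta_y, G_x, H_y, Htil_y, wG, wH):
    # Multiplier estimates used in the proof, with (.)^+ componentwise.
    pos = lambda v: np.maximum(v, 0.0)
    return dict(lam_g=rho * pos(g_x),
                lam_h=rho * h_x,
                lam_theta=rho * max(theta_y, 0.0),
                lam_G=rho * (G_x - wG),
                lam_H=rho * (wH + H_y),
                lam_Htil=rho * pos(Htil_y))
\end{verbatim}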
\section{Relations to other sequential optimality conditions}
\label{aw_sec:rel_akkt_cakkt}
In this section we discuss the relationships between approximate
stationarity for standard nonlinear optimization and $AW$-stationarity.
As is well known, every minimizer of an optimization problem is AKKT (see
Definition \ref{aw_def:akkt}). However, and perhaps surprisingly, we start by proving
that the AKKT condition fails to detect good candidates for optimality
in every MPCaC problem.
\begin{theorem}
\label{aw_th:akktrelax}
Every feasible point $(\bar{x},\bar{y})$ for the relaxed problem (\ref{aw_prob:relax1})
is AKKT.
\end{theorem}
\beginproof
We need to prove that there exist sequences $(x^k,y^k)\subset\R^n\times\R^n$ and
$$
\big(\mu^{g,k},\mu^{h,k},\mu^{\theta,k},
\mu^{H,k},\mu^{\tilde{H},k},\mu^{\xi,k}\big)\subset\R_+^m\times\R^p\times\R_+
\times\R_+^n\times\R_+^n\times\R^n
$$
such that $(x^k,y^k)\to(\bar{x},\bar{y})$ and
\begin{subequations}
\begin{align}
\left(\begin{array}{c} \nabla_xL(x^k,\mu^{g,k},\mu^{h,k}) \\ 0 \end{array}\right)+
\left(\begin{array}{c} 0 \\ \mu^{\theta,k}\nabla\theta(y^k) \end{array}\right)+
\sum_{i=1}^{n}\left(\begin{array}{c} 0 \\ \mu_i^{H,k} \nabla H_i(y^k)\end{array}\right)
\notag{} \\ +\sum_{i=1}^{n}\left(\begin{array}{c} 0 \\ \mu_i^{\tilde{H},k}\nabla
\tilde{H}_i(y^k)\end{array}\right)+\sum_{i=1}^{n}\mu_i^{\xi,k}
\left(\begin{array}{c} H_i(y^k)\nabla G_i(x^k)\vspace{3pt} \\
G_i(x^k)\nabla H_i(y^k)\end{array}\right)\to 0, \label{aw_eq_aw_akktr1} \\
\min\{-g(x^k),\mu^{g,k}\}\to 0\,, \quad
\min\{-\theta(y^k),\mu^{\theta,k}\}\to 0, \label{aw_eq_aw_akktr2} \\
\min\{-H(y^k),\mu^{H,k}\}\to 0\,,\quad
\min\{-\tilde{H}(y^k),\mu^{\tilde{H},k}\}\to 0. \label{aw_eq_aw_akktr3}
\end{align}
\end{subequations}
Let $b=\nabla f(\bar{x})$ and define $x^k=\bar{x}$, $\mu^{g,k}=0$, $\mu^{h,k}=0$,
$\mu^{\theta,k}=0$, $\mu^{\tilde{H},k}=0$ and
\begin{subequations}
\begin{align*}
y_i^k=\bar{y}_i\,,\,\, \mu_i^{H,k}=0\,,\,\, \mu_i^{\xi,k}=\dfrac{b_i}{y_i^k}
\mbox{ for } i\in I_{0+}(\bar{x},\bar{y})\cup I_{01}(\bar{x},\bar{y}), \\
y_i^k=\dfrac{b_i}{k}\,,\,\, \mu_i^{H,k}=0\,,\,\, \mu_i^{\xi,k}=k
\mbox{ for } i\in I_{00}(\bar{x},\bar{y}), \\
y_i^k=-\dfrac{{\rm sign}(\bar{x}_i)b_i}{k}\,,\,\,\mu_i^{\xi,k}=
-{\rm sign}(\bar{x}_i)k \,,\,\, \mu_i^{H,k}=-\mu_i^{\xi,k}x_i^k \mbox{ for }
i\in I_{\pm 0}(\bar{x},\bar{y}).
\end{align*}
\end{subequations}
Thus we have $\mu_i^{H,k}\geq 0$, $(x^k,y^k)\to(\bar{x},\bar{y})$,
$$
\nabla_{x_i}L(x^k,\mu^{g,k},\mu^{h,k})-\mu_i^{\xi,k}y_i^k=b_i-\mu_i^{\xi,k}y_i^k\to 0,
$$
and
$$
-\mu^{\theta,k}-\mu_i^{H,k}+\mu_i^{\tilde{H},k}-\mu_i^{\xi,k}x_i^k\to 0
$$
for all $i=1,\ldots,n$, giving (\ref{aw_eq_aw_akktr1}). Moreover, it is easy to see
that (\ref{aw_eq_aw_akktr2}) and (\ref{aw_eq_aw_akktr3}) also hold.
\endproof
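To illustrate the construction in the proof, the Python sketch below (a numerical
check added here) instantiates the sequences at the feasible point
$(\bar{x},\bar{y})$ of Example \ref{aw_ex:minnotWI}, with $\bar{x}=(1,0,0)$ and
$\bar{y}=(0,1,0)$, for which $b=\nabla f(\bar{x})=(1,0,0)$, $I_{01}=\{2\}$,
$I_{00}=\{3\}$ and $I_{\pm 0}=\{1\}$. Both displayed gradient expressions vanish
identically along the sequences.
\begin{verbatim}
import numpy as np

b    = np.array([1.0, 0.0, 0.0])   # grad f(xbar), since f(x) = x1
xbar = np.array([1.0, 0.0, 0.0])   # sign(xbar_1) = 1
for k in [10, 100, 1000]:
    y     = np.array([-b[0] / k, 1.0, b[2] / k])  # entries in I_pm0, I_01, I_00
    mu_xi = np.array([-float(k), 0.0, float(k)])
    mu_H  = np.array([-mu_xi[0] * xbar[0], 0.0, 0.0])
    res_x = b - mu_xi * y            # x-block of the AKKT gradient
    res_y = -mu_H - mu_xi * xbar     # y-block (theta and Htilde mult. are 0)
    print(k, np.linalg.norm(res_x), np.linalg.norm(res_y))   # 0.0  0.0
\end{verbatim}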
Another sequential optimality condition for standard NLP is
PAKKT (Definition \ref{aw_def:pakkt}). It is stronger than AKKT, but not
stronger than $AW$-stationarity. The next example shows that PAKKT for the relaxed problem
does not imply $AW$-stationarity, even under strict complementarity.
\begin{example}
\label{aw_ex:pakktnotaw}
Consider the MPCaC and the corresponding relaxed problem given below.
$$
\begin{array}{lr}
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x\in\R^2} & x_2 \\
{\rm subject\ to } & x_1^2 \leq 0, \\
& \|x\|_0\leq 1,\\
& \\
&
\end{array}
\hspace{.5cm}
&
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y\in\R^2} & x_2 \\
{\rm subject\ to } & x_1^2 \leq 0, \\
& y_1+y_2\geq 1, \\
& x_iy_i=0,\; i=1,2, \\
& 0\leq y_i \leq 1, \; i=1,2.
\end{array}
\end{array}
$$
Given $a>0$, we claim that the point $(\bar{x},\bar{y})$, with $\bar{x}=(0,a)$ and
$\bar{y}=(1,0)$, is PAKKT but not $AW$-stationary. Indeed, for the first statement,
consider the sequences $(x^k,y^k)\subset\R^2\times\R^2$ and
$$
(\gamma^k)=\big(\lambda^{g,k},\lambda^{\theta,k},\mu^k,\lambda^{\tilde{H},k},
\lambda^{\xi,k}\big)\subset\R_+\times\R_+\times\R_+^2\times\R_+^2\times\R^2
$$
given by $x^k=(1/k^3,a)$, $y^k=(1,-1/k)$, $\lambda^{g,k}=k^2$,
$\lambda^{\theta,k}=0$, $\mu^k=(0,ak)$, $\lambda^{\tilde{H},k}=(0,0)$ and
$\lambda^{\xi,k}=(0,k)$. Then we have $(x^k,y^k)\to(\bar{x},\bar{y})$ and,
denoting $\xi(x,y)=x*y$, the gradient of the Lagrangian of the relaxed problem
reduces to
\begin{align*}
\left(\begin{array}{c}\nabla f(x^k) \\ 0 \end{array}\right)
+\lambda^{g,k}
\left(\begin{array}{c}\nabla g(x^k) \\ 0 \end{array}\right)
+\mu_2^k \nabla H_2(y^k)+\lambda_2^{\xi,k}\nabla\xi_2(x^k,y^k) \\
=\left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right)+
\left(\begin{array}{c} 2\lambda^{g,k}x_1^k \\ 0 \\ 0 \\ 0
\end{array}\right)+
\left(\begin{array}{c} 0 \\ 0 \\ 0 \\ -\mu_2^k \end{array}
\right)+\left(\begin{array}{c} 0 \\
\lambda_2^{\xi,k}y_2^k \\ 0 \\ \lambda_2^{\xi,k}x_2^k \end{array}\right)
=\left(\begin{array}{c} 2/k \\ 0 \\ 0 \\ 0
\end{array}\right)\to 0,
\end{align*}
proving (\ref{aw_gradpakkt}). Now, note that
$g(x^k)\to g(\bar{x})=0$ and $\theta(y^k)\to\theta(\bar{y})=0$,
which in turn imply that
\begin{equation}
\label{aw_eq_ex:akktnotaw1}
\min\{-g(x^k),\lambda^{g,k}\}\to 0\quad\mbox{and}\quad
\min\{-\theta(y^k),\lambda^{\theta,k}\}\to 0.
\end{equation}
Moreover, we have $-\tilde{H}(y^k)\to -\tilde{H}(\bar{y})\geq 0$ and
$\lambda^{\tilde{H},k}=(0,0)$, giving
\begin{equation}
\label{aw_eq_ex:akktnotaw2}
\min\{-\tilde{H}(y^k),\lambda^{\tilde{H},k}\}\to 0.
\end{equation}
Furthermore, since $-H_1(y^k)\to -H_1(\bar{y})\geq 0$, $\mu_1^k=0$ and
$-H_2(y^k)=y_2^k\to 0$, we have
\begin{equation}
\label{aw_eq_ex:akktnotaw3}
\min\{-H(y^k),\mu^k\}\to 0.
\end{equation}
Conditions (\ref{aw_eq_ex:akktnotaw1}), (\ref{aw_eq_ex:akktnotaw2}) and
(\ref{aw_eq_ex:akktnotaw3}) prove the approximate complementarity (\ref{aw_complpakkt}).
Moreover, we have $\delta_k= \|(1,\gamma^k)\|_\infty=k^2$ for
all $k$ large enough,
$$
\displaystyle\mathop{\rm lim\, sup}_{k\to\infty}\frac{\lambda^{g,k}}{\delta_k}>0
\quad\mbox{and}\quad \lambda^{g,k}g(x^k)>0.
$$
For the remaining multipliers the ${\rm lim\, sup}$ is zero and so we conclude that
(\ref{aw_pospakkt1}) and (\ref{aw_pospakkt2}) hold, proving that Definition \ref{aw_def:pakkt}
is satisfied, that is, $(\bar{x},\bar{y})$ is PAKKT.
Now, let us see that $(\bar{x},\bar{y})$ is not $AW$-stationary. For this purpose, assume
that the sequences $(x^k,y^k)\subset\R^2\times\R^2$ and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{\theta,k},
\lambda^{G,k},\lambda^{H,k},\lambda^{\tilde{H},k}\big)\subset\R_+\times\R_+
\times\R^2\times\R^2\times\R_+^2
$$
are such that $(x^k,y^k)\to(\bar{x},\bar{y})$ and
$\min\{|G_2(x^k)|,|\lambda_2^{G,k}|\}\to 0$. Then, since $$|G_2(x^k)|=|x_2^k|\to a>0,$$
we obtain $\lambda_2^{G,k}\to 0$. Therefore, the expression
$$
\nabla_xL(x^k,\lambda^{g,k})+\sum_{i=1}^2\lambda_i^{G,k}
\nabla G_i(x^k)=\left(\begin{array}{c} 2\lambda^{g,k}x_1^k+
\lambda_1^{G,k} \\ 1+\lambda_2^{G,k} \end{array}\right)
$$
cannot converge to zero. Thus, taking into account (\ref{aw_nablaL0}),
item (\ref{aw_gradLtnlp}) of Definition~\ref{aw_def:aw} does not hold and
hence $(\bar{x},\bar{y})$ is not $AW$-stationary.
\end{example}
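The PAKKT computations in this example can also be reproduced numerically. The
Python sketch below (added for illustration, fixing $a=1$) evaluates the gradient
of the Lagrangian and the complementarity residual along the sequences above.
\begin{verbatim}
import numpy as np

a = 1.0
for k in [10, 100, 1000]:
    x = np.array([1.0 / k**3, a])
    y = np.array([1.0, -1.0 / k])
    lam_g, mu2, lam_xi2 = float(k**2), a * k, float(k)
    grad = np.array([2.0 * lam_g * x[0],        # x1-block: 2/k
                     1.0 + lam_xi2 * y[1],      # x2-block: 0
                     0.0,                       # y1-block: 0
                     -mu2 + lam_xi2 * x[1]])    # y2-block: 0
    print(k, np.linalg.norm(grad),              # -> 0
          min(-x[0]**2, lam_g))                 # min{-g, lambda^g} -> 0
\end{verbatim}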
\medskip
In contrast to AKKT and PAKKT, the other classical sequential optimality
condition, CAKKT (Definition \ref{aw_def:cakkt}), does imply $AW$-stationarity,
as we can see in the next result.
\begin{theorem}
\label{aw_th:cakkt_aw}
If $(\bar{x},\bar{y})$ is a CAKKT point for the relaxed problem (\ref{aw_prob:relax1}),
then it is $AW$-stationary.
\end{theorem}
\beginproof
In view of Definition \ref{aw_def:cakkt}, there exist
sequences $(x^k,y^k)\subset\R^n\times\R^n$ and
$$
\big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},
\mu^k,\lambda^{\tilde{H},k},\lambda^{\xi,k}\big)\subset\R_+^m\times\R^p\times\R_+
\times\R_+^n\times\R_+^n\times\R^n
$$
such that $(x^k,y^k)\to(\bar{x},\bar{y})$,
\begin{subequations}
\begin{align}
\left(\begin{array}{c}\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k}) \\ 0 \end{array}\right)+
\left(\begin{array}{c} 0 \\ \lambda^{\theta,k}\nabla\theta(y^k) \end{array}\right)+
\sum_{i=1}^{n}\left(\begin{array}{c} 0 \\ \mu_i^k \nabla H_i(y^k)\end{array}\right)
\notag{} \\
+\sum_{i=1}^{n}\left(\begin{array}{c} 0 \\ \lambda_i^{\tilde{H},k}\nabla
\tilde{H}_i(y^k)\end{array}\right)+\sum_{i=1}^{n}\lambda_i^{\xi,k}
\left(\begin{array}{c} H_i(y^k)\nabla G_i(x^k)\vspace{3pt} \\
G_i(x^k)\nabla H_i(y^k)\end{array}\right)\to 0, \label{aw_eq_cakkt_aw1} \\
\lambda^{g,k}*g(x^k)\to 0\,, \quad \lambda^{h,k}*h(x^k)\to 0\,, \quad
\lambda^{\theta,k}\theta(y^k)\to 0\,, \label{aw_eq_cakkt_aw2} \\
\mu^k*H(y^k)\to 0\,,\quad \lambda^{\tilde{H},k}*\tilde{H}(y^k)\to 0, \label{aw_eq_cakkt_aw3} \\
\lambda^{\xi,k}*G(x^k)*H(y^k)\to 0. \label{aw_eq_cakkt_aw4}
\end{align}
\end{subequations}
So, we may define
$$
\lambda^{H,k}=\mu^k+\lambda^{\xi,k}*G(x^k)\quad\mbox{and}
\quad\lambda^{G,k}=\lambda^{\xi,k}*H(y^k)
$$
to obtain item (\ref{aw_gradLtnlp}) of Definition \ref{aw_def:aw} from (\ref{aw_eq_cakkt_aw1}).
Items (\ref{aw_glbdg}), (\ref{aw_thlbdth}) and (\ref{aw_HtillbdHtil}) follow
from (\ref{aw_eq_cakkt_aw2}), (\ref{aw_eq_cakkt_aw3}) and Remark \ref{aw_rm:compl4_compl1}.
Let us prove item (\ref{aw_GlbdG}). For $i\in I_{0}$, there holds
$$G_i(x^k)\to G_i(\bar{x})=\bar{x}_i=0.$$ Thus,
$\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$. If $i\notin I_{0}$, we have
$G_i(x^k)\to\bar{x}_i\neq 0$, which in view of (\ref{aw_eq_cakkt_aw4}) yields
$$
\lambda_i^{G,k}=\lambda_i^{\xi,k}H_i(y^k)\to 0.
$$
Therefore, $\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0$ for all $i=1,\ldots,n$.
Finally, in order to prove item (\ref{aw_HlbdH}), note that
(\ref{aw_eq_cakkt_aw3}) and (\ref{aw_eq_cakkt_aw4}) give
$$
\lambda_i^{H,k}H_i(y^k)=\mu_i^kH_i(y^k)+\lambda_i^{\xi,k}G_i(x^k)H_i(y^k)\to 0.
$$
So, applying the argument of Remark \ref{aw_rm:compl4_compl1} with
$\alpha^k=|\lambda_i^{H,k}|$ and $\beta^k=H_i(y^k)$, we obtain
$$
\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0
$$
for all $i=1,\ldots,n$. Therefore, $(\bar{x},\bar{y})$ is
$AW$-stationary for the problem (\ref{aw_prob:relax1}).
\endproof
\begin{remark}
\label{aw_rm:cakkt}
Despite being stronger, we emphasize that the sequential
optimality condition CAKKT is not as suitable as $AW$-stationarity for dealing with
MPCaC problems. The goal of considering CAKKT is to obtain, under certain
constraint qualifications, KKT points for standard nonlinear programming problems.
However, as we have discussed, MPCaC problems are very degenerate because of
the problematic complementarity constraint $G(x)*H(y)=0$. This means that we cannot
expect to find strong stationary points for this class of problems, which makes
$AW$-stationarity a good tool for dealing with them.
\end{remark}
To finish this section, we relate our sequential optimality condition to the
tightened problem. The following result is a sequential version of
Proposition~\ref{aw_prop:wstat_kkttnlp}.
\begin{theorem}
\label{aw_th:aw_akkt}
Let $(\bar{x},\bar{y})$ be a feasible point of the relaxed problem (\ref{aw_prob:relax1}).
Then $(\bar{x},\bar{y})$ is $AW$-stationary if and only if it is an AKKT point for the
tightened problem TNLP$_{I_0}(\bar{x},\bar{y})$ defined in (\ref{aw_prob:tight}).
\end{theorem}
\beginproof
Suppose first that $(\bar{x},\bar{y})$ is $AW$-stationary. Then, in view of
Lemma \ref{aw_lm:awI0}, we conclude that there exist sequences
$(x^k,y^k)\subset\R^n\times\R^n$ and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},
\lambda^{G,k},\lambda^{H,k},\lambda^{\tilde{H},k}\big)
\subset\R_+^m\times\R^p\times\R_+\times\R^n\times\R^n\times\R_+^n
$$
such that $(x^k,y^k)\to(\bar{x},\bar{y})$,
\begin{subequations}
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})+\sum_{i\in I_0}
\lambda_i^{G,k}\nabla G_i(x^k)\to 0, \label{aw_eq_aw_akkt1} \\
\lambda^{\theta,k}\nabla\theta(y^k) +\sum_{i=1}^{n}\lambda_i^{H,k}
\nabla H_i(y^k)+\sum_{i=1}^{n}\lambda_i^{\tilde{H},k}\nabla\tilde{H}_i(y^k)\to 0,
\label{aw_eq_aw_akkt2} \\
\min\{-g(x^k),\lambda^{g,k}\}\to 0\,, \quad \min\{-\theta(y^k),\lambda^{\theta,k}\}\to 0,
\label{aw_eq_aw_akkt3} \\ \min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0\,,\ i=1,\ldots,n,
\quad\min\{-\tilde{H}(y^k),\lambda^{\tilde{H},k}\}\to 0. \label{aw_eq_aw_akkt4}
\end{align}
\end{subequations}
For $i\in I_{0+}\cup I_{01}$ we have
$
H_i(y^k)\to H_i(\bar{y})=-\bar{y}_i<0.
$
Therefore, we can assume without loss of generality that there exists $\epsilon>0$
such that $-H_i(y^k)\geq\epsilon$ for all $k$. So, using (\ref{aw_eq_aw_akkt4}), we
obtain $|\lambda_i^{H,k}|\to 0$, which in turn implies that
$$
\sum_{i\in I_{0+}\cup I_{01}}\lambda_i^{H,k}\nabla H_i(y^k)\to 0.
$$
By subtracting this from (\ref{aw_eq_aw_akkt2}), we obtain
$$
\lambda^{\theta,k}\nabla\theta(y^k) +\sum_{i\in I_{00}\cup I_{\pm 0}}
\lambda_i^{H,k} \nabla H_i(y^k)+\sum_{i=1}^{n}
\lambda_i^{\tilde{H},k}\nabla\tilde{H}_i(y^k)\to 0.
$$
So, we can redefine $\lambda_i^{H,k}$, $i\in I_{0+}\cup I_{01}$,
to be zero, without affecting (\ref{aw_eq_aw_akkt2}).
Therefore, taking into account (\ref{aw_eq_aw_akkt1}), (\ref{aw_eq_aw_akkt3}), the second
part of (\ref{aw_eq_aw_akkt4}) and the fact that
$\min\{-H_i(y^k),\lambda_i^{H,k}\}=0$ for $i\in I_{0+}\cup I_{01}$,
we conclude that $(\bar{x},\bar{y})$ is AKKT for TNLP$_{I_0}(\bar{x},\bar{y})$,
which we recall here for convenience,
$$
\begin{array}{cl}
\displaystyle\mathop{\rm minimize }_{x,y} & f(x) \\
{\rm subject\ to } & g(x)\leq 0, h(x)=0, \\
& \theta(y)\leq 0, \\
& G_i(x)=0,\; i \in I_{0}, \\
& H_i(y)\leq 0,\; i\in I_{0+}\cup I_{01}, \\
& H_i(y)=0,\; i\in I_{00}\cup I_{\pm0}, \\
& \tilde{H}(y)\leq 0.
\end{array}
$$
To prove the converse, suppose that $(\bar{x},\bar{y})$ is AKKT for
TNLP$_{I_0}(\bar{x},\bar{y})$. Then there exist sequences $(x^k,y^k)\subset\R^n\times\R^n$
and
$$
(\lambda^k)=\big(\lambda^{g,k},\lambda^{h,k},\lambda^{\theta,k},
\lambda_{I_{0}}^{G,k},\lambda^{H,k},\lambda^{\tilde{H},k}\big)
\subset\R_+^m\times\R^p\times\R_+\times\R^{|I_{0}|}\times\R^n\times\R_+^n,
$$
with $\lambda_i^{H,k}\geq 0$ for $i\in I_{0+}\cup I_{01}$, such that
$(x^k,y^k)\to(\bar{x},\bar{y})$,
\begin{subequations}
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})+\sum_{i\in I_0}
\lambda_i^{G,k}\nabla G_i(x^k)\to 0, \label{aw_eq_akkt_aw1} \\
\lambda^{\theta,k}\nabla\theta(y^k) +\sum_{i=1}^{n}\lambda_i^{H,k}
\nabla H_i(y^k)+\sum_{i=1}^{n}\lambda_i^{\tilde{H},k}\nabla\tilde{H}_i(y^k)\to 0,
\label{aw_eq_akkt_aw2} \\
\min\{-g(x^k),\lambda^{g,k}\}\to 0\,, \quad \min\{-\theta(y^k),
\lambda^{\theta,k}\}\to 0, \label{aw_eq_akkt_aw3} \\
\min\{-H_i(y^k),\lambda_i^{H,k}\}\to 0,\ i\in I_{0+}\cup I_{01}\,,
\quad\min\{-\tilde{H}(y^k),\lambda^{\tilde{H},k}\}\to 0. \label{aw_eq_akkt_aw4}
\end{align}
\end{subequations}
Extending the sequence $\big(\lambda_{I_{0}}^{G,k}\big)$ from $\R^{|I_{0}|}$ to
$\R^n$ by letting $\lambda_i^{G,k}=0$ for $i\notin I_{0}$, we can
rewrite (\ref{aw_eq_akkt_aw1}) as
\begin{align}
\nabla_xL(x^k,\lambda^{g,k},\lambda^{h,k})+\sum_{i=1}^{n}
\lambda_i^{G,k}\nabla G_i(x^k)\to 0. \label{aw_eq_akkt_aw5}
\end{align}
Moreover, for $i\in I_{0}$, there holds $G_i(x^k)\to G_i(\bar{x})=\bar{x}_i=0$.
Thus,
\begin{equation}
\label{aw_eq_akkt_aw6}
\min\{|G_i(x^k)|,|\lambda_i^{G,k}|\}\to 0
\end{equation}
for all $i=1,\ldots,n$. Besides, for $i\in I_{00}\cup I_{\pm 0}$, we have
$
H_i(y^k)\to H_i(\bar{y})=-\bar{y}_i=0,
$
which implies $\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0$. Therefore, in view of
(\ref{aw_eq_akkt_aw4}) and the fact that $\lambda_i^{H,k}\geq 0$ for
$i\in I_{0+}\cup I_{01}$, we have
\begin{equation}
\label{aw_eq_akkt_aw7}
\min\{-H_i(y^k),|\lambda_i^{H,k}|\}\to 0
\end{equation}
for all $i=1,\ldots,n$. Thus, from (\ref{aw_eq_akkt_aw2}), (\ref{aw_eq_akkt_aw3}), the
second part of (\ref{aw_eq_akkt_aw4}), (\ref{aw_eq_akkt_aw5}), (\ref{aw_eq_akkt_aw6}) and
(\ref{aw_eq_akkt_aw7}), we conclude that
$(\bar{x},\bar{y})$ satisfies the conditions of Definition \ref{aw_def:aw}, that is,
$(\bar{x},\bar{y})$ is an $AW$-stationary point for the problem (\ref{aw_prob:relax1}).
\endproof
\section{Conclusion}
\label{aw_sec:concl}
In this paper we have presented a sequential optimality condition, namely Approximate Weak stationarity ($AW$-stationarity), for Mathematical Programs with Cardinality Constraints (MPCaC).
This condition improves upon $W_I$-statio\-na\-rity, which was established in our previous work~\cite{KrulikovskiRibeiroSachine20aX}.
Several theoretical results were presented, such as: $AW$-stationarity is a legitimate
optimality condition independently of any constraint qualification; every feasible
point of MPCaC is AKKT; the equivalence between the $AW$-stationarity and AKKT for
the tightened problem TNLP$_{I_0}$.
In addition, we have established some relationships between our $AW$-stationarity
and other usual sequential optimality conditions, such as AKKT, CAKKT and PAKKT,
by means of properties, examples and counterexamples.
It should be mentioned that, despite the computational appeal of sequential
optimality conditions, in this work we were not concerned with algorithmic consequences,
which are the subject of ongoing research.
Since the first gravitational wave (GW) signal GW150914 was detected by Advanced LIGO in 2015 \cite{2016prl_GW150914},
Advanced LIGO and later Advanced Virgo have detected more and more GWs sourced from compact binary coalescences\cite{2017prl_GW170817,2021apjl_NSBH, 2016prx_LIGO_O1, 2019prx_GWTC-1,2021prx_GWTC-2}.
At the same time, this also implies that there should be many weak GWs that cannot be recognized by detectors.
The combined weak signal from the population of compact binary constitutes the stochastic gravitational wave background (SGWB).
In addition to the astrophysical sources, there are many ways to generate SGWB in the early Universe,
such as cosmological phase transitions \cite{1986mnras_GWB_cosmol_pt,2021prd_GWB_QCD_pt},
primordial gravitational waves \cite{1992pr_cosmol_perturbation,1997prd_GWB_inflation},
cosmic strings \cite{2005prd_GWB_cosmic_strings,2007prl_GWB_cosmic_strings}, etc.
The detection of the GWB can provide us with information on the astronomical distribution \cite{2014prd_astro_motivation_SGWB, 2016prx_limits_astro_GWB} and cosmology \cite{2001prd_early_universe_LISA, 2016jcap_cosmol_pt_LISA, 2020jcap_cosmol_pt_LISA},
and also provide an opportunity to test the theory of gravity \cite{2016prl_constrain_MTG_GWB, 2017prx_polarization_SGWB}.
Pulsar Timing Arrays (PTAs) have accumulated more than a decade of data, and are expected to detect the SGWB in the nHz band in the near future \cite{2020apjl_GWB_NANOGrav,2021mnras_GWB_EPTA,2021apjl_GWB_PPTA,2022mnras_GWB_IPTA}.
In the Hz band, the ground-based laser interferometers give an upper limit on the fractional energy density of the SGWB, $\Omega_{\text{GW}}\leq5.8\times 10^{-9}$ \cite{2021prd_upper_limits_GWB_LIGO_O3}.
The future space-based interferometers such as LISA \cite{2017arX_LISA}, TianQin \cite{2016cqg_TQ}, Taiji \cite{2017nsr_Taiji} and DECIGO \cite{2011cqj_DECIGO}, will be able to detect SGWB with high sensitivity in the mHz band \cite{2022prd_TQ_GWB, 2021prd_TJ_GWB, 2020prd_mHz_GWB}.
Once we have detected the SGWB, we next need to know what information can be obtained from it.
An important role of SGWB is to test the theory of gravity.
General relativity (GR) predicts that there exist only two tensor polarizations for gravitational waves, the plus and cross modes.
However, there are additional four polarizations allowed by the generic metric theories of gravity,
including two vector modes and two scalar modes.
The observation of vector or scalar modes would cast doubt on general relativity,
and their absence could also be used to constrain modified gravity \cite{2014lrr_GR_test}.
Research on the detection of polarization modes of the SGWB has become a hot topic,
as more and more modified theories of gravity are proposed.
For PTA, the detectability of non-GR polarizations of the SGWB was first investigated in \cite{2008apj_Lee_ORF_nonGR}.
After that, more detailed extension can be found in \cite{2012prd_ORF_nonGR, 2015prd_PTA_GWB_nonGR}.
No evidence of non-GR polarizations of SGWB has been found in more than a decade of PTA data.
The constraints on the amplitude or energy density of the non-GR modes in the SGWB at a frequency of 1/yr can be found in
\cite{2018prl_constrain_nonGR_PTA_GWB, 2021apjl_cosntrain_nonGR_PTA_GWB, 2022apj_constrain_nonGR_PTA_GWB}.
On the other hand, the ground-based gravitational-wave detection network is gradually forming, as more and more detectors join in.
It has the potential to detect the SGWB produced by compact binary mergers.
Data from Advanced LIGO's and Advanced Virgo's three observing run can constrain the fractional energy density of different polarization modes at Hz band \cite{2018_nonGR_GWB_LIGO_O1, 2019prd_LIGO_O2_GWB, 2021_GWB_O3}.
However, the ability of the ground-based laser interferometer network to study the polarizations of the SGWB is limited.
The current sensitivity of ground-based detectors is not yet sufficient to detect the SGWB.
What is more, the two kinds of scalar modes are completely different: one is transverse and the other is longitudinal.
Unfortunately, the responses of a ground-based laser interferometer to the scalar-breathing and scalar-longitudinal modes are completely degenerate, which means that the two modes cannot be distinguished, no matter how sensitive the detectors are \cite{2009prd_nonGR_GWB_LIGO}.
The future space-based gravitational-wave detectors have great advantages in the study of polarizations of SGWB.
First, the abundance of sources in the mHz band and the sufficiently high sensitivity of the detectors ensure that the polarization analysis can be performed.
Second, it is possible to construct specific data combinations with different responses to scalar-breathing and scalar-longitudinal modes, which implies that the two kinds of scalar modes can be distinguished by the space-based detectors.
What is more, the relative orientation of two space-based detectors may change as they move along their respective orbits.
For example, for the LISA-TianQin network, the normal vector of the constellation plane of LISA varies with time while that of TianQin points in a fixed direction \cite{2016cqg_TQ}.
This means that the overlap reduction function (ORF), the transfer function between the spectrum of SGWB and the power spectrum of cross correlation signal, will vary accordingly.
In this way, it is easier to distinguish between different polarization modes,
which can be inferred from the viewpoint that the detectors can be regarded as different detectors at different positions.
For the space-based detectors, the laser phase noise is usually orders of magnitude higher than other noises due to the mismatch of arm lengths, and also much larger than the gravitational wave signal.
Fortunately, the time delay interferometer (TDI) can be used to suppress the laser phase noise \cite{2000prd_TDI_early, 2002prd_TDI, 2002prd_LISA_optimal_sensitivity, 2020lrr_TDI}.
It is worth mentioning that three noise quadrature channels can be constructed, called A, E and T.
The T channel is relatively insensitive to the gravitational-wave compared with other channels.
So the readout of T channel can be used to model the noise in order to subtract the noise from the A and E channels to get the SGWB.
This is called the null channel method \cite{2001prd_SGWB_null, 2010prd_SGWB_null, 2017lrr_detection_GWB}.
However, the persuasiveness of the null channel is limited; it is required that cross-correlation signals be detected in the data of two detectors to claim detection of the SGWB.
In this paper, we analyze the means by which space-based detectors (taking LISA-TianQin as an example) can detect and identify the non-GR polarizations in the SGWB.
The outline of the paper is as follows.
In Sec.\ref{sec2}, we review the SGWB in general metric theories of gravity and introduce the LISA-TianQin network.
In Sec.\ref{sec3}, we review the correlation analysis for detecting SGWB and calculate the overlap reduction function for LISA-TianQin network.
In Sec.\ref{sec4}, we study the sensitivity and detectability for the SGWB of alternative polarizations.
Then, in Sec.\ref{sec5}, we consider a method to separate the polarizations for the LISA-TianQin network.
Finally, a discussion is presented in Sec.\ref{sec7}.
\section{SGWB statistic and LISA-TianQin network\label{sec2}}
\subsection{stochastic background of non-GR polarizations}
The metric perturbations corresponding to SGWB can be expressed as a superposition of plane waves of different frequencies from different directions \cite{2017lrr_detection_GWB}:
\begin{equation}\label{h_ab_t}
h_{ab}(t,\vec{x})=\int_{-\infty}^{\infty}df \int d^2\Omega_{\hat{n}} h_{ab}(f,\hat{n}) e^{i2\pi f(t+\hat{n} \cdot \vec{x}/c)}.
\end{equation}
The Fourier coefficients $h_{ab}(f,\hat{n})$ are random variables, whose statistical properties encode all the information about the background.
In generic metric theory, the coefficients can be expanded in terms of the six spin-2 polarization tensors:
\begin{equation}
h_{ab}(f,\hat{n}) = \sum_A h_{A}(f,\hat{n}) e^{A}_{ab}(\hat{n}) ,
\end{equation}
where $A=\{+, \times, X, Y, B, L\}$ labels the different polarization modes: $\{+,\times\}$ are the tensor modes predicted by general relativity, while $\{X,Y\}$ and $\{B,L\}$ are the vector and scalar modes allowed by the generic metric theory of gravity.
Explicitly, the six spin-2 polarization tensors are
\begin{equation}\label{e_ab_n}
\begin{aligned}
e^{+}_{ab}(\hat{n})=&\hat{\theta}_a \hat{\theta}_b - \hat{\phi}_a \hat{\phi}_b , \quad
e^{\times}_{ab}(\hat{n})=\hat{\theta}_a \hat{\phi}_b + \hat{\phi}_a \hat{\theta}_b ,\\
e^X_{ab}(\hat{n})=&\hat{\theta}_a\hat{n}_b+\hat{n}_a\hat{\theta}_b ,\quad
e^Y_{ab}(\hat{n})=\hat{\phi}_a\hat{n}_b+\hat{n}_a\hat{\phi}_b ,\\
e^B_{ab}(\hat{n})=&\hat{\theta}_a\hat{\theta}_b+\hat{\phi}_a\hat{\phi}_b ,\quad
e^L_{ab}(\hat{n})=\sqrt{2}\hat{n}_a\hat{n}_b ,
\end{aligned}
\end{equation}
and $\hat{\theta}$, $\hat{\phi}$ are the standard angular unit vectors tangent to the sphere:
\begin{equation}\label{n}
\begin{aligned}
\hat{n} &=(\sin\theta \cos\phi,\sin\theta \sin\phi,\cos\theta),\\
\hat{\theta}&=(\cos\theta \cos\phi,\cos\theta \sin\phi,-\sin\theta) ,\\
\hat{\phi} &=(-\sin\phi,\cos\phi,0) .
\end{aligned}
\end{equation}
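For concreteness, the basis vectors of Eq.~(\ref{n}) and the polarization tensors of Eq.~(\ref{e_ab_n}) are straightforward to tabulate numerically. The following Python sketch (NumPy assumed; the function names are ours and purely illustrative) builds all six tensors for a given propagation direction:
\begin{verbatim}
import numpy as np

def basis_vectors(theta, phi):
    # Propagation direction n and tangent vectors theta_hat, phi_hat
    n  = np.array([np.sin(theta)*np.cos(phi),
                   np.sin(theta)*np.sin(phi), np.cos(theta)])
    th = np.array([np.cos(theta)*np.cos(phi),
                   np.cos(theta)*np.sin(phi), -np.sin(theta)])
    ph = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return n, th, ph

def polarization_tensors(theta, phi):
    # Returns {A: e^A_ab} for A in {+, x, X, Y, B, L}
    n, th, ph = basis_vectors(theta, phi)
    o = np.outer
    return {'+': o(th, th) - o(ph, ph),
            'x': o(th, ph) + o(ph, th),
            'X': o(th, n) + o(n, th),
            'Y': o(ph, n) + o(n, ph),
            'B': o(th, th) + o(ph, ph),
            'L': np.sqrt(2.0)*o(n, n)}
\end{verbatim}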
The statistical properties of the SGWB are described by the probability distribution of the metric perturbations.
In this work, we assume that the SGWB is Gaussian, stationary and isotropic.
And without loss of generality we can assume that the background has zero mean $\left\langle h_{A}(f, \hat{n}) \right\rangle=0$.
So all the information is encoded in the quadratic expectation:
\begin{equation}\label{S_h}
\left\langle h_{A}(f, \hat{n}) h_{A^{\prime}}^{*}\left(f^{\prime}, \hat{n}^{\prime}\right)\right\rangle
=\frac{1}{8 \pi} S^A_{h}(f) \delta\left(f-f^{\prime}\right) \delta_{A A^{\prime}} \delta^{2}\left(\hat{n}, \hat{n}^{\prime}\right) ,
\end{equation}
where $S^A_{h}(f)$ can be regarded as the component corresponding to the $A$ polarization of a one-sided gravitational-wave strain power spectral density function.
We further assume that both the tensor and vector modes are unpolarized,
which implies that
\begin{equation}
\begin{aligned}
S^{+}_{h}&=S^{\times}_{h}=S^{T}_{h}/2, \\
S^{X}_{h}&=S^{Y}_{h}=S^{V}_{h}/2.
\end{aligned}
\end{equation}
However, the two scalar modes should be considered as two independent polarization modes,
since one is the longitudinal and the other is transverse.
The function $S^A_{h}(f)$, which characterizes the spectral shape of the SGWB within each polarization sector,
can be detected directly without assuming a model.
However, the amplitude of SGWB for each polarization is characterized by the fractional energy density \cite{1999prd_SGWB_detecte},
\begin{equation}\label{Omega}
\Omega^A_{\mbox{gw}}(f)=\frac{1}{\rho_{c}} \frac{d \rho^A_{\mbox{gw}}}{d \ln f} ,
\end{equation}
defined as the energy density per logarithmic frequency bin, normalized by the critical energy density needed to close the Universe, $\rho_{c} \equiv 3 c^{2} H_{0}^{2} / 8 \pi G $.
Here $G$ is the gravitational constant, and $H_0=67.4\text{km s}^{-1} \text{Mpc}^{-1}$ is the Hubble constant \cite{2020aa_Planck2018_cosmol}.
In general relativity, the relation between $S^A_{h}(f)$ and $\Omega^A_{\mbox{gw}}(f)$ is \cite{1999prd_SGWB_detecte}
\begin{equation}\label{OtoS}
\Omega^A_{\mbox{gw}}(f)=\frac{2\pi^2}{3H_0^2} f^3 S^A_{h}(f).
\end{equation}
In alternative theories of gravity, Eq. (\ref{OtoS}) may not hold unless the stress-energy of gravitational waves also obeys Isaacson's formula \cite{1968prd_Isaacson_Tuv_GW}:
\begin{equation}
\rho_{\mbox{gw}} = \frac{c^2}{32\pi G} \left\langle \dot{h}_{ab}(t,\vec{x}) \dot{h}^{ab}(t,\vec{x}) \right\rangle .
\end{equation}
In this case, $\Omega^A_{\mbox{gw}}(f)$ can be understood as a function of the observable $S^A_{h}(f)$ rather than the fractional energy density.
Many theoretical models of the SGWB predict that the shape of $\Omega^A_{\mbox{gw}}(f)$ can be modeled as a power law \cite{2017lrr_detection_GWB}, such that
\begin{equation}
\Omega^A_{\mbox{gw}}(f)=\Omega^{\alpha_A}_{0}\left(\frac{f}{f_0}\right)^{\alpha_A}.
\end{equation}
Here $\Omega^{\alpha_A}_{0}$ is the amplitude of polarization $A$ at a reference frequency $f_0$ and $\alpha_A$ is the corresponding spectral index.
For instance, the tensor polarization background from compact binary coalescences is modeled by power law with index $\alpha_T=2/3$ \cite{2019prd_LIGO_O2_GWB} and for the inflationary cosmic background is $\alpha_T=0$ \cite{2001cqg_cosmic_SGWB_space_detect}.
\subsection{LISA-TianQin network}
LISA and TianQin are proposed space-based GW missions targeting the frequency band 0.1 mHz -- 1 Hz.
The difference is that LISA follows a heliocentric orbit while TianQin is geocentric.
In addition, the relative angle between their detector planes varies with time.
TianQin consists of three satellites forming a nearly equilateral triangle on a geocentric orbit.
Accurate to first order in the eccentricity, the coordinates of the three satellites of TianQin are
\begin{equation}
\vec{r}_{n} = \vec{r}_0 + \vec{R}_n ,
\end{equation}
where $\vec{r}_0 = (x_0, y_0, z_0)$ is the geocentric coordinate,
\begin{equation}
\begin{aligned}
x_0 &= R\cos\alpha_{TQ} +\frac{1}{2}eR(\cos 2\alpha_{TQ}-3), \\
y_0 &= R\sin\alpha_{TQ} +\frac{1}{2}eR(\sin 2\alpha_{TQ}-3), \\
z_0 &= 0 ,
\end{aligned}
\end{equation}
and $\vec{R}_n = (X_0, Y_0, Z_0)$ are the coordinates of the satellites in the geocentric coordinate system,
\begin{equation}
\begin{aligned}
X_0 &= R_{1}\left(\cos \phi_{s} \sin \theta_{s} \sin \alpha_{n}+\cos \alpha_{n} \sin \phi_{s}\right), \\
Y_0 &= R_{1}\left(\sin \phi_{s} \sin \theta_{s} \sin \alpha_{n}-\cos \alpha_{n} \cos \phi_{s}\right), \\
Z_0 &= -R_{1} \sin \alpha_{n} \cos \theta_{s} .
\end{aligned}
\end{equation}
Here $R = 1\,\text{AU}$, $e = 0.0167$, $\alpha_{TQ} = 2\pi f_m t - \alpha_0$, $f_m = 1/\text{yr}$, $\alpha_0 = 102.9^{\circ}$,
$\alpha_{n} = 2\pi f_{sc}t + \kappa_n$, $\kappa_n = \frac{2}{3}(n-1)\pi$, $R_1 = 1 \times 10^8\,\text{m}$, $\theta_{s}=-4.7^{\circ}$,
$\phi_{s}=120.5^{\circ}$, $f_{sc} = 1/(3.64\,\text{days})$, and $n=1,2,3$ labels the three satellites.
The detector plane orientation is fixed as $(\cos\theta_s\cos\phi_s, \cos\theta_s\sin\phi_s, \sin\theta_s)$ and the arm length is $L=\sqrt{3} \times 10^8\,\text{m} $.
The displacement measurement noise is $S_x^{1/2}=1 \times 10^{-12}\,\text{m}\,\text{Hz}^{-1/2}$ and the residual acceleration
noise is $S_a^{1/2}=1 \times 10^{-15}\,\text{m}\,\text{s}^{-2}\,\text{Hz}^{-1/2}$.
LISA has a heliocentric orbit at $20^\circ$ behind the Earth.
The satellite formation consists of three satellites to form an approximate equilateral triangle, and the coordinates of the three LISA satellites are $r^{\prime}_n = (x^{\prime}_n, y^{\prime}_n, z^{\prime}_n)$,
\begin{equation}
\begin{aligned}
x^{\prime}_n &= R\cos \alpha_{L S}+\frac{1}{2} e^{\prime}R\left(\cos \left(2 \alpha_{L S}-\kappa_{n}\right)-3 \cos \kappa_{n}\right), \\
y^{\prime}_n &= R\sin \alpha_{L S}+\frac{1}{2} e^{\prime}R\left(\sin \left(2 \alpha_{L S}-\kappa_{n}\right)-3 \sin \kappa_{n}\right), \\
z^{\prime}_n &= -\sqrt{3} e^{\prime} R \cos(\alpha_{LS}-\kappa_{n}),
\end{aligned}
\end{equation}
where $\alpha_{LS} = \alpha_{TQ} +20^{\circ} $ and $e^{\prime} = 0.0048$.
The detector plane is inclined to the orbit plane by $60^\circ$ and the arm length is $L^{\prime}= 2.5 \times 10^{9} \text{m}$.
The displacement measurement noise is $S_x^{\prime 1/2}=1.5 \times 10^{-11}\,\text{m}\,\text{Hz}^{-1/2}$ and the residual acceleration
noise is $S_a^{\prime 1/2}=3 \times 10^{-15}\,\text{m}\,\text{s}^{-2}\,\text{Hz}^{-1/2}$.
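As an illustration, the orbit expressions above can be transcribed directly into code. The following Python sketch (NumPy assumed; all names are ours) returns the heliocentric position of TianQin satellite $n$ at time $t$, in seconds, to first order in the eccentricity:
\begin{verbatim}
import numpy as np

AU   = 1.496e11                  # m
R1   = 1.0e8                     # m, TianQin constellation radius
ecc  = 0.0167
f_m  = 1.0/(365.25*24*3600.0)    # 1/yr
f_sc = 1.0/(3.64*24*3600.0)      # TianQin constellation frequency
th_s, ph_s = np.radians(-4.7), np.radians(120.5)
alpha0 = np.radians(102.9)

def tianqin_satellite(n, t):
    a_tq = 2*np.pi*f_m*t - alpha0
    x0 = AU*np.cos(a_tq) + 0.5*ecc*AU*(np.cos(2*a_tq) - 3)
    y0 = AU*np.sin(a_tq) + 0.5*ecc*AU*(np.sin(2*a_tq) - 3)
    a_n = 2*np.pi*f_sc*t + 2.0*(n - 1)*np.pi/3.0
    X0 = R1*(np.cos(ph_s)*np.sin(th_s)*np.sin(a_n) + np.cos(a_n)*np.sin(ph_s))
    Y0 = R1*(np.sin(ph_s)*np.sin(th_s)*np.sin(a_n) - np.cos(a_n)*np.cos(ph_s))
    Z0 = -R1*np.sin(a_n)*np.cos(th_s)
    return np.array([x0 + X0, y0 + Y0, Z0])
\end{verbatim}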
\subsection{noise and response for time delay interferometry}
There are six laser links between the three satellites.
Laser noise can be effectively reduced by constructing time delay interferometry combinations.
Any TDI combination can be expressed in terms of a polynomial of the delay operator acting on the six received signals,
\begin{equation}\label{tdi}
TDI = \sum_{i} P_i s_i .
\end{equation}
Here $i = 1,2,3,1^{\prime},2^{\prime},3^{\prime}$ represents a link respectively: $2\rightarrow1$, $3\rightarrow2$, $1\rightarrow3$, $3\rightarrow1$, $1\rightarrow2$, $2\rightarrow3$.
The time delay operator is defined as $D_i s_j(t) = s_j(t-L_i/c)$,
and converted to the frequency domain to $\tilde{D}_i = e^{-i2\pi fL_i/c}$.
For example, the coefficients of the first-generation TDI Michelson combination $X$ are given by
\begin{equation}
\begin{aligned}
P_1 &= D_{2^{\prime}2}-1 , P_2 = 0, P_3 = D_{2^{\prime}}-D_{33^{\prime}2^{\prime}} \\
P_{1^{\prime}} &= 1-D_{33^{\prime}} , P_{2^{\prime}} = D_{2^{\prime}23}-D_{3}, P_{3^{\prime}} = 0, \\
\end{aligned}
\end{equation}
which represents a laser interferometry link:
$$ [1\rightarrow 2 \rightarrow 1 \rightarrow 3 \rightarrow 1] - [1\rightarrow 3 \rightarrow 1 \rightarrow 2 \rightarrow 1].
$$
The equivalent expressions for Michelson channel $Y$ and $Z$ can be obtained by permuting the label $\{1,2,3\}$.
Three noise-orthogonal channels can be constructed from linear combinations of $X, Y, Z$,
\begin{equation}
A=\frac{Z-X}{\sqrt{2}},E=\frac{X-2Y+Z}{\sqrt{6}},T=\frac{X+Y+Z}{\sqrt{3}}.
\end{equation}
After removing the laser phase noise, residual acceleration noise and displacement measurement noise remain in the TDI combination.
The power spectral density (PSD) of the remaining noise is
\begin{equation}
P_n = \frac{1}{L^2}\left[C_1 S_x + (2C_1 + C_2\cos\beta)\frac{S_a}{(2\pi f)^4}\right],
\end{equation}
where $\beta = 2\pi f L/c$ and the coefficients
\begin{equation}
\begin{aligned}
C_1 &= \sum_i |\tilde{P}_i|^2 , \\
C_2 &= \Re(\tilde{P}_1 \tilde{P}^*_{2^{\prime}} + \tilde{P}_2 \tilde{P}^*_{3^{\prime}} + \tilde{P}_3 \tilde{P}^*_{1^{\prime}}) .
\end{aligned}
\end{equation}
For example, the PSD of $X$ channel is
\begin{equation}
P_{n,X} = \frac{16\sin^2\beta}{L^2}\left[S_x + \frac{S_a}{(2\pi f)^4}(3 + \cos 2\beta)\right].
\end{equation}
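As a sanity check, the $X$-channel noise PSD above can be evaluated numerically with a few lines of Python (NumPy assumed), using the LISA arm length and noise levels quoted in the previous subsection:
\begin{verbatim}
import numpy as np

c  = 299792458.0
L  = 2.5e9                 # m, LISA arm length
Sx = (1.5e-11)**2          # m^2/Hz, displacement noise PSD
Sa = (3.0e-15)**2          # m^2 s^-4/Hz, acceleration noise PSD

def psd_X(f):
    # PSD of the first-generation Michelson X channel (strain-like units)
    beta = 2*np.pi*f*L/c
    return (16*np.sin(beta)**2/L**2)*(Sx + Sa/(2*np.pi*f)**4*(3 + np.cos(2*beta)))

f  = np.logspace(-4, 0, 400)
Pn = psd_X(f)
\end{verbatim}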
Since gravitational waves are weak, it is accurate enough to calculate the detector response to linear order.
In the frequency domain, the gravitational-wave signal can be expressed as
\begin{equation}\label{h_f}
\tilde{h}(f)=\int d^2\Omega_{\hat{n}} \sum_A R^A(f,\hat{n})h_A(f,\hat{n}),
\end{equation}
where the response function is $R^A(f,\hat{n})= R^{ab}(f,\hat{n})e^A_{ab}(\hat{n})$.
The response for any TDI channel (\ref{tdi}) is
\begin{equation}
R_{T D I}^{a b}=\sum_{i} P_{i} R^{a b}\left(f, \hat{n}, \hat{u}_{i}, \vec{r}_{i}\right) ,
\end{equation}
where $R^{a b}\left(f, \hat{n}, \hat{u}_{i}, \vec{r}_{i}\right)$ is the impulse response of a single arm,
$\hat{u}_{i}$ is the direction unit vector of the arm and $\vec{r}_{i}$ is the midpoint of the arm.
Here the choice of $\vec{r}_{i}$ differs from the literature for the convenience of calculation,
such that the impulse response become
\begin{equation}
R^{a b}(f, \hat{n}, \hat{u}, \vec{r})=\frac{1}{2} u^{a} u^{b} \mathcal{T}_{\hat{u}}(f, \hat{n} \cdot \hat{u}) e^{i 2\pi f \hat{n} \cdot \vec{r} / c} ,
\end{equation}
where
\begin{equation}
\begin{aligned} \mathcal{T}(f, \hat{n} \cdot \hat{u}) & \equiv \frac{c}{i 2 \pi f L} \frac{1}{1+\hat{n} \cdot \hat{u}}\left[e^{\frac{i \pi f L}{c}(\hat{n} \cdot \hat{u})}-e^{-\frac{i \pi f L}{c}(2+\hat{n} \cdot \hat{u})}\right] \\ &=e^{-\frac{i \pi f L}{c}} \operatorname{sinc}\left(\frac{\pi f L}{c}[1+\hat{n} \cdot \hat{u}]\right)
\end{aligned}
\end{equation}
is the transfer function.
\section{correlation analysis\label{sec3}}
\subsection{cross-correlation signal}
Usually, the SGWB is very weak: it is masked by the detector noise, and its statistical characteristics are close to those of the noise.
So it is difficult to distinguish the noise from the SGWB signal in a single detector.
The correlation analysis is a powerful method to detect SGWB \cite{1999prd_SGWB_detecte}.
We review this method and apply it to the LISA-TianQin network in this section.
We start with the output signals of the two detectors,
\begin{equation}
\begin{aligned}
s_I(t) = h_I(t) + n_I(t) ,\\
s_J(t) = h_J(t) + n_J(t) ,
\end{aligned}
\end{equation}
where $I,J$ denote TianQin and LISA in this paper.
And the correlation signal is
\begin{equation}\label{S_t}
S=\int_{-T / 2}^{T / 2} d t s_{I}(t) s_{J}(t) ,
\end{equation}
or in frequency domain
\begin{equation}\label{S_f}
S=\int_{-\infty}^{\infty} d f \int_{-\infty}^{\infty} d f^{\prime} \delta_{T}\left(f-f^{\prime}\right) \tilde{s}_{I}(f) \tilde{s}_{J}^{*}\left(f^{\prime}\right) ,
\end{equation}
where $\delta_{T}(f)=\int_{-T / 2}^{T / 2} d t e^{-i 2 \pi f t}=\frac{\sin (\pi f T)}{\pi f} $.
Assume that the two detector noises are uncorrelated, the mean of the correlation signal is
\begin{equation}
\mu=\langle S\rangle=\int_{-\infty}^{\infty} d f \int_{-\infty}^{\infty} d f^{\prime} \delta_{T}\left(f-f^{\prime}\right)\left\langle\tilde{h}_{I}(f) \tilde{h}_{J}^{*}\left(f^{\prime}\right)\right\rangle .
\end{equation}
Combining Eq. (\ref{S_h}) and Eq. (\ref{h_f}),
\begin{equation}
\left\langle\tilde{h}_{I}(f) \tilde{h}_{J}^{*}\left(f^{\prime}\right)\right\rangle=\frac{1}{2} \delta\left(f-f^{\prime}\right) \sum_{A=\{T, V, B, L\}} \Gamma_{I J}^{A}(f) S_{h}^{A}(f) ,
\end{equation}
where the overlap reduction functions are
\begin{equation}
\begin{aligned}
\Gamma^T_{IJ}(f)=&\frac{1}{8\pi}\int d^2 \Omega_{\hat{n}}\sum_{A={+,\times}}R^{A}_I(f,\hat{n})R^{A*}_J(f,\hat{n}) ,\\
\Gamma^V_{IJ}(f)=&\frac{1}{8\pi}\int d^2 \Omega_{\hat{n}}\sum_{A={X,Y}}R^{A}_I(f,\hat{n})R^{A*}_J(f,\hat{n}) ,\\
\Gamma^B_{IJ}(f)=&\frac{1}{4\pi}\int d^2 \Omega_{\hat{n}} R^{B}_I(f,\hat{n})R^{B*}_J(f,\hat{n}) ,\\
\Gamma^L_{IJ}(f)=&\frac{1}{4\pi}\int d^2 \Omega_{\hat{n}} R^{L}_I(f,\hat{n})R^{L*}_J(f,\hat{n}) .
\end{aligned}
\end{equation}
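When no closed form is available, these sky integrals can always be evaluated by brute force. The following Python sketch (NumPy assumed; names are ours) estimates the tensor ORF between two single arms by Monte Carlo integration over the sky, using the single-arm response and transfer function of Sec.~\ref{sec2}:
\begin{verbatim}
import numpy as np

c = 299792458.0

def transfer(f, L, mu):
    # T(f, n.u); note np.sinc(y) = sin(pi y)/(pi y)
    x = np.pi*f*L/c
    return np.exp(-1j*x)*np.sinc(x*(1 + mu)/np.pi)

def arm_response(f, n, u, r, L, eA):
    # R^A = (1/2) u^a u^b e^A_ab T(f, n.u) exp(i 2 pi f n.r/c)
    return 0.5*(u @ eA @ u)*transfer(f, L, n @ u)*np.exp(2j*np.pi*f*(n @ r)/c)

def orf_tensor(f, uI, rI, LI, uJ, rJ, LJ, nmc=20000, seed=0):
    # Gamma^T(f) = (1/8 pi) int dOmega sum_{+,x} R_I^A R_J^{A*}
    rng, acc = np.random.default_rng(seed), 0.0
    for _ in range(nmc):
        cth, phi = rng.uniform(-1, 1), rng.uniform(0, 2*np.pi)
        sth = np.sqrt(1 - cth**2)
        n  = np.array([sth*np.cos(phi), sth*np.sin(phi), cth])
        th = np.array([cth*np.cos(phi), cth*np.sin(phi), -sth])
        ph = np.array([-np.sin(phi), np.cos(phi), 0.0])
        for eA in (np.outer(th, th) - np.outer(ph, ph),
                   np.outer(th, ph) + np.outer(ph, th)):
            acc += (arm_response(f, n, uI, rI, LI, eA)
                    *np.conj(arm_response(f, n, uJ, rJ, LJ, eA)))
    return acc/(2*nmc)      # (4 pi/8 pi) times the sample mean
\end{verbatim}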
We will calculate the ORFs analytically in the next subsection.
So the mean of signal is
\begin{equation}
\mu=\frac{T}{2} \int_{-\infty}^{\infty} d f \sum_{A} \Gamma_{IJ}^{A}(f) S_{h}^{A}(f) .
\end{equation}
Assume that the signal is much smaller than the noise, such that
\begin{equation}
\left\langle\tilde{s}_{i}(f) \tilde{s}_{i}^{*}\left(f^{\prime}\right)\right\rangle
\approx\left\langle\tilde{n}_{i}(f) \tilde{n}_{i}^{*}\left(f^{\prime}\right)\right\rangle
=\frac{1}{2} \delta\left(f-f^{\prime}\right) P_{i}(f) .
\end{equation}
The variance is
\begin{equation}
\sigma^{2}=\left\langle S^{2}\right\rangle-\langle S\rangle^{2}
\approx\frac{T}{4} \int_{-\infty}^{\infty} d f P_{I}(f) P_{J}(f) .
\end{equation}
So the signal-to-noise ratio (SNR) is
\begin{equation}
\rho=\frac{\mu}{\sigma}=\sqrt{T}
\frac{\int_{-\infty}^{\infty} d f \sum_{A} \Gamma_{IJ}^{A}(f) S_{h}^{A}(f)}
{\sqrt{\int_{-\infty}^{\infty} d f P_{I}(f) P_{J}(f)}} .
\end{equation}
\subsection{overlap reduction function}
\begin{figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{fig1_ORF_X_re} \\
\includegraphics[width=0.4\textwidth]{fig1_ORF_X_im}
\caption{The tensor ORF of the $X$ channel for the LISA-TianQin network. The real and imaginary parts of the ORF are shown in the upper and lower panels respectively, at time $t=0$. }\label{fig1}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{fig2_ORF_A}
\caption{The ORFs of different polarizations for the $A$ channel of the LISA-TianQin network, plotted at $t=0$.
The real and imaginary parts are represented by different types of curves.}\label{fig2}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{fig3_ORF_A_BvsL}
\caption{The comparison of the $A$-channel ORFs of the two scalar modes at $t=0$. The top panel shows the real part, the bottom panel the imaginary part. }\label{fig3}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{fig4_ORF_A_times}
\caption{The ORF at different times. The left and right sides from top to bottom are the real and imaginary parts of the ORF at $t=0$, $t=1\text{day}$ and $t=\text{yr}/2$ respectively.}\label{fig4}
\end{figure*}
The overlap reduction function can be interpreted as the response of the cross-correlation of two detectors to an isotropic SGWB \cite{2017lrr_detection_GWB}.
The overlap reduction function mainly depends on three factors: detector similarity, separation and orientation relative to one another.
In the past literature, one usually considers two identical detectors placed in different locations.
In fact, for detectors like LISA-TianQin, the different arm lengths result in slightly different frequency bands for their respective responses, which is an important factor leading to the reduction of ORF.
On the other hand, changes in orbits cause ORFs to change over time.
In addition, the small-antenna limit that applies to ground-based detectors is no longer always applicable to space-based detectors in the detection band.
For example, the characteristic frequencies $f= c/(2L)$ for LISA and TianQin are $0.06\,\text{Hz}$ and $0.86\,\text{Hz}$ respectively.
The small-antenna limit requires that the frequency be much smaller than the characteristic frequency.
However, the most sensitive frequency band of LISA-TianQin is $10^{-3}$Hz -- $10^{-1}$ Hz, so the small antenna limit is not always satisfied.
Based on the above considerations, the ORF of LISA-TianQin network deserves a careful discussion.
For any TDI channel, the ORF for $A$ polarization is
\begin{widetext}
\begin{equation}\label{Gamma_TDI}
\begin{aligned}
\Gamma_{TDI}^{A}(f)&=\frac{1}{8 \pi} \int d^{2} \Omega_{\hat{n}}
\sum_{A} R_{I}^{A}(f, \hat{n}) R_{J}^{A^{*}}(f, \hat{n}) \\
&= \frac{1}{8 \pi} \sum_{i, j} P_{I, i} P_{J, j}^{*} \int d^{2} \Omega_{\hat{n}} \sum_{A} R_{I}^{a b}\left(f, n, \hat{u}_{i}, \vec{r}_{i}\right) R_{J}^{c d^{*}}\left(f, n, \hat{u}_{j}^{\prime}, \vec{r}_{j}^{\prime}\right) e_{a b}^{A} e_{c d}^{A} \\
&= \frac{e^{\frac{i}{2}\left(\beta^{\prime}-\beta\right)}}{32 \pi} \sum_{i, j} P_{I, i} P_{J, j}^{*} \hat{u}_{i}^{a} \hat{u}_{i}^{b} \hat{u}_{j}^{\prime c} \hat{u}_{j}^{\prime d} \Gamma^A_{a b c d}\left(\alpha_{i j}, \beta, \beta^{\prime}, \hat{u}_{i}, \hat{u}_{j}^{\prime}, s_{i j}\right) ,
\end{aligned}
\end{equation}
where $\beta = 2\pi f L/c$ , $\beta^{\prime} = 2\pi f L^{\prime}/c$ and
\begin{equation}\label{Gamma_abcd}
\Gamma^A_{a b c d}\left(\alpha_{ij}, \beta, \beta^{\prime}, \hat{u}_{i}, \hat{u}_{j}^{\prime}, \hat{s}_{i j}\right)=\int d^{2} \Omega_{\hat{n}} \sum_{A} \operatorname{sinc}\left(\frac{\beta}{2}\left[1+\hat{n} \cdot \hat{u}_{i}\right]\right) \operatorname{sinc}\left(\frac{\beta^{\prime}}{2}\left[1+\hat{n} \cdot \hat{u}_{j}^{\prime}\right]\right)
e_{a b}^{A} e_{c d}^{A} e^{-i \alpha_{i j} \hat{n} \cdot \hat{s}_{ij}}.
\end{equation}
\end{widetext}
Here $ \alpha_{i j} = 2 \pi f s_{i j} / c $,
$ s_{i j} \equiv\left|\Delta \vec{x}_{i j}\right|=\left|\vec{r}_{j}^{\prime}-\vec{r}_{i}\right| $
and $ \hat{s}_{i j} = \Delta \vec{x}_{i j} / s_{i j} $.
To keep the definition consistent, the sum means $e_{a b}^{+} e_{c d}^{+} + e_{a b}^{\times} e_{c d}^{\times}$ for $A=T$,
$e_{a b}^{X} e_{c d}^{X} + e_{a b}^{Y} e_{c d}^{Y}$ for $A=V$,
$2e_{a b}^{B} e_{c d}^{B}$ for $A=B$ and $2e_{a b}^{L} e_{c d}^{L}$ for $A=L$.
In this way, the ORF of any TDI channel can be disassembled and calculated between two separate arms.
In general, the integral in Eq. (\ref{Gamma_abcd}) can not be calculated analytically.
In the small-antenna limit $\beta,\beta^{\prime} \ll 1$, it can be calculated analytically \cite{1993prd_orf_LIGO, 1999prd_SGWB_detecte}.
The result is
\begin{equation}
\begin{aligned}
\Gamma_{a b c d}^{A(0)}(\alpha, \hat{s}) &= A^{A(0)}(\alpha) \delta_{a b} \delta_{c d}
+B^{A(0)}(\alpha)\left(\delta_{a c} \delta_{b d}+\delta_{b c} \delta_{a d}\right) \\
&+C^{A(0)}(\alpha)\left(\delta_{a b} s_{c} s_{d}+\delta_{c d} s_{a} s_{b}\right) \\
&+D^{A(0)}(\alpha)\left(\delta_{a c} s_{b} s_{d}+\delta_{a d} s_{b} s_{c} \right. \\
&\left.+\delta_{b c} s_{a} s_{d}+\delta_{b d} s_{a} s_{c}\right)
+E^{A(0)}(\alpha) s_{a} s_{b} s_{c} s_{d} ,
\end{aligned}
\end{equation}
where the coefficients are given by
\begin{equation}
X^{A(0)} =(M^{(0)})^{-1}Y^{A(0)}.
\end{equation}
Here,
\begin{equation}
X^{A(0)} = \left[\begin{array}{c}A^{A(0)} \\ B^{A(0)} \\ C^{A(0)} \\
D^{A(0)} \\ E^{A(0)}\end{array}\right],
M^{(0)}=\left[\begin{array}{rrrrr}9 & 6 & 6 & 4 & 1 \\
6 & 24 & 4 & 16 & 2 \\
6 & 4 & 8 & 8 & 2 \\
4 & 16 & 8 & 24 & 4 \\
1 & 2 & 2 & 4 & 1 \\ \end{array}\right],
\end{equation}
and
\begin{equation}
Y^{T(0)}=32\pi\left[\begin{array}{c}0 \\j_{0}(\alpha) \\ 0 \\ 2j_{1}(\alpha)/\alpha \\ j_{2}(\alpha)/\alpha^2\end{array}\right] ,
\end{equation}
\begin{equation}
Y^{V(0)}=32\pi\left[\begin{array}{c}0 \\j_{0}(\alpha) \\ 0 \\j_{0}(\alpha)- j_{1}(\alpha)/\alpha \\ j_{1}(\alpha)/\alpha-4j_{2}(\alpha)/\alpha^2\end{array}\right] ,
\end{equation}
\begin{equation}
Y^{B(0)}=32\pi\left[\begin{array}{c}j_{0}(\alpha) \\j_{0}(\alpha) \\ 2j_{1}(\alpha)/\alpha
\\2j_{1}(\alpha)/\alpha \\ 2j_{2}(\alpha)/\alpha^2\end{array}\right] ,
\end{equation}
\begin{equation}
Y^{L(0)}=16\pi\left[\begin{array}{c}j_{0}(\alpha) \\2j_{0}(\alpha) \\ 2j_{1}(\alpha)/\alpha -2j_{0}(\alpha)
\\4j_{1}(\alpha)/\alpha -4j_{0}(\alpha) \\ \left(8/\alpha^2-1\right)j_{2}(\alpha) -j_{1}(\alpha)/\alpha\end{array}\right].
\end{equation}
So the ORF of any TDI channel in the small antenna limit is
\begin{equation}
\begin{aligned}
\Gamma_{TDI}^{A0}(f)&= \frac{e^{\frac{i}{2}\left(\beta^{\prime}-\beta\right)}}{32 \pi} \sum_{i, j} P_{I, i} P_{J, j}
\left\{A^{A(0)}(\alpha_{ij}) \right.\\
&+2 B^{A(0)}(\alpha_{ij})\left(1+\hat{u}_{i} \cdot \hat{u}_{j}^{\prime}\right)\\
&+C^{A(0)}(\alpha_{ij})\left(\left(\hat{u}_{i} \cdot \hat{s}_{ij}\right)^{2}
+\left(\hat{u}_{j}^{\prime} \cdot \hat{s}_{ij}\right)^{2}\right)\\ &
+4 D^{A(0)}(\alpha_{ij})\left(\hat{u}_{i} \cdot \hat{u}_{j}^{\prime}\right)\left(\hat{u}_{i} \cdot \hat{s}_{ij}\right)
\left(\hat{u}_{j}^{\prime} \cdot \hat{s}_{ij}\right) \\
&\left.+E^{A(0)}(\alpha_{ij})\left(\hat{u}_{i} \cdot \hat{s}_{ij}\right)^{2}\left(\hat{u}_{j}^{\prime} \cdot \hat{s}_{ij}\right)^{2}\right\} .
\end{aligned}
\end{equation}
Notice that there is an extra phase factor $e^{\frac{i}{2}\left(\beta^{\prime}-\beta\right)}$, due to the difference in the arm lengths of LISA and TianQin.
For the LISA-TianQin network, the phase factor $e^{i2\pi f/65\,\text{mHz}}$ cannot be ignored in the detection band.
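In practice the coefficient vector $X^{A(0)}$ is obtained by solving the $5\times 5$ linear system above numerically. A minimal Python sketch (NumPy and SciPy assumed) for the tensor sector:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

M0 = np.array([[9,  6, 6,  4, 1],
               [6, 24, 4, 16, 2],
               [6,  4, 8,  8, 2],
               [4, 16, 8, 24, 4],
               [1,  2, 2,  4, 1]], dtype=float)

def coeffs_tensor(alpha):
    # Solves M0 X = Y^{T(0)} for (A, B, C, D, E)^{T(0)}
    j0, j1, j2 = (spherical_jn(l, alpha) for l in (0, 1, 2))
    Y = 32*np.pi*np.array([0.0, j0, 0.0, 2*j1/alpha, j2/alpha**2])
    return np.linalg.solve(M0, Y)
\end{verbatim}
The vector, breathing and longitudinal sectors follow by replacing $Y^{T(0)}$ with the corresponding $Y^{A(0)}$ above.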
Since the small antenna limit may not work well, we extend it slightly.
We expand the Eq. (\ref{Gamma_abcd}) as a Taylor series of the frequency $f$.
To the zeroth order term is the expression for the small antenna approximation above.
Expanding to the next order, we obtain that
\begin{equation}\label{Gamma2}
\begin{aligned}
\Gamma_{a b c d}^{A2}&\left(\alpha_{ij}, \beta, \beta^{\prime}, \hat{u}_{i}, \hat{u}_{j}^{\prime}, \hat{s}_{i j}\right)
=\Gamma_{a b c d}^{A(0)}\left(\alpha_{ij}, \hat{s}_{i j}\right)\\
&+\Gamma_{a b c d}^{A(2)}\left(\alpha_{ij}, \beta, \hat{u}_{i}, \hat{s}_{i j}\right)
+\Gamma_{a b c d}^{A(2)}\left(\alpha_{ij}, \beta^{\prime}, \hat{u}_{j}^{\prime}, \hat{s}_{i j}\right),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\Gamma_{a b c d}^{A(2)}(\alpha, \beta, \hat{u}, \hat{s})
&=-\frac{1}{6} \int d^{2} \Omega_{\hat{n}}\left(\frac{\beta}{2}[1+\hat{n} \cdot \hat{u}]\right)^{2} \\
&\times \sum_{A} e_{a b}^{A} e_{c d}^{A} e^{-i \alpha \hat{n} \cdot \hat{s}} .
\end{aligned}
\end{equation}
Following the same method, we construct it from $\delta_{ab}$, $s_{a}$ and $u_a$ under the premise of ensuring its symmetry.
Then, solving the linear equations for the coefficients, we obtain the second-order ORF.
The details of the calculation are provided in the Appendix.
In order to quantify the accuracy of the expanded ORF, we compare the zeroth-order (small-antenna approximation) and second-order expressions with numerical integration, taking the $X$ channel as an example, as shown in Fig. \ref{fig1}.
For frequencies below the characteristic frequency $f<c/(2L^{\prime})=0.06\,\text{Hz}$,
the second-order expression is more accurate than the zeroth-order one and agrees well with the numerical integration.
If the phase factor $e^{\frac{i}{2}\left(\beta^{\prime}-\beta\right)}$ is ignored, the zeroth-order accuracy becomes even worse.
For frequencies above the characteristic frequency $f>c/(2L^{\prime})=0.06\,\text{Hz}$, the second-order expression is less accurate than the zeroth-order one, since its error grows faster.
To improve accuracy, we choose to splice the two at the characteristic frequency,
\begin{equation}
\Gamma^A_{TDI}(f) = \begin{cases}
\Gamma_{TDI}^{A0}(f) & f\geq c/(2L^{\prime}) \\
\Gamma_{TDI}^{A2}(f) & f<c/(2L^{\prime})
\end{cases} .
\end{equation}
The two expressions join smoothly at the connection point, where both equal zero.
This concatenated expression has sufficient precision for SGWB data analysis.
The detector is insensitive to frequency bands above the characteristic frequency due to loud noise and low response for SGWB.
Also, even if the numerical integration method is used, the calculation for frequencies above the characteristic frequency is very slow and the result is inaccurate, due to the rapid oscillation caused by the high frequency.
In Fig. \ref{fig2}, we show the ORF of $A$ channel for different polarizations.
We choose not to normalize the ORF, because normalization is not straightforward for the TDI combinations of the LISA-TianQin network.
Unlike ground-based detectors, TDI combinations are insensitive to low frequencies because of the virtual
equal arm interferometric measurements.
Actually, the ORF should be proportional to $(\beta\beta^{\prime})^2$.
On the other hand, it is not trivial to define them co-located for any TDI channel, since LISA and TianQin have different arm lengths.
Furthermore, the ORF of different polarizations also share some common characteristics.
Their zero point distributions are similar: $f \approx nc/(2|\Delta \vec{x}|)$, $f = nc/(2L)$ and $f = nc/(2L^{\prime})$.
Due to the different arm lengths of the two detectors, TianQin is still sensitive in the frequency band
$f > c/(2L^{\prime}) \approx 60\text{mHz}$, so the ORF does not decay rapidly to 0 beyond $f = c/(2L^{\prime}) \approx 60\text{mHz}$.
At low frequency approximations, the ORF of the scalar-longitudinal and scalar-breathing modes have similar patterns
$\Gamma^L=2\Gamma^B$.
However, as the frequency increases, the degeneracy between them is removed as shown in Fig. \ref{fig3}.
This implies that it is possible to resolve two scalar modes through the LISA-TianQin network.
The prerequisite is that the signal-to-noise ratio is required to be very high, since their ORF differ very little.
What is more, we plot the ORFs for different polarizations at different times in Fig. \ref{fig4}.
The time-varying property makes data analysis more difficult,
but also increases the chance of resolving different polarizations.
\section{sensitivity for the background of alternative polarizations\label{sec4}}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{fig5_snr}
\caption{The SNR of a power-law SGWB with different amplitudes $\Omega^A$ and indices $\alpha^A$ for different polarizations. The reference frequency is chosen to be $f_0=1\,\text{mHz}$. }\label{fig5}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{fig6_rho_norm}
\caption{The normalized SNR as a function of the cutoff frequency. }\label{fig6}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{fig7_delta}
\caption{The uncertainties of the amplitude $\Omega^A$ and index $\alpha^A$ for different polarizations. The uncertainties of the index $\alpha^A$ are calculated with suitable amplitude parameters such that the SNR is 10. }\label{fig7}
\end{figure}
\subsection{optimal filter}
In addition to the naive method of directly correlating the outputs of two detectors in Sec. \ref{sec3},
the idea of matched filter is often used to improve the SNR.
Matched filtering starts by multiplying the correlation signal (\ref{S_f}) by a filter function,
\begin{equation}\label{Sm}
S_m=\int^{\infty}_{-\infty}df \int^{\infty}_{-\infty}df^{\prime}\delta_T(f-f^{\prime})
\tilde{s}_I(f) \tilde{s}^{*}_J (f^{\prime}) Q(f).
\end{equation}
The SNR is \cite{1999prd_SGWB_detecte}
\begin{equation}
\rho_m^2=T\left(\frac{3H_0^2}{2\pi^2}\right)^2 \frac{(Q,\frac{\sum_A \Gamma^A(f) \Omega^A_h(f)}{f^3P_I(f)P_J(f)})^2}{(Q,Q)} ,
\end{equation}
where the inner product of $A$ and $B$ is defined as $(A,B)\equiv \int_{-\infty}^{\infty}dfA^*(f)B(f) P_I(f)P_J(f)$.
And the optimal filter is
\begin{equation}\label{filter}
Q(f) \propto \frac{\sum_A \Gamma^A(f)\Omega^A_h(f)}{f^3P_I(f)P_J(f)},
\end{equation}
which is optimal when the model matches the actual energy-density spectrum.
The resulting optimal SNR is given by
\begin{equation}\label{snr_o}
\rho_{o}=\sqrt{T}\frac{3H_0^2}{2\pi^2}\left[\int^{\infty}_{-\infty}df \frac{\left|\sum_A \Gamma^A(f)\Omega^A_h(f) \right|^2}{f^6P_I(f)P_J(f)}\right]^{1/2} .
\end{equation}
The premise of the above expression is that the noise amplitude is much larger than the signal.
In general, it will be modified to \cite{2001prd_SGWB_LISA}
\begin{equation}
\rho_{o}=\sqrt{T}\frac{3H_0^2}{2\pi^2}\left[\int^{\infty}_{-\infty}df \frac{\left|\sum_A \Gamma^A(f)\Omega^A_h(f) \right|^2}{f^6 M(f)}\right]^{1/2} ,
\end{equation}
where
\begin{equation}
\begin{aligned}
M(f) &= P_I(f)P_J(f) + S_h(f)\left(P_I(f)\Gamma_{JJ}(f) + P_J(f)\Gamma_{II}(f)\right) \\
&+ S^2_h(f)\left(\Gamma^2_{IJ}(f)+\Gamma_{II}(f)\Gamma_{JJ}(f)\right).
\end{aligned}
\end{equation}
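As an illustration, the optimal SNR of Eq.~(\ref{snr_o}) for a single power-law component can be evaluated as follows (a Python sketch, NumPy assumed; \verb|Gamma_A|, \verb|P_I| and \verb|P_J| are user-supplied callables, e.g. built from the ORF and PSD sketches above):
\begin{verbatim}
import numpy as np

H0 = 67.4e3/3.0857e22     # Hubble constant in 1/s

def optimal_snr(Omega0, alpha, Gamma_A, P_I, P_J, T_obs,
                f0=1e-3, fmin=1e-4, fmax=1.0, nf=2000):
    f = np.logspace(np.log10(fmin), np.log10(fmax), nf)
    Omega = Omega0*(f/f0)**alpha
    integrand = np.abs(Gamma_A(f)*Omega)**2/(f**6*P_I(f)*P_J(f))
    # the factor 2 folds in the integral over negative frequencies
    return np.sqrt(T_obs)*(3*H0**2/(2*np.pi**2))*np.sqrt(2*np.trapz(integrand, f))
\end{verbatim}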
Assuming that the spectrum of SGWB is power-law, the optimal SNR calculated by Eq. (\ref{snr_o}) can be used to evaluate the detection capability of LISA-TianQin network for SGWB.
We show the SNR of SGWB with different polarizations in Fig. \ref{fig5},
where the observation time is chosen as $T=1\text{yr}$.
The results show that the LISA-TianQin network is more sensitive to the tensor and vector polarizations than to the two scalar polarizations.
For example, for the same spectral index $\alpha^A=2/3$, in order to achieve a SNR of 10, the amplitudes at reference frequency for different polarizations are required to be: $\Omega_0^T=2.68 \times 10^{-11}$, $\Omega_0^V=2.87 \times 10^{-11}$,
$\Omega_0^B=6.81 \times 10^{-11}$ and $\Omega_0^L=3.42 \times 10^{-11}$ .
In addition, in order to find the most sensitive frequency band of LISA-TianQin, we define a normalized SNR as a function of the cutoff frequency,
\begin{equation}
\hat{\rho}_{o}(f_{\text{cut}})=\frac{1}{\rho_{o}}\sqrt{T}\frac{3H_0^2}{2\pi^2}\left[2\int^{f_{\text{cut}}}_{0}df \frac{\left|\sum_A \Gamma^A(f)\Omega^A_h(f) \right|^2}{f^6P_I(f)P_J(f)}\right]^{1/2} .
\end{equation}
We show the function $\hat{\rho}_{o}(f_{\text{cut}})$ in Fig. \ref{fig6} where the power-law SGWB with spectrum index $\alpha_A=2/3$ are assumed.
The results show that the frequency band that contributes the most to the SNR of background from compact binary coalescences is $1\text{mHz}-10\text{mHz}$.
In turn, this verifies that the error of the ORF in the high-frequency region does not affect the data analysis.
Also note that the curves for the scalar-breathing and the scalar-longitudinal mode are the same, since their responses are the same at low frequency approximations.
If we increase the spectrum index, which means that the high frequency part of the SGWB is louder, the curves for the scalar-breathing and the scalar-longitudinal mode will be differentiated.
\subsection{parameter estimation accuracy}
Under the premise that the model is accurate, the optimal filter can not only improve the signal-to-noise ratio, but also provide an estimate of the model parameters.
When the SNR is high enough, the Fisher information matrix (FIM) can be used to evaluate parameter estimation accuracy \cite{2007prd_LISA_parameter_error}.
The FIM is defined by
\begin{equation}
F_{ab}=T \int^{\infty}_{-\infty}df
\frac{(\sum_A \Gamma^A(f)\frac{\partial S^A_h(f)}{\partial \theta_a})(\sum_A \Gamma^{A*}(f)\frac{\partial S^A_h(f)}{\partial \theta_b})}{P_I(f)P_J(f)} ,
\end{equation}
and the estimation error of a parameter, $\Delta \theta_a$, is obtained from the inverse of the FIM,
\begin{equation}
\Delta \theta_a =\sqrt{F_{aa}^{-1}}.
\end{equation}
There are a total of eight parameters we need to estimate for a power law model that includes all polarizations.
And the parameter estimation errors are given by
\begin{equation}
\begin{aligned}
\Delta\Omega^A_0 &= \frac{1}{\sqrt{T}} \frac{2\pi^2}{3H_0^2} \left[\int^{\infty}_{-\infty}df
\frac{ |\Gamma^A(f)|^2(f/f_0)^{2\alpha_A}}{f^6P_I(f)P_J(f)}\right]^{-1/2}, \\
\Delta\alpha_A &= \frac{1}{\sqrt{T}} \frac{2\pi^2}{3H_0^2} \frac{1}{\Omega^A_0} \\
&\times \left[2\int^{\infty}_{-\infty}df
\frac{ |\Gamma^A(f)|^2(f/f_0)^{2\alpha_A}\ln^2(f/f_0)}{f^6P_I(f)P_J(f)}\right]^{-1/2}.
\end{aligned}
\end{equation}
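Both error integrals are directly computable. A Python sketch (NumPy assumed, with the same user-supplied callables as in the SNR sketch above):
\begin{verbatim}
import numpy as np

H0 = 67.4e3/3.0857e22     # Hubble constant in 1/s

def fisher_errors(Omega0, alpha, Gamma_A, P_I, P_J, T_obs,
                  f0=1e-3, fmin=1e-4, fmax=1.0, nf=2000):
    f = np.logspace(np.log10(fmin), np.log10(fmax), nf)
    w = np.abs(Gamma_A(f))**2*(f/f0)**(2*alpha)/(f**6*P_I(f)*P_J(f))
    pref = (2*np.pi**2/(3*H0**2))/np.sqrt(T_obs)
    dOmega = pref/np.sqrt(2*np.trapz(w, f))
    dAlpha = pref/(Omega0*np.sqrt(2*np.trapz(w*np.log(f/f0)**2, f)))
    return dOmega, dAlpha
\end{verbatim}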
The uncertainties of the amplitude $\Omega^A$ and index $\alpha^A$ for different polarizations are shown in Fig. \ref{fig7}.
In general, the accuracies of the amplitude parameters for tensor and vector are higher than the two scalar polarizations.
This means they are easier to detect.
The uncertainties of the index are plotted with suitable amplitude parameters such that the SNR is 10.
In such a case, the spectral parameter estimation accuracy of the two scalar modes is higher.
Another point worth noting is that the uncertainty curves of the scalar-breathing mode and the scalar-longitudinal mode almost overlap because their ORFs are degenerate at low frequencies.
And when the exponent increases, which means that the high frequency signal is stronger,
the curves are not overlapping, because the degeneracy breaks at high frequency.
\section{polarization separation\label{sec5}}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{fig8_gamma_eff}
\caption{The effective overlap functions for different polarizations. }\label{fig8}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{fig9_reverse_spectrum}
\caption{The reconstructed spectrum for each polarization by the equivalent multi-detector-based method.
The red dashed line represents the actual injected spectrum;
the light blue solid line represents the reconstructed spectrum.}\label{fig9}
\end{figure*}
If the alternative polarizations really exist,
different polarization modes will be mixed in the cross-correlation signal.
We need suitable methods to distinguish the different modes in the cross-correlation signal.
One possible approach is Bayesian model selection proposed in \cite{2017prx_polarization_SGWB},
which has been adopted to search for alternative-polarization backgrounds in the cross-correlation data of ground-based detectors \cite{2018_nonGR_GWB_LIGO_O1, 2019prd_LIGO_O2_GWB, 2021_GWB_O3}.
This approach is efficient and easy to implement.
However, it may be prone to bias if the model does not fit well with the true background.
If one wants to get away from being bound by model assumptions,
there is an intuitive method to separate polarizations using multiple detector pairs \cite{2009prd_nonGR_GWB_LIGO}.
Different detector pairs have different ORFs, which breaks the degeneracy between different polarizations and provides enough degrees of freedom to separate different polarizations.
Notably, this approach can be implemented independently for each frequency bin.
The accuracy of the results for each frequency bin depends on the corresponding signal and noise strength.
However, this method requires a higher signal-to-noise ratio than the previous one.
Another disadvantage is that multiple detector pairs naturally mean that more than two detectors are required.
Since the configuration of the LISA-TianQin network varies along the orbits,
their ORFs for different polarizations vary accordingly, as seen in Sec. \ref{sec3}.
So their correlation signals at different positions respond differently to different polarizations,
which means that it can be equivalently regarded as different detector pairs at different times.
There is an opportunity to extract different polarization patterns from the data from the two detectors,
using a similar approach to the multi-detector based approach \cite{2009prd_nonGR_GWB_LIGO}.
First, the data is divided into $N$ segments (indexed by $i$), where the duration $\Delta T$ of each segment is chosen to be greater than the light travel time between the two detectors and smaller than the time scale over which the ORF changes.
A cross-correlation statistic can be constructed as
\begin{equation}\label{C_i}
\hat{C}_{i}(f)=\frac{2}{\Delta T} \frac{2 \pi^{2}}{3 H_{0}^{2}} f^{3} \tilde{s}_{1, i}(f) \tilde{s}^{*}_{2, i}(f) ,
\end{equation}
normalized such that the statistic's mean is
\begin{equation}
\left\langle\hat{C}_{i}(f)\right\rangle=\sum_{A} \Gamma_i^{A}(f) \Omega^{A}(f) ,
\end{equation}
where $\Gamma_i^{A}(f)$ is the ORF for the $i$-th data segment.
And the variance is
\begin{equation}\label{var}
\sigma_{i}^{2}(f)=\frac{1}{2 \Delta T \Delta f}\left(\frac{2 \pi^{2}}{3 H_{0}^{2}}\right)^{2} f^{6} P_{1, i}(f) P_{2, i}(f)
\end{equation}
where $\Delta f$ is the frequency bin width, $P_{1, i}(f)$ and $P_{2, i}(f)$ are the noise power spectral density of detectors.
The likelihood function for $\hat{C}_{i}(f)$ is
\begin{equation}
\mathcal{L}[\hat{C}_i(f)|\mathcal{A}] \propto \text{exp}\left[-\sum_i^N\sum_f \frac{|\hat{C}_i(f)-\sum_{A} \Gamma_i^{A}(f) \Omega^{A}(f)|^2}{2\sigma_{i}^{2}(f)} \right],
\end{equation}
where
$\mathcal{A}=[\Omega^T(f) , \Omega^V(f) , \Omega^B(f) , \Omega^L(f) ]^T$
represent the fractional energy density of different polarizations.
Equivalently, we can write it as
\begin{equation}
\mathcal{L}[\hat{C}_i(f)|\mathcal{A}] \propto \exp \left[-\frac{1}{2}(\hat{C}-M \mathcal{A})^{\dagger} \mathcal{N}^{-1}(\hat{C}-M \mathcal{A})\right] ,
\end{equation}
where $\hat{C}(f)=[\hat{C}_1(f), \hat{C}_2(f),...,\hat{C}_N(f)]^{T}$,
\begin{equation}
M=
\begin{bmatrix}
\Gamma^T_1 & \Gamma^V_1 & \Gamma^B_1 & \Gamma^L_1 \\
\Gamma^T_2 & \Gamma^V_2 & \Gamma^B_2 & \Gamma^L_2 \\
... & ... & ... & ... \\
\Gamma^T_N & \Gamma^V_N & \Gamma^B_N & \Gamma^L_N \\
\end{bmatrix} .
\end{equation}
And the noise covariance matrix is $\mathcal{N}_{i j} = \delta_{i j} \sigma_{i}^{2}(f)$.
The maximum-likelihood estimators for the fractional energy density of different polarizations are \cite{2017lrr_detection_GWB}
\begin{equation}\label{MLE}
\hat{\mathcal{A}}=F^{-1}X ,
\end{equation}
where $F=M^{\dagger}\mathcal{N}^{-1}M$ and $X=M^{\dagger}\mathcal{N}^{-1}\hat{C}$ are the Fisher matrix and the `dirty' map for this analysis.
The marginalized uncertainties of the maximum-likelihood estimates are given by the inverse of the Fisher matrix,
\begin{equation}
\sigma^2_{\Omega^A} = (F^{-1})_{AA}.
\end{equation}
Without loss of generality, we assume that the noise power spectra for different time segments are equal.
The Fisher matrix is
\begin{equation}
F_{AB} = \sum_i \Gamma_i^{A} \Gamma_i^{B*}/\sigma_i^2.
\end{equation}
And the effective overlap function can be defined by
\begin{equation}
\Gamma_{\text{eff}}^{A}(f) \equiv \sigma^{-1}_{\Omega^A} \sigma_i = \frac{\sigma_i}{\sqrt{(F^{-1})_{AA}}}.
\end{equation}
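The whole per-frequency-bin analysis, from Eq.~(\ref{C_i}) to Eq.~(\ref{MLE}), reduces to a small linear-algebra problem. A minimal Python sketch (NumPy assumed; the data containers are ours):
\begin{verbatim}
import numpy as np

def separate_polarizations(C_hat, Gamma, sigma2):
    # C_hat : (N,)  cross-correlation statistics, one per time segment
    # Gamma : (N,4) ORFs of the T, V, B, L modes for each segment
    # sigma2: (N,)  per-segment noise variances
    Ninv = 1.0/sigma2
    F = (Gamma.conj().T*Ninv) @ Gamma        # Fisher matrix  M^dag N^-1 M
    X = (Gamma.conj().T*Ninv) @ C_hat        # 'dirty' map    M^dag N^-1 C
    Finv = np.linalg.inv(F)
    Omega_hat = (Finv @ X).real              # ML estimates at this frequency
    return Omega_hat, np.sqrt(np.diag(Finv).real)
\end{verbatim}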
The effective overlap functions are shown in Fig. \ref{fig8}.
In order to analyze the ability of the above method in resolving polarization modes,
we apply it to the simulated data of different situations.
Notably, we simulate a mixture of the tensor and scalar-breathing modes to examine whether the method can distinguish between the breathing and longitudinal modes.
We simulate one year of data divided into 3600 segments,
with the noise generated according to Eq. (\ref{var}).
The spectra injected into each data segment are $\Omega^T=2.68 \times 10^{-11}(f/1\text{mHz})^{2/3}$ and $\Omega^B=2.87 \times 10^{-11}(f/1\text{mHz})^{2/3}$.
The parameters are chosen such that the individual tensor and scalar-breathing components both correspond to an optimal SNR of 10, and mixed together to an optimal SNR of 18.
The spectrum for each polarization reconstructed with Eq. (\ref{MLE}) is shown in Fig. \ref{fig9}.
The accuracy of the reconstructed spectrum in the frequency band $10^{-3}$--$10^{-2}\,$Hz is relatively high.
It can be concluded that, in the most sensitive frequency band, the different modes can indeed be distinguished.
Even the two scalar modes can be distinguished, although with worse accuracy than the tensor or vector modes.
\section{discussion\label{sec7}}
In this paper, we studied the detectability of alternative polarizations of SGWB with LISA-TianQin network.
The different orbits of LISA and TianQin make them very different from ground-based detectors.
The relative orientation of their orbital planes changes with time, which means that the ORF will vary accordingly.
In other words, the SGWB signal in their cross-correlation signal varies over time.
This will pose certain challenges for data analysis.
On the other hand, LISA-TianQin has advantages for resolving polarization modes in SGWB in principle.
TDI technique is applied to space gravitational wave detectors to suppress laser phase noise.
Based on the small-antenna approximation, we obtain the ORF expanded to second order in frequency for any TDI channel.
This method can be applied to any laser interferometric detector, even with different configurations and different arm lengths.
For the LISA-TianQin network, the accuracy can be effectively improved in its most sensitive frequency band.
Then we study the detectability of alternative polarizations in SGWB with LISA-TianQin network.
We calculated the signal-to-noise ratio and parameter estimation accuracy of power-law SGWB with different polarization modes.
Once the SGWB is detected, it is necessary to distinguish different polarization modes from it.
An equivalent multi-detector-based approach can be applied to the LISA-TianQin network, thanks to its special orbital variation.
The ORFs of the LISA-TianQin network are different at different times, so they can be equivalently regarded as different detectors.
In fact, the cross-correlated signals at different times form a system of linear equations for different polarization modes.
In turn, the system of equations can be solved to obtain different polarization modes, including the influence of noise of course.
Although this method does not rely on model assumptions, it requires a high signal-to-noise ratio.
Its resolution in a given frequency bin is affected by the degeneracy of the ORFs for different polarizations, in addition to the noise in that bin.
What is more, it has advantages in distinguishing the scalar-breathing mode from the scalar-longitudinal mode.
\section*{Acknowledgments}
This work is supported by the National Natural Science Foundation of China (No. 12175076 and Grants No. 11925503), the Post doctoral Science Foundation of China (Grant No.2022M711259), and Guangdong Major project of Basic and Applied Basic Research (Grant No. 2019B030302001).
\section{Introduction and main results}
\subsection{Background}
Let $G = (V,E)$ be a random graph with vertex set $V=[n]$, and let ${\boldsymbol A}_G\in
\{0,1\}^{n\times n}$ denote its adjacency matrix.
Spectral algorithms have proven extremely successful in analyzing
the structure of such graphs under various probabilistic
models. Interesting tasks include finding clusters, communities,
latent representations, collaborative filtering and so on \cite{alon1998finding,mcsherry2001spectral,ng2002spectral,coja2006spectral}. The underlying mathematical
justification for these applications can be informally summarized as
follows (more precise statements are given below):
\vspace{0.25cm}
\emph{If $G$ is dense enough, then ${\boldsymbol A}_G-{\mathbb{E}}\{{\boldsymbol A}_{G}\}$ is much
smaller, in operator norm, than ${\mathbb{E}}\{{\boldsymbol A}_{G}\}$.}
\vspace{0.25cm}
(Recall that the operator norm of a symmetric matrix ${\boldsymbol M}$ is $\|{\boldsymbol M}\|_{op}
=\max(\xi_1({\boldsymbol M}),-\xi_n({\boldsymbol M}))$, with $\xi_{\ell}({\boldsymbol M})$ the
$\ell$-th largest eigenvalue of ${\boldsymbol M}$.)
Random regular graphs provide the simplest model on which this intuition can be made precise.
Denoting by ${\sf G}^{\mbox{\tiny {\sf reg}}}(n,d)$ the uniform distribution over graphs with $n$
vertices and uniform degree $d$, we have, for $G\sim{\sf G}^{\mbox{\tiny {\sf reg}}}(n,d)$,
${\mathbb{E}}{\boldsymbol A}_G \approx (d/n){\boldsymbol{1}}\bone^{{\sf T}}$, whence $\|{\mathbb{E}}{\boldsymbol A}_G\|_{op}\approx
d$. On the other hand, the fact that random regular graphs are `almost
Ramanujan' \cite{Friedman} implies $\|{\boldsymbol A}_G-{\mathbb{E}}{\boldsymbol A}_G\|_{op}\le
2\sqrt{d-1}+o_n(1)\ll d$. Roughly speaking, the random part
${\boldsymbol A}_G-{\mathbb{E}}{\boldsymbol A}_G$ is smaller than the expectation by a factor
$2/\sqrt{d}$.
The situation is not as clean-cut for random graphs with irregular
degrees. To be definite, consider the \ER random graph distribution
${\sf G}(n,d/n)$ whereby each edge is present independently with
probability $d/n$ (and hence the average degree is roughly $d$).
Also in this case ${\mathbb{E}}{\boldsymbol A}_G \approx (d/n){\boldsymbol{1}}\bone^{{\sf T}}$, whence $\|{\mathbb{E}}{\boldsymbol A}_G\|_{op}\approx
d$. However, the largest eigenvalue of ${\boldsymbol A}_G-{\mathbb{E}} {\boldsymbol A}_G$ is of the order
of the square root of the maximum degree, namely $\sqrt{\log
n/(\log\log n)}$ \cite{krivelevich2003largest}. Summarizing
\begin{align}
\|{\boldsymbol A}_G-{\mathbb{E}} {\boldsymbol A}_G\|_{op}=
\begin{cases}
2\sqrt{d-1}\, (1+o(1)) & \mbox{ if $G\sim{\sf G}^{\mbox{\tiny {\sf reg}}}(n,d)$},\\
\sqrt{\log n/(\log\log n)} (1+o(1)) & \mbox{ if $G\sim{\sf G}(n,d/n)$}.\\
\end{cases}\label{eq:MaxEigenvalue}
\end{align}
Further, for $G\sim{\sf G}(n,d/n)$, the leading eigenvectors of
${\boldsymbol A}_G-{\mathbb{E}}{\boldsymbol A}_G$ are concentrated near high-degree vertices, and
carry virtually no information about the global structure of $G$. In
particular, they cannot be used for clustering.
Far from being a mathematical curiosity, this difference has far-reaching consequences: spectral algorithms are known to fail, or to be
vastly suboptimal, for random graphs with bounded average degree
\cite{feige2005spectral,coja2010graph,keshavan2010matrix,decelle2011asymptotic,krzakala2013spectral}.
The community detection problem (a.k.a. `planted partition') is an example of this failure that
attracted significant attention recently. Let ${\sf G}(n,a/n,b/n)$ be the
distribution over graphs with $n$ vertices defined as follows. The
vertex set is partitioned uniformly at random into two subsets $S_1$, $S_2$ with
$|S_i|=n/2$. Conditional on this partition, edges are independent with
%
\begin{align}
{\mathbb{P}}\big((i,j)\in E\big|S_1, S_2\big) = \begin{cases}
a/n & \mbox{ if $\{i,j\}\subseteq S_1$ or $\{i,j\}\subseteq S_2$,}\\
b/n & \mbox{ if $i\in S_1, j\in S_2$ or
$i\in S_2, j\in S_1$.}
\end{cases}\label{eq:HiddenPart}
\end{align}
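For concreteness, a graph from ${\sf G}(n,a/n,b/n)$ can be sampled with a few lines of Python (NumPy assumed; the function name is ours):
\begin{verbatim}
import numpy as np

def planted_partition(n, a, b, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.permutation(np.r_[np.ones(n//2, int), -np.ones(n//2, int)])
    p = np.where(np.equal.outer(labels, labels), a/n, b/n)
    upper = np.triu(rng.random((n, n)) < p, k=1)
    A = (upper | upper.T).astype(int)   # symmetric adjacency, no self-loops
    return A, labels
\end{verbatim}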
Given a single realization of such a graph, we would like to detect,
and identify the partition. Early work on this problem showed that simple spectral methods are
successful when $a=a(n)$, $b=b(n)\to\infty$ sufficiently fast. However
Eq.~(\ref{eq:MaxEigenvalue}) --and its analogue for the model
${\sf G}(n,a/n,b/n)$-- implies that this approach fails unless $(a-b)^2\ge
C \log n/\log\log n$. (Throughout $C$ indicates numerical constants.)
Several ideas have been developed to overcome this difficulty.
The simplest one is to simply remove from $G$ all vertices whose
degree is --say-- more than ten times larger than the average degree
$d$. Feige and Ofek \cite{feige2005spectral} showed that, if this
procedure is applied to $G\sim{\sf G}(n,d/n)$, it yields a new graph $G'$
that has roughly the same number of vertices as $G$, but
$\|{\boldsymbol A}_{G'}-{\mathbb{E}}\{{\boldsymbol A}_G\}\|_{op}\le C\sqrt{d}$, with high probability.
The same trimming procedure was successfully applied in
\cite{keshavan2010matrix} to matrix completion, and in
\cite{coja2010graph,chin2015stochastic} to community detection.
This approach has however several drawbacks. First, the specific
threshold for trimming is somewhat arbitrary and relies on the idea
that degrees should concentrate around their average: this is not
necessarily true in actual applications.
Second, it discards a subset of the data. Finally, it is only optimal `up to
constants.'
A new set of spectral methods to overcome the same problem were
proposed and analyzed within the community detection problem
\cite{decelle2011asymptotic,krzakala2013spectral,mossel2013proof,massoulie2014community,bordenave2015non,le2015concentration}.
These methods construct a new matrix that replaces the adjacency matrix
${\boldsymbol A}_G$, and then compute its leading eigenvalues/eigenvectors.
We refer to Section \ref{sec:Related} for further discussion.
These approaches are extremely interesting and mathematically
sophisticated. In particular, some of them have been proved to have an optimal
detection threshold under the model ${\sf G}(n,a/n,b/n)$ \cite{mossel2013proof,massoulie2014community,bordenave2015non}. Unfortunately
they rely on delicate properties of the underlying
probabilistic model. For instance, they are not
robust to an adversarial addition of $o(n)$ edges (see Section \ref{sec:Generalization}).
\subsection{Main results (I): \ER and regular random graphs}
Semidefinite programming (SDP) relaxations provide a different
approach towards overcoming the limitations of spectral algorithms.
We denote the cone of $n\times n$ symmetric positive semidefinite
matrices by ${\sf PSD}(n) \equiv\{{\boldsymbol X}\in{\mathbb{R}}^{n\times n}:\; {\boldsymbol X}\succeq
0\}$. The convex set of positive-semidefinite matrices with diagonal
entries equal to one is denoted by
\begin{align}
{\sf PSD}_1(n) \equiv\big\{{\boldsymbol X}\in{\mathbb{R}}^{n\times n}:\; {\boldsymbol X}\succeq
0, \;X_{ii}=1 \;\forall i\in [n]\big\}\, .
\end{align}
The set ${\sf PSD}_1(n)$ is also known as the \emph{elliptope}. Given a
matrix ${\boldsymbol M}$, we define\footnote{Here and below
$\<{\boldsymbol A},{\boldsymbol B}\>={\sf Tr}({\boldsymbol A}^{{\sf T}}{\boldsymbol B})$ is the usual scalar product between matrices.}
\begin{align}
{\sf SDP}({\boldsymbol M}) \equiv \max\big\{ \<{\boldsymbol M},{\boldsymbol X}\>\, :\;\;
{\boldsymbol X}\in{\sf PSD}_1(n)\big\}\, . \label{eq:SDP.DEF}
\end{align}
It is well known that approximate information about the extremal cuts
of $G$ can be obtained by computing ${\sf SDP}({\boldsymbol A}_G)$
\cite{goemans1995improved}.
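For concreteness, ${\sf SDP}({\boldsymbol M})$ can be evaluated numerically with any
off-the-shelf convex solver. The sketch below uses the Python package CVXPY;
the solver choice, the helper name and the small problem size are purely
illustrative assumptions and play no role in the results of this paper.
\begin{verbatim}
# Sketch: SDP(M) = max{ <M,X> : X in PSD_1(n) }, solved with CVXPY.
import cvxpy as cp
import numpy as np

def sdp_value(M):
    n = M.shape[0]
    X = cp.Variable((n, n), PSD=True)      # X >= 0 (positive semidefinite)
    prob = cp.Problem(cp.Maximize(cp.trace(M @ X)), [cp.diag(X) == 1])
    prob.solve()
    return prob.value

# Example: centered adjacency matrix of G(n, d/n) (small n for tractability).
n, d = 200, 5
U = np.triu(np.random.rand(n, n) < d / n, 1)
A = U.astype(float)
A = A + A.T                                # symmetric adjacency, no self-loops
M = A - d / n * np.ones((n, n))            # A_G - E{A_G}, up to the diagonal
print(sdp_value(M) / n, 2 * np.sqrt(d))    # compare with 2 sqrt(d) (see below)
\end{verbatim}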
The main result of this paper is that the above SDP
is also nearly optimal in extracting information about sparse random
graphs. In particular, it eliminates the irregularities due to
high-degree vertices, cf. Eq.~(\ref{eq:MaxEigenvalue}). Our first
result characterizes the value of ${\sf SDP}({\boldsymbol A}_G-{\mathbb{E}}\{{\boldsymbol A}_G\})$ for $G$ an
\ER random graph with large bounded degree\footnote{Throughout the
paper, $O(\, \cdot\, )$, $o(\,\cdot\,)$, and $\Theta(\,\cdot\,)$
refer to the usual $n \to \infty$ asymptotic, while $O_{d}(\,\cdot\,)$, $o_{d}(\,\cdot\,)$
and $\Theta_{d}(\,\cdot\,)$ are used
to describe the $d \to \infty$ asymptotic regime. We say that a sequence of events $B_n$ occurs with high probability (w.h.p.) if ${\mathbb{P}}(B_n) \to 1$
as $n\to \infty$. Finally, for random $\{X_n\}$ and
non-random $f: {\mathbb{R}}_{>0} \to {\mathbb{R}}_{>0}$, we say
that $X_n = o_{d}(f(d))$ w.h.p. as $n\to \infty$ if
there exists non-random $g(d) = o_{d}(f(d))$ such
that the sequence $B_n = \{ |X_n| \leq g(d)\}$ occurs w.h.p.
(as $n\to \infty$).}. (Its proof is given in Appendix \ref{sec:ProofMain}.)
\begin{theorem}\label{thm:Main}
Let $G\sim {\sf G}(n,d/n)$ be an \ER random graph with edge probability
$d/n$, ${\boldsymbol A}_G$ its adjacency matrix, and ${\boldsymbol A}^{\mbox{\tiny cen}}_G \equiv
{\boldsymbol A}_G-{\mathbb{E}}\{{\boldsymbol A}_G\}$ its centered adjacency matrix.
Then there exists $C=C(d)$ such that with probability at least $1-C\,
e^{-n/C}$, we have
\begin{align}
\frac{1}{n}{\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_G) = 2 \sqrt{d} + o_{d}(\sqrt{d})\, ,\label{eq:MaxLimit}\;\;\;\;\;
\frac{1}{n}{\sf SDP}(-{\boldsymbol A}^{\mbox{\tiny cen}}_G) = 2 \sqrt{d} + o_{d}(\sqrt{d})\, .
\end{align}
\end{theorem}
Note that ${\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_G)\le n\xi_1({\boldsymbol A}^{\mbox{\tiny cen}}_G)$ (here and in the following
$\xi_1({\boldsymbol M})\ge \xi_2({\boldsymbol M})\ge\dots\ge\xi_n({\boldsymbol M})$ denote the
eigenvalues of the symmetric matrix ${\boldsymbol M}$). However, while
$\xi_1({\boldsymbol A}^{\mbox{\tiny cen}}_G)$ is sensitive to vertices of atypically large
degree, cf. Eq.~(\ref{eq:MaxEigenvalue}), ${\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_G)$ appears to be
sensitive only to the average degree. Intuitively, the constraint
$X_{ii}=1$ rules out the highly localized eigenvectors that are
responsible for $\xi_1({\boldsymbol A}^{\mbox{\tiny cen}}_G) \approx\sqrt{\log n/\log\log n}$.
Another way of interpreting Theorem \ref{thm:Main} is that
\ER random graphs behave, with respect to SDP, as random regular graphs
with the same average degree. Indeed, we have the following more
precise result for regular graphs. (See Appendix \ref{app:Regular} for the proof.)
\begin{theorem}\label{thm:Regular}
Let $G\sim {\sf G}^{\mbox{\tiny {\sf reg}}}(n,d)$ be a random regular graph with degree
$d$, and ${\boldsymbol A}^{\mbox{\tiny cen}}_G \equiv {\boldsymbol A}_G-{\mathbb{E}}\{{\boldsymbol A}_G\}$ its centered adjacency
matrix.
Then, with high probability
\begin{align}
\frac{1}{n}{\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_G) = 2 \sqrt{d-1} + o_n(1)\, ,\;\;\;\;\;
\frac{1}{n}{\sf SDP}(-{\boldsymbol A}^{\mbox{\tiny cen}}_G) = 2 \sqrt{d-1} + o_n(1)\, .
\end{align}
\end{theorem}
\begin{remark}
The quantity ${\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_G)$ can also be thought of as a relaxation
of the problem of maximizing
$\sum_{i,j=1}^nA_{ij}\sigma_i\sigma_j$ over $\sigma_i\in\{+1,-1\}$,
$\sum_{i=1}^n\sigma_i=0$. The result of our companion
paper \cite{dembo2015extremal} implies that this has --with high
probability--
value $2n {\sf P}_*\sqrt{d}+n \, o_d(\sqrt{d})$ (see
\cite{dembo2015extremal} for a definition of ${\sf P}_*$). We
deduce that --with high probability-- the SDP relaxation overestimates the optimum by a factor
$1/{\sf P}_*+o_{d}(1)$ (where $1/{\sf P}_*\approx 1.310$).
\end{remark}
\begin{remark}
For the sake of simplicity, we stated Eq.~(\ref{eq:MaxLimit}) in
asymptotic form. However, our proof provides quantitative bounds
on the error terms. In particular, the $o_{d}(\sqrt{d})$
term is upper bounded by $C d^{2/5}\log(d)$, for $C$ a
numerical constant.
\end{remark}
\subsection{Main results (II): Hidden partition problem}
\label{sec:MainPartition}
We next apply the SDP defined in Eq.~(\ref{eq:SDP.DEF}) to the
community detection problem.
To be definite we will formalize this as a binary hypothesis
testing problem, whereby we want to determine --with high probability
of success-- whether the random graph under consideration has a
community structure or not. The estimation version of the problem,
i.e. the question of determining --approximately-- a
partition into communities, can be addressed by similar techniques.
We are given a \emph{single} graph $G=(V,E)$ over
$n$ vertices and we have to decide which of the following holds:
\begin{description}
\item[{\sf Hypothesis 0:}] $G\sim {\sf G}(n,d/n)$ is an \ER random graph with edge
probability $d/n$, $d=(a+b)/2$. We denote the
corresponding distribution over graphs by ${\mathbb{P}}_0$.
\item[{\sf Hypothesis 1:}] $G\sim {\sf G}(n,a/n,b/n)$ is a random graph
with a planted partition and edge probabilities $a/n$, $b/n$. We denote the
corresponding distribution over graphs by ${\mathbb{P}}_1$.
\end{description}
A statistical test takes as input a graph $G$, and returns
$T(G)\in\{0,1\}$ depending on which hypothesis is estimated to hold.
We say that it is successful with high probability if
${\mathbb{P}}_0(T(G)=1)+{\mathbb{P}}_1(T(G)=0)\to 0$ as $n\to\infty$.
Theorem \ref{thm:Main} indicates that, under {\sf Hypothesis 0},
we have ${\sf SDP}({\boldsymbol A}_G-(d/n){\boldsymbol{1}}\bone^{{\sf T}})= 2n\sqrt{d} +
n\,o_{d}(\sqrt{d})$.
This suggests the following test:
\begin{align}
T(G;\delta) = \begin{cases}
1 & \mbox{ if ${\sf SDP}({\boldsymbol A}_G-(d/n){\boldsymbol{1}}\bone^{{\sf T}})\ge 2n(1+\delta)\sqrt{d}$,}\\
0 & \mbox{ otherwise.}\\
\end{cases}\label{eq:TestDef}
\end{align}
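Given any routine for ${\sf SDP}(\,\cdot\,)$, for instance the CVXPY sketch of
the previous subsection, this test is immediate to implement; the snippet
below is again only an illustrative sketch.
\begin{verbatim}
# Sketch: the SDP-based test T(G; delta) of the display above.
# Assumes the helper sdp_value(.) from the earlier sketch; A is the
# (numpy) adjacency matrix and d = (a+b)/2 the average degree.
import numpy as np

def sdp_test(A, d, delta):
    n = A.shape[0]
    M = A - d / n * np.ones((n, n))        # A_G - (d/n) 1 1^T
    return 1 if sdp_value(M) >= 2 * n * (1 + delta) * np.sqrt(d) else 0
\end{verbatim}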
Mossel, Neeman, Sly \cite{mossel2012stochastic} proved that no test can be successful with
high probability if $(a-b)<\sqrt{2(a+b)}$. Polynomially computable
tests that achieve this threshold were developed in
\cite{mossel2013proof,massoulie2014community,bordenave2015non} using
advanced spectral methods. As mentioned, these approaches can be
fragile to perturbations of the precise probabilistic model, cf. Section \ref{sec:Generalization}.
Our next result addresses the fundamental question: \emph{Does the
SDP-based test achieve the information theoretic threshold?} Notice
that the recent work of \cite{guedon2014community} falls short of
answering this question since it requires the vastly sub-optimal
condition $(a-b)^2\ge 10^4(a+b)$. (We refer to Appendix \ref{sec:ProofMain} for
the proof of the next theorem.)
\begin{theorem}\label{thm:SDP_Test}
Assume, for some $\varepsilon>0$,
\begin{align}
\frac{a-b}{\sqrt{2(a+b)}} \ge 1+\varepsilon\, .\label{eq:ConditionFactor1}
\end{align}
Then there exist $\delta_*=\delta_*(\varepsilon)>0$ and $d_* = d_*(\varepsilon)>0$
such that the following holds. If $d=(a+b)/2\ge d_*$, then the
SDP-based test $T(\,\cdot\,;\delta_*)$ succeeds
with high probability.
Further, the error probability is at most
$Ce^{-n/C}$ for $C=C(a,b)$ a constant.
\end{theorem}
\begin{remark}
This theorem guarantees that
SDP is nearly optimal for large but bounded degree $d$.
By comparison, the naive spectral test that returns $T_{\rm spec}(G) = 1$ if
$\xi_1({\boldsymbol A}_G)\ge \theta_*$ and $T_{\rm spec}(G) = 0$ otherwise
(for any threshold value $\theta_*$) is sub-optimal by an unbounded
factor for $d=O(1)$.
\end{remark}
\begin{remark}
One might wonder why we consider large degree asymptotics $d=(a+b)/2\to\infty$
instead of trying to establish a threshold at $(a-b)/\sqrt{2(a+b)}=1$ for
fixed $a$, $b$. Preliminary non-rigorous calculations
\cite{OursReplicas} suggest that this is indeed necessary: for fixed
$(a+b)$, the SDP threshold does not coincide with the optimal one.
\end{remark}
\begin{remark}
For the sake of simplicity, we formulated the community detection
problem as a hypothesis testing problem. A related (somewhat more
challenging) task is to estimate the hidden partition better than by
random guessing. In Section \ref{sec:Estimation} we will show that, under the same conditions of
Theorem \ref{thm:SDP_Test}, we can assign vertices making at most $(1-\Delta)n/2$ mistakes
(with high probability for some $\Delta$ bounded away from $0$).
\end{remark}
We will discuss related work in the next section, then provide an
outline of the proof ideas in Section \ref{sec:Strategy}, and finally
discuss extensions of the above results in Section
\ref{sec:Generalization}.
Detailed proofs are deferred to the appendix.
\subsection{Notations}
Given $n\in{\mathbb{N}}$, we let $[n] = \{1,2,\dots,n\}$
denote the set of first $n$ integers. We write $|S|$ for the cardinality of a
set $S$. We will use
lowercase boldface (e.g. ${\boldsymbol v} = (v_1,\dots,v_n)$, ${\boldsymbol x} = (x_1,\dots,x_n)$, etc.)
for vectors and uppercase boldface (e.g. ${\boldsymbol A} = (A_{i,j})_{i,j\in[n]}$, ${\boldsymbol Y}= (Y_{i,j})_{i,j\in[n]}$, etc.)
for matrices.
Given a symmetric matrix ${\boldsymbol M}$, we let $\xi_1({\boldsymbol M})\ge
\xi_2({\boldsymbol M})\ge \dots\ge \xi_n({\boldsymbol M})$ be its ordered eigenvalues (with
$\xi_{\max}({\boldsymbol M}) = \xi_1({\boldsymbol M})$, $\xi_{\min}({\boldsymbol M}) = \xi_n({\boldsymbol M})$).
In particular ${\boldsymbol{1}}_n = (1,1,\dots, 1)\in{\mathbb{R}}^n$ is the all-ones vector,
${\rm I}_{n}$ the identity matrix, and ${\boldsymbol e}_i\in {\mathbb{R}}^{n}$ is the $i$'th standard unit vector.
For ${\boldsymbol v}\in{\mathbb{R}}^m$, $\|{\boldsymbol v}\|_p =
(\sum_{i=1}^m|v_i|^p)^{1/p}$ denotes its $\ell_p$ norm (extended in
the standard way to $p=\infty$). For a matrix ${\boldsymbol M}$, we denote by
$\|{\boldsymbol M}\|_{p\to q} = \sup_{{\boldsymbol v}\neq 0}\|{\boldsymbol M}{\boldsymbol v}\|_q/\|{\boldsymbol v}\|_p$ its
$\ell_p$-to-$\ell_q$ operator norm, with the standard shorthands
$\|{\boldsymbol M}\|_{op} \equiv \|{\boldsymbol M}\|_{2} \equiv \|{\boldsymbol M}\|_{2\to 2}$.
Throughout \emph{with high probability} means `with probability
converging to one as $n\to\infty$.' We follow the standard Big-Oh
notation for asymptotics. We will be interested in bounding error
terms with respect to $n$ and $d$. Whenever not clear from the
context, we indicate in subscript the variable that is large. For
instance
$f(n,d) = o_d(1)$ means that there exists a function $g(d)\ge 0$
independent of $n$ such that $\lim_{d\to\infty}g(d) = 0$ and
$|f(n,d)|\le g(d)$. (Hence $f(n,d) = \cos(0.1 n)/d =o_d(1)$ but
$f(n,d) = \log( n)/d \neq o_d(1)$.)
A random graph has a law (distribution), which is a probability
distribution over graphs with the same vertex set $V=[n]$. Since we
are interested in the $n\to\infty$ asymptotics, it will be implicitly
understood that one such distribution is specified for each $n$.
We will use $C$ (or $C_0$, $C_1$,\dots) to denote constants, that will
change from point to point. Unless otherwise stated, these are
universal constants.
\section{Further related literature}
\label{sec:Related}
Few results have been proved about the behavior of classical SDP relaxations
on sparse random graphs and --to the best of our knowledge-- none of
these earlier results is tight.
A significant amount of work has been devoted to analyzing SDP
hierarchies on random CSP instances
\cite{grigoriev2001linear,schoenebeck2008linear}, and --more
recently-- on (semi-)random Unique games instances
\cite{kolla2011play}. These papers typically prove only one-sided
bounds that are not claimed to be sharp as the number of variables diverges.
Coja-Oghlan \cite{coja2003lovasz} studies the
value of the Lov\'asz theta function $\vartheta(G)$, for $G\sim{\sf G}(n,p)$ a
\emph{dense} \ER random graph,
establishing $C_1\sqrt{n/p}\le \vartheta(G)\le
C_2\sqrt{n/p}$ with high probability. As in the previous cases, this result is not tight.
Ambainis et
al. \cite{ambainis2012quantum} study an SDP similar to
(\ref{eq:SDP.DEF}), for ${\boldsymbol M}$ a \emph{dense} random matrix with
i.i.d. entries. One of their main results is analogous to a special
case of our Theorem \ref{thm:Gaussian}.$(b)$ below --namely, to the case $\lambda=0$.
(We prefer to give an independent --simpler-- proof also of this case.)
Several papers have been devoted to SDP approaches for community
detection and the related `synchronization' problem.
A partial list includes
\cite{bandeira2014multireference,abbe2014exact,hajek2014achieving,hajek2015achieving,awasthi2015relax}.
These papers focus on finding sufficient conditions under which the
SDP recovers \emph{exactly} the unknown signal.
For instance, in the context of the hidden partition model
(\ref{eq:HiddenPart}), this requires diverging degrees $a,b=\Theta(\log n)$
\cite{abbe2014exact,hajek2014achieving,hajek2015achieving}.
SDP was proved in \cite{hajek2014achieving} to achieve the information-theoretically optimal
threshold for exact reconstruction.
The techniques to prove this type of result are very different from
the ones employed here: since the (conjectured) optimum is known
explicitly, it is sufficient to certify it through a dual witness.
The only result on community detection that compares to ours was recently proven by Guedon and
Vershynin \cite{guedon2014community}.
Their work uses the classical Grothendieck inequality to
establish upper bounds on the estimation error of
SDP. The resulting bound applies only under the condition $(a-b)^2\ge 10^4
(a+b)$. This condition is vastly sub-optimal with respect to the
information-theoretic threshold $(a-b)^2> 2
(a+b)$ established in
\cite{mossel2012stochastic,mossel2013proof,massoulie2014community}
(and is unlikely to be satisfied by realistic graphs). In particular,
the results of \cite{guedon2014community} leave open the central question:
is SDP to be discarded in favor of the spectral methods of
\cite{mossel2013proof,massoulie2014community},
or is the sub-optimality just an outcome of the analysis?
In this paper we provide evidence indicating that SDP is in fact nearly optimal
for community detection. While we also make use of a Grothendieck
inequality as in \cite{guedon2014community}, this is only one step
(and not the most challenging) in a significantly longer
argument. Let us emphasize that the gap between the ideal threshold
at $(a-b)/\sqrt{2(a+b)} =1$,
and the guarantees of \cite{guedon2014community} cannot be
filled simply by carrying out more carefully the same proof strategy.
In order to fill the gap we need
to develop several new ideas: $(i)$ A new (higher rank) Grothendieck
inequality; $(ii)$ A smoothing of the original graph parameter
${\sf SDP}(\,\cdot\,)$; $(iii)$ An interpolation argument; $(iv)$ A sharp
analysis of SDP for Gaussian random matrices.
\section{Proof strategy}
\label{sec:Strategy}
Throughout, we denote by ${\boldsymbol A}^{\mbox{\tiny cen}}_G={\boldsymbol A}_G-(d/n){\boldsymbol{1}}\bone^{{\sf T}}$ the
centered adjacency matrix of $G\sim{\sf G}(n,d/n)$ or $G\sim{\sf G}(n,a/n,b/n)$.
Our proofs of Theorem \ref{thm:Main} and Theorem \ref{thm:SDP_Test}
follow a similar strategy that can be summarized as follows:
\begin{description}
\item[Step 1: Smooth.] We replace the function ${\boldsymbol M}\mapsto{\sf SDP}({\boldsymbol M})$,
by a smooth function ${\boldsymbol M}\mapsto \Phi(\beta,k;{\boldsymbol M})$ that depends on two
additional parameters $\beta\in{\mathbb{R}}_{\ge 0}$ and
$k\in{\mathbb{N}}$. We prove that, for $\beta, k$ large (and ${\boldsymbol M}$
sufficiently `regular'), $|{\sf SDP}({\boldsymbol M})-\Phi(\beta,k;{\boldsymbol M})|$ can be made
arbitrarily small,
uniformly in the matrix dimensions. This in particular requires
developing a new (higher rank) Grothendieck-type inequality, which
is of independent interest; see Section \ref{sec:Gro}.
\item[Step 2: Interpolate.] We use an interpolation method (analogous
to the Lindeberg method) to compare the value $\Phi(\beta,k;{\boldsymbol A}^{\mbox{\tiny cen}}_G)$
to $\Phi(\beta,k;{\boldsymbol B})$, where ${\boldsymbol B}\in{\mathbb{R}}^{n\times n}$ is a symmetric
Gaussian matrix with independent entries. More precisely, we use
$B_{ij}\sim {\sf N}(0,1/n)$ to approximate $G\sim{\sf G}(n,d/n)$ and
$B_{ij}\sim {\sf N}(\lambda/n,1/n)$ to approximate the hidden
partition model $G\sim{\sf G}(n,a/n,b/n)$, with $\lambda \equiv
(a-b)/\sqrt{2(a+b)}$. Further detail is provided in Section \ref{sec:Interpolation}.
Note that the interpolation/Lindeberg method requires ${\boldsymbol M}\mapsto
\Phi(\beta,k;{\boldsymbol M})$ to be differentiable, which is the reason for Step
1 above.
\item[Step 3: Analyze.] We finally carry out an analysis of ${\sf SDP}({\boldsymbol B})$
with ${\boldsymbol B}$ distributed according to the above Gaussian models. In
doing this we can take advantage of the high degree of symmetry of
Gaussian random matrices. This part of the proof is relatively
simple for Theorem \ref{thm:Main}, but becomes challenging in the
case of Theorem \ref{thm:SDP_Test}, see Section \ref{sec:Gaussian}.
\end{description}
(The proof of Theorem \ref{thm:Regular} is more direct and will be
presented in Appendix \ref{app:Regular}). In the next subsections we will provide
further details about each of these steps. The formal proofs
of Theorem \ref{thm:Main} and Theorem \ref{thm:SDP_Test} are presented
in Appendix \ref{sec:ProofMain}, with technical lemmas in other appendices.
The construction of the smooth function $\Phi(\beta,k;{\boldsymbol M})$ is
inspired by statistical mechanics. As an intermediate step, define
the following rank-constrained version of the SDP (\ref{eq:SDP.DEF})
\begin{align}
{\sf OPT}_k({\boldsymbol M}) &\equiv \max\big\{ \<{\boldsymbol M},{\boldsymbol X}\>\, :\;\;
{\boldsymbol X}\in{\sf PSD}_1(n)\, ,\;\; {\rm rank}({\boldsymbol X})\le k\big\} \label{eq:OPT.DEF}\\
& = \max\big\{ \sum_{i,j=1}^nM_{ij}\<{\boldsymbol{\sigma}}_i,{\boldsymbol{\sigma}}_j\>\, :\;\;
{\boldsymbol{\sigma}}_i\in {\mathbb S}^{k-1}\big\}\, ,
\end{align}
where ${\mathbb S}^{k-1} = \{ {\boldsymbol{\sigma}} \in{\mathbb{R}}^k:\; \|{\boldsymbol{\sigma}}\|_2 = 1\}$ is the unit
sphere in $k$ dimensions. We then define $\Phi(\beta,k;{\boldsymbol M})$ as the
following log-partition function
\begin{align}
\Phi(\beta,k;{\boldsymbol M}) &\equiv \frac{1}{\beta}\, \log\left\{\int \,
\exp\Big\{\beta
\sum_{i,j=1}^{n}M_{ij}\<{\boldsymbol{\sigma}}_i,{\boldsymbol{\sigma}}_j\>\Big\}\,{\rm d}\nu({\boldsymbol{\sigma}})\right\}\,
.
\end{align}
Here ${\boldsymbol{\sigma}}=({\boldsymbol{\sigma}}_1,{\boldsymbol{\sigma}}_2,\dots,{\boldsymbol{\sigma}}_n)\in ({\mathbb S}^{k-1})^n$
and we denote by ${\rm d}\nu(\,\cdot\,)$ the uniform measure on
$({\mathbb S}^{k-1})^n$ (normalized to $1$, i.e. $\int {\rm d}\nu({\boldsymbol{\sigma}}) =
1$).
It is easy to see that $\lim_{\beta\to\infty}\Phi(\beta,k;{\boldsymbol M}) =
{\sf OPT}_k({\boldsymbol M})$, and ${\sf OPT}_n({\boldsymbol M}) = {\sf SDP}({\boldsymbol M})$. For carrying out the
above proof strategy we need to bound the errors $|\Phi(\beta,k;{\boldsymbol M}) -
{\sf OPT}_k({\boldsymbol M})|$ and $|{\sf OPT}_k({\boldsymbol M}) -{\sf SDP}({\boldsymbol M})|$ uniformly in $n$.
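As an aside, the rank-constrained value ${\sf OPT}_k({\boldsymbol M})$ of
Eq.~(\ref{eq:OPT.DEF}) is easy to attack numerically by optimizing directly
over the spins ${\boldsymbol{\sigma}}_i$. The projected gradient ascent below is a
heuristic, purely illustrative sketch (it may return only a local optimum)
and plays no role in the proofs.
\begin{verbatim}
# Sketch: heuristic for OPT_k(M) = max sum_ij M_ij <sigma_i, sigma_j>
# over sigma_i in S^{k-1}, via projected gradient ascent on the rows of S.
import numpy as np

def opt_k(M, k, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    S = rng.standard_normal((n, k))
    S /= np.linalg.norm(S, axis=1, keepdims=True)   # rows on the unit sphere
    for _ in range(steps):
        S += lr * 2 * M @ S                         # gradient of <M, S S^T>
        S /= np.linalg.norm(S, axis=1, keepdims=True)
    return np.sum(M * (S @ S.T))                    # <M, X> with X = S S^T
\end{verbatim}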
\subsection{Higher-rank Grothendieck inequalities and
zero-temperature limit}
\label{sec:Gro}
In order to bound the error $|{\sf OPT}_k({\boldsymbol M}) -{\sf SDP}({\boldsymbol M})|$ we develop a
new Grothendieck-type inequality which is of independent interest.
\begin{theorem}\label{thm:Gro}
For $k\ge 1$, let ${\boldsymbol g}\sim{\sf N}(0,{\rm I}_{k}/k)$ be a vector with
i.i.d. centered normal entries with variance $1/k$, and define
$\alpha_k \equiv ({\mathbb{E}}\|{\boldsymbol g}\|_2)^2$.
Then, for any symmetric matrix ${\boldsymbol M}\in{\mathbb{R}}^{n\times n}$, we have the inequalities
\begin{align}
{\sf SDP}({\boldsymbol M}) \ge {\sf OPT}_k({\boldsymbol M}) &\ge\alpha_k {\sf SDP}({\boldsymbol M}) - (1-\alpha_k)\,
{\sf SDP}(-{\boldsymbol M}) \, ,\label{eq:Gro}\\
{\sf OPT}_k({\boldsymbol M}) &\ge \big(2-\alpha_k^{-1}\big){\sf SDP}({\boldsymbol M}) - \big(\alpha_k^{-1}-1\big)\,
{\sf OPT}_k(-{\boldsymbol M}) \label{eq:GroBis}\, .
\end{align}
\end{theorem}
\begin{remark}
The upper bound in Eq.~(\ref{eq:Gro}) is trivial.
Further, it follows from the Cauchy-Schwarz inequality that $\alpha_k\in (0,1)$ for all
$k$. Also $\|{\boldsymbol g}\|^2_2$ is a chi-squared random variable with $k$
degrees of freedom and hence
\begin{align}
\alpha_k = \frac{2\Gamma((k+1)/2)^2}{k\Gamma(k/2)^2} = 1-\frac{1}{2k}
+O(1/k^2)\, .\label{eq:AlphaK}
\end{align}
Substituting in Eq.~(\ref{eq:Gro}) we get, for all $k\ge k_0$ with
$k_0$ a
sufficiently large constant, and assuming ${\sf SDP}({\boldsymbol M})>0$,
\begin{align}
\Big(1-\frac{1}{k}\Big){\sf SDP}({\boldsymbol M}) -\frac{1}{k}\, |{\sf SDP}(-{\boldsymbol M})|
\le {\sf OPT}_k({\boldsymbol M}) \le {\sf SDP}({\boldsymbol M})\, .\label{eq:GroSimple}
\end{align}
In particular, if $|{\sf SDP}(-{\boldsymbol M})|$ is of the same order as ${\sf SDP}({\boldsymbol M})$, we
conclude that ${\sf OPT}_k({\boldsymbol M})$ approximates ${\sf SDP}({\boldsymbol M})$ with a relative error of
order $O(1/k)$.
\end{remark}
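The constant $\alpha_k$ is straightforward to check numerically. The snippet
below compares a Monte Carlo estimate of $({\mathbb{E}}\|{\boldsymbol g}\|_2)^2$ with the
Gamma-function formula (\ref{eq:AlphaK}) and the $1-1/(2k)$ expansion; it is
purely illustrative.
\begin{verbatim}
# Sketch: numerical check of alpha_k = (E ||g||_2)^2 for g ~ N(0, I_k/k).
import numpy as np
from scipy.special import gammaln

def alpha_exact(k):
    # 2 Gamma((k+1)/2)^2 / (k Gamma(k/2)^2), via log-Gamma for stability
    return 2.0 / k * np.exp(2 * (gammaln((k + 1) / 2) - gammaln(k / 2)))

k, N = 8, 10**6
g = np.random.standard_normal((N, k)) / np.sqrt(k)
alpha_mc = np.mean(np.linalg.norm(g, axis=1)) ** 2
print(alpha_mc, alpha_exact(k), 1 - 1 / (2 * k))
\end{verbatim}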
The classical Grothendieck inequality concerns non-symmetric bilinear
forms \cite{grothendieck1996resume}.
A Grothendieck inequality for symmetric matrices was established in
\cite{nemirovski1999maximization,megretski2001relaxations}
(see also \cite{alon2006quadratic} for generalizations)
and states that, for a constant $C$,
\begin{align}
{\sf OPT}_1({\boldsymbol M}) \ge \frac{1}{C\log n}\, {\sf SDP}({\boldsymbol M})\, .
\end{align}
Higher-rank Grothendieck inequalities were developed in the setting
of general graphs in
\cite{briet2010grothendieck,briet2010positive}. However,
constant-factor approximations were not established for the present
problem (which corresponds to the complete graph case in
\cite{briet2010grothendieck}).
Constant factor approximations exist for ${\boldsymbol M}$ positive semidefinite
\cite{briet2010positive}.
We note that Theorem \ref{thm:Gro} implies the inequality of \cite{briet2010positive}.
Using ${\sf SDP}(-{\boldsymbol M})\le -n\,\xi_{\rm min}({\boldsymbol M})$ in Eq.~(\ref{eq:Gro}),
we obtain the inequality of \cite{briet2010positive} for the positive semidefinite
matrix ${\boldsymbol M}-\xi_{\rm min}({\boldsymbol M}){\rm I}$.
On the other hand, the result of \cite{briet2010positive} is too weak
for our applications. We want to apply Theorem
\ref{thm:Gro} --among others-- to ${\boldsymbol M}= {\boldsymbol A}^{\mbox{\tiny cen}}_G$ with ${\boldsymbol A}^{\mbox{\tiny cen}}_G$ the
centered adjacency matrix of $G\sim{\sf G}(n,d/n)$.
This matrix is far from positive semidefinite, and in a dramatic
way: its smallest eigenvalue satisfies $-\xi_{\rm
min}({\boldsymbol A}^{\mbox{\tiny cen}}_G) \approx (\log n/(\log\log
n))^{1/2}\gg {\sf SDP}(-{\boldsymbol A}^{\mbox{\tiny cen}}_G)/n$.
In summary, we could not use the vast literature on Grothendieck-type
inequalities to prove our main result, Theorem \ref{thm:Main}, which
motivated us to develop Theorem \ref{thm:Gro}.
Theorem \ref{thm:Gro} will allow us to bound $|{\sf SDP}({\boldsymbol M})-{\sf OPT}_k({\boldsymbol M})|$
for ${\boldsymbol M}$ either a centered adjacency matrix or a Gaussian matrix. The
next lemma bounds the `smoothing error'
$|\Phi(\beta,k;{\boldsymbol M})-{\sf OPT}_k({\boldsymbol M})|$.
\begin{lemma}\label{lemma:ZeroTemperature}
There exists an absolute constant $C$ such that for any $\varepsilon\in
(0,1]$ the following holds.
If $\|{\boldsymbol M}\|_{\infty\to 2} \equiv
\max\{\|{\boldsymbol M}{\boldsymbol x}\|_{2}:\;\; \|{\boldsymbol x}\|_{\infty}\le 1\}\le L\sqrt{n}$, then
\begin{align}
\Big|\frac{1}{n}\Phi(\beta,k;{\boldsymbol M})-\frac{1}{n}{\sf OPT}_k({\boldsymbol M})\Big|\le
2L\varepsilon\sqrt{k} +\frac{k}{\beta}\log\frac{C}{\varepsilon}\, .\label{eq:TemperatureBound}
\end{align}
\end{lemma}
\subsection{Interpolation}
\label{sec:Interpolation}
Our next step consists in comparing the adjacency matrix of the random graph
$G$ with a suitable Gaussian random matrix, and bounding the error in the corresponding
log-partition function $\Phi(\beta,k;\,\cdot\,)$.
Let us recall the definition of the Gaussian orthogonal ensemble
${\rm GOE}(n)$. We have ${\boldsymbol W}\sim {\rm GOE}(n)$ if ${\boldsymbol W}\in{\mathbb{R}}^{n\times n}$ is symmetric with
$\{W_{i,j}\}_{1\le i\le j\le n}$ independent, with distribution $W_{ii}\sim{\sf N}(0,2/n)$ and $W_{ij}\sim{\sf N}(0,1/n)$ for $i<j$.
We then define, for $\lambda\ge 0$, the following \emph{deformed ${\rm GOE}$} matrix:
\begin{align}
{\boldsymbol B}(\lambda) \equiv \frac{\lambda}{n}\, {\boldsymbol{1}}\bone^{{\sf T}}+ {\boldsymbol W}\, ,\label{eq:Bdefinition}
\end{align}
where ${\boldsymbol W}\sim{\rm GOE}(n)$. The argument $\lambda$ will be omitted if clear from the context.
The next lemma establishes the necessary comparison bound.
Note that we state it for $G\sim{\sf G}(n,a/n,b/n)$ a random graph from the hidden partition model,
but it obviously applies to standard \ER random graphs by setting $a=b=d$.
\begin{lemma}\label{lemma:Interpolation}
Let ${\boldsymbol A}^{\mbox{\tiny cen}}_G= {\boldsymbol A}_G-(d/n){\boldsymbol{1}}\bone^{{\sf T}}$ be the centered adjacency matrix of $G\sim{\sf G}(n,a/n,b/n)$, whereby
$d= (a+b)/2$. Define $\lambda=(a-b)/(2\sqrt{d})$. Then there exists an
absolute constant $n_0$
such that, if $n\ge \max(n_0,(15d)^2)$,
\begin{align}
\left|\frac{1}{n}{\mathbb{E}}\Phi\big(\beta,k;{\boldsymbol A}^{\mbox{\tiny cen}}_G/\sqrt{d}\big)-\frac{1}{n}{\mathbb{E}}\Phi\big(\beta,k;{\boldsymbol B}(\lambda)\big)\right|\le
\frac{2\beta^2}{\sqrt{d}} +\frac{8\lambda^{1/2}}{d^{1/4}}\, .
\end{align}
\end{lemma}
Note that this lemma bounds the difference in expectation. We will use
concentration of measure to transfer this result to a bound holding
with high probability.
Interpolation (or `smart path') methods have a long history in
probability theory, dating back to Lindeberg's beautiful proof of the
central limit theorem \cite{lindeberg1922neue}.
Since our smoothing construction yields a log-partition function $\Phi(\beta,k;{\boldsymbol M})$,
our calculations are similar to certain proofs in statistical mechanics.
A short list of statistical-mechanics inspired results in probabilistic combinatorics includes
\cite{FranzLeone,FranzLeoneToninelli,BGT,panchenko2004bounds,GuerraToninelliDiluted}.
In our companion paper \cite{dembo2015extremal}, we used a similar approach
to characterize the limit value of the minimum bisection of \ER and random regular graphs.
\subsection{SDPs for Gaussian random matrices}
\label{sec:Gaussian}
The last part of our proof analyzes the Gaussian model
(\ref{eq:Bdefinition}). Random matrices of this type have attracted
a significant amount of work within statistics (under the name of
`spiked model') and probability theory (as `deformed
Wigner --or GOE-- matrices'), aimed at characterizing their eigenvalues
and eigenvectors.
A very incomplete list of references includes
\cite{baik2005phase,feral2007largest,capitaine2011free,benaych2012large,bloemendal2013limits,pizzo2013finite,knowles2013isotropic}.
A key phenomenon unveiled by these works is the so-called
\emph{Baik-Ben Arous-Pech\'e (or BBAP) phase transition}. In its
simplest form (and applied to the matrix of
Eq.~(\ref{eq:Bdefinition})) this predicts a phase transition in the
largest eigenvalue of ${\boldsymbol B}(\lambda)$
\begin{align}
\lim_{n\to\infty}\xi_1({\boldsymbol B}(\lambda)) =
\begin{cases}
2 & \mbox{ if $\lambda\le 1$,}\\
\lambda+\lambda^{-1} & \mbox{ if $\lambda> 1$.}
\end{cases}\label{eq:BBAP}
\end{align}
(This limit can be interpreted as holding in probability.)
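The transition (\ref{eq:BBAP}) is also easy to observe numerically: sample
${\boldsymbol B}(\lambda)$ at moderate $n$ and track the top eigenvalue. The snippet
below is an illustrative sketch.
\begin{verbatim}
# Sketch: empirical check of the BBAP transition for B(lambda).
import numpy as np

def top_eig_deformed_goe(n, lam, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W = (W + W.T) / np.sqrt(2)        # GOE(n): Var = 1/n off-diag, 2/n diag
    B = lam / n * np.ones((n, n)) + W
    return np.linalg.eigvalsh(B)[-1]

n = 2000
for lam in [0.5, 1.0, 1.5, 2.0]:
    pred = 2.0 if lam <= 1 else lam + 1 / lam
    print(lam, top_eig_deformed_goe(n, lam), pred)
\end{verbatim}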
Here, we establish an analogue of this result for the SDP value.
\begin{theorem}[SDP phase transition for deformed GOE matrices]\label{thm:Gaussian}
Let ${\boldsymbol B}={\boldsymbol B}(\lambda)\in{\mathbb{R}}^{n\times n}$ be a symmetric matrix distributed
according to the model (\ref{eq:Bdefinition}). Namely ${\boldsymbol B}={\boldsymbol B}^{{\sf T}}$
with $\{B_{ij}\}_{i\le j}$ independent random variables, where
$B_{ij}\sim {\sf N}(\lambda/n,1/n)$ for $1\le i<j\le n$
and $B_{ii}\sim {\sf N}(\lambda/n,2/n)$ for $1\le i\le n$. Then
\begin{enumerate}
\item[$(a)$] If $\lambda\in [0,1]$, then for any $\varepsilon>0$, we have
${\sf SDP}({\boldsymbol B}(\lambda))/n\in [2-\varepsilon,2+\varepsilon]$ with probability
converging to one as $n\to\infty$.
\item[$(b)$] If $\lambda>1$, then there exists $\Delta(\lambda)>0$ such that
${\sf SDP}({\boldsymbol B}(\lambda))/n\ge 2+\Delta(\lambda)$ with probability
converging to one as $n\to\infty$.
\end{enumerate}
\end{theorem}
As mentioned above, we obviously have ${\sf SDP}({\boldsymbol B})/n\le \xi_1({\boldsymbol B})$. The
first part of this theorem (in conjunction with Eq.~(\ref{eq:BBAP}))
establishes that the upper bound is essentially tight for $\lambda\le
1$. On the other hand, we expect the eigenvalue upper bound not to be tight for
$\lambda>1$ \cite{OursReplicas}. Nevertheless, the second part of
our theorem establishes a phase transition taking place at
$\lambda=1$ as for the leading eigenvalue.
\begin{remark}
The phase transition in the leading eigenvalue has a high degree
of universality. In particular, Eq.~(\ref{eq:BBAP}) remains correct if
the model (\ref{eq:Bdefinition}) is replaced by ${\boldsymbol B}' =
\lambda{\boldsymbol v}\bv^{{\sf T}}+{\boldsymbol W}$, with ${\boldsymbol v}$ an arbitrary unit vector.
On the other hand, we expect the phase transition in ${\sf SDP}({\boldsymbol B}')/n$ to
depend --in general-- on the vector ${\boldsymbol v}$, and in particular on how
`spiky' this is.
\end{remark}
\section{Other results and generalizations}
\label{sec:Generalization}
While our work has focused on a relatively simple model, the techniques
presented here allow for several generalizations, which we discuss
briefly below.
\subsection{Estimation}
\label{sec:Estimation}
For the sake of simplicity, we formulated
community detection as a \emph{hypothesis testing} problem. It is
interesting to consider the associated \emph{estimation} problem,
which requires estimating the hidden partition $V =S_1\cup S_2$.
We encode the ground truth using the vector ${\boldsymbol x_0}\in\{+1,-1\}^n$, with
$x_{0,i}=+1$ if $i\in S_1$, and $x_{0,i} =-1$ if $i\in S_2$. An
estimator is a map\footnote{Earlier work sometimes assumes ${\widehat{\boldsymbol x}}:{\mathcal G}_n\to \{+1,-1\}^n$, i.e. forbids the estimate $0$.
For our purposes, the two formulations are equivalent: we can always `simulate' $\hat{x}_i=0$ by letting $\hat{x}_i\in\{+1,-1\}$ uniformly at random.}
${\widehat{\boldsymbol x}}:{\mathcal G}_n\to \{+1,0,-1\}^n$ with ${\mathcal G}_n$ the space
of graphs over $n$ vertices. It is proved in
\cite{mossel2012stochastic} that no estimator is substantially better
than random guessing for $G\sim{\sf G}(n,a/n,b/n)$, with
$\lambda=(a-b)/\sqrt{2(a+b)}<1$. More precisely, for $\lambda<1$, any
estimator achieves vanishing correlation with the ground truth:
$|\<{\widehat{\boldsymbol x}}(G),{\boldsymbol x_0}\>|=o(n)$ with high probability.
We construct a randomized SDP-based estimator $\boldsymbol{\hat{x}}^{\mbox{\tiny{SDP}}}(G)$ as follows
(we will denote expectation and probability with respect to the
algorithm's randomness by ${\mathbb E}_{\mbox{\tiny\rm alg}}(\,\cdot\,)$ and ${\mathbb P}_{\mbox{\tiny\rm alg}}(\,\cdot\,)$; a code sketch of the whole pipeline is given after the list):
\begin{itemize}
\item[$(i)$] Partition the edge set $E=E_1\cup E_2$ by letting
$(i,j)\in E_2$ independently for each edge $(i,j)\in E$, with
probability ${\mathbb P}_{\mbox{\tiny\rm alg}}\big((i,j)\in E_2\big)=\delta_n/(1+\delta_n)$, $\delta_n= n^{-1/2}$, and
$(i,j)\in E_1$ otherwise. Denote by $G_1=(V,E_1)$, and $G_2=(V,E_2)$
the resulting graphs.
\item[$(ii)$] Compute an optimizer ${\boldsymbol X}_*$ of the SDP
(\ref{eq:SDP.DEF}), ${\boldsymbol M}={\boldsymbol A}^{\mbox{\tiny cen}}_{G_1}$ (i.e. a matrix ${\boldsymbol X}_*\in{\sf PSD}_1(n)$ such
that $\<{\boldsymbol A}^{\mbox{\tiny cen}}_{G_1},{\boldsymbol X}_*\> = {\sf SDP}({\boldsymbol A}^{\mbox{\tiny cen}}_{G_1})$).
\item[$(iii)$] Compute the eigenvalue decomposition ${\boldsymbol X}_*=
\sum_{i=1}^n\xi_i{\boldsymbol v}_i{\boldsymbol v}_i^{{\sf T}}$, and let ${\boldsymbol v}_i =
(v_{i,1},v_{i,2},\dots,v_{i,n})$ denote the $i$-th eigenvector. For each $i,j\in [n]$ define ${\widehat{\boldsymbol x}}^{(i,j)}\in\{+1,0,-1\}^n$
by $\widehat{x}^{(i,j)}_\ell = {\rm sign}(v_{i,\ell})$ if $|v_{i,\ell}|\ge |v_{i,j}|$
and $\widehat{x}^{(i,j)}_\ell = 0$ otherwise. (In words, ${\widehat{\boldsymbol x}}^{(i,j)}$ is obtained from ${\boldsymbol v}_i$ by zeroing entries
with magnitude below $|v_{i,j}|$ and taking the sign of those above.)
\item[$(iv)$] Select $(I,J) = \arg\max_{i,j\in [n]}\<{\widehat{\boldsymbol x}}^{(i,j)},{\boldsymbol A}_{G_2}{\widehat{\boldsymbol x}}^{(i,j)}\>$, and return $\boldsymbol{\hat{x}}^{\mbox{\tiny{SDP}}}(G) = {\widehat{\boldsymbol x}}^{(I,J)}$.
\end{itemize}
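The following Python sketch implements the four steps above. The SDP solver
(CVXPY) and the naive scan over all candidate pairs $(i,j)$ are illustrative
choices, adequate only for small $n$; they are not prescribed by the analysis.
\begin{verbatim}
# Sketch of the randomized SDP-based estimator (steps (i)-(iv) above).
import cvxpy as cp
import numpy as np

def sdp_estimator(A, d, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # (i) split the edges: each edge lands in E_2 w.p. delta_n/(1+delta_n)
    delta_n = n ** (-0.5)
    coin = np.triu(rng.random((n, n)) < delta_n / (1 + delta_n), 1)
    coin = coin | coin.T
    A2 = A * coin
    A1 = A - A2
    # (ii) optimizer X_* of the SDP with M the centered adjacency of G_1
    M = A1 - d / n * np.ones((n, n))
    X = cp.Variable((n, n), PSD=True)
    cp.Problem(cp.Maximize(cp.trace(M @ X)), [cp.diag(X) == 1]).solve()
    # (iii)-(iv) threshold each eigenvector and score on the held-out edges
    _, V = np.linalg.eigh(X.value)             # columns are eigenvectors v_i
    best, best_val = None, -np.inf
    for i in range(n):
        v = V[:, i]
        for thr in np.abs(v):                  # thresholds |v_{i,j}|, j in [n]
            xh = np.where(np.abs(v) >= thr, np.sign(v), 0.0)
            val = xh @ A2 @ xh
            if val > best_val:
                best, best_val = xh, val
    return best                                # entries in {+1, 0, -1}
\end{verbatim}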
The next result implies that --for large bounded average degree $d$--
this estimator has a nearly optimal threshold.
\begin{theorem}\label{thm:Estimation}
Let $G\sim{\sf G}(n,a/n,b/n)$ and assume, for some $\varepsilon>0$,
$\lambda=(a-b)/\sqrt{2(a+b)} \ge 1+\varepsilon$.
Then there exist $\Delta_{\mbox{\tiny\rm est}}=\Delta_{\mbox{\tiny\rm est}}(\varepsilon)>0$ and $d_* =
d_*(\varepsilon)>0$ such that, for all $d\ge d_*(\varepsilon)$
\begin{align}
{\mathbb{P}}\left(\frac{1}{n}|\<\boldsymbol{\hat{x}}^{\mbox{\tiny{SDP}}}(G),{\boldsymbol x_0}\>|\ge
\Delta_{\mbox{\tiny\rm est}}(\varepsilon)\right) \ge 1-C\, e^{-n^{1/2}/C}\, ,
\end{align}
with ${\mathbb{P}}(\,\cdot\,)$ denoting probability with respect to both the
algorithm's randomness and the graph $G$, and $C=C(\varepsilon)$ a constant.
\end{theorem}
\subsection{Robustness}
Consider the problem of testing whether the
graph $G$ has a community structure, i.e. whether
$G\sim{\sf G}(n,a/n,b/n)$ or $G\sim{\sf G}(n,d/n)$, $d=(a+b)/2$.
The next result establishes that the SDP-based test of Section
\ref{sec:MainPartition} is robust with respect to adversarial
perturbations of these models. Namely, an adversary can arbitrarily
modify $o(n)$ edges of these graphs, without changing the detection threshold.
\begin{corollary}\label{coro:Robustness}
Let ${\mathbb{P}}_0$ be the law of $G\sim{\sf G}(n,d/n)$, and ${\mathbb{P}}_1$ be the law
of $G\sim{\sf G}(n,a/n,b/n)$. Let $\widetilde{\mathbb{P}}_0$, $\widetilde{\mathbb{P}}_1$ be any two
distributions over graphs with vertex set $V=[n]$. Assume that, for each
$a\in \{0,1\}$, the following happens: there exists a coupling
${\mathbb{Q}}_a$ of ${\mathbb{P}}_a$ and $\widetilde{\mathbb{P}}_a$ such that, if $(G,\widetilde{G})\sim
{\mathbb{Q}}_a$, then $|E(G)\triangle E(\widetilde{G})|=o(n)$ with high probability.
Then, under the same assumptions of Theorem \ref{thm:SDP_Test},
the SDP-based test (\ref{eq:TestDef}) distinguishes $\widetilde{\mathbb{P}}_0$ from
$\widetilde{\mathbb{P}}_1$ with error probability vanishing as $n\to\infty$.
\end{corollary}
By comparison, spectral methods such as the one of
\cite{bordenave2015non} appear to be fragile to
an adversarial perturbation of $o(n)$ edges \cite{OursReplicas}.
\subsection{Multiple communities}
The hidden partition model of
Eq.~(\ref{eq:HiddenPart}) can be naturally generalized to the case of
$r>2$ hidden communities. Namely, we define the distribution
${\sf G}_r(n,a/n,b/n)$ over graphs as follows. The
vertex set $[n]$ is partitioned uniformly at random into $r$ subsets
$S_1$, $S_2$, \dots, $S_r$ with
$|S_i|=n/r$. Conditional on this partition, edges are independent with
\begin{align}
{\mathbb{P}}_1\big((i,j)\in E|\{S_\ell\}_{\ell\le r}\big) = \begin{cases}
a/n & \mbox{ if $\{i,j\}\subseteq S_\ell$ for some $\ell\in[r]$,}\\
b/n & \mbox{ otherwise.}
\end{cases}\label{eq:rHiddenPart}
\end{align}
The resulting graph has average degree $d = [a+(r-1)b]/r$.
The case studied above (hidden bisection) is recovered by setting
$r=2$ in this definition: ${\sf G}(n,a/n,b/n)= {\sf G}_2(n,a/n,b/n)$.
Of course, this model can be generalized further by allowing for $r$
unequal subsets, and a generic $r\times r$ matrix of edge
probabilities \cite{holland,abbe2015community,hajek2015achieving}.
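For completeness, a sampler for ${\sf G}_r(n,a/n,b/n)$ takes only a few lines;
the snippet below (which assumes that $r$ divides $n$) is an illustrative
sketch.
\begin{verbatim}
# Sketch: sampling the r-community model G_r(n, a/n, b/n); assumes r | n.
import numpy as np

def sample_hidden_partition(n, r, a, b, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.permutation(np.repeat(np.arange(r), n // r))  # |S_l| = n/r
    same = labels[:, None] == labels[None, :]
    P = np.where(same, a / n, b / n)      # within/between edge probabilities
    U = np.triu(rng.random((n, n)) < P, 1)
    A = (U | U.T).astype(float)           # symmetric adjacency, no self-loops
    return A, labels
\end{verbatim}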
Given a single realization of the graph $G$, we would like to test
whether $G\sim{\sf G}(n,d/n)$ (hypothesis $0$), or
$G\sim{\sf G}_r(n,a/n,b/n)$ (hypothesis $1$).
We use the same SDP relaxation already introduced in
Eq. (\ref{eq:SDP.DEF}), and the test $T(\,\cdot\,;\delta)$ defined in Eq.~(\ref{eq:TestDef}).
This is particularly appealing because it does not require knowledge
of the number of communities $r$.
\begin{theorem}\label{thm:SDP_Test_r}
Consider the problem of distinguishing $G\sim{\sf G}_r(n,a/n,b/n)$ from
$G\sim{\sf G}(n,d/n)$, $d = (a+(r-1)b)/r$.
Assume, for some $\varepsilon>0$,
\begin{align}
\frac{a-b}{\sqrt{r(a+(r-1)b)}} \ge 1+\varepsilon\, .\label{eq:ConditionFactor1_r}
\end{align}
Then there exist $\delta_*=\delta_*(\varepsilon,r)>0$ and $d_* = d_*(\varepsilon,r)>0$
such that the following holds. If $d\ge d_*$, then the
SDP-based test $T(\,\cdot\,;\delta_*)$ succeeds
with error probability at most
$Ce^{-n/C}$ for $C=C(a,b,r)$ a constant.
\end{theorem}
\begin{remark}
In earlier work, a somewhat tighter relaxation is sometimes used,
including the additional constraint $X_{ij}\ge -(r-1)^{-1}$ for all
$i\neq j$. The simpler relaxation used here is however sufficient for
proving Theorem \ref{thm:SDP_Test_r}.
\end{remark}
\begin{remark}
The threshold established in Theorem \ref{thm:SDP_Test_r} coincides (for large degrees) with the
one of spectral methods using non-backtracking random walks \cite{bordenave2015non}. However, for $r\ge 4$
there appears to be a gap between
general statistical tests and what is achieved by polynomial time algorithms \cite{decelle2011asymptotic,chen2014statistical}.
\end{remark}
\subsection*{Acknowledgments}
A.M. was partially supported by NSF grants CCF-1319979 and DMS-1106627 and the
AFOSR grant FA9550-13-1-0036.
S.S. was supported by
the William R. and Sara Hart Kimball Stanford Graduate Fellowship.
\newpage
\bibliographystyle{amsalpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
System identification refers to the problem of building mathematical models and approximating governing equations using only observed data from the system. Governing laws and equations have traditionally been derived from expert knowledge and first principles, however in recent years the large amount of data available resulted in a growing interest in data-driven models and approaches for automated dynamical systems discovery. The applications of system identification include any system where the inputs and outputs can be measured, such as industrial processes, control systems, economic data and financial systems, biology and the life sciences, medicine, social systems, and many more (see for instance \cite{billings2013nonlinear} for more examples of applications).
Examples of frequently used approaches for data-driven discovery of nonlinear differential equations are sparse regression, Gaussian processes, applied Koopmanism and dictionary based approaches, among which are neural networks. Sparse regression approaches are based on a user-determined library of candidate terms from which the most important ones are selected using sparse regression (see for instance \cite{schaeffer2013sparse}, \cite{brunton2016discovering}, \cite{rudy2017data}, \cite{schaeffer2017learning}). These methods provide interpretable results, but they are usually sensitive to noise and require the user to choose an “appropriate” set of basis functions. Identification using Gaussian Processes places a Gaussian prior on the unknown coefficients of the differential equation and infers them via maximum likelihood estimation (see for instance \cite{raissi2017machine}, \cite{raissi2018hidden}, \cite{raissi2018numerical}). The Koopman approach is based on the idea that nonlinear system identification in the state space is equivalent to linear identification of the Koopman operator in the infinite-dimensional space of observables. The power of the Koopman approach is that it allows one to study nonlinear systems using traditional techniques in numerical linear algebra.
However, since the Koopman operator is infinite-dimensional, in practice one computes a projection of the Koopman operator onto a finite-dimensional subspace of the observables. This approximation may result in models of very high dimension and has proven challenging in practical applications (see for instance \cite{budivsic2012applied}, \cite{nathan2018applied}, \cite{lusch2018deep}). In this work we use a different approach based on neural networks.
Since neural networks are universal approximators, they are a natural choice for nonlinear system identification: depending on the architecture and on the properties of the loss function, they can be used as sparse regression models, they can act as priors on unknown coefficients or completely determine an unknown differential operator (see for instance \cite{narendra1992neural}, \cite{wang2006fully}, \cite{ogunmolu2016nonlinear}, \cite{raissi2018deep}, \cite{berg2019data}, \cite{raissi2018multistep}, \cite{champion2019data}, \cite{qin2019data}, \cite{NEGRINI2021110549}). The common goal among all such methods is learning a nonlinear and potentially
multivariate mapping $f$, the right-hand side of the differential equation:
\begin{equation}\label{my_eq}
\dot{x}(t) = f(t,x)
\end{equation}
that can be used to predict the future system states given a set of data describing the present and past states.
Two main approaches can be used to approximate the function $f$ with a neural network. The first approach aims at approximating the function $f$ directly, like we did in our previous paper \cite{NEGRINI2021110549}. In that work, inspired by the work of Oberman and Calder in \cite{oberman2018lipschitz}, we used a Lipschitz regularized neural network to approximate the RHS of the ODE (\ref{my_eq}), directly from observations of the state vector $x(t)$. The target data for the network is made of discrete approximations of the velocity vector $\dot{x}(t)$, which act as a prior for $f$. To generate the target data we first denoise the trajectory data using spline interpolation, then we approximate the velocity vector using the numerical derivative of the splines. In the rest of the paper we refer to this method as the \textit{splines method}. One limitation of this approach is that, in order to obtain accurate approximations of the function $f$, one needs to obtain reliable target data, approximations of the velocity vector, from the observations of $x(t)$. This proved to be hard when a large amount of noise (more than 2\%) was present in the data or when splines could not approximate the trajectories correctly. When instead we could obtain high quality target data, we empirically proved that, thanks to the Lipschitz regularization, our method was robust to noise and able to provide an accurate approximation of the function $f$.\\
The second approach aims at approximating the function $f$ implicitly by expressing the differential equation (\ref{my_eq}) in integral form and enforcing that the network that approximates $f$ satisfies an appropriate update rule. This is the approach used in \cite{raissi2018multistep}, which we refer to as \textit{multistep method}, where the authors train the approximating network to satisfy a linear multistep method. An advantage of this approach over the previous one is that the target data used to train the multistep network is composed only of observations of the state vector $x(t)$. However, noise in the observations of $x(t)$ can still have a strong impact on the quality of the network approximation of $f$. \\
Later on we will compare these methods with our proposed approach.
In this work we build on the second approach and introduce a new idea to overcome the limitations of the methods mentioned above. Similarly to the multistep method, we express the differential equation in integral form and train the network that approximates $f$ to satisfy the Euler update rule (with minimal modifications one can use linear multistep methods as well). This implicit approach overcomes the limitations of the splines method, whose results were strongly dependent on the quality of the velocity vector approximations used as target data. Differently from the multistep method, our proposed approach is based on a Lipschitz regularized ensemble of neural networks and it is able to overcome the sensitivity to noise.
More specifically, we consider the system of ODEs (\ref{my_eq})
where $x(t) \in \mathbb{R}^d$ is the state vector of a $d$-dimensional dynamical system at time $t \in I \subset \mathbb{R}$, $\dot{x}(t) \in \mathbb{R}^d$ is the first order time derivative of $x(t)$ and $f: \mathbb{R}^{1+d}\rightarrow \mathbb{R}^d$ is a vector-valued function right-hand side of the differential equation. We approximate the unknown function $f$ with an ensemble of neural networks. A neural network ensemble is a learning paradigm where a finite number of networks are jointly used to solve a problem. An ensemble algorithm is generally constructed in two steps: first multiple component neural networks are trained to produce component predictions; then they are combined to produce a final prediction (for a more precise explanation see \cite{krogh1996learning}). In their work \cite{hansen1990neural}, Hansen and Salamon showed that the generalization ability of a neural network architecture can be significantly improved through ensembling. This is the reason why we use an ensemble of neural networks, instead of only one network as it was done in \cite{raissi2018multistep}.
Our proposed ensemble architecture is composed of two blocks: the first, which we call \textit{target data generator} is an ensemble of neural networks whose goal is to produce accurate velocity vector approximations using only observations of $x(t)$. To train this ensemble of networks, we express equation (\ref{my_eq}) in integral form and use Euler method to predict the solution at every successive time step using at each iteration a different neural network as a prior for $f$. If $M$ denotes the number of time steps at which $x(t)$ is observed, then the procedure described above yields $M-1$ time-independent networks, each of which approximates the velocity vector $\dot{x}(t)$ for a fixed time $t$. The second block of the ensemble architecture is the \textit{interpolation network}. This is a Lipschitz regularized feed forward network $N$ as defined in \cite{NEGRINI2021110549}. This network takes as input an observation of the time $t$ and of the state vector $x(t)$ and uses as target data the approximations of the velocity vector generated by the target data generator. Once trained, the interpolation network provides the desired approximation of the RHS function $f$ on its domain.
Finally, we want to comment on our choice of using ensembles of neural networks as compared to the other methods listed above for system identification. In our experience and from a literature review, neural networks are a good choice for function approximation because of their ability to learn and model non-linear and complex functions as well as to generalize to unseen data. For example, it has been shown empirically in \cite{choon2008functional} that neural networks outperform polynomial regression when complicated interactions are present in the function to approximate. We also show in Section \ref{comparisonR} that, for noisy data, our Lipschitz regularized ensemble approach outperforms the splines and multistep methods as well as polynomial regression and the dictionary based method SINDy (Sparse Identification of Nonlinear Dynamics) \cite{brunton2016discovering}.\\
Since neural networks are universal approximators, we do not need any prior information about the order or about the analytical form of the differential equation as in \cite{schaeffer2013sparse}, \cite{rudy2017data}, \cite{schaeffer2017learning}, \cite{sahoo2018learning}, \cite{hasan2020learning}; this allows us to accurately recover very general and complex RHS functions even when no information on the target function is available.
\\Since the proposed ensemble method is based on a weak notion of solution using integration (see formula (\ref{weakEq})), it can be used to reconstruct non-smooth RHS functions (see Example \ref{nsRHS}). This is especially an advantage over models that rely on the notion of a classical solution, like the splines method \cite{NEGRINI2021110549}. The ability of our proposed method to accurately approximate both smooth and non-smooth functions makes it an extremely valuable approach when working with real-world data.\\
Another advantage of our ensemble approach is its ability to overcome sensitivity to noise and avoid overfitting. This is due to the fact that we use an ensemble of networks to produce our predictions as well as to the Lipschitz regularization term in the loss function of the interpolation network. The ability of our method to overcome sensitivity to noise is especially an advantage over works that use finite differences and polynomial approximation to extract governing equations from data (\cite{brunton2016discovering}, \cite{rudy2017data}), over the Koopman based methods where noise in the data can impact the quality of the finite dimensional approximation of the Koopman operator (\cite{sinha2019robust}, \cite{haseli2019approximating}), as well as over the multistep method \cite{raissi2018multistep}.\\
Finally, our model is defined componentwise so it can be applied to systems of equations of any dimension, making it a valuable approach when dealing with high dimensional real-world data. The flexibility and noise robustness of our approach come, however, at the cost of loss of interpretability and increased computational cost. Training a neural network ensemble is more computationally expensive than training only one neural network, as is the case in \cite{raissi2018multistep} and \cite{NEGRINI2021110549}, or than using polynomial regression or SINDy. Moreover, the learned ensemble is usually less interpretable than a sparse model based on a dictionary of elementary functions, especially when the number of network learnable parameters is large. However, the trained ensemble produces very accurate results and implements a function which can be easily used in future computations, for example to generate new trajectories like we do in Section \ref{num_ex}.
The paper is organized as follows: in Section \ref{architecture} we describe the ensemble architecture and the loss function used in the training; in Section \ref{data_eval} we describe how the synthetic data was generated, the metrics used to evaluate our method and we precisely define the generalization gap; in Section \ref{num_ex} we propose numerical examples, we show how the ensemble method is an improvement over our previous method and we compare it with other methods for system identification. In Section \ref{discussion} we discuss our numerical results. Finally, in the conclusion Section we summarize our results and describe possible future directions of research.
\section{The Ensemble Architecture}\label{architecture}
In this section we describe the architecture used in the experiments.
\medskip\\
In this work, we investigate the problem of approximating unknown governing equations, i.e. approximating the vector-valued RHS $f(t,x)$ of a system of differential equations $\dot{x}(t) = f(t,x)$, directly from discrete observations of the state vector $x(t) \in \mathbb{R}^d$ using an ensemble of feed forward networks; see Figure \ref{fig:ensemble} for a representation of the architecture. \\
We explained before that one limitation of our previously proposed method for system identification (see \cite{NEGRINI2021110549} for the details) is that we used as target data for the network discrete approximations of the velocity vector computed using difference quotients: these provided good approximations of the velocity vector only when small amounts of noise (maximum 2\%) were present in the data. In this work we propose an ensemble approach which is able to provide reliable approximations of the velocity vector from the state vector observations, even when large amounts of noise are present in the data (up to 10\% of noise).
Specifically, the ensemble architecture is composed of two blocks. The first one is the \textit{target data generator}. This is a family of neural networks whose goal is to produce accurate velocity vector approximations using only observations of the state vector $x(t)$. For each time instant $t_j$, we define a neural network $N_j$ which takes as input the state vector at time $t_j$, and it is trained to satisfy the Euler update rule to produce an approximation of the state vector at the next time instant. This process implicitly forces the neural network $N_j$ to produce an approximation of the velocity vector at time $t_j$, $\dot{x}(t_j)$. Finally, once all the networks $N_j$ are trained, they collectively provide a discrete approximation of the velocity vector (we use $\widetilde{\quad}$ to indicate an approximation of the quantity under the tilde):
\begin{equation}
\begin{bmatrix}
N_1(x(t_1))\\
N_2(x(t_2))\\
\vdots\\
N_{M-1}(x(t_{M-1}))
\end{bmatrix} =
\begin{bmatrix}
\widetilde{\dot{x}(t_1)}\\
\widetilde{\dot{x}(t_2)}\\
\vdots\\
\widetilde{\dot{x}(t_{M-1})}
\end{bmatrix} =: \widetilde{\dot{x}(t)}
\end{equation}
The second block, which we call \textit{interpolation network}, is a Lipschitz regularized feed forward network $N_{int}$ as defined in \cite{NEGRINI2021110549}. This network takes as input an observation of the time instant $t$ and of the state vector $x(t)$ and tries to match the target data, $\widetilde{\dot{x}(t)}$, which is made of approximations of the velocity vector generated by the target data generator (first block of the ensemble). Once trained, the interpolation network provides the desired approximation of the RHS function $f$ on its domain: $N_{int}(t,x) \approx f(t,x)$.
The pipeline for the experiments is as follows: the first step is to train the target data generator to produce reliable velocity vector approximations for the interpolation network. Each network $N_j$ produces an approximation of the velocity vector at time $t_j$, $\dot{x}(t_j)$. These discrete approximations of the velocity vector are then used as target data to train the interpolation network. Once the interpolation network is trained, it produces the desired approximation of the function $f(t,x)$.
\begin{figure}[H]
\centering
\includegraphics[width = 1\linewidth]{images/archit_esemble.PNG}
\caption{A representation of the ensemble architecture}
\label{fig:ensemble}
\end{figure}
\subsection{The Target Data Generator}
The target data generator is a family of neural networks whose goal is to produce reliable velocity vector approximations which will be used as target data for the interpolation network.\\
The data is selected as follows: given time instants $t_1,\dots, t_M$ and initial conditions $x_1(0), \dots, x_K(0) \in \mathbb{R}^d$, define $$x_i(t_j) \in \mathbb{R}^d, \quad i = 1,\dots,K, \quad j = 1,\dots, M$$ to be an observation of the state vector $x(t)$ at time $t_j$ for initial condition $x_i(0)$.
For each time instant $t_j, \; j = 1, \dots, M-1$ we train a neural network $N_j(x(t_j))$ which approximates the function $f(t,x)$ at time instant $t_j$. More specifically, after training, each neural network $N_j(x(t_j))$ satisfies:
$$ \Delta t \; N_j(x_i(t_j)) + x_i(t_j) \approx x_i(t_{j+1}), \quad \forall i = 1,\dots,K $$
In other words, we express the original ODE $\dot{x} = f(t,x)$ in integral form and use Euler's method to predict the solution at every successive time step using, at each iteration, a different neural network as a prior for $f$.
The data for the target data generator is defined as follows: for $j = 1, \dots, M-1$ the network data used to train the $j^{th}$ network are couples $(X_i^j, Y_i^j), \; i = 1,\dots, K$, where $X_i^j$ is the input and $Y_i^j$ is the target and $X_i^j$, $Y_i^j$ are defined as follows:
\begin{align*}
&X_i^j = (x_i(t_j)) \in \mathbb{R}^{d},\\
&Y_i^j = (x_i(t_{j+1})) \in \mathbb{R}^{d}.
\end{align*}
The data is separated into training and testing sets made respectively of 80\% and 20\% of the data. A representation of the data for the interpolation network is provided in Figure \ref{fig:data_gener}.
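As an illustration, the snippet below assembles the pairs $(X_i^j, Y_i^j)$ from an array of observed trajectories; the array layout (trajectories indexed by initial condition, time and component) is an assumption of this sketch.
\begin{verbatim}
# Sketch: pairs (X_i^j, Y_i^j) for the j-th component network.
# Assumed layout: x_obs[i, j] = x_i(t_j), so x_obs has shape (K, M, d).
import numpy as np

def generator_pairs(x_obs, j):
    X = x_obs[:, j, :]          # inputs  x_i(t_j),     shape (K, d)
    Y = x_obs[:, j + 1, :]      # targets x_i(t_{j+1}), shape (K, d)
    return X, Y
\end{verbatim}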
\begin{figure}[H]
\centering
\includegraphics[width = 1\linewidth]{images/Traget_data_gen.png}
\caption{A representation of the data for the target data generator: the inputs are observations of the state vector $x$ for a fixed time $t_j$, $x(t_j)$; the target data are observations of the state vector $x$ at the next time instant $t_{j+1}$, $x(t_{j+1})$. The goal is to train a network $N_j$ which approximates the velocity vector at time $t_j$: this is a prior for the unknown function $f$ at time $t_j$, $f(t_j, x)$.}
\label{fig:data_gener}
\end{figure}
Each network $N_j$ is a feedforward network with $L_j$ layers and Leaky ReLU activation functions. We apply the network to each training input $X_i^j$ and we seek the network parameters for which the output best matches the corresponding $Y_i^j$.
\smallskip\\
For $j = 1, \dots, M-1$ and $h = 1,\,2,\,3$ (we illustrate the case $L_j = 3$), define the weight matrices $W_h^j \in \mathbb{R}^{n_h \times n_{h-1}}$ and bias vectors $b_h^j \in \mathbb{R}^{n_h}$, where $n_h \in \mathbb{N}$ and $n_0 = n_3 = d$. Let $\theta^j = \{W^j,b^j\}$ denote the model parameters.\\
As activation function, we use a Leaky Rectified Linear Unit (LReLU) with parameter $\varepsilon = 0.01$:
\begin{align*}
\sigma(x) = \text{LReLU}(x) = \begin{cases}
\varepsilon x &\text{if } x<0;\\
x &\text{if } x\geq0.
\end{cases}
\end{align*}
For an input $X_i^j \in \mathbb{R}^{d}$ and parameters $\theta^j$ we have:
$$N_j(X_i^j, \theta^j) = W_3^j\,\sigma ( W_2^j\, \sigma ( W_1^j X_i^j + b_1^j) +b_2^j) + b_3^j \; \in \mathbb{R}^{d}.$$
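For concreteness, a minimal PyTorch realization of such a three-layer network with LReLU activations could look as follows (the layer width is a placeholder, to be chosen by cross validation as in the experiments below):
\begin{verbatim}
import torch.nn as nn

class TargetNet(nn.Module):
    """Feedforward network N_j : R^d -> R^d with three affine layers."""
    def __init__(self, d, width=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, width),      # W_1^j, b_1^j
            nn.LeakyReLU(0.01),       # sigma with epsilon = 0.01
            nn.Linear(width, width),  # W_2^j, b_2^j
            nn.LeakyReLU(0.01),
            nn.Linear(width, d),      # W_3^j, b_3^j (no final activation)
        )

    def forward(self, x):
        return self.net(x)
\end{verbatim}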
The loss function $L_j$ used to train each network $N_j$ forces it to satisfy the Euler update rule, i.e.\ to produce an approximation of the state vector at the next time instant:
\begin{equation}\label{weakEq}
\Delta t \; N_j(x_i(t_j)) + x_i(t_j) \approx x_i(t_{j+1})
\end{equation}
Specifically we define:
\begin{equation*}
L_j(\theta^j) =\frac{1}{K}\sum_{i=1}^K \|\Delta t \; N_j(x_i(t_j),\theta^j) + x_i(t_j)- x_i(t_{j+1}) \|_2^2, \quad j = 1,\dots, M-1,
\end{equation*}
where $\theta^j$ are the network parameters.
The predicted approximation of the function $f(t,x)$ at time $t_j$ is then given by the network $N_j$ corresponding to $\underset{\theta^j}{\mathrm{argmin}}\, L_j(\theta^j)$.
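Putting the pieces together, a sketch of the training loop for one network $N_j$ follows; the optimizer, learning rate and epoch count are illustrative assumptions, and \texttt{x\_j}, \texttt{x\_jp1} are PyTorch tensors holding $x_i(t_j)$ and $x_i(t_{j+1})$ for all $i$:
\begin{verbatim}
import torch

def train_target_net(net, x_j, x_jp1, dt, epochs=2000, lr=1e-3):
    """Minimize L_j: mean squared Euler residual over the K trajectories."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        residual = dt * net(x_j) + x_j - x_jp1    # Euler update rule
        loss = (residual ** 2).sum(dim=1).mean()  # squared 2-norm, mean over i
        loss.backward()
        opt.step()
    return net
\end{verbatim}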
\subsection{The Interpolation Network}
The interpolation network $N_{int}$ is a Lipschitz regularized neural network which takes as input a time $t$ and an observation of the state vector at time $t$, $x(t)$, and uses as target data the approximation of the velocity vector $\widetilde{\dot{x}(t)}$ given by the target data generator (this acts as a prior for the unknown function $f(t,x)$). Once trained, the interpolation network $N_{int}$ provides an approximation of the RHS function $f$ on its domain, that is, $N_{int}(t,x) \approx f(t,x)$.
The data used by the interpolation network are couples $(X_h, Y_h), \; h = j + (i-1)(M-1) = 1, \dots, K(M-1)$, where the input $X_h$ and the target $Y_h$ are defined as follows:
\begin{align*}
&X_h = (t_j, \; x_i(t_j)) \in \mathbb{R}^{1+d},\\
&Y_h = \widetilde{\dot{x}_i(t_j)} \in \mathbb{R}^{d}.
\end{align*}
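Under the same array conventions as above, the training set for the interpolation network can be assembled by querying the trained ensemble; a minimal sketch (all names are our own):
\begin{verbatim}
import numpy as np
import torch

def interpolation_dataset(X, t, nets):
    """Inputs (t_j, x_i(t_j)) and targets N_j(x_i(t_j)) for all i, j."""
    inputs, targets = [], []
    for j, net in enumerate(nets):  # one trained network per time instant
        x_j = torch.as_tensor(X[:, j, :], dtype=torch.float32)
        with torch.no_grad():
            v_j = net(x_j).numpy()  # velocity approximation at t_j
        t_col = np.full((X.shape[0], 1), t[j])
        inputs.append(np.hstack([t_col, X[:, j, :]]))
        targets.append(v_j)
    return np.vstack(inputs), np.vstack(targets)
\end{verbatim}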
The data is separated into training and testing sets made respectively of 80\% and 20\% of the data. A representation of the data for the interpolation network is provided in Figure \ref{fig:data_interp}.
\begin{figure}[H]
\centering
\includegraphics[width = 1\linewidth]{images/Interp_net_data.png}
\caption{A representation of the data for the interpolation network: the inputs are observations of a time $t$ and of the state vector $x(t)$, the target are discrete approximations of the velocity vector $\dot{x}(t)$ which act as a prior for the values of the unknown function $f(t,x)$. The goal is to reconstruct the function $f$ on its domain using the Lipschitz regularized neural network $N_{int}$ only from the discrete approximations of the velocity vector.}
\label{fig:data_interp}
\end{figure}
The interpolation network is a feedforward neural network with $L$ layers and Leaky ReLU activation functions. We apply the network to each training input $X_h$ and we seek the network parameters for which the output best matches the corresponding $Y_h$.\\
For $ i = 1, \dots, L$ define the weight matrices $W_{int}^i \in \mathbb{R}^{\;n_i \times n_{i-1}}$ and bias vectors $b_{int}^i \in \mathbb{R}^{n_i}$ where $n_i \in \mathbb{N}, n_0 = 1+d, n_L = d$. Let $\theta_{int} = \{W_{int},b_{int}\}$ be the model parameters.\\
For an input $X_h \in \mathbb{R}^{1+d}$ and parameters $\theta_{int}$ we have:
$$N_{int}(X_h, \theta_{int}) = W_{int}^L(\dots W_{int}^3\sigma (W_{int}^2\sigma ( W_{int}^1X_h +b_{int}^1) +b_{int}^2)\dots) + b_{int}^L \; \in \mathbb{R}^{d}.$$
The loss function minimized to train the interpolation network contains two terms. The first one is the Mean Squared Error (MSE) between the network output and the target data: this forces the network predictions to be close to the observed data. The second term is a Lipschitz regularization term which forces the Lipschitz constant of the network $N_{int}$ to be small. In contrast with the most common choices of regularization terms found in the Machine Learning literature, we do not impose an explicit penalty on the network parameters; instead, we regularize the geometric mapping properties of the network through its Lipschitz constant. More details about this regularization term can be found in our paper \cite{NEGRINI2021110549}.
Specifically, the loss function has the form:
$$L(\theta_{int}) = \frac{1}{K(M-1)}\sum_{h=1}^{K(M-1)} \| Y_h - N_{int}(X_h, \theta_{int}) \|^2_2 + \alpha\, \text{Lip}(N_{int}),$$
where $\| \cdot \|_2$ is the $L^2$ norm, $\alpha >0$ is a regularization parameter and $\text{Lip}(N_{int})$ is the Lipschitz constant of the network $N_{int}$. The predicted approximation of the function $f(t,x)$ is given by the network $N_{int}$ corresponding to $\underset{\theta_{int}}{\mathrm{argmin}}\, L(\theta_{int})$.\\
The Lipschitz constant of the network $N_{int}$, $\text{Lip}(N_{int})$, is computed as:
$$\text{Lip}(N_{int}) = \|\nabla N_{int}\|_{L^{\infty}(\mathbb{R}^{d+1})}$$
where the gradient of the network with respect to the input $X_h$ is computed exactly using \texttt{autograd} \cite{NEURIPS2019_9015}.
We note that controlling the Lipschitz constant of the network $N_{int}$ yields control over the smoothness and rate of change of the approximating function.
In the examples we approximate the Lipschitz constant of the interpolation network following an approach similar to the one described in \cite{calliess2015bayesian}: a finite set $S$ of points is selected randomly in the domain of $f$ where the data was generated; the Lipschitz constant is then estimated as the maximum over $S$ of the norm of the gradient of $N_{int}$.
Note that, as empirically shown in \cite{calliess2015bayesian}, the larger the cardinality of the set $S$, the better the approximation of the Lipschitz constant. In our experiments, we set the cardinality of $S$ to be 1000.
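A minimal sketch of this estimate for a (componentwise) scalar-valued network follows; the sampling box, given by tensors \texttt{low} and \texttt{high}, and all names are illustrative, and \texttt{create\_graph=True} would be needed in the gradient call to backpropagate through the estimate when it enters the loss:
\begin{verbatim}
import torch

def estimate_lipschitz(net, low, high, n_samples=1000):
    """Max gradient norm of a scalar-output net over random points S."""
    dim = low.numel()
    S = low + (high - low) * torch.rand(n_samples, dim)  # random set S
    S.requires_grad_(True)
    out = net(S).squeeze(-1)                 # shape (n_samples,)
    # Rows of S are independent, so grad of the sum gives row-wise gradients.
    grads, = torch.autograd.grad(out.sum(), S)
    return grads.norm(dim=1).max().item()
\end{verbatim}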
\begin{remark}
For ease of notation, in the rest of the paper we will drop the explicit dependence of the networks on their learnable parameters and will simply write $N_j(X_i^j)$ and $N_{int}(X_h)$.
\end{remark}
\section{Synthetic Data and Model Evaluation}\label{data_eval}
In this section we describe the synthetic data used in the experiments and the metrics we use to evaluate the performance of the ensemble architecture.
\subsection{Data Generation}
In the numerical examples we use synthetic data generated in Python: using the function \texttt{odeint} from the \texttt{scipy} package (\cite{2020SciPy-NMeth}), we solve $\dot{x}(t) = f(t, x(t))$; this provides us with approximations of the state vector $x(t)$ for initial conditions $x_1(0), \dots, x_K(0) \in \mathbb{R}^d$ at time steps $t_1,\dots, t_M$. We perform the experiments on noiseless data and on data with up to $10\%$ of noise. To generate noisy data, we proceed as follows: for each component $x^k(t)$ of the solution $x(t)$ we compute its mean range $M_k$ across trajectories as $$M_k =\frac{1}{K}\left(\sum_{i=1}^K |\max_{j=1,\dots,M} x_i^k(t_j) - \min_{j=1,\dots,M} x_i^k(t_j)|\right).$$
Then, the $5\%$ noisy version of $x_i^k(t_j)$ is given by $$\hat{x}_i^k(t_j) = x_i^k(t_j) + n_{ij}M_k, $$ where $n_{ij}$ is a sample from a normal distribution $\mathscr{N}(0,0.05)$ with mean 0 and variance 0.05. In a similar way we add $10\%$ of noise to the data.
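As a summary, a sketch of this data-generation step in Python follows; the RHS shown is the one from the first example of Section \ref{num_ex}, and the seed and array names are illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(0)

def f(x, t):  # odeint expects the signature f(x, t)
    return x * np.exp(t) + np.sin(x) ** 2 - x

t = np.arange(0.0, 0.8 + 1e-9, 0.04)   # time instants t_1, ..., t_M
x0 = rng.uniform(-3, 3, size=500)      # K initial conditions
X = np.stack([odeint(f, xi, t).ravel() for xi in x0])  # shape (K, M)

# 5% noise: N(0, 0.05) samples (variance 0.05, std sqrt(0.05))
# scaled by the mean range M_k across trajectories.
M_k = np.mean(X.max(axis=1) - X.min(axis=1))
X_noisy = X + rng.normal(0.0, np.sqrt(0.05), size=X.shape) * M_k
\end{verbatim}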
\subsection{Model Evaluation}\label{eval}
We use three different metrics to evaluate the performance of the ensemble architecture.
\begin{enumerate}
\item We use the Mean Squared Error (MSE) on test data, which measures the distance of the ensemble prediction from the test data. We note that in a real-world problem the \textit{test error} is the only information accessible to evaluate the performance of the model. We also report the \textit{generalization gap} obtained with and without Lipschitz regularization in the interpolation network. The generalization gap measures the ability of the ensemble network to generalize to unseen data (for a more precise description see Section \ref{gen_gap}).
\item Since we only use synthetic data, we have access to the true RHS function $f(t,x)$. This allows us to compute the relative MSE between the true $f(t,x)$ and the approximation given by the ensemble architecture on arbitrary couples $(t,x)$ in the domain of $f$ (a small computational sketch follows this list). We call this error the \textit{recovery error}. Note that the error obtained in this way may differ from the one obtained using test data, since the test data may be influenced by the noise in the original observations, while here we compare with the true values of the function $f$.
\item Since the neural network ensemble produces a function $N_{int}(t,x)$, it can be used as RHS of a differential equation $\dot{x} = N_{int}(t,x)$. We then solve this differential equation in Python and compute the relative MSE between the solution obtained when using as RHS the ensemble approximation $N_{int}(t,x)$ and when using the true function $f(t,x)$. We call this \textit{error in the solution}.
\end{enumerate}
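For reference, the relative MSE used in the last two metrics can be computed as follows (normalizing by the mean squared magnitude of the true values is our assumed convention):
\begin{verbatim}
import numpy as np

def relative_mse(pred, true):
    """Relative mean squared error of a prediction against ground truth."""
    return np.mean((pred - true) ** 2) / np.mean(true ** 2)
\end{verbatim}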
\subsection{Generalization Gap}\label{gen_gap}
In our previous paper \cite{NEGRINI2021110549} we approximated the RHS of a system of differential equations $\dot{x} = f(t,x)$ using a Lipschitz regularized deep neural network and we empirically demonstrated that adding a Lipschitz regularization term in the loss function improves the ability of the model to generalize to unseen data. The neural network used in our previous work had the same structure as the interpolation network proposed here, but the target data was not generated using an ensemble of neural networks. In fact, in our previous work we first denoised the trajectory data using spline interpolation, then we approximated the velocity vector using the numerical derivative of the splines. One limitation of our previous method was that, when large amounts of noise were present in the data or when splines could not approximate the trajectories correctly, the target data obtained with this process did not provide a reliable approximation of the velocity vector. This, in turn, resulted in poor approximations of the RHS function $f$.
In this work we show not only that the approximation of the true RHS function can be improved when using an ensemble architecture, but also that the Lipschitz regularization term in the interpolation network still improves the generalization properties of the model. We do this by comparing, for a fixed training error, the test error and generalization gap obtained by the ensemble with and without Lipschitz regularization. In the following we precisely define the generalization gap.
We indicate with $\rho$ the true data distribution, with $\mathscr{D}_k$ the training data distribution and with $\mathscr{D}_{\text{test}}$ the discrete distribution of the test data. By definition, the training data distribution $\mathscr{D}_k$ is a discrete approximation of $\rho$ which converges to $\rho$ as the number of data points $k$ tends to infinity. We write $X \sim \rho$ to indicate that the random variable $X$ has distribution $\rho$.
\medskip\\
The Generalization Gap is defined to be the difference:
$$\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2] - \mathbb{E}_{X \sim \mathscr{D}_k}[\| N_k(X) - Y(X) \|^2_2].$$
Here $N_k$ denotes the optimal function learned after minimizing the loss function $L(\theta)$ on the training data $\mathscr{D}_k$.
While the quantity $\mathbb{E}_{X \sim \mathscr{D}_k}[\| N_k(X) - Y(X) \|^2_2]$ can be explicitly evaluated using the optimal $N_k$ and the training data $\mathscr{D}_k$, the quantity $\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2]$ is unknown since we do not have access to the true data distribution $\rho$.
In practice, however, the quantity $\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2]$ can be estimated using a test set of data $\mathscr{D}_{\text{test}}$. This is a discrete data set that was not used during the training process, but that faithfully represents the true data density $\rho$, i.e.\ the discrete distribution $\mathscr{D}_{\text{test}}$ converges, as the number of test data points goes to infinity, to the true distribution $\rho$. The optimal network $N_k$ is then evaluated on the test set and the value of $\mathbb{E}_{X \sim \mathscr{D}_{\text{test}}}[\| N_k(X) - Y(X) \|^2_2]$ is taken as an estimate of $\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2]$.
The estimate of $\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2]$ through $\mathbb{E}_{X \sim \mathscr{D}_{\text{test}}}[\| N_k(X) - Y(X) \|^2_2]$ becomes more accurate as the test set grows. More precisely, the Hoeffding inequality (see \cite{abu2012learning}, Section 1.3) gives a bound on this approximation which depends on the number of test data points: if $m$ is the number of test data points, then for any $\varepsilon >0$ the Hoeffding inequality states that:
$$ \mathbb{P}(|\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2] - \mathbb{E}_{X \sim \mathscr{D}_{\text{test}}}[\| N_k(X) - Y(X) \|^2_2]|>\varepsilon)\leq 2e^{-2 \varepsilon^2 m}.$$
Justified by this inequality, in our numerical examples we use $\mathbb{E}_{X \sim \mathscr{D}_{\text{test}}}[\| N_k(X) - Y(X) \|^2_2]$ as an estimate of $\mathbb{E}_{X \sim \rho}[\| N_k(X) - Y(X) \|^2_2]$.
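For instance, inverting the bound gives the smallest test-set size that guarantees accuracy $\varepsilon$ with probability at least $1-\delta$; a quick computation (the tolerance values are illustrative):
\begin{verbatim}
import numpy as np

def hoeffding_test_size(eps, delta):
    """Smallest m such that 2 * exp(-2 * eps**2 * m) <= delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))

print(hoeffding_test_size(eps=0.05, delta=0.01))  # -> 1060
\end{verbatim}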
\section{Numerical Examples}\label{num_ex}
In this section we propose a few numerical examples of our method and comparisons with other methods for system identification. In the examples we use synthetic data with noise amounts up to 10\% for one-dimensional examples, and up to 2\% for higher dimensional examples. In this paper we only propose one and two dimensional examples, but we explicitly notice that, since our method is applied componentwise, it can be used for data of any dimension. Because of the curse of dimensionality, however, the higher the dimensionality of the problem and the noise amount, the larger the amount of data and the number of trainable parameters needed to obtain accurate results. This is the reason why for the two-dimensional examples proposed here we only add up to 2\% of noise in the data.
When using Lipschitz regularization, we considered multiple Lipschitz regularization parameters and selected them using the same heuristic as in \cite{NEGRINI2021110549}, Section 4, paragraph 2.
As explained in Section \ref{eval}, we use three different metrics to evaluate the performance of our method.
Specifically, for each example we report the training and testing MSE, the generalization gap and the estimated Lipschitz constant obtained for Lipschitz-regularized and non-regularized ensemble architectures.
Moreover, we use the MSE both for the \textit{recovery error} and for the \textit{error in the solution}, since this allows us to compare these errors with the \textit{test MSE}.\\
Finally, we compare the recovery errors and errors in the solutions obtained by our proposed methods and other methods for system identification. Specifically, we compare our results with our previous method proposed in \cite{NEGRINI2021110549}, with the multistep method proposed in \cite{raissi2018multistep}, with polynomial regression and with the method SINDy proposed in \cite{brunton2016discovering}.
The examples presented here are representative of a much larger set of experiments, in which several different types of right-hand sides $f(t,x)$, sampling time intervals and initial conditions have been used, leading to comparable results.
\subsection{Empirical Assessment of the Ensemble Algorithm}
In this section we use the three metrics mentioned above to assess the effectiveness of our ensemble algorithm. We also empirically demonstrate that adding a Lipschitz regularization term in the loss function when training the interpolation network improves generalization: this confirms the findings of our previous paper \cite{NEGRINI2021110549}.
\subsubsection{One-dimensional Example}
The first example we propose is the recovery of the ODE
\begin{equation}\label{one-dim}
\dot{x}= xe^t +\sin(x)^2 -x
\end{equation}
We generated the data by computing an approximated solution $x(t)$ of equation (\ref{one-dim}) using the \texttt{odeint} function in Python, for time steps $t$ in the interval $[0,0.8]$ with $\Delta t = 0.04$ and for 500 initial conditions uniformly sampled in the interval $[-3,3]$. The hyperparameters for our model are selected in each example by cross validation; in this example the interpolation network $N_{int}$ has $L = 8$ layers, each layer has 20 neurons, while each network $N_j$ of the target data generator ensemble has $L_j = 3$ layers with 10 neurons each. The target data generator is made of 20 networks.
In Tables \ref{tab:5xet}, \ref{tab:10xet} we report the training MSE, testing MSE, Generalization Gap and estimated Lipschitz constant when 5\% and 10\% of noise is present in the data. We generated these results similarly to our previous paper: since our goal here is to compare the performance on test data of the networks with and without regularization, we select the number of epochs during training so as to achieve the same training MSE across all regularization parameter choices and compare the corresponding Testing errors and Generalization Gaps. We report here only the results obtained for the non-regularized case and for the best regularized one when 5\% and 10\% of noise is present in the data; we already showed in our previous paper that Lipschitz regularization is especially useful in the presence of noise, so we omit the noiseless case. We can see from the tables that Lipschitz regularization improves the generalization gap by one order of magnitude for all amounts of noise, that a larger regularization parameter is needed when more noise is present in the data and that, as expected, adding Lipschitz regularization results in a smaller estimated Lipschitz constant. This confirms the findings from our previous paper that Lipschitz regularization improves generalization and avoids overfitting, especially in the presence of noise in the data.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}$\dot{x}= xe^t +\sin(x)^2 -x$, 5\% Noise\end{tabular}}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}}&
\textit{\begin{tabular}[c]{@{}c@{}} Estimated\\ Lipschitz Constant\end{tabular}}\\ \hline
\textit{0} & 0.618\% & 0.652\% & 0.034\% & 7.09 \\ \hline
\textit{\textbf{0.004}} & \textbf{ 0.618\%} & \textbf{0.619\%} & \textbf{0.001\%} & \textbf{6.33 } \\ \hline
\end{tabular}
\caption{Test error and Generalization Gap comparison for 5\% noise in the data.}
\label{tab:5xet}
\end{table}
\begin{table}[H]\label{gap10xet}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}$\dot{x}= xe^t +\sin(x)^2 -x$, 10\% Noise\end{tabular}}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}}&
\textit{\begin{tabular}[c]{@{}c@{}} Estimated \\ Lipschitz Constant\end{tabular}} \\ \hline
\textit{0} & 2.01\% &2.32\% & 0.310\% & 7.72 \\ \hline
\textit{\textbf{0.015}} & \textbf{2.01\%} & \textbf{2.03\%} & \textbf{0.030\%} & \textbf{6.38} \\ \hline
\end{tabular}
\caption{Test error and Generalization Gap comparison for 10\% noise in the data.}
\label{tab:10xet}
\end{table}
In Table \ref{tab:recSolext} we report the error in the recovery of the RHS function $f(t,x) = xe^t +\sin(x)^2 -x$ and the error in the solution of the ODE when using the interpolation network as RHS. We can see that for all amounts of noise in the data, both the reconstruction error and the error in the solution are small: they are less than 0.7\% and 0.04\%, respectively.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of\\ $\dot{x}=xe^t +\sin(x)^2 -x $\end{tabular}}} \\ \hline
\textit{0\% Noise} & {0.100\%} \\ \hline
\textit{5\% Noise} & {0.144\%} \\ \hline
\textit{10\% Noise} & {0.663\%} \\ \hline
\end{tabular}
\qquad
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the solution of\\ $\dot{x}= xe^t +\sin(x)^2 -x$\end{tabular}}} \\ \hline
\textit{0\% Noise} & {0.016\%} \\ \hline
\textit{5\% Noise} & {0.025\%} \\ \hline
\textit{10\% Noise} & {0.038\%} \\ \hline
\end{tabular}
\caption{\textbf{Left:} Relative MSE in the recovery of the RHS for up to 10\% of noise. \textbf{Right:} Relative MSE in the solution of the ODE for up to 10\% of noise}
\label{tab:recSolext}
\end{table}
The left panel of Figure \ref{fig:recSolext} shows the true RHS, the reconstructed RHS and the recovery error on the domain on which the original data was sampled, for 5\% of noise in the data. In the error plot a darker color represents a smaller error. We can see that the largest error is attained at the right boundary of the domain: by design of our architecture, the target data generator only generates target data up to the second-last time step. As a consequence, the interpolation network only has access to observations up to the second-last time step, and it is forced to predict the value of the RHS function at the last time step by extrapolation. It is then reasonable that the largest recovery error is attained at the right boundary of the domain. In the right panel of Figure \ref{fig:recSolext} we report the true solution (red line) and the solution predicted when using the interpolation network as RHS (dashed black line) for multiple initial conditions and for 5\% noise in the data. We notice that the prediction is accurate for all the initial conditions selected, but that it degrades towards the end of the time interval. This is due to the inaccurate approximation of the RHS at the right boundary of the time interval.
\begin{figure}[H]
\centering
\includegraphics[width = 3.2in,height=2.2in]{images/exp_5N.png}
\includegraphics[width = 2.2in,height=2.2in]{images/expSol_5N.png}
\caption{ \textbf{Left:} True RHS, Predicted RHS and recovery error for 5\% noise in the data. \textbf{Right:} True and Predicted solution for 5\% noise in the data}
\label{fig:recSolext}
\end{figure}
Finally, since the \textit{test error}, the \textit{error in the recovery} and the \textit{error in the solution} are all measured using MSE, it makes sense to compare such homogeneous measurements. The first thing to notice is that the testing errors are larger than the recovery errors. This shows the ability of our network to avoid overfitting and produce reliable approximations of the true RHS even when large amounts of noise are present in the data. In fact, the Test MSE is computed by comparing the value predicted by the network with the value of the corresponding \textit{noisy} observation, while the recovery error is computed by comparing the value predicted by the network with the value of the \textit{true} function $f$. The disparity between the test error and the recovery error then shows that the interpolation network provides results that successfully avoid fitting the noise in the data. The second thing to notice is the disparity between the recovery error and the solution error: specifically the solution error is on average smaller than the recovery error. This is due to the data sampling: when recovering the RHS we reconstruct the function on the full domain, while the original data was only sampled on discrete trajectories; for this reason large errors are attained in the parts of the domain where no training data was available. On the other hand the error in the solution is computed on trajectories which were originally part of the training set, so it is reasonable to expect a smaller error in this case.
\subsubsection{Simple Pendulum}
The second example we propose is the recovery of a simple pendulum equation described by the system of ODEs
\begin{align*}
\begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = -0.5x_1 \end{cases}
\end{align*}
In the notation established above, we let $f = (f_1,f_2)$ with $f_1 := x_2$ and $f_2 := -0.5x_1$.
We generated the data by computing an approximated solution $x(t)$ for the system using the \texttt{odeint} function in Python. We generate solutions for time steps $t$ in the interval [0,0.8] with $\Delta t = 0.04$ and for 1000 initial conditions uniformly sampled in the square $[0,10]\times[0,10]$.
In this example, the interpolation networks $N_1$, $N_2$ (one per component) each have 10 layers with 20 neurons per layer, while each network $N_j$ of the target data generator ensemble has $L_j = 5$ layers with 60 neurons each. The target data generator is made of 20 networks.
Because of the curse of dimensionality, in this case we need more data and more trainable parameters than in the previous example, and we only add up to 2\% of noise.
In Tables \ref{tab:pend1N}, \ref{tab:pend2N} we report the training MSE, testing MSE, Generalization Gap and estimated Lipschitz constant when 1\% and 2\% of noise is present in the data. Similarly to what was observed in the one-dimensional case, in all cases Lipschitz regularization results in a smaller generalization gap and estimated Lipschitz constant, and a stronger regularization (that is, a larger regularization parameter) is needed when more noise is present in the data.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{\textit{Simple Pendulum, 1\% Noise}} \\ \hline
\multicolumn{5}{|c|}{\textit{Component 1}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}}Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}}Estimated \\ Lipschitz Constant\end{tabular}} \\ \hline
\textit{0} & 0.547\% & 0.628\% & 0.081\% & 2.62 \\ \hline
\textit{\textbf{0.002}} & \textbf{0.547\%} & \textbf{0.584\%} & \textbf{0.037\%} & \textbf{1.46} \\ \hline
\multicolumn{5}{|c|}{\textit{Component 2}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Estimated \\ Lipschitz Constant\end{tabular}} \\ \hline
\textit{0} & 0.408\% & 0.452\% & 0.044\% & 3.77 \\ \hline
\textit{\textbf{0.001}} & \textbf{0.408\%} & \textbf{0.412\%} & \textbf{0.004\%} & \textbf{0.95} \\ \hline
\end{tabular}
\caption{Test error and Generalization Gap comparison for 1\% noise in the data, both components.}
\label{tab:pend1N}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{\textit{Simple Pendulum, 2\% Noise}} \\ \hline
\multicolumn{5}{|c|}{\textit{Component 1}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Estimated \\ Lipschitz Constant\end{tabular}} \\ \hline
\textit{0} & 0.3366\% & 0.3775\% & 0.0409\% & 3.02 \\ \hline
\textit{\textbf{0.008}} & 0.3366\textbf{\%} & \textbf{0.3374\%} & \textbf{0.0008\%} & \textbf{1.02} \\ \hline
\multicolumn{5}{|c|}{\textit{Component 2}} \\ \hline
\textit{\begin{tabular}[c]{@{}c@{}}Regularization\\ Parameter\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Training MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Testing MSE\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Generalization Gap\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Estimated \\ Lipschitz Constant\end{tabular}} \\ \hline
\textit{0} & 0.3811\% & 0.397\% & 0.016\% & 1.11 \\ \hline
\textit{\textbf{0.006}} & \textbf{0.3811\%} & \textbf{0.3814\%} & \textbf{0.0003\%} & \textbf{0.84} \\ \hline
\end{tabular}
\caption{Test error and Generalization Gap comparison for 2\% noise in the data, both components.}
\label{tab:pend2N}
\end{table}
In Table \ref{tab:recSolPend} we report the recovery error for the RHS functions $f_1(t,x) = x_2$ and $f_2(t,x) = -0.5x_1$ and the error in the solution of the ODE when using the interpolation network to approximate $f_1$ and $f_2$. Also in this case we attain good accuracy both in the recovery of the RHS and in the approximation of the solution, with errors for both components less than 0.7\% and 0.07\%, respectively. When the noise increases from 1\% to 2\% we observe an increase of one order of magnitude in both the recovery error and the error in the solution. This shows that when larger amounts of noise are present, more data is needed to accurately reconstruct the true RHS function.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of the \\ Simple Pendulum \end{tabular}}} \\ \hline
\textit{\textbf{}} & \textit{\begin{tabular}[c]{@{}c@{}} Component 1\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Component 2\end{tabular}} \\ \hline
\textit{0\% Noise} & 0.0131\% & 0.0186\% \\ \hline
\textit{1\% Noise} & 0.0468\% & 0.0597\% \\ \hline
\textit{2\% Noise} &0.533\% & 0.645\% \\ \hline
\end{tabular}
\qquad
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the solution of \\ Simple Pendulum \end{tabular}}} \\ \hline
\textit{\textbf{}} & \textit{\begin{tabular}[c]{@{}c@{}} Component 1\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Component 2\end{tabular}} \\ \hline
\textit{0\% Noise} & 0.002\% & 0.002\% \\ \hline
\textit{1\% Noise} & 0.004\% & 0.004\% \\ \hline
\textit{2\% Noise} & 0.061\% & 0.051\% \\ \hline
\end{tabular}
\caption{\textbf{Left:} Relative MSE in the recovery of the RHS for up to 2\% of noise. \textbf{Right:} Relative MSE in the solution of the system of ODEs for up to 2\% of noise.}
\label{tab:recSolPend}
\end{table}
In Figure \ref{fig:RHSPend} we show the reconstructed RHS functions $f_1, f_2$ when 1\% noise is present in the data. We note from these plots that the error in the RHS recovery is small across the whole domain for both components, showing the ability of our ensemble method to prevent overfitting. In Figure \ref{fig:solPend} we show the true and predicted solutions $x_1, x_2$ for multiple initial conditions, obtained when using the network approximations of $f_1$ and $f_2$ as RHS functions in the ODE solver. We observe that the true and predicted solutions are nearly indistinguishable from each other, with the largest disparity between the two occurring for large values of $t$.
\begin{figure}[H]
\centering
\includegraphics[width = 5in,height=2in]{images/2D1_1N.png}
\includegraphics[width = 5in,height=2in]{images/2D2_1N.png}
\caption{Reconstruction of the RHS function $f_1, f_2$ when 1\% of noise is present in the data. \textbf{Left:} True RHS functions $f_1,f_2$. \textbf{Center: } Reconstructed RHS functions. \textbf{Right:} Error for the two components.}
\label{fig:RHSPend}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 5in,height=2in]{images/2DSol_1N.png}
\caption{True and Predicted solutions when $1\%$ Noise is present in the data. \textbf{Left:} component 1 v.s. time. \textbf{Center:} component 2 v.s. time. \textbf{Right:} component 2 v.s. component 1.}
\label{fig:solPend}
\end{figure}
Finally, as in the one-dimensional case, we observe that the test errors are larger than the errors in the RHS recovery for all amounts of noise, showing that when noise is present in the data our method is able to avoid overfitting, providing reliable approximations of the true RHS function. As before, the discrepancy between the error in the RHS approximation and the error in the solution is due to the different data sampling in the two scenarios (see the previous example for a more precise explanation). We also note that the error in the RHS reconstruction and the error in the solution are closely related and influence each other: if the approximation of the RHS is poor, then the approximation of the solution will also be poor.
\subsection{Comparison with the Splines Method}\label{previousPap}
As explained in Section \ref{Intro}, in our previous paper \cite{NEGRINI2021110549} (\textit{splines method}) we approximated the RHS of a system of differential equations $\dot{x} = f(t,x)$ using a Lipschitz regularized deep neural network. The architecture used in our previous work is the same as the interpolation network proposed here; however, the target data, approximations of the velocity vector, is generated differently. Instead of using an ensemble of neural networks, in our previous work we first denoise the trajectory data using spline interpolation, then we approximate the velocity vector using the numerical derivative of the splines. We showed that this approach is very effective when the trajectories can be correctly approximated by splines. However, when this is not true, for example if trajectories are non-smooth in $t$ or if large amounts of noise are present in the data, the target data obtained from the spline derivatives does not provide a reliable approximation of the velocity vector. This, in turn, results in poor approximations of the RHS function.
We explicitly notice that the only difference between the splines method and the ensemble method is the way we preprocess the data: in the splines method we use splines to produce reliable target data, while in the ensemble method we use the target data generator. For both methods we then use a Lipschitz regularized neural network to generate the approximation for the RHS function. To fairly compare the two methods we use the same number of trainable parameters for the splines network and for the interpolation network.
In this section we show examples for which our previous method fails at providing a good approximation of the RHS function, but for which our ensemble method succeeds.
\subsubsection{Non-smooth Right-hand Side}\label{nsRHS}
We propose the recovery, through Lipschitz approximation, of
\begin{equation}\label{sign}
\dot{x}= \text{sign}(t-0.1)
\end{equation}
Both the ensemble and the splines method aim at learning a Lipschitz approximation of the right-hand side function. The splines method is based on the notion of classical solution and is doomed to fail in such a non-smooth setting. In contrast, the ensemble method is based on a weak notion of solution, using integration as in formula (\ref{weakEq}).
We generated the data by computing an approximated solution $x(t)$ of equation (\ref{sign}) using the \texttt{odeint} function in Python. We generate solutions for time steps $t$ in the interval $[0,0.2]$ with $\Delta t = 0.02$ and for 500 initial conditions uniformly sampled in the interval $[-0.1,0.1]$, for noise amounts up to 2\%. We only use up to 2\% of noise since, as explained in our previous paper, the splines model can only provide reliable target data for small noise amounts. The hyperparameters for the models in this example are as follows: each network $N_j$ of the target data generator ensemble has $L_j = 3$ layers with 10 neurons each, while the interpolation network and the network used in the splines method both have $L = 4$ layers, each with 30 neurons. The target data generator is made of 10 networks.
Figure \ref{fig:nonsmooth} shows how the low-quality spline approximation of the trajectory data (center) obtained in the pre-processing stage results in a completely wrong velocity approximation (right). Note that, while it is clear from this plot that the derivative approximation obtained using the splines is wrong, since we have access to the true difference quotients (black line), when using real-world data we have no access to the true trajectories or to the true derivatives, so it may not be as easy to detect when the splines produce low-quality target data. On the other hand, our new ensemble method is completely data-driven and overcomes this approximation difficulty through the use of the target data generator ensemble and the integral notion of solution introduced in equation (\ref{weakEq}).
\begin{figure}[H]
\centering
\includegraphics[width = 4.8in,height=1.3in]{images/trajectories_signT_old.png}
\caption{\textbf{Left:} true (black) and noisy trajectories (red). \textbf{Center:} true trajectories (black) and spline approximation (red) of the noisy trajectories. \textbf{Right:} True derivative (black) and spline derivative (red). Since the trajectories are non-smooth in $t$ the trajectory and derivative approximations obtained using splines are poor.}
\label{fig:nonsmooth}
\end{figure}
Because of the low quality of the target data obtained by spline interpolation, we can see from Table \ref{tab:nonsmooth} that the error in the recovery of the RHS for the method that uses splines is around 12\% for all amounts of noise, while for our ensemble method it is lower than 0.005\%. The superior performance of the ensemble method over the splines method for this example can also be seen from Figure \ref{fig:rec_nonsmooth}. In this figure, from left to right, we show the true RHS, the reconstructed RHS and the error in the reconstruction for the spline-based method (top row) and for the ensemble method (bottom row) when 1\% of noise is present in the data. We can see from the figure that the splines method in this case is not even able to identify the general form of the RHS function correctly, because of the poor quality of the target data. On the contrary, our proposed ensemble method, being completely data-driven and based on a weak notion of solution, is able to accurately reconstruct RHS functions like $\text{sign}(t-0.1)$ that are non-smooth in $t$.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|l|c|}
\hline
\multicolumn{4}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of\\ $\dot{x}=\text{sign}(t-0.1)$\end{tabular}}} \\ \hline
\textit{} & \multicolumn{2}{c|}{\textit{\begin{tabular}[c]{@{}c@{}} Ensemble\end{tabular}}} & \textit{\begin{tabular}[c]{@{}c@{}} Splines\end{tabular}} \\ \hline
\textit{1\% Noise} & \multicolumn{2}{c|}{0.002\%} & 12.5\% \\ \hline
\textit{2\% Noise} & \multicolumn{2}{c|}{0.004\%} & 12.9\% \\ \hline
\end{tabular}
\caption{Relative MSE in the recovery of the RHS for up to 2\% of noise for the Ensemble and Splines methods.}
\label{tab:nonsmooth}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width = 3.5in,height=1.5in]{images/Pred_old2N.png}
\includegraphics[width = 3.5in,height=1.5in]{images/Pred_Ensemble2N.png}
\caption{\textbf{Top row:} Spline method. \textbf{Bottom row:} Ensemble method. From left to right: True RHS, Reconstructed RHS and Error in the reconstruction when 1\% of noise is present in the data.}
\label{fig:rec_nonsmooth}
\end{figure}
\subsubsection{Highly Oscillatory Right-hand Side}
We compare our ensemble method with the splines method for an equation with a highly oscillatory RHS function. We propose the recovery of
\begin{equation}\label{oscillEq}
\dot{x}= \cos(50t)\,x
\end{equation}
We generate solutions of equation (\ref{oscillEq}) for time steps $t$ in the interval $[0,0.2]$ with $\Delta t = 0.02$ and for 500 initial conditions uniformly sampled in the interval $[-0.1,0.1]$, for noise amounts up to 2\%. The hyperparameters for the models are as follows: each network $N_j$ of the target data generator ensemble has $L_j = 3$ layers with 10 neurons each, while the interpolation network and the network used in the splines method both have $L = 4$ layers, each with 30 neurons. The target data generator is made of 10 networks.
In this example, even though the spline approximation of the trajectory data obtained in the pre-processing stage appears accurate (central panel of Figure \ref{fig:oscill}), the derivative approximation is not, because of its highly oscillatory nature (right panel of Figure \ref{fig:oscill}).
\begin{figure}[H]
\centering
\includegraphics[width = 5in,height=1.5in]{images/cos50x.png}
\caption{\textbf{Left:} true (black) and noisy trajectories (red). \textbf{Center:} true trajectories (black) and spline approximation (red) of the noisy trajectories. \textbf{Right:} True derivative (black) and spline derivative (red). }
\label{fig:oscill}
\end{figure}
The poor-quality target data for the splines model result in errors in the RHS reconstruction of 0.5\% and 0.6\%, respectively, for 1\% and 2\% of noise in the data. The ensemble model instead provides more accurate reconstructions, with recovery errors of 0.04\% and 0.05\%, respectively (see Table \ref{tab:oscill}). Finally, Figure \ref{fig:rec_oscill} shows, from left to right, the true RHS, the reconstructed RHS and the error in the reconstruction for the splines method (top row) and for the ensemble method (bottom row) when 1\% of noise is present in the data. We can see that while the ensemble method is able to correctly reconstruct the RHS function, the splines method is not even able to identify its oscillatory nature.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|l|c|}
\hline
\multicolumn{4}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of\\ $\dot{x}= \cos(50t)\,x$\end{tabular}}}
\\ \hline
\textit{} & \multicolumn{2}{c|}{\textit{\begin{tabular}[c]{@{}c@{}} Ensemble\end{tabular}}} & \textit{\begin{tabular}[c]{@{}c@{}} Splines\end{tabular}} \\ \hline
\textit{1\% Noise} & \multicolumn{2}{c|}{0.042\%} & 0.505\% \\ \hline
\textit{2\% Noise} & \multicolumn{2}{c|}{0.054\%} & 0.599\% \\ \hline
\end{tabular}\caption{Relative MSE in the recovery of the RHS for up to 2\% of noise for the Ensemble and Splines methods. }
\label{tab:oscill}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width = 3.5in,height=1.5in]{images/cos50t_LipReg2N.png}
\includegraphics[width = 3.5in,height=1.5in]{images/cos50t_Ensemble2N.png}
\caption{\textbf{Top row:} Spline method. \textbf{Bottom row:} Ensemble method. From left to right: True RHS, Reconstructed RHS and Error in the reconstruction when 1\% of noise is present in the data.}
\label{fig:rec_oscill}
\end{figure}
\subsection{Comparison with other methods}\label{comparisonR}
We compare our method with the methods proposed in \cite{raissi2018multistep} and in \cite{brunton2016discovering}. For completeness we also provide a comparison with the splines method \cite{NEGRINI2021110549}. The method proposed in \cite{raissi2018multistep} (\textit{multistep method}) is similar to ours: the authors place a neural network prior on the RHS function $f$, express the differential equation in integral form and use a multistep method to predict the solution at each successive time step. The main difference with our method is that we use an ensemble of neural networks as a prior for $f$ instead of a single neural network. We compare the ensemble and the multistep methods using the Euler integral form for the equation and the same number of learnable parameters for the multistep network and the interpolation network. Similar results can be obtained when using multistep schemes such as Adams–Bashforth or Adams–Moulton to represent the equation in integral form.\\
The method proposed in \cite{brunton2016discovering}, which we refer to as \textit{SINDy}, is based on a sparsity-promoting technique: sparse regression is used to determine, from a dictionary of basis functions, the terms in the dynamic governing equations which most accurately represent the data.\\
Finally, we compare with the splines method described before.\\
We report here the relative error obtained by the different methods in the approximation of $f$ as well as in the solution of the ODE. The test error and generalization gaps for the ensemble model were also computed for these examples and confirmed our previous findings: as before we noticed an improvement in the generalization gap when Lipschitz regularization was added.
\subsubsection{Non-Linear, Autonomous Right-hand-side}\label{SINDy_comp}
We generated the data by computing approximated solutions of
\begin{equation}
\dot{x}= \cos(3x) +x^3 -x
\end{equation} for time steps $t$ in the interval [0,1] with $\Delta t = 0.04$ and for 500 initial conditions uniformly sampled in the interval $[-0.7,0.9]$. The interpolation network $N$ has $L= 8$ layers, each layer has 30 neurons, while each network $N_j$ of the target data generator ensemble has $L_j = 3$ layers with 20 neurons each. The target data generator is made of 25 networks.
We compare the results obtained by the ensemble, splines and multistep methods, a polynomial regression of degree 20, and SINDy. SINDy allows the user to define custom dictionaries of functions to approximate an unknown differential equation from data. In this case we used a custom library of functions containing polynomials up to degree 10 as well as other elementary functions such as $\sin(x),\, \cos(x),\, e^x,\, \ln(x)$.
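As an indication of what such a custom dictionary looks like in practice, a sketch using the PySINDy package follows; the exact API details are assumptions to be checked against the package documentation, and \texttt{x\_train}, \texttt{dt} denote the trajectory samples and sampling step:
\begin{verbatim}
import numpy as np
import pysindy as ps

# Hypothetical custom dictionary: monomials plus elementary functions.
functions = [lambda x: x, lambda x: x ** 2, lambda x: x ** 3,
             np.sin, np.cos, np.exp]
names = [lambda x: x, lambda x: x + "^2", lambda x: x + "^3",
         lambda x: "sin(" + x + ")", lambda x: "cos(" + x + ")",
         lambda x: "exp(" + x + ")"]

library = ps.CustomLibrary(library_functions=functions,
                           function_names=names)
model = ps.SINDy(feature_library=library)
model.fit(x_train, t=dt)  # x_train: (M, d) trajectory samples
model.print()
\end{verbatim}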
\smallskip\\
In Table \ref{tab:reccomp3} we report the relative MSE in the recovery of the RHS function $f = \cos(3x) +x^3 -x$ for up to 10\% of noise when using our ensemble method, the splines method, the multistep method, a polynomial regression of degree 20 and SINDy with a custom library. We notice that when no noise is present in the data, so that overfitting is not a concern, SINDy outperforms all the other methods, followed by the polynomial regression. On the contrary, when noise is present in the data our ensemble method gives the best results. For example, when 5\% noise is present in the data our ensemble method obtains an error of 0.096\%, which is smaller than the errors obtained by all the other methods by one order of magnitude or more. This shows that the ensemble method is able to overcome the sensitivity to noise.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of\\ $\dot{x}= \cos(3x) +x^3 -x$\end{tabular}}} \\ \hline
\textit{} & \textit{\begin{tabular}[c]{@{}c@{}} Ensemble (Ours)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Splines\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Multistep (Euler)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Polynomial Regression\\ degree 20\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} SINDy\\ custom library
\end{tabular}} \\ \hline
\textit{0\% Noise} & 0.0505\% & 0.214\% & 0.116\% & 6.3e-05\% & { \textbf{5.7e-05\%}} \\ \hline
\textit{5\% Noise} & { \textbf{0.0957\%}} & 0.585\% & 1.20\% & 3.33\% & 0.762\% \\ \hline
\textit{10\% Noise} & { \textbf{0.520\%}} & 1.90\% & 3.51\% & 17.0\% & 3.36\% \\ \hline
\end{tabular}
\caption{Relative MSE in the recovery of the RHS for up to 10\% of noise for ensemble, splines and multistep methods, polynomial regression with degree 20, SINDy with custom library. }
\label{tab:reccomp3}
\end{table}
In Figure \ref{fig:reccomp3} we report the true (red line) and recovered (blue line) RHS function when 5\% of noise is present in the data, obtained using the ensemble, splines and multistep methods, the polynomial regression of degree 20 and SINDy. This figure confirms the findings shown in the previous table: the ensemble network reconstructs the true RHS most accurately, showing that our method is robust to noise. From the table above we notice that, for noisy data, the worst accuracy was always attained by the polynomial regression. In this case, even if a degree-20 polynomial has 100 times fewer parameters than our neural network, increasing the degree of the polynomial increased the error in the recovery. From this figure we can clearly see why that happens: the polynomial regression of degree 20 is already overfitting the noisy data, and the largest errors are attained at the boundaries of the domain, where the polynomial is highly oscillatory. The other three methods are able to provide approximations that capture the general form of the true RHS function, but only our ensemble method provides an accurate approximation even at the boundary of the domain.
\begin{figure}[H]
\includegraphics[height=1in, width =\linewidth]{images/comparison.PNG}
\caption{From left to right, true and recovered RHS for $5\%$ noise in the data obtained by Ensemble Method (Ours), splines method, Multistep Method, Polynomial Regression with degree 20, SINDy with custom library.}
\label{fig:reccomp3}
\end{figure}
Finally, in Table \ref{tab:solcomp3} we report the relative MSE in the solution of the ODE when using as RHS of the ODE solver the approximation given by the ensemble, splines and multistep models, the polynomial regression and SINDy. When no noise is present in the data, SINDy and polynomial regression provide the best results. When noise is present in the data, our ensemble method again gives the best results, since it is able to overcome the sensitivity to noise.
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the solution of\\ $\dot{x}= \cos(3x) +x^3 -x$\end{tabular}}} \\ \hline
\textit{} & \textit{\begin{tabular}[c]{@{}c@{}} Ensemble (Ours)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Splines\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Multistep (Euler)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Polynomial Regression\\ degree 20\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} SINDy\\ custom library
\end{tabular}} \\ \hline
\textit{0\% Noise} & 0.00313\% & 0.0289\% & 0.00342\% &0.00033\% & \textbf{1e-05\%} \\ \hline
\textit{5\% Noise} & \textbf{0.0123\%} &0.0637\% & 0.0366\% & 0.312\% & 0.965\% \\ \hline
\textit{10\% Noise} & \textbf{0.142\%} &0.954\% & 0.251\% & 3.11\% & 0.359\% \\ \hline
\end{tabular}
\caption{Relative MSE in the solution of the ODE for up to 10\% of noise for ensemble, splines and multistep methods, polynomial regression with degree 20, SINDy with custom library. }
\label{tab:solcomp3}
\end{table}
\subsubsection{Non-linear, Non-autonomous Right-hand Side}
For this example we generated the data by computing approximated solutions of
\begin{equation}
\dot{x}= t\cos(x) +t^2x
\end{equation}
for time steps $t$ in the interval [0,1.2] with $\Delta t = 0.04$ and for 500 initial conditions uniformly sampled in the interval $[-2,2]$. The interpolation network $N$ has $L= 8$ layers, each layer has 30 neurons, while each network $N_j$ of the target data generator ensemble has $L_j = 3$ layers with 20 neurons each. The target data generator is made of 30 networks.
In Table \ref{tab:recSolcomp1} we report the relative MSE in the recovery of the RHS function $f = t\cos(x) +t^2x$ (left table) and in the solution of the ODE (right table) for up to 10\% of noise when using the ensemble, multistep and splines methods. From the tables we can see that the ensemble method outperforms the other two algorithms both in the recovery of the RHS and in the approximation of the ODE solution. For example, when 10\% of noise is present in the data the recovery error for the ensemble method is 0.8\%, while for the multistep and splines methods it is 1.76\% and 1.1\%, respectively. Similarly, our ensemble method attains the best ODE solution accuracy for all amounts of noise.
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the recovery of the RHS of\\ $\dot{x}= t\cos(x) +t^2x$\end{tabular}}} \\ \hline
\textit{\textbf{}} & \textit{\begin{tabular}[c]{@{}c@{}} Ensemble (Ours)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Multistep (Euler)\end{tabular}}& \textit{\begin{tabular}[c]{@{}c@{}} Splines\end{tabular}} \\ \hline
\textit{0\% Noise} & \textbf{0.074\%} & 0.281\% & 0.116\% \\ \hline
\textit{5\% Noise} & \textbf{0.147\%} & 0.906\% & 0.440 \% \\ \hline
\textit{10\% Noise} & \textbf{0.807\%} & 1.758\% & 1.10 \% \\ \hline
\end{tabular}
\qquad
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textit{\begin{tabular}[c]{@{}c@{}}Relative MSE in the solution of\\ $\dot{x}= t\cos(x) +t^2x$\end{tabular}}} \\ \hline
\textit{} & \textit{\begin{tabular}[c]{@{}c@{}} Ensemble (Ours)\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}} Multistep (Euler)\end{tabular}}& \textit{\begin{tabular}[c]{@{}c@{}} Splines \end{tabular}} \\ \hline
\textit{0\% Noise} & \textbf{0.023\%} & 0.048\% & 0.028\% \\ \hline
\textit{5\% Noise} & \textbf{0.041\%} & 0.491\% & 0.163\% \\ \hline
\textit{10\% Noise} & \textbf{0.216\%} & 0.856\% & 0.592\% \\ \hline
\end{tabular}}
\caption{\textbf{Left:} Relative MSE in the recovery of the RHS for up to 10\% of noise for ensemble, multistep and splines method. \textbf{Right:} Relative MSE in the solution of the system of ODEs for up to 10\% of noise for ensemble, multistep and splines method.}
\label{tab:recSolcomp1}
\end{table}
Finally, in Figure \ref{fig:reccomp1} we report the true and reconstructed RHS and the error in the reconstruction when 5\% of noise is present in the data, for the multistep method (top row) and for our ensemble method (bottom row). The error plots, where a darker color represents a smaller error, show that our ensemble method attains a smaller recovery error than the multistep method over the whole domain.
We notice explicitly that, while the results above show that our method is more accurate than those in \cite{raissi2018multistep} and \cite{NEGRINI2021110549}, it is computationally more expensive. In fact, in order to generate the target data for the interpolation network, we need to train as many networks as the number of time instants at which we observe the data. On the contrary, for the multistep and splines methods only one network is trained, making these methods faster to train and less computationally expensive than ours.
\begin{figure}[H]
\centering
\includegraphics[width = 3.8in,height=3.2in]{images/Comparison5N.png}
\caption{\textbf{Top row:} Multistep method. \textbf{Bottom row:} Ensemble method. From left to right: True RHS, Reconstructed RHS and Error in the reconstruction when 5\% of noise is present in the data.}
\label{fig:reccomp1}
\end{figure}
\section{Discussion}\label{discussion}
In the previous section we proposed multiple numerical examples of our ensemble method. We used one and two dimensional synthetic data with up to 10\% of noise, but since our model is applied componentwise, it can be used for data of any dimension. To evaluate the performance of our method we used three different metrics: the test MSE, the recovery error and the error in the solution. For a precise description of these metrics see Section \ref{eval}.\\
\indent
Our first goal was to compare the performance on test data of the interpolation network with and without Lipschitz regularization and see if the findings of our previous paper still applied in this case where we use an ensemble of neural networks to generate the target data instead of splines. The examples show that indeed Lipschitz regularization improves the generalization ability of the network as well as its ability to overcome sensitivity to noise and to avoid overfitting. Specifically, in all of the examples we observed an average improvement of one order of magnitude in the generalization gap confirming the findings of our previous paper \cite{NEGRINI2021110549} and of \cite{oberman2018lipschitz}.\\
\indent
Next, we studied the recovery error and the error in the solution. We observed that, for all amounts of noise in the data, our ensemble method is able to accurately reconstruct the RHS function on the domain on which the data was sampled, with the largest errors being attained at the right boundary of the time domain where no target data was available. In all of the examples we observed that, for a fixed noise amount, the test error was larger than the recovery error. This shows the ability of our network to avoid overfitting. In fact, the Test MSE measures the difference between the network prediction and the \textit{noisy} observations, while the recovery error measures the difference between the network prediction and the \textit{true} function $f$. A smaller recovery error then shows that the interpolation network provides results that closely fit the true function and successfully avoid fitting the noise in the data.\\
\indent
We also compared the true solution of the equation with the solution obtained when using the approximated RHS in the ODE solver. Also in this case we obtain accurate results in the solution reconstruction with errors that increase close to the end point of the time interval. In all of the examples we observed that the error in the solution is on average smaller than the recovery error. This is due to the data sampling: the recovery of the RHS function is done on the full domain, even if the original data was only sampled on discrete trajectories; for this reason large errors are attained in the parts of the domain where no training data was available. On the other hand the error in the solution is computed only on trajectories which were originally part of the training set, so we obtain smaller errors in this case.\\
\indent
Finally, in Sections \ref{previousPap}, \ref{comparisonR} we compared our ensemble algorithm with the method proposed in our previous paper \cite{NEGRINI2021110549} (splines method), with the method proposed in \cite{raissi2018multistep} (multistep method), with polynomial regression and with SINDy \cite{brunton2016discovering}. In all of the proposed examples, our ensemble method provides the best recovery errors and errors in the solution. Specifically, we show that using the ensemble method we are able to reconstruct RHS functions that our previous paper could not reconstruct correctly, such as functions that are non-smooth in $t$ or with highly oscillatory terms. This is due to the fact that our ensemble method is completely data-driven and based on a weak notion of solution using integration. Our ensemble method outperforms the multistep method, polynomial regression and SINDy especially when noise is present in the data. We note, however, that while our proposed method is more effective in providing accurate RHS approximations, it is more computationally expensive than the other methods we compared to. In fact, in order to generate the target data for the interpolation network, we need to train as many networks as the number of time instants at which we observe the data. This can be very expensive, especially if the user is interested in reconstructing equations on long time intervals. On the contrary, for the multistep and spline methods only one network is trained, while for polynomial regression and SINDy only a loss minimization is needed to obtain the results, making these methods faster to train than ours. We conclude that the proposed ensemble method is the best choice, compared to the other methods described here, if the goal is to obtain very accurate reconstructions even in the presence of noise and the computational cost is not a concern.
\section{Conclusion}\label{conclusion}
In this paper we use a Lipschitz regularized ensemble of neural networks to learn governing equations from data. There are two main differences between our method and other neural network system identification methods in the literature. First, we add a Lipschitz regularization term to our loss function to force the Lipschitz constant of the interpolation network to be small. This regularization results in a smoother approximating function and better generalization properties when compared with non-regularized models, especially in the presence of noise. These results are in line with the theoretical work of Calder and Oberman \cite{oberman2018lipschitz} and with the empirical findings of our previous paper \cite{NEGRINI2021110549}. Second, we use an ensemble of neural networks instead of a single neural network for the reconstruction. It has been shown in \cite{hansen1990neural} that the generalization ability of a neural network architecture can be improved through ensembling, but while this technique has been applied in the past for multiple problems (see for instance \cite{shimshoni1998classification}, \cite{huang2000pose}, \cite{zhou2002lung}), to our knowledge this is the first time that Lipschitz regularization is added to the ensemble to overcome the sensitivity to noise in a system identification problem.\\
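To illustrate the first of these differences, the sketch below shows one simple way to include a Lipschitz penalty in the training loss; the Monte-Carlo finite-difference estimator over sampled input pairs is an illustrative choice, not necessarily the estimator used in our experiments:
\begin{verbatim}
import torch

def lipschitz_estimate(net, x, n_pairs=256):
    # crude Monte-Carlo lower bound on the Lipschitz constant:
    # max finite-difference ratio over randomly sampled input pairs;
    # assumes x of shape (N, d) and net outputs of shape (N, m)
    i = torch.randint(0, x.shape[0], (n_pairs,))
    j = torch.randint(0, x.shape[0], (n_pairs,))
    num = (net(x[i]) - net(x[j])).norm(dim=1)
    den = (x[i] - x[j]).norm(dim=1) + 1e-12
    return (num / den).max()

def regularized_loss(net, x, y, lam=1e-3):
    fit = ((net(x) - y) ** 2).mean()                 # data-fitting term
    return fit + lam * lipschitz_estimate(net, x)    # Lipschitz penalty
\end{verbatim}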
\indent
The results shown in the examples, which are representative of a larger testing activity with several different types of right-hand sides $f(x,t)$, show multiple strengths of our method:
\begin{itemize}
\item In all of the examples when noise is present in the data, the Lipschitz regularization term in the loss function results in an improvement of one order of magnitude in the generalization gap, when compared to the non-regularized architecture.
\item The ensemble architecture is robust to noise and is able to avoid overfitting even when large amounts of noise are present in the data (up to 10\%). The ability of the ensemble to avoid overfitting is numerically confirmed by the fact that test errors are larger than recovery errors in all of the examples. In fact, while the test error measures the distance of the ensemble prediction from the noisy RHS data, the recovery error measures the distance to the true RHS data, so that the disparity between test and recovery errors means that the ensemble is able to avoid fitting the noise in the data. This robustness to noise is especially an advantage over methods that do not use ensembling such as \cite{raissi2018multistep}, as can be seen from the examples in Section \ref{comparisonR}.
\item The ensemble architecture is completely data-driven and it is based on a weak notion of solution using integration (see formula \ref{weakEq}). For this reason, it can be used to reconstruct non-smooth RHS functions (see Example \ref{nsRHS}). This is especially an advantage over models that rely on the notion of classical solution like the Splines Method \cite{NEGRINI2021110549}.
\item Since neural networks are universal approximators, we do not need any prior knowledge on the ODE system, in contrast with sparse regression approaches in which a library of candidate functions has to be defined. As shown in Section \ref{SINDy_comp}, direct comparison with polynomial regression and SINDy shows that our model is a better fit when learning from noisy data.
\item Since our method is applied componentwise, it can be used to identify systems of any dimension, which makes it a valuable approach for high-dimensional real-world problems. However, because of the curse of dimensionality, the higher the problem dimension, the larger the amount of data and trainable parameters needed to obtain accurate predictions.
\end{itemize}
We explicitly note that, while our ensemble model is able to reconstruct the RHS function $f$ with high accuracy even for very noisy data, it is computationally more expensive than the other methods we compared with (splines and multistep methods, polynomial regression and SINDy). This is because of the ensemble nature of the algorithm: in order to generate the target data for the interpolation network, we need to train as many networks as the number of time instants at which we observe the data. Consequently, this algorithm is a good choice for applications where high reconstruction accuracy is needed but the computational cost is not a concern.
Future research directions include applying our method to real-world data and generalizing it to learn partial differential equations. In contrast with the most common choices of regularization terms found in the machine learning literature, in this work we impose a regularization on the statistical and geometric mapping properties of the network, instead of on its parameters. The Lipschitz regularization, however, results in an implicit constraint on the network parameters, since the minimization of the loss function is done with respect to such parameters. An interesting future direction is to theoretically study the Lipschitz regularization term, how it relates to the size of the weights of the network, in line with Bartlett's work on generalization \cite{bartlett1997valid}, and express it as an explicit constraint on the network's learnable parameters. In this way one could avoid approximating the Lipschitz constant of the network numerically. This would considerably decrease the computational cost of the algorithm which, as explained, is one limitation of the proposed method.
\section{Acknowledgements}
Luca Capogna is partially supported by NSF DMS 1955992 and Simons Collaboration Grant for Mathematicians 585688.\\
Giovanna Citti is partially supported by the EU Horizon 2020 project GHAIA, MCSA RISE project GA No 777822.\\
Results in this paper were obtained in part using a high-performance computing system acquired through NSF MRI grant DMS-1337943 to WPI.\\
\bigskip
\bibliographystyle{plain}
\small
\section{Introduction}
Neural networks have become ubiquitous in natural language processing.
For the word segmentation task, there has been a growing body of work exploring novel neural network architectures for learning useful representations and thus making better segmentation predictions \cite{P14-1028,ma-hinrichs:2015:ACL-IJCNLP,DBLP:conf/acl/ZhangZF16, DBLP:journals/corr/LiuCGQL16, cai2017fast, DBLP:journals/corr/abs-1711-04411}.
We show that properly training and tuning a relatively simple architecture with a minimal feature set and greedy search achieves state-of-the-art accuracies and beats more complex neural-network architectures.
Specifically, the model itself is a straightforward stacked bidirectional LSTM (Figure \ref{fig_models}) with just two input features at each position (character and bigram).
We use three widely recognized techniques to get the most performance out of the model: pre-trained embeddings \cite{yang2017neural,zhou2017word}, dropout \cite{srivastava2014dropout}, and hyperparameter tuning \cite{weiss2015structured,melis2018on}.
These results have important ramifications for further model development. Unless best practices are followed, it is difficult to compare the impact of modeling decisions, as differences between models are masked by choice of hyperparameters or initialization.
In addition to the simpler model we present, we also aim to provide useful guidance for future research by examining the errors that the model makes.
About a third of the errors are due to annotation inconsistency, and these can only be eliminated with manual annotation.
The other two thirds are those due to out-of-vocabulary words and those requiring semantic clues not present in the training data.
Some of these errors will be almost impossible to solve with different model architectures.
For example, while 抽象概念 (abstract concept) appears as one word at test time, any model trained only on the MSR dataset will segment it as two words: 抽象 (abstract) and 概念 (concept), which are seen in the training set 28 and 90 times, respectively, and never together.
Thus, we expect that iterating on model architectures will give diminishing returns, while leveraging external resources such as unlabeled data or lexicons is a more promising direction.
In sum, this work contributes two significant pieces of evidence to guide further development in Chinese word segmentation. First, comparing different model architectures requires careful tuning and application of best practices in order to obtain rigorous comparisons. Second, iterating on neural architectures may be insufficient to solve the remaining classes of segmentation errors without further efforts in data collection.
\section{Model}
\begin{figure}
\small
\centering
\scalebox{0.8}{
\begin{tikzpicture}[node distance=0.9cm,bend angle=45,auto]
\tikzstyle{char}=[circle,thick,draw=blue!75,fill=blue!20,minimum size=4mm]
\tikzstyle{LSTM}=[FF,draw=red!75,fill=red!20]
\tikzstyle{FF}=[rectangle,thick,draw=black!75, fill=black!20,minimum size=4mm]
\begin{scope}
\node[](char)[text width=1.2cm] {characters};
\node[](lstmb)[above of=char,text width=1.2cm] {backward LSTM};
\node[](lstm)[above of=lstmb,text width=1.2cm] {forward LSTM};
\node[](pred)[above of=lstm,text width=1.2cm] {softmax};
\foreach \x in {0,...,2}
{
\node[LSTM,xshift=\x * 1cm + 0.5cm](lb\x)[right of=lstmb] {\ };
\node[LSTM,xshift=\x * 1cm + 0.5cm](l\x)[right of=lstm] {\ };
\node[char,xshift=\x * 1cm + 0.5cm](c\x)[right of=char] {};
\node[xshift=\x * 1cm + 0.5cm](p\x)[right of=pred] {BIES};
\draw[->,thick] (c\x) edge (lb\x);
\draw[->,thick] (l\x) edge (p\x);
\draw[->, thick, bend right] (lb\x) edge (p\x);
\draw[->,thick, bend left] (c\x) edge (l\x);
}
\draw[->,thick] (lb2) edge (lb1) (lb1) edge (lb0);
\draw[<-,thick] (l1) edge (l0) (l2) edge (l1);
\node[yshift=0.3cm](a)[below of=c1] {(a)};
\end{scope}
\begin{scope}[shift={(5cm,0)}]
\node[](char)[text width=1.2cm] {characters};
\node[](lstmb)[above of=char,text width=1.2cm] {backward LSTM};
\node[](lstm)[above of=lstmb,text width=1.2cm] {forward LSTM};
\node[](pred)[above of=lstm,text width=1.2cm] {softmax};
\foreach \x in {0,...,2}
{
\node[LSTM,xshift=\x * 1cm + 0.5cm](lb\x)[right of=lstmb] {\ };
\node[LSTM,xshift=\x * 1cm + 0.5cm](l\x)[right of=lstm] {\ };
\node[char,xshift=\x * 1cm + 0.5cm](c\x)[right of=char] {};
\node[xshift=\x * 1cm + 0.5cm](p\x)[right of=pred] {BIES};
\draw[->,thick] (c\x) edge (lb\x) (lb\x) edge (l\x) (l\x) edge (p\x);
}
\draw[->,thick] (lb1) edge (lb0) (lb2) edge (lb1);
\draw[<-,thick] (l1) edge (l0) (l2) edge (l1);
\node[yshift=0.3cm](b)[below of=c1] {(b)};
\end{scope}
\end{tikzpicture}
}
\vspace*{-2.5em}
\caption{
\label{fig_models}
Bi-LSTM models: (a) non-stacking, (b) stacking. Blue circles are input (char and char bigram) embeddings. Red squares are LSTM cells. BIES is a 4-way softmax.
}
\end{figure}
Our model is relatively simple. Our approach uses long short-term memory (LSTM) network architectures, since previous work has found success with these models \cite[][inter alia]{chen2015long,zhou2017word}.
We use two features: unigrams and bigrams of characters at each position. These features are embedded, concatenated, and fed into a stacked bidirectional LSTM (see Figure~\ref{fig_models}) with two total layers of 256 hidden units each.
The softmax layer of the bi-LSTM predicts Begin/Inside/End/Single tags encoding the relationship from characters to segmented words.
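For concreteness, the model can be sketched in a few lines of PyTorch; the sketch below mirrors the stacked variant of Figure~\ref{fig_models}(b), with illustrative constructor arguments and without the batching and padding details:
\begin{verbatim}
import torch
import torch.nn as nn

class StackedBiLSTM(nn.Module):
    # minimal sketch of Figure 1(b): char + bigram embeddings,
    # a backward LSTM, a second LSTM stacked on top, BIES softmax
    def __init__(self, n_chars, n_bigrams, emb=64, hidden=256):
        super().__init__()
        self.char_emb   = nn.Embedding(n_chars, emb)
        self.bigram_emb = nn.Embedding(n_bigrams, emb)
        self.backward_lstm = nn.LSTM(2 * emb, hidden, batch_first=True)
        self.forward_lstm  = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 4)   # B/I/E/S logits

    def forward(self, chars, bigrams):
        x = torch.cat([self.char_emb(chars), self.bigram_emb(bigrams)], -1)
        h, _ = self.backward_lstm(x.flip(1))   # right-to-left pass
        h, _ = self.forward_lstm(h.flip(1))    # left-to-right on top
        return self.out(h)                     # per-position tag scores
\end{verbatim}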
In the next sections we describe the best practices we used to achieve state-of-the-art performance from this architecture. Note that all of these practices and techniques are derived from related work, which we describe.
\paragraph{Recurrent Dropout.}
Contrary to the recommendation of \newcite{zaremba2014recurrent}, we apply dropout to the recurrent connections of our LSTMs, and we see similar improvements when following the recipe of \newcite{gal2015theoretically} or simply sampling a new dropout mask at every recurrent connection.
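The variant that samples a new mask at every recurrent connection can be sketched with an explicit cell loop (an illustrative Python/PyTorch sketch, not the exact training code):
\begin{verbatim}
import torch
import torch.nn as nn

def run_with_recurrent_dropout(cell, x, p=0.3, training=True):
    # x: (seq_len, batch, input_dim); a fresh Bernoulli mask is drawn
    # for the recurrent hidden state at every time step
    h = x.new_zeros(x.shape[1], cell.hidden_size)
    c = x.new_zeros(x.shape[1], cell.hidden_size)
    outputs = []
    for t in range(x.shape[0]):
        if training:
            mask = torch.bernoulli(torch.full_like(h, 1 - p)) / (1 - p)
            h = h * mask              # drop recurrent connections
        h, c = cell(x[t], (h, c))
        outputs.append(h)
    return torch.stack(outputs)

# usage: y = run_with_recurrent_dropout(nn.LSTMCell(128, 256),
#                                       torch.randn(20, 8, 128))
\end{verbatim}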
\paragraph{Hyperparameters.}
We use the momentum-based averaged SGD procedure from \cite{weiss2015structured} to train the model, with a few additions.
We normalize each gradient to be at most unit norm and use asynchronous SGD updates to speed up training.
For each configuration we evaluated, we trained over a manually tuned hyperparameter grid, varying the initial learning rate, learning rate schedule, and input and recurrent dropout rates.
We fixed the momentum parameter $\mu = 0.95$.
The full list of hyperparameters is given in Table~\ref{tab_grids}.
We show the impact of this tuning procedure in Table~\ref{tab_hyperparameters}, which we found was crucial to measure the best performance of the simple architecture.
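Enumerating this grid is straightforward, as in the following sketch, where \texttt{train\_and\_eval} is a hypothetical hook that trains one configuration and returns its development accuracy:
\begin{verbatim}
from itertools import product

# the manually tuned hyperparameter grid of Table 5 in this paper
grid = {
    "bigram_dim":  [16, 32, 64],
    "lr":          [0.04, 0.035, 0.03],
    "decay_steps": [32000, 48000, 64000],
    "input_drop":  [0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6],
    "lstm_drop":   [0.1, 0.2, 0.3, 0.4],
}
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
# each config is trained (asynchronously in practice) and the winner is
# chosen by development-set accuracy, e.g.:
# best = max(configs, key=train_and_eval)   # train_and_eval: hypothetical
\end{verbatim}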
\paragraph{Pretrained Embeddings.} Pre-training embedding matrices
from automatically gathered data is a powerful technique that has been
applied to many NLP problems for several years
(e.g. \citet{collobert2011natural,mikolov2013efficient}). We pretrain
the character embeddings and character-bigram embeddings using
wang2vec\footnote{\url{https://github.com/wlin12/wang2vec}}
\cite{ling2015two}, which modifies word2vec by incorporating
character/bigram order information during training.
Note that this idea has been used in segmentation previously by \newcite{zhou2017word}, but they also augment the contexts by adding the predictions of a baseline segmenter as an additional context.
We experimented with both treating the pretrained embeddings as constants or fine-tuning on the particular datasets.
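In PyTorch-style pseudocode the two variants differ only in a freeze flag; loading the wang2vec vectors into a tensor is an assumed preprocessing step:
\begin{verbatim}
import torch
import torch.nn as nn

# `vectors` stands in for the matrix produced offline by wang2vec;
# the real loading code is an assumed preprocessing step
vectors = torch.randn(5000, 64)

fixed     = nn.Embedding.from_pretrained(vectors, freeze=True)   # constants
finetuned = nn.Embedding.from_pretrained(vectors, freeze=False)  # updated
\end{verbatim}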
\begin{table}
\centering
\scalebox{0.92}{
\begin{tabular}{l|rrr}
&{} Train & Development & Test \\
\hline
AS & 4,903,564 & 546,017 & 122,610 \\
CITYU & 1,309,208 & 146,422 & 40,936 \\ \hline
MSR & 2,132,480 & 235,911 & 106,873 \\
CTB6 & 641,368 & 59,954 & 81,578 \\
CTB7 & 950,138 & 59,954 & 81,578 \\
PKU & 994,822 & 115,125 & 104,372 \\
UD & 98,608 & 12,663 & 12,012 \\
\hline
\end{tabular}
}
\caption{
\label{statistics}
Statistics of training, development and test set.
}
\end{table}
\begin{table*}
\centering
\scalebox{0.92}{
\begin{tabular}{l|ccccccc}
&{} AS & CITYU & CTB6 & CTB7 & MSR & PKU & UD \\
\hline
\newcite{DBLP:journals/corr/LiuCGQL16} & --- & --- & 95.9 & --- & 97.3 & \bf96.8$^\dagger$ & --- \\
\newcite{yang2017neural} & 95.7 & 96.9 & 96.2 & --- & 97.5 & 96.3 & --- \\
\newcite{zhou2017word} & --- & --- & 96.2 & --- & 97.8 & 96.0 & --- \\
\newcite{cai2017fast} & --- & 95.6 & --- & --- & 97.1 & 95.8 & --- \\
\newcite{kurita2017neural} & --- & --- & --- & 96.2 & --- & --- & --- \\
\newcite{chen2017adversarial}& 94.6 & 95.6 & 96.2 & --- & 96.0 & 94.3 & --- \\
\newcite{K17-3015} & --- & --- & --- & --- & --- & --- & 94.6 \\
\newcite{DBLP:journals/corr/abs-1711-04411} & --- & --- & --- & --- & 98.0 & 96.5 & --- \\
\hline
Ours (fix embedding) & \bf96.2 & \bf97.2 & \bf96.7 & \bf96.6 & 97.4 & 96.1 & \bf96.9 \\
Ours (update embedding) & 96.0 & 96.8 & 96.3 & 96.0 & \bf98.1 & 96.1 & 96.0 \\
\end{tabular}
}
\caption{
\label{tab_state_of_the_art}
The state of the art performance on different datasets. For \newcite{kurita2017neural} and \newcite{chen2017adversarial} we report their best systems (segpos+dep and Model-I-ADV respectively).
$\dagger$Not directly comparable to the rest of the table due to the usage of an external dictionary. Our bolded results are significantly better (p $<$ 0.05 bootstrap resampling) except on MSR.
}
\end{table*}
\begin{table*}
\centering
\scalebox{0.92}{
\begin{tabular}{l|ccccccc}
&{} AS & CITYU & CTB6 & CTB7 & MSR & PKU & UD \\
\hline
\newcite{DBLP:journals/corr/LiuCGQL16} & --- & --- & 94.6 & --- & 94.8 & 94.9 & --- \\
\newcite{zhou2017word} & --- & --- & 94.9 & --- & 97.2 & 95.0 & --- \\
\newcite{cai2017fast} & 95.2 & 95.4 & --- & --- & 97.0 & \bf95.4 & --- \\
\newcite{DBLP:journals/corr/abs-1711-04411} & --- & --- & --- & --- & 96.7 & 94.7 & --- \\
\hline
Ours & \bf95.5 & \bf95.7 & \bf95.5 & \bf95.6 & \bf97.5 & \bf95.4 & \bf94.6 \\
\end{tabular}
}
\caption{
\label{close_test}
Performance of recent neural network based models without using pretrained embeddings. Our model's wins are statistically significantly better than prior work (p $<$ 0.05 bootstrap resampling), except on PKU.
}
\end{table*}
\paragraph{Other Related Work.}
Recently, a number of different neural network based models have been proposed for the word segmentation task. One common approach is to learn word representations through the characters of that word. For example, \newcite{DBLP:journals/corr/LiuCGQL16} run a bi-directional LSTM over the characters of the word candidate and then concatenate the bi-directional LSTM outputs at both end points. \newcite{cai2017fast} adopt a gating mechanism to control the relative importance of each character in the word candidate.
Besides modeling word representations directly, sequence labeling is another popular approach. For instance, \newcite{D13-1061} and \newcite{P14-1028} predict the label of a character based on the context of a fixed-size local window. \newcite{chen2015long} extend the approach by using LSTMs to capture potential long-distance information. Both \newcite{chen2015long} and \newcite{P14-1028} use a transition matrix to model the interaction between adjacent tags. \newcite{zhou2017word} conduct a rigorous comparison and show that such a transition matrix rarely improves accuracy. Our model is similar to that of \newcite{zhou2017word}, except that we stack the two directional LSTMs instead of running them in parallel (Figure~\ref{fig_models}(b)), which improves accuracy as shown in a later section.
Our model is also trained via a simple maximum likelihood objective. In contrast, other state-of-the-art models use a non-greedy approach to training and inference, e.g. \newcite{yang2017neural} and \citet{zhang2016transition}.
\begin{table*}
\centering
\begin{tabular}{lr}
\toprule
Parameter & Values \\ \hline
Char embedding size & [64] \\
Bigram embedding size & [16, 32, 64] \\
Learning rate & [0.04, 0.035, 0.03] \\
Decay steps & [32K, 48K, 64K] \\
Input dropout rate & [0.15, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6] \\
LSTM dropout rate & [0.1, 0.2, 0.3, 0.4] \\
\bottomrule
\end{tabular}
\caption{Hyperparameter settings.}
\label{tab_grids}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{lrrrrrrr}
& AS & CITYU & CTB6 & CTB7 & MSR & PKU & UD \\ \hline
OOV \% & 4.2 & 7.5 & 5.6 & 5.0 & 2.7 & 3.6 & 12.4 \\ \hline
Recall \% (random embedding) & 65.7 & 75.1 & 73.4 & 74.1 & 71.0 & 66.0 & 81.1 \\
Recall \% (pretrain embedding) & 70.7 & 87.5 & 85.4 & 85.6 & 80.0 & 78.8 & 89.7 \\
\end{tabular}
\caption{
\label{test_set_oov} Test set OOV rate, together with OOV recall achieved with randomly initialized and pretrained embeddings, respectively.
}
\end{table*}
\begin{table*}
\centering
\scalebox{0.92}{
\begin{tabular}{r|rrrrrrr|r}
System & AS & CITYU & CTB6 & CTB7 & MSR & PKU & UD & Average \\
\hline
This work & 98.03 & 98.22 & 97.06 & 97.07 & 98.48 & 97.95 & 97.00 & 97.69 \\
-LSTM dropout & +0.03 & -0.33 & -0.31 & -0.24 & +0.04 & -0.29 & -0.76 & -0.35 \\
-stacked bi-LSTM & -0.13 & -0.20 & -0.15 & -0.14 & -0.17 & -0.17 & -0.39 & -0.27 \\
-pretrain & -0.13 & -0.23 & -0.94 & -0.74 & -0.45 & -0.27 & -2.73 & -0.78 \\
\end{tabular}
}
\caption{
\label{tab_ablation}
Ablation results on development data. Top row: absolute performance of our system. Other rows: difference relative to the top row.
}
\end{table*}
\section{Experiments}
\paragraph{Data.} We conduct experiments on the following datasets:
Chinese Penn Treebank 6.0 (CTB6) with data split according the official document;
Chinese Penn Treebank 7.0 (CTB7) with recommended data split \cite{wang-EtAl:2011:IJCNLP-2011};
Chinese Universal Treebank (UD) from the Conll2017 shared task \cite{zeman-EtAl:2017:K17-3} with the official data split;
Datasets from the SIGHAN 2005 bake-off task (Emerson, 2005).
Table~\ref{statistics} shows statistics of each data set.
For each of the SIGHAN 2005 datasets, we randomly select $10\%$ of the training data as a development set. We convert all digits, punctuation and Latin letters to half-width, to handle the full/half-width mismatch between the training and test sets.
We train and evaluate a model for each of the datasets, rather than training one model on the union of all datasets. Following \newcite{yang2017neural}, we convert AS and CITYU to simplified Chinese.
\subsection{Main Results}
Table~\ref{tab_state_of_the_art} contains the state-of-the-art results from recent neural network based models, together with the performance of our model.
Table ~\ref{close_test} contains results achieved without using any pretrained embeddings.
Our model achieves the best results among NN models on 6/7 datasets.
In addition, while the majority of datasets work best when the pretrained embedding matrix is treated as constant, the MSR dataset is an outlier: fine-tuning embeddings yields a very large improvement.
We observe that the likely cause is a low OOV rate in the MSR evaluation set compared to other datasets.
\subsection{Ablation Experiments}
\label{sec_ablation}
To see which decisions had the greatest impact on the result, we
performed ablation experiments on the holdout sets of the different
corpora. Starting with our proposed system\footnote{Based on development set accuracy, we keep the pretrained embedding fixed for all datasets except MSR and AS. }, we remove one
decision, perform hyperparameter tuning, and see the change in
performance. The results are summarized in Table~\ref{tab_ablation}.
Negative numbers in Table~\ref{tab_ablation} correspond to decreases in performance for the ablated system. Note that although each of the components helps performance on average, there are cases where we observe no impact. For example, using recurrent dropout on AS and MSR barely affects accuracy.
We next investigate how important the hyperparameter tuning is to this
ablation. In the main result, we tuned each model separately for each
dataset. What if instead, each model used a single hyperparameter
configuration for all datasets? In Table \ref{tab_hyperparameters}, we
compare fully tuned models with those that share hyperparameter
configurations across dataset for three settings of the model. We can
see that hyperparameter tuning consistently improves model accuracy across all settings.
\begin{table}
\centering
\begin{tabular}{l|ccc}
System & Fully tuned & Avg \\% & Mismatch \\
\hline
This work & 97.69 & 97.49 \\%& 96.84\\
-Stacked & 97.41 & 97.16 \\%& 95.91 \\
-Pretraining & 96.90 & 96.81 \\%& 96.11 \\
\end{tabular}
\caption{
\label{tab_hyperparameters}
Hyperparameter ablation experiments. ``Fully tuned'' indicates per-system tuning for each dataset. ``Avg'' is the best setting when averaging across datasets.
}
\end{table}
\subsection{Error Analysis}
In order to guide future research on Chinese word segmentation, it is
important to understand the types of errors that the system is making.
To get a sense of this, we randomly selected 54 and 50 errors from the CTB-6 and MSR test sets, respectively,
and manually analyzed them.
The model learns to remember words it has seen, especially high frequency words. It also learns the notion of prefixes/suffixes, which aids in predicting OOV words, a major source of segmentation errors \cite{HUANGChang-ning:8}.
Using pretrained embeddings enables the model to expand the set of prefixes/suffixes through their nearest neighbors in the embedding space, and therefore further improve OOV recall (on average, using pretrained embeddings contributes a $10\%$ improvement in OOV recall; see Table~\ref{test_set_oov} for more details).
Nevertheless, OOV words remain challenging, especially those that can be divided into words frequently seen in the training data, and most (37 out of 43) of the oversegmentation errors are due to this.
For instance, the model incorrectly segmented the OOV word 抽象概念 (abstract concept) as 抽象 (abstract) and 概念 (concept); 抽象 and 概念 are seen in the training set 28 and 90 times, respectively.
Unless high coverage dictionaries are used, it is difficult for any supervised model to learn not to follow this trend in the training data.
In addition, the model sometimes struggles when a prefix/suffix can also be a word by itself.
For instance, \hichar{权} (right/power) frequently serves as a suffix, such as 管理\hichar{权} (right of management), 立法\hichar{权} (right of legislation) and 终审\hichar{权} (right of final judgment).
When the model encounters 下放 (delegate/transfer) \hichar{权}(power), it incorrectly merges them together.
Similarly, the model segments \hichar{居} (in/at) + 中 (middle) as \hichar{居}中 (in the middle), since the training data contains words such as \hichar{居}首 (in the first place) and \hichar{居}次 (in the second place). This example also hints at the ambiguity of word delineation in Chinese, and explains the difficulty in keeping annotations consistent.
As another example, \hichar{县} is often attached to another proper noun to become a new word, e.g., 高雄 (Kaohsiung) + \hichar{县} becomes 高雄县 (county of Kaohsiung), 新竹(Hsinchu) + \hichar{县} becomes 新竹\hichar{县} (county of Hsinchu). When seeing
银行\hichar{县}支行 (bank's county branch), which should be
银行 (bank) + \hichar{县}支行 (county branch), the model outputs 银行\hichar{县} + 支行 (i.e. a county named bank).
Fixing the above errors requires semantic level knowledge such as `Bank' (银行) is unlikely to be the name of a county (\hichar{县}), and likewise, transfer power (下放\hichar{权}) is not a type of right (\hichar{权}).
Previous work \cite{HUANGChang-ning:8} also pointed out that OOV words are a major obstacle to achieving high segmentation accuracy. They also mentioned that machine learning approaches together with character-based features are more promising in solving the OOV problem than rule-based methods. Our analysis indicates that learning from the training corpus alone can hardly resolve the errors mentioned above. Exploring other sources of knowledge is essential for further improvement.
One potential way to acquire such knowledge is to use a language model that is trained on a large scale corpus \cite{DBLP:journals/corr/abs-1802-05365}. We leave this to future investigation.
Unfortunately, a third (34 out of 104) of the errors we have looked at were due to annotation inconsistency.
For example, 建筑系 (Department of Architecture) is once annotated as 建筑 (Architecture) + 系 (Department) and once as 建筑系 under exactly the same context 建筑系教授喻肇青 (Zhaoqing Yu, professor of Architecture).
高新技术 (advanced technology) is annotated as 高 (advanced) + 新 (new) + 技术 (technology) for 37 times, and is annotated as 高新 (advanced and new) + 技术 (technology) for 19 times.
In order to augment the manual verification we performed above, we also wrote a script to automatically find inconsistent annotations in the data. Since this is an automatic script, it cannot distinguish between genuine ambiguity and inconsistent annotations. The heuristic we use is the following: for all word bigrams in the training data, we see if they also occur as single words or word trigrams. We ignore the dominant analysis and count the number of occurrences of the less frequent analyses and report this number as a fraction of the number of tokens in the corpus.
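A minimal version of this script can be sketched as follows (Python; for brevity the trigram case is omitted, so the numbers it produces would differ slightly from those reported below):
\begin{verbatim}
from collections import Counter

def inconsistency_rate(sentences):
    # sentences: list of segmented sentences, each a list of word strings;
    # for every word bigram, check whether the same character span also
    # occurs as a single word (the trigram case is omitted for brevity)
    words   = Counter(w for s in sentences for w in s)
    bigrams = Counter(a + b for s in sentences for a, b in zip(s, s[1:]))
    minority = 0
    for span, split_count in bigrams.items():
        joined_count = words.get(span, 0)
        if joined_count:                      # both analyses occur
            minority += min(split_count, joined_count)
    return 100.0 * minority / sum(words.values())
\end{verbatim}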
Table~\ref{tab_auto_inconsistency} shows the results of running the script. We see that the AS corpus is the least consistent (according to this heuristic) while MSR is the most consistent. This might explain why both our system and prior work have relatively low performance on AS even though this has the largest training set. By contrast results are much stronger on MSR, and this might be in part because it is more consistently annotated.
The ordering of corpora by inconsistency roughly mirrors their ordering by accuracy.
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
{} & tokens & inconsistency \% \\
corpus & & \\
\midrule
AS & 4,903,564 & 1.31 \\
CITYU & 1,309,208 & 0.62 \\
CTB6 & 641,368 & 1.27 \\
CTB7 & 950,138 & 1.64 \\
MSR & 2,132,480 & 0.28 \\
PKU & 994,822 & 0.53 \\
UD & 98,608 & 0.46 \\
\bottomrule
\end{tabular}
\caption{
\label{tab_auto_inconsistency} Automatically computed inconsistency in the corpus training data. See text for methodology.
}
\end{table}
\end{CJK}
\section{Conclusion}
In this work, we showed that further research in Chinese segmentation must overcome two key challenges: (1) deep learning architectures must be rigorously tuned and tested before they are compared, and (2) resources beyond the training corpus should be explored for further performance gains.
\newpage
\section{Introduction}
In this article we consider self-similar solutions to Smoluchowski's mean-field model for coagulation. The model applies to a system of particles in which
at any time two particles can coagulate to form a larger particle. If $\phi(\xi,t)$ denotes the number density of particles of size $\xi>0$ at time $t$, then $\phi$
satisfies the following nonlocal integro-differential equation.
\begin{equation}\label{smolu1}
\partial_t \phi(\xi,t) = \frac 1 2 \int_0^\xi K(\xi{-}\eta,\eta) \phi(\xi{-}\eta,t) \phi(\eta,t)\,d\eta - \phi(\xi,t) \int_0^{\infty} K(\xi,\eta) \phi(\eta,t)\,d\eta =: Q[\phi](\xi)\,.
\end{equation}
Here $K(\xi,\eta)$ denotes the so-called rate kernel, a nonnegative and symmetric function,
that describes the rate at which particles of size $\xi$ and $\eta$ coagulate. The kernel $K$ depends on the
microscopic details of the coagulation process and many different type of kernels can be found in the applied literature (see for example \cite{Aldous99,Drake72} and the references therein).
Most notable is Smoluchowski's kernel
\[
K(\xi,\eta) = K_0 \Big( \xi^{1/3} + \eta^{1/3}\Big) \Big( \xi^{-1/3} + \eta^{-1/3}\Big)\,,
\]
that has been derived in Smoluchowski's original paper \cite{Smolu16} to describe coagulation in a homogeneous colloidal gold solution. The main assumptions in the derivation
are that particles are spherical, diffuse by Brownian motion when they are well-separated and coagulate quickly when two particles become close. Then $
\Big( \xi^{1/3} + \eta^{1/3}\Big)$ is proportional to the sum of the diameters of two particles of volume $\xi$ and $\eta$ respectively, whereas $\Big( \xi^{-1/3} + \eta^{-1/3}\Big)$ is, due
to Einstein's formula, proportional to the sum of their diffusion constants.
An important aspect of solutions to \eqref{smolu1} is mass conservation. Since mass is neither created nor destroyed on the microscopic level, one would expect that the same is
true on the macroscopic level, that is, solutions of \eqref{smolu1} should satisfy
\begin{equation}\label{massconservation}
\int_0^{\infty} \xi \phi(\xi,t)\,d\xi =\int_0^{\infty} \xi \phi(\xi,0)\,d\xi \qquad \mbox{ for all } t >0\,.
\end{equation}
In fact, integrating \eqref{smolu1} and exchanging the order of integration, one formally finds \eqref{massconservation}. However, it is well-known by now
that \eqref{massconservation} is not true in general. It was first established for the multiplicative kernel $K(x,y)=xy$ (see e.g. \cite{McLeod62a}),
and later for more general kernels
which grow faster than linearly at infinity \cite{Jeon98,EMP02}, that there is a finite time $t_*\geq 0$, depending on the kernel and on the initial data, such that
mass is conserved up to time $t_*$ and decays afterwards. This phenomenon is known as gelation and corresponds to the creation of infinitely large clusters at the finite
time $t_*$. If, on the other hand, the kernel $K$ grows at most linearly at infinity, then mass conservation of solutions has been established for a large range of
kernels (see e.g. \cite{Norris99,LauMisch02,LauMisch04}).
\medskip
A fundamental issue in the analysis of coagulation equations is the dynamic scaling hypothesis. It states that for homogeneous kernels, solutions to \eqref{smolu1} converge
to a uniquely determined self-similar solution, either as time goes to infinity, or, in the case of gelation, as time approaches the gelation time. However, this issue is only
well understood for the so-called solvable kernels, $K(x,y)=\mathrm{const}$, $K(x,y)=x+y$ and $K(x,y)=xy$, for which explicit solution formulas are available. In fact, for those kernels
it has been established that there is one self-similar solution with finite mass, and convergence to this solution under some assumptions on the data has been proved in
a range of papers \cite{KreerPen94,DMR00,MePe04,LauMisch05,CMM10}. In \cite{MePe04} it was also shown that in addition to self-similar
solutions with finite mass there exists a family of self-similar solutions that have fat tails. Furthermore, in \cite{MePe04}
the domains of attraction of all those solutions have been completely
characterized. However, the proofs of all these results rely on the use of the Laplace transform or on explicit formulas for the self-similar
solutions and cannot, at least not directly, be extended to any other kernel.
More recently, some results on self-similar solutions to \eqref{smolu1} for kernels that are homogeneous of degree $\lambda<1$ have been established.
First, existence of self-similar profiles with finite mass for a large range of such kernels has been proved in \cite{FouLau05,EMR05}, and properties of such solutions have
been investigated in \cite{EsMisch06,FouLau06a,CanMisch11,NV11b}. In addition, the existence of self-similar solutions with fat tails has been
established for kernels that are bounded as $K(x,y) \leq C(x^{\lambda}+y^{\lambda})$ for $\lambda \in [0,1)$ in \cite{NV12a}. However, it has been
an open problem whether solutions with a given tail behaviour are unique.
In this paper we present the first such result for non-solvable kernels. More precisely, we prove that self-similar solutions with finite
mass are unique if the kernel $K$ is homogeneous of degree zero and
is close to the constant kernel in the sense outlined below (see \eqref{kernel2}-\eqref{kernel0}).
To describe our result in detail we recall that self-similar solutions with finite mass to \eqref{smolu1} for kernels $K$ of homogeneity zero are given by
\begin{equation}\label{ss1}
\phi(\xi,t) = t^{-2} f(x) \qquad \mbox{ with } x = \frac{\xi}{t}
\end{equation}
where $f$ satisfies
\begin{equation}\label{eq1}
-xf'(x) - 2f(x) =Q[f](x)\,
\end{equation}
with
\begin{equation}\label{eq2}
\int_0^{\infty} x f(x)\,dx =M\,.
\end{equation}
It is convenient to rewrite equation \eqref{eq1} as
\begin{equation}\label{eq1b}
-\big(x^2 f(x)\big)'=x Q[f](x) = -\partial_x \int_0^x \int_{x-y}^{\infty} K(y,z) y f(z)f(y)\,dz\,dy
\end{equation}
and by integrating \eqref{eq1b} to reformulate \eqref{eq1} as
\begin{equation}\label{eq1c}
x^2 f(x) = \int_0^x \,dy \int_{x{-}y}^{\infty}\,dz K(y,z)y f(z)f(y)\,.
\end{equation}
We call $f$ a self-similar profile with finite mass to \eqref{smolu1} if $f \in L^1_{loc}(\R)$, $f \geq 0$, $\int x f(x)\,dx < \infty$ and if $f$ satisfies \eqref{eq1c}
for almost all $x \in \R$.
Notice also that if $f$ is a solution to \eqref{eq1c}, then so is the rescaled function $g(x)=af(ax)$ for $a>0$. We can fix the parameter $a$
by fixing $M$ in \eqref{eq2}.
Our goal in this paper is to show that solutions to \eqref{eq1c} and \eqref{eq2} are unique if the kernel $K$ is close to the constant one. More precisely we
make the following
assumptions on the kernel:
We assume for the kernel $K\colon (0,\infty)^2 \to [0,\infty)$
that \begin{equation}\label{kernel1}
K \mbox{ is homogeneous of degree zero, that is } K(\lambda x,\lambda y) = K(x,y) \; \mbox{ for all } x,y,\lambda>0\,.
\end{equation}
Furthermore we assume that there exists $\eps>0$ and $\alpha \in [0,1)$ such that
\begin{equation}\label{kernel2}
W(x,y):=K(x,y)-2 \geq -\eps\,,\qquad \mbox{ for all } x,y>0\,,
\end{equation}
\begin{equation}\label{kernel3}
W(x,y) \leq \eps \Big( \Big(\frac{x}{y}\Big)^{\alpha} + \Big (\frac{y}{x}\Big)^{\alpha}\Big)\qquad \mbox{ for all } x,y>0
\end{equation}
and that $K$ is differentiable with
\begin{equation}\label{kernel0}
\Big | \frac{\partial}{\partial x} K(x,y) \Big| \leq \frac{C\eps}{x} \Big( \Big(\frac{x}{y}\Big)^{\alpha} + \Big (\frac{y}{x}\Big)^{\alpha}\Big)\qquad \mbox{ for all } x,y>0\,.
\end{equation}
The last assumption could be weakened in the sense that it would suffice that a H\"older norm of $K$ is small locally with a certain blow-up rate as $x,y \to 0$. Assumption \eqref{kernel0}
is just somewhat easier to formulate and it is also satisfied (up to the smallness assumption) by kernels one typically encounters in applications.
\begin{theorem}\label{T.uniqueness}
Assume that $K$ satisfies the assumptions \eqref{kernel1}-\eqref{kernel0} and let $f_1$ and $f_2$ be two self-similar profiles that satisfy \eqref{eq2}.
Then, if $\eps$ is sufficiently small, we have $f_1=f_2$.
\end{theorem}
The key ingredients of our proof are the following. In Section \ref{S.apriori} we collect several a priori estimates.
First, we need certain regularity
of the solutions as $x\to 0$ and it is for those estimates that we need a uniform lower bound on the kernel. In fact, it is known that for kernels that are not uniformly bounded
away from zero (e.g. the diagonal kernel) solutions have less regularity than what we need for our proof. In order to derive these results and also for the contraction argument
in the uniqueness proof we consider, as in \cite{MePe04} for the solvable kernels, the desingularized Laplace transform of $f$, for which we can derive an approximate differential
equation (see Lemma \ref{L.Qequation}).
Another key estimate is that any self-similar solution with finite mass decays exponentially as $x \to \infty$ (cf. Lemma \ref{L.ublargex} and Lemma \ref{C.ublargex}).
This result and more detailed estimates for the behaviour for large
$x$ are contained in \cite{NV13a}. For completeness we present the proof of the upper bound that is needed here in the Appendix.
In Lemma \ref{L.qclose} we show that the self-similar solution is close to the one for the constant kernel in the sense that their Laplace transforms are
close.
The contraction argument that gives uniqueness is contained in Section \ref{S.uniqueness}. Again, the key idea is to consider a suitable norm (cf. \eqref{normdef}) that is
a weighted norm of the desingularized Laplace transform and hence measures the distance of solutions in the weak topology.
\bigskip
For the following it is convenient to use the normalization $M=1$ in \eqref{eq2}, such that the self-similar solution for $K=2$ is $f(x)=e^{-x}$.
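Indeed, one checks directly that $f(x)=e^{-x}$ solves \eqref{eq1c} for $K\equiv 2$:
\[
\int_0^x \,dy \int_{x{-}y}^{\infty}\,dz\, 2\, y e^{-y} e^{-z} = 2\int_0^x y\, e^{-y} e^{-(x{-}y)}\,dy = 2 e^{-x} \int_0^x y\,dy = x^2 e^{-x} = x^2 f(x)\,,
\]
while $\int_0^{\infty} x e^{-x}\,dx=1$, consistent with the normalization $M=1$.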
\section{A priori estimates }\label{S.apriori}
\subsection{Properties of the Laplace transform}
For the following we use what is sometimes called the desingularized Laplace transform of $f$, given by
\begin{equation}\label{desinglaplaceg}
Q(q)= \int_0^{\infty} \big(1-e^{-qx}\big) f(x)\,dx\,.
\end{equation}
Due to \eqref{eq2}, the function $Q$ is well-defined for all $q \geq 0$, and clearly $Q(0)=0$.
Normalizing the mass to $M=1$ also implies that $Q'(0)=1$. We will see later, in Lemma \ref{L.ublargex}, that the function $Q$ is in fact defined on $(-\delta,\infty)$ for some
$\delta >0$.
\bigskip
For the following we define
\begin{equation}\label{mdef}
{\cal M}(f,f)(q) =\frac 1 2 \int_0^{\infty} \int_0^{\infty} W(x,y) f(x)f(y) \big(1-e^{-qx}\big) \big( 1- e^{-qy}\big)\,dx\,dy\,.
\end{equation}
We first need to show, via some a priori estimates, that ${\cal M}(f,f)(q) \to 0$ as $q \to 0$.
\begin{lemma}\label{L.mproperty}
If $K(x,y) \geq c_0>0$ and if $f$ is a solution to \eqref{eq1c} and \eqref{eq2} then
\[
\lim_{q \to 0} {\cal M}(f,f)(q)=0\,.
\]
\end{lemma}
\begin{proof}
We first notice that the proof of Lemma 2.1 in \cite{NV11b} applies without any change to conclude that
\begin{equation}\label{extra1}
\sup_{R>0} \frac{1}{R} \int_{R/2}^R x f(x)\,dx \leq C\,.
\end{equation}
Then, by a dyadic argument, we conclude with \eqref{extra1} that
\begin{equation}\label{ubsmallx}
\begin{split}
\int_0^1y^{1-\alpha} f(y)\,dy & \leq \sum_{n=0}^{\infty} \int_{2^{-(n{+}1)}}^{2^{-n}} y^{1{-}\alpha} f(y)\,dy\\
& \leq \sum_{n=0}^{\infty} 2^{n\alpha} \int_{2^{-(n{+}1)}}^{2^{-n}} y f(y)\,dy\\
& \leq \sum_{n=0}^{\infty} 2^{n(\alpha-1)} \leq C\,.
\end{split}
\end{equation}
As a consequence, we can estimate
\[
\begin{split}
\int_0^{1/2q}\int_0^{1/2q}& |W(x,y)| f(x)f(y)\big(1-e^{-qx}\big)\big(1-e^{-qy}\big) \,dx\,dy\\
& \leq
C q^2 \int_0^{1/2q}\int_0^{1/2q} \Big( \Big(\frac{x}{y}\Big)^{\alpha} + \Big(\frac{y}{x}\Big)^{\alpha}\Big) yx f(x)f(y)\,dx\,dy\\
& \leq C q^{2-2\alpha} \to 0 \qquad \mbox{ as } q \to 0\,.
\end{split}
\]
Furthermore, using \eqref{eq2}, we have
\[
\begin{split}
\int_{1/2q}^{\infty} \int_{1/2q}^{\infty}& |W(x,y)| f(x)f(y) \big(1-e^{-qx}\big)\big(1-e^{-qy}\big) \,dx\,dy \\
& \leq
C \int_{1/2q}^{\infty} \int_{1/2q}^{\infty} x^{\alpha} y^{\alpha} f(x) f(y)\,dx\,dy\\
& \leq C q^{2-2\alpha} \to 0 \qquad \mbox{ as } q \to 0
\end{split}
\]
and we can similarly conclude that the mixed term $\int_0^{1/2q} \int_{1/2q}^{\infty} \cdots \,dy\,dx$ converges to zero as $q \to 0$, which proves the claim.
\end{proof}
\bigskip
To obtain further estimates we derive a differential equation for $Q$.
\begin{lemma}\label{L.Qequation}
The function $Q$ satisfies for all $q$ with $Q(q)<\infty$ that
\begin{equation}\label{qequation}
-q Q'(q) = Q^2-Q + {\cal M}(f,f)(q)\,.
\end{equation}
\end{lemma}
\begin{proof}
Multiplying \eqref{eq1c} by $e^{-qx}$ and integrating we find, after changing the order of integration, that
\[
\begin{split}
-Q^{''}(q) &= \int_0^{\infty} x^2 f(x) e^{-qx}\,dx \\
&=\int_0^{\infty} \int_0^{\infty} K(y,z) y f(y) f(z) \int_{y}^{y+z} e^{-qx}\,dx\,dy\,dz\\
&= \int_0^{\infty} \int_0^{\infty} K(y,z) y f(y) f(z) \frac{1}{q} e^{-qy} \Big(1 - e^{-qz}\Big)\,dy\,dz\\
&= \frac{2}{q} Q'(q) Q(q) + \frac{1}{q} {\cal M}(f,f)'(q)
\end{split}
\]
and as a consequence we find
\[
- \big( q Q'\big)' = \big(Q^2\big)' - Q' + \big( {\cal M}(f,f)\big)'\,.
\]
By definition, we have $Q(0)=0$ and Lemma \ref{L.mproperty} implies that ${\cal M}(f,f)(0)=0$. Hence, integrating the previous identity we deduce the claim.
\end{proof}
In the following we denote by $\bar Q$ the desingularized Laplace transform for the case $K=2$, that is
\begin{equation}\label{Qbardef}
\bar Q(q) = \int_0^{\infty} e^{-x}\big(1-e^{-qx}\big)\,dx = 1-\frac{1}{1+q}=\frac{q}{1+q}\,.
\end{equation}
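Note that $\bar Q$ indeed satisfies \eqref{qequation} with ${\cal M}(f,f)\equiv 0$, as it must since $W \equiv 0$ for $K\equiv 2$:
\[
-q \bar Q'(q) = -\frac{q}{(1+q)^2} = \frac{q^2 - q(1+q)}{(1+q)^2} = \bar Q^2(q) - \bar Q(q)\,.
\]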
In the following Lemma we derive some a-priori estimates for $Q$ and ${\cal M}$ that are essential for our analysis
and follow rather easily from the lower bound on $K$.
\begin{lemma}\label{L.apriori}
If $K(x,y) \geq c_0>0$ for all $x,y>0$, then the following estimates hold.
\begin{align}
\lim_{q \to \infty} Q(q)&<\infty \label{qinfty}\qquad \mbox{ and hence } \int_0^{\infty} f(x)\,dx < \infty\,,\\
\sup_{q >0} |q Q'(q)|&\leq C\,,\label{qprimebound}\\
\int_0^{\infty}\int_0^{\infty} K(x,y) f(x) f(y)\,dx\,dy&< \infty\,, \label{kintegralbound}\\
\lim_{q \to \infty} {\cal M}(f,f)(q)& <\infty\label{Mlimit}.
\end{align}
\end{lemma}
\begin{proof}
With the assumption on $K$ we can deduce from \eqref{qequation}, written with $K$ instead of $W$, that
\[
-q Q'(q) = -Q + \int_0^{\infty} \int_0^{\infty} K(x,y) f(x)f(y) \big(1{-}e^{-qx}\big)\big(1{-}e^{-qy}\big)dxdy \geq - Q + c_0 Q^2.
\]
Hence, by comparing with the solution of the corresponding ODE, the function $Q$ is uniformly bounded. Since $Q$ is increasing, statement \eqref{qinfty} follows.
Next, we have
\[
Q'(q) = \frac{1}{q} \int_0^{\infty} xq e^{-xq} f(x)\,dx \leq \frac{C}{q} \int f(x)\,dx \,,
\]
which together with \eqref{qinfty} establishes \eqref{qprimebound}.
Then it follows from the equation for $Q$ that
\[
\int_0^{\infty} \int_0^{\infty} K(x,y) f(x)f(y) \big(1-e^{-qx}\big)\big(1-e^{-qy}\big)\,dx\,dy\leq C
\]
and by monotone convergence we find \eqref{kintegralbound} in the limit $q \to \infty$. Denoting this limit by $J$ we finally get that
\begin{align*}
{\cal M}(f,f)(q)& = \int_0^{\infty} \int_0^{\infty} W(x,y) f(x)f(y) \big(1-e^{-qx}\big)\big(1-e^{-qy}\big)\,dx\,dy\\
&=\int_0^{\infty} \int_0^{\infty} K(x,y) f(x)f(y) \big(1-e^{-qx}\big)\big(1-e^{-qy}\big)\,dx\,dy - Q(q)^2\\
& \to J - Q(\infty)^2\,
\end{align*}
which proves \eqref{Mlimit}.
\end{proof}
\subsection{Regularity near zero}
In the following Lemma we prove a certain regularity for $f$ as $x \to 0$. As already mentioned in the introduction, this result relies on a uniform lower bound on the kernel.
In fact, for the diagonal kernel, the corresponding result is known not to be true, since solutions behave as $f(x) \sim \frac{C}{x}$ as $x \to 0$ and thus
\eqref{regularity} and \eqref{negativemoment}
do not hold.
\begin{lemma}\label{L.regularity}
Given $\eta>0$ there exists $\rho_0>0$ such that for sufficiently small $\eps$
\begin{equation}\label{regularity}
\int_{\rho}^{2\rho} f(x)\,dx \leq C\rho^{1-\eta} \qquad \mbox{ for all } \rho \in (0,\rho_0]\,.
\end{equation}
As a consequence we obtain
\begin{equation}\label{negativemoment}
\int_0^1 \frac{f(x)}{x^{\alpha}}\,dx \leq C_{\alpha}\,.
\end{equation}
\end{lemma}
\begin{proof}
We have seen in Lemma \ref{L.apriori} that $L:=\lim_{q \to \infty} {\cal M}(f,f)(q)$ exists. Furthermore, we deduce from
\eqref{qequation} that
\begin{equation}\label{qlrelation}
Q^2(\infty)-Q(\infty)+L=0\,.
\end{equation}
Using \eqref{kernel2}, \eqref{qequation} and \eqref{qlrelation} we can derive the following differential inequality for the positive function $Z(q):=Q(\infty)-Q(q)$:
\begin{align*}
qZ'&= Q^2-Q+ {\cal M}(f,f)\\
&= Z^2 + \big (1-2Q(\infty)\big) Z -L + {\cal M}(f,f)\\
&=Z^2 + \big (1-2Q(\infty)\big) Z + \tfrac 1 2 \int_0^{\infty} \int_0^{\infty} W(x,y) f(x) f(y) \Big( \big(1-e^{-qx}\big)\big(1{-}e^{-qy}\big) {-}1\Big)dxdy \\
& \leq Z^2 + \big (1-2Q(\infty)\big) Z -C\eps \Big( Q(q)^2-Q(\infty)^2\Big)\\
&\leq (1-C\eps) Z^2 + \big(1-2(1-C\eps) Q(\infty)\big) Z\,.
\end{align*}
We know that given $\delta>0$ we have $Q(\infty) \in (1-\delta,1+\delta)$ if $\eps$ is sufficiently small. Hence $Z(q) \leq \delta $ for $q \geq \hat q$ where $\hat q$ is
sufficiently large. As a consequence we obtain
\[
qZ'(q) \leq (\eta-1) Z \qquad \mbox{ with } \eta = (1-C\eps)\delta + 2C(\eps+\delta) \,,
\]
which implies
\begin{equation}\label{Zbound}
Z(q) \leq Z(\hat q) \Big( \frac{\hat q}{q}\Big)^{1{-}\eta}
\end{equation}
and thus
\begin{equation}\label{Zbound1}
Z(q) = \int_0^{\infty} f(x) e^{-qx}\,dx \leq \frac{C}{q^{1{-}\eta}}\,\qquad \mbox{ for } q \geq \hat q\,.
\end{equation}
Choosing $q=\frac{1}{\rho}$, the estimate \eqref{regularity} follows.
To obtain \eqref{negativemoment} we use a dyadic argument. More precisely, we estimate for $\eta < 1-\alpha$ that
\[
\int_0^1 \frac{f(x)}{x^{\alpha}}\,dx = \sum_{n=0}^{\infty} \int_{2^{-(n+1)}}^{2^{-n}} \frac{f(x)}{x^{\alpha}}\,dx \\
\leq C \sum_{n=0}^{\infty} 2^{-(1{-}\eta {-}\alpha)n }\leq C\,.
\]
\end{proof}
\subsection{Exponential decay}
A key result for our analysis is the following decay estimate. If $f$ is a solution of \eqref{eq1c} and \eqref{eq2} then it decays exponentially fast.
This fact as well as stronger results can be proved for a much larger class of kernels than considered in this paper (see \cite{NV13a}). For the convenience
of the reader we present the proof of Lemma \ref{L.ublargex} in the appendix.
\begin{lemma}
\label{L.ublargex}
There exist constants $C,a>0$ such that any solution of \eqref{eq1c}, \eqref{eq2} satisfies
\[
f(x) \leq C e^{-ax} \qquad \mbox{ for all } x\geq 1\,.
\]
\end{lemma}
\begin{rem}
Due to the invariance of \eqref{eq1c} under rescaling, we can obtain that $f(x) \leq Ce^{-x}$, but have to give up \eqref{eq2} instead.
\end{rem}
As a consequence of Lemma \ref{L.ublargex} one also obtains the following result.
\begin{lemma}\label{C.ublargex}
Let $f(x)$ be a solution to \eqref{eq1c} and \eqref{eq2} such that $\int_0^{\infty} f(x) e^{ax}\,dx < \infty$ for $a>0$. Then there exists $b>0$ such that
$f(x)e^{ax} \leq C e^{-bx}$ for all $x \geq 1$.
\end{lemma}
\begin{proof}
The statement follows from the observation that the function $g(x)=f(x)e^{-ax}$ satisfies the inequality
\[
x^2 g(x) = \int_0^x \,dy \int_{x-y}^{\infty} \,dz K(y,z) e^{a(x-(y+z))} y g(y)g(z) \leq \int_0^{x} \,dy \int_{x-y}^{\infty} \,dz K(y,z) y g(y)g(z)\,,
\]
which is sufficient to apply the proof of Lemma \ref{L.ublargex} to $g(x)$.
\end{proof}
\subsection{The solution is close to the one for the constant kernel}
Our next Lemma shows that $Q$ is close to $\bar Q$ for small $\eps$ as long as we stay away from the singularity of $\bar Q$, that is $q=-1$.
\begin{lemma}\label{L.qclose}
Given $\delta>0$ and $\nu>0$, we have for sufficiently small $\eps>0$ that
\[
\sup_{q > -1+\nu} |(Q-\bar Q)(q)| \leq \delta\,.
\]
\end{lemma}
\begin{proof}
We denote $G(q):=Q(q)-\bar Q(q)$ such that $G$ satisfies the equation
\begin{equation}\label{Gequation}
-qG'(q)= \big(2\bar Q-1\big) G + G^2 +{\cal M}(f,f)(q)
\end{equation}
and $G(0)=0$ as well as due to our normalization $G'(0)=0$.
Integrating \eqref{Gequation} we find
\begin{equation}\label{Grepresentation}
\frac{G(q)}{q} = \frac{G(q_0)}{q_0} - \frac{1}{(1+q)^2} \int_{q_0}^q (1+r)^2 \frac{G^2(r)}{r^2}\,dr + \frac{1}{(1+q)^2} \int_{q_0}^q (1+r)^2 \frac{{\cal M}(f,f)(r)}{r^2}\,dr\,.
\end{equation}
We first consider $q_0=0$ and recall that by our assumptions $\lim_{q \to 0} \frac{G(q)}{q}=0$. For $\rho\geq 0$ define
\[
\|G\|_{\rho}:=\sup_{|q|\leq \rho} \Big| \frac{G(q)}{q}\Big|\,,
\]
such that $\|G\|_{0} =0$. From Lemma \ref{L.ublargex} we know that there exists $\eta>0$ such that $Q$ and hence $G$ are defined for all $q\in [-\eta,\infty)$.
By linearizing $1-e^{-qx}$, which is possible due to Lemma \ref{C.ublargex}, we have the estimate
\[
|{\cal M}(f,f)(q)|\leq C_{\eta}\eps q^2 \int_0^{\infty} \int_0^{\infty} \Big( \Big( \frac{x}{y}\Big)^{\alpha} + \Big ( \frac{y}{x}\Big)^{\alpha}\Big) xy f(x) f(y)\,dx\,dy
\leq C_{\eta} \eps q^2
\]
for $-\eta <q<\infty $. Now let $\rho \in (0,\eta]$ be such that $\|G\|_{\rho} \leq \frac{1}{2}$\,. Then, we obtain from
\eqref{Grepresentation}
\[
\frac{G(q)}{q} \leq \frac{1}{2} \int_0^q \frac{G(r)}{r}\,dr + C_{\eta} \eps \rho
\]
and Gronwall's inequality implies
\[
\frac{G(q)}{q} \leq C_{\eta} \eps \rho\,,
\]
which implies that we can choose $\rho = \eta$ and have the desired estimate in $[-\eta,\eta]$.
We are now going to derive the estimate in $[-1+\nu,-\eta]$. To that aim observe that ${\cal M}$ can be estimated,
recalling \eqref{negativemoment}, by
\begin{equation}\label{Mestimate}
\begin{split}
{\cal M}(f,f)(q)&\leq C \eps \int_0^{\infty} x^{\alpha} f(x) \big( 1-e^{-qx}\big)\,dx \int_0^{\infty} y^{-\alpha} f(y) \big(1-e^{-qy}\big)\,dy\\
& \leq C \eps \big( 1+|Q'|\big)^{\alpha} |Q|^{1-\alpha} \big(1+|Q|\big) \\
& \leq C \eps \big( 1 + |Q'| + |Q|^{\frac{2-\alpha}{1-\alpha}}\big)\,.
\end{split}
\end{equation}
We know that $|\bar Q(q)|\leq C_{\nu}$ for $q \in [-1+\nu,-\eta]$. We consider now an interval $[-\rho,-\eta]$ such that $|Q(q)| \leq 2|\bar Q(q)|\leq 2C_{\nu}$.
We know from Lemma \ref{C.ublargex} that $Q(q)$ is defined on a larger interval, if $Q(q)$ remains bounded.
Then \eqref{qequation} and \eqref{Mestimate} imply that in $[-\rho,-\eta]$ the function $Q$ satisfies an equation of the form
\[
-q \Big(1+\frac{a(q)}{q}\Big) Q' = Q^2 - Q + b(q)
\]
with
\[
|a(q)|\leq C \eps \qquad \mbox{ and } \qquad |b(q)|\leq C \eps\,
\]
and by linearization
\begin{equation}\label{eqdiff}
-qQ'= Q^2-Q + \sigma \qquad \mbox{ with } |\sigma(q)|\leq C \eps.
\end{equation}
Then $G$ solves
\[
-q G'= \big( 2 \bar Q -1 \big) G + G^2 + \sigma\,, \qquad |G(-\eta)| \leq \delta_1
\]
where $\delta_1$ can be made arbitrarily small if $\eps$ is small.
We can then use the representation formula \eqref{Grepresentation} for $G$ and Gronwall's inequality to conclude that
\[
|G(q)| \leq C(\delta_1 + \eps) \qquad \mbox{ for all } q \in [-\rho,-\eta]
\]
and this in turn implies that we can take $\rho=-1+\nu$ and we have the desired estimate in $[-1+\nu,-\eta]$.
The corresponding estimate in $[\eta,\infty)$ follows similarly, using that due to \eqref{qinfty} we have a uniform bound on $Q$ and thus we also have \eqref{eqdiff}.
\end{proof}
Our next Lemma shows that $Q$ blows up at
a point $q^*$ that is close to $-1$, and that it blows up at the same rate as $\bar Q$.
\begin{lemma}\label{L.singularity}
Given $\delta>0$ there exists $\eps>0$ such that there exists $q^*$ with
$|q^*+1|\leq \delta$ and $\lim_{q \to q^*} |Q(q)|=\infty$.
Furthermore there exists $r>0$ such that
\begin{equation}\label{singularity}
\big| (q-q^*)Q(q)+1\big| \leq \delta \qquad \mbox{ for all } q \in (q^*,q^*+r)\,.
\end{equation}
\end{lemma}
\begin{proof}
From the previous Lemma we know that $q^* \leq -1+\nu$ where $\nu$ can be made arbitrarily small with $\eps$.
To obtain a lower bound on $q^*$ we return to \eqref{Mestimate}
and derive
\begin{align}
{\cal M}(f,f)(q) &\leq C \eps \big(1+|Q'|\big)^{\alpha} |Q|^{1-\alpha} |V|\nonumber\\
& \leq C \eps \big( 1 + |Q'| + |Q| |V|^{\frac{1}{1{-}\alpha}}\big) \label{Mestimate1}
\end{align}
with
\[
V(q)=\int_0^{\infty} x^{-\alpha} f(x) \big(1-e^{-qx}\big)\,dx\,.
\]
We find that $V$ satisfies
\begin{equation}\label{Vhoelder}
|V(q)-V(\hat q)| \leq C \big(1+|Q(\hat q)|\big) |q-\hat q|^{\alpha}\,.
\end{equation}
Indeed, this follows from
\begin{align}
\big| V(q)-V(\hat q)\big| &= \Big|\int_0^{\infty} x^{-\alpha} \big(e^{-qx} - e^{-\hat qx}\big) f(x)\,dx \Big|\nonumber \\
& \leq \int_0^{\infty} e^{-\hat q x} |1-e^{-x(q-\hat q)}| \frac{1}{(x|q-\hat q|)^{\alpha}} |q-\hat q|^{\alpha} f(x)\,dx\nonumber \\
& \leq C |q-\hat q|^{\alpha}\int_0^{\infty} e^{-\hat q x} f(x)\,dx \label{Mestimate2}\\
& \leq C |q-\hat q |^{\alpha} \big( 1 + |Q(\hat q)|\big)\,.\nonumber
\end{align}
Given $\eta>0$ we now choose $\nu$ in Lemma \ref{L.qclose} such that with $q_0=-1+\nu$
\[
|Q(q_0)| \leq \eta |Q(q_0)|^2 \qquad \mbox{ and } \qquad |\bar Q(q_0)| \leq \eta |\bar Q(q_0)|^2\,.
\]
Then we define a decreasing sequence $q_n$ in the following way:
\begin{equation}\label{qndef}
q_{n+1}=q_n - \frac{1}{4|Q(q_n)|}\,.
\end{equation}
We are going to show by induction that
\begin{align}
|V(q_{n{+}1})|^{\frac{1}{1-\alpha}} & \leq C |Q(q_n)| \,,\label{vestimate}\\
\frac{1}{2}|Q(q_n)| & \leq |Q(q_{n+1})| \leq 2 |Q(q_n)|\,, \label{Q1}\\
|Q(q_{n{+}1})| &\geq \frac 7 6 |Q(q_n)| \,. \label{Q2}
\end{align}
In fact, it follows from \eqref{Mestimate2} and \eqref{qndef} that
\begin{align*}
|V(q_{n{+}1})|^{\frac{1}{1{-}\alpha}} & \leq |V(q_n)|^{\frac{1}{1{-}\alpha} } + |Q(q_n)|^{\frac{1}{1{-}\alpha}} |q_{n{+}1}-q_n|^{\frac{\alpha}{1{-}\alpha}}\\
& \leq C_{\nu} + |Q(q_n)| \leq C |Q(q_n)|\,.
\end{align*}
Inserting \eqref{vestimate} into \eqref{Mestimate1} we obtain for $q \in [q_{n{+}1},q_n]$, taking also into account that $|Q(q)|$ is increasing for decreasing $q$, that
\begin{equation}\label{Mestimate3}
{\cal M}(f,f)(q) \leq C \eps \big( 1 + |Q'(q)| + |Q(q)|^2\big)
\end{equation}
As a consequence, we obtain that $Q$ satisfies for $q \in [q_{n{+}1},q_n]$
\[
-Q'(q) = \big( 1 + a(q)\big) Q^2 \qquad \mbox{ with } |a(q)| \leq C \big( \eps + \eta + \nu\big)\,.
\]
Integrating this equation, we find
\begin{align}
Q(q) &= \frac{Q(q_n)}{1- Q(q_n) \big( 1 + O(\eps+\eta+\nu)\big) (q-q_n)}\nonumber\\
& = \frac{Q(q_n)}{1- |Q(q_n)| \big( 1 + O(\eps+\eta+\nu)\big) |q-q_n|} \label{Qsolution}
\end{align}
and in particular, due to the monotonicity of $Q$ and the definition of the sequence $\{q_n\}$ in \eqref{qndef}, we
deduce \eqref{Q1} and \eqref{Q2}.
Then
\begin{align*}
q_{n+1} & = q_0 - \frac{1}{4} \Big( \frac{1}{|Q(q_0)|} + \cdots + \frac{1}{|Q(q_n)|}\Big)\\
& \geq q_0 - \frac{1}{4|Q(q_0)|} \Big( 1 + \frac{6}{7} + \Big( \frac{6}{7}\Big)^2 + \cdots \Big) = q_0 - \frac{7}{4|Q(q_0)|} \,.
\end{align*}
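Here the second step uses that iterating \eqref{Q2} gives
\[
\frac{1}{|Q(q_k)|} \leq \Big( \frac 6 7 \Big)^k \frac{1}{|Q(q_0)|} \qquad \mbox{ for all } k \geq 0\,,
\]
so that the sum in brackets is dominated by a geometric series with value $7$.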
As a consequence of this and \eqref{Q2}, we obtain that $Q$ blows up at a point $q^* \geq q_0-\frac{7}{4|Q(q_0)|}$.
It remains to prove \eqref{singularity}.
We return to \eqref{Qsolution} to obtain
\[
Q(q_{n+1}) = \frac{Q(q_n)}{ 1- \frac{1}{4} (1+ O(\eps + \eta +\nu ) )} = \frac{4}{3} Q(q_n) (1+ O(\eps+ \eta+\nu) )\,.
\]
Iterating this argument we find
\begin{align*}
\Big( \frac 4 3 \Big)^{k-(n+1)} &|Q(q_{n+1})| \big( 1- O(\eps+\eta+\nu)\big)^{k-(n+1)}
\leq |Q(q_k)| \\
&\leq \Big( \frac 4 3 \Big)^{k-(n+1)} |Q(q_{n+1})| \big( 1+O(\eps+ \eta+\nu) \big)^{k-(n+1)}\,.
\end{align*}
As a consequence
\[
q_{n+1}-q^* = \frac{1}{4} \sum_{k \geq n+1} \frac{1}{|Q(q_k)|} \geq \frac{1}{4 |Q(q_{n+1})|} \sum_{l=0}^{\infty} \Big( \frac 3 4 \Big)^l \frac{1}{\big(1+C(\eps+\eta+\nu)\big)^l}
\]
and
\[
q_{n+1}-q^* = \frac{1}{4} \sum_{k \geq n+1} \frac{1}{|Q(q_k)|} \leq \frac{1}{4 |Q(q_{n+1})|} \sum_{l=0}^{\infty} \Big( \frac 3 4 \Big)^l \frac{1}{\big(1-C(\eps+\eta+\nu)\big)^l}\,.
\]
Hence
\[
\big(1- C(\eps+ \eta+\nu)\big) \frac{1}{|Q(q_{n+1})|} \leq q_{n+1}-q^* \leq \big(1+ C(\eps + \eta+\nu)\big) \frac{1}{|Q(q_{n+1})|}\,.
\]
Since
\[
|Q(q)| = \frac{|Q(q_n)|}{1+|Q(q_n)|(1+O(\eps+\eta+\nu))(q-q_n)} = \frac{1}{q-q_n} \Big( 1 + O(\eps + \eta+\nu)\Big)
\]
we also find
\[
|Q(q)| = \frac{1}{q-q^*} \Big( 1 + O(\eps + \eta+\nu)\Big)
\]
and the proof of \eqref{singularity} is finished.
\end{proof}
\section{Uniqueness proof}
\label{S.uniqueness}
From now on we rescale the solution such that the singularity of its desingularized Laplace transform $Q$ is at $q=-1$. We denote the corresponding functions
again by $f$ and $Q$ respectively.
Since all the transforms are defined on the interval $(-1,\infty)$ we can define the following norm, that is particularly suited for our uniqueness proof:
\begin{equation}\label{normdef}
\|Q\|:= \sup_{q>-1} \frac{1+q}{|q|}|Q(q)|\,.
\end{equation}
As a corollary of Lemmas \ref{L.qclose} and \ref{L.singularity} we obtain the following.
\begin{lemma}\label{L.qgloballyclose}
Given $\delta>0$ there exists $\eps>0$ such that
\begin{equation}\label{smallness}
\| Q-\bar Q\|\leq \delta\,.
\end{equation}
\end{lemma}
\subsection{The representation formula}
Our next goal is to derive a representation formula for $U:=Q-\bar Q$. Then $U$ satisfies the equation
\begin{equation}\label{uequation}
-q U'(q) = \big( 2 \bar Q -1\big)U +U^2 + {\cal M}(f,f)(q)\,
\end{equation}
and $U = o\big( \frac{1}{1+q}\big)$ as $q \to -1$.
\begin{lemma}\label{L.representation}
The solution to \eqref{uequation} can be represented as
\begin{equation}\label{urepresentation}
U(q) = - \frac{q}{(1+q)^2} \int_{-1}^q \frac{(1+s)^2}{s^2} \,\psi(s)\,ds \qquad \mbox{ with } \psi= U^2 + {\cal M}(f,f)\,.
\end{equation}
Furthermore, if $U_1$ and $U_2$ are two such solutions, then
\begin{equation}\label{udifference}
\begin{split}
U_1(q)-U_2(q)& = - \frac{q}{(1+q)^2} \int_{-1}^q \frac{(1+s)^2}{s^2} \Big( U_1(s)^2 - U_2(s)^2\Big)\,ds
\\ &
- \frac{q}{(1+q)^2} \int_{-1}^q \frac{(1+s)^2}{s^2} \Big( {\cal M}(f_1,f_1)(s) - {\cal M}(f_2,f_2)(s)\Big)\,ds \,.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
Integrating the equation
\[
-qU'(q)= (2\bar Q-1)U + \psi= \Big(1-\frac{2}{1+q}\Big) U +\psi
\]
gives
\[
\Big( \frac{(1+q)^2}{q} U\Big)' = - \Big( \frac{1+q}{q}\Big)^2 \psi
\]
and thus \eqref{urepresentation} follows.
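For the reader's convenience we note why no boundary term appears at $s=-1$: since $U = o\big( \frac{1}{1+q}\big)$ as $q \to -1$, we have
\[
\Big| \frac{(1+s)^2}{s} U(s) \Big| = \frac{1+s}{|s|}\,(1+s)\,|U(s)| \to 0 \qquad \mbox{ as } s \to -1\,,
\]
so that integrating the above identity from $-1$ to $q$ yields $\frac{(1+q)^2}{q} U(q) = - \int_{-1}^q \big( \frac{1+s}{s}\big)^2 \psi(s)\,ds$, which is \eqref{urepresentation}.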
\end{proof}
\subsection{The contraction argument}
\begin{prop}
Let $U_1$ and $U_2$ be two solutions of \eqref{uequation} as in Lemma \ref{L.representation}. Then $U_1=U_2$ if $\eps>0$ is sufficiently small.
\end{prop}
\begin{proof}
We deduce from \eqref{udifference} that
\begin{equation}\label{prop1}
\begin{split}
\|U_1-U_2\|&\leq \sup_{q>-1} \frac{1}{q{+}1} \int_{-1}^q \frac{(1+s)^2}{s^2} \Big | U_1(s)^2 - U_2(s)^2\Big| \,ds\\
& \quad + \sup_{q>-1} \frac{1}{q{+}1}\Big| \int_{-1}^q \frac{(1+s)^2}{s^2} \Big( {\cal M}(f_1,f_1)(s) - {\cal M}(f_2,f_2)(s)\Big)\,ds \Big|\\
& =: (I) + (II)\,.
\end{split}
\end{equation}
The first term is easy to estimate. In fact, using \eqref{smallness}, we find for sufficiently small $\eps$ that
\begin{equation}\label{prop2}
\begin{split}
|(I)|& \leq \sup_{q>-1} \frac{1}{(1{+}q)} \int_{-1}^q \Big( \|U_1\|+\|U_2\|\Big) \|U_1-U_2\| \,ds \\
& \leq \Big( \|U_1\|+\|U_2\|\Big) \|U_1-U_2\| \\
& \leq \frac 1 2 \|U_1-U_2\| \,.
\end{split}
\end{equation}
The main task is to derive a similar bound on the second term in \eqref{prop1}.
We formulate this main result as a proposition and postpone its proof to the next section.
\begin{prop}\label{P.main}
For sufficiently small $\eps$ we have
\begin{equation}\label{main}
\sup_{q>-1} \frac{1}{1{+}q}\Big| \int_{-1}^q \frac{(1+s)^2}{s^2} \Big( {\cal M}(f_1,f_1)(s) - {\cal M}(f_2,f_2)(s)\Big)\,ds \Big| \leq C \eps \| U_1-U_2\|\,.
\end{equation}
\end{prop}
With Proposition \ref{P.main}, estimates \eqref{prop1} and \eqref{prop2} give $\|U_1-U_2\| \leq \big( \tfrac 1 2 + C \eps\big) \|U_1-U_2\|$, and hence $U_1=U_2$ for sufficiently small $\eps$.
\end{proof}
\subsection{Proof of Proposition \ref{P.main}}
We first notice that it suffices to prove Proposition \ref{P.main} for $W(x,y)$ that satisfies \eqref{kernel2}-\eqref{kernel0} with $\eps=1$. The result then follows by
scaling.
For the proof of Proposition \ref{P.main} we argue by contradiction.
Suppose that \eqref{main} (with $\eps=1$) is not true. Then there exist sequences $\{W_n\}, \{f_{1,n}\}, \{f_{2,n}\}$ and $\{q_n\}$ such that, with $U_{i,n}$ denoting
the corresponding functions as above,
\begin{equation}\label{assump1}
\|U_{1,n} - U_{2,n}\| \to 0 \qquad \mbox{ as } n \to \infty
\end{equation}
and
\begin{equation}\label{assump2}
\frac{1}{q_n{+}1}\Big| \int_{-1}^{q_n} \frac{(1+s)^2}{s^2} \Big( {\cal M}(f_{1,n},f_{1,n})(s) - {\cal M}(f_{2,n},f_{2,n})(s)\Big) \,ds\Big| \geq 1\,.
\end{equation}
By our regularity assumption \eqref{kernel0} we can assume without loss of generality that there exists a function $W_*=W_*(x,y)$, satisfying \eqref{kernel1}-\eqref{kernel0} such that
\begin{equation}\label{wnconvergence}
W_n \to W_* \qquad \mbox{ locally uniformly on } (0,\infty)^2\,.
\end{equation}
We now collect some a-priori estimates for solutions $f$.
\begin{lemma}
Let $f$ be a solution to \eqref{eq1}. Then
\begin{align}
\int_0^{\infty} f(x)\,dx & \leq 2 \,,\label{f1}\\
\int_0^1 \frac{f(x)}{x^{\alpha}}\,dx & \leq C_{\alpha}\,,\label{f2}\\
\Big| \int_0^{\infty}\big(1-e^{-qx}\big)f(x)\,dx \Big| &\leq \frac{2|q|}{1+q}\qquad \mbox{ for all } q>-1\,,\label{f3}\\
\int_R^{2R} e^x f(x)\,dx & \leq 4 R \qquad \mbox{ for } R \geq \frac{1}{1-\log 2} \,, \label{f4}\\
\int_1^{\infty} \frac{e^x}{x^{3-\alpha}} f(x)\,dx &\leq C \,. \label{f5}
\end{align}
\end{lemma}
\begin{proof}
The first estimate \eqref{f1} and the third \eqref{f3} follow from \eqref{smallness}, the second \eqref{f2} has been proved in Lemma \ref{L.regularity}. We can now deduce
\eqref{f4} from \eqref{f3}. In fact, choosing $q<-\log 2$, we have
\[
\int_0^{\infty} \big( e^{-qx}-1\big) f(x)\,dx = \Big | \int_0^{\infty} \big( 1- e^{-xq}\big) f(x)\,dx\Big| \geq \frac 1 2 \int_1^{\infty} e^{-qx} f(x)\,dx.
\]
As a consequence we obtain
\[
\int_1^{\infty} e^{|q|x} f(x) \,dx \leq \frac{4}{1+q} \qquad \mbox{ for } q\in (-1,-\log 2)\,.
\]
Choosing now $1+q=\frac{1}{R}$ and $x \in (R,2R)$ estimate \eqref{f4} follows.
Finally, estimate \eqref{f5} follows from \eqref{f4} via the usual dyadic argument, that is
\[
\int_1^{\infty} \frac{e^x}{x^{3-\alpha}} f(x)\,dx \leq \sum_{n=0}^{\infty} \int_{2^n}^{2^{n{+}1}} \frac{e^x}{x^{3-\alpha}} f(x)\,dx \leq C \sum_{n=0}^{\infty}
2^{n{+}1} 2^{-(3-\alpha)(n{+}1)} \leq C\,.
\]
\end{proof}
We now write
\begin{align*}
\frac{1}{q{+}1}&\int_{-1}^q \,ds \frac{(1+s)^2}{s^2} \Big( {\cal M}(f_1,f_1)(s) - {\cal M}(f_2,f_2)(s)\Big)\\
&= \int_0^{\infty} \int_0^{\infty} W(x,y) \big( f_1(x)+f_2(x)\big)\big(f_1(y)-f_2(y)\big) H(q,x,y)\,dx\,dy
\end{align*}
with
\begin{equation}\label{Hdef}
H(q,x,y)= \frac{1}{1+q} \int_{-1}^q \frac{(1+s)^2}{s^2} \big(1-e^{-sx}\big) \big( 1-e^{-sy}\big)\,ds\,.
\end{equation}
\subsubsection{The case $q_n \to q^* \in (-1,\infty]$.}
Now assume that $q_n\to q^*\in (-1,\infty]$. In this case we can use the following estimate for $H$.
\begin{lemma}\label{L.Hestimate}
For $q >-1+\frac{1}{L}$ we have
\[
0 \leq H(q,x,y) \leq C_L \frac{\min(x,1)\min(y,1)}{1+(x+y)^3} e^{x+y}\,.
\]
\end{lemma}
\begin{proof}
If $x,y \leq 1$, the estimate is immediate by linearizing the function $1-e^{-sx}$. If $x,y \geq 1$, then the main contribution to the integral comes from the region
$s \sim -1$. In fact, if $-1+1/L < q <-1/L$, then
\begin{align*}
H(q,x,y) &\leq C_L \int_{-1}^q (1+s)^2 e^{-s(x+y)}\,ds\\
& = C_L \frac{e^{x+y}}{(x+y)^3} \int_0^{(1+q)(x+y)} t^2 e^{-t}\,dt \\
& \leq C_L \frac{1}{1+(x+y)^3}e^{x+y}\,.
\end{align*}
In a neighborhood of $s=0$ we can again linearize, while for $s \geq \frac{1}{L}$ we just use the upper bound
$\big(1-e^{-sx}\big) \big( 1-e^{-sy}\big) \leq 1$.
If e.g. $x \geq 1$ and $y \leq 1$, the result follows analogously.
\end{proof}
\begin{rem}
If $q_n \to q^* \in (-1,\infty)$, then it is obvious that $H(q_n,\cdot,\cdot)$ converges locally uniformly in $(0,\infty)^2$ to $H(q^*,\cdot,\cdot)$.
If $q_n \to \infty$, then $H(q_n,\cdot,\cdot)$ converges locally uniformly to $2$.
\end{rem}
\bigskip
Then, if $q >-1+\frac{1}{L}$ and if $f$ is a solution to \eqref{eq1}, we have, using Lemma \ref{L.Hestimate}, that for large $R$
\begin{align*}
\int_0^{1/R} &\,dx \int_0^{\infty} dy W(x,y) f(x)f(y) H(q,x,y)\\
&\leq C \int_0^{1/R}dx \int_0^{\infty} dy \Big( \Big(\frac{x}{y}\Big)^{\alpha} + \Big( \frac{y}{x}\Big)^{\alpha} \Big) f(x)f(y) \frac{x\min(y,1)}{1+(x+y)^3} e^{x+y}\\
& \leq C \int_0^{1/R} x^{1{-}\alpha} f(x)\,dx \, \int_0^{\infty} \Big( y^{-\alpha} + y^{\alpha}\Big) \frac{\min(y,1)e^y}{1+y^3} f(y)\,dy
\\ & \leq \frac{C}{R^{1{-}\alpha}}\,,
\end{align*}
where the last estimate follows from \eqref{f1} and \eqref{f5}.
Furthermore, using also \eqref{f4}, we arrive similarly at
\begin{align*}
&\int_{R}^{\infty} \,dx \int_0^{\infty} dy W(x,y) f(x)f(y) H(q,x,y)
\leq C \int_R^{\infty} \frac{x^{\alpha} e^x}{1+x^3} f(x)\,dx\\
& \leq C \sum_{n=0}^{\infty} \int_{R2^n}^{R2^{n{+}1}} \frac{e^x}{x^{3-\alpha}} f(x)\,dx \leq C \sum_{n=0}^{\infty}
\big( R 2^n\big)^{-3+\alpha} R 2^n \leq \frac{C}{R^{2-\alpha}} \sum_{n=0}^{\infty} 2^{n(-2+\alpha)} \leq \frac{C}{R^{2-\alpha}}\,.
\end{align*}
Hence, in order to arrive at a contradiction to \eqref{assump2}, it remains to show that for large but fixed $R$
\begin{equation}\label{prop3}
\int_{1/R}^R \int_{1/R}^R W_n(x,y) \big( f_{1,n}(x)+f_{2,n}(x)\big)\big(f_{1,n}(y)-f_{2,n}(y)\big) H(q_n,x,y)\,dx\,dy\to 0 \quad \mbox{ as } n \to \infty.
\end{equation}
Since $W_n$ and $H(q_n,\cdot,\cdot)$ converge locally uniformly to their respective limits and since assumption \eqref{assump1} in particular implies that
$f_{1,n}-f_{2,n} \to 0$ locally in the sense of measures, we find that
\[
F_n(x):= \int_{1/R}^R W_n(x,y) H(q_n,x,y) \big(f_{1,n}(y)-f_{2,n}(y)\big) \,dy \to 0 \qquad \mbox{ as } n \to \infty
\]
locally uniformly in $x$. Hence, we can derive \eqref{prop3} and we have obtained a contradiction in the case $q_n \to q^* \in (-1,\infty]$.
\subsubsection{The case $q_n \to -1$.}
This case is somewhat more difficult to treat. We introduce the rescaling
\begin{equation}\label{rescaling}
X=(1+q_n)x \qquad \mbox{ and } g(X)=e^x f(x)\,.
\end{equation}
The integral (cf. \eqref{assump2}) for which we want to show that it converges to zero as $n \to \infty$ becomes
\begin{equation}\label{rescaledterm}
\int_0^{\infty} \int_0^{\infty} W_n(X,Y) \big( g_{1,n}(X) + g_{2,n}(X)\big) \big( g_{1,n}(Y) - g_{2,n}(Y)\big) \tilde H(q_n,X,Y)\,dX\,dY
\end{equation}
with
\begin{equation}\label{Htildedef}
\tilde H(q_n,X,Y)= \frac{e^{-(X+Y)/(1+q_n)}}{(1+q_n)^2} H\Big( q_n, \frac{X}{1+q_n}, \frac{Y}{1+q_n}\Big)\,.
\end{equation}
We are going to derive a-priori estimates for $g$ and $\tilde H$.
\begin{lemma}\label{L.gestimates}
We have for any solution $f$ of \eqref{eq1} and $g$ defined as in \eqref{rescaling} that
\begin{align}
\int_0^{\infty} e^{-\frac{X}{1+q_n}} g(X)\,dX & \leq C(1+q_n) \,, \label{g1}\\
\int_0^{2(1+q_n)} \frac{g(X)}{X^{\alpha}} \,dX &\leq C_{\alpha} (1+q_n)^{1-\alpha}\,, \label{g2}\\
\int_R^{2R} g(X)\,dX & \leq CR \qquad \mbox{ for all } R \geq 2(1+q_n)\,\label{g3}\\
\int_0^{\infty} \Big( Y^{\alpha} + Y^{-\alpha}\Big) \frac{\min(Y/(1+q_n),1)}{1+Y^3}g(Y)\,dY & \leq C\,.\label{g4}
\end{align}
\end{lemma}
\begin{proof}
The first estimate \eqref{g1} follows from \eqref{f1} and the definitions in \eqref{rescaling}, while estimate \eqref{g2} is a consequence of \eqref{f2}.
To establish \eqref{g3} we deduce from \eqref{f3} that for $q<0$ we have
\begin{align*}
\int_0^{\infty} e^{-X\frac{1+q}{1+q_n}}g(X)\,dX & \leq 2|q| \frac{1+q_n}{1+q} + \int_0^{\infty} e^{-\frac{X}{1+q_n}}g(X)\,dX\\
& \leq 2|q| \frac{1+q_n}{1+q} + C(1+q_n)\,.
\end{align*}
We choose $R=\frac{1+q_n}{1+q}$ for any $q \in (-1,-1/2)$ to infer \eqref{g3}.
Finally, we use the usual dyadic argument and \eqref{g3} to estimate
\begin{align*}
\int_1^{\infty} Y^{\alpha-3} g(Y)\,dY & \leq \sum_{k=1}^{\infty} \int_{2^k}^{2^{k{+}1}} Y^{\alpha-3} g(Y)\,dY\\
& \leq C\sum_{k=1}^{\infty} 2^{k(\alpha-3)} 2^k \leq C \sum_{k=1}^{\infty} 2^{-k(2-\alpha)} \leq C
\end{align*}
as well as
\[
\int_{2(1+q_n)}^1 Y^{-\alpha} g(Y)\,dY\leq C \sum_{k=0}^{2^{-k} \geq 2(1+q_n)} \int_{2^{-(k{+}1)}}^{2^{-k}} Y^{-\alpha} g(Y)\,dY
\leq C \sum_{k=0}^{2^{-k} \geq 2(1+q_n)} 2^{k\alpha} 2^{-k} \leq C
\]
which together with \eqref{g2} gives \eqref{g4}.
\end{proof}
\begin{lemma}\label{L.Htildeestimates}
\[
\tilde H(q_n,X,Y) \leq C \frac{\min(X/(1+q_n),1)\min(Y/(1+q_n),1)}{1+(X+Y)^3}\,.
\]
\end{lemma}
\begin{proof}
Using the definitions of $\tilde H$, the estimate follows exactly as in the proof of Lemma \ref{L.Hestimate}.
\end{proof}
With these estimates we can control the regions near zero and infinity. Indeed, using \eqref{g2}, \eqref{g4}
and Lemma \ref{L.Htildeestimates}, we obtain
\begin{align*}
& \int_0^{1/R}dX \int_0^{\infty} dY W_n(X,Y) \big( g_{1,n}(X) + g_{2,n}(X)\big) \big( g_{1,n}(Y) - g_{2,n}(Y)\big) \tilde H(q_n,X,Y)\\
& \leq C \int_0^{1/R}dX\int_0^{\infty} dY \Big( \Big(\frac{X}{Y}\Big)^{\alpha} + \Big( \frac{Y}{X}\Big)^{\alpha}\Big) g_{1,n}(X)g_{2,n}(Y) \tilde H(q_n,X,Y)\\
& \leq C \sum_{j=0}^{2^{-j} \geq R(1+q_n)} \int_{\frac{2^{-(j+1)}}{R}}^{\frac{2^{-j}}{R}} dX \int_0^{\infty} dY \Big( \big(2^{j}RY\big)^{-\alpha}
+ \big(2^{j}R Y\big)^{\alpha}\Big) g_{1,n}(X)g_{2,n}(Y) \frac{\min(\frac{Y}{1+q_n},1)}{1+Y^3}\\
& \qquad + \int_0^{1+q_n}dX \int_0^{\infty} dY \Big( \Big(\frac{X}{Y}\Big)^{\alpha} + \Big( \frac{Y}{X}\Big)^{\alpha}\Big) \frac{X}{1+q_n} g_{1,n}(X) g_{2,n}(Y)
\frac{\min(\frac{Y}{1+q_n},1)}{1+Y^3}\\
& \leq C \sum_{j=0}^{2^{-j} \geq R (1+q_n)} \big( 2^jR\big)^{-(1{+}\alpha)} + \big(2^{j} R\big)^{\alpha{-}1} + C (1+q_n)^{1-\alpha}\\
& \leq C \Big( R^{\alpha{-}1} + (1+q_n)^{1{-}\alpha}\Big)\,.
\end{align*}
Second, we estimate
\begin{align*}
&\int_R^{\infty} dX \int_{1/R}^{\infty} dY W_n(X,Y) \big( g_{1,n}(X) + g_{2,n}(X)\big) \big( g_{1,n}(Y) - g_{2,n}(Y)\big) \tilde H(q_n,X,Y)\\
& \leq C \int_R^{\infty} dX \int_{1/R} ^{\infty} dY \Big( \Big(\frac{X}{Y}\Big)^{\alpha} + \Big( \frac{Y}{X}\Big)^{\alpha}\Big)
g_{1,n}(X)g_{2,n}(Y) \frac{\min(\frac{Y}{1+q_n},1)}{1+(X+Y)^3} \\
&= C \int_R^{\infty} dX \int_{1/R}^R dY \cdots + C \int_R^{\infty} dX \int_R^{\infty} dY \cdots =: (I)+(II)\,.
\end{align*}
Using \eqref{g3} we obtain
\begin{align*}
(I)& \leq C
\int_R^{\infty} dX \int_{1/R}^R \Big( \Big(\frac{X}{Y}\Big)^{\alpha} + \Big( \frac{Y}{X}\Big)^{\alpha}\Big) \frac{g_{1,n}(X)g_{2,n}(Y)}{X^3}\\
& \leq C \sum_{j=0}^{\infty} \sum_{k=0}^{2^{-k} \geq R^{-2}} \int_{R2^j}^{R2^{j{+}1}}dX \int_{R 2^{-(k{+}1)}}^{R2^{-k}} dY \Big( 2^{\alpha(j{+}k)} + 2^{-\alpha(j{+}k)}\Big)
\frac{g_{1,n}(X)g_{2,n}(Y)}{(R2^j)^3}\\
& \leq \frac{C}{R} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty}\Big( 2^{\alpha(j{+}k)} + 2^{-\alpha(j{+}k)}\Big) 2^{-2j-k}\\
& \leq \frac{C}{R} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty}
\Big( 2^{j(\alpha-2)} 2^{k(\alpha-1)} + 2^{-j(\alpha{+}2)} 2^{-k(1{+}\alpha)}\Big)\leq \frac{C}{R}\,.
\end{align*}
Furthermore,
\begin{align*}
(II)&\leq \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \int_{R2^j}^{R2^{j{+}1}} dX \int_{R2^k}^{R2^{k{+}1}}dY \Big( 2^{\alpha(j{-}k)} + 2^{\alpha(k{-}j)}\Big) \frac{g_{1,n}(X)
g_{2,n}(Y)}{R^3 (2^j+2^k)^3}\\
& \leq \frac{C}{R} \sum_{j,k=0}^{\infty} \Big( 2^{\alpha(j{-}k)} + 2^{\alpha(k{-}j)}\Big)\frac{2^{j+k}}{(2^j+2^k)^3}\\
& \leq \frac{C}{R} \int_1^{\infty} d\xi \int_1^{\infty}d\eta \frac{\big( \frac{\xi}{\eta}\big)^{\alpha} + \big( \frac{\eta}{\xi}\big)^{\alpha}}{(\xi+\eta)^3} \\
& \leq \frac{C}{R} \int_1^{\infty} \frac{dr}{r^2} \int_0^{\pi/2} d\theta
\big( \tan^{\alpha} \theta + \cot^{\alpha} \theta\big)
\leq \frac{C}{R}.
\end{align*}
Thus it remains to show that
\begin{equation}\label{final}
\int_{1/R}^RdX \int_{1/R}^R dY W_n(X,Y) \big (g_{1,n}(X) + g_{2,n}(X)\big) \big( g_{1,n}(Y) - g_{2,n}(Y)\big) \tilde H(q_n,X,Y) \to 0
\end{equation}
as $ n \to \infty$.
\begin{lemma}\label{L.Htildeconverge}
As $q_n\to -1$ we have
\[
\tilde H(q_n,X,Y) \to \frac{1}{(X+Y)^3} \int_0^{X+Y} \xi^2 e^{-\xi}\,d\xi \quad \mbox{ locally uniformly in } (X,Y) \in (0,\infty)^2\,.
\]
\end{lemma}
\begin{proof}
We have
\begin{align*}
\tilde H(q_n,X,Y) &= \frac{1}{(1+q_n)^3} \int_{-1}^{q_n} \frac{(1+s)^2}{s^2} \Big( 1 - e^{-s\frac{X}{1+q_n}}\Big) \Big( 1 - e^{-s\frac{Y}{1+q_n}}\Big)e^{-\frac{X+Y}{1+q_n}}\,ds\\
& = \frac{1}{(1+q_n)^3} \int_{-1}^{q_n} \frac{(1+s)^2}{s^2} e^{- \frac{1+s}{1+q_n} (X+Y)}\,ds \\
&+ \frac{1}{(1+q_n)^3} \int_{-1}^{q_n} \frac{(1+s)^2}{s^2}
\Big( 1 - e^{-\frac{sX}{1+q_n}} - e^{-\frac{sY}{1+q_n}}\Big) e^{-\frac{X+Y}{1+q_n}}\,ds\,.
\end{align*}
We can estimate the second term on the right hand side as
\begin{align*}
\Big| \frac{1}{(1+q_n)^3} &\int_{-1}^{q_n} \frac{(1+s)^2}{s^2}
\Big( 1 - e^{-\frac{sX}{1+q_n}} - e^{-\frac{sY}{1+q_n}}\Big) e^{-\frac{X+Y}{1+q_n}}\,ds\Big|\\
&\leq \frac{1}{(1+q_n)^3} \int_{-1}^{q_n} \frac{(1+s)^2}{s^2}\Big(e^{-\frac{X+Y}{1+q_n}} + e^{-X\frac{1+s}{1+q_n}} e^{-\frac{Y}{1+q_n}} + e^{-Y\frac{1+s}{1+q_n}} e^{-\frac{X}{1+q_n}}\Big)\,ds
\end{align*}
and since e.g. $e^{-X\frac{1+s}{1+q_n}} \leq C$ and $ e^{-\frac{Y}{1+q_n}} \to 0$ as $q_n \to -1$, we can deduce that the whole term converges to zero as $n \to \infty$.
On the other hand,
\[
\frac{1}{(1+q_n)^3} \int_{-1}^{q_n} \frac{(1+s)^2}{s^2} e^{- \frac{1+s}{1+q_n} (X+Y)}\,ds\\
= o(1) + \int_0^1 \xi^2 e^{-\xi(X+Y)}\,d\xi
\]
and the result follows after another rescaling of the integral on the right hand side.
\end{proof}
\begin{lemma}\label{L.gconverge}
If $\|f_{1,n} - f_{2,n}\| \to 0 $ as $n \to \infty$, then $\int_0^{\infty} e^{-\theta X} \big(g_{1,n}(X)-g_{2,n}(X)\big) dX \to 0$ for all $\theta>0$. Hence
$g_{1,n}-g_{2,n} \to 0$ weakly in the sense of measures.
\end{lemma}
\begin{proof}
By the definitions we have
\begin{align*}
0& \leftarrow \sup_{q>-1}\frac{1+q}{|q|} \Big| \int_0^{\infty} \big(1-e^{-qx}\big) (f_{1,n}(x)-f_{2,n}(x)) \,dx\Big| \\
& = \sup_{q>-1} \frac{1+q}{|q|(1+q_n)} \Big| \int_0^{\infty} \big( 1-e^{- \frac{qX}{1+q_n}}\big) e^{-\frac{X}{1+q_n}} \big (g_{1,n}(X)-g_{2,n}(X)\big) dX \Big|\,.
\end{align*}
Given $\theta>0$ we define $\tilde q_n \to -1$ such that $\theta(1+q_n)=1+\tilde q_n$. Then, using \eqref{g1}, \eqref{g2} and \eqref{g3}, we find
\[
\Big| \int_0^{\infty} e^{- \theta X} \big (g_{1,n}(X)-g_{2,n}(X)\big) dX \Big|
\leq o(1) + C \Big| \int_0^{\infty} e^{-\frac{X}{1+q_n}} \big (g_{1,n}(X)-g_{2,n}(X)\big) dX \Big|
\to 0
\]
as $n\to \infty$.
Since the left-hand side is just the Laplace transform of $g_{1,n}-g_{2,n}$, this proves the statement of the lemma.
\end{proof}
With Lemmas \ref{L.Htildeconverge} and \ref{L.gconverge}
we can deduce that \eqref{final} holds. This gives a contradiction to \eqref{assump2} and finishes the proof
of Proposition \ref{P.main}.
\bigskip
{\bf Acknowledgment.} The authors acknowledge support through the CRC 1060 {\it The mathematics of emergent effects} at the University of Bonn, which is funded by the German Research Foundation (DFG).
\section{Appendix: Proof of Lemma \ref{L.ublargex}}
\begin{proof}
Dividing \eqref{eq1c} by $x$, integrating and changing the order of integration on the right hand side we derive
as a first a priori estimate that
\begin{align}
\int_0^{\infty} x f(x)\,dx & = \int_0^{\infty} \int_0^{\infty} K(y,z) f(y) f(z) y \log \Big( \frac{y+z}{y}\Big)\,dy\,dz\nonumber\\
& \geq C \int_0^{\infty} \,dz \int_0^z \,dy f(y) f(z) K(y,z) y \log \Big( \frac{y+z}{y}\Big)\label{ublargex1}\\
& \geq C \int_0^{\infty} \,dz \int_0^z \,dy f(y) f(z) K(y,z)y\,.\nonumber
\end{align}
Next, we denote for $\gamma \geq 1$
\begin{equation}
\label{ublargex2}
M(\gamma):=\int_0^{\infty} x^{\gamma} f(x)\,dx \,.
\end{equation}
Our goal is to show inductively that $M(\gamma) \leq \gamma^{\gamma} e^{A\gamma}$
for some (large) constant $A$.
To that aim we first multiply \eqref{eq1b} by $x^{\gamma-2}$ with some $\gamma>1$ and after integrating we obtain
\[
(\gamma{-}1) M(\gamma) = \tfrac 1 2 \int_0^{\infty}\int_0^{\infty} K(x,y) f(x) f(y)\big( (x+y)^{\gamma} - x^{\gamma} - y^{\gamma} \big)\,dx\,dy\,.
\]
By symmetry we also find
\begin{align*}
M(\gamma)&= \frac{1}{\gamma{-}1} \int_0^{\infty} \,dx \int_0^x \,dy K(x,y) f(x) f(y) \Big( \big(x+y\big)^{\gamma} - x^{\gamma}\Big)\\
& = \int_0^{\infty} \,dx \int_0^{x/\gamma} \,dy \cdots + \int_0^{\infty} \,dx \int_{x/\gamma}^x \,dy \cdots\,.
\end{align*}
Due to \eqref{ubsmallx} we have
\begin{align}
\int_0^1\,dx \int_0^x \,dy K(x,y) f(x) f(y) \Big( \big(x+y\big)^{\gamma} - x^{\gamma}\Big)&
\leq C \int_0^1 \,dx \int_0^x \,dy K(x,y) f(x) f(y) x^{\gamma}\nonumber \\
& \leq C\int_0^1 x^{\alpha +\gamma}f(x)\int_0^x y^{1{-}\alpha} f(y)\,dy\,dx \label{ublargex4}\\
& \leq C \,. \nonumber
\end{align}
Using \eqref{ubsmallx} and $(x+y)^{\gamma} - x^{\gamma} \leq c x^{\gamma-1}y$ for $ y \leq \frac{x}{\gamma}$, we find that
\begin{equation}\label{ublargex5}
\begin{split}
\int_1^{\infty} \int_0^{x/\gamma} K(x,y) y x^{\gamma{-}1} f(x)f(y)\,dy\,dx&\leq \int_1^{\infty} \int_0^{x/\gamma} x^{\gamma+\alpha-1} y^{1-\alpha} f(x)f(y)\,dy\,dx
\\ &\leq
C M(\gamma+\alpha -1)\,,
\end{split}
\end{equation}
so that for the sum of both terms we can prove by induction that it is smaller than $1/2 \gamma^{\gamma} e^{A\gamma}$.
It remains to estimate
\begin{align*}
\frac{C}{\gamma-1}&\int_1^{\infty} \,dx \int_{x/\gamma}^x\,dy K(x,y) f(x) f(y) \big(x+y\big)^{\gamma}\\
& \leq \frac{C}{\gamma} \int_1^{\infty} \,dx \int_{x/\gamma}^x \,dy f(x) f(y) \big(x+y\big)^{\gamma} \Big( \frac{x}{y}\Big)^{\alpha}=:(*)\,.
\end{align*}
In the following $\{\zeta_n\} \subset (0,1]$ will be a decreasing sequence of numbers that will be specified later.
Then we define a corresponding sequence of numbers $\kappa_n$ such that given a sequence $\{\theta_n\} \subset (0,1)$, also to be
specified later, we have
\begin{equation}\label{kappandef}
\big(x+y\big)^{\gamma} \leq \kappa_n^{\gamma} x^{\gamma(1-\theta_n)} y^{\gamma \theta_n} \qquad \mbox{ for } \frac{y}{x} \in [\zeta_{n+1},\zeta_n]\,.
\end{equation}
Equivalently we have
\begin{equation}\label{kappadef1}
\kappa_n = \max_{\zeta \in [\zeta_{n{+}1},\zeta_n]} \Big( \frac{1+\zeta}{\zeta^{\theta_n}}\Big)\,.
\end{equation}
With these definitions we have
\[
(*) \leq \frac{C}{\gamma} \sum_{n=0}^{n_0(\gamma)} \kappa_n^{\gamma} \zeta_{n{+}1}^{-\alpha} M(\gamma(1-\theta_n)) M(\gamma \theta_n)\,,
\]
where $n_0(\gamma)$ is such that $\zeta_{n_0(\gamma)} = \frac{1}{\gamma}$.
We choose $\theta_n$ such that for $\psi_{\theta_n}(\zeta):= \log(1+\zeta)-\theta_n \log \zeta$ we have
\[
\min_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta) = \log(1+\zeta_n) - \theta_n \log(\zeta_n)\,.
\]
This is equivalent to
\begin{equation}\label{thetandef}
\theta_n = \frac{\zeta_n}{1+\zeta_n}\,.
\end{equation}
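Indeed,
\[
\psi_{\theta_n}'(\zeta) = \frac{1}{1+\zeta} - \frac{\theta_n}{\zeta}\,,
\]
which vanishes at $\zeta = \zeta_n$ precisely for the choice \eqref{thetandef} and is negative for $\zeta < \zeta_n$, so that the minimum over $[\zeta_{n{+}1},\zeta_n]$ is attained at the right endpoint $\zeta_n$.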
We want to prove now by induction over $\gamma$ that $(*) \leq \frac{1}{2} \gamma^{\gamma} e^{A\gamma}$. Inserting the corresponding hypothesis, this reduces to showing that
\[
\frac{C}{\gamma} \sum_{n=0}^{n_0} \zeta_{n{+}1}^{-\alpha} \exp \Big( \gamma \big( \max_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta)
+ \theta_n \log \theta_n + (1-\theta_n) \log (1-\theta_n)\big) \Big)\leq \frac 1 2\,.
\]
By our definition \eqref{thetandef} we have
\begin{align*}
&\max_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta)
+ \theta_n \log \theta_n + (1-\theta_n) \log (1-\theta_n) \\
& = \min_{\zeta \in [\zeta_{n+1},\zeta_n]}\psi_{\theta_n}(\zeta) + (\max-\min)_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta)
+ \theta_n \log \theta_n + (1-\theta_n) \log (1-\theta_n) \\
& = (\max-\min)_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta)\,.
\end{align*}
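The last identity is the elementary cancellation
\[
\theta_n \log \theta_n + (1-\theta_n) \log (1-\theta_n) = \theta_n \log \zeta_n - \log (1+\zeta_n) = - \psi_{\theta_n}(\zeta_n) = - \min_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta)\,,
\]
which follows from $\theta_n = \frac{\zeta_n}{1+\zeta_n}$ and $1-\theta_n = \frac{1}{1+\zeta_n}$.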
Thus we need to investigate
\begin{align*}
(\max-\min)_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta) &= \psi_{\theta_n}(\zeta_{n{+}1}) - \psi_{\theta_n}(\zeta_n)\\
& = \log \Big( \frac{1+\zeta_{n{+}1}}{1+\zeta_n} \Big) - \theta_n \log \Big( \frac{\zeta_{n{+}1}}{\zeta_n} \Big)\\
& = \log \Big( 1 + \frac{\zeta_{n{+}1}-\zeta_n}{1+\zeta_n}\Big) - \frac{\zeta_n}{1+\zeta_n} \log \Big( 1 + \frac{\zeta_{n{+}1}-\zeta_n}{\zeta_n}\Big)\,.
\end{align*}
Expanding the nonlinear terms we find
\begin{equation*}
V:=(\max-\min)_{\zeta \in [\zeta_{n+1},\zeta_n]} \psi_{\theta_n}(\zeta) \leq C \Big( |\zeta_{n{+}1}-\zeta_n|^2 + \frac{\big(\zeta_{n{+}1}-\zeta_n\big)^2}{
\zeta_n}\Big)\,.
\end{equation*}
We now split $\{1,2,\cdots,n_0\}$ into a finite number of sets $\{1,2,\cdots,N_1\}$, $\{N_1+1, \cdots, N_2\}$, $\cdots$, $\{N_{k-1}+1, \cdots,N_k=n_0\}$ in the following way.
We first define
\[
\zeta_0 =1 \,, \quad \eta_0= 1+ \frac{1}{\sqrt{\gamma}}\,, \quad \zeta_n = \eta_0^{-n} \zeta_0\,, \quad \mbox{ for all } n \leq N_1
\]
where $N_1$ is such that $ \zeta_{n} \geq \frac{1}{\sqrt{\gamma}}$, that is we can choose $N_1 \sim \sqrt{\gamma} \log \gamma$.
With these definitions we find
\[
\Big| \frac{\zeta_{n{+}1} - \zeta_n}{\zeta_n}\Big| \leq \frac{C}{\sqrt{\gamma}} \qquad \mbox{ for all } 1\leq n\leq N_1
\]
and thus
\[
\frac{1}{\gamma} \sum_{n=0}^{N_1} \zeta_{n{+}1}^{-\alpha} \exp \big( \gamma V\big) \leq \frac{C N_1}{\gamma} \gamma^{\alpha/2}
\sim \gamma^{(\alpha-1)/2} \log \gamma \to 0 \quad \mbox{ as } \gamma \to \infty\,.
\]
Next, for $n \in (N_1,N_2]$ we define
\[
\zeta_n=\eta_1^{-(n-N_1)} \zeta_{N_1}\,, \qquad \eta_1=1+\frac{1}{\gamma^{1/4}}\,.
\]
Then $\zeta_n \leq \frac{1}{\sqrt{\gamma}}$, such that $|\zeta_{n{+}1}-\zeta_n|^2 \leq \frac{C}{\gamma}$ and
\[
\gamma V \leq C \gamma \Big( \zeta_n |\eta_1-1|^2 + \frac{1}{\gamma}\Big) \leq C\,.
\]
We need to determine $N_2$ such that
\[
(N_2-N_1) \frac{1}{\zeta_{N_2}^{\alpha} \gamma} \to 0 \qquad \mbox{ as } \gamma \to \infty\,.
\]
Making the ansatz $\zeta_{N_2} = \gamma^{-\sigma}$, that is $N_2 \sim \gamma^{1/4} \log \gamma$, this implies that we need
\[
\frac{\gamma^{1/4} \gamma^{\alpha \sigma} \log \gamma}{\gamma} \ll 1\,
\]
and this needs $\alpha \sigma < \frac 3 4$. Hence $\sigma:= \min (1,\frac{3}{4\alpha})$. If $\sigma=1$ we are done, otherwise we need to iterate.
Thus, let us assume that $\sigma_1=\frac{3}{4\alpha} <1$.
We define for $k \geq 1$ the sequence $\sigma_{k+1} = \frac{1}{2\alpha} (1+\sigma_k)$.
Then, we define $\eta_{k+1}= 1+\frac{1}{\gamma^{(1-\sigma_{k+1})/2}}$ and $\zeta_n = \eta_{k+1}^{-(n-N_{k+1})} \zeta_{N_{k+1}}$ for $n \in (N_{k+1},N_{k+2}]$ with
$\zeta_{N_{k+1}}=\gamma^{-\sigma_k}$. Then $N_{k+1}-N_k = \gamma^{(1-\sigma_k)/2} \log \gamma $ and we find that our sum is controlled by
$C \gamma^{(1-\sigma_k)/2 -1 + \alpha \sigma_{k+1}} \ll 1$ by our definition of $\sigma_{k+1}$. Since $\alpha<1$ we find after a finite number of steps that
$\sigma_k \geq 1$, and then we can stop the iteration.
It remains to show that the moment bound $M(\gamma) \leq \gamma^{\gamma} e^{A\gamma}$ implies the pointwise estimate for $f$. Indeed, it implies for $R>0$ that
\[
R^{\gamma} \int_{R}^{2R} f(x)\,dx \leq \int_{R}^{2R} x^{\gamma} f(x)\,dx \leq \gamma^{\gamma} e^{A\gamma}
\]
and thus
\[
\int_R^{2R} f(x)\,dx \leq \exp\Big( \gamma \big(\log(\gamma) - \log(R)\big) + A \gamma\Big)\,.
\]
The minimum of $\psi(\gamma):= \gamma \big(\log(\gamma) - \log(R)\big) + A \gamma$ is attained at $\gamma = e^{-(A+1)} R$, where $\psi(\gamma) = -\gamma = -e^{-(A+1)}R$,
and thus
\[
\int_R^{2R} f(x)\,dx \leq \exp\Big( - e^{-(A+1)} R\Big)
\]
and obviously it follows that there exists (another) $A>0$ such that
\begin{equation}\label{ublargex20}
\int_R^{2R} f(x)\,dx \leq \exp\Big( - A R\Big)\,.
\end{equation}
Equation \eqref{eq1c} implies
\[
x^2 f(x) = \int_0^x\,dy \int_{x{-}y}^x\,dz K(y,z) y f(y)f(z) + \int_0^x\,dy \int_x^{\infty}\,dz K(y,z) y f(y)f(z)\,.
\]
Now we use \eqref{negativemoment} and \eqref{ublargex20} to conclude
\begin{align*}
\int_0^x\,dy \int_x^{\infty}\,dz K(y,z) y f(y)f(z)& \leq C \int_0^x y^{1{-}\alpha} f(y)\,dy \int_x^{\infty} z^{\alpha} f(z)\,dz\\
& \leq C \sum_{n=0}^{\infty} \int_{2^nx}^{2^{n{+}1}x} z^{\alpha} f(z)\,dz\\
& \leq C \sum_{n=0}^{\infty} \big( 2^nx\big)^{\alpha} \exp \big(- A 2^n x\big)\\
& \leq C \exp \Big(-\frac{A}{2} x\Big)\,.
\end{align*}
Similarly, we can estimate by symmetry
\begin{align*}
\int_0^x\,dy \int_{x{-}y}^x\,dz K(y,z) y f(y)f(z) & \leq C \int_{x/2}^x \,dy \int_{x{-}y}^x \,dz K(y,z)y f(y) f(z)\\
& \leq C \int_{x/2}^x y^{1{+}\alpha} f(y)\,dy \int_0^x z^{-\alpha} f(z)\,dz\\
& \leq C\int_{x/2}^x y^{1{+}\alpha} f(y)\,dy \leq C \exp \Big( - \frac{A}{4} x\Big)\,,
\end{align*}
which implies the desired estimate.
\end{proof}
\section{Introduction}
Climate change and its global effects can no longer be ignored. The urgency to both understand and find ways to mitigate climate effects has become an increasing focus of research, driven by the increase in extreme events including wildfires, heat waves, and extreme flooding. As part of this conversation, climate tipping points are a topic of growing interest, as these tipping points represent states at which large, abrupt, irreversible changes occur in the environment that could result in devastating and accelerated global change. Worryingly, the mechanisms, likelihood, and potential impacts of tipping points are not fully understood. The Intergovernmental Panel on Climate Change summarized some of the major factors related to climate tipping points in a special report \cite{portner2019ocean}, which highlights the risks to lands, oceans, food sources, and human health. In a recently published report by \citet{lenton2019climate}, 15 different tipping points are described as being currently ``active.'' For example, the melting of the Greenland ice sheet is occurring at an unprecedented rate and could reach a tipping point at 1.5°C of warming \cite{portner2019ocean,lenton2019climate}.
Unfortunately, studying tipping points is challenged by the fact that their occurrence in climate models depends on numerous physical processes that are governed by poorly constrained parameters. Exploring the entire state space spanned by these parameters is computationally infeasible in the full general circulation models used for climate projection. Climate researchers need better ways to direct their attention to scenarios that simulate the present-day world with good fidelity, but are also closer to a tipping point than the current generation of models. We show how AI can be used to support tipping point discovery using the collapse of the Atlantic Meridional Overturning Circulation (AMOC) as a use case.
\section{Background--The AMOC}
The AMOC is an important element of the climate system, as it is central to how heat and freshwater are transported \cite{buckley2016observations}. Often called the conveyor belt of the ocean, its circulation pattern involves warm salty upper-ocean water flowing into the North Atlantic, cooling, and sinking into the deep. It has such a significant effect on the regulation of the Earth’s climate \cite{zhang2019review} that small changes in sea surface temperatures can have large global climate effects.
Some evidence suggests that the AMOC has slowed down, although the issue is intensely debated. Climate models project that the AMOC will weaken in the 21st century and some climate models with ultrahigh resolution in the ocean suggest the AMOC might collapse \citep{thornalley2018anomalously,jackson2018hysteresis}.
In recent articles and published papers, it has been speculated that a full collapse of the AMOC could have long term effects on food insecurity \cite{benton2020running}, sea level rise \cite{bakker2022ocean}, and Arctic related effects \cite{liu2022interaction}.
\section{Related Work}
There has been a long debate on whether deep learning could be used to replace numerical weather/climate models \cite{schultz2021can}, but many small successes in applying deep learning to focused climate and weather related problems have demonstrated promise \cite{rasp2018deep,reichstein2019deep,singh2021deep}. In this study, we focused on how deep learning could be used for the discovery of climate tipping points by recommending parameters for climate model runs that would induce tipping, a direction that is less explored due to the computational challenges of modeling climate tipping points using traditional methods. However, related work that explored using deep learning for early warning signal detection includes \citet{bury2021deep} and \citet{deb2022machine}, both of whom developed systems using Long Short Term Memory (LSTM) networks trained on the dynamics to predict tipping points, focusing on behavior near the tipping point by detecting critical slowing down. Though these methods are related to the bifurcation work included herein, we focus on a larger problem of building hybrid AI climate models that leverage these outputs.
On the specific topic of AMOC, a variety of simplified dynamical frameworks have been used for insight into the dynamics and sensitivity of the overturning \cite{johnson2019recent}. The development of those frameworks can be said to begin with \citet{stommel1961thermohaline}, demonstrating the bistability of the AMOC, followed more recently by \citet{gnanadesikan1999pycnocline}, who added Southern Ocean wind and eddy processes. This was expanded by \citet{johnson2007reconciling} to include prognostic equations for temperature and salinity, and by \citet{jones2016interbasin} to include the Pacific basin. Finally, \citet{gnanadesikan2018fourbox} expanded from the \citet{johnson2007reconciling} model to include lateral tracer mixing. Each of these models has different simplifying assumptions, but all have dynamics that are similar to observations in the AMOC-on state.
\section{The Hybrid AI Climate Modeling Methodology}
The Hybrid AI Climate modeling methodology includes an AI simulation based on a Generative Adversarial Network (GAN) \cite{goodfellow2014generative} that explores different climate models to learn how to invoke climate tipping point scenarios using a surrogate model and a bifurcation \cite{dijkstra2019numerical} method. The bifurcation method identifies areas in state space where abrupt changes in state occur, i.e. tipping points. Training the GAN involves an interaction with a neuro-symbolic method as shown in Figure \ref{fig:neuro_gan}. The neuro-symbolic method learns how to translate questions that a climate modeler would ask of the model into ``programs" that could then be run by the GAN and translates ``imagined" models that the GAN generates into natural language questions that could be understood by a climate modeler. This unique approach to learning provides two key advantages: 1.) it enables explainability that is human understood - an important requirement among scientific researchers, and 2.) it provides a way to direct climate researchers to areas in the search space that are roughly where the tipping points may live for in-depth climate modeling. Our method is built to be generalizable, as the questions are based on an ontological representation of the climate domain and the surrogate model is supplied by the climate modeler. The GAN and the bifurcation method are not specific to any domain and can be described as a general machinery for discovery.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{neuro_and_gan.png}
\caption{Interaction between TIP-GAN and the neuro-symbolic translation method during training.}
\label{fig:neuro_gan}
\end{figure}
\subsection{Multi-Generator Tipping Point Generative Adversarial Network}
Building on previous work that used multiple generators for stabilizing GAN training \cite{hoang2018mgan}, we explored using multiple generators to exploit the regions in state space where tipping points occur. The multi-generator tipping point GAN (TIP-GAN) is built as a novel adversarial game involving a set of generators and a discriminator. The generators learn to generate climate model configurations that will result in a climate tipping point. The discriminator is trained to learn which generator is generating the model configurations and which model configurations lead to a tipping point. A custom loss function is used for this setup which includes learning to predict a collapse or not and learning which generator generated the model configurations. In this setup we assume the discriminator is asking the surrogate climate model to provide the answer as to whether a tipping point occurred or not. For the AMOC the tipping point explored is the collapse of the AMOC.
\subsection{Knowledge-Guided Neuro-symbolic Learning}
To support hybrid AI climate modeling, we use a set of neuro-symbolic deep architectures to enable a translation between what is learned by TIP-GAN and climate modeler-generated natural language questions. The inclusion of a neuro-symbolic layer in this system enables us to take complicated questions that a climate modeler may ask during the scientific exploration process, and use the AI simulated environment to get an answer to those questions that will provide the climate modeler with an area in the search space that should be further explored using traditional climate modeling techniques. This provides the climate modeler with a way to tackle the discovery of climate tipping points that would otherwise be impossible to find without a brute force approach.
Building on the early effort in \cite{yi2019clevrer}, we have developed a translation methodology that converts natural language into program-style symbolic representations to structurally represent natural language questions. The programs developed are used to capture questions pertaining to parameter changes that could cause a tipping point to occur. The generators of TIP-GAN randomly generate perturbed model configurations to invoke climate tipping points. They generate these perturbations in the form of programs that are then run using the surrogate model. These programs using the trained neuro-symbolic translation architectures are translated into natural language questions with associated answers obtained by the generators through their interactions with the discriminator.
In Figure \ref{fig:neuro} we show that the proposed neuro-symbolic translation network is a triangular model that includes a question encoder, a question decoder, a program decoder, and a program encoder. It is bidirectional in that it translates from questions to programs and from programs to questions. A word embedding and word positional embedding are shared across networks and are used to support the translations. The text encoder network encodes text into this shared space. The decoder network decodes encodings into questions and into programs. Another encoder network encodes programs into text. TIP-GAN operates at the vector level processed by the surrogate climate model, and its perturbed model configurations are converted from vectors to programs and then from programs to questions in natural language form.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{neuro.png}
\caption{Learning to Translate Questions into Programs and Programs into Questions.}
\label{fig:neuro}
\end{figure}
\section{Experimental Setup}
In this section we describe the experimental setup for this work, which includes using a Four Box model \cite{gnanadesikan2018flux} as the surrogate model in the AI simulation. This Four Box model was created specifically to study the behavior of the AMOC overturning and potential collapse states. We set up the TIP-GAN to perturb parameters for this Four Box model. We use the neuro-symbolic method to learn to translate programs generated from the GAN's perturbations into natural language questions; this is performed while the GAN is training. After the GAN is trained we translate natural language questions to programs that the TIP-GAN can run on its latent space (trained model).
\subsection{Data and the Surrogate Model}
Climate models, such as those modeling the AMOC, can be approximated using simple box models \citep{levermann2010atlantic}. Box models reduce the number of system parameters but aim to retain the essential dynamics that characterize AMOC tipping points. We used the \citet{gnanadesikan2018flux} four-box model shown in Figure \ref{fig:fourbox} which includes boxes for the deep ocean, the surface Southern Ocean, the surface low-latitudes, and the surface North Atlantic/Arctic Oceans. The model is developed in Matlab. The AMOC strength is represented by the mass transport variable $M_n$, which depends on the time dependent density difference between the low and high northern-latitude boxes and the depth of the low latitude box $D_{low}$. The AMOC is ``on'' when mass is removed from the low-latitude box and ``off'' when mass is recycled to the low latitudes. $D_{low}$ is determined by a mass balance equation which is affected by the magnitude of the wind-driven upwelling in the Southern Ocean $M_{ek}$ which modulates the conversion of dense deep water to light surface water. Atmospheric freshwater fluxes $F_w^n$ and $F_w^s$ act to make the high latitude boxes lighter and the low-latitude box denser, while heat fluxes have the reverse effect. In the experiments reported here, $M_n$ is monitored while the other variables are manually perturbed to change within their given ranges. There are nine equations in this model: temperatures and salinities in all four boxes and $D_{low}$ are predicted as the model is run over time. The AMOC tipping point is plotted in terms of the overturning transport $M_n$ as a function of the freshwater flux $F_w^n$. As the climate warms $F_w^n$ is expected to increase and to reduce the density difference between low and high latitudes. The extent to which increasing $F_w^n$ can collapse the overturning (and to which reducing it can restart the overturning), will depend on the magnitude of $M_{ek}$ as well as the initial value of $D_{low}$, as illustrated in Fig. \ref{fig:fourbox_plot}.
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{4boxmodel.png}
\caption{The Four Box Model.}
\label{fig:fourbox}
\end{figure}
We developed Python code to recreate the Four Box model and to enable us to build a large dataset of model configurations with initial values for parameters over ranges of acceptable values, and labeled outcomes indicating AMOC on or off states for machine learning training and evaluation. We verified that we were able to recreate the same AMOC collapses as in the original model using Python tools shown in Figure \ref{fig:fourbox_plot}.
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{python_four.png}
\caption{Recreated Collapses Using Python Generated Tools for Machine Learning Dataset Creation from the Four Box Model. As the Southern Ocean upwelling flux $M_{ek}$ becomes larger, so does the magnitude of the overturning $M_n$. The value of $F_w^n$ required to collapse the model increases as $D_{low}$ or $M_{ek}$ increase.}
\label{fig:fourbox_plot}
\end{figure}
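To make the bistability underlying these collapse diagrams concrete, the following minimal sketch integrates a classical Stommel-type balance for the nondimensional low--high latitude salinity difference $s$, with overturning $\psi = 1-s$ playing the role of $M_n$ and forcing $f$ the role of $F_w^n$. This is deliberately far simpler than the Four Box model, and the nondimensionalization and forcing range are illustrative assumptions; it is not the code used in our experiments. Sweeping the forcing up and back down exposes the hysteresis that motivates the tipping-point search:
\begin{verbatim}
import numpy as np

def stommel_rhs(s, f):
    # ds/dt = freshwater forcing - advective exchange |psi| * s,
    # with overturning strength psi = 1 - s.
    return f - abs(1.0 - s) * s

def equilibrate(f, s0, dt=0.01, n_steps=20000):
    # Forward-Euler integration to (near) steady state.
    s = s0
    for _ in range(n_steps):
        s += dt * stommel_rhs(s, f)
    return s

forcings = np.linspace(0.0, 0.4, 41)
s, up, down = 0.0, [], []
for f in forcings:            # upward sweep: "on" branch
    s = equilibrate(f, s)
    up.append(1.0 - s)
for f in forcings[::-1]:      # downward sweep: collapsed branch
    s = equilibrate(f, s)
    down.append(1.0 - s)
# Near f = 0.25 the "on" equilibrium (psi > 0) disappears in a
# saddle-node bifurcation; the downward sweep stays on the
# collapsed branch, i.e. the transition is hysteretic.
\end{verbatim}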
\subsection{TIP-GAN}
We set up three experiments using the Four box model data for training the GAN. We focused on perturbation of three bounded parameters shown in Table \ref{tab:params}. Each experiment included generators perturbing one of the variables. All other variables were held constant. The full model configuration is shown in Figure \ref{fig:setup}.
\begin{table}
\resizebox{\columnwidth}{!}{
\begin{tabular}{||c c c||}
\hline
Parameter Name & Parameter Description & Bounds \\ [0.5ex]
\hline\hline
$D_{low0}$ & Initial low latitude pycnocline depth (m) & [100.0, 400.0] \\
\hline
$M_{ek}$ & Ekman flux from the southern ocean (Sv) & [15, 35] \\
\hline
$F_{w}^n$ & Fresh water flux in North (Sv) & [0.05, 1.55] \\
\hline
\end{tabular}
}
\caption{Parameters that were perturbed for the Uncertainty Experiment.}
\label{tab:params}
\end{table}
We trained the TIP-GAN using equally-weighted generators and a shutoff classification cross-entropy loss function. The TIP-GAN was run for approximately 250 epochs and we ran the experiments for each $n \in \{1,2,4\}$, where $n$ is the number of generators. Data was augmented for uniform sampling from a 3-D space. The distribution of collapse vs. non-collapse samples was 743/413. We then used the trained TIP-GAN to generate samples that resulted in either AMOC collapse or non-collapse.
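The discriminator objective just described can be sketched as follows; this is a minimal PyTorch illustration in which the hidden sizes, the equal weighting of the two loss terms, and the use of an extra ``real data'' source class are our assumptions rather than details of the trained system:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TipDiscriminator(nn.Module):
    # Two heads: (i) does this configuration collapse the AMOC,
    # (ii) which source (generator 1..n, or real data) produced it.
    def __init__(self, n_params=3, n_generators=4, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.collapse_head = nn.Linear(hidden, 1)
        self.source_head = nn.Linear(hidden, n_generators + 1)

    def forward(self, x):
        h = self.body(x)
        return self.collapse_head(h), self.source_head(h)

def discriminator_loss(model, configs, collapse_labels, source_labels):
    # Cross-entropy on both heads; equal weighting is an assumption.
    collapse_logit, source_logit = model(configs)
    loss_collapse = F.binary_cross_entropy_with_logits(
        collapse_logit.squeeze(-1), collapse_labels.float())
    loss_source = F.cross_entropy(source_logit, source_labels)
    return loss_collapse + loss_source
\end{verbatim}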
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{setup.png}
\caption{Four box model experimental configuration replicated in TIP-GAN.}
\label{fig:setup}
\end{figure}
\subsection{Neuro-symbolic Learning}
We set up two experiments, one using a small subset of CLEVR data consisting of questions that are 11 tokens or less when tokenized. This resulted in 59,307 samples for the training dataset and 12,698 samples for testing. Program sequences could be as long as 43 tokens. We trained our model using the Adam optimizer. We trained for two epochs with a batch size of 64 and a learning rate of 0.001. We evaluated the performance of our bi-directional method by evaluating text-to-text translations, text-to-program translations, and program-to-text translations. These three tasks can be further distinguished by the length of each question, which we measure in the number of tokens. Since we include \textbf{beginning-of-sentence} (\textbf{BOS}) and \textbf{end-of-sentence} (\textbf{EOS}) tokens with each question, the shortest sequences we analyze consist of seven tokens, and the longest consist of 13.
We also built a custom dataset based on a select set of questions and programs related to AMOC collapse from the Four box model, which includes a single question represented in natural language as ``If [parameter $x$] is set to value [$y$], does the AMOC collapse within [amount of time $t$]?'' There are more than 20 parameters that may be considered in the box model that have large possible ranges of values. Similarly, the value of $t$ could extend infinitely. The resulting dataset consisted of 1,066 question program samples. Using the bidirectional model trained on CLEVR data, we performed transfer learning using the training data generated from the four box model. The program for this model took the form of:
$$ \textit{ChangeSign}(\textit{box\_model}(\textit{SetTo}(...)), M_n)$$
where the ellipses denotes the various box model parameters and their desired values. Using this approach, we evaluated the performance of the translation architectures based on training the neuro-symbolic translation networks using transfer learning.
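For illustration, the mapping from a generator's parameter vector into this program form is mechanical. The following sketch is a hypothetical helper whose formatting conventions are inferred from the example programs listed in the Early Results section:
\begin{verbatim}
def vector_to_program(params):
    # params: list of (parameter_name, value) pairs chosen by a generator.
    setters = ", ".join("SetTo({},{})".format(name, value)
                        for name, value in params)
    return "ChangeSign(box_model({}),M_n)".format(setters)

# vector_to_program([("M_ek", 28496768)]) returns
# 'ChangeSign(box_model(SetTo(M_ek,28496768)),M_n)'
\end{verbatim}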
\section{Early Results}
We share early experimental results for TIP-GAN and the neuro-symbolic learners.
\subsection{TIP-GAN Results}
Early discriminator performance in classifying configurations as collapse or no collapse is shown in Table \ref{tab:test_classifier}. The high F-measure scores indicate that the discriminator was able to accurately classify AMOC collapse versus non-collapse runs for a held-out test set. Increasing the number of generators decreased the performance slightly. We observed that this is because the discriminator tends to incorrectly classify a larger fraction of real samples as synthetic as the number of generators increases.
\begin{table}
\resizebox{\columnwidth}{!}{
\begin{tabular}{||c c c c||}
\hline
& Precision & Recall & F-measure \\
\hline\hline
1 Generator & 1.0 & 1.0 & 1.0 \\
\hline
2 Generators & 0.993 & 1.0 & 0.997 \\
\hline
4 Generators & 0.929 & 1.0 & 0.963 \\
\hline
\end{tabular}
}
\caption{Test Classification Results.}
\label{tab:test_classifier}
\end{table}
After training the GAN, we generated 500 samples. From these samples we observed that the generators tend to favor exploring areas of shut-offs as shown in Table \ref{tab:test_collapse}. Though the training data had some minor imbalance, these results are compelling.
\begin{table}
\resizebox{\columnwidth}{!}{
\begin{tabular}{||c c c c c||}
\hline
& Generator 1 & Generator 2 & Generator 3 & Generator 4 \\
\hline\hline
1 Generator & 0.854 & & & \\
\hline
2 Generators & 0.992 & 0.998 & & \\
\hline
4 Generators & 0.982 & 0.986 & 0.972 & 1 \\
\hline
\end{tabular}
}
\caption{Fraction of samples that resulted in collapse.}
\label{tab:test_collapse}
\end{table}
\subsection{Neuro-symbolic Results}
The overall accuracy (token for token) across all tasks (text-to-text translations, text-to-program translations, and program-to-text translations) was approximately $70\%$. Of the three tasks, the highest accuracy was achieved performing the text-to-text translation, while program-to-text translation achieved the lowest.
In addition to measuring accuracy, we also used a normalized Levenshtein distance \cite{yujian2007normalized} to measure performance. With accuracy, a translation prediction would be considered incorrect if it was not an exact match to the ground truth. In some cases, the prediction is off by a space, or by a repeated word. In other cases the prediction is wrong because it chose a word that was synonymous with what was expected. Levenshtein distance measures the minimum number of insertions, deletions, and substitutions needed to transform one string into another, and although it cannot be used to account for synonyms, it can be a more accurate measure for understanding how close the prediction is to the ground truth. Future measurements will include semantic similarity-based measurements.
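For reference, the edit-distance computation is a standard dynamic program over token sequences. The sketch below normalizes by the longer sequence length so that $0$ means an exact match and $1$ means no overlap; note that \citet{yujian2007normalized} define their normalization slightly differently:
\begin{verbatim}
def normalized_levenshtein(a, b):
    # a, b: token sequences (e.g. lists of strings).
    m, n = len(a), len(b)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, n)
\end{verbatim}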
As shown in Figure \ref{fig:lev}, the Levenshtein distance performance was consistent with the accuracy of each task. Text-to-text had the best performance for the 11 token model and Program-to-Text had the worst performance. The Cumulative Distribution Function (CDF) of the normalized Levenshtein distance for the 11 Token Model is shown in Figure \ref{fig:cdf_lev}.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{levenstein1.png}
\caption{Measuring Performance of Neuro-symbolic Translations using Levenshtein Distance for the 11 Token Model.}
\label{fig:lev}
\end{figure}
For the Four box model question program dataset using transfer learning, we performed an overfit evaluation where the train and test set were equal. The model achieved a text-to-text accuracy of 99.9\%, a text-to-program accuracy of 99.8\%, and a program-to-text accuracy of 100\%. The scores were similar for the Levenshtein distance. The results by sequence length are shown in Figure~\ref{fig:amoc_all}. Due to the small size of this dataset, train/test splits did not leave a sufficient number of samples to enable generalization.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{levenstein2.png}
\caption{Cumulative Distribution Function (CDF) of the Normalized Levenshtein Distance for the 11 Token Model.}
\label{fig:cdf_lev}
\end{figure}
In addition to questions generated from natural language, we also tried translating GAN-based output. We constructed an appropriate program for the parameters varied by the GAN during its exploration. We tested each combination of the three parameters and the two questions, and the model translated them with 100\% accuracy. Some example programs are as follows:
\begin{itemize}
\small
\item \textit{ChangeSign(box\_model(SetTo(M\_ek,28496768)),M\_n)}
\item \textit{ChangeSign(box\_model(SetTo(Fwn,638758),\\ SetTo(D\_low0,288)),M\_n)}
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{amoc_all.png}
\caption{AMOC Tipping Point Problem Translations.}
\label{fig:amoc_all}
\end{figure}
Though the question and program structure for the AMOC-specific neuro-symbolic translations is simplistic, it is a first attempt to learn the domain-specific question program translation. These results, in addition to the more extensive CLEVR results, are encouraging. We are currently expanding the questions and programs to be more realistic. For example, one of the questions currently being learned is \emph{If I increase the Ekman flux by some value, will overturning increase?}. We are also building an ontology to support the neuro-symbolic language.
\section{Future Work and Conclusions}
We show the early results of a hybrid AI climate modeling methodology. The novel GAN architecture, which includes multiple generators, is able to accurately predict AMOC collapse and non-collapse for a dataset generated from a Four box model using three parameter perturbations. Increasing the number of generators showed that the generators had a tendency to focus on the areas where collapse is likely to occur. Our current efforts are to advance the underlying bifurcation methods and to use large global models that are calibrated to the Four box model so that we can continue to build datasets for training the GAN.
In addition, early results showed our neuro-symbolic translation architectures can accurately translate between natural language questions and programs using the CLEVR dataset. When we applied this to a small set of tightly coupled AMOC questions, we showed transfer learning was a viable option for training our architectures on AMOC-specific questions. These results were preliminary, however, and there was simply not enough question and program variety to achieve good generalization. Our second generation dataset includes a much larger set of questions and programs. The goal of having the neuro-symbolic representations is both to provide a way for climate researchers to ask questions of what is learned by the GAN and to support explainability. Future work will include more specific questions pertaining to the AMOC and a more advanced grammar for the neuro-symbolic language. We have also begun developing an underlying ontology to support this language.
\section{INTRODUCTION}
Collisionless shocks are the acceleration sites of energetic particles
responsible for high-energy emission of astrophysical objects and contributing
to the flux of cosmic rays detected at Earth.
Nonthermal particle populations at nonrelativistic shocks of supernova remnants
(SNRs) are believed to
be generated by diffusive shock acceleration (DSA). The
particle spectra thus produced indeed agree well with those deduced
from radio--to--X-ray electron synchrotron emission of SNRs
\citep{reynolds08}. The
high efficiency of the DSA mechanism together with considerations of the global
energetics of a supernova explosion make the forward shocks of shell-type SNRs
the prime candidates for the sources of Galactic cosmic rays (CRs). Nonthermal
power-law particle spectra attributed to relativistic electrons are also
inferred from modeling the electromagnetic emission from astrophysical sources
harboring relativistic shock waves, such as jets of active galactic nuclei
(AGN) and gamma-ray bursts (GRBs) \citep{meszaros02}.
Efficient particle acceleration at shocks invariably requires a continuous
excitation of magnetic
turbulence in the upstream region, which serves as a scattering medium to
confine the energetic
particles to the shock region for further acceleration \citep{md01}. Turbulent magnetic
fields of amplitude much larger than the homogeneous interstellar field are
needed upstream of SNR shocks to account for protons of energies up to
and beyond the
"knee" at 10$^{15}$ eV in the cosmic-ray spectrum. Recent X-ray observations of
several young SNRs give evidence that indeed highly amplified fields exist
downstream of SNR forward shocks (see \citealt{reynolds08}, and references therein).
Downstream of ultrarelativistic GRB external shocks the magnetic fields must
also be amplified orders of magnitude beyond shock-compression levels to explain
GRB afterglow spectra and light curves. Even in the preshock medium magnetic
fields of milligauss strengths are required to account for the observed X-ray
afterglows \citep{li06}.
A plausible scenario for magnetic-field generation assumes that cosmic ray
particles accelerated at the shock drift as an ensemble relative to the upstream
plasma and trigger a variety of instabilities that may lead to the growth of a
turbulent field component \citep{bl01,bell04}. The distribution function of the
cosmic rays is shaped by the scattering rate in the self-excited field upstream,
thus forcing a nonlinear
relationship between the upstream plasma, the energetic particles, and
small-scale
electromagnetic fields. The upstream field would subsequently be advected and
compressed downstream of the shock and possibly further amplified by fluid
instabilities in the downstream plasma
\citep{gia07,zir08,couch08}. While a full modeling of the upstream region is
elusive to date,
simulations of turbulence build-up using prescribed distribution functions for
the
upstream plasma and the cosmic rays can be invaluable tools for the study of the
saturation processes and levels, as well as the backreaction of the evolved
turbulence on the
particles. The system in which a population of cosmic rays slowly drifts
relative to the
upstream plasma has been studied with MHD simulations, which represented cosmic rays
with a constant external current
\citep{bell04,bell05,zira08,rev08}
and with first-principles particle-in-cell (PIC) simulations assuming a constant cosmic-ray current \citep{ohira09} or including the full dynamics of the energetic particles
\citep{niem08,stroman,riqu09}. They confirmed the quasi-linear predictions by
\citet{bell04}, who showed that for the parameters of young SNRs, magnetic
turbulence would appear in a form of nonresonant, circularly polarized, and
aperiodic transverse waves. Numerical simulations analyzing the nonlinear
evolution of the system found that the turbulence growth eventually saturates, but
the exact saturation levels differ between the approaches, the full PIC
simulations typically yielding considerably lower field amplitudes than MHD studies with
constant CR currents.
Here we report kinetic (PIC) simulations of the interaction
between
the far-upstream plasma and a cold dilute relativistic beam of particles
streaming along a homogeneous background magnetic field. We assume that the beam
is composed of cosmic-ray ions, and the current and charge carried by the beam are balanced
by electrons of the background medium. The situation is
relevant to the upstream region of both relativistic and nonrelativistic shocks
undergoing efficient particle acceleration. In a SNR shock environment, it
applies to the most energetic cosmic rays accelerated at the shock which stream
far upstream of the free-escape boundary. In this case, a predominantly ionic
cosmic-ray component results from the character of injection processes at
nonrelativistic shocks. Upstream of relativistic shocks of GRBs and AGN,
distributions of particles accelerated in a wide energy range are highly
anisotropic. This is because particles and the shock move close to the speed of
light, and a deflection of a particle trajectory by an angle greater than
$\Gamma_{sh}^{-1}$ allows the shock to overtake the particle.
Therefore, in the upstream rest frame the nonthermal particles are highly beamed and their transverse momenta are a factor of $\Gamma_{sh}^{-1}$ smaller than the momenta along the shock direction. We approximate this situation by assuming that the CR beam is cold. This assumption holds better for the freshly accelerated particles in the far-upstream region, whose transverse momenta are much smaller than $\Gamma_{sh}^{-1}$ times the parallel momentum; the highest-energy particles escape from the precursor and therefore have an anisotropic distribution in the shock rest frame, which is further enhanced by the shock curvature. Some instabilities, e.g., filamentation, depend sensitively on the transverse temperature of the CR beam, and care must be exercised in extrapolating our simulation results to situations in which the CR-beam properties are somewhere between those of the cold beam studied here and the very hot, but slow, beam investigated earlier \citep{niem08,stroman,riqu09}. However, PIC simulations of relativistic shocks in electron-ion plasmas suggest that filamentation indeed occurs only far upstream of the shock (Spitkovsky 2008a; 2008b; see also Medvedev \& Zakutnyaya 2009), and that it is generated by a warm ($p_\perp \lesssim p_\parallel/\Gamma_{sh}$) ion beam. We may therefore expect that our assumption of a cold CR beam remains a valid approximation for systems with warm CR beams.
Furthermore, our setup applies to the cosmic-ray ions whose energies are larger
than the upper limit on the energy of electrons accelerated by the shock, which
is imposed by radiative energy losses \citep[e.g.][]{li06}.
The highly energetic ions will thus reach
farther upstream than the CR electrons and the return current will be provided
by the ambient electrons.
Note that the applicability of the system under study to relativistic
astrophysical sources relies on the ability of relativistic shocks to accelerate
particles to very high energies. Although the first-order Fermi process at such
shocks is widely considered to be the source of cosmic rays, recent studies in
the test particle approximation \citep{nie06a,nie06b,lem06} and using
PIC simulations \citep{sir09} show that this
mechanism can operate only in quasi-parallel or weakly magnetized shocks. If the
GRB or AGN outflows are strongly magnetized/quasi-perpendicular, some other processes must be
responsible for particle acceleration (e.g., magnetic reconnection), and our
results do not apply.
It is known from studying non-relativistic beams in interplanetary
space that a competition arises between resonant and nonresonant modes, which exert different backreactions on the beam
\citep{winske}. For the
case of a monoenergetic, unidirectional distribution of streaming cosmic rays,
the rates for the resonant growth of Alfv\'{e}nic \citep{ps2000} and
electrostatic \citep{pls02} turbulence
have been derived using quasilinear theory. Based on an analytical treatment,
\citet{rev06}
found that also in this case nonresonant, purely growing modes may be expected
to be significantly
faster, although the growth rate falls off with the temperature of the
background medium.
Application of this mechanism to the external GRB shocks was phenomenologically
studied by \citet{mil06}, who concluded that CR-driven turbulence may account
for the levels of amplified magnetic fields inferred from these sources.
We have performed a series of two-dimensional simulations for this setup to
explore the relationship between this instability and that found for drifting cosmic rays,
and to determine the mutual backreaction between the magnetic turbulence and the CR beam.
The interaction of a cold relativistic ion beam is studied in the limit
of a magnetized background plasma, for which the results of the analytical calculations of
\citet{rev06} apply.
The simulation setup is described in \S 2, and the results of the linear kinetic analysis of the system are presented in \S 3. In \S 4 the simulation results are presented: the differences between the runs representing the CR beam with a constant external current and the fully kinetic simulations are discussed in \S 4.1 and \S 4.2, and the detailed properties of the magnetic turbulence and the evolution of the particle phase-space distributions are then presented in \S 4.3, based on the results for mildly relativistic ion beams. We conclude with a summary and
discussion in \S 5.
\section{SIMULATION SETUP}
The code used in this study is a 2.5D (2D3V) version of the relativistic
electromagnetic particle code TRISTAN with MPI-based parallelization
\citep{buneman93,niem08}.
In the simulations a cold, relativistic, and monoenergetic cosmic-ray ion beam
with Lorentz
factor $\gamma_{CR}$ (velocity $v_{CR}$)
and number density $N_{CR}$ streams along a homogeneous magnetic field
$B_{\parallel 0}$
relative to the ambient electron-ion plasma. The ions of the ambient medium have
a thermal distribution with number density $N_i$, in thermal equilibrium with
the electrons. The electron
population with density $N_e=N_i+N_{CR}$ contains the excess electrons required
to provide charge-neutrality and drifts with $v_d=v_{CR}N_{CR}/N_e$ with
respect to the background ions, so it provides a return current balancing the
current carried by
the ion beam.
We have explored the system in the limit of a magnetized
background plasma, $\omega\ll \Omega_i$ (see \citealt{rev06}). Specifically, we assumed
$\gamma_{max}/\Omega_i = 0.2$, where
\begin{equation}
\label{e1}
\gamma_{max}=\Im\omega\approx {1\over 2}\,{{v_{CR}\,N_{CR}}\over {v_A\, N_i}}\,\Omega_i
\end{equation}
is the growth rate of the most unstable nonresonant mode, $\Omega_i$ is the ion
gyrofrequency, and
$v_{A}=[B_{\parallel 0}^2/\mu_0 (N_em_e+N_im_i)]^{1/2}$ is the plasma Alfv\'{e}n
velocity. The relativistic cosmic-ray populations represent
very dilute ion beams which we study with density ratios $N_i/N_{CR}=50$ and $125$,
and Alfv\'{e}n velocities
$v_A = c/20$ and $v_A = c/50$, respectively (for which the ratio
$\omega_{pe}/\Omega_e=4.4$ and $11.0$, respectively).
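Both parameter sets are chosen to give the same normalized growth rate: inserting $v_{CR}\simeq c$ into Equation \ref{e1},
\begin{displaymath}
\frac{\gamma_{max}}{\Omega_i}\approx\frac{1}{2}\,\frac{v_{CR}}{v_A}\,\frac{N_{CR}}{N_i}
=\frac{1}{2}\cdot 20\cdot\frac{1}{50}
=\frac{1}{2}\cdot 50\cdot\frac{1}{125}=0.2 .
\end{displaymath}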
The simulations have been performed for the case of an ultrarelativistic beam
with $\gamma_{CR}=300$ and a slower beam with $\gamma_{CR}=20$. We have also
studied the case in which the beam is represented by
a constant uniform external current, so that the backreaction of the magnetic
turbulence on the cosmic-ray beam is suppressed. The parameters of all simulation
runs described here are summarized in Table 1.
To ensure the numerical accuracy of our simulations, we use a total of 16
particles per cell, and we apply the splitting method for the beam particles.
The density ratio of simulated cosmic-ray to ambient particles
on the grid is $1/3$, and
weights are applied to each beam particle to match the desired $N_{CR}/N_{i}$.
The same weights are
used for the excess electrons $\delta N_e=N_e-N_i=N_{CR}$, and thus
each ion particle can be initialized at the same location as the
corresponding electron for the identically zero initial charge
density.
The electron skindepth $\lambda_{se}=c/\omega_{pe}=4\Delta$, where
$\omega_{pe}=(N_e e^2/m_e\epsilon_0)^{1/2}$ is the electron plasma frequency
and $\Delta$ is the grid cell size. We further assume a reduced ion-electron
mass ratio
$m_i/m_e=20$. This choice allows us to clearly separate the plasma and
turbulence scales and yet use a computational box that can contain several
wavelengths of the most unstable mode
\begin{equation}
\label{e2}
\lambda_{max}\approx 2\pi(\gamma_{max}/\Omega_i)^{-1}\lambda_{si},
\end{equation}
where $\lambda_{si}$ is the ion skindepth.
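For the parameters adopted here these scales are, as an illustrative estimate, easily made explicit: with $m_i/m_e=20$ and $\lambda_{se}=4\Delta$ the ion skindepth is $\lambda_{si}\approx(m_i/m_e)^{1/2}\lambda_{se}\approx 17.9\Delta$, so that Equation \ref{e2} gives
\begin{displaymath}
\lambda_{max}\approx 2\pi\,(0.2)^{-1}\,\lambda_{si}\approx 560\Delta ,
\end{displaymath}
well separated from the plasma skindepth scales, yet small enough that several wavelengths fit into the computational boxes described below.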
As the results of our linear analysis of \S 3 show, the one-dimensional dispersion
relations obtained
by \citet{rev06} capture the physical properties of the system,
provided the Lorentz factor of the ion beam is very large or
a constant external current is applied. In this study we perform
simulations for an ultrarelativistic beam and for the case of a constant
external cosmic-ray current to cross-check those results as well as the validity
of our approach. For these runs we use smaller computational grids: ($L_{\rm
x}, L_{\rm y}) = (7.4 \lambda_{max}, 5.7\lambda_{max})$ for runs A and D, and
($L_{\rm x}, L_{\rm y}) = (4.6 \lambda_{max}, 3.3\lambda_{max})$ for run B.
Larger grids with ($L_{\rm x}, L_{\rm y}) = (10.2 \lambda_{max},
7.4\lambda_{max})$ are used to investigate the systems with mildly relativistic
cosmic-ray beams, which have not been studied with PIC simulations before.
Cosmic-ray beam ions move in the $-x$-direction, antiparallel to the
homogeneous
magnetic field $B_{\parallel 0}$.
Periodic boundary conditions are assumed for all boundaries. In all simulations
the time step is $\delta t=0.1125/\omega_{pe}$, and the inverse maximum growth rate
of the nonresonant modes $\gamma_{max}^{-1}=3975\delta t$ for runs A--C and
$9938\delta t$ for runs D and E.
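These values are mutually consistent: neglecting the small electron contribution to the mass density, $\Omega_i\approx(v_A/c)\,\omega_{pi}$, so that for runs A--C ($v_A=c/20$, $m_i/m_e=20$)
\begin{displaymath}
\gamma_{max}^{-1}=(0.2\,\Omega_i)^{-1}\approx\frac{20\sqrt{20}}{0.2\,\omega_{pe}}\approx\frac{447}{\omega_{pe}}\approx 3975\,\delta t .
\end{displaymath}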
Our previous work on similar systems as well as additional test runs performed
for the current study ensures that our results are not affected by a particular
choice of simulation parameters, e.g., electron-ion mass ratio, number of
particles per cell, or the electron skindepth. The validity of two-dimensional
simulations in capturing the essential physics has been demonstrated in
\citet{niem08}.
\section{LINEAR ANALYSIS \label{lin}}
The growth rate and wavelength of the most unstable purely-growing
nonresonant mode given by
equations \ref{e1} and \ref{e2} were obtained by \citet{bell04} and \citet{rev06}
using linear kinetic analysis in
the limit of a cold ambient plasma and only for wavevectors $k_\parallel$ parallel to
$B_{\parallel 0}$. We have numerically calculated the growth rates for
arbitrary
orientation of the wavevector, $\vec{k}$, in the zero-temperature limit (for details of the calculations see,
e.g., \citealt{bret09}).
In Figure 1 we show the growth rates in the reduced wavevector space
$(Z_\parallel, Z_\perp)$,
$Z_i=k_iv_{CR}/\omega_{pe}$, that is contained in our simulation box, for
beams moving along a homogeneous magnetic field of strength given by the
Alfv\'{e}n velocity of $v_A = c/20$.
In each case
the dominant unstable mode
is the electrostatic Buneman mode between background ions and drifting
electrons
($Z_\parallel\gtrsim 2$). The growth rate
of this mode is about $10^2$ times larger, and its wavelength about $1.25\times
10^3$ times shorter than that of the nonresonant mode (see Eqs. \ref{e1}
and \ref{e2}). Our simulations do not fully resolve this
mode, because the wavelength of maximum growth corresponds to half a cell
($Z_\parallel\approx 55$) on our computational grid,
and we are able to see longer wavelength modes only with somewhat smaller growth
rates. However, the Buneman instability is very sensitive to thermal effects and
should
saturate if the thermal velocity of ambient particles becomes comparable to
their relative
drift velocity. The initial electron thermal velocity in the simulations is thus
set
to values $v_{e,th}\lesssim v_d$ to ensure the quick saturation and dissipation
of this unstable
mode. Note that such plasma parameters well reproduce the real conditions in
astrophysical
objects, and the Buneman mode will be relevant only if the beam density is
high, because then $v_d$ will be high as well.
The nonresonant mode, in which we are chiefly interested, is visible at $Z_\perp <
0.1$ and shows
a broad peak centered at $Z_\parallel\approx 0.05$, corresponding to the
estimate
given by Eq.~\ref{e2}. However, if the CR beams are treated fully
kinetically ($\gamma_{CR}=300$ and $20$ -- Fig. \ref{fig1}b and
Fig. \ref{fig1}c, respectively), the nonresonant mode is not dominant even in the
limited wavevector space covered in our simulations.
In fact, the strongest growth occurs for $0.3 < Z_\perp < 1$, almost
independent of $Z_\parallel$.
The very peaked growth at $Z_\parallel\approx 1$ pertains to the Buneman
instability between relativistic ion beam and ambient electrons. The growth at
smaller
$Z_\parallel$ represents the filamentation of the ambient plasma and the ion
beam.
The appearance of these fast-growing modes modifies the system, and one
should expect that the properties of the nonresonant mode emerging in the
nonlinear stage
differ from those predicted in the analytical calculations by \citet{rev06}. This is in fact what
is observed in our simulations. It should be noted, though, that for warm CR beams
filamentation might be suppressed, and therefore may not play a role in the precursor region
close to the shock.
Note that the growth rates as shown in Figure 1 depend on the parameters of the
system
under study. In particular, for ultrarelativistic beams (Fig. \ref{fig1}b) the
growth rate for the filamentation
modes is much smaller than for $\gamma_{CR}=20$.
Hence, for $\gamma_{CR}\gtrsim 100$ we primarily have a competition between the
nonresonant mode
and the Buneman modes. If we replace
the ion beam by a constant external current (which roughly corresponds
to $\gamma \gg 1$; Fig. \ref{fig1}a), then the ambient electrons do not interact
with the ion beam, and a
Buneman instability is not excited. The evolution of the system is then
artificially dominated by the nonresonant instability \citep{ohira09}.
\section{SIMULATION RESULTS}
\subsection{Simulations with Constant Cosmic-Ray Current}
The temporal evolution of the energy density in the transverse magnetic-field
component is shown in Figure~\ref{double_fig}a.
If the backreaction on the cosmic rays is suppressed, i.e., a constant uniform
external
current is applied, then a purely-growing parallel mode of magnetic turbulence
appears
in the plasma. Its growth rate ($\gamma\approx 0.8 \gamma_{max}$ for the two
cases with $v_A = c/20$
and $c/50$, runs A and D, respectively) and wavelength (dashed line in Fig.~\ref{double_fig}b)
agree well with those predicted by quasi-linear analytical calculations.
The mode represents a purely magnetic, circularly polarized, and aperiodic
transverse wave. The interactions of the magnetic turbulence with the plasma are
predominantly related to the
return current carried by the ambient electrons, $\vec{j}_{ret}$.
The $\vec{j}_{ret}\times\delta B_\perp$ force induces
motions and turbulence in the background plasma,
which in the later stages cause the turbulence
to turn nearly isotropic and highly nonlinear.
As in the case of drifting cosmic rays \citep{niem08} and nonrelativistic beams
\citep{winske},
the saturation of the magnetic-field growth proceeds via bulk acceleration and
occurs when
the bulk velocity of the background plasma approaches the cosmic-ray ion beam
speed (see \S 4.3.2). Note that current and charge balance is still observed
if the cosmic-ray current is chosen constant, and the background plasma is charged by
adding extra electrons to compensate for the charge of the cosmic rays. The plasma thus
``knows'' the cosmic-ray drift speed as that at which the plasma electrons no longer
stream relative to the plasma ions to carry the return current. The nonlinear
amplitude of the field perturbations is slightly larger for smaller Alfv\'{e}n velocity,
in agreement with \cite{riqu09}. However, the magnetic-field amplitudes become comparable at the end of both runs, and reach
$\delta B_\perp/B_{\parallel 0}\simeq 25$, which is close to the maximum
obtained with MHD
simulations \citep{bell04,bell05,zira08} and other PIC simulations \citep{ohira09},
in which the cosmic rays were
also represented by a constant current.
We will now describe the behavior of the system including the
response of the relativistic cosmic-ray ion beam.
\subsection{Fully Kinetic Simulations}
If the cosmic rays are treated fully kinetically, the dynamics of the system
changes. The
interaction of the ion beam with the plasma quickly leads to plasma and beam
filamentation
which is modified by a Buneman instability between the ion beam and plasma
electrons.
The Buneman beam-electron interactions produce mainly electrostatic, slightly
oblique turbulence whose wavelength parallel to the direction of the
beam is in very good agreement with the predictions of our linear analysis, which gives
$\lambda=2\pi(v_{CR}/c)\lambda_{se}\sim 25\Delta$ (\S 3).
The mode grows very fast, causing density fluctuations in the beam and electron
plasma. However, in the simulations its amplitude quickly saturates and is subsequently
kept at a moderate level. These features are in agreement with the known properties of the Buneman modes (see, e.g., \citealt{dieck07} for a detailed discussion of the nonlinear evolution and saturation mechanism of the Buneman instability).
The Buneman mode dissipates only after filamentation and nonresonant
modes have strongly
backreacted on the ion beam in the nonlinear stage.
\subsubsection{Ultrarelativistic Beams}
The properties of the magnetic turbulence depend on the Lorentz factor of the
beam.
For an ultrarelativistic beam with $\gamma_{CR} = 300$ (run B; dotted lines in Fig. 2),
the
filamentation is weak
and the parallel nonresonant mode appears with the theoretically predicted
wavelength.
Its growth rate is initially $\gamma\approx 0.94 \gamma_{max}$ and decreases
during the
nonlinear evolution. As one can see in Figure 2a, the peak amplitude
of the magnetic-field
perturbations, $\delta B_\perp/B_{\parallel 0}\simeq 9$, is close to that
obtained with
constant external current (run A; solid line) at the onset of the saturation of the
turbulence growth
($t\sim 15\gamma_{max}^{-1}$). It appears that in this phase the high beam
Lorentz factor
provides sufficient stiffness to the ion beam that its backreaction is
suppressed, rendering the
system response similar to that for a constant external current. The similarity
ends, however, when saturation sets in.
The subsequent dissipation of the turbulence in the run with
$\gamma_{CR} = 300$ is much stronger than in the case of a constant external
current,
which places in doubt the accuracy of simulations that use a constant external
current
to describe the highly nonlinear phases in the evolution of the system.
\subsubsection{Mildly Relativistic Beams}
Results for a system with a mildly relativistic beam with
$\gamma_{CR} = 20$, and for $v_A = c/20$ (run C) and $c/50$ (run E), are presented in Figure 2a
with dash-dotted and long-dashed lines, respectively. As our linear analysis of \S 3
shows, the filamentation modes at perpendicular wavevectors $k_\perp \approx 1/\lambda_{se}$
are strong in this case. They cause
filamentation in the ambient plasma and the ion beam, before the nonresonant
parallel modes have
emerged. As one can see in Figure 2, these modes do not lead to magnetic-field
perturbations of significant amplitude. Nevertheless, their action on the
ambient plasma changes its properties, which considerably influences the
characteristics of
the purely-growing parallel modes. The nonresonant modes appear in a broad range
of wavelengths around $\lambda_{max}$ (Fig. 2b),
and the growth rate of the magnetic-field perturbations is only $\sim
0.4\gamma_{max}$.
The backreaction of the turbulence on the system further enhances the
filamentation in the beam
and the plasma, and leads to the saturation
and dissipation of the magnetic turbulence at a level of a few times the
homogeneous magnetic field strength.
The peak amplitudes for the two cases with $v_A = c/20$ and $c/50$ are
$\delta B_\perp/B_{\parallel 0}\simeq 4.7$ and
$\delta B_\perp/B_{\parallel 0}\simeq 7.5$, respectively, showing
that instabilities
operating in a less-magnetized medium provide a stronger field amplification.
It is unclear whether the modification of the parallel mode arises specifically from
filamentation or from any type of perpendicular, small-scale density fluctuations,
including preexisting turbulence. We therefore cannot reliably predict the behavior of
a system containing a warm cosmic-ray beam, for example the denser parts of a
cosmic-ray precursor to an astrophysical shock.
\subsection{Aperiodic Magnetic Turbulence Produced by Mildly Relativistic
Beams}
\subsubsection{Spectral Properties of the Turbulence}
The characteristic features of magnetic turbulence in a system containing a mildly
relativistic cosmic-ray beam are detailed in Figures \ref{en16}, \ref{four}, and
\ref{multi} for the run with $\gamma_{CR} = 20$ and $v_A = c/20$ (run C). The temporal
evolution of the average magnetic and electric field energy densities is shown in Figure
\ref{en16}. Figure \ref{four} presents Fourier power spectra of the perpendicular
magnetic-field component $B_z$ for $t\gamma_{max}=2,5,$ and $8$ in
two-dimensional reduced wavevector space $(Z_\parallel, Z_\perp)$. Figure
\ref{multi} shows snapshots of the time evolution of the electron and
cosmic-ray ion density, and the structure in the $B_z$ magnetic-field component.
The initial filamentation in the ambient plasma grows quickly in
spatial scale by the merging of
adjacent filaments, which can be clearly seen in the $E_x$ and
$E_y$, and also in the $B_z$ field components shown in Figure \ref{four}.
Because the Buneman instability between the ion
beam and the plasma electrons is slightly oblique (see Fig. \ref{fig1}c),
and hence not purely
electrostatic, it is visible in magnetic-field Fourier spectra as a
feature at $Z_{\parallel}\approx 1$ (Fig. \ref{four}). The corresponding strong
short-scale modulations in the densities of ambient electrons and the ion beam can be
seen in Figure \ref{multi}a-b and Figure \ref{multi}d-e. The nonresonant
parallel modes
of magnetic turbulence emerge in a medium already strongly modified by
filamentation. They appear in a range $0.02\lesssim Z_{\parallel}\lesssim 0.1$
around the theoretically predicted $Z_{\parallel}(\lambda_{max})\simeq 0.08$ and
quickly grow in wavelength (see Figs. \ref{four}b-c, \ref{multi}c, and
\ref{multi}f). The influence of the nonresonant modes is stronger on the filamentation
in the slowly drifting ambient plasma than on that in the relativistic ion
beam. In essence, ambient plasma filaments become vertically tilted (Fig.
\ref{multi}d), which leads to even stronger plasma filamentation. The lack of
spatial correlation between filaments in the ambient plasma and the beam
results in a
local charge imbalance and the build-up of charge-separation electric fields,
which, together with electric fields induced by the Buneman instability,
dominate the turbulent electromagnetic energy content of the system in the
initial stage (Fig. \ref{en16}). During the nonlinear stage ($t\gtrsim
8\gamma_{max}^{-1}$) the enhanced filamentation leads to the generation of
stronger turbulence in the $B_z$ component of the magnetic field with
perpendicular wavevectors, $k_y$, which disrupts the structure of the parallel magnetic
modes. This interaction between filamentary and nonresonant modes is visible in
Figure \ref{multi}f and in the Fourier spectrum in Figure \ref{four}c.
As one can see in Figures \ref{multi}g-i, the strongly amplified magnetic field
starts to backreact on the cosmic-ray beam in the later stage of the system
evolution. Cosmic-ray filaments become tilted and eventually disrupted. This is
accompanied by turbulent ambient plasma motions and results in highly nonlinear
and nearly isotropic magnetic turbulence.
The characteristics of the turbulence in its post-saturated state are thus
similar to those
observed in simulations of nonrelativistically drifting hot cosmic rays
in the precursor to SNR shocks.
\subsubsection{Particle Phase-Space Distributions}
The effects of the backreaction of the magnetic turbulence on the particles
are presented in Figures \ref{vbulk}, \ref{energy}, and \ref{phase}. The average (bulk)
velocities of all particle species converge in the nonlinear stage,
when the magnetic-field growth saturates (at $t\approx
15\gamma_{max}^{-1}$ for runs A and C in Fig. \ref{vbulk}). While the
relative drift between the plasma and the cosmic-ray beam disappears in all
our simulations, the mechanism by which that is achieved differs between runs
which suppress the cosmic-ray backreaction and fully kinetic runs.
In the fully kinetic simulations (solid line in Fig. \ref{vbulk}), the cosmic-ray
beam slows down considerably, and the ambient plasma accelerates up to $\sim
-0.2c$. This behavior is similar to the results of \citet{winske} for
nonrelativistic dilute ion beams interacting with ambient plasma via
nonresonant modes, which showed that the energy of the decelerating beam
is transferred in
approximately equal parts to the ambient ions and the magnetic field, in
accordance to the predictions of quasi-linear theory. The simulation results of
\citet{winske} were obtained with a one-dimensional hybrid model that treats
electrons as a massless fluid, with which one cannot observe
filamentation modes, and the associated electron heating is artificially
suppressed. As shown in Figure \ref{energy}, which presents the temporal
evolution of energy densities in particles and fields for run C, in our
simulations the initial plasma filamentation is accompanied by electron
heating at the expense of the beam. However, the heating of the
electrons saturates at $t\approx 7\gamma_{max}^{-1}$, when the nonresonant modes
start to emerge. During the subsequent evolution, beam energy is transferred at
approximately the same rate into the magnetic field and ambient ions, while the
electrons experience only moderate further heating. This process of the
energy transfer saturates when the turbulent magnetic field reaches its maximum
energy density and starts to dissipate. The nonlinear evolution of
nonresonant modes in a system
containing relativistic ion beams thus proceeds in
qualitatively the same way as for nonrelativistic beams.
If the cosmic-ray ion beam is represented by a constant external current, the
energy balance between the beam and the ambient medium is
violated. As shown in Figure \ref{vbulk} (dashed lines), the saturation of the
magnetic field growth still comes about by the disappearance of the ion
beam--ambient plasma relative motion, at which the return current is provided
without a drift of the plasma electrons relative to the plasma ions, but now the
ambient ions and
electrons must assume the constant cosmic-ray beam bulk velocity.
This implies that energy is continuously pumped into the
system, and therefore energy conservation becomes severely violated in the
nonlinear stage. Thus the validity of simulations that assume a constant
cosmic-ray current is limited to the early phases in the evolution of the system.
Figure \ref{phase} shows the phase-space distributions of the cosmic-ray beam and
the ambient ions at $t\gamma_{max}=$ 0, 7, 14, and 19 for run C. The early stage of
the system evolution
($t\gamma_{max}\lesssim 7$) is dominated by the Buneman instability modes
between the ion beam and ambient electrons. The electrostatic fields associated
with this mode heat up electrons (Fig. \ref{energy}) and significantly stretch
the beam-ion distribution along the beam propagation direction. At the same
time the cosmic-ray beam is heated in the transverse direction due to
filamentation modes. The ambient ions remain unaffected by the Buneman
mode\footnote{The phase velocity of the Buneman wave mode between the cosmic-ray ion
beam and the electrons is $\sim v_{CR}$ in the ambient-ion rest frame. Thus the
associated electrostatic fields are seen by the ambient ions as high-frequency
oscillations.}
and become only moderately heated in this stage. The stretching of the beam-ion
distribution gradually
saturates at $t\gamma_{max}\sim 7$, by which time the nonresonant modes have set
in and started to strongly backreact on the system.
During the subsequent evolution the beam momentum becomes quickly randomized in
direction. This randomization is
the combined effect of the pinching of ion-beam filaments and pitch-angle
scattering of the beam particles. The latter process becomes more important in
the highly nonlinear phase ($t\gamma_{max}\gtrsim 10$; compare Fig.
\ref{multi}h), during which the filaments start to get disrupted.
At the same time the ion beam slows down in bulk, and by
$t\gamma_{max}\sim 19$ the evolution
saturates when the ion beam particles have been efficiently pitch-angle scattered around
a mean (bulk) momentum of $\sim -3.8m_ic$.
The randomization of beam momentum through pitch-angle scattering was
previously reported for nonrelativistic beams in conditions allowing for an
efficient magnetic field amplification through nonresonant modes \citep{winske}.
Here we have demonstrated that these modes can also provide efficient
scattering for relativistic beams.
We can estimate the scattering mean free path from the time evolution of the
phase-space distribution of the ion beam, a few snapshots of which are shown in
Figure~\ref{phase}. Between $t\,\gamma_{\rm max}=10$ and $t\,\gamma_{\rm max}=14$
the scattering mean free path in simulation run C is
\begin{equation}
\lambda_{\rm mfp}\approx 5000\,\Delta.
\label{mfp}
\end{equation}
At the same time, the rms amplitude of the turbulent magnetic field increases
by more than 250\%, from about $B_{\parallel 0}$ to $3.5\,B_{\parallel 0}$. Using
the mean of the two numbers, we obtain for the Bohm mean free path
\begin{equation}
\lambda_{\rm Bohm}\approx 3000\,\Delta .
\label{bohm}
\end{equation}
Given the uncertainty in the estimate arising from the substantial variation in the
magnetic-field amplitude, about a factor 2,
we conclude that the observed scattering mean free path,
and therefore the spatial diffusion coefficient, for mildly relativistic beams
are entirely compatible with Bohm diffusion.
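The Bohm value quoted above corresponds, to within the rounding, to the beam-ion gyroradius in the mean amplified field: with $\Omega_i=\omega_{pe}/(20\sqrt{20})\approx\omega_{pe}/89$ for run C, $\langle\delta B\rangle\approx 2.25\,B_{\parallel 0}$, and $c/\omega_{pe}=4\Delta$,
\begin{displaymath}
\lambda_{\rm Bohm}\simeq r_g=\frac{\gamma_{CR}\,v_{CR}}{\Omega_i\,(\langle\delta B\rangle/B_{\parallel 0})}\approx\frac{20\cdot 89\cdot 4\Delta}{2.25}\approx 3\times 10^{3}\,\Delta .
\end{displaymath}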
The scattering mean free path can also be estimated for ultrarelativistic beams, based on run B. However, by the time our simulation ends the CR beam has only been partially pitch-angle scattered, up to an angle $\sim\pi/6$. Nevertheless, a rough estimate shows that $\lambda_{\rm mfp}$ again agrees with $\lambda_{\rm Bohm}$ to within a factor of a few.
\section{DISCUSSION AND CONCLUSIONS}
We have studied the interaction of a cold, relativistic ion beam penetrating a
cold plasma composed of
electrons and ions. We have presented 2.5D PIC simulations, complemented with
a linear analysis of the dispersion relation
for linear waves with arbitrary orientation of ${\vec k}$, for parameters that
permit the growth of
nonresonant, purely-magnetic parallel modes \citep{rev06}.
Our research is relevant for the understanding of the structure of, and
particle acceleration at, shocks in SNR, GRB, and AGN, for which radiation modeling
suggests that the magnetic
field near the shock is strongly amplified.
We observe a close competition of the nonresonant mode with the filamentation
instability and Buneman
modes, which is also evident in the linear dispersion relation. The specific
choice of parameters
determines which of the three modes of instability dominates. In some cases
filamentation is initially
important and modifies the later evolution of the parallel nonresonant mode. In
all cases we find that
a representation of the ion beam by a constant current, as is routinely done in
MHD studies, is suboptimal,
because it suppresses part of the nonlinear response of the system, delays the
saturation processes,
and leads to a significant overestimate of the final magnetic-field amplitude.
As in the case of drifting cosmic rays \citep{niem08,stroman} and nonrelativistic
beams \citep{winske},
the saturation of the magnetic-field growth proceeds via bulk acceleration.
For mildly and ultrarelativistic beams, the instability saturates at field
amplitudes a few times larger than the homogeneous magnetic field. These results match our recent
studies of nonrelativistically drifting cosmic-rays upstream
of SNR shocks which also indicated only a moderate magnetic-field amplification
by nonresonant instabilities.
We have demonstrated that the magnetic field amplified via nonresonant interactions between the CR beam and the plasma can efficiently scatter cosmic rays even for moderate field amplification levels.
The scattering mean free path is compatible with Bohm diffusion. Sub-Bohm diffusion was observed in Monte-Carlo simulations of particle transport in the nonlinear turbulent magnetic field generated in the nonresonant instability by \citet{rev08}. In that work, parallel and perpendicular diffusion coefficients were calculated by probing the spatial displacement of test particles in a static snapshot of the amplified magnetic field that resulted from MHD simulations of the instability. Here we estimate the isotropic spatial diffusion coefficient by probing the evolution of the angular distribution of particles in the self-excited, non-stationary (growing) turbulence whose typical wavelength is at least a factor of a few smaller than the gyroradii of cosmic rays.
In the application to nonrelativistic shocks in SNRs, strong ($\delta B/B_0\gg 1$)
quasi-isotropic magnetic turbulence would be compressed by the shock, thus turning
into quasi-two-dimensional turbulence in the downstream region. Radio polarimetry suggests that
the magnetic field immediately behind the shock is preferentially oriented
along the shock normal \citep{sp09}, which is at odds with the above expectation,
if the turbulent field is not quickly damped to an amplitude
$\lesssim B_{\parallel 0}$ \citep{pyl05}.
In the application to relativistic shocks in AGN and GRBs, Monte-Carlo studies of the
first-order Fermi acceleration have shown that the process can operate only for
quasi-parallel subluminal shocks, provided that strong, short-wave magnetic turbulence exists upstream of the shock \citep{nie06b}. Our results show that the turbulence self-generated by the accelerated particles streaming in the shock precursor may provide scattering sufficient to randomize CR momenta. However,
it is not clear how the strong quasi-isotropic magnetic turbulence in the upstream region influences the particle acceleration at the shock (but see \citealt{couch08}).
Finally, our
simulations show that the saturation of instabilities operating upstream may
limit the magnetic amplitude to moderate levels. If a very strong magnetic field
is required by radiation modeling, it may therefore be generated at the shock itself or
in the immediate downstream region.
\acknowledgments
JN and MP are grateful for the hospitality of the Kavli Institute for Theoretical Physics, Santa Barbara, where this work was completed. JN acknowledges helpful discussions with Mark Dieckmann and Luis Silva.
The work of JN is supported
by MNiSW research project N N203 393034, and The Foundation for Polish
Science through the HOMING program, which is supported by a grant from
Iceland, Liechtenstein, and Norway through the EEA Financial
Mechanism. Simulations were partly performed at the Columbia facility at the
NASA Advanced Supercomputing (NAS). This research was also supported in part by
the National Science Foundation under Grant No. PHY05-51164 and
through TeraGrid resources provided by the
National Center for Supercomputing Applications (NCSA) under project PHY070013N.
\section{Introduction}
The solar dynamo generates magnetic flux inside the Sun, which is transported
outward and emerges through the Sun\textquoteright{}s surface into
the corona. Magnetic loops build up \textquotedblleft{}closed\textquotedblright{}
magnetic flux (connected to the Sun at both ends) in the corona. Some
of these closed loops subsequently \textquotedblleft{}open\textquotedblright{}
into interplanetary space \textendash{} that is, they are connected
to the Sun at only one end with the other extending to great distances
in the heliosphere or beyond. Owing to the very high electrical conductivity,
open magnetic flux is frozen into the solar wind and carried out with
it. The magnetized solar wind expands continuously outward from the
Sun in all directions, filling and inflating our heliosphere and protecting
the inner solar system from the vast majority of galactic cosmic rays.
The balance between the opening and closing of magnetic flux from
the Sun is thus critical and fundamental both to the solar wind and
to the radiation environment of our solar system.
Magnetic flux opens when coronal mass ejections (CMEs) erupt through
the corona, carrying previously closed magnetic loops beyond the
critical point where the solar wind exceeds the Alfv\'en speed
(typically $<20\, R_{\odot}$) and can no longer return to the
Sun. CMEs were first studied in OSO-7 and Skylab observations of the
corona (e.g., \citealt{Tousey1973,Gosling1974,Hundhausen1993}), and
since then continued work has provided an increasingly detailed
picture of these transient magnetic structures both during their
formation and ejection, and as they continue to evolve and interact
with the solar wind.
Long lasting, radial \textquotedbl{}legs\textquotedbl{} are often
observed along the flanks of a CME and persisting behind it. These
legs are generally interpreted as evidence for at least some continued
magnetic connection of CMEs back to the Sun and hence the opening
of new magnetic flux with CME ejections. That picture is further supported
by observation, \emph{in situ}, of beamed suprathermal halo electrons
streaming in both directions along the local interplanetary magnetic
field (IMF) during the passage of an interplanetary CME (ICME) cloud
(e.g. \citealt{Gosling1990,Gosling1993} \& references therein), which
are commonly interpreted as signatures of direct connection of the
ICME magnetic field to the solar corona in both directions, and hence
of newly opening magnetic flux. However, less is known about the equally
necessary process of disconnection that must be present to remove
newly opened flux and prevent the IMF from growing without limit.
Because of the continual opening of magnetic flux through CMEs, McComas
and coworkers in the early 1990s pursued a series of studies to determine
how magnetic flux could be closed back off and avoid a so-called magnetic
flux \textquotedblleft{}catastrophe\textquotedblright{} of ever increasing
magnetic field strength in the interplanetary magnetic field (\citealt{McComas1995}\&
references therein). The amount of open magnetic flux in interplanetary
space can be approximated with the \textquotedbl{}total flux integral\textquotedbl{}
which removes the effects of variations in the solar wind speed in
determining the amount of magnetic flux crossing 1 AU (\citealt{McComas1992a}).
Using this integral, McComas et al. (\citeyear{McComas1992a,McComas1995})
showed that if all counterstreaming electron events represent simply
connected opening magnetic loops, then for solar maximum CME rates,
the amount of flux crossing 1 AU would double over only \textasciitilde{}9
months. For flux rope CMEs, significantly more magnetic flux may be
observed in the loops crossing 1 AU than what remains attached to
the Sun along the CMEs\textquoteright{} legs; however, it must be
stressed that if CMEs retain any solar attachment whatsoever, the
flux catastrophe will ultimately occur in the absence of some other
process to close off previously open fields.
Of course a magnetic flux catastrophe is not observed in the solar
wind and, in fact, the overall magnitude of the IMF and amount of
open flux seems to vary over the solar cycle. For cycle 21, the average
magnitude varied by \textasciitilde{}50\% (\citealt{Slavin1986})
while the total flux integral varied by \textasciitilde{}60\% (McComas
et al. \citeyear{McComas1992a,McComas1992b}), with maxima shortly
after solar maximum and minima shortly after solar minimum (\citealt{McComas1994}).
Since these studies, the solar wind has gone through a prolonged (multi-cycle)
reduction in both solar wind power (the dynamic pressure of the solar
wind that ultimately inflates the heliosphere) (\citealt{McComas2008})
and magnetic field magnitude (\citealt{Smith2008}). The lack of a
flux catastrophe, solar cycle variation and now long-term reduction
in the open magnetic flux from the Sun all show that there must be
some process for closing off previously open field regions and returning
magnetic flux to the Sun.
Magnetic reconnection plays an important role in regulating the topology
of solar magnetic flux, however, once the top of a loop passes the
critical point, its magnetic flux remains open until some other process
occurs to close it off below the critical point. That is, reconnection
above the critical point can only rearrange the topology of open magnetic
flux in the heliosphere \textendash{} only reconnection between two
oppositely directed (inward and outward field) regions of open flux
close to the Sun can close off previously open magnetic flux. The
most obvious method of reducing the amount of magnetic flux open to
interplanetary space is via reconnection between oppositely directed,
previously open field lines (McComas et al. \citeyear{McComas1989}),
which creates closed field loops that can return to the Sun and the
release of disconnected U-shaped field structures into interplanetary
space. An example of such a coronal disconnection event was shown
by \citet{McComas1991} using SMM coronagraph images from 1 June
1989. An even older example of a likely coronal disconnection event
can be found as far back as the 16 April 1893 solar eclipse (e.g.,
\citealt{Cliver1989}), where sketches (the data of 1893) made in
time-ordered sequence from Chile, Brazil, and Senegal, indicate the outward
motion of a large U-shaped structure (\citealt{McComas1994}).
For the opening and closing of the solar magnetic flux to maintain
some sort of equilibrium, there must be some type of feedback between
these two processes. McComas et al. (\citeyear{McComas1989,McComas1991})
suggested that this feedback occurs through transverse magnetic pressure
in the corona, where the expansion of newly opened field regions must
enhance transverse pressure and compress already open flux elsewhere
around the Sun. When enough pressure builds up, reconnection between
oppositely direct open flux would reduce the pressure and amount of
open flux. The sequence of images from the 27 June 1988 coronal disconnection
event, in fact showed just such a compression, indicated by the deflection
of the streamers in the corona, just prior to and appearing to precipitate
the coronal disconnection event. Another line of supporting evidence
was provided by numerical simulations (\citealt{Linker1992}), which
indicated that increased magnetic pressure could lead to reconnection
across a helmet streamer and the release of disconnected flux. \citet{Schwadron2010}
recently reexamined the flux balance issue in light of the anomalously
long solar minimum between cycles 23 and 24 and modeled the level
of magnetic flux in the inner heliosphere as a balance of that flux
injected by CMEs, lost through disconnection, and closed flux lost
through interchange reconnection near the Sun.
\citet{McComas1992c} conducted a statistical study of three months
of SMM coronagraph observations (\citealt{Hundhausen1993}) to assess
the frequency of coronal disconnection events. These authors found
that while the initial survey (\citealt{StCyr1990}) found no obvious
disconnections, six of the 53 transient events during this interval
(11\%) showed some evidence of disconnection in more than one frame
and 13 (23\%) showed a single frame with an outward \textquotedbl{}U\textquotedbl{}
or \textquotedbl{}V\textquotedbl{} structure. Given the imaging and
analysis technology of the day, McComas et al. (1992c) concluded that
magnetic disconnection events on previously open field lines may be
far more common than previously appreciated. With today\textquoteright{}s
imaging and exceptional analysis capabilities, the question of coronal
disconnection events should finally be resolvable.
For this study we used image sequences, collected by the SECCHI
(\citealt{HowardRA2008}) instrument suite on board NASA's
\emph{STEREO-A }spacecraft, of Thomson-scattered sunlight from free
electrons in the interplanetary plasma. The observations span from the
deep solar corona to beyond 1 AU at elongation angles of up to
$70^{\circ}$ from the solar disk; this continuous observation is
enabled by recently developed background subtraction techniques
(\citealt{DeForest2011}) operating on the \emph{STEREO} data. The
signature U-shaped loops of disconnected plasma are far clearer in the
processed heliospheric data far from the Sun than in the coronagraph
data close to the Sun, and we detect 12 characteristic departing
{}``V'' or {}``U'' events in 36 days -- far more than the expected
number based on scaling the results of McComas et al. (1992c). For
this initial report, we focus on quantitative analysis of a single
event. In Sections 2.1-2.7, we describe the observations and calculate
the geometry, the mass evolution, and (by assuming the U-loop is
accelerated by the tension force) the coronal magnetic field and
entrained flux in the disconnecting structure. As a plausibility
check, we explore the tension force scenario and its consequences for
the long-term evolution of the feature, and find that the scenario is
consistent with accepted values for the solar wind density and
speed. In Section 3, we discuss broader consequences of the
observation, including estimating the disconnection rate based on the
number of similar events in our data set, and discuss implications for
the global magnetic flux balance.
\section{Observations}
The SECCHI suite on STEREO was intended to be used as a single integrated
imaging instrument (e.g. \citealt{HowardRA2008}). It consists of
an EUV imager (EUVI) observing the disk of the Sun, and four visible
light imagers (COR-1, COR-2, HI-1, and HI-2) with progressively wider
overlapping fields of view, to cover the entire range of angles between
the solar disk and the Earth. The visible light imagers view sunlight
that has been Thomson scattered off of free electrons in the corona
and interplanetary space; the theory of Thomson scattering observations
has been recently reviewed by \citet{HowardTappin2009a}. We set out
to view coronal and heliospheric events in the weeks around 2008 December,
using newly developed background subtraction techniques to observe
solar wind features in the HI-1 and HI-2 fields of view (\citet{DeForest2011,HowardDeForest2011}).
In the initial 36 day data set we prepared, we observed 12 disconnection
events identified by a clear {}``V'' or {}``U'' shaped bright
structure propagating outward in the heliosphere. We chose a particularly
clearly presented one, which was easily traceable to its origin in
the low corona on 2008-Dec-18 at 04:00, for further detailed study.
\subsection{Image Preparation}
Data preparation followed standard and published techniques. For COR-1
and COR-2, we downloaded Level 1 (photometrically calibrated) data,
and further processed them by fixed background subtraction: we acquired
images for an 11-day period, and found the 10th-percentile value of
each pixel across the entire 11-day dataset. This image was median
filtered over a 5x5 pixel window to generate a background image that
included the F corona, any instrumental stray light, and the smooth,
steady portion of the K corona. Subtracting this background from each
image yielded familiar coronal images of excess feature brightness
compared to the smooth, steady background. We performed one additional
step: motion filtration to suppress stationary image components. This
step matches the motion filtration step used for HI-1 and HI-2 (below),
and suppresses the stationary streamer belt while not greatly affecting
the moving features under study.
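In outline, the fixed-background step amounts to only a few array operations; the following Python sketch (illustrative, not the actual pipeline code) implements the 10th-percentile background with $5\times5$ median filtering described above.
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def excess_brightness(cube):
    # cube: (n_frames, ny, nx) stack of calibrated images
    # spanning the 11-day window.
    # 10th-percentile value of each pixel over the stack:
    # F corona, stray light, and smooth, steady K corona.
    background = np.percentile(cube, 10, axis=0)
    # Median-filter over a 5x5 pixel window.
    background = median_filter(background, size=5)
    # Excess feature brightness relative to the background.
    return cube - background[np.newaxis, :, :]
\end{verbatim}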
The heliospheric imagers required further processing to remove the
starfield, which is quite bright compared to the faint Thomson scattering
signal far from the Sun in the image plane. We processed the STEREO-A
HI-2 data as described by \citet{DeForest2011}. The HI-1 data used
a similar process adapted to the higher background gradients in that
field of view and described by \citet{HowardDeForest2011}.
All the imagers yielded calibrated brightness data in physical units
of the mean solar surface brightness ($B_{\odot}=2.3\times10^{7}\, W\, m^{-2}\, sr^{-1}$).
Because of the wide field of view, as a subsequent processing step
we distorted the images into azimuthal coordinates, in which one coordinate
is azimuth (solar position angle) in the image plane and the other
is either elongation angle $\epsilon$ ({}``radius'' on the celestial
sphere) or its logarithm. The latter projection, if scaled properly,
is conformal: it preserves the shape of features that are small compared
to their distance from the Sun. To equalize brightness, we applied
radial filters to the images for presentation, with either a $\epsilon^{3.5}$
scaling (for coronal images) or a $\epsilon^{3}$ scaling (for heliospheric
images).
Figure \ref{fig:4-panels} shows snapshots of the disconnected plasma
and associated cusp, as observed by four separate instruments over
the course of four days as it propagated outward. Shortly after 2008
Dec 18 04:00, the streamer belt at $160^{\circ}$ ecliptic azimuth
($20^{\circ}$ CCW of the Sun-Earth line) pinched and separated, forming
a {}``U'' loop that retracted outward, with a trailing cusp, over
the course of the following three days. The feature remained visible
in Thomson scattered light because of plasma scooped up during the
early acceleration period in the lower corona: this plasma remained
denser than the surrounding medium, yielding a bright feature throughout
the data set. The disconnected plasma completely missed the ecliptic
plane and was therefore not observed \emph{in situ }by any of the
near-Earth or STEREO probes.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=6in]{f1.eps}
\par\end{centering}
\caption{\label{fig:4-panels}The disconnection event of 2008 Dec 18 in context
- 8 still images showing formation and evolution of the U-loop: left
to right, top to bottom.}
\end{figure}
\subsection{Observing Geometry and 3-D structure }
The observing geometry for the 2008 Dec 18 event is shown in Figure
\ref{fig:Observing-geometry}, from an overhead (northward out-of-ecliptic)
point of view co-rotating with the STEREO-A orbit. The event departure
angle was measured using direct triangulation between the coronagraphs
in STEREO-A and STEREO-B. We used the triangulation method described
by \citet{HowardTappin2008}. Although the disconnection event is
small compared to most CMEs, subtending just a few degrees in latitude,
it is still large enough to cast doubt on the simple triangulation
results, so we also used the {}``TH model'' semi-empirical transient
event reconstruction tool (\citealt{TappinHoward2009}) to extract
the departure angle. The TH model was developed to reconstruct the overall envelope
and propagation speed of a CME leading edge ({}``sheath''), but is also
applicable to smaller transient events such as this one. Details,
applications, and limitations of the TH model are further described
by \citet{TappinHoward2009} and by Howard \& Tappin (\citeyear{HowardTappin2009b,HowardTappin2010}).
Departure longitude was measured to be $-10^{\circ}\pm5^{\circ}$
in heliographic coordinates, with an estimated event width of under
$5^{\circ}.$
We took the disconnected feature's trajectory to have constant radial
motion (the {}``Fixed-$\Phi$ approximation'') in the solar inertial
frame -- this leads to the slightly curved aspect to the trajectory
in the co-rotating heliographic ecliptic frame, which maintains the
prime meridian at the Earth-Sun line. Figure \ref{fig:Observing-geometry}
shows an out-of-ecliptic projected view of the observing geometry,
including construction angles and distances used in Section \ref{sub:Acceleration-profile}
for trajectory calculations.
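For reference, in the Fixed-$\Phi$ approximation a feature at a fixed angle $\phi$ between the Sun-observer line and the Sun-feature direction, seen at elongation $\epsilon$ by an observer at heliocentric distance $d$, lies at heliocentric distance
\begin{displaymath}
r=\frac{d\,\sin\epsilon}{\sin(\epsilon+\phi)} ,
\end{displaymath}
which follows from the sine rule applied to the Sun--observer--feature triangle.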
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=5in]{f2.eps}
\par\end{centering}
\caption{\label{fig:Observing-geometry}Observing geometry in the ecliptic
plane on 2008 Dec 18 - 2008 Dec 22}
\end{figure}
\subsection{Feature Evolution}
To analyze the feature's evolution across a two-order-of-magnitude
shift in scale over its observed lifetime, we transformed the processed
\emph{STEREO-A} source images into local heliographic radial coordinates
-- i.e. zero azimuth is due solar West from the viewpoint of \emph{STEREO-A},
with azimuthal coordinate increasing clockwise around the image plane;
this follows early work by \citet{DeForestPlunkettAndrews2001} in
imaging polar plumes. Distances from Sun center are recorded as the
elongation angle $\epsilon$, as a reminder of the
angular nature of the wide-field observations. To avoid aliasing in
the resampling process, we resampled the images using the optimized
resampling package described by \citet{DeForest2004}. Figures \ref{fig:liftoff}
and \ref{fig:Propagation} show the liftoff and propagation of the
feature across 65 degrees of elongation from its origin in the solar
streamer belt. Both figures have a radial gain filter applied to equalize
the feature's brightness, which varies by over seven orders of magnitude:
from $1.5\times10^{-9}B_{\odot}$ in the low streamer belt at 2008
Dec 18 04:30 to $5.7\times10^{-17}B_{\odot}$ six days later, at $\epsilon=65^{\circ}$.
The bright feature takes the classic wishbone shape of reconnecting
field lines emerging from a current sheet (e.g. Chapter 4 of \citealt{PriestForbes2000}).
The aspect ratio of the wishbone may be estimated by dividing the
vertical height from the cusp to the top of the visible horns by the
width between the horns. This aspect ratio varies from \textasciitilde{}10:1
when the horns are first clearly resolved near 2008 Dec 18 08:00,
to approximately 2:1 some four hours later and 1:1 by 2008 Dec 19
04:00 -- one full day after the first pinch is observed in the streamer
belt. After 2008 Dec 19, the feature expands approximately self-similarly
as it propagates, subtending approximately $16^{\circ}$ of azimuth
and not changing its aspect ratio throughout the rest of its trajectory.
Note that aspect ratio is \emph{not} preserved by the linear azimuthal
mapping used in Figure \ref{fig:liftoff}, which was selected to show
the early acceleration clearly; aspect ratio is preserved by the (conformal)
logarithmic mapping used in Figure \ref{fig:Propagation}, which shows
nearly self-similar expansion in the image plane despite perspective
effects that come into play above about $\epsilon=30^{\circ}$.
The scaling of brightness is reassuring because, in a uniformly propagating
wind with no acceleration, density must decrease as $r^{-2}$ and
feature column density must thus decrease as $r^{-1}$, while illumination
decreases as $r^{-2}$, so feature brightness is expected to decrease
as $r^{-3}$. The fact that brightness levels do not change much across
Figure \ref{fig:Propagation}, which is scaled by $\epsilon^{3}$,
suggests that the disconnected flux and material entrained in it are
indeed propagating approximately uniformly. The fact that they \emph{do}
change slightly, with brighter images to the right, indicates that
the feature is gaining intrinsic brightness by accumulating material
as it propagates.
The horizontal positions and error bars in Figures \ref{fig:liftoff}
and \ref{fig:Propagation} are the results of manual feature location
of the cusp, with a point-and-click interface. The white error bars
are based on the sharpness of the feature. In the excess brightness
plot, the feature is easy to see but blurs near the top of Figure
\ref{fig:liftoff} due to the higher levels of both photon noise and
motion blur as the feature accelerates to the top of the coronagraph
field of view. The running-difference plot highlights fine-scale features
and helps identify the cusp location near the top of the COR-2 field
of view.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=1\textwidth]{f3.eps}
\par\end{centering}
\caption{\label{fig:liftoff}Formation and early acceleration of the 2008 Dec
18 disconnection event through the STEREO-A COR-1 and COR-2 fields
of view. The trailing edge of the event is marked, with error bars
based on feature identification. TOP: direct excess-brightness images
show feature formation and overall structure. BOTTOM: running-difference
images show detail. These stack plots include a small image of the
feature at each sampled time to show evolution. Intensities are scaled
with $\epsilon^{3.5}$ to equalize brightness vs. height. The individual
images have been resampled into linear azimuthal (radial) coordinates,
and the horizontal range is $160^{\circ}-174^{\circ}$ of azimuth.
Note that this projection does not preserve aspect ratio: despite
appearances, the event widens as it rises.}
%
\end{figure}
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=1\textwidth]{f4.eps}
\par\end{centering}
\caption{\label{fig:Propagation}Propagation and evolution of the 2008 Dec
18 disconnection event through the STEREO-A HI-1 and HI-2 fields of
view. The trailing edge of the event is marked, with error bars based
on feature identification. These stack plots include a small image
of the feature at each sampled time to show evolution. The individual
images have been resampled into logarithmic azimuthal (radial) coordinates,
and the horizontal range is $162^{\circ}-178^{\circ}$ of
azimuth. This projection is conformal, so the shape of the feature
is preserved in each image. Intensities are scaled with $\epsilon^{3}$
to equalize brightness vs. height. Note self-similar expansion: the
angular width and shape of the feature are preserved.}
\end{figure}
\subsection{\label{sub:Acceleration-profile}Acceleration profile}
Converting angular observed coordinates to examine the inertial behavior
of the plasma requires triangulation using the Law of Sines. Using
the {}``fixed-$\Phi$'' approximation (assuming the feature's cusp
is small and that it propagates in a radial line from the Sun), the
feature's radius from the Sun is easily calculated: \begin{equation}
r_{ev}=r_{A}\frac{{\sin\left(\epsilon\right)}}{\sin\left(\epsilon'\right)},\label{eq:law-of-sines}\end{equation}
where the variables take the meanings in Figure \ref{fig:Observing-geometry}:
$\epsilon$ is the solar elongation of the feature as seen from \emph{STEREO-A},
$\epsilon'$ is the solar elongation of \emph{STEREO-A} as seen from
the feature, and the \emph{STEREO-A} solar distance $r_{A}$ is found
by spacecraft tracking and is supplied by the mission. Although no
camera was present at the event itself, $\epsilon'$ is calculated
by noting that $\epsilon'=180^{\circ}-\epsilon-\left(L-L_{ev}\right)$,
as the feature, \emph{STEREO-A, }and the Sun form a triangle. Figure
\ref{fig:accel-plots} shows the results of the tracking from Figures
\ref{fig:liftoff} and \ref{fig:Propagation}, propagated through
Equation \ref{eq:law-of-sines}.
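For concreteness, the fixed-$\Phi$ conversion can be written as a short
function; the sketch below is illustrative only, with angles supplied in
degrees and the longitude difference $L-L_{ev}$ given as an input:
\begin{verbatim}
import numpy as np

def feature_radius(eps_deg, r_A, dL_deg):
    """Fixed-Phi triangulation: heliocentric radius of the feature
    from its elongation eps seen from STEREO-A, the spacecraft solar
    distance r_A, and the longitude difference dL = L - L_ev."""
    eps = np.radians(eps_deg)
    eps_prime = np.pi - eps - np.radians(dL_deg)  # triangle closure
    return r_A * np.sin(eps) / np.sin(eps_prime)  # law of sines
\end{verbatim}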
As expected, the event rapidly accelerates during the early phase,
reaching a peak acceleration of $20\, m\, s^{-2}$ as the aspect ratio
changes in the initial hours. The acceleration peaks 4-5 hours after
the initial pinch in the streamer belt, or 2-3 hours after the first
observation of a well-formed cusp. The feature reaches its final speed
of $\sim320\pm15\, km\, s^{-1}$ within just 8 hours of the initial
pinch at 04:00 and within 6 hours of the first observation of the
well-formed cusp at 06:00, and undergoes no further significant acceleration
or deceleration during its observed passage to beyond 1 AU over the
next five days.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=6in]{f5.eps}
\par\end{centering}
\caption{\label{fig:accel-plots}Inferred position, speed, and acceleration
of the disconnected plasma from the 2008-Dec-18 event, during onset
(LEFT) and over the full observation period (RIGHT). Error bars are
derived by propagating \emph{a priori} location error and geometric
error in the longitude of the event. The shaded region indicates the
full time range of the left-side plots.}
\end{figure}
\subsection{\label{sub:Mass-profile}Mass profile}
We extracted photometric densities using the feature brightness in
suitable frames. The feature brightness is determined from the density
via the Thomson scattering equation (see, e.g., Howard \& Tappin 2009a
for a clear exposition). Compact features can be treated as nearly
point sources, and the line-of-sight integral for the optically thin
medium reduces to:\begin{equation}
B=B_{\odot}\Omega_{\odot}(r)\sigma_{e}\left(1+\cos^{2}\chi\right)\rho\mu_{av}^{-1}d\label{eq:brightness-equation}\end{equation}
where $B$ is the measured feature brightness (in units of radiance:
$W\, m^{-2}\, sr^{-1}$), $B_{\odot}$ is (still) the solar surface brightness;
$\Omega_{\odot}(r)$ is the solid angle subtended by the Sun at the
point of scatter, well approximated by $\pi r_{\odot}^{2}r_{ev}^{-2}$
everywhere above about 4 $r_{\odot}$; $\sigma_{e}$ is the differential
Thomson scattering cross section, given by half of the square of the
classical electron radius $r_{e}^{2}/2=4.0\times10^{-30}m^{2}$; $\chi$
is the scattering angle (equal to $\epsilon'$ in Figure \ref{fig:Observing-geometry});
$\rho$ is the mass density; $\mu_{av}$ is the average mass per electron
in the coronal plasma; and $d$ is the depth of the feature.
$\mu_{av}$ may be calculated from the spectroscopically measured 5\%
He/H number ratio in the corona (Laming \& Feldman 2000) and the
assumption that the helium is fully ionized (yielding two electrons
per ion): each hydrogen then supplies one electron, so
$\mu_{av}=\frac{1+4\times0.05}{1+2\times0.05}\, m_{p}\approx1.1\, m_{p}=1.84\times10^{-27}\, kg$.
Solving for the line-of-sight integrated mass surface density $\rho d$
gives:\begin{equation}
\rho d=\mu_{av}\frac{B}{B_{\odot}}\Omega_{\odot}^{-1}(r)\sigma_{e}^{-1}\left(1+\cos^{2}\epsilon'\right)^{-1}\label{eq:surface-density}\end{equation}
and therefore\begin{equation}
m_{ev}=\left(\rho d\right)wh=\mu_{av}\frac{B}{B_{\odot}}\Omega_{\odot}^{-1}(r)\sigma_{e}^{-1}\left(1+\cos^{2}\epsilon'\right)^{-1}\Omega_{ev}S^{2}\label{eq:mass}\end{equation}
where $w$ and $h$ are the dimensions shown in Figure \ref{fig:Cartoon};
$\Omega_{ev}$ is the solid angle subtended by the feature in
the images; and $S$ is the spacecraft-feature distance, calculated
by the law of sines as for $r_{ev}$.
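A minimal numerical sketch of Equation \ref{eq:mass} follows; it takes
the brightness already normalized to $B_{\odot}$ (as quoted in the text)
and uses the constants from this section. It is illustrative only, not
the analysis code itself:
\begin{verbatim}
import numpy as np

R_SUN   = 6.96e8      # m
SIGMA_E = 4.0e-30     # m^2, differential Thomson cross section
MU_AV   = 1.84e-27    # kg, mean mass per electron

def feature_mass(B_over_Bsun, r_ev, eps_prime, omega_ev, S):
    """Excess mass from the excess brightness B/B_sun, heliocentric
    distance r_ev (m), scattering angle eps_prime (rad), subtended
    solid angle omega_ev (sr), spacecraft-feature distance S (m)."""
    omega_sun = np.pi * R_SUN**2 / r_ev**2
    rho_d = MU_AV * B_over_Bsun / (
        omega_sun * SIGMA_E * (1.0 + np.cos(eps_prime)**2))
    return rho_d * omega_ev * S**2   # (rho d) * w h
\end{verbatim}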
To extract the mass profile from the data, we generated an image sequence
containing the feature, and marked the locus of the feature visually
using a pixel paint program. Using the generated masks, we summed
masked pixels in the feature for each photometric image, thereby integrating
the feature brightness over the solid angle represented by the corresponding
pixels, to obtain an intensity and an average brightness within the
feature. To account for errors in visual masking, we assigned error
bars based on one-pixel dilation and one-pixel contraction of the
masked locus. We omitted frames with excessive noise, encroachment
of an image boundary, or a star or cosmic ray in or near the feature.
The results of the calculation are given in Figure \ref{fig:mass},
which shows steady accretion of material through most of the journey
through the heliosphere.
Because our photometric analysis is based on subtraction of a calculated
background derived from the data set itself, we measure only excess
feature brightness (not absolute brightness) from Thomson scattering;
thus our brightness measurements and mass estimates are biased low,
because we cannot measure the absolute density of the background.
The initial derived mass of 20-25 Tg translates to an electron number density
of $2\times10^{7}\, cm^{-3}$ in the lower corona, which is comparable
to the density in bright coronal features -- so the total mass may
be up to a factor of order two higher.
The final {}``feature excess'' mass is $8\pm2\times10^{10}kg$, and the
final subtended solid angle is 0.028 SR, for a presented cross-section
of $4.3\pm0.1\times10^{20}m^{2}$. Taking the depth to be the square
root of the observed cross section yields an estimated volume at 1 AU
of $8.9\pm0.3\times10^{30}m^{3}$, for a total estimated excess
electron density of $5\pm1.4\, cm^{-3}$ at 1 AU, which is in good
agreement with slow solar wind densities ($3-10\, cm^{-3}$ when scaled
to 1 AU) that were observed by \emph{Ulysses} in situ in the same
heliographic latitude range (e.g. \citealt{McComas2000}). Approximately
2/3 of this excess density appears to have been accumulated en route from
the surrounding solar wind; this is further described in Section \ref{sub:accretion}.
\begin{figure}[tbh]
\includegraphics[width=5.0in]{f6.eps}
\caption{\label{fig:mass}Photometrically determined excess mass profile of
the retracting disconnected feature of 2008 Dec 18. Error bars are
based on identification of the feature boundary in the images. The
trendline is extracted from regression of the HI-1 and HI-2 data.
The mass shown is excess mass in the feature compared to the background
solar wind (see text).}
\end{figure}
\subsection{Entrained magnetic flux}
\begin{figure}[tbh]
\includegraphics[width=3.0in]{f7.eps}
\caption{\label{fig:Cartoon}Cartoon of the initial acceleration process of
a disconnection event. Tension force along newly released field lines
is balanced by mass entrained on the field lines. By measuring the
acceleration and mass we infer the amount of magnetic flux that was
disconnected.}
\end{figure}
From the mass of the feature, and its acceleration, it is possible to
extract the entrained magnetic field by measuring the rate of change
of momentum and inferring a magnetic tension force via $f=ma$. The
system is sketched in Figure \ref{fig:Cartoon}. The magnetic tension
force is conserved along the open field lines, so we can calculate it
at any convenient cut plane including the one shown. Tension force is
frequently referred to as a ``curvature force'' and calculated
locally; here we integrate around the ``U'', and notice that the
integrated force is just the unbalanced tension on the field lines
contained in the ``U'' shape. It is therefore given
by
\begin{equation}
f=m_{ev}a_{ev}=f_{B}=\frac{B^{2}A}{2\mu_{0}}=\frac{{\Phi^{2}}}{2\mu_{0}A}\label{eq:tension-force}
\end{equation}
where, here, $B$ is the magnetic field strength (not brightness, as
before). Solving for $\Phi$,
\begin{equation}
\Phi=\sqrt{2\mu_{0}dwm_{ev}a_{ev}}\label{eq:flux}
\end{equation}
taking $m_{ev}$ to be $25\, Tg$ ($2.5\times10^{10}\, kg$) during the peak of
the acceleration, and taking $w=d=0.2R_{\odot}$ (based on the measured
width of the feature's fork during maximum acceleration, at $6R_{\odot}$
from the surface, i.e. $7R_{\odot}$ from Sun center) gives
$\Phi=1.6\times10^{11}Wb$ ($1.6\times10^{19}Mx$), corresponding to an
average field strength of 8$\mu T$ (0.08 Gauss) at that altitude, or
an equivalent $r^{2}$-scaled field of 400$\mu T$ (4 Gauss) at the
surface; this is comparable to accepted values of the open flux
density at the solar surface at solar minimum. Because of the way $m$
was calculated (section 2.5 above) this figure is probably low by a
factor of order $\sqrt{2}$.
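The quoted numbers can be reproduced directly from Equation \ref{eq:flux};
the following sketch simply plugs in the stated values:
\begin{verbatim}
import numpy as np

MU0, R_SUN = 4e-7 * np.pi, 6.96e8   # SI units
m_ev, a_ev = 2.5e10, 20.0           # kg, m s^-2 (peak values)
w = d = 0.2 * R_SUN                 # m

phi = np.sqrt(2 * MU0 * d * w * m_ev * a_ev)   # ~1.6e11 Wb
B_field = phi / (w * d)                        # ~8e-6 T = 0.08 G
\end{verbatim}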
\subsection{\label{sub:accretion}Accretion and force balance}
As the disconnected structure travels outward, it accretes new
material. This effect is dramatic: as seen in Figure \ref{fig:mass},
the mass increases by a factor of 3 from the corona to 1 AU. We
conjecture that the material is accreted by {}``snowplow'' effects
from the plasma ahead of the disconnected cusp as it propagates. For
the observed mass growth in the feature, new material must be
compressed to become visible in our Thomson scattering images, and the
most plausible way for it to be compressed is via ram effects. This
scenario also neatly explains the constant speed of the feature, by
balancing the continued tension force from the cusp with accretion
momentum transfer. Here we explore the concept of force balance
between accretion and the tension force, to identify whether some
other model is required in addition to this simple one.
Extending Newton's law to include momentum transfer by accretion,
and neglecting all but the tension force, \begin{equation}
\frac{\Phi^{2}}{2\mu_{0}A_{\Phi}}=m_{ev}a_{ev}+\frac{dm_{ev}}{dt}\Delta v,\label{eq:force}\end{equation}
where the LHS is just the tension force from Equation \ref{eq:tension-force},
with the modification that the cross section of the exiting field
lines is written $A_{\Phi}$; $a_{ev}=0$ after the initial acceleration;
and the second term represents momentum transfer into accreted material,
with $\Delta v$ being the difference between the feature speed
and surrounding wind speed. The feature is thus in equilibrium between
accretion drag and continued acceleration by the tension force. This
accretion drag is important to the observed increase in feature mass,
because ram pressure against the surrounding wind material is what
compresses incoming material and renders it visible in the data against
the subtracted background.
Applying conservation of mass, we can relate $\Delta v$ and the average
density of the background solar wind through which the feature is
propagating:\begin{equation}
\rho_{sw}=\frac{dm_{ev}/dt}{A_{ev}\Delta v},\label{eq:conservation-of-mass}\end{equation}
where $A_{ev}$ is the geometrical area presented by the feature to
the slow wind ahead of it. Solving Equations \ref{eq:force} and \ref{eq:conservation-of-mass}
to eliminate $\Delta v$ gives\begin{equation}
\rho_{sw}=\left(\frac{dm_{ev}}{dt}\right)^{2}\left(\frac{2\mu_{0}}{\Phi^{2}}\right)\left(\frac{A_{\Phi}}{A_{ev}}\right),\label{eq:background-density}\end{equation}
which gives the background solar wind density in terms of the accumulation
rate of mass in the observed feature, assuming constant outflow for
both the wind and the feature, and acceleration by the tension force.
Given the conservation of mass and the approximately constant speed
of the solar wind, $\rho_{sw}$ falls as approximately $r^{-2}$.
Further, we observe nearly self-similar expansion throughout most
of the heliospheric range, so $A_{\Phi}/A_{ev}$ is constant in that
part of the trajectory -- hence $dm_{ev}/dt$ must also fall as $r^{-1}$
during the approximately constant speed portion of the feature's lifetime.
Using this functional form, we can extract an analytic expression for
the feature mass versus radius. We introduce the $r^{-1}$ dependence by
switching from the linear regression used in Figure \ref{fig:mass}, to a
semi-log regression that assumes $dm_{ev}/d\left(\log_{e}r\right)$ to be constant.
Figure \ref{fig:Semilog-regression-fit} shows such a regression, with the
result that $dm_{ev}/dr=34\pm3\times10^{9}\left(R_{\odot}/r\right)\, kg\, R_{\odot}^{-1}$.
Including the measured outflow speed of $315\pm15\, km\, s^{-1}$,
we find that $dm_{ev}/dt=\left(1AU/r\right)\left(7.1\pm1\times10^{4}kg\, s^{-1}\right)$.
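Schematically, the semi-log regression amounts to a least-squares fit of
$m(r)=m_{0}+c\log_{e}r$, so that $dm_{ev}/d\left(\log_{e}r\right)=c$ is
constant and $dm_{ev}/dr=c/r$; a sketch (with $r$ in units of
$R_{\odot}$ and hypothetical input arrays) is:
\begin{verbatim}
import numpy as np

def semilog_mass_fit(r, m):
    """Fit m(r) = m0 + c*ln(r); then dm/d(ln r) = c is constant
    and dm/dr = c / r, as assumed in the text."""
    A = np.vstack([np.ones_like(r), np.log(r)]).T
    (m0, c), *_ = np.linalg.lstsq(A, m, rcond=None)
    return m0, c
\end{verbatim}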
Inserting all of these values into Equation
\ref{eq:background-density}, together with the average particle mass
from Section \ref{sub:Mass-profile}, yields a background wind number
density $n_{sw}\left(1\, AU\right)$ of $30\pm6\,
cm^{-3}\left(A_{\Phi}/A_{ev}\right)$. From the morphology of the
feature in Figure \ref{fig:Propagation}, we conservatively estimate
$A_{\Phi}/A_{ev}<0.25$, i.e. the forward cross section of the
``horns'' of the vee appears to be well under 1/4 of the cross section
of the vee itself. This value yields a derived background solar wind
density of $n_{sw}<8\, cm^{-3}$ at 1 AU to maintain the force balance in
Equation \ref{eq:force}. That figure is again in line with
the wind measurements from \emph{Ulysses} at $15^{\circ}$ heliographic
latitude (McComas et al. 2000), adding to the plausibility of the
accretion force balance picture. The corresponding mass density limit is
$\rho_{sw}<1.5\times10^{-20}\, kg\, m^{-3}$ at 1 AU.
As a sanity check, we can use this $\rho_{sw}$ limit and Equation
\ref{eq:conservation-of-mass} to find that $\Delta v$ must then
be a few tens of $km\, s^{-1},$ i.e. the background wind speed must
be close to $300\, km\, s^{-1}$.
We conclude that the picture of force balance between snowplow accretion
and the tension force is at least broadly consistent with the observed
feature, though further study of more events (preferably with corresponding
\emph{in situ} measurements of the feature itself) is necessary.
\begin{figure}[tbh]
\includegraphics[width=5.0in]{f8.eps}
\caption{\label{fig:Semilog-regression-fit}Semilog regression fit of $m_{ev}(r)$}
\end{figure}
\section{Discussion}
Using data from STEREO/SECCHI, we have identified and measured the
characteristics of a single flux disconnection event and associated
cusp feature, similar to that discovered by McComas et al. (1992),
from initial detection in the lower corona to distances beyond 1 AU.
The cusp feature is formed in the classic X-point geometry and rapidly
accelerates under the tension force to approximately $320\, km\, s^{-1}$,
which it reaches in under 4 hours at an altitude of approximately
10$R_{\odot}$. Thereafter the feature continues to accumulate mass
but maintains approximately constant speed until it is lost to sight
1.2 AU from the Sun.
Based on photometry, we are able to estimate the onset mass of the
event as $25\, Tg$ and the entrained flux as $160\, GWb,$ corresponding
to a coronal field strength of $0.08\, G$ and an $r^{2}$-normalized
surface open field of $4\, G$ over the projected surface footprint
of the feature. These estimates are likely low by a factor of order
$\sqrt{2}$, because they make use of feature excess brightness rather
than absolute Thomson-scattered brightness in the coronagraph images;
using polarized-brightness imagery could improve the measurement by
separating the non-transient component of the Thomson scattering signal
from the unwanted F coronal background.
Because our measurements are all based on morphology and photometry,
we have performed several consistency checks to build confidence in
the calculated parameters of the feature as it propagates. In particular,
a model of simple force balance between the tension force and mass
accretion is consistent with both the inferred magnetic field and
accepted values for background slow solar wind density and speed.
Simple accretion models such as we developed here demonstrate clearly
why ejected features such as U-loops or CMEs seem frequently to propagate
at near constant speed: under continuous weak driving, an equilibrium
forms rapidly between the driving force and momentum transfer by mass
accretion. The equilibrium outflow speed is the sum of a large, fixed
(or at least driver-independent) speed -- that of the surrounding
wind -- with a smaller offset speed that drives mass accretion. Thus
the feature speed is quite insensitive to the driver. In our case,
doubling the tension force would only increase the outflow rate by
$\sim10\%$.
The event under study is well presented, but is not unusual at all;
such events are easy to identify in heliospheric image sequences,
because of their distinctive {}``U'' and cusp shape; they are readily
traced back to the corona. This technique represents a new, very effective
way of finding these disconnection events, which are small and hard
to identify in the coronagraph sequences alone, but are strongly
and easily visible in the processed heliospheric images.
In an initial reduced data set of 36 days near the deepest part of
the recent extended solar minimum (2008 Dec -- 2009 Jan), we identified
12 such events; all of them were identified by tracking {}``V''
or {}``U'' shapes back from the heliospheric images to the corona.
Assuming the present feature to be typical, and considering that the
single viewpoint affords clear coverage of about 1/4 of the circumference
of the Sun, we estimate the global disconnection feature rate at that
time to be over $1\, event\, d^{-1}$, and the flux disconnection
rate to thus be at least of order $60\, TWb\, y^{-1}$. Expanded to
a 1 AU sphere, this amounts to a rate of change of the open field
of order $0.2\, nT\, y^{-1}$, which is a significant fraction of
the observed cycle-dependent rate of change of the open heliospheric
field (e.g. Schwadron, Connick, \& Smith 2010). These figures are
based on a single calculated flux and an event rate obtained by initial
visual inspection of a single 36-day data set, and hence are merely
rough estimates -- but they indicate that flux disconnections of this
type are important to the global balance of open flux. Further study,
in the form of a systematic survey, is needed to determine whether
they are the primary mechanism of flux disconnection from the Sun.
\acknowledgements{The authors thank the STEREO instrument teams for making their data
available. Our image processing made heavy use of the freeware Perl
Data Language (http://pdl.perl.org). The work was enhanced by enlightening
conversations with J. Burkepile, C. Eyles, and N. Schwadron, to whom
we are indebted. This work was supported by NASA's SHP-GI program,
under grant NNG05GK14G.}
\section{References}
\bibliographystyle{apj}
|
1,116,691,497,680 | arxiv | \section{Introduction}
The quantized enveloping algebra $U_q(\widehat{\mathfrak{sl}}_2)$ has a subalgebra $U_q^+$, called the positive part \cite{charp,lusztig}. Both $U_q(\widehat{\mathfrak{sl}}_2)$ and $U_q^+$ appear in combinatorics \cite{drg,TD00,ter_alternating,ter_catalan,ter_pospart}, mathematical physics \cite{baspp,BB01}, and representation theory \cite{beck,charp,XXZ01}.
\medskip
\noindent In \cite{rosso}, M. Rosso introduced an embedding of the algebra $U_q^+$ into a $q$-shuffle algebra.
\medskip
\noindent In \cite{damiani}, I. Damiani obtained a Poincar\'e-Birkhoff-Witt (or PBW) basis for $U_q^+$. In her construction the PBW basis elements $\{E_{n\delta+\alpha_0}\}_{n=0}^\infty$, $\{E_{n\delta+\alpha_1}\}_{n=0}^\infty$, $\{E_{n\delta}\}_{n=1}^\infty$ are defined recursively. In \cite{ter_catalan} P. Terwilliger expressed the Damiani PBW basis elements in closed form, using the Rosso embedding of $U_q^+$.
\medskip
\noindent In \cite{ter_alternating}, Terwilliger used the Rosso embedding to obtain a type of element in $U_q^+$, said to be alternating. The alternating elements fall into four families, denoted by $\{W_{-n}\}_{n \in \mN}$, $\{W_{n+1}\}_{n \in \mN}$, $\{G_n\}_{n \in \mN}$, $\{\tilde{G}_n\}_{n \in \mN}$. It was shown in \cite{ter_alternating} that the alternating elements $\{W_{-n}\}_{n \in \mN}$, $\{W_{n+1}\}_{n \in \mN}$, $\{\tilde{G}_{n+1}\}_{n \in \mN}$ form a PBW basis for $U_q^+$. In \cite[Section 9]{ter_alternating}, Terwilliger considered the generating function $\tilde{G}(t)$ for $\{\tilde{G}_n\}_{n \in \mN}$. He used the multiplicative inverse $D(t)$ of $\tilde{G}(t)$ to describe the Damiani PBW basis under the Rosso embedding; see \cite[Proposition 11.9]{ter_alternating}.
\medskip
\noindent To motivate our results, we make some comments about $D(t)$. We mentioned that $D(t)$ is the multiplicative inverse of $\tilde{G}(t)$. Using this relationship, the coefficients of $D(t)$ can be computed recursively from the coefficients $\tilde{G}_n$ of $\tilde{G}(t)$. A calculation of the first few coefficients of $D(t)$ suggests that the coefficients of $D(t)$ admit a closed form. Our goal in this paper is to express these coefficients in closed form. We will state our main result shortly.
\medskip
\noindent First, we establish some conventions and notation.
\medskip
\noindent In this paper, $\mN=\{0,1,2,\ldots\}$ is the set of natural numbers, and $\mZ=\{0,\pm 1,\pm 2,\ldots\}$ is the set of integers. The letters $n,k,i,j,r,s$ always represent an integer. Let $\mF$ denote a field. All algebras discussed are over $\mF$, associative, and with a multiplicative identity. Let $q$ denote a nonzero scalar in $\mF$ that is not a root of unity. For $n \in \mN$, define
\[
[n]_q=\frac{q^n-q^{-n}}{q-q^{-1}}, \hspace{4em} [n]_q^!=[n]_q[n-1]_q \cdots [1]_q.
\]
We interpret $[0]_q^!=1$.
\medskip
\noindent We will be looking at the positive part of $U_q(\widehat{\mathfrak{sl}}_2)$, denoted by $U_q^+$ \cite{charp,lusztig}. The algebra $U_q^+$ is defined by generators $A,B$ and the $q$-Serre relations
\begin{equation*}\label{qserre1}
A^3B-[3]_qA^2BA+[3]_qABA^2-BA^3=0,
\end{equation*}
\begin{equation*}\label{qserre2}
B^3A-[3]_qB^2AB+[3]_qBAB^2-AB^3=0.
\end{equation*}
Next we recall the Rosso embedding of $U_q^+$ into a $q$-shuffle algebra \cite{rosso}. Let $x,y$ denote noncommuting indeterminates (called \textit{letters}). Let $\mV$ denote the free algebra generated by $x,y$. A product $v_1v_2 \cdots v_n$ of letters is called a \textit{word}, and $n$ is called the \textit{length} of this word. The word of length $0$ is called \textit{trivial} and denoted by $\m1$. The words form a basis for the vector space $\mV$, called the \textit{standard basis}. The vector space $\mV$ admits another algebra structure called the $q$-shuffle algebra. The $q$-shuffle algebra was first introduced by Rosso \cite{rosso1,rosso} and later reinterpreted by Green \cite{green}. The $q$-shuffle product, denoted by $\star$, is defined recursively as follows:
\begin{itemize}
\item For $v \in \mV$,
\begin{equation*}\label{star1}
\m1 \star v=v \star \m1=v.
\end{equation*}
\item For the letters $u,v$,
\begin{equation*}\label{star2}
u \star v=uv+vuq^{\langle u,v \rangle},
\end{equation*}
where
\[
\langle x,x \rangle=\langle y,y \rangle =2, \hspace{4em}\langle x,y \rangle=\langle y,x \rangle=-2.
\]
\item For a letter $u$ and a nontrivial word $v=v_1v_2 \cdots v_n$ in $\mV$,
\begin{equation*}\label{star3.1}
u \star v=\sum_{i=0}^n v_1 \cdots v_iuv_{i+1} \cdots v_n q^{\langle u,v_1 \rangle+\cdots+\langle u,v_i \rangle},
\end{equation*}
\begin{equation*}\label{star3.2}
v \star u=\sum_{i=0}^n v_1 \cdots v_iuv_{i+1} \cdots v_n q^{\langle u,v_n \rangle+\cdots+\langle u,v_{i+1} \rangle}.
\end{equation*}
\item For nontrivial words $u=u_1u_2 \cdots u_r$ and $v=v_1v_2 \cdots v_s$ in $\mV$,
\begin{equation*}\label{star4.1}
u \star v=u_1((u_2 \cdots u_r) \star v)+v_1(u \star (v_2 \cdots v_s))q^{\langle v_1,u_1 \rangle+\cdots+\langle v_1,u_r \rangle},
\end{equation*}
\begin{equation*}\label{star4.2}
u \star v=(u \star (v_1 \cdots v_{s-1}))v_s+((u_1 \cdots u_{r-1}) \star v)u_rq^{\langle u_r,v_1 \rangle+\cdots+\langle u_r,v_s \rangle}.
\end{equation*}
\end{itemize}
Note that the $q$-shuffle product of two words of length $l_1,l_2$ is a linear combination of words of length $l_1+l_2$.
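\medskip
\noindent To make the recursion concrete, here is a small Python sketch
of the $q$-shuffle product on words over $\{x,y\}$, with $q$ kept
symbolic via SymPy. It is an illustrative implementation of the
recursion above, not part of any existing package:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')

def bracket(u, v):
    # <x,x> = <y,y> = 2 and <x,y> = <y,x> = -2
    return 2 if u == v else -2

def shuffle(u, v):
    """q-shuffle product of words u, v (strings over {'x','y'});
    returns a dict mapping words to coefficients in q."""
    if u == '':
        return {v: sp.Integer(1)}
    if v == '':
        return {u: sp.Integer(1)}
    out = {}
    for w, c in shuffle(u[1:], v).items():       # u1((u2..ur) * v)
        out[u[0] + w] = out.get(u[0] + w, 0) + c
    exp = sum(bracket(v[0], a) for a in u)       # <v1,u1>+...+<v1,ur>
    for w, c in shuffle(u, v[1:]).items():       # v1(u * (v2..vs)) q^exp
        out[v[0] + w] = out.get(v[0] + w, 0) + c * q**exp
    return {w: sp.expand(c) for w, c in out.items()}

# e.g. shuffle('x', 'y') == {'xy': 1, 'yx': q**(-2)}
\end{verbatim}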
\medskip
\noindent Green showed in \cite{green} that $x,y$ satisfy the $q$-Serre relations in the $q$-shuffle algebra $\mV$:
\begin{equation*}\label{starqserre1}
x \star x \star x \star y-[3]_qx \star x \star y \star x+[3]_qx \star y \star x \star x-y \star x \star x \star x=0,
\end{equation*}
\begin{equation*}\label{starqserre2}
y \star y \star y \star x-[3]_qy \star y \star x \star y+[3]_qy \star x \star y \star y-x \star y \star y \star y=0.
\end{equation*}
As a result there exists an algebra homomorphism $\natural$ from $U_q^+$ to the $q$-shuffle algebra $\mV$ that sends $A \mapsto x,B \mapsto y$. The map $\natural$ is injective by \cite[Theorem 15]{rosso}. Let $U$ denote the subalgebra of the $q$-shuffle algebra $\mV$ generated by $x,y$. By construction, the image of $\natural$ is $U$.
\medskip
\noindent We now mention some special words in $\mV$ that will be useful later.
\begin{definition}\label{def:alternating}\rm
(See \cite[Definition 5.2, Lemma 5.4]{ter_alternating}.) We define $G_0=\tilde{G}_0=\m1$.
\medskip
\noindent For $n \in \mN$, define
\[G_{n+1}=G_n yx,\hspace{2em} \tilde{G}_{n+1}=\tilde{G}_n xy,\]
\[W_{-n}=\tilde{G}_nx,\hspace{2em} W_{n+1}=y\tilde{G}_n.\]
\medskip
\noindent The words $\{W_{-n}\}_{n \in \mN}$, $\{W_{n+1}\}_{n \in \mN}$, $\{G_{n}\}_{n \in \mN}$, $\{\tilde{G}_{n}\}_{n \in \mN}$ are called \textit{alternating}.
\end{definition}
\begin{example}\label{ex:alternating}\rm
We have
\begin{equation*}
\begin{split}
&W_0=x, \hspace{2em} W_{-1}=xyx, \hspace{2em} W_{-2}=xyxyx, \hspace{2em} \ldots \\
&W_1=y, \hspace{2em} W_2=yxy, \hspace{2em} W_3=yxyxy, \hspace{2em} \ldots \\
&G_1=yx, \hspace{2em} G_2=yxyx, \hspace{2em} G_3=yxyxyx, \hspace{2em} \ldots \\
&\tilde{G}_1=xy, \hspace{2em} \tilde{G}_2=xyxy, \hspace{2em} \tilde{G}_3=xyxyxy, \hspace{2em} \ldots
\end{split}
\end{equation*}
\end{example}
\noindent By \cite[Theorem 8.3]{ter_alternating}, the alternating words are contained in $U$.
\medskip
\noindent It is shown in \cite[Proposition 5.10]{ter_alternating} that with respect to $\star$, $\{W_{-n}\}_{n \in \mN}$ mutually commute, $\{W_{n+1}\}_{n \in \mN}$ mutually commute, $\{G_{n}\}_{n \in \mN}$ mutually commute, and $\{\tilde{G}_{n}\}_{n \in \mN}$ mutually commute. Furthermore, by \cite[Theorem 10.1]{ter_alternating} the alternating words $\{W_{-n}\}_{n \in \mN}$, $\{W_{n+1}\}_{n \in \mN}$, $\{\tilde{G}_{n+1}\}_{n \in \mN}$ form a PBW basis for $U$.
\medskip
\noindent In this paper, we focus on the alternating words $\{\tilde{G}_n\}_{n \in \mN}$. Consider their generating function
\[\tilde{G}(t)=\sum_{n=0}^\infty \tilde{G}_nt^n.\]
We will be discussing the multiplicative inverse of $\tilde{G}(t)$ with respect to $\star$. We now introduce this inverse.
\begin{definition}\label{def:Dt}\rm
(See \cite[Definition 9.5]{ter_alternating}.) We define the elements $\{D_n\}_{n \in \mN}$ of $U$ in the following recursive way:
\begin{equation}\label{convolution}
D_0=\m1, \hspace{4em} D_n=-\sum_{k=0}^{n-1}D_k \star \tilde{G}_{n-k} \hspace{1.5em} (n \geq 1).
\end{equation}
Define the generating function
\begin{equation*}\label{eq:Dt}
D(t)=\sum_{n=0}^\infty D_nt^n.
\end{equation*}
\end{definition}
\begin{lemma}\label{lem:Dtinverse}\rm
(See \cite[Lemma 4.1]{ter_conjecture}.) The generating function $D(t)$ is the multiplicative inverse of $\tilde{G}(t)$ with respect to $\star$. In other words,
\begin{equation}\label{eq:Dtinverse}
\tilde{G}(t) \star D(t)=\m1=D(t) \star \tilde{G}(t).
\end{equation}
\end{lemma}
\begin{proof}
The relation \eqref{eq:Dtinverse} can be checked routinely using \eqref{convolution}.
\end{proof}
\noindent For $n \in \mN$ we can calculate $D_n$ recursively using \eqref{convolution}.
\begin{example}\label{ex:Dn}\rm
We list $D_n$ for $0 \leq n \leq 3$.
\[
D_0=\m1, \hspace{4em} D_1=-xy, \hspace{4em} D_2=xyxy+[2]_q^2xxyy,
\]
\[
D_3=-xyxyxy-[2]_q^2xxyyxy-[2]_q^2xyxxyy-[2]_q^4xxyxyy-[2]_q^2[3]_q^2xxxyyy.
\]
\end{example}
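\medskip
\noindent As a computational check of Example \ref{ex:Dn}, the recursion
\eqref{convolution} can be run directly on top of the $q$-shuffle sketch
given earlier (the helpers below are illustrative only):
\begin{verbatim}
def qint(n):                      # [n]_q
    return sum(q**(n - 1 - 2*i) for i in range(n))

def star(p1, p2):                 # bilinear extension of shuffle
    out = {}
    for w1, c1 in p1.items():
        for w2, c2 in p2.items():
            for w, c in shuffle(w1, w2).items():
                out[w] = sp.expand(out.get(w, 0) + c1 * c2 * c)
    return {w: c for w, c in out.items() if c != 0}

def G_tilde(n):
    return {'xy' * n: sp.Integer(1)}

def D(n, memo={0: {'': sp.Integer(1)}}):
    if n not in memo:
        total = {}
        for k in range(n):
            for w, c in star(D(k), G_tilde(n - k)).items():
                total[w] = sp.expand(total.get(w, 0) - c)
        memo[n] = {w: c for w, c in total.items() if c != 0}
    return memo[n]

# e.g. sp.simplify(D(2)['xxyy'] - qint(2)**2) == 0
\end{verbatim}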
\noindent We are going to obtain a closed formula for $D_n$.
\medskip
\noindent To motivate the formula, let us examine Example \ref{ex:Dn}. We can see that each $D_n$ is a linear combination of words of length $2n$, and each coefficient is equal to $(-1)^n$ times a square. Furthermore, the words appearing in the linear combination have a certain type said to be Catalan. We now recall the definition of a Catalan word.
\begin{definition}\label{def:Cat}\rm
(See \cite[Definition 1.3]{ter_catalan}.) Define $\overline{x}=1$ and $\overline{y}=-1$. A word $a_1 \cdots a_k$ is \textit{Catalan} whenever $\overline{a}_1+\cdots+\overline{a}_i \geq 0$ for $1 \leq i \leq k-1$ and $\overline{a}_1+\cdots+\overline{a}_k=0$. The length of a Catalan word is always even. For $n \in \mN$, let $\Cat_n$ denote the set of all Catalan words of length $2n$.
\end{definition}
\begin{example}\label{ex:Catn}\rm
We describe $\Cat_n$ for $0 \leq n \leq 3$.
\[
\Cat_0=\{\m1\}, \hspace{4em} \Cat_1=\{xy\}, \hspace{4em} \Cat_2=\{xyxy,xxyy\},
\]
\[
\Cat_3=\{xyxyxy,xxyyxy,xyxxyy,xxyxyy,xxxyyy\}.
\]
\end{example}
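\medskip
\noindent For computations it is convenient to enumerate $\Cat_n$
directly from Definition \ref{def:Cat}; a small illustrative generator:
\begin{verbatim}
def catalan_words(n):
    """Yield all Catalan words of length 2n over {'x','y'}."""
    def rec(w, h):                    # h = running partial sum
        if len(w) == 2 * n:
            if h == 0:
                yield w
            return
        if h + 1 <= 2 * n - len(w) - 1:   # can still return to 0
            yield from rec(w + 'x', h + 1)
        if h > 0:
            yield from rec(w + 'y', h - 1)
    yield from rec('', 0)

# len(list(catalan_words(n))) is the Catalan number: 1, 1, 2, 5, 14, ...
\end{verbatim}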
\noindent We observe that for $0 \leq n \leq 3$ each $D_n$ is a linear combination of Catalan words of length $2n$. We now show that this observation is true for all $n \in \mN$.
\begin{proposition}\label{prop:Dndecomp}\rm
For $n \in \mN$, $D_n$ is contained in the span of $\Cat_n$.
\end{proposition}
\begin{proof}
For $n \in \mN$, by Definition \ref{def:alternating} we have that $\tilde{G}_n=xyxy \cdots xy$ where the $xy$ is repeated $n$ times. The word $\tilde{G}_n$ is Catalan by Definition \ref{def:Cat}. Note that the $q$-shuffle product of two Catalan words is a linear combination of Catalan words. The result follows by \eqref{convolution} and induction on $n$.
\end{proof}
\begin{definition}\label{def:Dn}\rm
For $n \in \mN$ and a word $w \in \Cat_n$, let $(-1)^nD(w)$ denote the coefficient of $w$ in $D_n$. In other words,
\begin{equation}\label{eq:Dn}
D_n=(-1)^n\sum_{w \in \Cat_n}D(w)w.
\end{equation}
\end{definition}
\begin{example}\rm
In the table below, we list the Catalan words $w$ of length $\leq 6$ and the corresponding $D(w)$.
\begin{center}
\begin{tabular}{ c|ccccccccc }
$w$ & $\m1$ & $xy$ & $xyxy$ & $xxyy$ & $xyxyxy$ & $xxyyxy$ & $xyxxyy$ & $xxyxyy$ & $xxxyyy$\\
\hline
$D(w)$ & 1 & 1 & 1 & $[2]_q^2$ & 1 & $[2]_q^2$ & $[2]_q^2$ & $[2]_q^4$ & $[2]_q^2[3]_q^2$
\end{tabular}
\end{center}
\end{example}
\noindent By \eqref{eq:Dn}, our goal of finding a closed formula for $D_n$ reduces to finding a closed formula for $D(w)$ where $w$ is Catalan. The following is the main theorem of this paper.
\begin{theorem}\label{thm:main}\rm
For $n \in \mN$ and a word $w=a_1 \cdots a_{2n} \in \Cat_n$, we have
\begin{equation}\label{eq:Dw}
D(w)=\prod_{i=1}^{2n}\left[\overline{a}_1+\cdots+\overline{a}_{i-1}+(\overline{a}_i+1)/2\right]_q.
\end{equation}
Moreover,
\begin{equation}\label{eq:Dw1}
D(w)=E(w)^2,
\end{equation}
where
\begin{equation}\label{eq:Dw2}
E(w)=\prod_{\substack{1 \leq i \leq 2n \\ a_i=x}}[\overline{a}_1+\cdots+\overline{a}_i]_q=\prod_{\substack{1 \leq i \leq 2n \\ a_i=y}}[\overline{a}_1+\cdots+\overline{a}_{i-1}]_q.
\end{equation}
\end{theorem}
\begin{remark}\rm
There is a striking resemblance between \eqref{eq:Dw} and \cite[Definition 2.5]{ter_catalan}. While not explicitly used in our proofs, this resemblance did motivate our proof techniques and our interest in this entire topic.
\end{remark}
\section{The proof of Theorem \ref{thm:main}}
In this section, we prove Theorem \ref{thm:main}.
\begin{definition}\label{def:cDw}\rm
For $n \in \mN$ and a word $w=a_1 \cdots a_{2n} \in \Cat_n$, we define
\begin{equation*}\label{Dw}
\cD(w)=\prod_{i=1}^{2n}\left[\overline{a}_1+\cdots+\overline{a}_{i-1}+(\overline{a}_i+1)/2\right]_q,
\end{equation*}
\[\cD_x(w)=\prod_{\substack{1 \leq i \leq 2n \\ a_i=x}}[\overline{a}_1+\cdots+\overline{a}_i]_q,\]
\[\cD_y(w)=\prod_{\substack{1 \leq i \leq 2n \\ a_i=y}}[\overline{a}_1+\cdots+\overline{a}_{i-1}]_q.\]
\end{definition}
\noindent In order to prove Theorem \ref{thm:main}, we establish the following for all Catalan words $w$:
\begin{enumerate}[(i)]
\item $\cD(w)=D(w)$;
\item $\cD(w)=\cD_x(w)\cD_y(w)$;
\item $\cD_x(w)=\cD_y(w)$.
\end{enumerate}
\noindent Item (i) will be achieved in Theorem \ref{thm:DD}.
\medskip
\noindent Item (ii) will be achieved in Lemma \ref{rem:Dw}.
\medskip
\noindent Item (iii) will be achieved in Lemma \ref{rem:Dwprofile0}.
\begin{lemma}\label{rem:Dw}\rm
For any Catalan word $w$, we have
\[
\cD(w)=\cD_x(w)\cD_y(w).
\]
\end{lemma}
\begin{proof}
Note that $(\overline{x}+1)/2=1$ and $(\overline{y}+1)/2=0$, so the result follows by Definition \ref{def:cDw}.
\end{proof}
\noindent Next we will show item (iii). In order to do this, we now recall the concept of elevation sequences and profiles.
\begin{definition}\label{def:elevation_sequence}\rm
(See \cite[Definition 2.6]{ter_catalan}.) For $n \in \mN$ and a word $w=a_1 \cdots a_{n}$, its \textit{elevation sequence} is $(e_0,\ldots,e_{n})$, where $e_0=0$ and $e_i=\overline{a}_1+\cdots+\overline{a}_i$ for $1 \leq i \leq n$.
\end{definition}
\begin{example}\rm
In the table below, we list the Catalan words $w$ of length $\leq 6$ and the corresponding elevation sequences.
\begin{center}
\begin{tabular}{ c|c }
$w$ & elevation sequence of $w$\\
\hline
$1$ & $(0)$\\
$xy$ & $(0,1,0)$\\
$xyxy$ & $(0,1,0,1,0)$\\
$xxyy$ & $(0,1,2,1,0)$\\
$xyxyxy$ & $(0,1,0,1,0,1,0)$\\
$xxyyxy$ & $(0,1,2,1,0,1,0)$\\
$xyxxyy$ & $(0,1,0,1,2,1,0)$\\
$xxyxyy$ & $(0,1,2,1,2,1,0)$\\
$xxxyyy$ & $(0,1,2,3,2,1,0)$
\end{tabular}
\end{center}
\end{example}
\begin{definition}\label{def:profile}\rm
(See \cite[Definition 2.8]{ter_catalan}.) For $n \in \mN$ and a word $w=a_1 \cdots a_{n}$, its \textit{profile} is the subsequence of its elevation sequence consisting of the $e_i$ that satisfy one of the following conditions:
\begin{itemize}
\item $i=0$;
\item $i=n$;
\item $1 \leq i \leq n-1$ and $e_{i+1}-e_i \neq e_i-e_{i-1}$.
\end{itemize}
In other words, the profile of a word $w$ is the subsequence of the elevation sequence of $w$ consisting of the end points and turning points.
\medskip
\noindent By a \textit{Catalan profile} we mean the profile of a Catalan word.
\end{definition}
\begin{example}\rm
In the table below, we list the Catalan words $w$ of length $\leq 6$ and the corresponding profiles.
\begin{center}
\begin{tabular}{ c|c }
$w$ & profile of $w$\\
\hline
$1$ & $(0)$\\
$xy$ & $(0,1,0)$\\
$xyxy$ & $(0,1,0,1,0)$\\
$xxyy$ & $(0,2,0)$\\
$xyxyxy$ & $(0,1,0,1,0,1,0)$\\
$xxyyxy$ & $(0,2,0,1,0)$\\
$xyxxyy$ & $(0,1,0,2,0)$\\
$xxyxyy$ & $(0,2,1,2,0)$\\
$xxxyyy$ & $(0,3,0)$\\
\end{tabular}
\end{center}
\end{example}
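\medskip
\noindent The elevation sequence and profile are straightforward to
compute; the following illustrative helpers implement Definitions
\ref{def:elevation_sequence} and \ref{def:profile}:
\begin{verbatim}
def elevation(w):
    """Elevation sequence (e_0, ..., e_n) of a word over {'x','y'}."""
    e = [0]
    for a in w:
        e.append(e[-1] + (1 if a == 'x' else -1))
    return e

def profile(w):
    """End points plus turning points of the elevation sequence."""
    e, n = elevation(w), len(w)
    if n == 0:
        return [0]
    p = [e[0]]
    for i in range(1, n):
        if e[i + 1] - e[i] != e[i] - e[i - 1]:
            p.append(e[i])
    p.append(e[n])
    return p

# e.g. profile('xxyxyy') == [0, 2, 1, 2, 0]
\end{verbatim}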
\begin{lemma}\label{lem:Dwprofile0}\rm
For a Catalan word $w$ with profile $(l_0,h_1,l_1,\ldots,h_r,l_r)$, we have
\[
\cD_x(w)=\dfrac{[h_1]_q^!\cdots[h_r]_q^!}{[l_0]_q^!\cdots[l_r]_q^!},
\]
\[
\cD_y(w)=\dfrac{[h_1]_q^!\cdots[h_r]_q^!}{[l_0]_q^!\cdots[l_r]_q^!}.
\]
\end{lemma}
\begin{proof}
Follows from \cite[Lemma 2.10]{ter_catalan} by direct computation.
\end{proof}
\begin{lemma}\label{rem:Dwprofile0}\rm
For any Catalan word $w$, we have
\[\cD_x(w)=\cD_y(w).\]
\end{lemma}
\begin{proof}
Follows from Lemma \ref{lem:Dwprofile0}.
\end{proof}
\begin{lemma}\label{lem:Dwprofile}\rm
For $n \in \mN$ and a word $w \in \Cat_n$ with profile $(l_0,h_1,l_1,\ldots,h_r,l_r)$, we have
\[\cD(w)=\cD_x(w)^2=\cD_y(w)^2=\left(\dfrac{[h_1]_q^!\cdots[h_r]_q^!}{[l_0]_q^!\cdots[l_r]_q^!}\right)^2.\]
\end{lemma}
\begin{proof}
Follows from Lemmas \ref{rem:Dw}, \ref{lem:Dwprofile0}, \ref{rem:Dwprofile0}.
\end{proof}
\noindent Motivated by Lemma \ref{lem:Dwprofile}, we make the following definition.
\begin{definition}\label{def:cDwprofile}\rm
Given a Catalan profile $(l_0,h_1,l_1,\ldots,h_r,l_r)$, define
\[\cD(l_0,h_1,l_1,\ldots,h_r,l_r)=\left(\dfrac{[h_1]_q^!\cdots[h_r]_q^!}{[l_0]_q^!\cdots[l_r]_q^!}\right)^2.\]
\end{definition}
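\medskip
\noindent In code, Definition \ref{def:cDwprofile} reads as follows
(reusing qint from the earlier sketch; illustrative only):
\begin{verbatim}
def qfact(n):                     # [n]_q!
    out = sp.Integer(1)
    for k in range(1, n + 1):
        out *= qint(k)
    return out

def cD_profile(p):
    """cD of a Catalan profile p = (l_0, h_1, l_1, ..., h_r, l_r)."""
    ls, hs = p[0::2], p[1::2]
    num = sp.prod([qfact(h) for h in hs])
    den = sp.prod([qfact(l) for l in ls])
    return sp.simplify((num / den)**2)
\end{verbatim}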
\begin{definition}\label{def:cDn}\rm
For $n \in \mN$, we define
\[\cD_n=(-1)^n\sum_{w \in \Cat_n}\cD(w)w.\]
We interpret $\cD_0=\m1$.
\end{definition}
\noindent Next we will achieve a recurrence relation involving the $\cD_n$. This will be accomplished in Proposition \ref{thm:Dnrecursion}.
\begin{lemma}\label{lem:Dprofile}\rm
For a Catalan profile $(l_0,h_1,l_1,\ldots,h_r,l_r)$ with $r \geq 1$,
\begin{align*}
&\cD(l_0,h_1,l_1,\ldots,h_r,l_r)\\
&=\sum_{j=\xi}^{r-1}\cD(l_0,h_1,l_1,\ldots,h_j,l_j,h_{j+1}-1,l_{j+1}-1,\ldots,l_{r-1}-1,h_r-1,l_r)\left([h_{j+1}]_q^2-[l_j]_q^2\right),
\end{align*}
where $\xi=\max\{j \mid 0 \leq j \leq r-1,l_j=0\}$.
\end{lemma}
\begin{proof}
To prove the above equation, consider the quotient of the right-hand side divided by the left-hand side. We will show that this quotient is equal to $1$.
\medskip
\noindent By Definition \ref{def:cDwprofile}, the above quotient is equal to
\begin{align*}
&\sum_{j=\xi}^{r-1}\frac{[l_{j+1}]_q^2\cdots[l_{r-1}]_q^2}{[h_{j+1}]_q^2\cdots[h_r]_q^2}\left([h_{j+1}]_q^2-[l_j]_q^2\right) \\
&=\frac{1}{[h_{\xi+1}]_q^2\cdots[h_r]_q^2}\sum_{j=\xi}^{r-1}[h_{\xi+1}]_q^2\cdots[h_j]_q^2[l_{j+1}]_q^2\cdots[l_{r-1}]_q^2\left([h_{j+1}]_q^2-[l_j]_q^2\right) \\
&=\frac{1}{[h_{\xi+1}]_q^2\cdots[h_r]_q^2}\sum_{j=\xi}^{r-1}\left([h_{\xi+1}]_q^2\cdots[h_{j+1}]_q^2[l_{j+1}]_q^2\cdots[l_{r-1}]_q^2-[h_{\xi+1}]_q^2\cdots[h_j]_q^2[l_j]_q^2\cdots[l_{r-1}]_q^2\right) \\
&=\frac{1}{[h_{\xi+1}]_q^2\cdots[h_r]_q^2}\left([h_{\xi+1}]_q^2\cdots[h_r]_q^2-[l_\xi]_q^2\cdots[l_{r-1}]_q^2\right) \\
&=1,
\end{align*}
where the last step follows from $l_\xi=0$.
\end{proof}
\begin{lemma}\label{lem:innerprod0}\rm
For any Catalan word $w=a_1 \cdots a_m$, we have
\[
\frac{q x \star w-q^{-1} w \star x}{q-q^{-1}}=\sum_{i=0}^ma_1 \cdots a_ixa_{i+1} \cdots a_m[1+2\overline{a}_1+\cdots+2\overline{a}_i]_q.
\]
\end{lemma}
\begin{proof}
By the definition of the $q$-shuffle product, we have
\begin{align*}
& \frac{q x \star w-q^{-1} w \star x}{q-q^{-1}} \\
& =\sum_{i=0}^m a_1 \cdots a_ixa_{i+1} \cdots a_m \hspace{0.25em} \frac{q^{1+2\overline{a}_1+\cdots+2\overline{a}_i}-q^{-1+2\overline{a}_{i+1}+\cdots+2\overline{a}_m}}{q-q^{-1}} \\
& =\sum_{i=0}^m a_1 \cdots a_ixa_{i+1} \cdots a_m \hspace{0.25em} \frac{q^{1+2\overline{a}_1+\cdots+2\overline{a}_i}-q^{-1-2\overline{a}_1-\cdots-2\overline{a}_i}}{q-q^{-1}} \\
& =\sum_{i=0}^m a_1 \cdots a_ixa_{i+1} \cdots a_m[1+2\overline{a}_1+\cdots+2\overline{a}_i]_q.
\end{align*}
\end{proof}
\noindent For notational convenience, we introduce a bilinear form on $\mV$.
\begin{definition}\label{def:biform}\rm
(See \cite[Page 6]{ter_catalan}.) Let $(~,~):\mV \times \mV \to \mF$ denote the bilinear form such that $(w,w)=1$ for any word $w$ in $\mV$ and $(w,v)=0$ for any distinct words $w,v$ in $\mV$.
\end{definition}
\noindent Observe that $(~,~)$ is non-degenerate and symmetric. For any word $w$ in $\mV$ and any $u \in \mV$, the scalar $(w,u)$ is the coefficient of $w$ in $u$.
\begin{lemma}\label{lem:innerprod}\rm
For any word $v$ and any Catalan word $w=a_1 \cdots a_m$, consider the scalar
\begin{equation}\label{eq:innerprod}
\left(\frac{(q x \star w-q^{-1} w \star x)y}{q-q^{-1}},v\right).
\end{equation}
\begin{enumerate}
\item If $v$ is Catalan and of length $m+2$, then the scalar \eqref{eq:innerprod} is equal to
\[\sum_i[1+2\overline{a}_1+\cdots+2\overline{a}_i]_q,\]
where the sum is over all $i$ $(1 \leq i \leq m)$ such that $v=a_1 \cdots a_ixa_{i+1} \cdots a_my$.
\item If $v$ is not Catalan or is not of length $m+2$, then the scalar \eqref{eq:innerprod} is equal to $0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Follows from Lemma \ref{lem:innerprod0}.
\end{proof}
\begin{lemma}\label{lem:Dvdecomp}\rm
For $n \geq 1$ and a word $v \in \Cat_n$, we have
\[\cD(v)=\sum_{w \in \Cat_{n-1}}\cD(w)\left(\frac{(q x \star w-q^{-1} w \star x)y}{q-q^{-1}},v\right).\]
\end{lemma}
\begin{proof}
By Lemma \ref{lem:innerprod}, it suffices to show that $\cD(v)$ is equal to
\begin{equation}\label{eq:cD(v)}
\sum_{w,i}\cD(w)[1+2\overline{a}_1+\cdots+2\overline{a}_i]_q,
\end{equation}
where the sum is over all ordered pairs $(w,i)$ such that $w=a_1 \cdots a_{2n-2} \in \Cat_{n-1}$ and $v=a_1 \cdots a_ixa_{i+1} \cdots a_{2n-2}y$.
\medskip
\noindent Let $(l_0,h_1,l_1,\ldots,h_r,l_r)$ denote the profile of $v$ and let $\xi=\max\{j \mid 0 \leq j \leq r-1,l_j=0\}$.
\medskip
\noindent To compute the sum \eqref{eq:cD(v)}, we study what kind of words $w$ are being summed over and what is the coefficient for each corresponding $\cD(w)$.
\medskip
\noindent For any $w$ being summed over in \eqref{eq:cD(v)}, its profile must be of the form
\[(l_0,h_1,l_1,\ldots,h_j,l_j,h_{j+1}-1,l_{j+1}-1,\ldots,l_{r-1}-1,h_r-1,l_r)\]
for some $j$ such that $\xi \leq j \leq r-1$. (If $j<\xi$, then the profile of $w$ contains $l_\xi-1=-1$, which means $w$ is not Catalan.)
\medskip
\noindent For such $w$, the coefficient of $\cD(w)$ in \eqref{eq:cD(v)} is
\[\sum_{s=l_j}^{h_{j+1}-1}[1+2s]_q,\]
which is equal to
\[[h_{j+1}]_q^2-[l_j]_q^2\]
by direct computation.
\medskip
\noindent Therefore, by Lemma \ref{lem:Dprofile} we have
\begin{align*}
&\sum_{w,i}\cD(w)[1+2\overline{a}_1+\cdots+2\overline{a}_i]_q \\
&=\sum_{j=\xi}^{r-1}\cD(l_0,h_1,l_1,\ldots,h_j,l_j,h_{j+1}-1,l_{j+1}-1,\ldots,l_{r-1}-1,h_r-1,l_r)\left([h_{j+1}]_q^2-[l_j]_q^2\right) \\
&=\cD(l_0,h_1,l_1,\ldots,h_r,l_r) \\
&=\cD(v).
\end{align*}
\end{proof}
\begin{proposition}\label{thm:Dnrecursion}\rm
For $n \geq 1$,
\begin{equation}\label{eq:Dnrecursion}
\cD_n=\frac{(q^{-1} \cD_{n-1} \star x-q x \star \cD_{n-1})y}{q-q^{-1}}.
\end{equation}
\end{proposition}
\begin{proof}
Given any word $v$, we will show that its inner product with the right-hand side of \eqref{eq:Dnrecursion} coincides with $(\cD_n,v)$.
\medskip
\noindent If $v$ does not have length $2n$, then the two inner products are both $0$.
\medskip
\noindent If $v$ is not Catalan, then $(\cD_n,v)=0$ by Definition \ref{def:cDn}, and
\[\left(\frac{(q^{-1} \cD_{n-1} \star x-q x \star \cD_{n-1})y}{q-q^{-1}},v\right)=0\]
by Definition \ref{def:cDn} and Lemma \ref{lem:innerprod}.
\medskip
\noindent If $v \in \Cat_n$, then by Definition \ref{def:cDn} and Lemma \ref{lem:Dvdecomp},
\begin{align*}
&\left(\frac{(q^{-1} \cD_{n-1} \star x-q x \star \cD_{n-1})y}{q-q^{-1}},v\right) \\
&=(-1)^n\sum_{w \in \Cat_{n-1}}\cD(w)\left(\frac{(q x \star w-q^{-1} w \star x)y}{q-q^{-1}},v\right) \\
&=(-1)^n\cD(v) \\
&=(\cD_n,v).
\end{align*}
\end{proof}
\begin{definition}\label{def:cDt}\rm
(See \cite[Definition 9.11]{ter_alternating}.) We define a generating function
\[
\cD(t)=\sum_{n=0}^\infty \cD_nt^n,
\]
where $\cD_n$ is from Definition \ref{def:cDn}.
\end{definition}
\noindent Next we will show that $\cD(t)=D(t)$. To do this, we will show that $\cD(t)$ is the multiplicative inverse of $\tilde{G}(t)$ with respect to $\star$. This will be accomplished in Proposition \ref{prop:inverse}.
\begin{lemma}\label{lem:inverse0}\rm
For $k \in \mN$, we have
\[
q\tilde{G}_k \star x=(q-q^{-1})W_{-k}+q^{-1}x \star \tilde{G}_k.
\]
\end{lemma}
\begin{proof}
Follows from the definition of $\star$ by direct computation.
\end{proof}
\begin{lemma}\label{lem:inverse}\rm
For $n \geq 1$,
\begin{equation}\label{convolution2}
\cD_n=-\sum_{k=1}^n\tilde{G}_k \star \cD_{n-k}.
\end{equation}
\end{lemma}
\begin{proof}
We use induction on $n$.
\medskip
\noindent First assume that $n=1$. Then \eqref{convolution2} holds because
\[\cD_0=\m1, \hspace{4em} \cD_1=-xy, \hspace{4em} \tilde{G}_1=xy.\]
Next assume that $n \geq 2$. By induction,
\begin{equation}\label{i1}
\cD_{n-1}=-\sum_{k=1}^{n-1}\tilde{G}_k \star \cD_{n-1-k}.
\end{equation}
In order to prove \eqref{convolution2}, it suffices to show
\begin{equation}\label{i1.5}
\sum_{k=1}^{n-1}\tilde{G}_k \star \cD_{n-k}=-\cD_n-\tilde{G}_n.
\end{equation}
For $1 \leq k \leq n-1$ we examine the $k$-summand in \eqref{i1.5}. We use the following notation: for a word $w$ ending with the letter $y$, the word $wy^{-1}$ is obtained from $w$ by removing the rightmost $y$. Furthermore, for a linear combination $A$ of words ending in $y$, the element $Ay^{-1}$ is obtained from $A$ by removing the rightmost $y$ of each word in the linear combination.
\medskip
\noindent Note that $\tilde{G}_k$ is a word ending in $y$, and $\cD_{n-k}$ is a linear combination of Catalan words which end in $y$ by Definition \ref{def:Cat}, so
\begin{equation}\label{i2}
\tilde{G}_k \star \cD_{n-k}=(\tilde{G}_ky^{-1} \star \cD_{n-k})y+(\tilde{G}_k \star \cD_{n-k}y^{-1})y.
\end{equation}
We focus on the second term of the right-hand side of \eqref{i2}. By Proposition \ref{thm:Dnrecursion} and Lemma \ref{lem:inverse0}, we have
\begin{equation*}\label{i3}
\begin{split}
&\tilde{G}_k \star \cD_{n-k}y^{-1} \\
&=-\frac{1}{q-q^{-1}}\tilde{G}_k \star (qx \star \cD_{n-k-1}-q^{-1}\cD_{n-k-1} \star x) \\
&=-\frac{q}{q-q^{-1}}\tilde{G}_k \star x \star \cD_{n-k-1}+\frac{q^{-1}}{q-q^{-1}}\tilde{G}_k \star \cD_{n-k-1} \star x \\
&=-W_{-k} \star \cD_{n-k-1}-\frac{q^{-1}}{q-q^{-1}}x \star \tilde{G}_k \star \cD_{n-k-1}+\frac{q^{-1}}{q-q^{-1}}\tilde{G}_k \star \cD_{n-k-1} \star x.
\end{split}
\end{equation*}
By the above comment, and since $\tilde{G}_ky^{-1}=W_{-k+1}$, we can write \eqref{i2} as
\begin{equation*}\label{i4}
\begin{split}
&\tilde{G}_k \star \cD_{n-k} \\
&=(W_{-k+1} \star \cD_{n-k})y-(W_{-k} \star \cD_{n-k-1})y \\
&\hspace{4em} -\frac{q^{-1}}{q-q^{-1}}(x \star \tilde{G}_k \star \cD_{n-k-1})y+\frac{q^{-1}}{q-q^{-1}}(\tilde{G}_k \star \cD_{n-k-1} \star x)y.
\end{split}
\end{equation*}
We now sum the above equation over $k$ from $1$ to $n-1$, using \eqref{i1} and Proposition \ref{thm:Dnrecursion}. We have
\begin{equation*}
\begin{split}
&\sum_{k=1}^{n-1}\tilde{G}_k \star \cD_{n-k} \\
&=(W_0 \star \cD_{n-1})y-(W_{-n+1} \star \cD_0)y+\frac{q^{-1}}{q-q^{-1}}(x \star \cD_{n-1})y-\frac{q^{-1}}{q-q^{-1}}(\cD_{n-1} \star x)y \\
&=(x \star \cD_{n-1})y-\tilde{G}_n+\frac{q^{-1}}{q-q^{-1}}(x \star \cD_{n-1})y-\frac{q^{-1}}{q-q^{-1}}(\cD_{n-1} \star x)y \\
&=\frac{q}{q-q^{-1}}(x \star \cD_{n-1})y-\frac{q^{-1}}{q-q^{-1}}(\cD_{n-1} \star x)y-\tilde{G}_n \\
&=-\cD_n-\tilde{G}_n.
\end{split}
\end{equation*}
We have verified \eqref{i1.5}, and \eqref{convolution2} follows.
\end{proof}
\begin{definition}\label{def:zeta}\rm
(See \cite[Page 5]{ter_catalan}.) Let $\zeta:\mV \to \mV$ denote the $\mF$-linear map such that
\begin{itemize}
\item $\zeta(x)=y$,
\item $\zeta(y)=x$,
\item For any word $a_1 \cdots a_m$,
\[\zeta(a_1 \cdots a_m)=\zeta(a_m) \cdots \zeta(a_1).\]
\end{itemize}
\end{definition}
\noindent By the above definition, $\zeta$ is an antiautomorphism on the free algebra $\mV$. One can routinely check using the definition of $\star$ that $\zeta$ is also an antiautomorphism on the $q$-shuffle algebra $\mV$. Moreover, $\zeta$ fixes $\tilde{G}_n$ and $\cD_n$ for all $n \in \mN$.
\begin{proposition}\label{prop:inverse}\rm
We have
\[
\tilde{G}(t) \star \cD(t)=\m1=\cD(t) \star \tilde{G}(t).
\]
\end{proposition}
\begin{proof}
We have $\tilde{G}_0=\m1$ and $\cD_0=\m1$. By Lemma \ref{lem:inverse}, for any $n \geq 1$ we have
\begin{equation*}
\sum_{k=0}^n\tilde{G}_k \star \cD_{n-k}=0.
\end{equation*}
By these comments,
\begin{equation}\label{i6}
\tilde{G}(t) \star \cD(t)=\m1.
\end{equation}
Applying $\zeta$ to \eqref{i6}, we have
\[\cD(t) \star \tilde{G}(t)=\m1.\]
\end{proof}
\begin{theorem}\label{thm:DD}\rm
The following hold.
\begin{enumerate}[(i)]
\item $\cD(t)=D(t)$.
\item $\cD_n=D_n$ for any $n \in \mN$.
\item $\cD(w)=D(w)$ for any Catalan word $w$.
\end{enumerate}
\end{theorem}
\begin{proof}
Comparing Lemma \ref{lem:Dtinverse} and Proposition \ref{prop:inverse}, we obtain item (i). Item (ii) follows from item (i) by Definitions \ref{def:Dt} and \ref{def:cDt}. Item (iii) follows from item (ii) by Definitions \ref{def:Dn} and \ref{def:cDn}.
\end{proof}
\noindent This finishes our proof of Theorem \ref{thm:main}.
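\medskip
\noindent Combining the sketches above gives a direct machine check of
Theorem \ref{thm:main} for small $n$ (illustrative only):
\begin{verbatim}
def D_closed(w):
    """The closed formula for D(w), via the elevation sequence."""
    e = elevation(w)
    val = sp.Integer(1)
    for i, a in enumerate(w, start=1):
        val *= qint(e[i - 1] + (1 if a == 'x' else 0))
    return sp.expand(val)

for n in range(1, 5):
    for w, c in D(n).items():
        assert sp.simplify((-1)**n * c - D_closed(w)) == 0
\end{verbatim}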
\section{Some facts about $D_n$}
In this section, we state some facts about $D_n$ that we find attractive.
\begin{proposition}\label{prop:Dnpoly}\rm
(See \cite[Lemma 9.7]{ter_alternating}.) For $n \geq 1$,
\begin{itemize}
\item $D_n$ is a polynomial in $\tilde{G}_1,\ldots,\tilde{G}_n$ of degree $n$, where each $\tilde{G}_i$ is given the degree $i$,
\item $\tilde{G}_n$ is a polynomial in $D_1,\ldots,D_n$ of degree $n$, where each $D_i$ is given the degree $i$.
\end{itemize}
\end{proposition}
\begin{proposition}\label{prop:Dncomm}\rm
(See \cite[Lemma 9.10]{ter_alternating}.) For $n,m \in \mN$,
\[
D_n \star \tilde{G}_m=\tilde{G}_m \star D_n, \hspace{4em} D_n \star D_m=D_m \star D_n.
\]
\end{proposition}
\begin{proposition}\label{cor:Dnrecursion1}\rm
For $n \geq 1$,
\begin{equation}\label{eq:Dnrecursion1}
D_n=\frac{(q^{-1} D_{n-1} \star x-q x \star D_{n-1})y}{q-q^{-1}}.
\end{equation}
\end{proposition}
\begin{proof}
Follows from Proposition \ref{thm:Dnrecursion} and Theorem \ref{thm:DD}.
\end{proof}
\begin{proposition}\label{cor:Dnrecursion2}\rm
For $n \geq 1$,
\[D_n=\frac{x(q^{-1}y \star D_{n-1}-qD_{n-1} \star y)}{q-q^{-1}}.\]
\end{proposition}
\begin{proof}
Apply the antiautomorphism $\zeta$ to each side of \eqref{eq:Dnrecursion1}, and note that $D_n$ is invariant under $\zeta$.
\end{proof}
\noindent Recall that for a linear combination $A$ of words ending in $y$, the element $Ay^{-1}$ is obtained from $A$ by removing the rightmost $y$ of each word. Similarly, for a linear combination $B$ of words starting with $x$, the element $x^{-1}B$ is obtained from $B$ by removing the leftmost $x$ of each word.
\begin{proposition}\label{cor:Dnrecursion3}\rm
For $n \geq 2$,
\begin{equation}\label{eq:Dnrecursion3}
x^{-1}D_ny^{-1}+D_{n-1}=\frac{q^{-1}x^{-1}D_{n-1} \star x-q^3x \star x^{-1}D_{n-1}}{q-q^{-1}}.
\end{equation}
\end{proposition}
\begin{proof}
By the definition of the $q$-shuffle product, we have
\begin{equation*}\label{rec1}
x \star D_{n-1}=xD_{n-1}+q^2x(x \star x^{-1}D_{n-1}),
\end{equation*}
\begin{equation*}\label{rec2}
D_{n-1} \star x=xD_{n-1}+x(x^{-1}D_{n-1} \star x).
\end{equation*}
\medskip
The result follows from Proposition \ref{cor:Dnrecursion1} and the two equations above.
\end{proof}
\begin{proposition}\label{cor:Dnrecursion4}\rm
For $n \geq 2$,
\[x^{-1}D_ny^{-1}+D_{n-1}=\frac{q^{-1}y \star D_{n-1}y^{-1}-q^3D_{n-1}y^{-1} \star y}{q-q^{-1}}.\]
\end{proposition}
\begin{proof}
Apply the antiautomorphism $\zeta$ to each side of \eqref{eq:Dnrecursion3}, and note that $D_n$ is invariant under $\zeta$.
\end{proof}
\section{Acknowledgments}
The author is currently a Math Ph.D. student at the University of Wisconsin-Madison. The author would like to thank his supervisor, Professor Paul Terwilliger, for suggesting the paper topic and giving many helpful comments. The author would also like to thank his high school Math teacher, Yuefeng Feng, for guiding the author through an early tour of the fascinating world of combinatorics.
|
1,116,691,497,681 | arxiv | \section{Introduction}
Radio Frequency (RF)-powered cognitive radio networks are considered to be a promising solution that improves radio spectrum utilization and efficiency and addresses the energy constraint issue for low-power secondary systems, e.g., IoT systems~\cite{huynh2018}, \cite{li2018}, \cite{kang2018}. However, in RF-powered cognitive radio networks, RF-powered secondary transmitters typically require a long time period to harvest sufficient energy for their active transmissions. This may significantly deteriorate the network performance. Thus, RF-powered cognitive radio networks with ambient backscatter~\cite{liu2013} have been recently proposed. In the RF-powered backscatter cognitive radio network, a primary transmitter, e.g., a base station, transmits RF signals on a licensed channel. When the channel is busy, the secondary transmitters either transmit their data to a secondary gateway by using backscatter communications or harvest energy from the RF signals through RF energy harvesting techniques. When the channel is idle, the secondary transmitters use the harvested energy to transmit their data to the gateway. As such, the RF-powered backscatter cognitive radio network enables secondary systems to simultaneously optimize the spectrum usage and energy harvesting
to maximize their performance. However, one major problem in the RF-powered backscatter cognitive radio network is how the secondary gateway\footnote{We use ``secondary gateway'' and ``gateway'' interchangeably in the paper.} schedules the backscattering time, energy harvesting time, and transmission time among multiple secondary transmitters so as to maximize the network throughput.
To address the problem, optimization methods and game theory can be used. The authors in \cite{wang2018} optimized the time scheduling for the gateway in the RF-powered backscatter cognitive radio network by using a Stackelberg game. In the game, the gateway is the leader, and the secondary transmitters are the followers. The gateway first determines the spectrum sensing time and an interference price to maximize its revenue. Based on the time and price, each secondary transmitter determines the energy harvesting time, backscattering time, and transmission time so as to maximize its throughput. However, the proposed game requires complete and perfect sensing probability information, and thus the game cannot deal with the dynamics of the network environment.
To optimize the performance of RF-powered backscatter cognitive radio in a dynamic environment with large state and action spaces, the deep reinforcement learning (DRL) technique~\cite{mnih2015} can be adopted. In principle, the DRL implements a Deep Q-Network (DQN), i.e., the combination of a deep neural network and Q-learning, to approximate the Q-values of actions, i.e., decisions. Compared with conventional reinforcement learning, the DRL can significantly improve the learning performance and the learning speed. Therefore, in this paper, we propose to use the DRL for the time scheduling in the RF-powered backscatter cognitive radio network. In particular, we first formulate a stochastic optimization problem that maximizes the total throughput for the network. The DRL algorithm is adopted to achieve the optimal time scheduling policy for the secondary transmitters. To overcome the instability of the learning and to reduce the overestimation of action values, the Double DQN (DDQN) is used to implement the DRL algorithm. Simulation results show that the proposed DRL algorithm always achieves better performance than non-learning algorithms. To the best of our knowledge, this is the first paper that investigates an application of DRL in the RF-powered backscatter cognitive radio network.
The rest of this paper is organized as follows. Section II reviews
related work. Section III describes the system model, and Section IV presents the
problem formulation. Section V presents the DRL algorithm for the time scheduling in the RF-powered backscatter cognitive radio network. Section VI shows the performance evaluation
results. Section VII summarizes the paper.
\section{Related Work}
Backscatter communications systems can be optimized to achieve optimal throughput. In~\cite{hoang2017}, the authors considered the data scheduling and admission control problem of a backscatter sensor network. The authors formulated the problem as a Markov decision process, and a learning algorithm was applied to obtain the optimal policy that minimizes the weighted sum of delays of different types of data. In~\cite{lyu2018a}, the authors formulated an optimization problem for ambient backscatter communications networks. The problem aims to derive an optimal control policy for sleep and active mode switching and the reflection coefficient used in the active mode. However, only a single transmitter was considered. In~\cite{lyu2018b}, the authors extended the study in~\cite{lyu2018a} to a cognitive radio network. Specifically, the hybrid HTT and backscatter communications are adopted and integrated for a secondary user. The authors proposed an optimal time allocation scheme which is based on a convex optimization problem. While multiple secondary users were considered, the secondary users do not employ energy storage. The authors in~\cite{yang2018} analyzed the backscatter wireless powered communication system with multiple antennas by using the stochastic geometry approach. The energy and information outage probabilities in the energy harvesting and backscatter were derived. Then, the authors introduced an optimization problem for time allocation to maximize the overall network throughput. The authors in~\cite{kwan2018} introduced a two-way communication protocol for backscatter communications. The protocol combines time-switching and power-splitting receiver structures with backscatter communication. An optimization problem was formulated and solved to maximize the sum throughput of multiple nodes. Unlike the above works that consider time allocation, the authors in~\cite{gong2018} proposed a channel-aware rate adaptation protocol for backscatter networks. The protocol first probes the channel and then adjusts the transmission rate of the backscatter transmitter. The objective is to minimize the number of channels to be used for successfully communicating with all backscatter nodes.
Although a few works in the literature have studied the performance optimization of backscatter-based communications networks, almost all of them assume that information about the network is always available, which may not be realistic under random and unpredictable wireless environments. Therefore, this paper considers a scenario in which the network does not have complete information. The network needs to learn to assign the backscattering time, energy harvesting time, and transmission time to the secondary transmitters so as to maximize the total network throughput.
\section{System Model}
\begin{figure}[h]
\centering
\includegraphics[width=7.7cm, height = 5.6cm]{Backscatter_model}
\caption{Ambient backscatter model with multiple secondary transmitters.}
\label{backscatter_model}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{Timeframe}
\caption{Structure of frame $t$ and time scheduling.}
\label{Timeframe}
\end{figure}
We consider the RF-powered backscatter cognitive radio network. The network consists of a primary transmitter and $N$ secondary transmitters (Fig.~\ref{backscatter_model}). The primary transmitter transmits RF signals on a licensed channel. The transmission is organized into frames, and each frame is composed of $F$ time slots. In Frame $t$ (see Fig.~\ref{Timeframe}), the transmission duration of the primary transmitter or the busy channel period, i.e., the number of time slots the channel is busy, is denoted by $b(t)$ which is random. The secondary transmitters transmit data to the gateway. The gateway also controls the transmission scheduling of the secondary transmitters. In particular, during the busy channel period, the time slots can be assigned for energy harvesting by all the secondary transmitters. The number of time slots for energy harvesting is denoted by $\mu(t)$. Let $e^{\mathrm{h}}_n$ denote the number of energy units that secondary transmitter $n$ harvests in one busy time slot. The harvested energy is stored in the energy storage, e.g., a super-capacitor, of the secondary transmitter, the maximum capacity of which is denoted by $C_n$. A secondary transmitter has a data queue which stores incoming packets, e.g., from its sensor device. The maximum capacity of the data queue is denoted by $Q_n$, and the probability that a packet arrives at the queue in a time slot is denoted by $\lambda_n$. Then, the rest of the busy time slots, i.e., $b(t)-\mu(t)$, will be allocated for the secondary transmitters to transmit their data by using backscatter communications, i.e., the {\em backscatter mode}. The number of time slots for backscatter transmission by secondary transmitter $n$ is denoted by $\alpha_n(t)$, and the number of packets transmitted in each time slot is $d^{\mathrm{b}}_n$. Then, during the idle channel period, which has $F-b(t)$ time slots, the secondary transmitters can transfer data to the gateway by using active-RF transmission, i.e., the {\em active mode}. The number of time slots for secondary transmitter $n$ to transmit in the active mode is denoted by $\beta_n(t)$. Each time slot can be used to transmit $d^{\mathrm{a}}_n$ packets from the data queue, and the secondary transmitter consumes $e^{\mathrm{a}}_n$ units of energy from the storage. The data transmission in the backscatter mode and in the active mode is successful with the probabilities $S^{\mathrm{b}}_n$ and $S^{\mathrm{a}}_n$, respectively. Hence, the total throughput of the RF-powered backscatter cognitive radio network is the sum of the numbers of packets successfully transmitted by all secondary transmitters.
\section{Problem Formulation}
To optimize the total throughput, we formulate a stochastic optimization problem for the RF-powered backscatter cognitive radio network. The problem is defined by a tuple $\langle {\mathcal{S}}, {\mathcal{A}}, {\mathcal{P}}, {\mathcal{R}} \rangle$.
\begin{itemize}
\item ${\mathcal{S}}$ is the state space of the network.
\item ${\mathcal{A}}$ is the action space.
\item ${\mathcal{P}}$ is the state transition probability function, where $P_{s,s'}(a)$ is the probability that the current state $s \in {\mathcal{S}}$ transits to the next state $s' \in {\mathcal{S}}$ when action $a \in {\mathcal{A}}$ is taken.
\item ${\mathcal{R}}$ is the reward function of the network.
\end{itemize}
The state space of secondary transmitter $n$ is denoted by
\begin{equation}
{\mathcal{S}}_n = \Big\{ (q_n, c_n) ; q_n \in \{0,1,\ldots,Q_n\}, c_n \in \{0,1,\ldots,C_n\} \Big\},
\end{equation}
where $q_n$ represents the queue state, i.e., the number of packets in the data queue, and $c_n$ represents the energy state, i.e., the number of energy units in the energy storage. Let the channel state, i.e., the number of busy time slots, be denoted by ${\mathcal{S}}^{\mathrm{c}} = \{ (b); b \in \{0,1,\ldots,F \} \}$. Then, the state space of the network is defined by
\begin{equation}
{\mathcal{S}} = {\mathcal{S}}^{\mathrm{c}} \times \prod_{n = 1}^N {\mathcal{S}}_n ,
\end{equation}
where $\times$ and $\prod$ represent the Cartesian product.
The action space of the network is defined as follows:
\begin{eqnarray}
{\mathcal{A}} & = & \Bigg\{ (\mu, \alpha_1,\ldots,\alpha_N, \beta_1,\ldots,\beta_N ); \nonumber \\
& & \mu + \sum_{n=1}^N \alpha_n \leq b, \mu + \sum_{n=1}^N ( \alpha_n + \beta_n ) \leq F \Bigg\},
\label{eq:actionspace}
\end{eqnarray}
where again $\mu$ is the number of busy time slots that are used for energy harvesting by the secondary transmitters, $\alpha_n$ is the number of busy time slots in which secondary transmitter $n$ transmits data in the backscatter mode, and $\beta_n$ is the number of idle time slots in which secondary transmitter $n$ transmits data in the active mode. The constraint $\mu + \sum_{n=1}^N \alpha_n \leq b$ ensures that the number of time slots used for energy harvesting and all backscatter transmissions does not exceed the number of busy time slots. Likewise, the constraint $\mu + \sum_{n=1}^N ( \alpha_n + \beta_n ) \leq F$ ensures that the number of time slots used for energy harvesting and all transmissions in the backscatter and active modes does not exceed the total number of time slots in a frame.
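As a sanity check on the action space (\ref{eq:actionspace}), the following Python sketch (our own illustration; all names are ours) enumerates the feasible tuples for small $N$, $F$ and $b$.
\begin{verbatim}
from itertools import product

def feasible_actions(N, F, b):
    """All (mu, alpha_1..alpha_N, beta_1..beta_N) satisfying
    mu + sum(alpha) <= b and mu + sum(alpha) + sum(beta) <= F."""
    actions = []
    for mu in range(b + 1):
        for alpha in product(range(b + 1), repeat=N):
            if mu + sum(alpha) > b:
                continue
            idle = F - mu - sum(alpha)   # room left for active-mode slots
            for beta in product(range(idle + 1), repeat=N):
                if sum(beta) <= idle:
                    actions.append((mu,) + alpha + beta)
    return actions

# len(feasible_actions(2, 10, 5)) is already in the thousands; the action
# space explodes with N, which motivates the DRL algorithm presented below.
\end{verbatim}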
Now, we consider the state transition of the network. In the busy channel period, the number of time slots assigned to secondary transmitter $n$ for harvesting energy is $b(t) - \alpha_n$. Thus, after the busy channel period, the number of energy units in the storage of the secondary transmitter changes from $c_n$ to $c_n^{(1)}$ as follows:
\begin{equation}
c_n^{(1)}=\min \big(c_n + (b(t) - \alpha_n)e^{\mathrm{h}}_n,C_n \big).
\end{equation}
Also, the number of packets in the data queue of secondary transmitter $n$ changes from $q_n$ to $q_n^{(1)}$ as follows:
\begin{equation}
q_n^{(1)} = \max \big(0, q_n - \alpha_n d^{\mathrm{b}}_n \big).
\end{equation}
In the idle channel period, secondary transmitter $n$ requires $q_n^{(1)}/d^{\mathrm{a}}_n$ time slots to transmit $q_n^{(1)}$ packets. However, the secondary transmitter is only assigned with $\beta_n$ time slots for the data transmission. Thus, it actually transmits its packets in $\min (\beta_n, q_n^{(1)}/d^{\mathrm{a}}_n)$ time slots.
After the idle channel period, the energy state of secondary transmitter $n$ changes from $c_n^{(1)}$ to $c'_n$ as follows:
\begin{equation}
c'_n = \max \big(0, c_n^{(1)} - \min (\beta_n, q_n^{(1)}/d^{\mathrm{a}}_n)e^{\mathrm{a}}_n \big).
\end{equation}
Also, the number of packets in the data queue of secondary transmitter $n$ changes from $q_n^{(1)}$ to $q_n^{(2)}$ as follows:
\begin{equation}
q_n^{(2)} = \max \big(0, q_n^{(1)} - \min (\beta_n, c_n^{(1)}/e^{\mathrm{a}}_n)d^{\mathrm{a}}_n \big).
\end{equation}
Note that new packets can arrive at each time slot with a probability of $\lambda_n$. We assume that the new packets are only added to the data queue when the time frame finishes. Thus, at the end of the time frame, the number of packets in the data queue of secondary transmitter $n$ changes from $q_n^{(2)}$ to $q'_n$ as follows:
\begin{equation}
q'_n = q_n^{(2)} + p_n,
\end{equation}
where $p_n$ is the number of packets arriving at the secondary transmitter during the time frame. $p_n$ follows the binomial distribution $B(F,\lambda _n)$~\cite{bliss1953}. Then, the probability of $m$ packets arriving at the secondary transmitter during $F$ time slots is
\begin{equation}
Pr(p_n = m )=\binom{F}{m} \lambda_n^m(1-\lambda_n)^{F-m}.
\end{equation}
The reward of the network is defined as a function of state $s \in {\mathcal{S}}$ and action $a \in {\mathcal{A}}$ as follows:
\begin{equation}
\mathcal{R}(s,a)= \sum_{n=1}^N S^{\mathrm{b}}_n (q_n - q_n^{(1)}) + \sum_{n=1}^N S^{\mathrm{a}}_n (q_n^{(1)} - q_n^{(2)}).
\label{eq:reward}
\end{equation}
The first and the second terms of the reward expression in (\ref{eq:reward}) are the expected numbers of packets successfully transmitted in the backscatter and active modes, respectively.
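For concreteness, the per-frame dynamics above and the reward (\ref{eq:reward}) for a single secondary transmitter can be transcribed into Python as follows. This is our own illustrative sketch, not code from the paper; we additionally cap the queue at its capacity $Q_n$.
\begin{verbatim}
import numpy as np

def step_one_transmitter(q, c, b, alpha, beta, par, rng):
    """One frame for secondary transmitter n, following the equations
    above; par holds d_b, d_a, e_h, e_a, S_b, S_a, Q, C, F, lam."""
    # Busy period: backscatter alpha slots, harvest the remaining b - alpha.
    c1 = min(c + (b - alpha) * par['e_h'], par['C'])
    q1 = max(0, q - alpha * par['d_b'])
    # Idle period: active transmission over the beta assigned slots.
    c2 = max(0, c1 - min(beta, q1 / par['d_a']) * par['e_a'])
    q2 = max(0, q1 - min(beta, c1 / par['e_a']) * par['d_a'])
    # Arrivals, Binomial(F, lambda), added at the end of the frame
    # (capped here at the queue capacity Q, our own assumption).
    q_next = min(q2 + rng.binomial(par['F'], par['lam']), par['Q'])
    reward = par['S_b'] * (q - q1) + par['S_a'] * (q1 - q2)
    return q_next, c2, reward

# Example: rng = np.random.default_rng(0); step_one_transmitter(
#     5, 3, b=4, alpha=2, beta=3, par=dict(d_b=1, d_a=2, e_h=1, e_a=1,
#     S_b=0.9, S_a=0.95, Q=10, C=10, F=10, lam=0.5), rng=rng)
\end{verbatim}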
To obtain the mapping from a network state $s \in {\mathcal{S}}$ to an action $a \in {\mathcal{A}}$ such that the accumulated reward is maximized, conventional algorithms can be applied. The goal of such an algorithm is to obtain the optimal policy defined as $\pi^* : {\mathcal{S}} \rightarrow {\mathcal{A}}$, i.e., the policy that maximizes the state-value function defined as follows:
\begin{equation}
V(s) = \mathbb{E} \left[ \sum_{t=0}^{T-1} \gamma^t {\mathcal{R}}(s(t), a(t) ) \right] ,
\end{equation}
where $T$ is the length of the time horizon, $\gamma$ is the discount factor for $0\leq \gamma < 1$, and $\mathbb{E}[\cdot]$ is the expectation. Here, we define $a = \pi(s)$ which is the action taken at state $s$ given the policy $\pi$. With the Markov property, the value function can be expressed as follows:
\begin{eqnarray}
V(s) & = & \sum_{s' \in {\mathcal{S}} } P_{\pi(s) } (s, s') \left( {\mathcal{R}}(s,a) + \gamma V(s') \right) , \\
\pi(s) & = & \arg\max_{ a \in {\mathcal{A}} } \left( \sum_{s' \in {\mathcal{S}} } P_{a} (s, s') \Big( {\mathcal{R}}(s,a) + \gamma V(s') \Big) \right) .
\end{eqnarray}
With the Q-learning algorithm, Q-value is defined, and its optimum can be obtained from the Bellman's equation, which is given as
\begin{equation}
Q(s,a) = \sum_{s' \in {\mathcal{S}} } P_{a} (s, s') \left( {\mathcal{R}}(s,a) + \gamma V(s') \right) .
\end{equation}
The Q-value is updated as follows:
\begin{align}
\label{Q_value_update}
Q^{\mathrm{new}}(s,a) = & (1-l) Q(s,a) \\
& + l \left( r(s,a) + \gamma \max_{a' \in {\mathcal{A}} } Q( s', a') \right),\notag
\end{align}
where $l$ is the learning rate, and $r(s,a)$ is the reward received.
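In code, the update (\ref{Q_value_update}) is a single line; the sketch below (our own, assuming a dense Q-table indexed by enumerated states and actions) makes this explicit.
\begin{verbatim}
import numpy as np

def q_update(Q, s, a, r, s_next, l=0.1, gamma=0.9):
    """One tabular Q-learning step; Q has shape (num_states, num_actions)."""
    Q[s, a] = (1 - l) * Q[s, a] + l * (r + gamma * np.max(Q[s_next]))
\end{verbatim}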
However, standard algorithms such as Q-learning, when applied to this stochastic optimization problem, all suffer from the large state and action spaces of the network. Thus, we resort to a deep reinforcement learning algorithm.
\section{Deep Reinforcement Learning Algorithm}
By using (\ref{Q_value_update}) to update Q-values in a look-up table, the Q-learning algorithm can efficiently solve the optimization problem if the state and action spaces are small. In particular, for our problem, the gateway needs to observe the states of all $N$ secondary transmitters and choose actions from their action spaces. As $N$ grows, the state and action spaces can become intractably large, and several Q-values in the table may never be updated. To solve this issue, we propose to use a DRL algorithm.
Similar to the Q-learning algorithm, the DRL allows the gateway to map its state to an optimal action. However, instead of using the look-up table, the DRL uses a Deep Q-Network (DQN), i.e., a multi-layer neural network with weights $\boldsymbol{\theta}$, to derive an approximate value of $Q^*(s,a)$. The input of the DQN is one of the states of the gateway, and the output includes the Q-values $Q(s,a;\boldsymbol{\theta})$ of all its possible actions. To achieve the approximate value $Q^*(s,a)$, the DQN needs to be trained by using transitions $< s,a,r,s'>$, i.e., experiences, in which action $a$ is selected by using the $\epsilon$-greedy policy. Training the DQN is to update its weights $\boldsymbol{\theta}$ to minimize a loss function defined as:
\begin{equation}
L= \mathbb{E}\left[ (y-Q(s,a;\boldsymbol{\theta}))^2\right],
\label{DQN_loss}
\end{equation}
where $y$ is the target value. $y$ is given by
\begin{equation}
y= r+ \gamma \max_{a' \in {\mathcal{A}} } Q( s', a';\boldsymbol{\theta^-}),
\label{DQN_y_value}
\end{equation}
where $\boldsymbol{\theta^-}$ are the old weights, i.e., the weights from the last iteration, of the DQN.
Note that the $\max$ operator in (\ref{DQN_y_value}) uses the same Q-values both to select and to evaluate an action of the gateway. This means that the same Q-values are being used to decide which action is the best, i.e., the highest expected reward, and they are also being used to estimate the action value. Thus, the Q-value of the action may be over-optimistically estimated which reduces the network performance.
To prevent the overoptimism problem, the action selection should be decoupled from the action evaluation~\cite{van2016}. Therefore, we use the Double DQN (DDQN). The DDQN includes two neural networks, i.e., an online network with weights $\boldsymbol{\theta}$ and a target network with weights $\boldsymbol{\theta^-}$. The target network has the same architecture as the online network, and its weights $\boldsymbol{\theta^-}$ are reset to the online weights $\boldsymbol{\theta}$ every $L^-$ iterations. At all other iterations, the weights of the target network remain unchanged, while those of the online network are updated at each iteration.
In principle, the online network is trained by updating its weights $\boldsymbol{\theta}$ to minimize the loss function as shown in~(\ref{DQN_loss}). However, $y$ is replaced by $y^{DDQN}$ defined as
\begin{equation}
y^{DDQN}=r + \gamma Q\Big{(} s', \arg\max_{a' \in \mathcal{A}} Q(s',a';\boldsymbol{\theta});\boldsymbol{\theta}^{-}\Big{)}.
\label{DQN_y_value_DDQN}
\end{equation}
As shown in (\ref{DQN_y_value_DDQN}), the action selection is based on the current weights $\boldsymbol{\theta}$, i.e., the weights of the online network. The weights $\boldsymbol{\theta^{-}}$ of the target network are used to fairly evaluate the value of the action.
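The difference between the DQN target (\ref{DQN_y_value}) and the DDQN target (\ref{DQN_y_value_DDQN}) is a one-line change, as the following hypothetical Python sketch shows; here \texttt{online\_net} and \texttt{target\_net} are assumed to map a state to the vector of Q-values over all actions.
\begin{verbatim}
import numpy as np

def dqn_target(r, s_next, target_net, gamma=0.9):
    # Selection and evaluation both use the old weights -> overestimation.
    return r + gamma * np.max(target_net(s_next))

def ddqn_target(r, s_next, online_net, target_net, gamma=0.9):
    # Select the action with the online weights, evaluate it with the
    # target weights, decoupling selection from evaluation.
    a_max = np.argmax(online_net(s_next))
    return r + gamma * target_net(s_next)[a_max]
\end{verbatim}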
The DDQN algorithm for the gateway to find its optimal policy is shown in Algorithm~\ref{DDQN_Algorithm}. Accordingly, both the online and target networks use the next state $s'$ to compute the optimal value $Q(s',a';\boldsymbol{\theta})$. Given the discount factor $\gamma$ and the current reward $r$, the target value $y^{DDQN}$ is obtained from (\ref{DQN_y_value_DDQN}). Then, the loss function is calculated as defined in~(\ref{DQN_loss}), in which $y$ is replaced by $y^{DDQN}$. The value of the loss function is back-propagated to the online network to update its weights $\boldsymbol{\theta}$. Note that to address the instability of the learning of the algorithm, we adopt an experience replay memory $\mathcal{D}$ along with the DDQN. As such, instead of using the most recent transition, a random mini-batch of transitions is taken from the replay memory to train the Q-network.
\begin{algorithm}
\small
\caption{DDQN algorithm for time scheduling of the gateway.}\label{DDQN_Algorithm}
\hspace*{\algorithmicindent} \textbf{Input:} Action space $\mathcal{A}$; mini-batch size $L_b$; target network replacement frequency $L^{-}$.\\
\hspace*{\algorithmicindent} \textbf{Output:} Optimal policy $\pi^*$.
\begin{algorithmic}[1]
\State \textbf{Initialize:} Replay memory $\mathcal{D}$; online network weights $\boldsymbol{\theta}$; target network weights $\boldsymbol{\theta^-} = \boldsymbol{\theta}$; online action-value function $Q(s,a; \boldsymbol{\theta})$; target action-value function $Q({s}', {a}'; \boldsymbol{\theta^-})$; $k=i=0$.
\Repeat \text{ for each episode} $i$:
\State Initialize network state $s$ after receiving state messages from the primary transmitter and $N$ secondary transmitters.
\Repeat \text{ for each iteration $k$ in episode} $i$:
\State Choose action $a$ according to the $\epsilon$-greedy policy from $Q(s,a; \boldsymbol{\theta})$.
\State Broadcast time scheduling messages defined by $a$ to the $N$ secondary transmitters.
\State Receive an immediate reward $r_k$.
\State Receive state messages from the primary transmitter and $N$ secondary transmitters and update the next network state $s'$.
\State Store tuple $(s, a, r_k, {s}')$ in $\mathcal{D}$.
\State Sample a mini-batch of $L_b$ tuples $(s, a, r_t, {s}')$ from $\mathcal{D}$.
\State Define $a^{\text{max}}=\arg \max_{a' \in \mathcal{A}} Q({s}', {a}'; \boldsymbol{\theta})$.
\State Determine
\begin{equation}
y_{t}^{DDQN}=\begin{cases}
r_t,\text{ if episode $i$ terminates at iteration } t + 1,\\
r_t+ \gamma Q\left ({s}', a^{\text{max}};\boldsymbol{\theta^-} \right), \text{ otherwise}.\notag\\
\end{cases}
\end{equation}
\State Update $\boldsymbol{\theta}$ by performing a gradient descent step on $(y_{t}^{DDQN}-Q(s,a;\boldsymbol{\theta}))^{2}$.
\State Reset $\boldsymbol{\theta^-} = \boldsymbol{\theta}$ every $L^{-}$ steps.
\State Set $s\leftarrow {s}'$.
\State Set $k=k+1$.
\Until {$k$ is greater than the maximum number of steps in episode $i$.}
\State Set $i=i+1$.
\Until {$i$ is greater than the desired number of episodes.}
\end{algorithmic}
\end{algorithm}
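For illustration, a compressed rendering of Algorithm~\ref{DDQN_Algorithm} in TensorFlow/Keras could look as follows. The layer sizes and hyperparameters follow Table~\ref{table:parameters_system}; the environment object \texttt{env} wrapping the dynamics of Section III is our own assumption, and this sketch omits details of the actual implementation.
\begin{verbatim}
import random
from collections import deque
import numpy as np
import tensorflow as tf

def build_net(state_dim, num_actions):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu',
                              input_shape=(state_dim,)),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(num_actions)])

def train(env, state_dim, num_actions, iters=10**6, L_b=32,
          L_minus=10**4, gamma=0.9, lr=0.001):
    online = build_net(state_dim, num_actions)
    target = build_net(state_dim, num_actions)
    target.set_weights(online.get_weights())
    online.compile(optimizer=tf.keras.optimizers.Adam(lr), loss='mse')
    D = deque(maxlen=50000)                 # replay memory
    s, eps = env.reset(), 0.9
    for k in range(iters):
        a = (random.randrange(num_actions) if random.random() < eps
             else int(np.argmax(online(s[None])[0])))
        s2, r, done = env.step(a)           # assumed env interface
        D.append((s, a, r, s2, done))
        if len(D) >= L_b:
            batch = random.sample(D, L_b)
            S = np.array([t[0] for t in batch])
            S2 = np.array([t[3] for t in batch])
            y = online.predict(S, verbose=0)
            a_max = np.argmax(online.predict(S2, verbose=0), axis=1)
            qt = target.predict(S2, verbose=0)
            for i, (_, ai, ri, _, di) in enumerate(batch):
                y[i, ai] = ri if di else ri + gamma * qt[i, a_max[i]]
            online.train_on_batch(S, y)
        if k % L_minus == 0:
            target.set_weights(online.get_weights())
        eps = max(0.0, eps - 0.9 / iters)   # linear annealing 0.9 -> 0
        s = env.reset() if done else s2
\end{verbatim}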
\section{Performance Evaluation}
In this section, we present experimental results to evaluate the performance of the proposed DRL algorithm. For comparison, we introduce the HTT~\cite{lyu2018b}, the backscatter communication~\cite{liu2013}, and a random policy as baseline schemes. In particular, for the random policy, the gateway assigns time slots to the secondary transmitters for the energy harvesting, data backscatter, and data transmission by randomly choosing a tuple $(\mu, \alpha_1,\ldots,\alpha_N, \beta_1,\ldots,\beta_N )$ in the action space $\mathcal{A}$. Note that we do not include the reinforcement learning algorithm~\cite{huynh2018} since it cannot be run in our computation environment as the problem is too complex. The simulation parameters for the RF-powered backscatter cognitive radio network are shown in Table~\ref{table:parameters_CRN}, and those for the DRL algorithm are listed in Table~\ref{table:parameters_system}. The DRL algorithm is implemented by using the TensorFlow deep learning library. The Adam optimizer is used, which adapts the learning rate during the training phase. The $\epsilon$-greedy policy with $\epsilon=0.9$ is applied in the DRL algorithm to balance exploration and exploitation. This means that a random action is selected with a probability of $\epsilon =0.9$, and the best action, i.e., the action that maximizes the Q-value, is selected with a probability of $1-\epsilon =0.1$. To move from a more explorative policy to a more exploitative one, the value of $\epsilon$ is linearly reduced from $0.9$ to $0$ during the training phase.
\begin{table}[!h]
\caption{Backscatter cognitive radio network parameters}
\label{table:parameters_CRN}
\centering
\begin{tabular}{lc}
\hline\hline
{\em Parameters} & {\em Value} \\ [0.5ex]
\hline
Number of secondary transmitters ($N$) & 3 \\
Number of time slots in a time frame ($F$) & 10 \\
Number of busy time slots in a time frame ($b(t)$)& {[}1;9{]} \\
Data queue size ($Q_n$) & 10 \\
Energy storage capacity ($C_n$) & 10 \\
Packet arrival probability ($\lambda_n $) & {[}0.1;0.9{]} \\
$d^\mathrm{b}_n$ & 1 \\
$d^\mathrm{a}_n$ & 2 \\
$e^\mathrm{h}_n$ & 1 \\
$e^\mathrm{a}_n$ & 1 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\caption{System model parameters}
\label{table:parameters_system}
\centering
\begin{tabular}{ll}
\hline\hline
{\em Parameters} & {\em Value} \\ [0.5ex]
\hline
Number of hidden layers & 3 \\
Fully connected neuron network size & 32x32x32 \\
Activation & ReLU \\
Optimizer & Adam \\
Learning rate & 0.001 \\
Discount rate ($\gamma$) & 0.9 \\
$\epsilon$-greedy & 0.9 $\rightarrow$ 0 \\
Mini-batch size ($L_b$) & 32 \\
Replay memory size & 50000 \\
Number of iterations per episode & 200 \\
Number of training iterations & 1000000 \\
Number of iterations for updating target network ($L^{-}$)& 10000 \\
\hline
\end{tabular}
\end{table}
To evaluate the performance of the proposed DRL algorithm, we consider different scenarios by varying the number of busy time slots per time frame $b(t)$ and the packet arrival probability $\lambda$. The simulation results for the throughput versus episode are shown in Fig.~\ref{throughput_comparison}, those for the throughput versus the packet arrival probability are illustrated in Fig.~\ref{data_rate_changing}, and those for the throughput versus the number of busy time slots are provided in Fig.~\ref{busy_time_slot_changing}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{time_for_backscatter_actions}
\caption{Average time assigned for backscatter of secondary transmitter 1.}
\label{time_for_backscatter_actions}
\end{figure}
Note that the throughput is the sum of the numbers of packets successfully transmitted by all secondary transmitters. In particular, for the proposed DRL algorithm, the throughput depends heavily on the time scheduling policy of the gateway. This means that to achieve a high throughput, the gateway needs to take proper actions, e.g., assigning the right numbers of time slots to the secondary transmitters for the data backscatter, data transmission, and energy harvesting. Thus, it is worth considering how the gateway takes the optimal actions for each secondary transmitter given its state. Without loss of generality, we consider the average number of time slots that the gateway assigns to secondary transmitter 1 for the data backscatter (Fig.~\ref{time_for_backscatter_actions}) and the data transmission (Fig.~\ref{time_for_transmit_actions}). From Fig.~\ref{time_for_backscatter_actions}, the average number of time slots assigned to secondary transmitter 1 for the backscatter increases as its data queue grows. The reason is that when the data queue is large, the secondary transmitter needs more time slots to backscatter its packets. Thus, the gateway assigns more time slots to the secondary transmitter to maximize the throughput. It is also seen from the figure that the average number of time slots assigned to secondary transmitter 1 for the backscatter increases as its energy state increases. The reason is that when the energy state of the secondary transmitter is already high, the gateway assigns fewer time slots for the energy harvesting and prioritizes more time slots for the backscatter to improve the network throughput.
A secondary transmitter with a high energy state can transmit more packets in the active transmission. However, to transmit more packets, the gateway should assign more time slots to the secondary transmitter. As illustrated in Fig.~\ref{time_for_transmit_actions}, by using the DRL algorithm, the average number of time slots assigned to secondary transmitter 1 increases as its energy state increases.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{time_for_transmit_actions}
\caption{Average time assigned for active transmission of secondary transmitter 1.}
\label{time_for_transmit_actions}
\end{figure}
The above results show that the proposed DRL algorithm enables the gateway to learn actions so as to improve the network throughput. As shown in Fig.~\ref{throughput_comparison}, after the learning time of around $2000$ episodes, the proposed DRL algorithm converges to an average throughput which is much higher than that of the baseline schemes. In particular, the average throughput obtained by the proposed DRL scheme is around $12$ packets per frame while those obtained by the random scheme, HTT scheme, and backscatter scheme are $9$, $7.5$, and $3$ packets per frame, respectively.
\begin{figure}[!h]
\centering
\includegraphics[width=6.7cm, height = 5.3cm]{convergence}
\caption{Average throughput comparison between the proposed DRL scheme and the baseline schemes.}
\label{throughput_comparison}
\end{figure}
The performance improvement of the proposed DRL scheme over the baseline schemes is maintained when varying the packet arrival probability and the number of busy time slots in the frame. In particular, as shown in Fig.~\ref{data_rate_changing}, the average throughput obtained by the proposed DRL scheme is significantly higher than those obtained by the baseline schemes. For example, given a packet arrival probability of $0.6$, the average throughput obtained by the proposed DRL scheme is around $15$ packets per frame, while those of the random scheme, HTT scheme, and backscatter communication scheme are $10$, $9.3$, and $3$ packets per frame, respectively. The gap between the proposed DRL scheme and the baseline schemes becomes larger as the packet arrival probability increases. The throughput improvement is also clearly achieved as the number of busy time slots varies, as shown in Fig.~\ref{busy_time_slot_changing}.
\begin{figure}[!h]
\centering
\includegraphics[width=6.7cm, height = 5.3cm]{data_rate_changing}
\caption{Average throughput versus packet arrival probability.}
\label{data_rate_changing}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=6.7cm, height = 5.3cm]{busy_time_slot_changing}
\caption{Average throughput versus the number of busy time slots.}
\label{busy_time_slot_changing}
\end{figure}
In summary, the simulation results shown in this section
confirm that the DRL algorithm is able to overcome the computationally expensive problem of the
large action and state spaces that the Q-learning algorithm faces. Also, the proposed DRL algorithm can be used by the gateway to learn the optimal policy. The policy allows the gateway to optimally assign time slots to the secondary transmitters for the energy harvesting, data backscatter, and data transmission to maximize the network throughput.
\section{Conclusions}
In this paper, we have presented a DRL algorithm for
the time scheduling in the RF-powered backscatter cognitive radio network. Specifically, we have formulated the time scheduling of the secondary gateway as a stochastic optimization problem. To solve the problem, we have developed a DRL algorithm using the DDQN, which includes the online and target networks. The simulation results show that the proposed DRL
algorithm enables the gateway to learn an optimal time scheduling policy which
maximizes the network throughput. The throughput obtained by the proposed DRL algorithm is significantly higher than those of the non-learning algorithms.
\section{Introduction}
The Racah matrices or $6j$-symbols play an important role in theoretical physics
since its very early days.
It is enough to say that a whole chapter is devoted to this subject in \cite{LL3}.
This reflects the significance of symmetries for the description of nature,
but also provides an example of how difficult the study of symmetries themselves is.
In fact, evaluation of the Racah matrices is an old classical problem,
which is still far from its final solution.
The modern approach is to attack this kind of problems by considering
a physical (quantum field theory) model, which contains nothing else,
but the quantities of interest: in the present case a model, where the correlators
are made from the Racah matrices.
Such a theory is nowadays well known: it is either a $2d$ Wess-Zumino-Novikov-Witten (WZNW)
conformal theory \cite{WZNW} or a $3d$ Chern-Simons theory \cite{CS}, the observables in the latter case being known
as {\it knot polynomials} \cite{knotpols}.
Speaking in modern terms, the two theories are related by a kind of holographic duality,
which is, hence, a far-going generalization of this old archetypical example.
Difficulties with the Racah calculus are, therefore, the difficulties with evaluation
of knot polynomials or, what is nearly the same, of monodromies of the conformal blocks.
Both subjects are now under intensive study, and the purpose of this letter is to report
some recent progress.
Namely, using the newly developed highest weight method (HWM) of \cite{MMMS21},
we succeeded in evaluating the {\it inclusive} (i.e. for all $Q\in R^{\otimes 3}$)
quantum Racah matrices ${\cal U}_Q$
\be
\Big\{(R\otimes R)\otimes R \longrightarrow Q\Big\} \ \ \
\stackrel{{\cal U}_Q}{\longrightarrow} \ \ \
\Big\{R\otimes (R\otimes R) \longrightarrow Q\Big\}
\label{racah}
\ee
for all representations
$R$ of sizes up to $4$: the really new part concerns $R=[3,1]$ and $R=[2,2]$.
For symmetric representations $R=[1],[2],[3],[4]$, the answers are known from \cite{MMMkn12,IMMMev},
and are actually described by the eigenvalue hypothesis of \cite{IMMMev},
for antisymmetric $R=[1,1],[1,1,1],[1,1,1,1]$ they are obtained by
a change of variable $q\longrightarrow q^{-1}$, for $R=[21]$ they were found in \cite{MMMS21}.
These ${\cal U}_Q$ are actually matrices, acting in the space of representations $Y\in R^{\otimes 2}$,
which appear in the first products $(R\otimes R)$ in (\ref{racah}) and contribute to a given $Q$.
For non-rectangular representations (i.e. when the Young diagram $R$ is not rectangular),
the representations $Y$ and $Q$ come with non-trivial multiplicities, which leads to certain
degeneracies and makes the problem really difficult.
A part of the problem is that in such cases the Racah {\it matrices} depend on the basis choices
and lack invariant (basis independent) definition, thus distracting pure mathematicians from such studies,
despite a severe need for explicit answers for purposes of quantum field and string theory.
We described this problem in some detail in recent \cite{mmms31}
and do not repeat the relevant details of the HWM here.
Instead, in this short presentation, we concentrate on the case of $R=[2,2]$, which is rectangular
and free of these additional complications.
Our purpose is to enumerate ingredients of the calculations and immediate checks
performed after getting the answer.
The Racah matrices {\it per se} appear in expressions for the knot polynomials in two cases:
the exclusive matrices $(R,R,\bar R\longrightarrow R)$ describe arborescent (double fat) knots \cite{mmmrv,mmmsrv},
while the inclusive ones $(R,R,R\longrightarrow Q)$ describe the three-strand knots.
The exclusive matrices are actually deducible from the inclusive ones \cite{mmmsEI}, and here we concentrate on the latter.
Knots with more strands require complicated contractions of the Racah matrices
(called {\it mixing matrices} in \cite{MMMkn12}), which also turn out to be much simpler and more explorable than
they seem, but this is also beyond the scope of the present letter.
\section{Pattern of Racah matrices arising for the size $|R|=4$}
\subsection{Specification to the case of $R=[4]$}
This case of symmetric representation was exhaustively considered in \cite{MMMkn12,IMMMev} and can serve as a sample
for other representations. However, it is the simplest one, since all Racah matrices are given by the $U_q(sl_2)$ formulas.
Square of the symmetric representation $[4]$ is decomposed in a very simple way:
\be
[4]\otimes [4] = [8]_++[7,1]_- + [6,2]_+ + [5,3]_- + [4,4]_+
\ee
where the subscript is plus or minus for the representations from the symmetric and antisymmetric
squares, respectively.
The representation content of the cube is now
\be
[4]\otimes[4]\otimes[4] =
\,[12]+\,[6, 6]+3\,[7, 5]+5\,[8, 4]+4\,[9, 3]+3\,[10, 2]+2\,[11, 1]+\,[4, 4, 4]+2\,[5, 4, 3]+
\,[5, 5, 2]+ \nn \\
\!\!\!\!\!\!\!\!\!\!\!
+\,[6, 3, 3]+3\,[6, 4, 2]+2\,[6, 5, 1]+2\,[7, 3, 2]+4\,[7, 4, 1]+\,[8, 2, 2]+3\,[8, 3, 1]
+2\,[9, 2, 1]+\,[10, 1, 1]
\label{decocube}
\nn
\ee
and most items come with non-unit multiplicities.
However, these multiplicities are in one-to-one correspondence with the
intermediate representations $Y\in R^{\otimes 2}$.
From this, one can read off content of the inclusive Racah matrices:
{\footnotesize
\be
\begin{array}{|c|c|c|}
\hline
&&\text{number of}\\
\text{matrix size} & Q & \\
&&\text{matrices}\\ \hline && \\
1 & [12], [10, 1, 1], [8, 2, 2], [6, 6], [6, 3, 3], [5, 5, 2], [4, 4, 4] & 7 \\
&&\\ \hline && \\
2 & [11, 1], [5, 4, 3], [6, 5, 1], [7, 3, 2], [9, 2, 1] & 5 \\
&&\\ \hline && \\
3 & [10, 2], [8, 3, 1], [7, 5], [6, 4, 2] & 4 \\
&&\\ \hline && \\
4 & [9, 3], [7, 4, 1] & 2 \\
&&\\ \hline && \\
5 & [8, 4] & 1 \\
&&\\ \hline
\end{array}
\nn
\ee
}
The biggest matrix has size $5\times 5$.
This means that all Racah matrices arising in $4^{\otimes3}$ can be found with the help
of the eigenvalue hypothesis \cite{IMMMev}.
\subsection{Specification to the case of $R=[3,1]$
\label{hv31}}
This is a hard case, described in detail in \cite{mmms31}, to which we refer the reader.
The results are presented there in the same format, as in the present paper.
\subsection{Specification to the case of $R=[2,2]$}
This case is again relatively simple, still this is a new piece of knowledge.
Decomposition of the square
\be
[22]\otimes [22] = [44]\oplus [431]_-\oplus[422]\oplus [3311]\oplus [3221]_-\oplus [2222]
\ee
contains six irreducible representations.
Note that the symmetric diagrams $Y=[332]$ and $Y=[4211]$ do not contribute.
This implies that maximal size of the Racah mixing matrices will be $6\times 6$.
Another simplifying fact is that all representations come with no multiplicities,
moreover, all ${\cal R}$-matrix eigenvalues are different (there are no accidental coincidences,
which are often encountered for non-rectangular representations $R$).
Together these two facts give a chance that the mixing matrices can be defined
from eigenvalue hypothesis \cite{IMMMev}, which provides explicit expressions
for the entries of Racah matrices through eigenvalues of the ${\cal R}$-matrices
(they are given in \cite{IMMMev} for sizes up to $5$ and size $6$ was later described
in \cite{MMuniv}).
Representation content of the cube is
{\footnotesize
\be
[2,2]\otimes\Big([2,2]\otimes[2,2]\Big) &=&
3\,[4, 2, 2, 2, 2]+\,[4, 2, 2, 2, 1, 1]+\,[3, 3, 2, 2, 2]+3\,[3, 3, 2, 2, 1, 1]+2\,[3, 2, 2, 2, 2, 1]+\,[2, 2, 2, 2, 2, 2]+\nn \\
&+&4\,[5, 3, 2, 1, 1]+3\,[5, 3, 2, 2]+3\,[5, 3, 3, 1]+\,[3, 3, 3, 1, 1, 1]+2\,[6, 3, 2, 1]+3\,[4, 4, 3, 1]+\,[6, 3, 3]+\,[4, 4, 4]+\nn \\
&+&\,[6, 4, 1, 1]+2\,[5, 4, 1, 1, 1]+6\,[5, 4, 2, 1]+2\,[5, 4, 3]+3\,[5, 5, 1, 1]+\,[5, 5, 2]+2\,[3, 3, 3, 2, 1]+\,[3, 3, 3, 3]+\nn \\
&+&2\,[4, 3, 2, 1, 1, 1]+6\,[4, 3, 2, 2, 1]+3\,[4, 3, 3, 1, 1]+3\,[4, 3, 3, 2]+\,[6, 2, 2, 2]+\,[4, 4, 1, 1, 1, 1]+3\,[4, 4, 2, 1, 1]+\nn \\
&+&3\,[6, 4, 2]+2\,[5, 2, 2, 2, 1]+2\,[6, 5, 1]+\,[6, 6]+6\,[4, 4, 2, 2]
\nn
\ee
}
and the inclusive Racah matrices form the collection
{\footnotesize
\be
\begin{array}{|c|c|c|}
\hline
&&\text{number of}\\
\text{matrix size} & Q & \\
&&\text{matrices}\\ \hline && \\
1 & [6{,} 6]{,} [6{,} 3{,} 3]{,} [6{,} 4{,} 1{,} 1]{,} [6{,} 2{,} 2{,} 2]{,} [5{,} 5{,} 2]{,} [4{,} 4{,} 4]{,} [4{,} 4{,} 1{,} 1{,} 1{,} 1]{,} [4{,} 2{,} 2{,} 2{,} 1{,} 1]{,} [3{,} 3{,} 3{,} 3]{,} [3{,} 3{,} 2{,} 2{,} 2]{,} [3{,} 3{,} 3{,} 1{,} 1{,} 1], [2{,} 2{,} 2{,} 2{,} 2{,} 2] & 12 \\
&&\\ \hline && \\
2 & [6{,} 5{,} 1]{,} [6{,} 3{,} 2{,} 1]{,} [5{,} 4{,} 3]{,} [5{,} 4{,} 1{,} 1{,} 1]{,} [5{,} 2{,} 2{,} 2{,} 1]{,} [4{,} 3{,} 2{,} 1{,} 1{,} 1]{,} [3{,} 3{,} 3{,} 2{,} 1]{,} [3{,} 2{,} 2{,} 2{,} 2{,} 1] & 8 \\
&&\\ \hline && \\
3 & [6{,} 4{,} 2]{,} [5{,} 5{,} 1{,} 1]{,} [5{,} 3{,} 3{,} 1]{,} [5{,} 3{,} 2{,} 2]{,} [4{,} 4{,} 3{,} 1]{,} [4{,} 3{,} 3{,} 2]{,} [4{,} 4{,} 2{,} 1{,} 1]{,} [4{,} 3{,} 3{,} 1{,} 1]{,} [4{,} 2{,} 2{,} 2{,} 2]{,} [3{,} 3{,} 2{,} 2{,} 1{,} 1] & 10 \\
&&\\ \hline && \\
4 & [5{,} 3{,} 2{,} 1{,} 1] & 1 \\
&&\\ \hline && \\
5 & - & 0 \\
&&\\ \hline && \\
6 & [5{,} 4{,} 2{,} 1]{,} [4{,} 4{,} 2{,} 2]{,} [4{,} 3{,} 2{,} 2{,} 1] & 3 \\
&&\\ \hline
\end{array}
\nn
\ee
}
\noindent
All matrices of size up to four are nicely handled by the eigenvalue hypothesis
(though we checked them by the direct highest weight calculation as well).
We do not list them here.
The $6\times 6$ matrix
{\footnotesize
\be
{\cal U}_{[4,4,2,2]} = \left(\begin{array}{cccccc}
\frac{[2]^2}{[5][4]^2} & \sqrt{\frac{[3]}{[5]}}\frac{[2]^2}{[4]^2} & \sqrt{\frac{1}{[5]}}\frac{[2]}{[4]} & -\sqrt{[7][3]}\frac{[3][2]}{[5][4]} & -\sqrt{[7]}\frac{[2]^2}{[4]^2} & \sqrt{\frac{[7][3]}{[5]}}\frac{[6][2]}{[4]^2[3]} \\ \\
-\sqrt{\frac{[3]}{[5]}}\frac{[2]^2}{[4]^2} & -\frac{[3][2]([7]+[3]-2)}{[6][4]^2} & \sqrt{[3]}\frac{[8][3]}{[6][4]^2} & \sqrt{\frac{[7]}{[5]}}\frac{[8][3]}{[6][4]^2} & \sqrt{[7][5][3]}\frac{[3][2]}{[6][4]^2} & \sqrt{[7]}\frac{[2]^2}{[4]^2} \\ \\
\sqrt{\frac{1}{[5]}}\frac{[2]}{[4]} & -\sqrt{[3]}\frac{[8][3]}{[6][4]^2} & \frac{[3]([9]-[5]+[3]+2)}{[6][5][2]} & \sqrt{\frac{[7][3]}{[5]}}\frac{[3]}{[6][2]} & \sqrt{\frac{[7]}{[5]}}\frac{[8][3]}{[6][4]^2} & \sqrt{[7][3]}\frac{[2]}{[5][4]} \\ \\
\sqrt{[7][3]}\frac{[2]}{[5][4]} & -\sqrt{\frac{[7]}{[5]}}\frac{[8][3]}{[6][4]^2} & \sqrt{\frac{[7][3]}{[5]}}\frac{[3]}{[6][2]} & \frac{[3]([9]-[5]+[3]+2)}{[6][5][2]} & \sqrt{[3]}\frac{[8][3]}{[6][4]^2} & -\sqrt{\frac{1}{[5]}}\frac{[2]}{[4]} \\ \\
\sqrt{[7]}\frac{[2]^2}{[4]^2} & \sqrt{[7][5][3]}\frac{[3][2]}{[6][4]^2} & -\sqrt{\frac{[7]}{[5]}}\frac{[8][3]}{[6][4]^2} & -\sqrt{[3]}\frac{[8][3]}{[6][4]^2} & -\frac{[3][2]([7]+[3]-2)}{[6][4]^2} & -\sqrt{\frac{[3]}{[5]}}\frac{[2]^2}{[4]^2} \\ \\
\sqrt{\frac{[7][3]}{[5]}}\frac{[6][2]}{[4]^2[3]} & -\sqrt{[7]}\frac{[2]^2}{[4]^2} & \sqrt{[7][3]}\frac{[3][2]}{[5][4]} & -\sqrt{\frac{1}{[5]}}\frac{[2]}{[4]} & \sqrt{\frac{[3]}{[5]}}\frac{[2]^2}{[4]^2} & \frac{[2]^2}{[5][4]^2}
\end{array}\right)
\ee}
was evaluated by the HWM, but {\it a posteriori} is described
by the eigenvalue formula \cite[eq.(17)]{MMuniv} with
\be
\lambda_{[4,4]} = q^{-8}, \ \lambda_{[4,3,1]} = -q^{-4}, \ \lambda_{[4,2,2]} = q^{-2}, \ \lambda_{[3,3,1,1]} = q^{2}, \ \lambda_{[3,2,2,1]} = -q^{4}, \ \lambda_{[2,2,2,2]} = q^{8}.
\ee
The remaining two $6\times 6$ matrices correspond to mutually transposed diagrams which are thus
related by the change $q\longrightarrow -q^{-1}$ (rank-level duality \cite{DMMSS,GS,IMMMfe}), so that only one of them needs to be
calculated.
In this case, there are two coincident eigenvalues, thus one expects it to have
a block form in an appropriate basis, but we have not yet managed to confirm this
natural expectation.
Instead we found it by brute force, with the HWM:
{\footnotesize
\be
{\cal U}_{[5,4,2,1]} = \left(\begin{array}{cccccc}
\frac{[2]}{[5][4]} & \sqrt{\frac{[6][2]}{[5]\alpha_0}}\frac{[3]}{[4]} & -\sqrt{\frac{[8][3]}{[5][4]\alpha_0}}\frac{[2]}{[4]} & -\sqrt{\frac{1}{[5]}} & -\sqrt{\frac{[8][3]}{[4]}}\frac{1}{[5]} & \sqrt{\frac{[8]}{[5][4]}} \\ \\
\sqrt{\frac{[6][2]}{[5]\alpha_0}}\frac{[3]}{[4]} & \frac{[3]}{[2]^2\alpha_0} & -\sqrt{\frac{[8][3]}{[6][4][2]}}\frac{\alpha_1}{[4]\alpha_0} & \sqrt{\frac{1}{[6][2]\alpha_0}}\frac{[4][3]}{[2]^2} & -\sqrt{\frac{[8][3][2]\alpha_0}{[6][5][4]}}\frac{1}{[2]^2} & -\sqrt{\frac{[8][2]}{[6][4]\alpha_0}}\frac{[5][3]}{[4][2]} \\ \\
-\sqrt{\frac{[8][3]}{[5][4]\alpha_0}}\frac{[2]}{[4]} & -\sqrt{\frac{[8][3]}{[6][4][2]}}\frac{\alpha_1}{[4]\alpha_0} & \frac{[3]\alpha_2}{[6][4]\alpha_0} & \sqrt{\frac{[8][2]}{[4]\alpha_0}}\frac{[3]}{[4][2]} & -\sqrt{\frac{\alpha_0}{[5]}}\frac{[3]}{[6][2]} & \sqrt{\frac{[3]}{\alpha_0}}\frac{[7][3]}{[6][4]} \\ \\
-\sqrt{\frac{1}{[5]}} & \sqrt{\frac{1}{[6][2]\alpha_0}}\frac{[4][3]}{[2]^2} & \sqrt{\frac{[8][2]}{[4]\alpha_0}}\frac{[3]}{[4][2]} & -\frac{[3]^2-3[3]}{[6][2]} & -\sqrt{\frac{[8][4][3]}{[5]}}\frac{[3]}{[6][2]^2} & -\sqrt{\frac{[8]}{[4]}}\frac{[3]}{[6][2]} \\ \\
-\sqrt{\frac{[8][3]}{[4]}}\frac{1}{[5]} & -\sqrt{\frac{[8][3][2]\alpha_0}{[6][5][4]}}\frac{1}{[2]^2} & -\sqrt{\frac{\alpha_0}{[5]}}\frac{[3]}{[6][2]} & -\sqrt{\frac{[8][4][3]}{[5]}}\frac{[3]}{[6][2]^2} & -\frac{[8][3]}{[6][5][4][2]} & -\sqrt{\frac{[3]}{[5]}}\frac{[3]}{[6][2]} \\ \\
\sqrt{\frac{[8]}{[5][4]}} & -\sqrt{\frac{[8][2]}{[6][4]\alpha_0}}\frac{[5][3]}{[4][2]} & \sqrt{\frac{[3]}{\alpha_0}}\frac{[7][3]}{[6][4]} & -\sqrt{\frac{[8]}{[4]}}\frac{[3]}{[6][2]} & -\sqrt{\frac{[3]}{[5]}}\frac{[3]}{[6][2]} & -\frac{[3]}{[6][4]}
\end{array}\right)
\ee}
with $\alpha_2=[11]-[7]+1, \ \alpha_1=[9]+2[7]+2[5]+[3]-1, \ \alpha_0=[7]+[5]-1$.
\bigskip
We are now ready to consider a particularly important application of these results
on non-trivial Racah matrices: to the {\it colored} knot polynomials.
\section{[22]-colored polynomials and their properties}
\subsection{The 2- and 3-strand HOMFLY polynomial}
The basic formula in the theory of knot polynomials, revealing
their group theory nature, is the one for the
HOMFLY polynomial of the 2-strand braid with $m$ crossings (which describes the torus knot/link),
colored by the representation (Young diagram) $R$:
\be
H_R^{(m)} = \sum_{Y\in R\otimes R} \frac{d_Y}{d_R} \cdot
\left(\frac{\epsilon_Y q^{\varkappa_Y}}{q^{4\varkappa_R}A^{|R|}}\right)^m
\label{2straH}
\ee
with
$\epsilon_Y=\pm 1$ depending on whether $Y$ belongs to the symmetric or antisymmetric
square of $R$, and with $q$ raised to the power of the Casimir eigenvalue $\varkappa_Y = \sum_{(i,j)\in Y} \big(i-j\big)$,
representing the eigenvalues of the quantum ${\cal R}$-matrix in the representation $Y$
of quantum dimension
\be
d_Y = {\rm dim}_q(Y)={\rm Schur}_Y\left(p_k=\{A^k\}/\{q^k\}\right)=
\prod_{(i,j)\in Y} \frac{\{Aq^{i-j}\}}{\{q^{1+{\rm arm}(i,j)+{\rm leg}(i,j)}\}},
\ee
where, as usual, $\{x\} = x-x^{-1}$.
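To make (\ref{2straH}) fully explicit, the following sympy sketch (our own illustration, not part of the original text) computes quantum dimensions from the product formula above, with the content $i-j$ read as (column $-$ row), and reproduces the reduced trefoil HOMFLY for $R=[1]$, $m=3$.
\begin{verbatim}
from sympy import symbols, simplify

q, A = symbols('q A')
br = lambda x: x - 1/x                       # {x} = x - x^{-1}

def boxes(Y):                                # Y = partition, e.g. [2, 2]
    return [(r, c) for r, l in enumerate(Y, 1) for c in range(1, l + 1)]

def d_q(Y):                                  # quantum dimension
    Yt = [sum(1 for l in Y if l >= c) for c in range(1, Y[0] + 1)]
    num = den = 1
    for r, c in boxes(Y):
        arm, leg = Y[r - 1] - c, Yt[c - 1] - r
        num *= br(A * q**(c - r))
        den *= br(q**(1 + arm + leg))
    return num / den

kappa = lambda Y: sum(c - r for r, c in boxes(Y))

# 2-strand torus knot, R = [1]: Y runs over [2] (eps = +1, symmetric
# square) and [1,1] (eps = -1, antisymmetric square); kappa_R = 0.
def H2(m):
    terms = [(d_q([2]), 1, kappa([2])), (d_q([1, 1]), -1, kappa([1, 1]))]
    return simplify(sum(d * (e * q**k / A)**m for d, e, k in terms)
                    / d_q([1]))

print(simplify(H2(3) - (A**-2 * (q**2 + q**-2) - A**-4)))  # 0: trefoil
\end{verbatim}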
For multi-strand braids, the {\it evolution factor} raised to the power $m$ is substituted by a more
complicated element of the braid group, expressed via the Racah matrices intertwining the ${\cal R}$-matrices that
act on different pairs of adjacent strands.
According to \cite{MMMkn12}, the link/knot polynomial for a closure of the 3-strand braid
$B^{m_1,n_1|m_2,n_2|\ldots}$ is equal to
\be
H_R^{(m_1,n_1|m_2,n_2|\ldots)} = \sum_{Q\in R^{\otimes 3}}
\ \frac{d_Q}{d_R}\cdot
\Tr_Q \Big\{ {\cal R}_Q^{m_1}{\cal U}_Q {\cal R}_Q^{n_1}{\cal U}^\dagger_Q
{\cal R}_Q^{m_2}{\cal U}_Q {\cal R}_Q^{n_2}{\cal U}^\dagger_Q \ldots\Big\}
\label{3strafla}
\ee
In the following picture $m_1=0,n_1= -2,m_2=2,n_2=-1,m_3=3$:
\bigskip
\unitlength 0.8mm
\linethickness{1pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(145.5,53)(-30,0)
\put(19.5,34.5){\line(1,0){13.25}}
\put(41.25,43.25){\line(1,0){11.25}}
\put(19.25,43){\line(1,0){13.25}}
\put(38.75,35){\line(1,0){13.75}}
\put(61.25,43.25){\line(1,1){8.75}}
\put(70,52){\line(1,0){14.75}}
\put(18.5,52){\line(1,0){41}}
\multiput(59.5,52)(.033505155,-.043814433){97}{\line(0,-1){.043814433}}
\put(58.25,35.25){\line(1,0){33.75}}
\multiput(92,35.25)(.033505155,.038659794){97}{\line(0,1){.038659794}}
\multiput(64.5,45)(.03289474,-.04605263){38}{\line(0,-1){.04605263}}
\put(65.75,43.25){\line(1,0){19}}
\multiput(84.5,43.5)(.0346153846,.0336538462){260}{\line(1,0){.0346153846}}
\multiput(84.75,52)(.03370787,-.03651685){89}{\line(0,-1){.03651685}}
\multiput(52.5,43)(.033653846,-.046474359){156}{\line(0,-1){.046474359}}
\multiput(52.5,35)(.03353659,.03353659){82}{\line(0,1){.03353659}}
\multiput(56.75,39)(.035447761,.03358209){134}{\line(1,0){.035447761}}
\multiput(32.25,43)(.033602151,-.041666667){186}{\line(0,-1){.041666667}}
\multiput(32.75,34.75)(.03333333,.03333333){75}{\line(0,1){.03333333}}
\put(37,39){\line(1,1){4.25}}
\put(99.75,35.25){\line(1,0){45.75}}
\multiput(100,35.5)(-.0336990596,.0352664577){319}{\line(0,1){.0352664577}}
\multiput(97.25,41)(.0336363636,.04){275}{\line(0,1){.04}}
\put(106.5,52){\line(1,0){7.75}}
\put(121.25,44){\line(1,0){6.75}}
\put(128,44){\line(5,6){7.5}}
\put(135.5,53){\line(1,0){8.25}}
\put(93.25,52.25){\line(1,0){5.75}}
\multiput(99,52.25)(.03353659,-.04268293){82}{\line(0,-1){.04268293}}
\multiput(103,47)(.03333333,-.05){60}{\line(0,-1){.05}}
\put(105,44){\line(0,1){0}}
\put(105,44){\line(1,0){9.5}}
\multiput(114.5,44)(.033632287,.036995516){223}{\line(0,1){.036995516}}
\put(122,52.25){\line(1,0){5.25}}
\multiput(127.25,52.25)(.03353659,-.03963415){82}{\line(0,-1){.03963415}}
\multiput(131.5,47)(.03333333,-.04166667){60}{\line(0,-1){.04166667}}
\put(133.5,44.5){\line(1,0){10.75}}
\multiput(114.25,52.25)(.03370787,-.03651685){89}{\line(0,-1){.03651685}}
\multiput(121,44)(-.03333333,.04666667){75}{\line(0,1){.04666667}}
\label{3strand}
\end{picture}
\vspace{-2.4cm}
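Given the Racah data, (\ref{3strafla}) amounts to a few lines of numpy. The sketch below is our own schematic illustration: \texttt{data} is assumed to map each $Q$ to its quantum dimension, the diagonal of ${\cal R}_Q$ and the matrix ${\cal U}_Q$, all numeric, i.e. at fixed values of $A$ and $q$.
\begin{verbatim}
import numpy as np

def homfly_3strand(word, data, d_R):
    """word = [m1, n1, m2, n2, ...] (even length) for the braid
    B^{m1,n1|m2,n2|...}; data[Q] = (d_Q, eigs_Q, U_Q), where eigs_Q is
    the diagonal of R_Q and U_Q the Racah matrix."""
    H = 0.0
    for d_Q, eigs, U in data.values():
        M = np.eye(len(eigs), dtype=complex)
        for i, p in enumerate(word):
            M = M @ np.diag(np.asarray(eigs, dtype=complex) ** p)
            M = M @ (U if i % 2 == 0 else U.conj().T)  # ... U R^n U^+ ...
        H += d_Q * np.trace(M)
    return H / d_R
\end{verbatim}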
Actually, (\ref{2straH}) is a particular case of (\ref{3strafla})
for the one-parametric subfamily $(m,0)$, and evolution in any parameter
$n$ can be described by its simple generalization:
\be
H_R^{(m,n_1|m_2,n_2|\ldots)} = \sum_{Y\in R\otimes R} \frac{d_Y}{d_R} \cdot
\left(\frac{\epsilon_Y q^{\varkappa_Y}}{q^{4\varkappa_R}A^{|R|}}\right)^m \cdot
C_{R,Y}^{(\cdot,n_1|m_2,n_2|\ldots)}(A,q)
\label{1famevo}
\ee
with correction coefficients $C_{R,Y}$ depending on all other parameters of the braid.
Although they are not just unity as in (\ref{2straH}), they often have a rather simple
and comprehensible structure, see \cite{mmms31} and eq.(\ref{C}) below for examples.
A similar formula for the double-parameter evolution
\be
H_R^{(m_1,n_1|m_2,n_2|\ldots)} = \sum_{Y_1,Y_2\in R\otimes R}
\left(\frac{\epsilon_{Y_1} q^{\varkappa_{Y_1}}}{q^{4\varkappa_R}A^{|R|}}\right)^{m_1}
\left(\frac{\epsilon_{Y_2} q^{\varkappa_{Y_2}}}{q^{4\varkappa_R}A^{|R|}}\right)^{m_2}\cdot
h_{R,Y_1,Y_2}^{(\cdot,n_1|\cdot,n_2|\ldots)}(A,q)
\label{2famevo}
\ee
plays the central role in the ${\cal U} - {\cal S}$ relation of \cite{mmmsEI}.
\subsection{Specification to $R=[2,2]$}
The newly calculated Racah matrices allow us to consider the case $R=[2,2]$.
So far the HOMFLY polynomials were available in this representation only for peculiar torus
knots, where the Rosso-Jones formula \cite{RJ,DMMSS} provides an exhaustive
generalization of (\ref{2straH}).
For a new list of $H_{[22]}$ for all knots from the Rolfsen table (i.e. up to 10 crossings) that have a 3-strand braid representation,
see \cite{knotebook}.
They all satisfy the factorization properties \cite{DMMSS,IMMMfe,chi}
\be
H_R(q=1,A) \ =\ H_{[1]}(q=1,A)^{|R|}
\ee
and \cite{Konfact}
\be
H_{[22]}=H_{[31]}=H_{[4]} = H_{[2]}^2 \ \ \ \ &{\rm at} \ \ q^4=1 \nn \\
H_{[22]}=H_{[4]} \ \ \ \ &{\rm at} \ \ q^6=1
\ee
Note that, since $R=[2,2]$ is not a single-hook diagram,
there is {\it no} simple statement about the colored Alexander polynomial at $A=1$.
Also, the properties of the perturbative \cite{Kont} and genus (Hurwitz) \cite{MMS} expansions,
described in sec.6 of \cite{mmms31} remain true,
though the checks are limited by the same reasons as in that paper.
As to the differential expansion \cite{DGR,IMMMfe,evo,arthdiff}, it is an interesting separate issue.
For the rectangular diagrams $R=[r^s]$, the elementary representation theory predicts that
\be
H^{\cal K}_{[r^s]} -1 \sim \{Aq^r\}\{A/q^s\}
\ee
but the next terms are not so straightforward.
We give here just a simple example for the figure eight knot,
which was the first origin of insights in this topic for symmetric representations \cite{IMMMfe}:
\be
\!\!\!\!\!\!
H_{[22]}^{4_1} = 1 + \{Aq^2\}\{A/q^2\}\Big\{[2]^2 - [3]\Big(\{Aq^3\}\{A/q\}+\{Aq\}\{A/q^3\}\Big)
+ \nn \\
+ [2]^2\{Aq^3\}\{Aq\}\{A/q\}\{A/q^3\} + \{Aq^3\}\{Aq^2\}\{Aq\}\{A/q\}\{A/q^2\}\{A/q^3\}\Big\}
\ee
The last thing we do in this letter
is a concise description of $H_{[22]}$ for a simple, but important sub-family of 3-strand braids.
\subsection{Evolution for $(m,-1|1,-1)$}
This family contains a number of interesting prime knots:
$4_1,5_2,6_2,7_3,8_2,9_3,10_2,\ldots$, corresponds to the pretzel knots $(m,\bar 2,1)$ \cite{MMSpret,mmms31}
and is described by the formula (\ref{1famevo}).
Our knowledge of the Racah matrices allows us to evaluate the correction coefficients $C$
for $R=[2,2]$ (we keep only diagram index $Y$ to simplify the formulas):
\be
H_{[2,2]}^{(m,-1\,|\,1,-1)}
= \frac{A^{-4m}}{\{q\}^4d_{[22]}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&\Big(d_{[44]}C_{[44]}\cdot q^{8m} + d_{[431]}C_{[431]}\cdot (-q^4)^m +
d_{[422]}C_{[422]}\cdot q^{2m} + \nn \\
&+ d_{[3311]}C_{[3311]}\cdot q^{-2m} + d_{[3221]}C_{[3221]}\cdot (-q^{-4})^m
+ d_{[2222]}C_{[2222]}\cdot q^{-8m}\Big)
\ee
and
\be\label{C}
\begin{array}{cccccc}
C_{[44]} =& A^8 &- q^2[2]^2A^6\{q\} &+ q^4[3][2]A^4\{q\}^2\cdot (q+q^{-1}[3])
&-q^{6}[3][2]^3A^2\{q\}^3&+q^8[3][2]^2\{q\}^4\nn \\
C_{[431]} =& A^8 &- q^2[2]A^6\{q\}([3]-q^{-4})& + q^2[3]^2[2]A^4\{q\}^2\cdot (q-q^{-1} )
&-q^{2}[3][2]A^2\{q\}^3(q^4-[3])&-q^4[3][2]^2\{q\}^4\nn \\
C_{[422]} = &A^8 &+ q^{-2}[2]A^6\{q\}(1-q^5[2]) &+ q^4[3][2]A^4\{q\}^2\cdot (q-q^{-2}[2] )
&+q^{4}[3][2]A^2\{q\}^3(1-q^{-5}[2])&+q^2[3][2]^2\{q\}^4\nn \\
C_{[3311]} =& A^8 &- q^2[2]A^6\{q\}(1-q^{-5}[2]) &+ q^{-4}[3][2]A^4\{q\}^2\cdot (q^{-1}-q^{2}[2] )
&-q^{-4}[3][2]A^2\{q\}^3(1-q^{5}[2])&+q^{-2}[3][2]^2\{q\}^4\nn \\
C_{[3221]} =& A^8 &+ q^{-2}[2]A^6\{q\}([3]-q^{4})& + q^2[3]^2[2]A^4\{q\}^2\cdot (q^{-1}-q )
&+q^{-2}[3][2]A^2\{q\}^3(q^{-4}-[3])&-q^{-4}[3][2]^2\{q\}^4\nn \\
C_{[2222]} = &A^8 &+ q^{-2}[2]^2A^6\{q\}& + q^{-4}[3][2]A^4\{q\}^2\cdot (q^{-1}+q [3])
&+q^{-6}[3][2]^3A^2\{q\}^3&+q^{-8}[3][2]^2\{q\}^4
\end{array}
\ee
For a similar result for the (anti)symmetric representations and for $R=[3,1]$ see
\cite{mmms31}, for extension from (\ref{1famevo}) to (\ref{2famevo}) see \cite{mmmsEI}.
\section{Conclusion}
To conclude, we reported the results of the inclusive Racah matrix calculation for
the representation $R=[2,2]$.
This is a nice example, where calculations are already complicated, but the outcome
is relatively concise and can be presented in a short communication.
Besides being a step forward in solving a long-standing problem in theoretical physics,
the result has an immediate application to the study of colored knot polynomials,
which generalize conformal blocks and are in the center of attention in modern studies.
The explicit calculation confirms some of the existing conjectures in this field and
poses new questions about the genus, Hurwitz and differential expansions,
important subjects in many branches of quantum field theory in various dimensions.
\section*{Acknowledgements}
This work was funded by the Russian Science Foundation (Grant No.16-11-10291).
\section{Introduction}
For any graph $G$ let $V$ denote the set of vertices where $|V| = n$, $E$ denote the set of edges where $|E| = m$, $A$ denote the adjacency matrix, $\chi(G)$ denote the chromatic number and $\omega(G)$ the clique number. Let $\mu_1 \ge \mu_2 \ge ... \ge \mu_n$ denote the eigenvalues of $A$, and then the inertia of $G$ is the ordered triple $(n^+, n^0, n^-)$, where $n^+, n^0$ and $n^-$ are the numbers of positive, zero and negative eigenvalues of $A$, including multiplicities. Note that $\mathrm{rank}(A) = n^+ + n^-$ and $\mathrm{nullity}(A) = n^0$. Let $s^+$ and $s^-$ denote the sum of the squares of the positive and negative eigenvalues of $A$, respectively.
Let $D$ be the diagonal matrix of vertex degrees, and let $L = D - A$ denote the Laplacian of $G$ and $Q = D + A$ denote the signless Laplacian of $G$. The eigenvalues of $L$ are $\theta_1 \ge ... \ge \theta_n = 0$ and the eigenvalues of $Q$ are $\delta_1 \ge ... \ge \delta_n$.
Let $\chi_q(G)$ and $\chi_q^{(r)}(G)$ denote the quantum and rank-$r$ quantum chromatic numbers, as defined by Cameron \emph{et al} \cite{cameron07}, where $\chi_q(G) = \min_r{(\chi_q^{(r)}(G))}$. It is evident that $\chi_q(G) \le \chi(G)$, and Cameron \emph{et al} \cite{cameron07} exhibit a graph on 18 vertices and 44 edges with chromatic number 5 and quantum chromatic number 4. Mancinska and Roberson \cite{mancinska162} have subsequently found a graph on 14 vertices with $\chi(G) > \chi_q(G)$, and they suspect this is the smallest possible example.
It is helpful to have a purely combinatorial definition of the quantum chromatic number, and the following definition is due to \cite[Definition 1]{mancinska162}.
For a positive integer $c$, let $[c]$ denote the set $\{0,1,\ldots,c-1\}$. For $d>0$, let $I_d$ and $0_d$ denote the identity and zero matrices in $\mathbb{C}^{d\times d}$.
\begin{dfn}
A quantum $c$-coloring of the graph $G=(V,E)$ is a collection of orthogonal projectors $\{ P_{v,k} : v\in V, k\in [c]\}$ in
$\mathbb{C}^{d\times d}$ such that
\begin{itemize}
\item for all vertices $v\in V$
\begin{eqnarray}
\sum_{k\in[c]} P_{v,k} & = & I_d \quad\quad \mathrm{(completeness)} \label{eq:complete}
\end{eqnarray}
\item for all edges $vw\in E$ and for all $k\in[c]$
\begin{eqnarray}
P_{v,k} P_{w,k} & = & 0_d \quad\quad \mathrm{(orthogonality)} \label{eq:orthogonal}
\end{eqnarray}
\end{itemize}
The quantum chromatic number $\chi_q(G)$ is the smallest $c$ for which the graph $G$ admits a quantum $c$-coloring for some dimension $d>0$.
\end{dfn}
According to the above definition, any classical $c$-coloring can be viewed as a $1$-dimensional quantum coloring, where we set $P_{v,k}=1$ if vertex $v$ has color $k$ and
$P_{v,k}=0$ otherwise. Therefore, quantum coloring is a relaxation of classical coloring. As noted in \cite{mancinska162}, it is surprising that the quantum chromatic number can be strictly and even exponentially smaller than the chromatic number for certain families of graphs.
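As a concrete illustration of the definition (ours, not taken from the cited papers), the following \texttt{numpy} sketch checks the completeness and orthogonality conditions for a given family of projectors; the function name \texttt{is\_quantum\_coloring} and the $d=1$ example of the $4$-cycle are hypothetical choices for demonstration.
\begin{verbatim}
import numpy as np

def is_quantum_coloring(P, A, c, d, tol=1e-10):
    # P[v][k]: d x d projector for vertex v, color k; A: adjacency matrix.
    n = A.shape[0]
    for v in range(n):                     # completeness at every vertex
        if not np.allclose(sum(P[v][k] for k in range(c)),
                           np.eye(d), atol=tol):
            return False
    for v in range(n):                     # orthogonality on every edge
        for w in range(n):
            if A[v, w]:
                for k in range(c):
                    if not np.allclose(P[v][k] @ P[w][k], 0.0, atol=tol):
                        return False
    return True

# The classical 2-coloring of the 4-cycle, viewed as a d = 1 quantum coloring.
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
color = [0, 1, 0, 1]
P = [[np.array([[1.0]]) if color[v] == k else np.array([[0.0]])
      for k in range(2)] for v in range(4)]
print(is_quantum_coloring(P, A, c=2, d=1))   # True
\end{verbatim}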
\section{Spectral lower bounds for the chromatic number}
Most of the known spectral lower bounds for the chromatic number can be summarized as follows:
\begin{equation}\label{bounds}
1 + \max\left(\frac{\mu_1}{|\mu_n|} , \frac{2m}{2m- n\delta_n} , \frac{\mu_1}{\mu_1 - \delta_1 + \theta_1} , \frac{n^\pm}{n^\mp} , \frac{s^\pm}{s^\mp}\right) \le \chi(G) ,
\end{equation}
where, reading from left to right, these bounds are due to Hoffman \cite{hoffman70}, Lima \emph{et al} \cite{lima11}, Kolotilina \cite{kolotilina11}, Elphick and Wocjan \cite{elphick17}, and Ando and Lin \cite{ando15}. It should be noted that Nikiforov \cite{nikiforov07} pioneered the use of non-adjacency matrix eigenvalues to bound $\chi(G)$.
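For readers who wish to experiment, all five bounds are straightforward to evaluate numerically. The following \texttt{numpy} sketch (ours, not part of the cited proofs) computes them from an adjacency matrix; the $5$-cycle with $\chi=3$ serves as a sanity check.
\begin{verbatim}
import numpy as np

def spectral_bounds(A):
    # The five lower bounds on chi(G) collected in (1), from left to right.
    n = A.shape[0]
    m = A.sum() / 2.0
    D = np.diag(A.sum(axis=1))
    mu = np.sort(np.linalg.eigvalsh(A))[::-1]          # adjacency spectrum
    theta = np.sort(np.linalg.eigvalsh(D - A))[::-1]   # Laplacian
    delta = np.sort(np.linalg.eigvalsh(D + A))[::-1]   # signless Laplacian
    tol = 1e-9
    npos, nneg = np.sum(mu > tol), np.sum(mu < -tol)
    spos, sneg = np.sum(mu[mu > tol]**2), np.sum(mu[mu < -tol]**2)
    return (1 + mu[0] / abs(mu[-1]),                   # Hoffman
            1 + 2*m / (2*m - n*delta[-1]),             # Lima et al
            1 + mu[0] / (mu[0] - delta[0] + theta[0]), # Kolotilina
            1 + max(npos/nneg, nneg/npos),             # inertia
            1 + max(spos/sneg, sneg/spos))             # Ando-Lin

A = np.zeros((5, 5))                                   # the 5-cycle, chi = 3
for i in range(5):
    A[i, (i+1) % 5] = A[(i+1) % 5, i] = 1
print(spectral_bounds(A))                              # all values <= 3
\end{verbatim}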
Let $c$ denote the number of colors used in a (classical) coloring. The authors (\cite{wocjan13}, \cite{elphick15}, \cite{elphick17}) provided proofs of all of the bounds in (\ref{bounds}) except the last one using the following equality:
\begin{equation}\label{conversion}
\sum_{\ell\in[c]} U^\ell A (U^{\dagger})^\ell = 0_n,
\end{equation}
where $U$ is a diagonal unitary matrix in $\mathbb{C}^{n\times n}$ whose entries are $c$th roots of unity and $\dagger$ denotes the conjugate transpose. The last bound in (\ref{bounds}) was proved in \cite{ando15} essentially using the following equality:
\begin{equation}\label{partition}
\sum_{k\in[c]} P_k A P_k = 0_n\,,
\end{equation}
where $P_k$ are orthogonal projectors in $\mathbb{C}^{n\times n}$ that are diagonal in the \emph{standard basis} and their sum $\sum_{k\in[c]}P_k$ is equal to the identity matrix $I_n$.
A quantum $c$-coloring is an operator relaxation of a classical $c$-coloring. The latter corresponds to the special case when the dimension $d$ of the relevant Hilbert space is $1$.
We will show that the existence of a quantum $c$-coloring in dimension $d$ implies the existence of suitable orthogonal projectors $P_k$ and a suitable unitary matrix $U$ in $\mathbb{C}^{n\times n}\otimes\mathbb{C}^{d\times d}$ such that the above equalities hold for $A\otimes I_d$. Once these equalities are established, we can use the same approaches as in the above papers to prove that all bounds in (\ref{bounds}) are also lower bounds for the quantum chromatic number.
We note that all bounds are also valid for weighted adjacency matrices of the form $W \circ A$, where $W$ is an arbitrary Hermitian matrix and $\circ$ denotes the Hadamard product (also called the Schur product).
\section{Pinching and twirling}
We start by defining two operations from linear algebra: pinching and twirling.
\begin{rem}
The following observation is fairly obvious but important. Let $\{Q_k : k \in[c]\}$ be any collection of orthogonal projectors in $\mathbb{C}^{m\times m}$ that form a resolution of the identity matrix, that is,
\[
\sum_{k\in[c]} Q_k = I_m\,.
\]
Then, the orthogonal projectors are necessarily mutually orthogonal, that is, $Q_k Q_\ell = 0$ for $k,\ell\in[c]$ with $k\neq \ell$.
\end{rem}
The following definition of pinching can be found in \cite[Problem II.5.5.]{bhatiaBook}.
\begin{dfn}[Pinching]\label{dfn:pinching}
Let $\{Q_k : k \in [c]\}$ be any collection of orthogonal projectors in $\mathbb{C}^{m\times m}$ that form a resolution of the identity matrix.
Then, the operation $\mathcal{C}$ that maps an arbitrary matrix $X\in\mathbb{C}^{m \times m}$ to
\[
\mathcal{C}(X) = \sum_{k\in[c]} Q_k X Q_k
\]
is called pinching. We say that the pinching $\mathcal{C}$ annihilates $X$ if $\mathcal{C}(X)=0_m$.
\end{dfn}
Let $\{e_i : i\in [m]\}$ denote the standard basis of $\mathbb{C}^m$. The basis vector $e_i$ has $1$ in the $i$th position and $0$ in all other positions.
\begin{rem}[Partitioning and pinching]\label{rem:partitioning}
Assume that the row and column indices of matrices $X\in\mathbb{C}^{m\times m}$ are partitioned into the following $c$ nonempty sets $S_k=\{s_k,\ldots,s_{k+1}-1\}$ for $k\in[c]$ for
given $0=s_0<s_1<\ldots<s_{c-1}<s_{c}=m$. Let
\[
X=\left(
\begin{array}{cccc}
X_{0,0} & X_{0,1} & \cdots & X_{0,c-1} \\
X_{1,0} & X_{1,1} & \cdots & X_{1,c-1} \\
\vdots & \vdots & \ddots & \vdots \\
X_{c-1,0} & X_{c-1,1} & \cdots & X_{c-1,c-1}
\end{array}
\right)
\]
be the corresponding partitioned matrix.
Let $\{ P_k : k \in [c]\}$ be a collection of projectors in $\mathbb{C}^{m\times m}$, where $P_k$ denote the projectors onto the subspaces
\[
\mathrm{span}\{ e_i : i \in S_k \}.
\]
Then, $P_k X P_\ell$ correspond to the submatrices $X_{k,\ell}$ of the above partition of $X$. Let $\mathcal{C}$ be the pinching
defined by the collection $\{ P_k : k \in [c] \}$ of the above orthogonal projectors. Then,
\[
\mathcal{C}(X) =
\left(
\begin{array}{cccc}
X_{0,0} & 0 & \cdots & 0 \\
0 & X_{1,1} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & X_{c-1,c-1}
\end{array}
\right)\,.
\]
\end{rem}
\begin{dfn}[Twirling]
Let $\{U_\ell : \ell \in [c]\}$ be a collection of arbitrary unitary matrices in $\mathbb{C}^{m\times m}$. Borrowing terminology from quantum information theory,
we call the operation $\mathcal{D}$ that maps an arbitrary matrix $X\in\mathbb{C}^{m\times m}$ to
\[
\mathcal{D}(X) = \frac{1}{c} \sum_{\ell\in [c]} U_\ell X U_\ell^\dagger
\]
twirling. We say that the twirling $\mathcal{D}$ annihilates $X$ if $\mathcal{D}(X)=0_m$.
\end{dfn}
It was shown in \cite{bhatia} that twirling can be constructed from pinching in a straightforward way so that both operations have the same effect. Observe that in this construction the unitary matrices $U_\ell$ above can be chosen to be powers of one unitary matrix $U$, that is, we have $U_\ell=U^\ell$. The special case when there are only two projectors is mentioned in \cite[Problem II.5.4]{bhatiaBook}.
\begin{lem}\label{lem:twirling}
It is known that pinching $\mathcal{C}$ defined in Definition~\ref{dfn:pinching} can also be realized as twirling $\mathcal{D}$ as follows.
Let $\omega=e^{2\pi i/c}$ be a $c$th root of unity and
\[
U = \sum_{k\in [c]} \omega^{k} Q_k\,.
\]
Then, the twirling defined by
\begin{equation}\label{eq:averaging}
{\mathcal D}(X) = \frac{1}{c} \sum_{\ell\in[c]} U^\ell X (U^{\dagger})^\ell
\end{equation}
satisfies
\[
{\mathcal C}(X) = {\mathcal D}(X)
\]
for all matrices $X\in\mathbb{C}^{m\times m}$.
\end{lem}
\proof{The $\ell$th power of $U$ is equal to
\[
U^\ell = \sum_{k\in [c]} \omega^{k\cdot \ell} Q_k
\]
since the projectors $Q_k$ are mutually orthogonal to each other. We obtain
\begin{eqnarray*}
{\mathcal D}(X)
& = &
\frac{1}{c} \sum_{\ell\in[c]} U^\ell X (U^{\dagger})^\ell \\
& = &
\frac{1}{c} \sum_{k,k'\in[c]} \sum_{\ell\in[c]} \omega^{(k-k')\cdot \ell} Q_k X Q_{k'} \\
& = &
\sum_{k\in[c]} Q_k X Q_k = {\mathcal C}(X).
\end{eqnarray*}
In the last step, we use that $\sum_{\ell\in[c]} \omega^{(k-k')\cdot \ell}=c\cdot\delta_{k,k'}$, where $\delta_{k,k'}$ denotes the Kronecker delta.
}
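A quick numerical check of Lemma~\ref{lem:twirling} (our illustration, with an arbitrary coordinate resolution of the identity) confirms that the pinching and the twirling agree:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, c = 6, 3
# Resolution of identity: coordinate projectors onto blocks {0,1},{2,3},{4,5}.
Q = [np.diag([1.0 if i // 2 == k else 0.0 for i in range(m)])
     for k in range(c)]
X = rng.standard_normal((m, m))

pinch = sum(Qk @ X @ Qk for Qk in Q)
omega = np.exp(2j * np.pi / c)
U = sum(omega**k * Q[k] for k in range(c))
twirl = sum(np.linalg.matrix_power(U, l) @ X
            @ np.linalg.matrix_power(U.conj().T, l) for l in range(c)) / c
print(np.allclose(pinch, twirl))   # True: C(X) = D(X)
\end{verbatim}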
\section{Pinching from quantum coloring}
We will show how to construct pinchings from quantum colorings. In particular, we will show that if there exists a quantum $c$-coloring
in dimension $d$, then there exists a pinching with $c$ orthogonal projectors that annihilates $A\otimes I_d$.
Let $\{e_v : v \in V\}$ denote the standard basis in $\mathbb{C}^n$, where $n=|V|$.
Denote the entries of $A$ by $a_{uv}$, where $u,v\in V$ enumerate the rows and columns, respectively. We have
\[
A = \sum_{v,w\in V} a_{vw} e_v e_w^\dagger\,,
\]
where $a_{vw}=e_v^\dagger A e_w$.
\begin{thm}\label{thm:main}
Let $\{ P_{v,k} : v\in V, k\in [c]\}$ be an arbitrary quantum $c$-coloring of $G$ in $\mathbb{C}^d$. Then, the
following block-diagonal orthogonal projectors
\[
P_k = \sum_{v\in V} e_v e_v^\dagger \otimes P_{v,k} \in \mathbb{C}^{n\times n}\otimes\mathbb{C}^{d\times d}
\]
form a resolution of the identity matrix. Moreover, the corresponding pinching operation $\mathcal{C}$
\begin{itemize}
\item annihilates $A\otimes I_d$, and
\item leaves $E\otimes I_d$ invariant for all diagonal matrices $E\in\mathbb{C}^{n\times n}$.
\end{itemize}
\end{thm}
\proof{They form a resolution of the identity matrix because
\[
\sum_{k\in[c]} P_k = \sum_{v\in V} e_v e_v^\dagger \otimes \sum_{k\in [c]} P_{v,k} = \sum_{v\in V} e_v e_v^\dagger \otimes I_d = I_n \otimes I_d\,,
\]
where we make use of the completeness condition (\ref{eq:complete}) that $\sum_{k\in [c]} P_{v,k} = I_d$ for all vertices $v\in V$.
The corresponding pinching operation $\mathcal{C}$ annihilates $A \otimes I_d$ because
\begin{eqnarray*}
& &
\mathcal{C}(A\otimes I_d) \\
& = &
\sum_{k\in[c]} P_k (A \otimes I_d) P_k \\
& = &
\sum_{k\in[c]} \left(\sum_{v\in V} e_v e_v^\dagger \otimes P_{v,k}\right) (A \otimes I_d) \left( \sum_{w\in V} e_w e_w^\dagger \otimes P_{w,k} \right) \\
& = &
\sum_{k\in[c]} \sum_{v,w\in V} a_{vw} \cdot e_v e_w^\dagger \otimes P_{v,k} P_{w,k} \\
& = &
\sum_{k\in[c]} \left( \sum_{vw\in E} 1 \cdot e_v e_w^\dagger \otimes 0_d + \sum_{vw\not\in E} 0 \cdot e_v e_w^\dagger \otimes P_{v,k} P_{w,k} \right) \\
& = & 0\,,
\end{eqnarray*}
where we made use of the orthogonality condition (\ref{eq:orthogonal}), $P_{v,k} P_{w,k} = 0_d$ for all $vw\in E$ (or equivalently, for all pairs $v,w\in V$ with $a_{vw}=1$).
The property that $\mathcal{C}$ leaves $E \otimes I_d$ invariant for all diagonal matrices $E\in\mathbb{C}^{n\times n}$ is verified similarly.
}
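To make the construction concrete, the following sketch (ours) builds the block-diagonal projectors $P_k$ from a quantum coloring and verifies the three claimed properties. For simplicity it uses the classical $3$-coloring of $K_3$ lifted to dimension $d=2$ by tensoring with $I_2$, which is a valid, if trivial, quantum coloring.
\begin{verbatim}
import numpy as np

n, c, d = 3, 3, 2
A = np.ones((n, n)) - np.eye(n)            # adjacency matrix of K3
color = [0, 1, 2]                          # proper classical 3-coloring
Pvk = [[np.eye(d) if color[v] == k else np.zeros((d, d))
        for k in range(c)] for v in range(n)]

P = []
for k in range(c):
    Pk = np.zeros((n * d, n * d))
    for v in range(n):
        Pk[v*d:(v+1)*d, v*d:(v+1)*d] = Pvk[v][k]   # block-diagonal P_k
    P.append(Pk)

AI = np.kron(A, np.eye(d))
E = np.kron(np.diag([1.0, 2.0, 3.0]), np.eye(d))
print(np.allclose(sum(P), np.eye(n * d)))             # resolution of identity
print(np.allclose(sum(Pk @ AI @ Pk for Pk in P), 0))  # annihilates A (x) I_d
print(np.allclose(sum(Pk @ E @ Pk for Pk in P), E))   # fixes E (x) I_d
\end{verbatim}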
Theorem~\ref{thm:main} shows that the existence of a quantum $c$-coloring of a graph $G=(V,E)$ with adjacency matrix $A$ in dimension $d$ implies the existence of a pinching operation $\mathcal{C}$ that annihilates $A\otimes I_d$ and leaves $E\otimes I_d$ invariant for all diagonal matrices $E$. The converse direction is shown in the remark below \cite{robertson}.
\begin{rem}\label{rem:converse}
Theorem 4.25 in \cite{watrous} shows that the fixed points of any completely positive trace-preserving unital map commute with the Kraus operators of the map. In the present case, the completely positive trace-preserving map is the pinching $\mathcal{C}$, the Kraus operators are the orthogonal projectors $P_k$, and the fixed points of $\mathcal{C}$ are $E\otimes I_d$, where $E\in\mathbb{C}^{n\times n}$ is an arbitrary diagonal matrix. This result implies that the $P_k$ commute with $E \otimes I_d$, which in turn implies that the $P_k$ are block diagonal. These blocks are indexed by the vertices $v$ of $G$, so we can refer to them as $P_{v,k}$ for $v \in V$.
Using the block-diagonal nature of the $P_k$, it is now easy to show that these $P_{v,k}$ yield a quantum $c$-coloring of $G$:
\begin{itemize}
\item The property $\sum_{k\in[c]} P_k = I_{nd}$ implies $\sum_{k\in[c]} P_{v,k} = I_{d}$ for $v\in V$, thus giving the completeness condition in (\ref{eq:complete}).
\item The property $\mathcal{C}(A \otimes I) = 0_{nd}$ implies $\sum_{k\in[c]} P_{v,k} P_{w,k} = 0_d$ for all $vw\in E$. Let $\ell\in[c]$ be fixed but arbitrary. Multiplying the latter equation by $P_{v,\ell}$ from the left and by $P_{w,\ell}$ from the right shows that the summand $P_{v,\ell} P_{w,\ell}$ must be zero, thus giving the orthogonality condition in (\ref{eq:orthogonal}).
\end{itemize}
The pinching operation described in this paper can therefore be regarded as an algebraic reformulation of quantum coloring.
\end{rem}
\section{Lower bounds on quantum chromatic number}
Using Theorem~\ref{thm:main} and Lemma~\ref{lem:twirling}, it is possible to show that all the bounds in (\ref{bounds}) are also lower bounds on the quantum chromatic number.
We demonstrate this explicitly for the bound
\[
1 + \frac{2m}{2m- n\delta_n}\,,
\]
where $\delta_n$ is the minimum eigenvalue of the signless Laplacian $Q=D+A$.
Assume that there exists a quantum $c$-coloring in dimension $d$. Let $\{Q_k : k \in [c] \}$ denote the projectors defining a pinching as in Theorem~\ref{thm:main} and let $U^\ell=\sum_{k\in [c]} \omega^{k\cdot \ell} Q_k$ denote the corresponding twirling unitaries as defined in Lemma~\ref{lem:twirling}.
The proof is almost identical to the proof for the chromatic number \cite{elphick15}.
We use the identity $D-Q=-A$. We have:
\begin{eqnarray*}
A\otimes I_d
& = &
\sum_{\ell=1}^{c-1} U^\ell (-A\otimes I_d) (U^{\dagger})^\ell \\
& = &
\sum_{\ell=1}^{c-1} U^\ell \left( (D-Q)\otimes I_d \right) (U^{\dagger})^\ell \\
& = &
(c-1)(D\otimes I_d) - \sum_{\ell=1}^{c-1} U^\ell (Q\otimes I_d) (U^{\dagger})^\ell.
\end{eqnarray*}
Define the column vector $v=\frac{1}{\sqrt{nd}}(1,1,\ldots,1)^T$. Multiply the left and right most sides of the above matrix equation by $v^\dagger$ from the left and by $v$ from the right to
obtain
\[
\frac{2m}{n} = v^\dagger (A\otimes I_d)v = (c-1)\frac{2m}{n} - \sum_{\ell=1}^{c-1} v^\dagger U^\ell (Q \otimes I_d) (U^{\dagger})^\ell v \le (c-1)\frac{2m}{n} - (c-1)\delta_n\,.
\]
This uses that $v^\dagger (A\otimes I_d) v = v^\dagger(D\otimes I_d) v = 2m/n$, which is equal to the sum of all entries of $A\otimes I_d$ and $D\otimes I_d$, respectively, divided by $nd$ due to the special form of $v$, and that $v^\dagger U^\ell (Q\otimes I_d) (U^{\dagger})^\ell v \ge \lambda_{\min}(Q) = \delta_n$.
The other bounds in (\ref{bounds}) can be shown to be lower bounds for $\chi_q(G)$ by similarly modifying the proofs of these bounds so that they apply to $A\otimes I_d$ instead of $A$.
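As an illustration for the classical case $d=1$ (ours), one can verify numerically both the identity used in the first step and the resulting inequality $2m/n \le (c-1)(2m/n - \delta_n)$ for a properly colored $5$-cycle:
\begin{verbatim}
import numpy as np

n, c = 5, 3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i+1) % n] = A[(i+1) % n, i] = 1
color = [0, 1, 0, 1, 2]                      # proper 3-coloring of C_5
omega = np.exp(2j * np.pi / c)
U = np.diag([omega**color[v] for v in range(n)])

rhs = sum(np.linalg.matrix_power(U, l) @ (-A)
          @ np.linalg.matrix_power(U.conj().T, l) for l in range(1, c))
print(np.allclose(A, rhs))                   # A = sum_{l>=1} U^l (-A) U^{-l}

m = A.sum() / 2
Q = np.diag(A.sum(axis=1)) + A               # signless Laplacian
delta_n = np.linalg.eigvalsh(Q).min()
print(2*m/n, (c-1)*(2*m/n - delta_n))        # 2.0 <= 3.236...
\end{verbatim}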
\section{Implications for the quantum chromatic number}
We now discuss some implications of the bounds in (\ref{bounds}) being lower bounds on the quantum chromatic number. In particular, we discuss applications of the inertia bound: $\chi_q(G)\ge 1 + n^\pm/n^\mp$.
Let $\chi_v(G)$ denote the vector chromatic number of $G$ and $\theta(\overline{G})$ denote the Lov\'asz theta function of the complement of $G$. It is known that:
\[
1 + \frac{\mu_1}{|\mu_n|} \le \chi_v(G) \le \theta(\overline{G}) \le \chi_q(G)\,,
\]
where these inequalities (from left to right) are due to Bilu \cite{bilu06}, Karger \emph{et al} \cite{karger98} and Mancinska and Roberson \cite{mancinska16}. So it is already known that the Hoffman lower bound for $\chi(G)$ is also a lower bound for $\chi_q(G)$. Consequently, the Lima \emph{et al}, Kolotilina, Ando and Lin, and inertial bounds are the new ones. Experimentally the inertial bounds usually perform best in this context, so it is these bounds we focus on below. (The Lov\'asz theta bound is in general more difficult to compute than spectral bounds.)
In order for the inertial bounds to reveal potentially new information about $\chi_q(G)$ it is necessary for:
\[
1 + \max\left(\left\lceil\frac{n^+}{n^-}\right\rceil , \left\lceil\frac{n^-}{n^+}\right\rceil\right) > \max{\left(\omega(G) , 1 + \left\lceil\frac{\mu_1}{|\mu_n|}\right\rceil\right)}.
\]
This is the case for many graphs, and we tabulate in Table 1 a few examples.
\begin{table}[ht]
\caption{Inertia vs Hoffman bounds}
\centering
\begin{tabular}{c c c c c c c}
\hline \hline
Graph & $n$ & $\chi$ & $\chi_q$ & Inertia & Hoffman & $\omega$\\[0.5ex]
\hline
Cyclotomic & $13$ & $4$ & $4$ & $3.25$ & $2.51$ & $2$\\
Clebsch & $16$ & $4$ & $4$ & $3.2$ & $2.67$ & $2$\\
Generalised Quadrangle(2,4) & $ 27 $ & $ 6 $ & $ \ge5 $ & $ 4.5 $ & $ 3 $ & $ 3 $\\
Non-Cayley Transitive(28,3) & $ 28 $ & $ 4 $ & $ 4 $ & $ 3.1 $ & $ 2.67 $ & $ 2 $\\
\hline
\end{tabular}
\end{table}
So, for example, the Hoffman/Bilu bound implies that the Clebsch graph has $\chi_q(G) \ge 3$ but the inertial bound implies $\chi_q(G) = 4$. More generally, $\chi_q(G) = \chi(G)$ if the ceiling of the inertia bound equals $\chi(G)$.
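The Clebsch graph entries of Table 1 are easy to reproduce. Assuming the standard realization of the Clebsch graph as the folded $5$-cube (vertices in $GF(2)^4$, adjacent when the XOR of the labels is a unit vector or $(1,1,1,1)$), the following sketch recovers the inertia and Hoffman bounds quoted above:
\begin{verbatim}
import numpy as np
from itertools import product

verts = list(product([0, 1], repeat=4))
gens = {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,1,1,1)}
n = 16
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if tuple(a ^ b for a, b in zip(verts[i], verts[j])) in gens:
            A[i, j] = 1

mu = np.sort(np.linalg.eigvalsh(A))[::-1]      # spectrum: 5, 1^10, (-3)^5
npos = np.sum(mu > 1e-9); nneg = np.sum(mu < -1e-9)
print(1 + max(npos, nneg) / min(npos, nneg))   # inertia bound: 3.2
print(1 + mu[0] / abs(mu[-1]))                 # Hoffman bound: 2.666...
\end{verbatim}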
\section{Conclusion}
Our results can be interpreted as follows. They demonstrate that any existing or future general lower bound on the minimum number of operations required for pinching or twirling to annihilate a given matrix representation of a graph automatically becomes a lower bound on the quantum chromatic number of that graph.
\section{Acknowledgement}
We would like to thank David Roberson for insightful comments on an earlier version of this paper, and in particular for Remark~\ref{rem:converse}.
This research has been partially supported by the National Science Foundation Award \#1525943 ``Is the Simulation of Quantum Many-Body Systems Feasible on the Cloud?''
|
1,116,691,497,684 | arxiv | \section*{Methods summary}
In our experimental setup\cite{Frohlich2011}, we prepare a quantum degenerate Fermi gas of $^{40}$K atoms in a 50/50 mixture of the two lowest hyperfine states $|F=9/2,m_F=-9/2\rangle$ and $|F=9/2,m_F=-7/2\rangle$. We confine the quantum gas to two dimensions in a deep optical lattice formed by a standing wave laser field, preparing approximately 30 layers. The interaction strength between spin-up and spin-down particles is tuned at a Feshbach resonance near 202.1\,Gauss. The photoemission measurement couples the $|F=9/2,m_F=-7/2\rangle$ state to the weakly interacting state $|F=9/2,m_F=-5/2\rangle$ using a radiofrequency photon of frequency $\Omega$ with negligible momentum transfer. We measure the momentum distribution of the transferred atoms in a time-of-flight experiment and average the absorption signal azimuthally to obtain $A(k,\Omega)$, where $k=\sqrt{k_x^2+k_y^2}$.
\section*{methods}
\subsection{Experimental setup}
We evaporatively cool a 50/50 spin mixture of $^{40}$K atoms in the $|F=9/2,m_F=-9/2\rangle\equiv |-9/2\rangle$ and $|F=9/2,m_F=-7/2\rangle\equiv |-7/2\rangle$ states of the hyperfine ground state manifold\cite{Frohlich2011}. After reaching quantum degeneracy in a crossed-beam optical dipole trap with approximately 70000 atoms per spin state, we turn on an optical lattice potential in order to prepare two-dimensional Fermi gases\cite{Gunter2005,Martiyanov2010,Frohlich2011,Dyke2011}. The optical lattice is formed by a horizontally propagating, retro-reflected laser beam of wavelength $\lambda=1064$\,nm, focussed to a waist of 140\,$\mu$m. We increase the laser power over a time of 200\,ms to reach a final potential depth of up to $V_{lat}=83\,E_{rec}$, which is calibrated by intensity modulation spectroscopy. $E_{rec}=h^2/(2 m \lambda^2)$ is the recoil energy. The trapping frequency along the strongly confined direction is $\omega=2 \pi \times 78.5$\,kHz. After loading the optical lattice, we adiabatically reduce the power of the optical dipole trap such that the atoms are confined only by the Gaussian intensity envelope of the lattice laser beams. The radial trapping frequency of the two-dimensional gases is $\omega_\perp=2\pi\times 127$\,Hz for $V_{lat}=83\,E_{rec}$ and we confine on the order of $10^3$ atoms per two-dimensional gas at the center of the trap. Along the axial direction we populate approximately 30 layers of the optical lattice potential with an inhomogeneous peak density distribution. Approximately two thirds of the 2D layers with highest density dominate the measured signal and their relevant energy scales $E_F$, $E_B$, and $\Delta^2/2E_F$ are more than an order of magnitude larger than the trapping frequency $\omega_\perp$. Therefore, finite particle number effects do not influence the measured signal. After evaporation, we adiabatically increase the interaction strength by lowering the magnetic field, at a rate of up to 0.25\,G/ms, to a value near the Feshbach resonance at 202.1\,G. We apply a radio-frequency pulse near 47\,MHz with a Gaussian amplitude envelope with a full width at half maximum of 230\,$\mu$s to transfer atoms from the $|-7/2\rangle$ state to the $|F=9/2,m_F=-5/2\rangle$ state. Atoms in the $|9/2,-5/2\rangle$ state have a two-body s-wave scattering length of 130 Bohr radii with the $|-7/2\rangle$ state and 250 Bohr radii with the $|-9/2\rangle$ state\cite{Stewart2008}. We turn off the optical lattice 100\,$\mu$s after the radiofrequency pulse, switch off the magnetic field and apply a magnetic field gradient to achieve spatial splitting of the three spin components in a Stern-Gerlach experiment. For each run, the magnetic field is calibrated using spin-rotation with an rf pulse of an imbalanced mixture on the $|-9/2\rangle$/$|-7/2\rangle$ transition. The magnetic field accuracy deduced from these measurements is $<3$\,mG. We measure the temperature by ballistic expansion of a weakly interacting gas, and the quoted numbers refer to the average of $T/T_F$ across the whole sample.
\subsection{Determination of the energy threshold $E_{th}$ of the energy distribution curve}
We fit our data with a double-peak fitting function consisting of a Gaussian for the atomic signal and a modified Gumbel function $f(\Omega)=\alpha \exp[-(\Omega-\Omega_0)/b-a \exp(-(\Omega-\Omega_0)/(a b))]$ for the pairing peak. The parameter $\Omega_0$ measures the peak position and the parameters $a$ and $b$ measure skewness and width. For our further analysis, we only use the peak position $\Omega_0$, which does not depend on the line shape function used. From this fit we determine the maximum of the molecular peak $\nu_{max}=\Omega_0$ and the minimum between the atomic and the molecular peak $\nu_{min}$. Between $\nu_1=\nu_{max}$ and $\nu_2=\nu_{min}-2$\,kHz we fit the data with a linear function and determine the zero-crossing of the linear extrapolation as the energy threshold $E_{th}$. We correct the obtained result for our spectral resolution of $1.5$\,kHz, obtained from the width of the Gaussian fits. The data are normalized to the two-body binding energy in vacuum, which we obtain from the transcendental equation\cite{Bloch2008}
\begin{equation}
l_z/a=\int_0^\infty \frac{du}{\sqrt{4 \pi u^3}} \left(1- \frac{\exp(-E_B u/(\hbar \omega))}{\sqrt{(1-\exp(-2 u))/(2 u)}}\right).
\end{equation}
Here, $l_z=\sqrt{\hbar/m\omega}$ and $a$ is the three-dimensional scattering length using the following parameters of the Feshbach resonance: $B_0=202.1$\,G, $\Delta B=7$\,G and $a_{BG}=174\,a_B$ where $a_B$ is the Bohr radius.
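For readers who wish to reproduce this normalization, the transcendental equation is easy to solve numerically. The following sketch (ours, not the analysis code used for the measurements) brackets the root in $\epsilon=E_B/\hbar\omega$ with \texttt{scipy} and recovers the known value $E_B\approx 0.244\,\hbar\omega$ at resonance, i.e. at $l_z/a=0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def lhs(eps):
    # l_z/a as a function of eps = E_B/(hbar*omega); the integrand has an
    # integrable u^(-1/2) endpoint singularity handled adaptively by quad.
    def integrand(u):
        return (1.0 - np.exp(-eps * u)
                / np.sqrt((1.0 - np.exp(-2.0 * u)) / (2.0 * u))) \
               / np.sqrt(4.0 * np.pi * u**3)
    return quad(integrand, 0.0, np.inf, limit=200)[0]

def binding_energy(lz_over_a):
    # Solve lhs(eps) = l_z/a by bisection; lhs is monotonic in eps.
    return brentq(lambda e: lhs(e) - lz_over_a, 1e-6, 50.0)

print(binding_energy(0.0))   # approx 0.244 at resonance (a -> infinity)
\end{verbatim}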
\subsection{Thermal singlet model}
We model our data on the BEC side of the resonance with a thermal ensemble of singlet pairs\cite{Gaebler2010}. The expression for the wave function of the bound state in two dimensions is $\psi_B(r)=\sqrt{2/a_{2D}}K_0(r/a_{2D})$, in which $K_0(x)$ is the modified Bessel function, and for the scattering state is $\psi_q(r) = J_0(qr) -\frac{i f(q)}{4} H^{(1)}_0 (qr)$, in which $J_0(x)$ is the Bessel function of the first kind and $H^{(1)}_0(x)$ is the Hankel function of the first kind\cite{Petrov2001}. $f(q)$ is the scattering amplitude between the state $|-7/2\rangle$ and the final state $|-5/2\rangle$. We compute the momentum resolved rf spectrum for the dissociation from the bound state to the scattering state, averaging over a thermal distribution of the center-of-mass momenta of the initial pairs using Monte-Carlo sampling. From the momentum-resolved rf spectrum we calculate the effective mass $m^*$ and the wave vector $k^*$ using the same fitting routines as for the experimental data. This model of tightly bound pairs in the normal state includes the correct short-range physics but neglects many-body pairing, interactions between atoms and between pairs, as well as quantum statistical effects. Therefore, we do not expect quantitative agreement in the strongly interacting regime or on the BCS side of the resonance.
\vspace{0.5 cm}
We thank A. Georges, C. Kollath, D. Pertot, D. Petrov, M. Randeria, W. Zwerger, and M. Zwierlein for discussions. The work has been supported by {EPSRC} (EP/G029547/1), Daimler-Benz Foundation (B.F.), Studienstiftung, and DAAD (M.F.).
The authors declare that they have no competing financial interests.
The experimental setup was devised and constructed by M.F., B.F., E.V., and M.K., data taking was performed by M.F., B.F., E.V., and M. Kos., data analysis was performed by M.F., B.F., and M.Kos., numerical modelling was performed by B.F., and the manuscript was written by M.K. with contributions from all coauthors.
Correspondence and requests for materials should be addressed to M.K.~(email: [email protected]).
|
1,116,691,497,685 | arxiv | \section{Introduction}
Traditional lattice calculations of quantum field theories often encounter sign problems in the presence of a chemical potential. An excellent example is Quantum Chromodynamics (QCD), where it is impossible to accurately compute quantities at a non-zero baryon density, especially at low temperatures \cite{deForcrand:2010ys}. Over the past decade, ideas like the complex Langevin approach \cite{Seiler:2017wvd} and the Lefschetz thimble approach \cite{Cristoforetti:2012su} have been proposed as potential solutions to sign problems including QCD. When these methods are tested on simple models where exact results are available \cite{Mukherjee:2013aga,Tanizaki:2015rda,Fujii:2015vha}, we not only find potential pitfalls of the methods but also learn new directions to avoid them \cite{Nishimura:2015pba, Bloch:2016jwt,Aarts:2017vrv}. While these ideas have also been able to capture some of the qualitative features of more complex field theories \cite{Aarts:2016qrv,Mukherjee:2014hsa}, in these cases the numerical results are not always compared with benchmark calculations obtained with other methods where the errors can be controlled. An exception to this has been studies of bosonic field theories at finite densities where a controlled Monte Carlo algorithm in the world line representation free of sign problems is available \cite{Cristoforetti:2013wha,Aarts:2010aq}. Producing such benchmark calculations that truly test the method, especially in fermionic quantum field theories with a sign problem and similar to QCD in other aspects, would be helpful and is the main motivation behind our work.
Recently the Lefschetz thimble program got a boost when it was shown that it may be possible to use holomorphic flow in complex field space to sample multiple thimbles rather than perform calculations on a single thimble as was done in the past \cite{Alexandru:2015sua}. The focus has also turned to lattice Thirring models as a prototype example of the physics of QCD \cite{Alexandru:2015sua,Alexandru:2016ejd}. This model has also been studied earlier in higher dimensions using stochastic quantization \cite{Pawlowski:2013gag}. Also, the recent work has computed the average fermion number $\langle N\rangle$ at a small but fixed spatial size $L_X$ as a function of the chemical potential, which is much more sensitive to the important physical scales in the problem, as compared to local densities on large space-time lattices. As shown schematically in Fig.~\ref{fig1}, at low temperatures (or large $L_T$) the plot of $\langle N\rangle$ as a function of the chemical potential $\mu$ is expected to show a series of jumps at critical values of the chemical potential, say $\mu_1,\mu_2,...$, where the average particle number jumps to $N_1,N_2,...$. The values of the $\mu_i$'s and $N_i$'s are related to the physical scales of the problem like binding energies and scattering lengths and should become harder to compute due to sign problems, especially when $\mu_i$ and $N_i$ become large. Encouraged by the fact that some of these quantitative features may be within reach, recently the efforts have turned towards speeding up the calculations on larger lattices using machine learning algorithms \cite{Alexandru:2017czx}. It would indeed be exciting if this program is successful.
\begin{figure}[b]
\includegraphics[width=0.8\linewidth]{figs/fig1.pdf}
\caption{A schematic plot of the particle number as a function of chemical potential for a fixed spatial size. We propose that the values of $\mu_i$ and the corresponding $N_i$'s are easily calculable and can be used as benchmark quantities to validate a method that claims to solve a sign problem.}
\label{fig1}
\end{figure}
The motivation for our work is to help this program by accurately computing the $\mu_i$'s and $N_i$'s for a specific two dimensional lattice Thirring model constructed with staggered fermions. Our model is asymptotically free and a continuum limit can be defined at zero coupling. At nonzero couplings (finite lattice spacing) the fermion in the theory is massive and mimics a baryon, while bosonic excitations made with fermion-antifermion pairs are massless and mimic pions. Thus, the similarities of our model with QCD are striking. Of course the ground state does not break any symmetries and the pions are not really Goldstone bosons as was explained by Witten long ago \cite{Witten:1978qu}, but the fermion mass generation is dynamical like in QCD and from the point of view of sign problems the bosons being lighter than the fermion is also similar to QCD. Interestingly, we can solve the model both in the fermion world line method and the fermion bag approach. In the world line approach we argue in this work that the sign problem is absent with open boundary conditions and zero fermion mass. Thus, in this limit we are able to study large lattices and can accurately compute the critical $\mu_i$'s and $N_i$'s. These could provide a helpful benchmark to test new ideas that claim to solve sign problems in problems similar to QCD.
Our paper is organized as follows. In section \ref{model} we discuss the model we study and the various types of representations that can be used to solve it. In particular we show why the model in the massless limit with open boundary conditions has no sign problem in the worldline formulation. In section \ref{mcmethods} we discuss our Monte Carlo methods, especially the worm algorithm to update the worldline representation and the fermion bag algorithm. In section \ref{results} we discuss the results we have obtained. In particular we define the observables we measure and discuss our results in a variety of parameter ranges. We present our conclusions in section \ref{conclusions}.
\section{The Model} \label{model}
The lattice action of the model we study is given by
\begin{equation}
S = \sum_{x,y} \overline{\chi}_x (M_{x,y} + m\delta_{x,y}) \chi_y + U \sum_{x,\nu} \overline{\chi}_x \chi_{x+\nu} \overline{\chi}_{x+\nu} \chi_x.
\end{equation}
where the matrix $M$ is the massless staggered fermion matrix defined as
\begin{equation}
M_{x,y} = \sum_{\nu} \frac {\eta_{x,\nu}}{2} \left ( e^{\mu \delta_{\nu,0} } \delta_{x+\nu,y} - e^{ - \mu \delta_{\nu,0 } } \delta_{x,y+\nu} \right ),
\end{equation}
where $\mu$ is the chemical potential, $m$ is the fermion mass and $\eta_{x,\nu}$ are the usual staggered phase factors ($\eta_{x,0} = 1$ and $\eta_{x,1} = (-1)^{x_1}$). The four-fermion coupling $U$ can be interpreted as a current-current interaction on neighboring sites and hence the name ``lattice Thirring model''. When $m=0$ the model contains the well known $U(1)$ chiral symmetry of staggered fermions. In the discussion below $L_X$ denotes the number of spatial sites and $L_T$ denotes the number of temporal sites in our two dimensional square lattice. Further, we always use anti-periodic boundary conditions in time, but study the effects of periodic, anti-periodic and open boundary conditions in space.
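For concreteness, a minimal sketch (ours) of the matrix $M + m$ on an $L_X\times L_T$ lattice is given below. We read the staggered phase on spatial hops as depending on the temporal coordinate, and the $\mu=0$ check uses the antisymmetry of $M$, which guarantees $\mathrm{Det}(M)=\mathrm{Pf}(M)^2\geq 0$.
\begin{verbatim}
import numpy as np

def staggered_matrix(LX, LT, mu, m=0.0, bc_x='open'):
    # M + m on an LX x LT lattice, antiperiodic in time;
    # bc_x in {'open', 'periodic', 'antiperiodic'}.
    V = LX * LT
    idx = lambda x, t: x * LT + t
    M = m * np.eye(V)
    for x in range(LX):
        for t in range(LT):
            i = idx(x, t)
            sgn_t = -1.0 if t == LT - 1 else 1.0     # antiperiodic in time
            j = idx(x, (t + 1) % LT)
            M[i, j] += 0.5 * sgn_t * np.exp(mu)      # eta_{x,0} = 1
            M[j, i] -= 0.5 * sgn_t * np.exp(-mu)
            eta = (-1.0) ** t                        # spatial staggered phase
            if x + 1 < LX:
                j = idx(x + 1, t)
                M[i, j] += 0.5 * eta
                M[j, i] -= 0.5 * eta
            elif bc_x != 'open':
                sgn_x = -1.0 if bc_x == 'antiperiodic' else 1.0
                j = idx(0, t)
                M[i, j] += 0.5 * sgn_x * eta
                M[j, i] -= 0.5 * sgn_x * eta
    return M

M0 = staggered_matrix(4, 4, 0.0)
print(np.allclose(M0, -M0.T))          # antisymmetric at mu = 0 ...
print(np.linalg.det(M0) >= -1e-12)     # ... so Det(M) = Pf(M)^2 >= 0
\end{verbatim}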
This model has a long history and has been studied extensively in three space-time dimensions in the auxiliary field formulation \cite{DelDebbio:1995zc,DelDebbio:1997dv} and the fermion bag approach \cite{Chandrasekharan:2009wc,Chandrasekharan:2011mn}. In three dimensions the model with $m=0$ has two phases: a weak coupling phase with massless fermions and a strong coupling phase with spontaneously broken $U(1)$ chiral symmetry, massive fermions and light pions. These phases are separated by a second order critical point, whose properties were studied in the earlier work. In two dimensions this critical point moves to the origin and the massless weak coupling phase disappears. Further, since a continuous chiral symmetry cannot break in two dimensions, the massive fermion phase becomes critical. Thus, the two dimensional model contains massive fermions and critical bosons, where the mass of the fermion can be used to set the lattice spacing. The continuum limit is taken by tuning $U$ towards the origin. As far as we know these features of the two dimensional model with $m=0$ were never studied using the Monte Carlo method even at $\mu=0$ where there is no sign problem. The similarity of the model with QCD makes it an interesting toy model for studies at non-zero chemical potential. At a large value of $m$, this was done recently in two space-time dimensions \cite{Li:2016xci}.
\subsection{The Auxiliary Field Representation}
The traditional approach to solve these models is by rewriting the partition function using an auxiliary field formulation so that it can be tackled by the Hybrid Monte Carlo algorithm. More explicitly
\begin{equation}
Z \ =\ \int \ [d\overline{\chi} d\chi]\ e^{-S} \ =\
\ \int \ [d\overline{\chi} d\chi] \ \int \ [dA]\ e^{-S_{\rm aux}}
\label{partitionfunction}
\end{equation}
where in the last step we have introduced a compact auxiliary field $0 \leq A_{x,\nu} < 2\pi$ associated with the bonds of the lattice and the auxiliary field action
\begin{equation}
S_{\rm aux} = \sum_{x,\nu} \frac{N_F}{g^2}\left ( 1-\cos A_{x,\nu} \right )
+ \sum_{x,y} \overline{\chi}_x (\tilde{M}_{x,y} + m' \delta_{x,y}) \chi_y,
\label{auxact}
\end{equation}
is now a Gaussian in the Grassmann fields. The Dirac matrix $\tilde{M}_{x,y}$ is defined as
\begin{equation}
\sum_{\nu} \frac {\eta_{x,\nu}}{2} \big( e^{i A_{x,\nu} + \mu \delta_{\nu,0}}
\delta_{x+\nu,y} - e^{-i A_{y,\nu} - \mu \delta_{\nu,0} } \delta_{x,y+\nu} \big)
\end{equation}
and the parameters $U$ and $m$ are related to $g$ and $m'$ through the relations
\begin{equation}
U= 0.25 \left( \frac{ \MB{0}{\frac{N_F}{g^2}}} { \MB{1}{\frac{N_F}{g^2}} } \right)^2-0.25,
\quad
m = \left ( \frac{ \MB{0}{\frac{N_F}{g^2}}} { \MB{1}{\frac{N_F}{g^2}} } \right ) m'.
\end{equation}
Here $I_0$ and $I_1$ are the zeroth and first modified Bessel functions. The sign problem in the auxiliary field representation can be traced to the fact that $\mathrm{Det}(\tilde{M} + m')$ does not have any symmetries and can be complex when $\mu \neq 0$, like in QCD.
\subsection{The Fermion Bag Representation}
Can ideas of fermion bags help solve the sign problem present in the auxiliary field approach? In this approach we do not introduce the usual auxiliary fields, but try to regroup fermion worldlines differently. Unfortunately, this regrouping is not unique and needs some thought. One possible regrouping introduced earlier for $\mu=0$ case is based on introducing a new set of variables, the dimers $d_{x,\nu}$ for nearest neighbor interactions and monomers $n_x$ for the mass terms \cite{Chandrasekharan:2009wc}. This naturally emerges when we expand the Grassmann exponential of the mass and interaction terms:
\begin{align}
Z &= \int d\bar\chi d\chi e^{-\sum_{x,y} \overline{\chi}_x M_{x,y} \chi_y} \nonumber \\
&\times \prod_{x} \left ( 1 - m \bar\chi_{x} \chi_{x} \right )
\prod_{x,\nu} \left ( 1- U \bar\chi_{x} \chi_{x+\nu} \bar\chi_{x+\nu}\chi_x \right ).
\label{grpf}
\end{align}
We then interpret the expression
\begin{align}
\left ( 1 - m \bar\chi_{x} \chi_{x} \right ) = \sum_{n_x=0,1} \left ( -m \bar\chi_{x} \chi_{x} \right ) ^{n_x} .
\end{align}
on each site, as introducing a {\it monomer} field $[n]$ where $n_x$ takes values $0$ and $1$.
\begin{equation}
Z \ =\ \sum_{[d],[n]} m^{N_m} U^{N_d} \mathrm{Det}( W([f]) ).
\label{fbZ1}
\end{equation}
As an illustration we show a possible configuration of dimers and monomers on a $4\times 4$ block of lattice sites in Fig.~\ref{linksandmonomers}. The monomers are depicted as red circles spanning a single site and the dimers as blue links spanning two sites. The figure depicts a configuration with two free fermion bags that are isolated from each other by the dimers and monomers. Due to this the matrix $W([f])$ is block diagonal with block matrices $W_1[f_1]$ and $W_2[f_2]$ defined within the two independent free bags. The determinant of $W([f])$ is then the product of two determinants $\det(W([f])) = \prod_i \det(W_i([f_i]))$.
\begin{figure} \center
\includegraphics[width=0.45\linewidth]{figs/fig2.pdf}
\caption{ An illustration of a possible configuration of dimers and monomers in a $4\times 4$ block of the lattice. The red circles represent monomer sites and the blue links represent dimers.}
\label{linksandmonomers}
\end{figure}
When $\mu=0$, since the matrices $W([f])$ are always anti-symmetric, $\mathrm{Det}(W([f])) \geq 0$ and the sign problem is solved. However, in the case of $\mu \neq 0$ this property no longer holds and the determinants can be negative. This may seem surprising since in two space-time dimensions the fermion permutation sign is absent due to the fact that fermions cannot cross each other. In our model fermions have a flavor and they can change flavors while hopping. This is encoded in the staggered phase factors and this leads to a sign problem. Empirically we discovered that this remaining sign problem depends on the boundary conditions. While the sign problem is present with both periodic and anti-periodic boundary conditions, it is absent with open boundary conditions. This also means that on large space-time lattices with $L_X=L_T$ the sign problem essentially disappears, but for asymmetric lattices it can reemerge. In the most interesting case for our studies, where we fix the spatial lattice size $L_X$ and study very large values of $L_T$, the sign problem can become severe with periodic and anti-periodic boundary conditions.
\subsection{ World Line Representation }
In order to get a better understanding of the origin of the sign problem in our model, we look at the representation of the fermion determinant $\mathrm{Det}(W[f])$ inside free fermion bags as a sum over their world lines. This representation can be found by expanding the determinant back into the Grassmann integral form,
\begin{align}
&\det\left( W([f]) \right ) \label{wl_expanded}\\
&= \prod_{x\in[f]} \left ( \int d\bar\chi_x d\chi_x \right ) e^{-\sum_{x,y \in [f]} \overline{\chi}_x M_{x,y} \chi_y} \nonumber \\
& = \prod_{x\in[f]} \left ( \int d\bar\chi_x d\chi_x \right ) \prod_{x,x+\nu \in [f]} \nonumber \\
&\Big( 1 - \frac 12 \eta_{x,\nu} e^{\mu \delta_{\nu,0} } \overline{\chi}_x \chi_{x+\nu} + \frac 12 \eta_{x,\nu}^\dagger e^{-\mu \delta_{\nu,0} } \overline{\chi}_{x+\nu} \chi_{x} \Big). \nonumber
\end{align}
This product can be represented in terms of directed fermion link variables $l_{x,\pm \nu} = 0,\pm 1$, where $+1$ represents the term $\overline{\chi}_x\chi_{x\pm\nu}$ and $-1$ represents the term $\overline{\chi}_{x+\nu}\chi_x$. The determinant is replaced with a sum over all configurations of directed links.
\begin{figure}[b]
\includegraphics[width=0.45\linewidth]{figs/fig3a.pdf} \,\,\,\,
\includegraphics[width=0.45\linewidth]{figs/fig3b.pdf}
\caption{ Illustration of two fermion world line configurations with along with dimers and monomers.}
\label{wordlineswithlink}
\end{figure}
Configurations of links only have a nonzero weight when one $\bar \chi$ and one $\chi$ are chosen at each site. Thus each site must have one directed link pointing into it and one pointing out of it. The links will therefore form closed loops. In Fig.~\ref{wordlineswithlink} we show two valid configurations with the directed links represented as arrows pointing from $\overline{\chi}$ to $\chi$. The weight of a configuration of fermion world lines is given by the product of the weights in Eq.~(\ref{wl_expanded}) and a factor $-1$ for every closed loop arising from a reordering of $\chi_x$ and $\bar \chi_x$ to match the ordering of the measure.
\begin{align}
&\det\left( W([f]) \right ) \label{wlweight} \\
&= \sum_{[l]} (-1)^{N_{loops}}
\prod_{x,\alpha} \left ( e^{ - \mu l_{x,\alpha} \delta_{\alpha,0}} \frac {l_{x,\alpha} \eta_{x,\alpha}}2 \right )^{|l_{x,\alpha}|}, \nonumber
\end{align}
where $N_{loops}$ is the number of closed loops formed by the directed links.
It is easy to verify that there are valid configurations with a negative weight. For example, the configuration on the left in Fig.~\ref{wordlineswithlink} has a positive weight, but the configuration on the right has a negative weight.
Let us now prove that the sign problem disappears with open boundary conditions in the massless limit because configurations with a negative sign are absent at the worldline level. The weight of a configuration can be written as the product of the weights of the closed loops of fermion links
\begin{align}
&\det\left( W([f],\mu) \right ) \\
&= \sum_{[l]} \prod_{loop \in l} \left(-\prod_{x,\alpha \in loop} e^{- \mu l_{x,\alpha} \delta_{\alpha,0}} \frac {l_{x,\alpha} \eta_{x,\alpha}}2 \right ). \nonumber
\end{align}
It is therefore sufficient to show that all loops that can exist in a configuration have positive weight.
Consider first a loop that does not wrap around the volume. Note that by starting from the trivial loop that visits two neighboring sites, we can construct any non-wrapping loop using the two deformations depicted in Fig.~\ref{deformations}. The first deformation replaces a link with a staple; it does not introduce any new sites inside the loop and does not change the sign of the loop. The second deformation inverts a corner and introduces a single site inside the loop; this does change the sign of the loop. Since the trivial loop encloses no sites and has a positive sign, the sign of a non-wrapping loop is $(-1)^{N_{enc}}$, where $N_{enc}$ is the number of sites it encloses. Thus, a non-wrapping loop can be negative only if it encloses an odd number of sites. But in the massless limit a configuration with such a loop has zero weight, since monomers are not allowed, dimers always take away two sites and all free fermion loops touch an even number of sites. Hence any allowed non-wrapping loop has a positive sign.
\begin{figure} \center
\includegraphics[height=0.3\linewidth]{figs/fig4a.pdf} \,\,\,\,
\includegraphics[height=0.3\linewidth]{figs/fig4b.pdf}
\caption{ Two deformations that can be used to link any two loops with the same amount of spatial and temporal wrappings. The first replaces a link with a staple and never changes the sign of the loop. The second changes the order of two orthogonal links, changing the sign of the loop. This deformation also changes the number of enclosed sites by one.}
\label{deformations}
\end{figure}
\begin{figure}
\includegraphics[height=0.4\linewidth]{figs/fig5a.pdf} \,\,\,\,
\includegraphics[height=0.4\linewidth]{figs/fig5b.pdf}
\caption{The figure on the left shows a negative sign fermion loop that wraps along the temporal direction. Such a loop is generated when an odd number of sites cross the fermion world line as it is obtained through a series of deformations starting from a straight temporal loop. The figure on the right shows a loop with negative sign when the spatial boundary condition is (anti)symmetric. }
\label{negativeloops}
\end{figure}
Thus, any negative sign fermion loops in the massless theory must arise through loops that wrap around the temporal direction. Note that with open boundary conditions spatial winding is also forbidden. We can again construct any temporal wrapping loop by starting from a loop that goes straight in time without hops and deforming it using the two deformations discussed above. This time a negative sign in the loop can be introduced if an odd number of sites cross the fermion line during this deformation. An example of such a negative signed loop is shown in Fig.~\ref{negativeloops} on the left. Such a loop will be allowed if the left and right sides of the loop are connected through the boundary. However, with open boundary conditions such temporal loops will create regions on the left and right with an odd number of sites. This is forbidden in the massless limit for the same reasons as outlined above for non-wrapping loops.
With periodic and anti-periodic boundary conditions we can have other more complicated loops as shown in Fig.~\ref{negativeloops} on the right. Thus, the sign problem can be completely eliminated by open boundary conditions in the spatial direction. This feature of the world line formulation is well known and specific to two dimensional models \cite{Evertz:2000rk,Gattringer:2007em,Wolff:2007ip}. In higher dimensions the argument for the positivity of all fermion worldline configurations fails and significant cancellations between world line configurations will be necessary for alleviating the sign problem. The fermion bag approach can be helpful in this regard \cite{Li:2016xci}.
\begin{figure}
\includegraphics[height=0.25\linewidth]{figs/fig6a.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{{figs/fig6b}.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6c.pdf}\\
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6d.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6e.pdf}
\\\vspace{0.2cm}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6f.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6g.pdf}
\caption{An illustration of the worm update used in the world line formalism. The solid dots represent the head and the tail of the worm where the configuration has defects. At the end the defects disappear and a new allowed configuration is generated. }
\label{wlupdates}
\end{figure}
\section{Monte Carlo Updates}
\label{mcmethods}
Monte Carlo methods for updating both the worldline representation and the fermion bag representations are by now well developed \cite{PhysRevLett.87.160601,PhysRevE.66.046701,PhysRevE.74.036701,PhysRevLett.108.140404,PhysRevD.88.021701,PhysRevD.93.081701}. We use a worm algorithm to update the fermion lines and dimers, using updates like the one illustrated in Fig.~\ref{wlupdates}. To begin an update, we propose randomly changing some fermion link $l_{x,\nu}$. The proposal is accepted with a probability given by the absolute value of the weight in Eq.~(\ref{wlweight}). If the link is changed, two defects are generated in the lattice configuration; the defects are the head and tail of the worm. The head of the worm then propagates by updating the neighboring links. When the head returns to its tail, the worm closes, the defects disappear and the update is complete. The various steps of how the defect propagates are shown in Fig.~\ref{wlupdates}. The configuration of dimers may also be updated during the worm update. When this is done we have to use the weights of including or removing a dimer. Fig.~\ref{wldimerud} shows the steps for an update that changes the dimer number.
\begin{figure}
\includegraphics[height=0.25\linewidth]{figs/fig6a.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{{figs/fig6b}.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6c.pdf}\\
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig7d.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig7e.pdf}
\\\vspace{0.2cm}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig7f.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig6m.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig7g.pdf}
\caption{An illustration of the worm update that changes the dimer number. }
\label{wldimerud}
\end{figure}
In contrast to the worm algorithm, we sample fermion bag configurations using a local Monte Carlo update that involves adding or removing dimers or pairs of monomers. Each proposal is accepted with the Metropolis probability $\min(1,|P_{acc}|)$, where
\begin{align}
P_{acc} = \frac{m^{N'_m} U^{N'_d} \det\left( W([f'],\mu) \right ) }{m^{N_m} U^{N_d} \det\left( W([f],\mu) \right )},
\end{align}
where the new configuration is denoted with primed variables. The fact that one has to use ratios of fermion determinants, which are non-local, helps in reducing autocorrelation times. We can also update large regions of space-time by using a background field method used recently in \cite{PhysRevD.93.081701}. The sampling is made more efficient with a move that switches the places of a monomer and a dimer if the two are on neighboring sites. Since the weights of the two configurations are the same this update is very quick.
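A brute-force sketch of this acceptance step is shown below (ours; the production code computes determinant ratios far more efficiently and folds the sign of the determinant into the measurements). The dimer-covered sites are tracked in a set and a toy antisymmetric matrix stands in for the staggered operator.
\begin{verbatim}
import numpy as np

def det_free_bag(M_full, free_sites):
    # Determinant of M restricted to the free sites of a configuration.
    if len(free_sites) == 0:
        return 1.0
    return np.linalg.det(M_full[np.ix_(free_sites, free_sites)])

def metropolis_dimer(M_full, occupied, bond, U, rng):
    # Propose adding/removing the dimer on bond = (a, b);
    # accept with min(1, |ratio|).
    a, b = bond
    V = M_full.shape[0]
    new_occ = set(occupied)
    if a in occupied and b in occupied:
        new_occ -= {a, b}; ratio_U = 1.0 / U   # remove a dimer
    elif a not in occupied and b not in occupied:
        new_occ |= {a, b}; ratio_U = U         # add a dimer
    else:
        return occupied                        # bond partially covered: skip
    free_old = [s for s in range(V) if s not in occupied]
    free_new = [s for s in range(V) if s not in new_occ]
    ratio = ratio_U * det_free_bag(M_full, free_new) \
                    / det_free_bag(M_full, free_old)
    return new_occ if rng.random() < abs(ratio) else occupied

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 8))
M_toy = X - X.T                                # toy antisymmetric matrix
print(metropolis_dimer(M_toy, set(), (0, 1), U=0.3, rng=rng))
\end{verbatim}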
\section{Numerical results}
\label{results}
In this work we compute three observables in order to understand the physics of our model. The first is the chiral condensate susceptibility $\chi$, defined by the relation
\begin{align}
\chi = \frac{U}{V} \sum_{x,y} \ev{\bar\psi_x\psi_x \bar\psi_y\psi_y}\label{susceptibility}.
\end{align}
We can use it to understand the physics of bosonic excitations in our model. We also compute the chiral charge winding number susceptibility, defined by the relation
\begin{align}
\ev{Q_\chi^2} &= \frac U V \sum_{x \in S,y \in S'} \ev{J_{\alpha,x}^\chi J_{\alpha,y}^\chi},\\
J_{\alpha,x}^\chi &= \frac{\epsilon_x \eta_{x,\alpha}} 2 \left [ e^{\delta_{\alpha,0}\mu} \bar\psi_x \psi_{x+\alpha} - e^{-\delta_{\alpha,0}\mu}\bar\psi_{x+\alpha} \psi_x \right ]
\end{align}
where $S$ and $S'$ are surfaces orthogonal to the direction $\alpha$. In the thermodynamic limit, the winding number susceptibility helps us understand the status of chiral symmetry as we explain below. In the world line representation the chiral charge can be defined by the relation
\begin{align}
q_{x_\alpha}^\chi &= \epsilon_x ( l_{x,\alpha} + l_{x+\alpha,-\alpha} + 2d_{x,\alpha} ),
\end{align}
which means the susceptibility is simply
\begin{align}
\ev{Q_\chi^2} = \ev{\left ( \sum_{x\in S} q_{x_\alpha}^\chi \right )^2 }
\end{align}
since the chiral charge is conserved on each configuration. Finally we measure the average fermion number using the relation $\langle N_f \rangle = \langle \sum_{x\in S} J_{0,x} \rangle$ where the fermion number current is given by
\begin{align}
J_{\alpha,x} &= \frac{ \eta_{x,\alpha}} 2 \left [ e^{\delta_{\alpha,0}\mu} \bar\psi_x \psi_{x+\alpha} - e^{-\delta_{\alpha,0}\mu}\bar\psi_{x+\alpha} \psi_x \right ],
\end{align}
and $S$ is a surface perpendicular to $\hat t$. In the worldline representation again the fermion number is straight forward to calculate and is given by
\begin{align}
\ev{N_f} = \sum_{x\in S} \ev{ l_{x,\hat t} - l_{x+\hat t,-\hat t} }.
\end{align}
In our definition the fermion number is normalized to count both the Dirac and flavor degrees of freedom from a continuum limit perspective.
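For illustration, assume a simplified worldline storage in which \texttt{jt[x,t]} and \texttt{jx[x,t]} hold the net fermion current on the temporal and spatial bonds and \texttt{dx[x,t]} marks dimers; the two estimators then reduce to simple slice sums (a sketch, ours, with our own sign conventions):
\begin{verbatim}
import numpy as np

def fermion_number(jt, t0):
    # jt[x, t] in {-1, 0, +1}: net fermion current on the temporal bond
    # (x, t) -> (x, t+1).  N_f is the flux through the slice t = t0.
    return int(np.sum(jt[:, t0]))

def chiral_charge(jx, dx, x0, eps):
    # jx[x, t]: net current on the spatial bond (x, t) -> (x+1, t);
    # dx[x, t] in {0, 1}: dimer on that bond; eps[x, t] = (-1)^(x+t).
    # Q_chi winds the staggered charge through the slice x = x0.
    return int(np.sum(eps[x0, :] * (jx[x0, :] + 2 * dx[x0, :])))

# Toy configuration on a 4 x 4 lattice: one fermion line winding in time.
LX, LT = 4, 4
jt = np.zeros((LX, LT), int); jt[1, :] = 1   # straight temporal loop at x=1
jx = np.zeros((LX, LT), int); dx = np.zeros((LX, LT), int)
eps = np.fromfunction(lambda x, t: (-1.0)**(x + t), (LX, LT))
print(fermion_number(jt, t0=0))              # 1
print(chiral_charge(jx, dx, x0=0, eps=eps))  # 0
\end{verbatim}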
\begin{table}
\center
\begin{tabular}{|c|c|c|c|}
\hline
$U$ & $2-\eta$ & $\ev{Q_\chi^2}$ & $m_f$ \\
\hline
0 & 0 & 0.25 & 0 \\
0.1 & 0.90(1) & 0.499(7) & 0.0098(4) \\
0.2 & 1.201(4) & 0.61(1) & 0.081(1) \\
0.3 & 1.303(4) & 0.780(8) & 0.183(1) \\
0.4 & 1.371(7) & 0.895(4) & 0.290(1) \\
0.5 & 1.393(3) & 0.972(3) & 0.395(3) \\
0.6 & 1.423(4) & 1.024(3) & 0.491(1) \\
1.0 & 1.467(4) & 1.128(2) & 0.793(1) \\
$\infty$ & 1.5 & 1.208(8) & $\infty$ \\
\hline
\end{tabular}
\caption{ The exponent $2-\eta$, the chiral charge susceptibility $\ev{Q_\chi^2}$ and the fermion mass $m_f$ measured on square lattices. }
\label{table_large_volume}
\end{table}
\begin{figure}[b]
\includegraphics[width=0.8\linewidth]{figs/fig8.pdf}
\caption{ The fermion number density $\ev{n}$ as a function of the chemical potential on a square lattice with open boundary conditions. The dashed line shows a fit to the linear region at $L=64$.}
\label{open_square}
\end{figure}
\begin{table}[htb]
\begin{tabular}{|c|c||c|c||c|c|}
\hline
$ \mu $ & $\langle n \rangle $ & $ \mu $ & $\langle n \rangle $ & $ \mu $ & $\langle n \rangle $ \\
\hline
\multicolumn{6}{|c|}{L=10}\\
\hline
0.16 & $0.0276(2)$ & 0.32 & $0.0948(4)$ & 0.52 & $0.2397(5)$ \\
0.20 & $0.0396(2)$ & 0.36 & $0.1204(4)$ & 0.54 & $0.2582(5)$ \\
0.24 & $0.0545(3)$ & 0.40 & $0.1472(5)$ & 0.56 & $0.2740(5)$ \\
0.28 & $0.0729(3)$ & 0.48 & $0.2074(5)$ & 0.58 & $0.2926(5)$ \\
\hline
\multicolumn{6}{|c|}{L=40}\\
\hline
0.15 & $0.0016(1)$ & 0.30 & $0.0742(4)$ & 0.45 & $0.1753(4)$ \\
0.20 & $0.0103(2)$ & 0.35 & $0.1082(4)$ & 0.50 & $0.2100(5)$ \\
0.25 & $0.0387(4)$ & 0.40 & $0.1425(4)$ & 0.55 & $0.2456(4)$ \\
\hline
\multicolumn{6}{|c|}{L=64}\\
\hline
0.16 & $0.0004(1)$ & 0.32 & $0.089(1)$ & 0.52 & $0.221(1)$ \\
0.20 & $0.0062(4)$ & 0.36 & $0.116(1)$ & 0.54 & $0.236(1)$ \\
0.24 & $0.0309(6)$ & 0.40 & $0.142(1)$ & 0.56 & $0.250(1)$ \\
0.28 & $0.0618(6)$ & 0.48 & $0.194(1)$ & 0.58 & $0.264(1)$ \\
\hline
\end{tabular}
\caption{\label{table_os} Selected values of $\langle n \rangle$ plotted in Fig~\ref{open_square}.}
\end{table}
\begin{figure} \center
\includegraphics[width=0.7\linewidth]{figs/fig9.pdf}
\caption{Plot of the fermion mass as a function of $U$ for small values. We observe qualitatively the exponential scaling expected. The solid line is the one loop $\beta$ function.}
\label{mf_fit_beta}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\linewidth]{figs/fig10a.pdf}
\includegraphics[width=0.45\linewidth]{figs/fig10b.pdf}
\caption{ The average sign of $\det(W)$ at $U=0.3$ as a function of the chemical potential with $L_T=48$ and $L_X=6$ in the auxiliary field representation(left) and the fermion bag representation(right). }
\label{sign-boundaries}
\end{figure}
Using these observables we first focus on the physics of our model at $\mu=0$ in order to bring out the similarities to QCD. As we mentioned earlier, unlike in QCD the $U(1)$ chiral symmetry of the model cannot break in two dimensions. However, the lightest boson in the model is critical (i.e., it is massless but is not a Goldstone boson). Hence when $L_X=L_T=L$ we expect the chiral condensate susceptibility to scale as $\chi \sim L^{2-\eta}$ for large values of $L$. The exponent $\eta$ depends on $U$ like in the usual critical phase of the two dimensional $XY$ model. At infinite $U$ the Thirring model becomes a close-packed dimer model and we expect $\eta=0.5$ \cite{Cecile:2008nb}. When $U=0$ the susceptibility diverges logarithmically with $L$ and hence $\eta=2$. Our results reproduce this and show how the exponent changes continuously between these two limits. In table \ref{table_large_volume} we give the values of $2-\eta$ obtained at various values of $U$.
\begin{figure*}[t]
\includegraphics[height=0.25\linewidth]{figs/fig11a.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig11b.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig11c.pdf}
\caption{ The fermion number $\ev{N_f}$ at $U=0.3$ as a function of the chemical potential with open, antiperiodic and periodic boundary conditions respectively for $ L_{X}=6, L_{T}=48 $. The solid line shows the value at $U=0$.}
\label{nf-boundaries}
\end{figure*}
In a chirally symmetric theory with massive excitations the chiral charge winding number susceptibility $\langle Q_\chi^2\rangle$ is expected to vanish because the chiral charge cannot wind across the spatial boundaries. However, when the phase is critical like in our model it is expected to go to a constant in the thermodynamic limit. Our results are consistent with this expectation. The values we measured for $\langle Q_\chi^2\rangle$ at $L=256$ are given in table \ref{table_large_volume}. These values are found using open boundary conditions. Further, we find that $\langle Q_\chi^2\rangle=0.25$ at $U=0$ and that it grows monotonically to roughly $1.2$ at $U=\infty$. All this is consistent with the fact that the bosonic sector of our theory is critical.
In contrast to the bosons, fermions are massive for all values of $U > 0$. We compute the fermion mass $m_f$ as a function of $U$ using large square lattices $(L_X=L_T=L)$ as follows. In the thermodynamic limit we expect the average fermion density $\langle n\rangle = \langle N_f\rangle/L_X$ to be zero when $\mu \leq m_f$ and rise linearly according to the relation
\begin{align}
\ev{n} = c\left( \mu-m_f \right)
\end{align}
for $\mu \geq m_f$. This behavior should also be an excellent approximation on sufficiently large lattices. To demonstrate this we show our results for $\langle n\rangle$ at $U=0.3$ with open boundary conditions in Fig.~\ref{open_square}. Selected values of this data are also tabulated in table \ref{table_os} as a benchmark for future calculations. As we can see, for $L=10$ the curve does not show the expected non-analyticity, but for $L=40$ and $L=64$ the curves show it clearly. We fit our data to this linear form, shown as the dashed line in the figure. In table \ref{table_large_volume} we report the value of $m_f$ found using this method for several values of $U$. We used lattices of size $L=64$, except for $U=0.1$, where the lattice size used was $L=128$.
The dynamical generation of fermion mass is an interesting feature of our model. While it resembles the phenomenon of chiral symmetry breaking in QCD, actual dynamical breaking of continuous symmetries is forbidden in two-dimensional models. Nevertheless a fermion mass can be generated and a massless boson with critical correlations can arise \cite{Witten:1978qu}. Finally we note that four-fermion couplings are expected to be marginal in two dimensions, and in our case the coupling happens to be marginally relevant (i.e., asymptotically free). Thus, at small $U$ the fermion mass $m_f$ is expected to vanish according to the relation
\begin{align}
m_f \approx C \ \exp\Big(\frac{-2\pi}{b_0U}\Big),
\end{align}
where $b_0=16$ is the one-loop coefficient of the $\beta$ function. Figure \ref{mf_fit_beta} shows the fermion mass values and compares them against the expected behavior. For purposes of illustration we use $C=0.49$. With these small masses, lattice volumes up to $V=1024\times1024$ were necessary. It is well known that such asymptotic scaling fits do not work very well unless very large lattices are used \cite{PhysRevLett.80.1742}. Here we use the fit only to illustrate that the fermion mass does become qualitatively exponentially small as $U$ becomes small.
\begin{figure*}[t]
\includegraphics[height=0.25\linewidth]{figs/fig12a.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig12b.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig12c.pdf}
\caption{ The fermion number $\langle N \rangle$ with open boundary conditions at $U=0$ as a function of the chemical potential. From left to right, $L_X = 12$, $16$ and $32$. }
\label{open_anisotropic_free}
\end{figure*}
\begin{figure*}[t]
\includegraphics[height=0.25\linewidth]{figs/fig13a.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig13b.pdf}
\includegraphics[height=0.25\linewidth]{figs/fig13c.pdf}
\caption{ The fermion number $\langle N \rangle$ with open boundary conditions at $U=0.3$ as a function of the chemical potential. The solid line shows the behavior at $U=0$ and $L_T=256$. From left to right, $L_X = 12$, $16$ and $32$. }
\label{open_anisotropic}
\end{figure*}
Next we turn to the physics of finite chemical potential. We first consider a small spatial lattice of $L_X=6$ and $L_T=48$ and study the sign problem in the traditional auxiliary field approach with periodic boundary conditions, comparing it with the sign problem in the fermion bag approach with both periodic and anti-periodic boundary conditions. In Fig.~\ref{sign-boundaries} we plot the average sign as a function of the chemical potential in the auxiliary field approach at $U=0.3$ (left) and compare it with that of the fermion bag approach (right). We first wish to learn where the sign problem becomes severe. In the auxiliary field approach this happens around $\mu\approx 0.4$, while in the fermion bag approach with anti-periodic boundary conditions it happens around $\mu \approx 0.55$. In the fermion bag approach with periodic boundary conditions the sign problem is never severe, although it is enhanced both at $\mu \approx 0.4$ and then again at $\mu \approx 0.9$. Can we correlate this behavior with some physics?
\begin{table}
\begin{tabular}{| c | c || c | c || c | c || c | c |}
\hline
$\mu$ & $\langle N_f \rangle$ & $\mu$ & $\langle N_f \rangle$ &
$\mu$ & $\langle N_f \rangle$ & $\mu$ & $\langle N_f \rangle$ \\
\hline
\multicolumn{8}{|c|}{Periodic} \\
\hline
0.30 & 0(0) & 0.36 & 0.13(1) &
0.38 & 0.67(2) & 0.40 & 1.52(2) \\
0.42 & 1.91(1) & 0.50 & 2.0(0) &
0.90 & 2.36(3) & 0.92 & 2.87(2) \\
0.94 & 3.68(3) & 0.95 & 4.03(3) &
0.96 & 4.43(3) & 0.97 & 4.86(2) \\
0.98 & 5.21(2) & 0.99 & 5.47(2) &
1.00 & 5.65(2) & 1.10 & 5.98(2) \\
\hline
\multicolumn{8}{|c|}{Anti-periodic} \\
\hline
0.50 & 0.03(3) & 0.54 & 0.19(3) &
0.56 & 0.56(6) & 0.58 & 1.17(5) \\
0.60 & 1.61(9) & 0.61 & 1.71(5) &
0.62 & 1.86(5) & 0.63 & 1.90(4) \\
0.64 & 1.86(4) & 0.65 & 1.97(2) &
0.66 & 1.97(2) & 0.67 & 2.07(2) \\
0.68 & 2.04(1) & 0.69 & 2.11(1) &
0.70 & 2.19(2) & 0.72 & 2.50(3) \\
0.80 & 3.95(2) & 0.90 & 4.00(0) &
1.00 & 4.27(5) & 1.10 & 5.94(2) \\
\hline
\multicolumn{8}{|c|}{Open} \\
\hline
0.20 & 0.000(0) & 0.34 & 0.044(3) &
0.36 & 0.122(5) & 0.38 & 0.256(7) \\
0.42 & 0.755(8) & 0.44 & 0.974(8) &
0.46 & 1.218(7) & 0.48 & 1.468(8) \\
0.50 & 1.715(8) & 0.52 & 1.869(5) &
0.54 & 1.941(3) & 0.56 & 1.977(2) \\
0.58 & 1.992(1) & 0.60 & 1.996(1) &
0.62 & 2.000(0) & 0.66 & 2.008(1) \\
0.68 & 2.022(2) & 0.70 & 2.056(3) &
0.72 & 2.125(5) & 0.74 & 2.290(8) \\
0.80 & 3.298(6) & 0.82 & 3.600(9) &
0.84 & 3.825(6) & 0.86 & 3.934(4) \\
0.88 & 3.992(2) & 0.90 & 4.041(3) &
0.92 & 4.117(6) & 1.02 & 5.833(7) \\
1.04 & 5.937(4) & 1.06 & 5.975(2) &
1.08 & 5.991(1) & 1.14 & 6.000(0) \\
\hline
\end{tabular}
\caption{ The average fermion number $\langle N_f \rangle$ computed at $U=0.3$ with periodic, anti-periodic and open boundary conditions with $L_X=6$ and $L_T=48$.}
\label{N6-48-table}
\end{table}
\begin{table}[hbt]
\begin{tabular}{|l|c||l|c||l|c||l|c|}
\hline
$\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ \\
\hline
\multicolumn{8}{|c|}{$L_T=32$}\\
\hline
0.26 & $0.28(1)$ & 0.33 & $1.06(1)$ & 0.42 & $1.917(6)$ & 0.54 & $2.90(1)$ \\
0.28 & $0.46(1)$ & 0.34 & $1.16(1)$ & 0.44 & $2.014(5)$ & 0.56 & $3.14(1)$ \\
0.29 & $0.56(1)$ & 0.36 & $1.419(10)$ & 0.46 & $2.116(5)$ & 0.58 & $3.401(9)$ \\
0.30 & $0.67(1)$ & 0.37 & $1.528(9)$ & 0.48 & $2.248(7)$ & 0.60 & $3.604(8)$ \\
0.31 & $0.78(1)$ & 0.38 & $1.632(8)$ & 0.50 & $2.428(8)$ & 0.62 & $3.805(7)$ \\
0.32 & $0.90(1)$ & 0.39 & $1.683(9)$ & 0.52 & $2.631(10)$ & 0.64 & $3.955(6)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=64$}\\
\hline
0.26 & $0.059(5)$ & 0.33 & $1.07(1)$ & 0.42 & $1.988(2)$ & 0.54 & $2.82(1)$ \\
0.28 & $0.20(1)$ & 0.34 & $1.23(1)$ & 0.44 & $2.000(1)$ & 0.56 & $3.21(1)$ \\
0.29 & $0.31(1)$ & 0.36 & $1.65(1)$ & 0.46 & $2.014(2)$ & 0.58 & $3.580(9)$ \\
0.30 & $0.51(2)$ & 0.37 & $1.772(9)$ & 0.48 & $2.048(4)$ & 0.60 & $3.844(7)$ \\
0.31 & $0.64(2)$ & 0.38 & $1.863(7)$ & 0.50 & $2.178(8)$ & 0.62 & $3.956(3)$ \\
0.32 & $0.86(1)$ & 0.39 & $1.918(6)$ & 0.52 & $2.45(1)$ & 0.64 & $3.994(2)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=128$}\\
\hline
0.26 & $0.002(1)$ & 0.33 & $1.059(5)$ & 0.44 & $1.999(1)$ & 0.54 & $2.81(1)$ \\
0.28 & $0.04(1)$ & 0.34 & $1.302(5)$ & 0.46 & $1.999(1)$ & 0.56 & $3.27(1)$ \\
0.29 & $0.06(1)$ & 0.36 & $1.87(1)$ & 0.48 & $2.001(1)$ & 0.58 & $3.84(1)$ \\
0.30 & $0.25(2)$ & 0.37 & $1.957(6)$ & 0.50 & $2.019(3)$ & 0.60 & $3.98(1)$ \\
0.32 & $0.84(1)$ & 0.39 & $1.992(4)$ & 0.52 & $2.24(1)$ & 0.64 & $4.0$ \\
\hline
\multicolumn{8}{|c|}{$L_T=256$}\\
\hline
0.26 & $0.00$ & 0.31 & $0.42(5)$ & 0.36 & $2.00$ & 0.54 & $2.91(1)$ \\
0.28 & $0.00$ & 0.32 & $0.86(2)$ & 0.40 & $2.00$ & 0.56 & $3.20(2)$ \\
0.29 & $0.00$ & 0.33 & $1.02(1)$ & 0.50 & $2.00$ & 0.58 & $3.97(1)$ \\
0.30 & $0.06(2)$ & 0.34 & $1.31(7)$ & 0.52 & $2.03(1)$ & 0.64 & $4.00$ \\
\hline
\end{tabular}
\caption{Monte Carlo results for $\langle N_f\rangle$ at $U=0.3$ with open boundaries at selected values of $\mu$ and $L_T$ for $L_X=12$. This data is plotted in Fig.~\ref{open_anisotropic}.}
\label{LX12}
\end{table}
\begin{table}[htb]
\begin{tabular}{|l|c||l|c||l|c||l|c|}
\hline
$\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ \\
\hline
\multicolumn{8}{|c|}{$L_T=32$}\\
\hline
0.21 & $0.14(1)$ & 0.28 & $0.80(1)$ & 0.34 & $1.61(1)$ & 0.44 & $2.65(1)$ \\
0.22 & $0.21(1)$ & 0.29 & $0.94(1)$ & 0.36 & $1.83(1)$ & 0.46 & $2.89(1)$ \\
0.23 & $0.28(1)$ & 0.30 & $1.09(1)$ & 0.37 & $1.90(1)$ & 0.48 & $3.17(1)$ \\
0.24 & $0.36(1)$ & 0.31 & $1.22(1)$ & 0.38 & $1.99(1)$ & 0.50 & $3.43(1)$ \\
0.25 & $0.44(1)$ & 0.32 & $1.37(1)$ & 0.39 & $2.09(1)$ & 0.52 & $3.68(1)$ \\
0.26 & $0.55(1)$ & 0.33 & $1.49(1)$ & 0.42 & $2.39(1)$ & 0.54 & $3.91(1)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=64$}\\
\hline
0.22 & $0.04(1)$ & 0.30 & $1.13(2)$ & 0.38 & $2.002(3)$ & 0.48 & $3.26(1)$ \\
0.24 & $0.09(1)$ & 0.32 & $1.55(1)$ & 0.42 & $2.17(1)$ & 0.50 & $3.62(1)$ \\
0.26 & $0.27(1)$ & 0.34 & $1.83(1)$ & 0.44 & $2.46(1)$ & 0.52 & $3.86(1)$ \\
0.28 & $0.64(2)$ & 0.36 & $1.950(5)$ & 0.46 & $2.85(1)$ & 0.54 & $3.97(1)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=128$}\\
\hline
0.23 & $0.00$ & 0.29 & $0.84(2)$ & 0.34 & $1.98(1)$ & 0.46 & $2.85(1)$ \\
0.26 & $0.07(1)$ & 0.30 & $1.15(3)$ & 0.35 & $1.99(1)$ & 0.48 & $3.31(2)$ \\
0.27 & $0.20(3)$ & 0.31 & $1.52(2)$ & 0.40 & $2.00$ & 0.50 & $3.88(1)$ \\
0.28 & $0.46(3)$ & 0.32 & $1.83(1)$ & 0.44 & $2.24(1)$ & 0.54 & $4.00$ \\
\hline
\multicolumn{8}{|c|}{$L_T=256$}\\
\hline
0.25 & $0.0$ & 0.30 & $1.19(4)$ & 0.42 & $2.00(1)$ & 0.48 & $3.32(2)$ \\
0.28 & $0.18(5)$ & 0.31 & $1.72(4)$ & 0.44 & $2.02(1)$ & 0.50 & $3.98(1)$ \\
0.29 & $0.89(3)$ & 0.34 & $1.99(1)$ & 0.46 & $2.87(2)$ & 0.52 & $3.99(1)$ \\
\hline
\end{tabular}
\caption{Monte Carlo results for $\langle N_f\rangle$ at $U=0.3$ with open boundaries at selected values of $\mu$ and $L_T$ for $L_X=16$. This data is plotted in Fig.~\ref{open_anisotropic}.}
\label{LX16}
\end{table}
\begin{table}[!t]
\begin{tabular}{|l|c||l|c||l|c||l|c|}
\hline
$\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ & $\mu$ & $\langle N_f \rangle $ \\
\hline
\multicolumn{8}{|c|}{$L_T=32$}\\
\hline
0.22 & $0.67(2)$ & 0.28 & $1.87(2)$ & 0.36 & $3.64(2)$ & 0.44 & $5.40(2)$ \\
0.24 & $1.03(2)$ & 0.3 & $2.32(2)$ & 0.38 & $4.11(2)$ & 0.46 & $5.83(1)$ \\
0.26 & $1.43(2)$ & 0.34 & $3.22(2)$ & 0.4 & $4.52(1)$ & 0.48 & $6.29(1)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=64$}\\
\hline
0.21 & $0.21(2)$ & 0.26 & $1.44(2)$ & 0.33 & $3.05(2)$ & 0.4 & $4.47(1)$ \\
0.22 & $0.35(2)$ & 0.28 & $1.90(2)$ & 0.35 & $3.54(2)$ & 0.42 & $4.93(1)$ \\
0.23 & $0.58(3)$ & 0.29 & $2.10(2)$ & 0.36 & $3.75(1)$ & 0.44 & $5.43(1)$ \\
0.24 & $0.82(2)$ & 0.31 & $2.50(2)$ & 0.37 & $3.92(1)$ & 0.46 & $5.85(1)$ \\
0.25 & $1.13(3)$ & 0.32 & $2.76(2)$ & 0.39 & $4.25(1)$ & 0.48 & $6.21(1)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=128$}\\
\hline
0.21 & $0.019(6)$ & 0.26 & $1.71(2)$ & 0.33 & $3.04(3)$ & 0.4 & $4.25(2)$ \\
0.22 & $0.08(2)$ & 0.28 & $1.98(1)$ & 0.35 & $3.77(2)$ & 0.42 & $4.92(2)$ \\
0.23 & $0.25(4)$ & 0.29 & $2.03(2)$ & 0.36 & $3.92(2)$ & 0.44 & $5.64(2)$ \\
0.24 & $0.68(5)$ & 0.31 & $2.31(2)$ & 0.37 & $3.97(1)$ & 0.46 & $5.96(1)$ \\
0.25 & $1.27(5)$ & 0.32 & $2.67(3)$ & 0.39 & $4.06(1)$ & 0.48 & $6.057(9)$ \\
\hline
\multicolumn{8}{|c|}{$L_T=256$}\\
\hline
0.25 & $1.41(9)$ & 0.28 & $1.96(4)$ & 0.33 & $3.06(4)$ & 0.42 & $4.90(3)$ \\
0.26 & $1.91(5)$ & 0.31 & $2.06(2)$ & 0.36 & $3.97(3)$ & 0.44 & $5.87(3)$ \\
0.27 & $1.95(4)$ & 0.32 & $2.61(4)$ & 0.4 & $4.01(2)$ & 0.48 & $5.99(1)$ \\
\hline
\end{tabular}
\caption{Monte Carlo results for $\langle N_f\rangle$ at $U=0.3$ with open boundaries at selected values of $\mu$ and $L_T$ for $L_X=32$. This data is plotted in Fig.~\ref{open_anisotropic}.
\label{LX32}}
\end{table}
Let us now explore how the fermion chemical potential ``dopes'' the system with fermions. We again focus first on a small lattice, $L_X=6$, $L_T=48$ at $U=0.3$. In table \ref{N6-48-table} we present all of our results for the total fermion number as a function of the chemical potential for periodic, anti-periodic and open boundary conditions. In Fig.~\ref{nf-boundaries} we plot these results along with the results for free fermions as solid lines. Due to the flavor degeneracy of staggered fermions we expect all states to be at least doubly degenerate. With open boundary conditions this means all jumps must be in steps of two, which is what is observed. With periodic and anti-periodic boundary conditions there is a symmetry between left and right moving particles. With periodic boundary conditions a zero momentum state is allowed which is non-degenerate, hence the first jump in $\langle N \rangle$ near $\mu \approx 0.4$ is only by two. However, the second jump near $\mu \approx 0.9$ is by four, since now non-zero momentum states are excited and each state is doubly degenerate due to the two fermion flavors. With anti-periodic boundary conditions the lowest energy state already has momentum and hence again should have four-fold degeneracy. This is clearly seen as a jump of four in the free theory around $\mu \approx 0.5$. Surprisingly, in the interacting theory this degeneracy of the lowest energy state seems to be broken. We attribute this to the fact that bound state bosons with zero momentum can emerge. The next momentum state is non-degenerate for $L_X=6$, since effectively the lattice size is halved for staggered fermions. This remains unchanged in the interacting theory as well, and two additional states are added when $\mu > 1$.
Note that the first step to $\langle N_f\rangle =2$ for both open and periodic boundary conditions occurs around $\mu\approx 0.4$. This coincides with the point where the sign problem becomes severe in the auxiliary field approach, and is somewhat enhanced in the fermion bag approach. The sign problem in the fermion bag approach disappears for large values of $\mu$ until around $\mu \approx 0.9$, where there is the second jump of four in the periodic case. The sign problem in the auxiliary field approach, on the other hand, never recovers. In the case of anti-periodic boundary conditions the severity of the sign problem coincides with the additional plateau at $\langle N_f\rangle = 2$, which is absent in the free theory as discussed above. While these correlations between sign problems and the underlying physics are not surprising, the fact that energies and degeneracies of the lowest lying states can be influenced by boundary conditions and interactions on small lattices offers an excellent test for methods that claim to solve the sign problem: they should be able to reproduce these features.
Since the sign problem is absent with open boundary conditions, we can use them to study the behavior of $\langle N_f\rangle$ on large asymmetric lattices ($L_X \neq L_T$) so as to understand the physics of fermion doping at a fixed $L_X$. One of the main features that our model shares with QCD is that fermions become massive entirely due to interaction effects, and the value of the chemical potential where the first jump in $\langle N_f\rangle$ occurs defines a finite size fermion mass $m_f^{L_X}$. In order to see the effects of interactions we plot $\langle N_f\rangle$ as a function of $\mu$ in the free theory (Fig.~\ref{open_anisotropic_free}) and in the interacting theory with $U=0.3$ (Fig.~\ref{open_anisotropic}), both with open boundary conditions. Selected data points have also been tabulated in tables~\ref{LX12}, \ref{LX16} and \ref{LX32} for benchmark purposes.
We study three different lattice sizes, $L_X=12$ (left), $L_X=16$ (center) and $L_X=32$ (right). For each of these lattices we study the effects of increasing $L_T$. Note that the critical value of $\mu$ where the first jump to $\langle N_f\rangle =2$ occurs shifts to lower values as $L_X$ increases in the free theory. We expect this value to vanish in the large $L_X$ limit, since free fermions are massless. However, in the interacting theory the change in the critical value is smaller, and it should approach $0.183(1)$ (see table \ref{table_large_volume}) as $L_X$ becomes large. Also, the jump becomes sharper as the anisotropy (value of $L_T$) is increased and approaches a step function, as expected. To quantify the value of $m_f^{L_X}$ we measure $\langle N_f\rangle$ for several values of $\mu$ near the transition at two different values of $L_T$. In particular, with $L_X=12$ we use $L_T=64,128$ and with $L_X=32$ we use $L_T=128,256$. We find the value of $\mu$ where the $\langle N_f\rangle$ curves measured with the two $L_T$'s cross, using a linear fit near the crossing. These values of $\mu$ are taken to be estimates of $m_f^{12}$ and $m_f^{32}$. These numbers for different values of $U$ are tabulated in table \ref{table_mcrit_fit}. Similarly, by fitting the chiral condensate susceptibility to the form
\begin{equation}
\chi = \chi_0 + B \mathrm{e}^{-m_b^{L_X} L_T}
\end{equation}
we can also extract the finite size boson mass $m_b^{L_X}$. These values are also given in table \ref{table_mcrit_fit} for $L_X=12$ and $32$. We find that while $m_f^{L_X}$ increases sharply with $U$, $m_b^{L_X}$ decreases mildly.
\begin{table}[tbh]
\center
\begin{tabular}{|c|c|c|c|c|}
\hline
$U$ & $m_b^{12}$ & $m_f^{12}$ & $m_b^{32}$ & $m_f^{32}$ \\
\hline
0 & 0.17207 & 0.120(1) & 0.067393 & 0.045(5) \\
0.1 & 0.158(6) & 0.163(2) & 0.061(4) & 0.0705(6) \\
0.2 & 0.184(6) & 0.235(10) & 0.06(1) & 0.1397(3) \\
0.3 & 0.156(4) & 0.328(2) & 0.066(7) & 0.247(2) \\
0.4 & 0.143(4) & 0.425(2) & 0.060(3) & 0.356(1) \\
0.5 & 0.143(7) & 0.519(2) & 0.057(2) & 0.465(1) \\
0.6 & 0.137(1) & 0.601(2) & 0.049(5) & 0.556(1) \\
1.0 & 0.121(4) & 0.871(1) & 0.050(4) & 0.842(2) \\
$\infty$ & 0.114(4) & $\infty$ & 0.0476(9)& $\infty$ \\
\hline
\end{tabular}
\caption{The fermion and boson masses measured with open boundary conditions; the fermion masses are obtained from the fermion number and the boson masses from the susceptibility fits. The boson mass at $U=0$ is calculated directly from the free correlator on a finite lattice. }
\label{table_mcrit_fit}
\end{table}
\section{Conclusions} \label{conclusions}
In this work we have studied the $1+1$ dimensional lattice Thirring model with staggered fermions at both zero and finite densities. We showed that the model is free of sign problems in the massless limit when open boundary conditions are used, in which case we used the worldline formulation to study the model. In the case of periodic and anti-periodic spatial boundary conditions the sign problem is mild on square lattices but becomes severe on asymmetric lattices. However, the fermion bag formulation seems to alleviate the problem, except at critical values of the chemical potential where the fermion number jumps. We provide accurate estimates for the total particle number as a function of the chemical potential for a few lattice sizes. Our results could be used as a benchmark for future studies by other methods that attempt to solve the sign problem.
\section*{Acknowledgments}
We thank A.~Alexandru and P.~Bedaque for extensive discussions about their work and for providing their results so we could compare against ours on small lattices where such a comparison was possible. SC and JR's work was supported by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award Number DE-FG02-05ER41368. VA's work was supported by the U.S. Department of Energy under grant number DE-SC0010005.
\section{Introduction}
\label{sec:intro}
Galaxies are an important tool for studying the distribution of matter in the Universe and testing cosmological models. The correlation between the shapes of galaxies can be used to measure the weak gravitational lensing field, which is a direct measure of the total mass along lines of sight \citep[see, e.g.,][]{1999ARA&A..37..127M,2003ARA&A..41..645R}.
Because galaxies are a biased tracer of the mass, their clustering also carries within it cosmological information \citep[e.g.,][]{2020arXiv200308277N}.
In addition, the tangential distortion of background galaxies around the position of foreground galaxies---usually referred to as galaxy--galaxy lensing---can be used to study the correlation between the foreground galaxies and the matter around them \citep[e.g.,][]{2004AJ....127.2544S,2006MNRAS.368..715M,2012PhRvD..86h3504Y}.
The combination of all three measurements, sometimes referred to as ``3$\times$2pt'' for the use of three 2-point functions, can break the degeneracy between the galaxy bias and the clustering amplitude of matter \citep[see, e.g.,][and references therein]{2017MNRAS.470.2100K}. These 3$\times$2pt{} analyses are the focus of ongoing surveys such as the Kilo Degree Survey (KiDS; \citealt{2017MNRAS.465.1454H}), the Dark Energy Survey (DES; \citealt{2005astro.ph.10346T,2016MNRAS.460.1270D}), and the Hyper Suprime-Camera Survey (HSC; \citealt{2018PASJ...70S...4A}). Many other analyses have also applied a similar combined-probes approach to a variety of data sets \citep[see, e.g.,][]{2013MNRAS.432.1544M,2015ApJ...806....2M,2018MNRAS.476.4662V,2018MNRAS.474.4894J,2018PhRvD..98d3526A}. These types of combined analyses can also help to mitigate systematics that impact only one of the three 2-point measurements.
Wide field stage \Romannumeral{2} dark energy experiments (such as the Sloan Digital Sky Survey [SDSS; \citealt{2000AJ....120.1579Y}], the WiggleZ Dark Energy Survey \citep{2010MNRAS.401.1429D} and the Canada-France-Hawaii Telescope Legacy Survey [CFHTLS; \citealt{2012SPIE.8448E..0MC}]) and stage \Romannumeral{3} dark energy experiments (e.g., KiDS, DES, HSC, and eBOSS) have provided imaging and spectra for hundreds of millions of galaxies, and stage \Romannumeral{4} experiments such as the Dark Energy Spectroscopic Instrument (DESI; \citealt{2013arXiv1308.0847L}), the Rubin Observatory's Legacy Survey of Space and Time (LSST; \citealt{2019ApJ...873..111I}), the \textit{Nancy Grace Roman Space Telescope} \citep{2015arXiv150303757S}, and \textit{Euclid} \citep{2011arXiv1110.3193L} are expected to increase that number substantially. As the number of observed galaxies increases, the statistical uncertainty on measurements made with them decreases.
Consequently, our understanding and treatment of the systematic effects that impact galaxy clustering measurements must be improved if the uncertainties on the inferred cosmological parameters from such galaxy surveys are to remain statistics-dominated.
There are a large number of potential contaminants that can produce these types of coherent fluctuations in the observed galaxy density, e.g. star-galaxy separation, stellar occultation, extinction, and variations in observing conditions like airmass or sky brightness. Differentiating between the true cosmologically-sourced fluctuations and those caused by such survey properties has been the subject of many studies over the years \citep[see, e.g.,][and references therein]{2016MNRAS.457..786S,2018PhRvD..98d2006E,2020JCAP...03..044N,2020MNRAS.495.1613R,2020arXiv200714499W}.
\citet{2020MNRAS.495.1613R} identify three broad categories of mitigation techniques:
\begin{enumerate*}[(a)]
\item Monte Carlo simulation of fake objects;
\item mode projection; and
\item regression.
\end{enumerate*}
The first of these methods, involving injecting artificial sources into real images, is extremely promising. It results in forward-modeling the survey selection mask imposed by real imaging properties. Examples of this method include \citet{2013A&C.....1...23B} and \citet{2016MNRAS.457..786S}. However, this technique is computationally expensive, and therefore less utilized than the other methods.
Techniques utilizing mode projection typically involve down-weighting the spatial modes that are strongly correlated with survey properties by assigning a large variance to them. This technique has been explained and utilized in, e.g., \citet{1992ApJ...398..169R,2020JCAP...03..044N}.
The variance of the estimated clustering increases as more survey properties are considered unless a threshold is used to limit the number of survey property maps. However, using such a threshold has been shown to introduce a bias in the resulting two-point function \citep{2016MNRAS.456.2095E}.
Regression-based techniques attempt to model the impact of the survey properties on the galaxy density, fitting the parameters of the model by cross-correlating the galaxies and systematic fluctuations or by using a least-squares estimate. For instance, \citet{2011MNRAS.417.1350R,2012ApJ...761...14H} fit for the impact of observing conditions in the correlation function and power spectrum, respectively. The disadvantage of this method is that any spurious correlation between the 2-point function of the galaxies and the survey properties will result in a correction, even if the fluctuations are not spatially related. This makes it easy to over-correct for systematic fluctuations, which may bias the resulting correlation function estimate, although \citet{2020arXiv200714499W} show how the pseudo-$C_\ell$ implementation of mode projection can be interpreted as an ordinary least squares regression approach that accounts for this over-correction.
As part of the analysis of the DES Y1 ``Gold'' data release, \mydefcitealias{2018PhRvD..98d2006E}{Paper~\Romannumeral{1}} also fit for the impact of survey properties, but using one of the alternative suggestions from \citet{2011MNRAS.417.1350R} of applying the corrections one at a time in order to account for potential correlations between different sources of systematic fluctuations. Briefly, the method of \citetalias{2018PhRvD..98d2006E} is as follows: the average number of galaxies per pixel $N_{\rm gal}$ is measured for all pixels with a survey property value $s$ within a bin $s \in [s_{\rm min}, s_{\rm max}]$ for one of the survey property maps, relative to the average number of galaxies per pixel in all pixels $\langle N_{\rm gal}\rangle$. A model is fit across all bins of the survey property values, and the $\Delta \chi^2$ for this model compared to a null test where $N_{\rm gal} / \langle N_{\rm gal}\rangle = 1$ is calculated. The significance of the survey property map is defined by comparing this $\Delta \chi^2$ to the sixty-eighth percentile of the equivalent quantity measured in \num{1000} contaminated Gaussian mock catalogs. This procedure is repeated for each survey property map and the maps are ranked by significance. A correction is applied for the most significant map to the measurements of $N_{\rm gal} / \langle N_{\rm gal}\rangle$, and the significance of each map is re-calculated. To avoid over-correction, this iterative process continues until none of the survey property maps have a significance above some target threshold. However, it is not necessarily the case that the effects of the various survey properties can be separated in this manner. For instance, this method precludes the possibility that significant systematic fluctuations can arise from the coherent contribution of multiple sources of systematics despite each individual survey property map being negligible by itself. Also, the analysis in \citetalias{2018PhRvD..98d2006E} included the spatial structure of the galaxy distribution only through the covariance in the galaxy densities binned by survey property. The analysis method introduced in this paper explicitly incorporates the density and spatial separations of neighboring pixels for determining the coefficients of the fluctuations sourced by survey properties: it is a much finer-grained look at that spatial structure.
Several other recent studies have attempted to use the regression-based technique directly with the galaxy density field while incorporating the spatial structure of the galaxy density field \citep[see, e.g.,][]{2016ApJS..224...34P,2017MNRAS.465.1831D}. However, as discussed in \citet{2020MNRAS.495.1613R}, these models are also often vulnerable to over-correction. The regression method used by \citet{2020MNRAS.495.1613R} differs from previous regression-based techniques in that it does not assume a functional form for the impact of the survey properties on the observed galaxy density. Instead, \citet{2020MNRAS.495.1613R} rely on a neural network approach and feature selection to achieve accurate systematic corrections without over-correction. However, this method fails to propagate the statistical and systematic uncertainties due to the correction into the error budget of the galaxy clustering signal.
In this paper, we implement an improved version of the linear model described in \citet{2016ApJS..224...34P}. Relative to that work, we reduce the number of free parameters by one by enforcing the condition that in the absence of systematic fluctuations, the observed galaxy density field will be equal to the true galaxy density field with a mean of zero (i.e., we do not include the constant term in equations 13 and 14 of that paper as a free parameter in our model). Our analysis explicitly incorporates the spatial clustering signal of the galaxy density field in an iterative approach, and mock catalogs are used to calibrate and correct for the residual bias due to over-correction. The combination of using a Markov chain Monte Carlo (MCMC) to fit our model and utilizing mock catalogs to correct for the bias allows us to estimate both the statistical and systematic uncertainty of our systematics-corrected galaxy correlation function. Our procedure therefore correctly inflates the error budget associated with the measurement of the galaxy correlation function, enabling us to trivially propagate these uncertainties into cosmological constraints downstream. We apply our model to the DES Y1 Gold redMaGiC{} catalog, and compare our results to those from \citetalias{2018PhRvD..98d2006E} and \citet{2018PhRvD..98d3526A}.
The paper is organized as follows: in \cref{sec:data}, we describe the redMaGiC{} catalog and the survey properties we use. We describe our method in \cref{sec:method}. The generation of our mock catalogs and the results of the validation in the mocks are discussed in \cref{sec:mocks}. We determine the impact of our systematics correction on the uncertainty in the correlation function in \cref{sec:noise}. Our results are presented in \cref{sec:results}, and we summarize our findings in \cref{sec:conclusions}.
\section{Data}
\label{sec:data}
We will estimate and correct for systematic-sourced fluctuations in the density of the DES Year 1 redMaGiC{} galaxy sample \citepalias{2018PhRvD..98d2006E}. We use the same redshift binning as the Y1 analysis, shown here in \cref{tab:zbins}, along with the number count and galaxy density in each bin. As described in \cref{sec:method}, our analysis leads us to remove survey regions with large systematic-sourced fluctuations. This cut removes \SI{\sim 3.5}{\percent} of the fiducial Y1 redMaGiC{} footprint, for a final area of \SI{\approx 1274}{\sqdeg}. The counts and galaxy density after our systematic cut are shown in the fourth and fifth columns of \cref{tab:zbins}. \Cref{fig:nz} compares the redshift distributions in each bin before and after the systematics cuts. The dotted lines of various colors are the distributions for the full redMaGiC{} sample, while the dashed lines of the same color are the distributions in the same bin after cutting based on systematics. The distributions are not normalized, so differences in height are caused by the difference in the number of galaxies before and after the cut.
\begin{table}
\centering
\input{include/redmagic_bin_table}
\caption{\label{tab:zbins} The redshift binning with information about the number of galaxies and number density both from the DES Y1 analysis and the current analysis. The second and fourth columns are the total number of galaxies in each of the redshift bins, while the third and fifth give the galaxy density per square arcminute. Note that there is a change in the mask in going from the Y1 counts and number density of columns two and three to our own in columns four and five, which reduces the area by \SI{\sim 3.5}{\percent}.}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{des_y1_redmagic_n_of_z.pdf}
\caption{\label{fig:nz} The redshift distribution for each redshift bin, found by stacking Gaussian distributions with mean and standard deviation equal to the redMaGiC{} redshift and error. The dashed lines are the distributions with our new mask, and the dotted lines of the corresponding colors are the corresponding distributions from \citetalias{2018PhRvD..98d2006E} in the same bins. The curves are not normalized, so differences in height are from the number of galaxies in the bin.}
\end{figure}
We estimate the galaxy correlation function using the \citet{1993ApJ...412...64L} estimator,
\begin{linenomath}\begin{equation}
\label{eq:cfEstimator}
\hat{w}(\theta) = \frac{DD - 2DR + RR}{RR},
\end{equation}\end{linenomath}
where $DD$, $DR$, and $RR$ are the number of pairs of galaxies with angular separation $\theta$ given a galaxy sample $D$ and a random catalog $R$. We measure the number of pairs using \texttt{TreeCorr}\footnote{\url{http://ascl.net/1508.007}} \citep{2004MNRAS.352..338J}, and our random catalog is the same one used by \citetalias{2018PhRvD..98d2006E}, except for the fact that we remove the random points in the survey regions excluded by our analysis.
We consider a total of 18 potential sources of systematics for the observed galaxy correlation function. Each of these is represented as a map which is pixelated on the sky using the \texttt{HEALPix}\footnote{\url{https://healpix.sourceforge.net}} \citep{2005ApJ...622..759G} pixelization scheme. The majority of the maps we consider are imaging properties from the DES Y1 `GOLD' catalog release \citep{2018ApJS..235...33D}. In each of the four bands ($griz$), we have maps of
\begin{enumerate}[(i),labelindent=\parindent,leftmargin=2\parindent]
\item total exposure time;
\item mean PSF FWHM;
\item mean sky brightness, due to e.g. the moon; and
\item mean airmass.
\end{enumerate}
For all mean quantities, the value on a pixel in a given band is computed as the weighted mean over all exposures in that band which contribute to the pixel. The exposure time is instead the sum of the exposure times for each exposure contributing to the pixel. Unlike \citetalias{2018PhRvD..98d2006E}, we do not include any depth maps, as these depend on the other imaging properties in a complicated way, and therefore are not linearly independent from the other imaging properties---including the depth maps would be double counting the other imaging properties and would likely increase any over-correction biases that might exist. We therefore have \num{16} imaging property maps. We also consider contamination due to foreground stars, for which we use the stellar density map described in section~5 of \citetalias{2018PhRvD..98d2006E}. Galactic extinction is included using the dust opacity map from the Planck Collaboration \citep{2014A&A...571A..11P}. Both stellar density and extinction were considered in \citetalias{2018PhRvD..98d2006E}, but were not found to have significant correlation with the galaxy density and thus were ultimately excluded from that correction. We include both here because we do not want to preclude the possibility that they could still add coherently with other potential sources of contamination and thus impact the observed galaxy density. We note that the Planck dust map is known to have some positive correlation with the galaxy density in some redshift bins, as do most other Galactic dust maps \citep[see, e.g.,][]{2019ApJ...870..120C}. Collectively, we refer to the set of \num{18} imaging property, stellar density, and Galactic extinction maps as ``survey property maps'' or SPs. Where necessary, we use the routines of \texttt{healpy} \citep{2019JOSS....4.1298Z} for manipulating both survey property and density maps.
\section{Method}
\label{sec:method}
We determine the impact of observing conditions in the clustering of galaxies by relying on the spatial structure of the survey properties. Specifically, we estimate the extent to which the galaxy density maps are contaminated by systematic fluctuations by measuring the extent to which the galaxy density map traces the various survey property maps.
We begin by constructing a low-resolution ($N_{\rm side}=128$) map of the galaxy density field. This choice limits the number of empty pixels to a small percentage (\SI{\leq 10}{\percent}) of the total pixels. Working at this resolution, the average number of galaxies per pixel is \num{\geq 10} at all redshifts.
We degrade the resolution of our survey property maps to match the resolution of our galaxy density map, properly accounting for the masked portions of every pixel. Specifically, the degraded survey property map $\mathcal{S}^\prime$ is related to the original survey property map $\mathcal{S}$ via
\begin{linenomath}\begin{equation}
\label{eq:degradedSys}
\mathcal{S}^{\prime\, j} = \frac{\sum_{i \in j} \mathcal{S}^i f^i}{\sum_{i \in j} f^i},
\end{equation}\end{linenomath}
where $f^i \in [0, 1]$ is the fraction of pixel $i$ (at the original map resolution) that is detected in the footprint. The sums are over all high resolution pixels $i$ that fall within low resolution pixel $j$. We also degrade the pixel fraction map $f^i$, such that the fraction $f^{\prime j}$ of low resolution pixel $j$ in the footprint is related to the high resolution fraction by
\begin{linenomath}\begin{equation}
\label{eq:degradedFrac}
f^{\prime j} = \frac{1}{\tilde{N}} \sum_{i \in j} f^i,
\end{equation}\end{linenomath}
where $\tilde{N}$ is the number of high resolution pixels within a low resolution pixel.
The degraded survey property maps are transformed into standardized fluctuation maps as follows. Let $\mathcal{S}^{\prime j}_\beta$ be the value of survey property map $\beta$ on low resolution pixel $j$.
We define the mean $\overline{\mathcal{S}}_\beta$ and fluctuation scale $\hat{\sigma}_\beta$ of $\mathcal{S}^{\prime j}_\beta$ via
\begin{linenomath}\begin{align}
\overline{\mathcal{S}}_\beta &\equiv \slashfrac{\sum_{j = 1}^{N_{\rm pix}} f^{\prime j} \mathcal{S}^{\prime j}_\beta}{\sum_{j = 1}^{N_{\rm pix}} f^{\prime j}} \label{eq:sysMean} \\
\intertext{and}
\hat{\sigma}_\beta &\equiv 1.4826 \mad\!\left(\mathcal{S}^{\prime j}_\beta\right) \label{eq:sysSTD} \, .
\end{align}\end{linenomath}
The median absolute deviation in \cref{eq:sysSTD} is
\begin{linenomath}\begin{equation*}
\mad\!\left(\mathcal{S}^{\prime j}_\beta\right) \equiv \slashfrac{\sum_{j = 1}^{N_{\rm pix}} \left|\mathcal{S}^{\prime j}_\beta - \med\!\left(\mathcal{S}^{\prime j}_\beta\right)\right|}{N_{\rm mask}}\, ,
\end{equation*}\end{linenomath}
where $N_{\rm mask}$ is the number of pixels not removed by the mask. The ``fluctuation scale'' $\hat \sigma_\beta$ defined above is an estimator of the standard deviation for Gaussian fluctuations, but its value is more robust to outliers than estimates based on the sample variance. The standardized fluctuation map for survey property $\beta$ is defined as
\begin{linenomath}\begin{equation}
\label{eq:standardSys}
S^j_\beta \equiv \frac{\mathcal{S}^{\prime j}_\beta - \overline{\mathcal{S}}_\beta}{\hat{\sigma}_\beta}\, .
\end{equation}\end{linenomath}
Rather than working with the fluctuation maps themselves, we construct an orthogonal map eigenbasis as follows. We assume the survey properties on each pixel are an independent random realization from an $N_{\rm maps}$-dimensional distribution. We find the covariance matrix $\mat{C}$ of the standardized maps at the fit resolution, where
\begin{linenomath}\begin{equation*}
\mat{C}_{\alpha \beta} = \langle \left(S_\alpha - \langle S_\alpha \rangle \right) \left(S_\beta - \langle S_\beta \rangle \right) \rangle ,
\end{equation*}\end{linenomath}
and $\langle \cdot \rangle$ is the spatial average over all observed pixels. We define the rotation matrix $\mat{R}$ from the eigenvectors of $\mat{C}$ such that
\begin{linenomath}\begin{equation*}
\mat{C} = \mat{R} \mat{D} \mat{R}^\ensuremath{\top} ,
\end{equation*}\end{linenomath}
where $\mat{D}$ is a diagonal matrix with the eigenvalues of $\mat{C}$ along the diagonal. The rotated and standardized survey property value for map $\alpha$ on pixel $j$ is
\begin{linenomath}\begin{equation}
\label{eq:sAlpha}
s^j_\alpha \equiv \mat{R}^\ensuremath{\top}_{\alpha \beta} S^j_\beta .
\end{equation}\end{linenomath}
Each $s^j_\alpha$ is, therefore, a linear combination of the fluctuations in the original SP maps $\{\mathcal{S}^{\prime\, j}_\beta\}$ on a given pixel. For the rest of the paper, unless otherwise noted, the term ``SP'' refers to the eigenmap $s^j_\alpha$ of \cref{eq:sAlpha} rather than the original survey property map $\mathcal{S}^i_\beta$.
Since fluctuations in the density field cannot be sensitive to a constant non-zero SP value (any non-zero constant would simply shift the mean value of the galaxy density field), the observed galaxy density must depend only on the fluctuations of the SPs. Thus, we write $\delta^j_{\rm obs} \equiv \delta_{\rm obs}\!\left(\{s_\alpha^j\}\right)$, where $\{s_\alpha^j\}$ is a vector containing the value of pixel $j$ across all SP maps $\alpha$. Expanding around $\{s_\alpha^j\} = \vvec{0}$ to first order, we have
\begin{linenomath}\begin{equation}
\label{eq:deltaObs}
\delta_{\rm obs}^j\!\left(\{s_\alpha^j\}\right) \approx \delta^j_{\rm true} + \sum_\alpha a_\alpha s^j_\alpha ,
\end{equation}\end{linenomath}
where the coefficient $a_\alpha$ is the derivative of $\delta_{\rm obs}$ with respect to $s_\alpha$ at $\{s_\alpha^j\} = \vvec{0}$. Note that any impact on the monopole of the galaxy density field by the survey properties gets absorbed into the mean observed galaxy density, and therefore has no impact on the galaxy fluctuations. Since our expansion is at first order, we can ignore the monopole, as any coupling of it to the linear perturbations would be second order. In the expansion, we have used the fact that $\delta^j_{\rm obs}\!\left(\{s_\alpha^j\} = \vvec{0}\right) = \delta^j_{\rm true}$, where $\delta^j_{\rm true}$ is the true galaxy overdensity on pixel $j$. We have also assumed that the impact of the SPs on the galaxy density field is local: the SPs in pixel $j$ only impact the galaxy density at pixel $j$.
Our task is to find the set of coefficients $\{a_\alpha\}$ in \cref{eq:deltaObs}. We do this by fitting the likelihood $P\!\left(\vvec{\delta}_{\rm obs} \,\middle|\, \vvec{\delta}_{\rm sys}\right)$ of the observed overdensity map given the systematics map $\vvec\delta_{\rm sys} \equiv\sum_\alpha a_\alpha \vvec{s}_\alpha$, where the vector symbol denotes the full map. As discussed below, our procedure allows for covariance between pixels, so that this likelihood distribution does not in general reduce to a product over all pixels. We assume a Gaussian likelihood for $\vvec{\delta}_{\rm obs}$. This explains why it is important for the mean number of galaxies in the galaxy density map to be large. We test our sensitivity to using a Gaussian distribution in \cref{sec:mockResults}. The ensemble average over realizations of the observed density field at fixed systematics is simply
\begin{linenomath}\begin{equation}
\label{eq:deltaObsMean}
\left\langle \vvec{\delta}_{\rm obs}\right\rangle = \vvec{\delta}_{\rm sys} .
\end{equation}\end{linenomath}
We can thus write our Gaussian likelihood for $\vvec{\delta}_{\rm obs}$ as
\begin{linenomath}\begin{align}
\ln P\!\left(\vvec{\delta}_{\rm obs} \,\middle|\, \vvec{\delta}_{\rm sys}\right) = &-\frac{1}{2} \log \left|\mat{\Sigma^{\rm obs}}\right| \nonumber \\
&- \frac{1}{2} \left(\vvec{\delta}_{\rm obs} - \vvec{\delta}_{\rm sys}\right)^{\!\ensuremath{\top}} \left(\mat{\Sigma^{\rm obs}}\right)^{-1} \left(\vvec{\delta}_{\rm obs} - \vvec{\delta}_{\rm sys}\right) , \label{eq:like}
\end{align}\end{linenomath}
where we have dropped all constant terms, and again
\begin{linenomath}\begin{equation}
\vvec{\delta}_{\rm sys} = \sum_\alpha a_\alpha \vvec{s}_\alpha.
\end{equation}\end{linenomath}
The model parameters characterizing $\vvec{\delta}_{\rm sys}$ are the coefficients $a_\alpha$ for each survey property, which we aim to recover from the data. With this notation, both $\vvec{\delta}_{\rm sys}$ and $\vvec{\delta}_{\rm obs}$ are vectors of length $N_{\rm pix}$ and $\mat{\Sigma^{\rm obs}}$ is an $N_{\rm pix} \times N_{\rm pix}$ matrix, where $N_{\rm pix}$ is the number of pixels within the footprint (i.e. the number of observed pixels).
The covariance matrix for our likelihood can be written as the sum of two terms,
\begin{linenomath}\begin{equation}
\label{eq:deltaObsCov}
\mat{\Sigma^{\rm obs}} = \mat{\Sigma^{PN}} + \mat{\Sigma^{SV}} .
\end{equation}\end{linenomath}
The first term contains the Poisson noise in the density field, and takes the form
\begin{linenomath}\begin{equation*}
\mat{\Sigma^{PN}}_{jk} = \sigma_g^2 \delta_{jk} ,
\end{equation*}\end{linenomath}
where $\sigma_g$ is a constant for which we can fit and $\delta_{jk}$ is the Kronecker delta. It will become clear shortly why we allow $\sigma_g$ to be an unknown constant, rather than fixing it to the Poisson expectation. The second term in \cref{eq:deltaObsCov} accounts for the sample variance.
We fit for our SP coefficients in two iterations. During the first iteration, we assume there is no sample variance, so that $\mat{\Sigma^{\rm obs}}$ is diagonal. In this case, we can analytically solve for the variance $\sigma_g^2$ and coefficients $\{a_\alpha\}$ that maximize the likelihood in \cref{eq:like} by solving the simultaneous set of equations obtained when setting all of the partial derivatives with respect to the survey parameter coefficients and $\sigma_g^2$ to zero. We are also able to find the $19\times19$-dimensional parameter covariance matrix analytically as the inverse of the Hessian of the negative log-likelihood evaluated at the best fit---we use this parameter covariance matrix (excluding the row and column corresponding to $\sigma_g^2$) in the second iteration to select random starting locations within the \num{18}-dimensional parameter space.
Once we complete our first iteration, we use our results to estimate $\hat{\vvec{\delta}}_{\rm true}$. We then define $\mat{\Sigma^{SV}}$ via
\begin{linenomath}\begin{equation*}
\mat{\Sigma^{SV}}_{jk} = (1 - \delta_{jk})\, \hat{w}_{\rm true}\!\left(\theta_{jk}\right) ,
\end{equation*}\end{linenomath}
where $\hat{w}_{\rm true}$ is the correlation function of our estimated true overdensity field $\hat{\vvec{\delta}}_{\rm true}$ and $\theta_{jk}$ is the angular separation between pixels $j$ and $k$. We artificially set the diagonal elements of $\mat{\Sigma^{SV}}$ to zero because we cannot differentiate between the sample variance and Poisson noise within a single pixel. This also explains why we treated $\sigma_g$ as an unknown constant: $\sigma_g$ is really the sum of the Poisson and zero-separation sample variance terms. We therefore continue to use the $\sigma_g$ obtained from the first-iteration fit as the only term on the diagonal of $\mat{\Sigma^{\rm obs}}$ in the second iteration.
We use the resulting ``Poisson'' and sample variance noise estimates to refit for the coefficients of each of the SP parameters. In the second iteration, we use a Markov Chain Monte Carlo (MCMC) algorithm (specifically \texttt{emcee}; \citealt{2013PASP..125..306F}) to sample our parameter space and estimate the posterior distribution while holding both $\mat{\Sigma^{PN}}$ and $\mat{\Sigma^{SV}}$ fixed. Our best fit coefficients after the second iteration are the mean parameter values from the chain\footnote{We run our chain with \num{36} walkers for \num{1000} steps each. We do not use a burn-in when fitting to the real data as we generate the initial positions by drawing from a multivariate Gaussian with a mean and covariance matrix given by the coefficients and parameter covariance from the first iteration. We use a burn-in of \num{300} steps per walker when fitting to mock catalogs.}. To check for convergence, we look at the shift in the coefficients between the first and second halves of each chain relative to the error from the chain. We find a median shift (over all \num{18} parameters) of \numlist[list-final-separator = {, and }]{0.19;0.29;0.18;0.26;0.14} for redshift bins \numrange[range-phrase = { through }]{1}{5} respectively, and the worst convergence in any single parameter for each redshift bin is \numlist[list-final-separator = {, and }]{0.60;0.72;0.55;0.53;0.34}. We have verified that using the coefficients from the second iteration to update $\mat{\Sigma^{SV}}$ and performing a second MCMC (i.e. getting a third iteration of the coefficients) does not have a significant impact on our results.
Once we have our coefficients, we correct for the effect of systematic fluctuations on the correlation function. We do so by defining weights for each galaxy based on the systematics map value on the pixel containing the galaxy. For calculating galaxy weights, we use the systematics map at a resolution of $N_{\rm side} = 4096$. While we must fit at low resolution to ensure that our likelihood is roughly Gaussian, the fundamental assumption of our method is that survey properties only produce local modulations of the galaxy density field. Since our model is linear, all the local modulations add together when smoothing to go to lower resolution, so the relation between the survey properties and the galaxy density must be the same at low and high resolution. We standardize and rotate the high resolution maps as we did with the low resolution maps, but we use the mean, fluctuation scale, and rotation matrix determined from the low resolution maps for the purposes of defining the high resolution eigen-maps. This is critical, as the definition of the maps must match that employed in our fits. The weight for a galaxy on high-resolution pixel $i$ is
\begin{linenomath}\begin{equation}
\label{eq:weight}
w^i = \frac{1}{1 + \sum_\alpha a_\alpha s^i_\alpha} .
\end{equation}\end{linenomath}
We refer to the correlation function measured using these weights as $w_{\rm corr0}$. As previously mentioned, when calculating the systematics-corrected correlation function, we also exclude any galaxies on pixels with $\delta_{\rm sys}^i > 0.2$. This should restrict us to only areas of the sky where our first order approximation is valid. The resulting footprint is \SI{\sim 3.5}{\percent} smaller than the original Y1 footprint, and a total of \num{23359} galaxies are removed across all redshift bins. We expect complications due to the interpolation to higher resolution to be small as we find that only \SI{\sim 12.8}{\percent} (\SI{\sim 0.8}{\percent}) of galaxies have a weight that differs from unity by more than \SI{10}{\percent} (\SI{20}{\percent}) before applying the cut based on $\delta_{\rm sys}^i$.
The above procedure tends to over-correct the data for the impact of SPs. We calibrate the amount of over-correction in the correlation function from our method using mock galaxy catalogs, and use these to de-bias our procedure, which will result in an updated systematics-corrected correlation function estimate $w_{\rm corr1}$. The details of this de-biasing are presented in the next section. We describe how we incorporate statistical and systematic uncertainties due to our correction in the error budget of the observed correlation function in \cref{sec:noise}.
\section{Methodology Validation with Mock Catalogs}
\label{sec:mocks}
There are three potential sources of systematic bias in our analysis. These are, in no particular order,
\begin{enumerate*}[(i)]
\item the first order approximation from \cref{eq:deltaObs} is not accurate,
\item the Gaussian likelihood is not correct, and
\item the estimates of the SP coefficients are noisy and too much correlation is removed from the data, an effect usually referred to as over-correction.
\end{enumerate*}
As mentioned in \cref{sec:data}, we restrict our final data set to pixels where the linear prediction of the SP-sourced galaxy density fluctuations are \num{\leq 0.2}. This serves to minimize potential biases from non-linear responses in the systematics correction. We test the robustness of our methodology to non-Gaussian fields and noise by testing it on log-normal mock galaxy catalogs. We further use these catalogs to calibrate the bias in our method due to over-correction.
\subsection{Mock Catalog Generation}
\label{sec:mockData}
To create our log-normal mock catalogs, we use the fiducial cosmological parameters from \citetalias{2018PhRvD..98d2006E}: $\Omega_m = 0.295$, $A_s = 2.260574 \times 10^{-9}$, $\Omega_b = 0.0468$, $h = 0.6881$, and $n_s = 0.9676$. We run \texttt{CAMB} \citep{2000ApJ...538..473L,2012JCAP...04..027H} and \texttt{Halofit\_Takahashi} \citep{2003MNRAS.341.1311S,2012ApJ...761..152T} using \texttt{CosmoSIS} \citep{2015A&C....12...45Z} to compute the angular galaxy clustering power spectrum. We then use this power spectrum to generate a log-normal random field for the true galaxy over-density, $\delta_{\rm true}$, in each of our five redshift bins via the code \texttt{psydocl}\footnote{\url{https://bitbucket.org/niallm1/psydocl/src/master/}}. This galaxy density field is generated at high resolution ($N_{\rm side}=4096$). When appropriate (i.e. depending on the test being pursued, see below), we add systematic fluctuations to the galaxy density field using our linear model. We then calculate the expected number of galaxies in each pixel, taking into account the masked fraction in each pixel. Finally, we randomly place $N$ galaxies within each pixel, where $N$ is a Poisson realization of the expected number of galaxies.
We generate \num{100} independent realizations of $\delta_{\rm true}$ for each redshift bin. Each realization is then used to create two mock catalogs, one with no SP contamination and another with SP applied using the best fit coefficients from our analysis of the DES Y1 data set. We refer to these as uncontaminated and contaminated mocks, respectively. Note that while both the uncontaminated and contaminated mocks share the same underlying over-density fields, they have different Poisson realizations.
We use our methodology from \cref{sec:method} to estimate the impact of SPs in our mock galaxy catalogs, and compare the resulting corrected correlation function to the underlying true mock galaxy correlation function. To increase computational efficiency, we restrict our mock catalogs to the final mask employed in our analysis of the DES Y1 galaxies. That is, we do not re-apply the $\delta_{\rm sys}^i \leq 0.2$ cut in every mock. Doing so would have forced us to recompute random pairs for every mock due to slight differences in the final footprint. Because systematic fluctuations are linear in the mock catalog by construction, this additional restriction has no bearing on the conclusions drawn from our simulations. Unfortunately, this also means our mock catalogs do not allow us to test how sensitive our method is to non-linear contamination.
We test whether our contaminated mock galaxy catalogs have comparable levels of SP contamination to the data as follows. For the data and both sets of mock galaxy catalogs we compute the raw observed correlation function, and the corrected correlation function $w_{\rm corr0}$ as described in \cref{sec:method}. We then calculate the difference between these two correlation functions in all three cases.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,keepaspectratio]{w_corr0_bias_cont_data_mean+std_const_cov_v6.pdf}
\caption{\label{fig:correctionSize} Comparison of the bias between the systematics-corrected ($w_{\rm corr0}$) and uncorrected ($w_{\rm cont}$) correlation functions for the DES Y1 data and the uncontaminated and contaminated mocks, relative to the DES Y1 errors (see text for details). The blue solid line is the result for the data. The mean and sample standard deviation for the contaminated mocks is shown as the orange dashed line and orange shaded region, while the green dash-dotted line and green shaded region show the same for the uncontaminated mocks. These error regions do not include the correction factor discussed in \cref{sec:mockResults}. By eye, we see $1 \sigma$ agreement between the contaminated mocks and the data for three of the five redshift bins, and $2 \sigma$ agreement in bins 2 and 3. The gray shaded region is once again the small scale cut used by \citetalias{2018PhRvD..98d2006E}.}
\end{figure*}
The blue solid line in \cref{fig:correctionSize} shows the biased systematic correction of the DES Y1 redMaGiC{} data computed using the first iteration of our method, while the orange dashed line is the mean correction from the \num{100} contaminated mock galaxy catalogs. The green dash-dotted line is the mean for the uncontaminated mock galaxy catalogs. The widths of the bands show the sample standard deviation for each of the two sets of mocks. It is immediately apparent that the amplitude of the systematic correction in our uncontaminated mocks is significantly smaller than that of the data in redshift bins 3, 4, and 5. That is to say, we have robustly detected the presence of systematic fluctuations in the DES Y1 data set. More generally, the correction derived from our contaminated mocks is comparable to that in the data, particularly for the redshift bins that exhibit strong systematic fluctuations. Thus, \cref{fig:correctionSize} provides evidence that the contaminated mock galaxy catalogs used in our analysis are a reasonable match to the data.
\subsection{Methodology Validation: Recovery of the SP Coefficients}
\label{sec:mockResults}
We fit for the SP coefficients in both sets of \num{100} mocks for each redshift bin, for a total of \num{1000} independent mock catalogs to be analyzed. Because we know the SP coefficients used to generate the mocks, we can test whether we correctly recover the input coefficients with our analysis. To do so, we calculate the $\chi^2$ of the mean coefficients estimated from our posterior and the input for each mock. That is, for each mock catalog $\nu$ we compute
\begin{linenomath}\begin{equation}
\label{eq:chi2}
\chi^2_\nu = \left(\{\hat{a}_\alpha\}_\nu - \{a_\alpha\}_{0, \nu}\right)^\ensuremath{\top} \hat{\mat{C}}_\nu^{-1} \left(\{\hat{a}_\alpha\}_\nu - \{a_\alpha\}_{0, \nu}\right) ,
\end{equation}\end{linenomath}
where $\{a_\alpha\}_{0, \nu}$ is the input vector of \num{18} coefficients used in generating mock catalog $\nu$, $\{\hat{a}_\alpha\}_\nu$ is the mean vector of the posterior from our analysis for mock $\nu$ with length \num{18}, and $\hat{\mat{C}}_\nu$ is the parameter covariance matrix estimated from the MCMC chain for mock $\nu$ with dimensions $18\times18$. We show the distribution of the $\chi^2_\nu$ statistics for all \num{1000} mocks as the blue histogram in \cref{fig:chi2hist}. For reference, the green line is the expected $\chi^2$ distribution for \num{18} degrees of freedom, \num{18} being the number of SPs. It is clear that the distribution of $\chi^2$ values is biased relative to our expectation.
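Schematically, the per-mock statistic of \cref{eq:chi2} can be computed as follows; \texttt{a\_hat}, \texttt{a\_in}, and \texttt{cov} are hypothetical stand-ins for the posterior mean, the input coefficients, and the chain covariance.
\begin{verbatim}
# Per-mock chi^2 of the recovered SP coefficients (sketch).
import numpy as np

def coeff_chi2(a_hat, a_in, cov):
    r = a_hat - a_in
    return float(r @ np.linalg.solve(cov, r))

rng = np.random.default_rng(0)
cov = np.eye(18)                       # stand-in parameter covariance
a_in = rng.normal(size=18)             # stand-in input coefficients
a_hat = a_in + rng.multivariate_normal(np.zeros(18), cov)
print(coeff_chi2(a_hat, a_in, cov))    # ~chi^2 with 18 dof
\end{verbatim}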
\begin{figure}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{chi2_histogram_flat_const_cov_nside128_nburnin300_nsteps700_v6.pdf}
\caption{\label{fig:chi2hist} The distribution of $\chi^2$, as defined in \cref{eq:chi2}, for all contaminated and uncontaminated mocks in all redshift bins. The blue histogram is the original distribution. The orange histogram is the result of re-scaling every $\chi^2$ by $\sfrac{18}{\langle \chi^2\rangle}$. The green line is the expected $\chi^2$ distribution with \num{18} degrees of freedom, for reference. Note that both histograms are normalized.}
\end{figure}
\citet{2007A&A...464..399H} pointed out that noise in the covariance matrix biases $\chi^2$ statistics. In our case, the noise in the covariance matrix is only partly due to a finite number of realizations in the MCMC: noise in the data will also generate noise in the empirically estimated covariance matrix, which will in turn bias the recovered $\chi^2$. In the absence of a first principles prescription for the expected bias in our analysis, we adopt an ad-hoc correction by demanding the average $\chi^2$ over all our simulations be equal to the number of degrees of freedom in the problem (\num{18}). That is, we de-bias every $\chi^2$ value by dividing it by the factor $\lambda \equiv 22.33/18=1.24$. The resulting distribution is shown as the orange histogram in \cref{fig:chi2hist}, which is now an excellent match to expectations.
As discussed in \citet{2007A&A...464..399H}, the bias due to noise in the covariance matrix estimate propagates into the parameter posteriors. Consequently, we increase the statistical uncertainty in our recovered corrections for the correlation function by a factor of $\sqrt{1.24}$. The fact that our recovered distribution of $\chi^2$ values matches expectation implies that we are successfully recovering the input systematic coefficients within our re-scaled noise estimate.
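In code, this ad-hoc de-biasing amounts to the following sketch, where \texttt{chi2\_all} is a hypothetical stand-in for the \num{1000} mock values.
\begin{verbatim}
# Rescale the chi^2 values so their mean equals the 18 dof,
# and inflate the parameter errors accordingly (sketch).
import numpy as np

chi2_all = np.random.chisquare(18, size=1000) * 1.24  # stand-in data
lam = chi2_all.mean() / 18.0            # ~1.24 in our analysis
chi2_debiased = chi2_all / lam
sigma_rescale = np.sqrt(lam)            # factor applied to the errors
\end{verbatim}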
\subsection{Over-correction Calibration}
\label{sec:calibration}
The orange dashed line and shaded band in \cref{fig:corrections} show the mean and $1 \sigma$ region for the difference between the observed and true correlation functions of our \num{100} independent systematics-contaminated mock catalogs, in units of the statistical uncertainty of the DES Y1 analysis. The $1 \sigma$ region is computed as the error on the mean. The blue solid line and shaded band are the same as the orange, but for the systematics-corrected correlation function with no bias correction (i.e. $w_{\rm corr0}$). While there is a significant improvement when going from no correction to our systematics correction, it is also clear that our method somewhat over-corrects the data.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,keepaspectratio]{w_cont_corr0_corr1_bias_truth_mean+eom_const_cov_v6.pdf}
\caption{\label{fig:corrections} The difference between the various correlation functions for the contaminated mocks and the true correlation function. The orange dashed line shows the offset for the correlation function without any corrections. The blue solid line shows the offset when the systematics weights are applied, but no bias correction is used. The green dash-dotted line is the final offset, with both the systematics weights and the bias correction. Each line is the mean for the \num{100} mocks, and the shaded regions are the error on the mean. Note that the offset is also divided by the sample standard deviation of the true correlation function. We only show scales with $\theta > \SI{8}{\arcmin}$ for clarity. The gray shaded region shows the small scale cut used by \citetalias{2018PhRvD..98d2006E}, so any scales within that region will not impact the cosmology results.}
\end{figure*}
We seek to calibrate the amount of over-correction for our method based on the results from \cref{fig:corrections}. However, \emph{note that the level of over correction is itself sensitive to the input amount of contamination}. This is apparent in \cref{fig:biasTruth}, which shows the mean and error on the mean of the over-correction for both uncontaminated (orange) and contaminated (blue) mock galaxy catalogs.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,keepaspectratio]{w_corr0_bias_truth_mean+eom_const_cov_v6.pdf}
\caption{\label{fig:biasTruth} Bias in the systematics-corrected correlation function, relative to the sample standard deviation of the true correlation function. The orange dashed line shows the mean bias for the \num{100} uncontaminated mocks, and the orange shaded region is the error on the mean. Similarly, the blue solid line and shaded region are the mean and error on the mean for the \num{100} contaminated mocks. Note that there is a non-trivial bias even for the contaminated mocks indicating that we are over-correcting for SPs. We only show scales with $\theta > \SI{8}{\arcmin}$ for clarity. The gray shaded regions are once again the small scale cuts from \citetalias{2018PhRvD..98d2006E}.}
\end{figure*}
We use the results in \cref{fig:biasTruth} to reduce the impact of over-correction, and to characterize the remaining systematic uncertainty associated with this effect. Because we see that the level of over-correction is sensitive to the amount of contamination and we do not know the actual contamination level in the data, we must account for this sensitivity when we de-bias. The contaminated and uncontaminated mocks represent the two extreme possibilities for the data, so we de-bias our correlation functions using the mean of the over-correction measured in the contaminated and uncontaminated mocks. That is, we define
\begin{linenomath}\begin{equation}
\label{eq:bias}
\Delta w(\theta) \equiv \frac{1}{2} \left[\langle w_{\rm corr0}^{\rm cont}(\theta) - w_{\rm true}(\theta)\rangle + \langle w_{\rm corr0}^{\rm uncont}(\theta) - w_{\rm true}(\theta)\rangle\right] ,
\end{equation}\end{linenomath}
where $w_{\rm corr0}^{\rm cont}(\theta)$ is the systematics-corrected correlation function at $\theta$ for the contaminated mock galaxy catalogs prior to de-biasing, and $w_{\rm corr0}^{\rm uncont}$ is the equivalent quantity computed for the uncontaminated mock galaxy catalogs. The average $\langle \cdot \rangle$ above is over the simulated data sets. The difference between the two terms in \cref{eq:bias} is indicative of the systematic uncertainty of this bias correction, as explained in \cref{sec:noise}. Given $\Delta w$, we define an updated systematics-corrected correlation function $w_{\rm corr1}$ via
\begin{linenomath}\begin{equation}
\label{eq:corr1}
w_{\rm corr1}(\theta) \equiv w_{\rm corr0}(\theta) - \Delta w(\theta)\, .
\end{equation}\end{linenomath}
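A compact sketch of \cref{eq:bias,eq:corr1} follows; the arrays below are hypothetical stand-ins for the mock measurements.
\begin{verbatim}
# De-bias the systematics-corrected correlation function (sketch).
# w_cont, w_uncont: (n_mock, n_theta) arrays of w_corr0 for the
# contaminated / uncontaminated mocks; w_true: true mock w(theta).
import numpy as np

def debias(w_corr0_data, w_cont, w_uncont, w_true):
    dw = 0.5 * ((w_cont - w_true).mean(axis=0)
                + (w_uncont - w_true).mean(axis=0))  # Delta w(theta)
    return w_corr0_data - dw                         # w_corr1(theta)
\end{verbatim}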
The green dash-dotted line and shaded band in \cref{fig:corrections} show the mean and $1 \sigma$ region for the difference between our updated systematics-corrected correlation function estimates $w_{\rm corr1}$ and the true correlation function, as estimated using \num{100} contaminated mock catalogs. Recall that the y-axis is scaled in units of the purely statistical uncertainty of the DES Y1 analysis. It is clear from the figure that while a residual bias remains, its amplitude and uncertainty are much smaller than the statistical uncertainties of the DES Y1 data set. Moreover, the true underlying correlation function is within the expected errors in the measurement.
\section{The Impact of Systematics Removal on the Noise}
\label{sec:noise}
The covariance matrix used in \citetalias{2018PhRvD..98d2006E} when fitting the galaxy clustering signal was solely based upon theoretical considerations, as described in \citet*{2017arXiv170609359K}. In particular, it accounted only for Poisson noise and sample variance in the galaxy density field, where the latter includes both Gaussian and connected terms, as well as the super-sample covariance contribution. In practice, removing the imprint of systematic fluctuations on the galaxy density field carries with it additional uncertainty that needs to be propagated into the covariance matrix used to analyze the data. We now characterize this additional noise contribution.
We start with the statistical uncertainty in our method, i.e., the uncertainty due to the noise in our estimates of the linear coefficients of the SP maps. Because we use an MCMC to fit for the coefficients describing the impact of SPs, we can readily sample the posterior distribution to obtain realizations of the coefficients. For each such set of coefficients, we calculate the systematics-corrected correlation function $w_{\rm corr0}$, resulting in many realizations of systematics-corrected correlation functions. We calculate the covariance matrix from these realizations, and re-scale it by the factor of $\langle \chi^2\rangle/18=1.24$ from the discussion in \cref{sec:mockResults}. This defines the statistical covariance matrix $\mat{C^{\rm stat}}$, which characterizes statistical uncertainties in the systematics correction. We note that our estimation does not allow for statistical covariance in the SP corrections across the redshift bins. Since the noise in the coefficients of the SP maps depends on the galaxy density field, which will be correlated across bins, this is not true in detail. However, we expect that the high quality of the redMaGiC{} photometric redshifts implies that any such correlation is small, particularly when propagated onto the SP coefficients.
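Schematically, with \texttt{w\_of\_coeffs} a hypothetical map from a coefficient sample to the corresponding $w_{\rm corr0}$:
\begin{verbatim}
# Statistical covariance of the correction from posterior samples
# (sketch): recompute w_corr0 for each coefficient draw, take the
# sample covariance, and rescale by lambda = <chi^2>/18 = 1.24.
import numpy as np

def c_stat(chain, w_of_coeffs, lam=1.24):
    w_reals = np.array([w_of_coeffs(a) for a in chain])
    return lam * np.cov(w_reals, rowvar=False)
\end{verbatim}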
In the process of de-biasing our systematics-corrected correlation functions, i.e. going from $w_{\mathrm{corr0}}$ to $w_{\mathrm{corr1}}$, our corrections may remove clustering modes from the correlation function. This will in turn remove some of the sample variance from $w_{\mathrm{corr1}}$, particularly at large scales. To test for this possibility, we compared the variance in $w_{\mathrm{corr1}}$ as measured in our simulations to the quantity
\begin{equation*}
\hat{\var}\left(w_{\mathrm{corr1}}\right) \equiv \var\left(w_{\mathrm{true}}\right) + \diag \mat{C^{\mathrm{stat}}} ,
\end{equation*}
where $\mat{C^{\mathrm{stat}}}$ is the statistical covariance matrix including the re-scaling from the previous paragraph. The blue shaded region in \cref{fig:cstat_debias_correction} shows the ratio of our revised error estimate $\hat{\var}\left(w_{\mathrm{corr1}}\right)$ to the variance of $w_{\mathrm{corr1}}$ measured in our simulations, averaged over all redshift bins. The width of the band represents the $1\sigma$ region around the mean. We see that at small scales the variance in our simulations is consistent with our error estimate. By contrast, we overestimate the error at large scales, likely due to the removal of clustering modes in our systematic correction algorithm. We have found that accounting for this reduced variance tends to make the covariance matrix of the 3$\times$2pt{} data vector non-invertible due to the cross terms between probes. Because the systematic bias in the variance estimate is small, and because our revised errors have little impact on cosmological posteriors (see below), we will leave the problem of how to adequately model the resulting decrease in variance to future work.
\begin{figure}
\centering
\includegraphics[width=\linewidth,keepaspectratio]{c_stat_bias_plot_v6.pdf}
\caption{A comparison of $\var\left(w_{\mathrm{corr1}}\right)$ in our simulations and our revised error estimate $\var\left(w_{\mathrm{true}}\right) + \diag \mat{C^{\mathrm{stat}}}$. Our revised error estimate correctly describes the variance of the simulations at small scales, but slightly overestimates the noise at large scales. This likely reflects reduced sample variance due to removal of clustering modes at large scales.}
\label{fig:cstat_debias_correction}
\end{figure}
The systematic uncertainty associated with our de-biasing procedure of \cref{sec:calibration} is calculated as the sum in quadrature of two distinct terms. The first term sets the systematic uncertainty to half the amplitude of the applied correction, i.e. large corrections will result in large uncertainties. The second term accounts for the difference in the amount of over-correction inferred from the contaminated and uncontaminated mocks. If the inferred over-corrections are vastly different, the resulting mean correction should be assigned a large uncertainty. This uncertainty is set to half the difference between the over-correction inferred from the contaminated and uncontaminated mocks. The corresponding covariance matrix characterizing these systematic uncertainties takes the form
\begin{linenomath}\begin{equation}
\label{eq:sysCov}
\mat{C^{\rm sys}}_{ab} \equiv \frac{1}{4} \left[\Delta w(\theta_a) \Delta w(\theta_b) + \delta w(\theta_a) \delta w(\theta_b)\right]\, ,
\end{equation}\end{linenomath}
where $a$ and $b$ index angular bins, and where we have defined
\begin{linenomath}\begin{equation*}
\delta w(\theta) \equiv \frac{1}{2} \left[ \left\langle w_{\rm corr0}^{\rm cont}(\theta) - w_{\rm true}(\theta)\right\rangle - \left\langle w_{\rm corr0}^{\rm uncont}(\theta) - w_{\rm true}(\theta) \right\rangle\right]\, .
\end{equation*}\end{linenomath}
As in \cref{eq:bias}, the average $\langle \cdot \rangle$ above is over all simulated data sets.
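In code, \cref{eq:sysCov} reduces to a pair of outer products; \texttt{dw} and \texttt{deltaw} stand for the arrays of $\Delta w$ and $\delta w$ over the angular bins.
\begin{verbatim}
# Systematic covariance of the de-biasing procedure (sketch).
import numpy as np

def c_sys(dw, deltaw):
    return 0.25 * (np.outer(dw, dw) + np.outer(deltaw, deltaw))
\end{verbatim}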
The final covariance matrix estimate for the data is $\mat{C^{Y1}} + \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$, where $\mat{C^{Y1}}$ is the theoretical covariance matrix used in \citetalias{2018PhRvD..98d2006E}. The green dash-dotted line and band in \cref{fig:covComponents} show the mean and uncertainty of the ratio between the diagonal elements of $\mat{C^{\rm sys}}$, as defined in \cref{eq:sysCov}, to the diagonal elements of $\mat{C^{Y1}}$. The orange dashed line and band is the same ratio but for $\mat{C^{\rm stat}}$. We have checked that increasing the number of realizations used to estimate $\mat{C^{\rm stat}}$ does not significantly change our measured covariance. The combination of the systematic and statistical covariance relative to the Y1 covariance is shown as the blue solid line and band. The gray shaded region in each panel shows the region excluded by the small scale cuts for the cosmology analysis in \citetalias{2018PhRvD..98d2006E}, for which our changes will not impact the inferred cosmological parameters. While uncertainties in our de-biasing procedure for over-correction are negligible, we see that the statistical uncertainties in our systematics mitigation algorithm start to become comparable to statistical uncertainties in the correlation function at large scales.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,keepaspectratio]{cov_diags_sys+stat_y1_mean_const_cov_fit128_nside4096_nsteps250_nmocks100_v6_stat-uncorrected.pdf}
\caption{\label{fig:covComponents} A comparison of the diagonal elements from various components of the covariance matrix relative to the diagonal elements of the theoretical covariance matrix utilized in \citetalias{2018PhRvD..98d2006E}. In all cases, the denominator of the quantity on the y-axis is the diagonal elements of $\mat{C^{Y1}}$. The numerator for the blue solid line corresponds to our updated error estimates, including both statistical (orange dashed) and systematic (green dash-dotted) errors. The red squares show the corresponding empirical estimates of the variance in our simulations. The gray shaded region is the small scale cut from \citetalias{2018PhRvD..98d2006E}, and will not impact the cosmology results.}
\end{figure*}
\section{Results}
\label{sec:results}
As a brief summary of \cref{sec:method,sec:mocks,sec:noise}, we assume fluctuations in SPs introduce artificial galaxy fluctuations through a local linear response. We calibrate these response coefficients using the observed galaxy density maps and SP maps, and use them to remove the impact of systematic fluctuations in the galaxy density field. Using mock galaxy catalogs, we demonstrate that our method results in some small amount of over-correction, which we calibrate. We further characterize the additional statistical and systematic uncertainty introduced by our systematics-mitigation algorithm. We now apply our full systematics-correction algorithm to the DES Y1 data set.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,height=0.5\textheight,keepaspectratio]{wtheta_const_cov_fit128_nside4096_nreal250_nmocks100_v6_uncorrected.pdf}
\caption{\label{fig:wthetaY1} The correlation function in each redshift bin for the DES Y1 redMaGiC{} galaxies. The gray dashed line is the correlation function without correcting for SPs. The orange solid line is the systematics-corrected correlation from \citetalias{2018PhRvD..98d2006E}. The blue points are the de-biased correlation function using our linear model weights, and the error bars are obtained from the full ($\mat{C^{Y1}} + \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$) covariance matrix. Note that while the gray and orange lines are computed with the DES Y1 mask, the blue points use our restricted mask with $\delta_{\rm sys} \leq 0.2$, resulting in \SI{\sim 3.5}{\percent} less area.}
\end{figure*}
In \cref{fig:wthetaY1}, we show the angular correlation function in each of the five redshift bins using our systematics weights and bias correction as blue circles, with errors from the combined $\mat{C^{Y1}} + \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$ covariance matrix. For comparison, we also show the correlation function without correction and the systematics-corrected correlation function from \citetalias{2018PhRvD..98d2006E}. We note that in arriving at our updated correlation function, there is a small change in the mask to mitigate the impact of non-linear systematic fluctuations, so that the areas over which the correlation functions are computed are not precisely the same. The bottom panel for each redshift bin shows the difference of each correlation function relative to the systematics-corrected estimate of \citetalias{2018PhRvD..98d2006E}. We see that the two different methods for estimating systematic corrections are in excellent agreement relative to the statistical uncertainty of the DES Y1 data set. Nevertheless, some small differences are clearly present. It is interesting to note that in the second redshift bin, our correction results in slightly \emph{more} correlation than the uncorrected correlation function, rather than \emph{less}. This boost is due to the over-correction de-biasing procedure calibrated in the mocks.
\begin{table}
\centering
\begin{tabular}{lccccc}
\toprule
$z$ bin & $\chi^2_{\rm stat+sys}$ & $\chi^2_{\rm tot}$ & $\chi^{\prime 2}_{\rm stat+sys}$ & $\chi^{\prime 2}_{\rm tot}$ & \# Bins \\ \midrule
1 & 18.02(0.02) & 0.463 & 9.563(0.30) & 0.286 & 8 \\
2 & 120.0(0.00) & 2.04 & 57.03(0.00) & 1.37 & 10 \\
3 & 97.46(0.00) & 0.900 & 26.71(0.01) & 0.502 & 11 \\
4 & 45.65(0.00) & 0.922 & 67.91(0.00) & 0.611 & 12 \\
5 & 344.6(0.00) & 1.89 & 22.82(0.04) & 0.354 & 13 \\ \bottomrule
\end{tabular}
\caption{\label{tab:wChi2} The $\chi^2$ for the systematics-corrected correlation function from \citetalias{2018PhRvD..98d2006E} and this work in each redshift bin. The last column is the number of angular bins used to calculate the $\chi^2$, which are the bins outside the small scale cut represented by the gray shaded regions in \cref{fig:wthetaY1}. The second column is the $\chi^2$ when including only the uncertainty from the systematics correction, while the third column is the $\chi^2$ relative to the full covariance matrix. The fourth and fifth columns are the same as the second and third, but the $\delta_{\rm sys} > 0.2$ mask is applied to the galaxies with the DES Y1 weights. The numbers in parentheses in the second and fourth columns show the probability to exceed the $\chi^2$ given the number of angular bins in the last column (the probability to exceed is \num{\sim 1.0} for all bins in both the third and fifth columns). Our updated correlation function is only consistent with that from \citetalias{2018PhRvD..98d2006E} in the first redshift bin, but it is consistent with the correlation function with our mask and the Y1 weights in bins \numlist{1;3;5}.}
\end{table}
To quantify the difference in the correlation functions from the two different weighting methods, \cref{tab:wChi2} shows the $\chi^2$ statistic for the DES Y1 correlation function and our correlation function, namely
\begin{linenomath}\begin{equation*}
\chi^2 = \left(w_{Y1}(\theta) - w_{\rm corr1}(\theta)\right)^\ensuremath{\top} \mat{C}^{-1} \left(w_{Y1}(\theta) - w_{\rm corr1}(\theta)\right),
\end{equation*}\end{linenomath}
where the choice of the covariance matrix $\mat{C}$ requires some discussion (see below). In calculating $\chi^2$, we exclude any angular bins that are removed with the small scale cut (the gray regions in \cref{fig:wthetaY1}). The number of remaining angular bins after the small scale cut is shown in the last column of the table. The difference between the correlation functions should not be subject to Poisson noise or sample variance, as these are the same for both correlation functions. Therefore, in the second column of \cref{tab:wChi2}, we show the $\chi^2$ when we use $\mat{C} = \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$, with the probability to exceed the given $\chi^2$ shown in parentheses. While in principle this comparison should also be subject to the uncertainty due to the method of \citetalias{2018PhRvD..98d2006E}, that paper demonstrated that the uncertainties in their systematics correction did not impact the cosmological posteriors, and those uncertainties were therefore not characterized. Consequently, our comparison does not account for the uncertainty in the Y1 systematics correction. It is clear that our weights method results in a correlation function that is formally inconsistent with that of the Y1 analysis assuming zero uncertainty from the Y1 weights method. However, the size of the cosmology contours is sensitive to the full covariance matrix $\mat{C^{Y1}} + \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$. The third column in \cref{tab:wChi2} shows the $\chi^2$ when we use the full covariance matrix for $\mat{C}$. Notice that in this case, the $\chi^2/\mathrm{dof} \leq 0.1$ for most redshift bins. This result explicitly demonstrates that the difference in the correlation function produced by the two methods is small relative to the statistical uncertainty.
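Schematically, the statistics in \cref{tab:wChi2} can be computed as follows; \texttt{keep} is a hypothetical boolean mask selecting the angular bins outside the small scale cut.
\begin{verbatim}
# chi^2 and probability to exceed between two corrections (sketch);
# cov is C_stat + C_sys or the full covariance, as appropriate.
import numpy as np
from scipy import stats

def consistency_chi2(w_y1, w_corr1, cov, keep):
    r = (w_y1 - w_corr1)[keep]
    c = cov[np.ix_(keep, keep)]
    chi2 = float(r @ np.linalg.solve(c, r))
    return chi2, stats.chi2.sf(chi2, keep.sum())
\end{verbatim}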
While our updated correlation function is inconsistent with the correlation function presented in \citetalias{2018PhRvD..98d2006E} when excluding the statistical uncertainty, the difference between them is actually sourced by two effects: the difference in the weights produced by the two corrections, and the difference in the footprint. In particular, removal of the pixels with $\delta_{\rm sys}^i > 0.2$ improves the agreement between the two. To determine whether the corrections are consistent after accounting for the different masks, we recompute the galaxy correlation function using the fiducial Y1 weights over our new mask. The $\chi^2$ statistics for this comparison are shown in the fourth and fifth columns of \cref{tab:wChi2}. As before, the $\chi^2_{\rm tot}/\mathrm{dof} \lesssim 0.1$ in all bins when using the full covariance matrix. However, the correlation functions with the different weights in this case are in much better agreement. We take the two systematic corrections to be consistent with one another if the probability to exceed the observed $\chi^2$ between them is at least \num{0.01}. Based on this definition, the corrections for redshift bins \numlist{1;3;5} are consistent with each other, as opposed to only the first redshift bin when the masks were different. The second and fourth bins are inconsistent in both cases.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,keepaspectratio]{desy1_contour_3x2_eduardo_cosmolike.pdf}
\caption{\label{fig:3x2ptContours} A comparison of the cosmology contours for the 3$\times$2pt{} analysis, with each 2-dimensional contour showing the \SI{68}{\percent} and \SI{95}{\percent} confidence levels, and the shaded regions in the 1-dimensional plots signifying the \SI{68}{\percent} confidence level. The blue contours are the public DES Y1 results as in \citet{2018PhRvD..98d3526A}. The red contours are the results with our new correlation function and updated covariance matrix. Note that the blue and red contours use a different version of \texttt{CosmoLike} and different samplers. The black dashed lines also show the contours using the DES Y1 data vector, but using the same version of \texttt{CosmoLike} and same sampler as was used to generate the red contours. The minimum $\chi^2$ for the DES Y1 data vector and our updated data vector are shown as the black and red text, respectively, with \num{444} degrees of freedom.}
\end{figure}
We use our new de-biased systematics-corrected correlation function and the full $\mat{C^{Y1}} + \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$ covariance matrix in combination with the cosmic shear and galaxy-galaxy lensing data vectors and covariance matrices from the DES Y1 cosmology analysis \citep{2018PhRvD..98d3526A} to re-run the DES 3$\times$2pt{} cosmology analysis. The resulting cosmology contours for $\Omega_m$, $A_s$, and $S_8$ are shown in \cref{fig:3x2ptContours}, with the public DES Y1 results in blue. For our analysis, we use an updated version of \texttt{CosmoLike} \citep{2017MNRAS.470.2100K,2020MNRAS.497.2699F} and use \texttt{emcee} \citep{2013PASP..125..306F} as our sampler. The results with this pipeline and our updated data vector and covariance matrix are shown in red in \cref{fig:3x2ptContours}.
As we use a different pipeline and sampler than the fiducial Y1 analysis of \citet{2018PhRvD..98d3526A}, it is unclear how much of the difference between the red and blue contours in \cref{fig:3x2ptContours} is because of our changes to the data vector and covariance matrix and how much is a reflection of the differences in the modelling pipeline. We therefore show as black dashed lines in \cref{fig:3x2ptContours} the results of using the updated \texttt{CosmoLike} pipeline when run on the fiducial Y1 data vector and covariance matrix. The differences between the red and black contours are due to the difference in the estimated correlation function and its corresponding covariance matrix. It is clear that our weighting method does not have a significant impact on the cosmological inference relative to the Y1 analysis. This is expected given that both the difference in the correlation functions with the two different weighting methods and the uncertainty in our systematic correction are small relative to the statistical uncertainty of the measurement.
The black and red text above the histogram of $S_8$ in \cref{fig:3x2ptContours} show the minimum $\chi^2$ values for the fiducial Y1 data vector and our updated data vector, respectively, for each data vector compared to the model with \num{444} degrees of freedom \citep[see][]{2018PhRvD..98d3526A}. The minimum $\chi^2$ in each case is $-2 \log L_{\rm max}$ at the maximum likelihood point in the MCMC chain. It is encouraging to see that even though our method does not significantly change the cosmological inference, it does result in a significant improvement in the goodness of fit ($\Delta \chi^2 = -6.5$ with no additional parameters). This improvement in the $\chi^2$ is due to both the increased error from our systematics correction and the shifts in the data vector that occur when replacing the Y1 weighting method with ours. To show that this is the case, we consider the calculation of the best fit $\chi^2$ with our updated data vector and covariance matrix, which we now write as
\begin{linenomath}\begin{equation*}
\chi^2_{\rm new} = \left(\vvec{d}_{Y1} + \vvec{\Delta} - \vvec{m}_{Y1}\right)^\ensuremath{\top} \left(\mat{C^{Y1}} + \mat{\delta C}\right)^{-1} \left(\vvec{d}_{Y1} + \vvec{\Delta} - \vvec{m}_{Y1}\right),
\end{equation*}\end{linenomath}
where $\vvec{d}_{Y1}$ is the original data vector from the Y1 analysis, $\vvec{m}_{Y1}$ is the best fit model vector from the original Y1 analysis, $\mat{\delta C} \equiv \mat{C^{\rm stat}} + \mat{C^{\rm sys}}$ is the change in the covariance matrix, and $\vvec{\Delta}$ is the change to the difference between the data vector and best fit model vector introduced by our weights method. Note that this means that $\vvec{\Delta}$ is sensitive to both the change in the data vector as well as changes to the best fit parameters. We can expand this equation around $\mat{\delta C} = 0$, dropping terms that are beyond first order in $\mat{\delta C}$ as well as terms involving $\vvec{\Delta}^\ensuremath{\top} \mat{\delta C}$. Doing so, we find
\begin{linenomath}\begin{align*}
\Delta \chi^2 \approx &\, \vvec{\Delta}^\ensuremath{\top} \left(\mat{C^{Y1}}\right)^{-1} \left[2 \left(\vvec{d}_{Y1} - \vvec{m}_{Y1}\right) + \vvec{\Delta}\right] \\ - &\left[\left(\mat{C^{Y1}}\right)^{-1} \left(\vvec{d}_{Y1} - \vvec{m}_{Y1}\right)\right]^\ensuremath{\top} \mat{\delta C} \left[\left(\mat{C^{Y1}}\right)^{-1} \left(\vvec{d}_{Y1} - \vvec{m}_{Y1}\right)\right].
\end{align*}\end{linenomath}
The first term in this expression gives the $\Delta \chi^2$ resulting from changing the data vector and the difference in the resulting best fit model vector. The second term is the $\Delta \chi^2$ caused by the change to the covariance matrix from our systematics correction. We find $\Delta \chi^2 \approx -3.6$ for the first term and $\Delta \chi^2 \approx -3.2$ for the second. From this, we conclude that both the shift in the data vector (and resulting shift in the best fit) and the increased uncertainty due to our systematics correction contribute to the improvement in the fit.
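This first-order expansion is easy to verify numerically on random stand-ins for the residual vector, the shift, and the covariances.
\begin{verbatim}
# Numerical check of the Delta chi^2 expansion (sketch).
import numpy as np

rng = np.random.default_rng(1)
n = 20
res = rng.normal(size=n)                      # d_Y1 - m_Y1
Delta = 0.01 * rng.normal(size=n)             # change to the residual
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)                   # stand-in C^{Y1}
dC = 1e-3 * np.eye(n)                         # stand-in delta C

def chi2(r, cov):
    return float(r @ np.linalg.solve(cov, r))

exact = chi2(res + Delta, C + dC) - chi2(res, C)
Cinv_r = np.linalg.solve(C, res)
approx = Delta @ np.linalg.solve(C, 2 * res + Delta) \
         - Cinv_r @ dC @ Cinv_r
print(exact, approx)   # agree to first order in dC and Delta*dC
\end{verbatim}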
\section{Conclusions}
\label{sec:conclusions}
We have presented a method for using a linear model to mitigate the effect of systematic fluctuations in galaxy clustering analyses due to observing conditions. Our method uses a Gaussian likelihood to fit the linear model to the observed galaxy over-density and SP maps on each pixel. Our analysis explicitly incorporates the fact that neighboring pixels in the sky are correlated using an iterative approach: our first iteration uses a diagonal covariance matrix, while the second builds a non-diagonal covariance matrix from the systematics-corrected correlation function estimated from the first iteration. We further use mock catalogs to calibrate the remaining over-correction bias, which we then remove from the data correlation function.
We apply our methodology to the DES Y1 redMaGiC{} data set. Our method has four important advantages relative to that adopted in the DES 3$\times$2pt{} analysis presented in \citet{2018PhRvD..98d3526A}, namely:
\begin{itemize}
\item Our method does not require that decisions be made with regard to which survey properties matter and which do not. This also allows for the possibility of multiple survey properties ``conspiring'' to create an observationally significant signal without any single systematic reaching that threshold.
\item Our method properly increases the error budget of the galaxy correlation function estimate by accounting for the statistical and systematic uncertainty associated with systematic mitigation. We have found doing so non-trivially impacts the goodness-of-fit statistic of the best fit cosmological model.
\item Our method explicitly incorporates clustering information from neighboring pixels in our calibration of the impact of survey properties on the galaxy density field.
\item Our method is fully automated: it can be run from start to finish with minimal supervision, enabling quick turnaround for future data sets with no extra tuning.
\end{itemize}
While our updated systematics-corrected correlation function in the DES Y1 data set is formally inconsistent with that of \citetalias{2018PhRvD..98d2006E}, the two are in good agreement relative to the level of statistical uncertainty in DES Y1, and the corrections are consistent in redshift bins \numlist{1;3;5} when applied over the same footprint. Because the statistical uncertainty in the measurement is larger than the uncertainty in our correction, we observe no significant impact on the cosmological inference using a data vector with our systematics weights relative to the Y1 3$\times$2pt{} cosmology analysis. Encouragingly, however, we do see an improvement in the goodness of fit, which is caused by both the change to the data vector with our new weights and the increased error from our systematics correction. We also expect the difference in the data vector and the uncertainty in the correction to become more important in the near future as the large number of galaxies observed by upcoming surveys decreases the statistical uncertainty in galaxy clustering measurements.
\section*{Acknowledgements}
ELW and ER were supported by the DOE grant DE-SC0015975. XF is supported by NASA ROSES ATP 16-ATP16-0084 grant. Some calculations in this paper use High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and RDI and maintained by the UA Research Technologies department.
This paper has gone through internal review by the DES collaboration.
Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing
Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago,
the Center for Cosmology and Astro-Particle Physics at the Ohio State University,
the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos,
Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and
the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas,
Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC),
the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe,
the University of Michigan, NSF's NOIRLab, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth,
SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A\&M University, and the OzDES Membership Consortium.
Based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2012B-0001; PI: J. Frieman), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171.
The DES participants from Spanish institutions are partially supported by MICINN under grants ESP2017-89838, PGC2018-094773, PGC2018-102021, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya.
Research leading to these results has received funding from the European Research
Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478.
We acknowledge support from the Brazilian Instituto Nacional de Ci\^encia
e Tecnologia (INCT) e-Universe (CNPq grant 465376/2014-2).
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
Some of the results in this paper have been derived using the \texttt{healpy} and \texttt{HEALPix} packages. This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{2013A&A...558A..33A,2018AJ....156..123A}.
\section*{Data Availability}
The DES Y1 redMaGiC{} catalog is available for download at \url{https://des.ncsa.illinois.edu/releases/y1a1/key-catalogs/key-redmagic}. The DES observing condition maps, excluding dust and stellar density, are available for download at \url{https://des.ncsa.illinois.edu/releases/y1a1/gold/systematics}. The dust map is available from Planck at \url{https://irsa.ipac.caltech.edu/data/Planck/release_1/all-sky-maps/previews/HFI_CompMap_ThermalDustModel_2048_R1.20/index.html}. The stellar density catalog is not publicly available but can be constructed from the DES Y1GOLD badmask (\url{https://des.ncsa.illinois.edu/releases/y1a1/gold/footprint}) and the first public data release (\url{https://des.ncsa.illinois.edu/releases/dr1}). It may also be made available upon request with permission of the Dark Energy Survey Collaboration.
\bibliographystyle{mnras_2author}
\section{Introduction}
In the field of controlling dynamical systems, one of the major missions is to find efficient control policies for stabilizing ordinary differential equations (ODEs) at targeted equilibriums. Policies for stabilizing linear or polynomial dynamical systems have been fully developed using the standard Lyapunov stability theory, e.g., the linear quadratic regulator (LQR) \cite{khalil2002nonlinear} and the sum-of-squares (SOS) polynomials through semi-definite programming (SDP) \cite{parrilo2000structured}. As for stabilizing more general, nonlinear dynamical systems, linearization around the targeted states is often utilized, so the existing control policies are effective in the vicinity of the targeted states \citep{40741} but are likely to lose efficacy in regions far away from those states. Moreover, in real applications, the explicit forms of the controlled nonlinear systems are often partially or completely unknown, so it is very difficult to design controllers directly using only the Lyapunov stability theory. To overcome these difficulties, designing controllers by training neural networks (NNs) has become one of the mainstream approaches in the community of cybernetics \citep{486648}. Recent outstanding developments using NNs include enlarging the safe region \citep{richards2018lyapunov}, learning stable dynamics \citep{takeishi2021learning}, and constructing the Lyapunov function and the control function simultaneously \citep{chang2020neural}. In \cite{kolter2019learning}, a projected NN was constructed to directly learn a stable dynamical system that fits the observed time series data well, but it did not focus on learning a control policy to stabilize the original dynamics. All these existing developments are formulated only for deterministic systems and are not directly applicable to dynamical systems described by stochastic differential equations (SDEs), which requires us to incorporate the stochasticity appropriately when applying neural controls to different types of dynamical systems.
\begin{wrapfigure}{r}{7cm}
\vskip -0.2in
\centering
\includegraphics[width=0.5\textwidth]{framework_12.pdf}
\caption{\footnotesize Sketches of the two frameworks of neural stochastic controller. Both the ES and the AS find the control function $\bm{u}$ with a fully connected feedforward NN (FNN).
}
\label{sketch0}
\vskip -0.2in
\end{wrapfigure}
The stability theory for stochastic systems has been systematically developed in the past several decades. Representative contributions in the literature include the Lyapunov-like stability theory for SDEs \cite{mao2007stochastic}, stabilization of unstable states in ODEs only using noise perturbations \cite{mao1994stochastic}, and the stability induced by randomly switching structures \cite{guolin2018tac}.
Generally, for any SDE governed by $\mathrm{d} {\bm{x}}=f({{\bm{x}}}) \mathrm{d} t+g({{\bm{x}}}) \mathrm{d} B_t$, control policies ${{\bm{u}}}=({{\bm{u}}}_f,{{\bm{u}}}_g)$ are introduced, which transform the original equation into the controlled system $\mathrm{d} {{\bm{x}}}=[f({{\bm{x}}})+{{\bm{u}}}_f({{\bm{x}}})]\mathrm{d}t+[g({{\bm{x}}})+{{\bm{u}}}_g({{\bm{x}}})]\mathrm{d}B_t$.
Appropriate forms of control policies are able to steer the controlled system to equilibriums that are unstable in the original SDE. Traditional control methods focus on designing the deterministic control ${\bm{u}}_f$ and regard noise as a detrimental factor. In contrast, we treat noise as beneficial and design the stochastic control ${\bm{u}}_g$ to achieve stabilization.
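To make the construction concrete, a minimal Euler--Maruyama simulation of the controlled system is sketched below; the drift, diffusion, and control functions are illustrative placeholders rather than the learned controllers introduced later.
\begin{verbatim}
# Euler-Maruyama sketch (scalar case) of the controlled SDE
# dx = [f(x)+u_f(x)] dt + [g(x)+u_g(x)] dB_t.
import numpy as np

def em_path(f, g, u_f, u_g, x0, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for _ in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt))
        x = x + (f(x) + u_f(x)) * dt + (g(x) + u_g(x)) * dB
        path.append(x)
    return np.array(path)

# Illustration: dx = x dt is unstable, but the purely stochastic
# control u_g(x) = 2x yields x(t) = x0*exp(-t + 2B_t) -> 0 a.s.
path = em_path(lambda x: x, lambda x: 0.0,
               lambda x: 0.0, lambda x: 2.0 * x, x0=1.0)
\end{verbatim}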
In this article, we articulate two frameworks of neural stochastic control, which complement each other in terms of the convergence rate and the computational time of training NNs. Additionally, we analytically investigate the convergence time and the energy cost for the classic linear control and the proposed neural stochastic control, and compare them numerically.
We further extend our frameworks to the model-free case using existing data reconstruction methods.
The major contributions of this article are multi-folded, including:
\begin{itemize}
\setlength{\itemsep}{1.0pt}
\setlength{\parsep}{1.0pt}
\setlength{\parskip}{1.0pt}
\item designing two frameworks of neural stochastic control, viz., the ES and the AS, and presenting their advantages in the stochastic control,
\item providing theoretical estimates for the ES/AS and the classic linear control in terms of the convergence time and the energy cost,
\item computing the convergence time and the energy cost of particular neural stochastic controls,
\item demonstrating the efficacy of the proposed neural stochastic control in important control problems arising from representative physical systems, and we make our code available at \href{https://github.com/jingddong-zhang/Neural-Stochastic-Control}{\texttt{https://github.com/jingddong-zhang/Neural-Stochastic-Control}}.
\end{itemize}
\subsection{Related Works}
\paragraph{\fcircle[fill=black]{2.5pt} Lyapunov Method in Machine Learning}
The recent work \cite{chang2020neural} proposed an NN framework of learning the Lyapunov function and the linear control function simultaneously for stabilizing ODEs.
In comparison, we select several specific types of NNs which have typical properties of the Lyapunov function. For instance, we use the input convex neural network (ICNN) \cite{icnn}, constructing a positive definite convex function as a neural Lyapunov function \citep{kolter2019learning,takeishi2021learning}, and we construct the NN in a quadratic form \citep{richards2018lyapunov,gallieri2019safe} for linear or sublinear systems, where the SDP method is often used to find an SOS-type Lyapunov function \citep{henrion2005positive,jarvis2003some,parrilo2000structured}.
\paragraph{\fcircle[fill=black]{2.5pt} Stochastic Stability Theory of SDEs}
Stochastic stability theory for SDEs has been systematically and fruitfully developed in the past several decades \citep{kushner, arnold, mao1991stability,mao1994exponential}.
The positive effects of stochasticity have also been cultivated in control fields \citep{mao2002environmental,deng2008noise,caraballo2003stochastic,appleby2008stabilization, mao2007stabilization}.
These, therefore, motivate us to develop \textit{only} neural stochastic control to stabilize different sorts of dynamical systems in this article. More stochastic stability theory for different kinds of systems are included in \cite{appleby2006stochastic,appleby2003stabilisation,caraballo2004stabilisation,wang2017stability}.
\section{Preliminaries}
To begin with, we consider the SDE which is
written in a general form as:
\begin{equation}\label{SDE0}
\mathrm{d}{{\bm{x}}}(t) = F({{\bm{x}}}(t))\mathrm{d}t + G({{\bm{x}}}(t))\mathrm{d}B_t,
~~t\ge0,~{\bm{x}}(0)={\bm{x}}_0\in\mathbb{R}^d,
\end{equation}
where $F:\mathbb{R}^d\to\mathbb{R}^d$ is the drift function, $G:\mathbb{R}^d\to\mathbb{R}^{d\times r}$ is the diffusion function, $\mathbb{R}^{d\times r}$ is the space of $d\times r$ matrices with real entries, and $B_t\in\mathbb{R}^r$ is an $r$-dimensional ($r$-D) Brownian motion. Without loss of generality, we set $F({\bm{0}})={\bm{0}}$ and $G({\bm{0}})={\bm{0}}$ so that ${\bm{x}}(t)\equiv{\bm{0}}$ is a zero solution of Eq.~\eqref{SDE0}.
\textbf{Notations.}~Denote by $\Vert \cdot \Vert$ the $L^2$-norm of a vector in $\mathbb{R}^d$. Denote by $\vert \cdot\vert$ the absolute value of a scalar or the modulus of a complex number. For $A=(a_{ij})$, a matrix of dimension $d\times r$, denote by $\|A\|^2_{\rm F}=\sum_{i=1}^{d}\sum_{j=1}^{r}a_{ij}^2$ the squared Frobenius norm.
\begin{assumption}(Locally Lipschitzian Continuity)\label{assum2}
For every integer $n\ge1$, there is a number $K_n >0$ such that
\begin{equation*}
\|F({\bm{x}})-F({\bm{y}})\|\le K_n\|{\bm{x}}-{\bm{y}}\|,~
\|G({\bm{x}})-G({\bm{y}})\|_{\rm F} \le K_n\|{\bm{x}}-{\bm{y}}\|,
\end{equation*}
for any ${\bm{x}},{\bm{y}} \in \mathbb{R}^d$ with $\|{\bm{x}}\|\vee\|{\bm{y}}\|\le n$.
\end{assumption}
\begin{definition}(Derivative Operator)\label{derivative}
Define the differential operator $\mathcal{L}$ associated with Eq.~\eqref{SDE0} by
\begin{equation*}
\mathcal{L} \triangleq \sum_{i=1}^{d}F_i({\bm{x}})\dfrac{\partial}{\partial x_i}+\dfrac{1}{2}\sum_{i,j=1}^{d}\left[G({\bm{x}})G^{\top}({\bm{x}})\right]_{ij}\dfrac{\partial^2}{\partial x_i\partial x_j}.
\end{equation*}
\end{definition}
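In practice, $\mathcal{L}V$ can be evaluated by automatic differentiation rather than by hand. The PyTorch sketch below assumes \texttt{V} is a scalar-valued function of a single state ${\bm{x}}$, while \texttt{F} and \texttt{G} return the drift vector and the diffusion matrix, respectively.
\begin{verbatim}
# Evaluate the operator L applied to V at a point x (sketch).
import torch

def LV(V, F, G, x):
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(V(x), x, create_graph=True)[0]
    hess = torch.autograd.functional.hessian(V, x)
    GG = G(x) @ G(x).T                       # G G^T, shape (d, d)
    return grad @ F(x) + 0.5 * torch.trace(GG @ hess)
\end{verbatim}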
\begin{definition}
(Exponential Stability) The zero solution of Eq.~\eqref{SDE0}
is said to be \textit{almost surely exponentially stable}, if
$\limsup_{t\to\infty}\frac{1}{t}\log\|{\bm{x}}(t;{{\bm{x}}}_0)\|<0 \ a.s. $ for all ${{\bm{x}}}_0\in\mathbb{R}^d$. Here and throughout, $a.s.$ stands for the abbreviation of
almost surely.
\end{definition}
Then, the following Lyapunov stability theorem will be used in the establishment of our main results.
\begin{thm}\label{thm1}
\cite{mao2007stochastic}
Suppose that Assumption \ref{assum2} holds. Suppose further that there exist a function $V\in C^{2}(\mathbb{R}^d;\mathbb{R}_{+})$
with $V({\bm{0}})=0$, constants $p>0$, $c_1>0$, $c_2\in\mathbb{R}$ and $c_3\ge 0$, such that
$\rm{(i)}$ $c_1\|{\bm{x}}\|^p\le V({{\bm{x}}})$,
$\rm{(ii)}$ $\mathcal{L}V({{\bm{x}}})\le c_2V({{\bm{x}}})$, and
$\rm{(iii)}$ $\|\nabla V({\bm{x}})^{\top}G({\bm{x}})\|^{2}\ge c_3V^2({\bm{x}})$
for all $\bm{x}\neq0$ and $t\ge 0$.
Then,
\begin{equation}\label{exponential stable}
\limsup_{t\to\infty}\dfrac{1}{t}\log\|{\bm{x}}(t;{{\bm{x}}}_0)\|\le -\dfrac{c_3-2c_2}{2p}~~a.s.
\end{equation}
In particular, if $c_3-2c_2>0$, the zero solution of Eq.~\eqref{SDE0}
is exponentially stable almost surely.
\end{thm}
The following asymptotic theorem also will be used in the establishment of our main results.
\begin{thm}
\label{thm2}
\cite{appleby2008stabilization}
Suppose that Assumption \ref{assum2} holds.
Suppose further $\min_{\|{\bm{x}}\|=M}\|{{\bm{x}}}^{\top}G({\bm{x}})\|>0$ for any $M>0$ and there exists a number $\alpha\in(0,1)$ such that
\begin{equation}\label{asymptotic cond}
\begin{aligned}
\|{\bm{x}}\|^2(2\langle {\bm{x}},F({\bm{x}})\rangle+\|G({\bm{x}})\|_{\rm F}^2 )-(2-\alpha)\|{{\bm{x}}}^{\top}G({\bm{x}})\|^2\le 0,~~\forall {\bm{x}}\in\mathbb{R}^d.
\end{aligned}
\end{equation} Then, the unique and global solution of Eq.~\eqref{SDE0} satisfies
$\lim_{t\to\infty} {\bm{x}}(t;{{\bm{x}}}_0) = {\bm{0}}~ a.s.$, and we refer to this property as asymptotic attractiveness.
\end{thm}
\section{Designing Stable Stochastic Controller}
Here, we assume that the zero solution of the following SDE:
\begin{equation}\label{SDE}
\mathrm{d} {\bm{x}}=f({{\bm{x}}}) \mathrm{d} t+g({{\bm{x}}}) \mathrm{d} B_t
\end{equation}
is unstable. Note that, for any nontrivial targeted equilibrium ${\bm{x}}^*$, the direct transformation ${\bm{y}}={\bm{x}}-{\bm{x}}^*$ makes the zero solution the equilibrium of the transformed system. Thus, our mission is to stabilize the zero solution only. As such, we are to use NNs to design the control ${\bm{u}}: \mathbb{R}^d\to\mathbb{R}^{d\times r}$ with ${\bm{u}}({\bm{0}})={\bm{0}}$ and apply it to Eq.~\eqref{SDE} as
\begin{equation}\label{SDEed}
\mathrm{d} {\bm{x}}=f({{\bm{x}}}) \mathrm{d} t+[g({{\bm{x}}}) + {\bm{u}}({\bm{x}})]\mathrm{d} B_t.
\end{equation}
Since ${\bm{u}}$ is integrated with $\mathrm{d} B_t$ in the controlled system~\eqref{SDEed}, we regard it as a stochastic controller. In what follows, two frameworks of neural stochastic control, the exponential stabilizer (ES) and the asymptotic stabilizer (AS), are articulated, respectively, in Sections \ref{ES} and \ref{sec_AS}. All these control policies are intuitively depicted in Figure~\ref{sketch0}.
\subsection{Exponential Stabilizer}\label{ES}
Once we find a Lyapunov function $V$ and a neural controller ${\bm{u}}$ such that the controlled system~\eqref{SDEed} meets all the conditions assumed in Theorem~\ref{thm1}, the equilibrium $\bm{0}$ is exponentially stabilized. To this end, we first provide two different types of functions for constructing $V$, which can be complementary in applications. Then, we design the explicit forms of the control function and the loss function.
\textbf{ICNN $V$ Function.}
We use the ICNN \citep{icnn} to represent the candidate Lyapunov function $V$. This guarantees $V$ as a convex function with respect to the input ${\bm{x}}$. In order to further make $V$ as a true Lyapunov function, we use the following form:
\begin{equation}\label{v1}
\begin{aligned}
{{\bm{z}}}_1 &= \sigma_0(W_0{{\bm{x}}}+b_0),~~
{{\bm{z}}}_{i+1} = \sigma_i(U_i{{\bm{z}}}_i+W_i{{\bm{x}}}+b_i),\\
h({{\bm{x}}}) &\equiv {{\bm{z}}}_k, ~~ i=1,\cdots,k-1, \\
V({{\bm{x}}}) &= \sigma_{k+1}(h(\mathcal{F}({{\bm{x}}}))-h(\mathcal{F}({\bm{0}})))+\varepsilon\Vert {{\bm{x}}}\Vert^2,\\
\end{aligned}
\end{equation}
as introduced in \cite{deepstable}; here we write $h$ for the ICNN output to avoid a clash with the diffusion function $g$ in Eq.~\eqref{SDE}.
Here, $W_i, b_i$ are real-valued weights, $U_i$ are positive weights, $\sigma_i$ are convex, monotonically non-decreasing activation functions in the $i$-th layer, $\varepsilon$ is a small positive constant, and $\mathcal{F}$ is a continuously
differentiable and invertible function. In our framework, we require $V\in C^2(\mathbb{R}^d; \mathbb{R}_{+})$ according to Definition \ref{derivative}; however, each activation function $\sigma_i\equiv\sigma$ in \cite{deepstable} is $C^1$ only. Thus, we modify the original function as:
\begin{wrapfigure}[4]{r}{6cm}
\centering
\includegraphics[width=0.41\textwidth]{smooth_relu_05.pdf}
\caption{\footnotesize The smoothed \textbf{ReLU} $\sigma(\cdot)$.}
\label{smooth relu}
\end{wrapfigure}
\begin{equation}
\label{Eq_Smoth_Relu}
\sigma(x) = \left\{
\begin{array}{ll}
0, & \text{if}\ x\leq 0,\\
(2dx^3-x^4)/{2d^3}, & \text{if}\ 0 < x\le d, \\
x-d/2, & \text{otherwise}
\end{array}
\right.
\end{equation}
which not only approximates the typical \textbf{ReLU} activation but is also continuously differentiable up to the second order (see Figure~\ref{smooth relu}); here, $d>0$ denotes the smoothing width, not the state dimension.
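A minimal PyTorch sketch of this construction is given below: a two-hidden-layer instance of \eqref{v1} with $\mathcal{F}$ taken as the identity, positive weights $U_i$ enforced by clamping, and the smoothed \textbf{ReLU} of \eqref{Eq_Smoth_Relu} as activation; the widths and the smoothing parameter are illustrative choices.
\begin{verbatim}
# ICNN-based candidate Lyapunov function (sketch).
import torch
import torch.nn as nn

def smooth_relu(x, d_s=0.1):       # smoothed ReLU with width d_s
    y = torch.clamp(x, 0.0, d_s)
    return torch.where(x > d_s, x - d_s / 2,
                       (2 * d_s * y**3 - y**4) / (2 * d_s**3))

class ICNN(nn.Module):
    def __init__(self, dim, width=64, eps=1e-3):
        super().__init__()
        self.W0, self.W1 = nn.Linear(dim, width), nn.Linear(dim, width)
        self.U1 = nn.Parameter(0.1 * torch.rand(width, width))
        self.Wout = nn.Linear(dim, 1)
        self.Uout = nn.Parameter(0.1 * torch.rand(1, width))
        self.eps = eps

    def h(self, x):                # ICNN output, convex in x
        z1 = smooth_relu(self.W0(x))
        z2 = smooth_relu(z1 @ self.U1.clamp(min=0).T + self.W1(x))
        return z2 @ self.Uout.clamp(min=0).T + self.Wout(x)

    def forward(self, x):          # V(x), positive definite by design
        v = smooth_relu(self.h(x) - self.h(torch.zeros_like(x)))
        return v.squeeze(-1) + self.eps * (x**2).sum(-1)
\end{verbatim}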
\textbf{Quadratic $V$ Function.}
For any ${\bm{x}}\in\mathbb{R}^d$, let $V_{{\bm{\theta}}}({\bm{x}})\in\mathbb{R}^{m\times d}$ be the matrix-valued output of a multilayered feedforward NN with input ${\bm{x}}$ and ${\rm tanh}(\cdot)$ as the activation functions, where ${\bm{\theta}}$ is the parameter vector. To meet the smoothness condition in Definition \ref{derivative}, we cannot use the \textbf{ReLU}, a non-smooth function,
as the activation function. Hence, we use the candidate Lyapunov function as:
\begin{equation}\label{v2}
V({\bm{x}}) = {{\bm{x}}}^\top
\left[\varepsilon I+ V_{{\bm{\theta}}}({\bm{x}})^{\top}V_{{\bm{\theta}}}({\bm{x}})\right]{\bm{x}},
\end{equation}
which was introduced in \cite{gallieri2019safe}. Here, $\varepsilon$ is a small positive constant.
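A corresponding sketch of \eqref{v2}, under our reading that $V_{{\bm{\theta}}}({\bm{x}})$ is an $m\times d$ matrix so that $\varepsilon I+ V_{{\bm{\theta}}}^{\top}V_{{\bm{\theta}}}$ is positive definite by construction; $m$ and the width are illustrative.
\begin{verbatim}
# Quadratic-form candidate Lyapunov function (sketch).
import torch
import torch.nn as nn

class QuadraticV(nn.Module):
    def __init__(self, d, m=8, width=32, eps=1e-3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, width), nn.Tanh(),
                                 nn.Linear(width, m * d))
        self.d, self.m, self.eps = d, m, eps

    def forward(self, x):                       # x: (batch, d)
        M = self.net(x).view(-1, self.m, self.d)
        P = self.eps * torch.eye(self.d) + M.transpose(1, 2) @ M
        return torch.einsum('bi,bij,bj->b', x, P, x)
\end{verbatim}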
\textbf{Control Function.}
We introduce a multi-layer feedforward NN (FNN), denoted by $\textbf{NN}({{\bm{x}}})\in\mathbb{R}^r$, to design the controller ${\bm{u}}$. Since we require ${{\bm{u}}}({\bm{0}})={\bm{0}}$, we set ${{\bm{u}}}({{\bm{x}}})\triangleq\textbf{NN}({{\bm{x}}})-\textbf{NN}({\bm{0}})$ or ${{\bm{u}}}({{\bm{x}}})\triangleq{\rm{diag}}({{\bm{x}}})\textbf{NN}({{\bm{x}}})$ with $r=d$. Here, ${\rm{diag}}({{\bm{x}}})$ is a diagonal matrix with its $i$-th diagonal element as ${x}_i$.
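A sketch of the first variant, with ${\bm{u}}({\bm{0}})={\bm{0}}$ enforced by subtracting the network output at the origin (sizes are illustrative):
\begin{verbatim}
# Neural controller u(x) = NN(x) - NN(0) (sketch).
import torch
import torch.nn as nn

class Controller(nn.Module):
    def __init__(self, d, r, width=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, width), nn.Tanh(),
                                 nn.Linear(width, r))

    def forward(self, x):
        return self.net(x) - self.net(torch.zeros_like(x))
\end{verbatim}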
\begin{remark}
As reported in \cite{chang2020neural}, a single-layer NN without the bias constants in its arguments, which degenerates to linear control, could sufficiently take effect in the stabilization of many deterministic systems. However, this is NOT always the case for achieving the stabilization of highly nonlinear systems or even SDEs. The following proposition with Figure \ref{fig3} provides an example, where neither the classic linear controller nor the stochastic linear controller can stabilize the unstable equilibrium in a particular SDE. The proof of this proposition is included in Appendix \ref{proof1}.
\end{remark}
\begin{wrapfigure}{r}{7cm}
\centering
\includegraphics[width=0.5\textwidth]{Prop_3.1_03.pdf}
\caption{\footnotesize (a) $u(x)=kx$, (b) $u(x)=2x^2$.
}
\label{fig3}
\vskip -0.15in
\end{wrapfigure}
\begin{prop}\label{prop1}
Consider the following $1$-D SDE:
\begin{equation}
\label{eq in prop1}
\mathrm{d}x(t)=x(t)\log \vert x(t)\vert \mathrm{d}t + u(x(t))\mathrm{d}B_t,\
\end{equation}
with a zero solution $x^{*}=0$. Then, for $u(x)=kx$ with any $k$ and ${x}_0\neq0$, $x^{*}=0$ is neither exponentially stable nor of globally asymptotic attractiveness almost surely. For $u(x)=2x^2$, $x^{*}=0$ is of globally asymptotic attractiveness. For $u(x)\equiv 0$, the deterministic system cannot be stabilized by any classic linear controller.
\end{prop}
\textbf{Loss Function.}
When the learning procedure updates the parameters in the NNs such that the constructed $V$ and ${\bm{u}}$ with the coefficient functions, $f$ and $g_{{\bm{u}}}\triangleq g+\bm{u}$, in the controlled system \eqref{SDEed} meet all the conditions assumed in Theorem \ref{thm1},
the exponential stability of the controlled system is assured. Thus, we need a suitable loss function that evaluates the extent to which those conditions are satisfied. First, note that, by construction, $V({{\bm{x}}})\ge \varepsilon \|{\bm{x}}\|^2$ for all ${\bm{x}}\in\mathbb{R}^d$. Thus, Conditions \rm{(ii)}-\rm{(iii)} together with $c_3-2c_2>0$ in Theorem~\ref{thm1} equivalently become
\begin{equation}\label{strict loss}
\inf_{{\bm{x}}\neq0}\dfrac{(\nabla V({{\bm{x}}})^\top g_{{\bm{u}}}({\bm{x}}))^2}{V({{\bm{x}}})^2}\ge b\cdot\sup_{{\bm{x}}\neq0}
\dfrac{\mathcal{L}V({{\bm{x}}})}{V({{\bm{x}}})},\ \ b>2.
\end{equation}
These conditions further imply that
\begin{equation}
\dfrac{(\nabla V({{\bm{x}}})^\top g_{{\bm{u}}}({\bm{x}}))^2}{V({{\bm{x}}})^2}-b\cdot\dfrac{\mathcal{L}V({{\bm{x}}})}{V({{\bm{x}}})}\ge0,\ \ b>2,\ {\bm{x}}\neq0. \label{loss cond1}
\end{equation}
With these reduced conditions, we design the ES loss function for the controlled system \eqref{SDEed} as follows.
\begin{definition}\label{lya risk}
(\rm{ES loss}) Consider a candidate Lyapunov function $V$ and a controller ${\bm{u}}$ for the controlled system \eqref{SDEed}.
Then, the ES loss is defined as
\begin{equation*}
\begin{aligned}
L_{\mu,b,\varepsilon}({\bm{\theta}},{\bm{u}})=
\mathbb{E}_{{\bm{x}}\sim\mu}\left[\max\left(0, \dfrac{b\cdot \mathcal{L}V({{\bm{x}}})}{V({{\bm{x}}})}-\dfrac{(\nabla V({{\bm{x}}})^\top g_{{\bm{u}}}({\bm{x}}))^2}{V({{\bm{x}}})^2}\right)\right],
\end{aligned}
\end{equation*}
where the state variable ${\bm{x}}$ obeys the distribution $\mu$. In practice, we consider the following empirical loss function:
\begin{equation}
\begin{aligned}
L_{N,b,\varepsilon}({\bm{\theta}},{\bm{u}})
\ \ =\dfrac{1}{N}\sum_{i=1}^{N}\max\left(0,\dfrac{b\cdot \mathcal{L}V({{\bm{x}}}_i)}{V({{\bm{x}}}_i)}-\dfrac{(\nabla V({{\bm{x}}}_i)^\top
g_{{\bm{u}}}({{\bm{x}}}_i))^2}{V({{\bm{x}}}_i)^2}\right), \label{loss1}
\end{aligned}
\end{equation}
where $\{{{\bm{x}}}_i\}_{i=1}^{N}$ are sampled from the distribution $\mu=\mu(\Omega)$ and $\Omega$ is some closed domain in
$\mathbb{R}^d$.
\end{definition}
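A minimal sketch of \eqref{loss1} for a scalar SDE is given below, where the derivatives of $V$ are obtained by automatic differentiation and $\mathcal{L}V = V'f + \tfrac{1}{2}g_{{\bm{u}}}^2 V''$ is the one-dimensional form of the generator; the callables \texttt{f}, \texttt{g}, \texttt{u} and the sample tensor are assumptions of the sketch, and $V$ is assumed to act elementwise on the samples:
\begin{verbatim}
import torch

def es_loss_1d(V, f, g, u, x, b=2.5):
    # Empirical ES loss for dx = f dt + (g + u) dB, with b > 2.
    # x: tensor of N nonzero samples drawn from mu.
    x = x.detach().requires_grad_(True)
    v = V(x)
    dv, = torch.autograd.grad(v.sum(), x, create_graph=True)
    d2v, = torch.autograd.grad(dv.sum(), x, create_graph=True)
    gu = g(x) + u(x)                       # g_u = g + u
    LV = dv * f(x) + 0.5 * gu**2 * d2v     # generator applied to V
    term = b * LV / v - (dv * gu)**2 / v**2
    return torch.clamp(term, min=0.0).mean()
\end{verbatim}
The loss vanishes exactly when the relaxed condition \eqref{loss cond1} holds at every sampled state.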
For convenience, we summarize the developed framework in Algorithm~\ref{algo1}. Here, $b$ is a hyper-parameter that can be adjusted as required by solving a specific problem.
\begin{remark}
In {Section~\ref{experiments}}, we show numerically that the relaxed conditions in \eqref{loss cond1} are sufficiently effective for designing the ES loss; it is not necessary to design the loss function using the stricter conditions in \eqref{strict loss}.
\end{remark}
Now, for controlling nonlinear ODEs or SDEs, we design the ES according to Algorithm~\ref{algo1}. The ES framework can stabilize not only the unstable equilibria (constant states) of a given system but also its unstable oscillators, e.g., limit cycles, because the solution corresponding to an oscillator can be regarded as a zero solution of {the controlled system} after appropriate transformations are implemented.
Another point deserves attention. In the construction of $V$ in \eqref{v1}, the $L^2$-regularization term $\varepsilon\Vert {\bm{x}}\Vert^2$ guarantees the positive definiteness of $V$. However, in applications of the Lyapunov stability theory, $\Vert {\bm{x}}\Vert^2$ is not always a suitable candidate {for} the Lyapunov function. This may restrict the generality of our framework and calls for adjustments. The following example illustrates this point.
\begin{ex}\label{eq in ex1}
Consider a $2${-D} SDE as follows:
\begin{equation*}
\left\{
\begin{aligned}
&\mathrm{d}x_1(t)=x_2(t)\mathrm{d}t,\\
&\mathrm{d}x_2(t)=[-2x_1(t)-x_2(t)] \mathrm{d}t + x_1(t)\mathrm{d}B_t.
\end{aligned}
\right.
\end{equation*}
In Appendix~\ref{proof ex1}, the zero solution of this system is validated to be exponentially stable almost surely; however, $k\Vert{\bm{x}}\Vert^2$ for any $k\in\mathbb{R}$ cannot be a useful auxiliary function to identify the exact stability of the zero solution.
\end{ex}
Admittedly, the current framework takes a relatively long time to train and construct the neural Lyapunov function.
In the next subsection, we thus establish an alternative control framework that reduces the training time.
\subsection{Asymptotic Stabilizer}\label{sec_AS}
Here, in light of Theorem~\ref{thm2}, we establish the second framework, the AS, for stabilizing the unstable equilibrium of system~\eqref{SDEed}. This framework only makes the equilibrium asymptotically attractive almost surely. Its control function is designed in the same way as in the ES framework, whereas its loss function is designed differently.
\begin{definition}(\rm{AS loss})
Using the notation of Definition \ref{lya risk}, the loss function for the controlled system (\ref{SDEed}) with the controller ${{\bm{u}}}$ is defined as:
\begin{equation*}
\begin{aligned}
L_{\mu,\alpha}({{\bm{u}}}) =
\mathbb{E}_{{{\bm{x}}}\sim\mu}
\big[\max\big(0,(\alpha-2)\|{{\bm{x}}}^{\top}g_{{\bm{u}}}({{\bm{x}}})\|^2 +\|{{\bm{x}}}\|^2(\langle {{\bm{x}}},f({{\bm{x}}})\rangle+
\|g_{{\bm{u}}}({{\bm{x}}})\|_{\rm F}^2)\big)\big].
\end{aligned}
\end{equation*}
Akin to Definition \ref{lya risk}, we set the empirical loss function as:
\begin{equation}\label{asymploss}
\begin{aligned}
L_{N,\alpha}({{\bm{u}}}) = \dfrac{1}{N}\sum_{i=1}^{N}\big[\max\big(0,(\alpha-2)\|{{\bm{x}}}_i^{\top}g_{{\bm{u}}}({{\bm{x}}}_i)\|^2+ \|{{\bm{x}}}_i\|^2(\langle {{\bm{x}}}_i,f({{\bm{x}}}_i)\rangle+\|g_{{\bm{u}}}({{\bm{x}}}_i)\|_{\rm F}^2)\big)\big].
\end{aligned}
\end{equation}
\end{definition}
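Because the AS loss involves no Lyapunov network, it can be evaluated directly from $f$ and $g_{{\bm{u}}}$. A vectorized PyTorch sketch follows (the tensor shapes are assumptions for illustration):
\begin{verbatim}
import torch

def as_loss(f, g_u, X, alpha=1.5):
    # Empirical AS loss.
    # X: (N, d) samples; f(X): (N, d); g_u(X): (N, d, r).
    Gu = g_u(X)
    xg = torch.einsum('nd,ndr->nr', X, Gu)   # x^T g_u(x)
    term = ((alpha - 2.0) * (xg * xg).sum(-1)
            + (X * X).sum(-1) * ((X * f(X)).sum(-1)
                                 + (Gu * Gu).sum((-2, -1))))
    return torch.clamp(term, min=0.0).mean()
\end{verbatim}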
Here, $\alpha$ is an adjustable parameter, which is related to the convergence time and the energy cost of using the controller ${{\bm{u}}}$. We show in Appendix \ref{section hyperparameter} the influence of different choices of $\alpha$. For convenience, we summarize the AS framework in Algorithm~\ref{algo2}.
\section{Convergence Time and Energy Cost}
{The convergence time and the energy cost} are crucial factors for measuring the quality of a controller \citep{yan2012controlling, li2017fundamental, tradeoff}. In this section, we provide a {comparative} study between the traditional stochastic linear control and the ES/AS, the neural stochastic controls articulated above.
To this end, we first present a theorem on the estimations of the convergence time and the energy cost for the stochastic linear control on a general SDE.
\begin{thm}\label{thm3}
Consider the SDE with a stochastic linear controller as:
\begin{equation}\label{energy}
\mathrm{d}{{\bm{x}}} = f({{\bm{x}}}){\rm d}t+u({{\bm{x}}})\mathrm{d}B_t,~~{{\bm{x}}}(0)={{\bm{x}}}_0\in\mathbb{R}^d,
\end{equation}
where $\langle {{\bm{x}}},f({{\bm{x}}})\rangle\le L\Vert {{\bm{x}}}\Vert^2$ and $u({{\bm{x}}})=k{\bm{x}}$ with $\vert k\vert >\sqrt{2L}$. Then, for $\epsilon<\Vert {{\bm{x}}}_0\Vert$, we have
\begin{equation*}
\left\{
\begin{aligned}
&\mathbb{E}[\tau_{\epsilon}]\le T_{\epsilon}= \frac{2\log\left({\Vert {{\bm{x}}}_0\Vert}/{\epsilon}\right)}{k^2-2L},\\
&\mathcal{E}(\tau_{\epsilon},T_{\epsilon})\le \frac{k^2\Vert {{\bm{x}}}_0\Vert^2}{k^2+2L}\left[\exp\left(\dfrac{2(k^2+2L)\log\left({\Vert{{\bm{x}}}_0\Vert}/{\epsilon}\right)}{k^2-2L}\right)-1\right],
\end{aligned}
\right.
\end{equation*}
where, for a sufficiently small
$\epsilon>0$, we denote the stopping time by
$\tau_{\epsilon}\triangleq \inf\{t>0:\Vert {{\bm{x}}}(t)\Vert=\epsilon\}$ and denote the energy cost by
\begin{equation*}
\begin{aligned}
\mathcal{E}(\tau_{\epsilon},T_{\epsilon})\triangleq\mathbb{E}\left[\int_{0}^{\tau_{\epsilon}{\wedge T_{\epsilon}}}\|{{\bm{u}}}\|^2\mathrm{d}t\right]=\mathbb{E}\left[\int_{0}^{T_{\epsilon}}\|{{\bm{u}}}\|^2\mathbbm{1}_{\{t<\tau_{\epsilon}\}}\mathrm{d}t\right].
\end{aligned}
\end{equation*}
\end{thm}
The proof of this theorem is provided in Appendix \ref{proof2}.
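The bounds of Theorem~\ref{thm3} are elementary to evaluate; a short sketch with illustrative numbers (not taken from the paper) shows how they vary with the gain $k$:
\begin{verbatim}
import math

def linear_control_bounds(x0, eps, k, L):
    # Upper bounds of Theorem 3 for u(x) = k x; requires k^2 > 2L.
    assert k * k > 2.0 * L and eps < x0
    T = 2.0 * math.log(x0 / eps) / (k * k - 2.0 * L)
    E = (k * k * x0**2 / (k * k + 2.0 * L)) * (
        math.exp((k * k + 2.0 * L) * T) - 1.0)
    return T, E

print(linear_control_bounds(1.0, 0.1, 3.0, 1.0))  # k = 3
print(linear_control_bounds(1.0, 0.1, 6.0, 1.0))  # k = 6: smaller T_eps
                                                  # and smaller E bound
\end{verbatim}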
{
We further consider the case of an NN controller ${\bm{u}}({\bm{x}})$. In general, ${\bm{u}}({\bm{x}})$ is Lipschitz continuous with Lipschitz constant ${k_{{\bm{u}}}}$ under a suitable activation function, such as the \textbf{ReLU}, in the NN \citep{fazlyab2019efficient,pauli2021training}. We then have the following upper bounds on the convergence time and the energy cost for the ES and the AS, whose proofs are provided in Appendices~\ref{proof4.2} and \ref{proof4.3}.
\begin{thm}\label{thm4}(\rm{Estimation for ES})
For the ES stabilizer ${\bm{u}}({\bm{x}})$ in \eqref{energy} with $\langle {\bm{x}},f({\bm{x}})\rangle\le L\Vert{\bm{x}}\Vert^2$ and $\epsilon<\Vert{\bm{x}}_0\Vert$, under the same notations and conditions as in Theorem \ref{thm1} with $c_3-2c_2>0$, we have
\begin{equation*}
\left\{
\begin{aligned}
&\mathbb{E}[\tau_{\epsilon}]\le T_{\epsilon}= \frac{2\log\left({V({\bm{x}}_0)}/{(c_1\epsilon^p)}\right)}{c_3-2c_2},\\
&\mathcal{E}(\tau_{\epsilon},T_{\epsilon})\le \frac{{k_{{\bm{u}}}}^2\Vert {{\bm{x}}}_0\Vert^2}{{k_{{\bm{u}}}}^2+2L}\left[\exp\left(\dfrac{2({k_{{\bm{u}}}}^2+2L)\log\left({V({\bm{x}}_0)}/{(c_1\epsilon^p)}\right)}{c_3-2c_2}\right)-1\right].
\end{aligned}
\right.
\end{equation*}
\end{thm}
\begin{thm}\label{thm5}(\rm{Estimation for AS})
For \eqref{energy} with $\langle {\bm{x}},f({\bm{x}})\rangle\le L\Vert{\bm{x}}\Vert^2$ and $\epsilon<\Vert{\bm{x}}_0\Vert$, under the same notations and conditions as in Theorem \ref{thm2}, if the left-hand side of \eqref{asymptotic cond} further satisfies $\max_{\Vert{\bm{x}}\Vert\ge\epsilon}\Vert{\bm{x}}\Vert^{\alpha-4}(\|{\bm{x}}\|^2(2\langle {\bm{x}},f({\bm{x}})\rangle+\|{\bm{u}}({\bm{x}})\|_{\rm F}^2 )-(2-\alpha)\|{{\bm{x}}}^{\top}{\bm{u}}({\bm{x}})\|^2)=-\delta_\epsilon<0$, then, for an NN controller ${\bm{u}}({\bm{x}})$ with Lipschitz constant ${k_{{\bm{u}}}}$, we have
\begin{equation*}
\left\{
\begin{aligned}
&\mathbb{E}[\tau_{\epsilon}]\le T_{\epsilon}= \frac{2\left(\Vert {\bm{x}}_0\Vert^\alpha-\epsilon^\alpha\right)}{\delta_\epsilon\cdot\alpha},\\
&\mathcal{E}(\tau_{\epsilon},T_{\epsilon})\le \frac{{k_{{\bm{u}}}}^2\Vert {{\bm{x}}}_0\Vert^2}{{k_{{\bm{u}}}}^2+2L}\left[\exp\left(\dfrac{2({k_{{\bm{u}}}}^2+2L)\left(\Vert {\bm{x}}_0\Vert^\alpha-\epsilon^\alpha\right)}{\delta_\epsilon\cdot\alpha}\right)-1\right].
\end{aligned}
\right.
\end{equation*}
\end{thm}
Based on these theoretical results for the ES and the AS, we can further analyze how the hyperparameters $b$ and $\alpha$ and the network structure affect the convergence time and the energy cost in the control process. Some interesting phenomena arise; for instance, whether $T_{\epsilon}$ for the AS is monotone in $\alpha$ depends on the relative magnitudes of $\Vert{\bm{x}}_0\Vert$ and $\epsilon$, which inspires us to select a suitable $\alpha$ for the specific problem at hand. We leave more discussions to Appendix~\ref{analysis for new theorem}.
}
Now, we numerically compare the performances of the linear controller $u(x)=kx$ and the AS in terms of the convergence time and the energy cost for system \eqref{energy} with specific configurations (see Figure~\ref{log_energy}). We numerically find that
$u(x)=kx$ can efficiently stabilize the equilibrium for $k>k_{\rm c}=5.6$.
Without loss of generality, we fix $k=6.0$ and compare the corresponding performances. As clearly shown in Figure~\ref{log_energy}, the AS outperforms $u(x)=kx$ in both respects, convergence speed and energy cost.
In the simulations, the energy cost $\mathcal{E}(\tau_{\epsilon},T_{\epsilon})$ defined above is computed in a finite-time duration as
$\mathcal{E}(\tau_{\epsilon},T\wedge T_{\epsilon})$,
where $T<\infty$ is selected to be appropriately large. We leave more results of the comparison study in Appendix \ref{appendix energy}.
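For reproducibility, a minimal Euler-Maruyama sketch of this comparison is given below; the drift $f(x)=x\log(1+x)$ follows the caption of Figure~\ref{log_energy}, and we take $\log(1+|x|)$ to keep the drift defined for negative excursions (an assumption of the sketch, as are the step size, horizon, and seed):
\begin{verbatim}
import numpy as np

def simulate(u, x0=1.0, T=1.0, dt=1e-4, eps=0.1, n_paths=20, seed=0):
    # Euler-Maruyama for dx = x*log(1+|x|) dt + u(x) dB. Returns the
    # mean stopped time and the mean empirical energy cost
    # int_0^{tau ^ T} u(x)^2 dt over the sampled paths.
    rng = np.random.default_rng(seed)
    times, energies = [], []
    for _ in range(n_paths):
        x, t, E = x0, 0.0, 0.0
        while t < T and abs(x) > eps:
            ux = u(x)
            E += ux * ux * dt
            x += x * np.log(1.0 + abs(x)) * dt \
                 + ux * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
        energies.append(E)
    return np.mean(times), np.mean(energies)

print(simulate(lambda x: 6.0 * x))   # linear control with k = 6
\end{verbatim}
A trained AS controller can be passed in place of the lambda for a direct comparison.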
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[width=1.0\textwidth]{Figure_energy_04.pdf}}
\caption{The performances of system~\eqref{energy} with specific configurations: $f(x)=x\log(1+x)$. (a) $u(x)=kx,k=0.2\cdot j,j=1,\cdots,50$,
plot $\log(1+x(1))$ against $k$.
(b) Linear controller with $k=6.0,~\mathcal{E}(\tau_{0.1},1)=38,388$. (c) AS control with $\mathcal{E}(\tau_{0.1},1)=1438$.
}
\label{log_energy}
\end{center}
\vskip -0.4in
\end{figure}
\section{Experiments}\label{experiments}
In this section, we demonstrate the efficacy of the above-articulated frameworks of stochastic neural control, the ES and the AS, on several representative physical systems. We also compare these two frameworks, highlighting their advantages and weaknesses. The detailed configurations for these experiments are included in Appendix~\ref{experiment detail}. Additional illustrative experiments are included in Appendix~\ref{more experiment}.
\subsection{Harmonic Linear Oscillator}
\begin{wrapfigure}[14]{r}{7cm}
\centering
\includegraphics[width=0.47\textwidth]{Harmonic_SDEs_03.pdf}
\caption{\footnotesize The solid lines are obtained through averaging the $20$ sampled trajectories, while the shaded areas stand for the variance
regions.}
\label{harmonic}
\vskip -0.2in
\end{wrapfigure}
First, consider the harmonic linear oscillator $\ddot{y}+2\beta\dot{y}+w^2y=0$, where $w$ is the natural frequency and $\beta>0$ is the damping coefficient, which measures the strength of the friction acting on the vibrator \citep{dekker1981classical}. Although this system is exponentially stable, the stochastically perturbed system $\ddot{y}+(2\beta+\xi_2)\dot{y}+(w^2+\xi_1)y=0$ becomes unstable even if $\mathbb{E}[\xi_1(t)]=\mathbb{E}[\xi_2(t)]=0$ \citep{arnold1983stabilization}.
Now, we apply the nonlinear ES(+ICNN), the linear ES(+Quadratic), and the nonlinear AS, respectively, to stabilize the unstable dynamics \eqref{eq in harmonic} with $w^2=1$, $\beta =0.5$, $\zeta_1=-3$, and $\zeta_2=2.15$; the results are shown in Figure~\ref{harmonic}. Indeed, we find that the two nonlinear stochastic neural controls are more robust than the linear control, and that the ES(+ICNN), rather than the AS, makes the controlled system more stable.
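The exact stochastic formulation \eqref{eq in harmonic} is given in the appendix; a sketch of one plausible reading, in which each $\xi_i$ is white noise of intensity $\zeta_i$ acting through a single Brownian motion (an assumption of the sketch), reads:
\begin{verbatim}
import numpy as np

def harmonic_paths(beta=0.5, w2=1.0, z1=-3.0, z2=2.15,
                   y0=(1.0, 0.0), T=10.0, dt=1e-3, seed=0):
    # Euler-Maruyama for dy1 = y2 dt,
    # dy2 = (-w2*y1 - 2*beta*y2) dt - (z1*y1 + z2*y2) dB.
    rng = np.random.default_rng(seed)
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(int(T / dt)):
        dB = np.sqrt(dt) * rng.standard_normal()
        drift = np.array([y[1], -w2 * y[0] - 2.0 * beta * y[1]])
        diff = np.array([0.0, -(z1 * y[0] + z2 * y[1])])
        y = y + drift * dt + diff * dB
        traj.append(y.copy())
    return np.array(traj)
\end{verbatim}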
\begin{wraptable}[7]{r}{0.5\linewidth}
\vskip -0.15in
\caption{Performance on Harmonic Linear Oscillator}
\label{table1}
\centering
\scalebox{0.78}{
\begin{tabular}{lcccc}
\toprule
\multicolumn{1}{l}{} & Tt & Ni & Di & Ct \\
\midrule
\multicolumn{1}{l}{ES(+ICNN)} &$276.385$s &$121$ &$\mathbf{1e}$-$\mathbf{9}$ &$\mathbf{0.459}$ \\
\multicolumn{1}{l}{ES(+Quadratic)} &$78.071$s &$\mathbf{107}$ &$0.049$ &$3.683$ \\
\multicolumn{1}{l}{AS} &$\mathbf{4.839}$s &$184$ &$0.027$ &$2.027$\\
\bottomrule
\end{tabular}
}
\end{wraptable}
In Table \ref{table1}, we report the training time (Tt) required for the loss function to converge to $0$, the number of iterations (Ni), the distance (Di) between the trajectory and the targeted equilibrium at time $T=4$, and the convergence time (Ct), defined as the time at which the {distance} between the trajectory and the targeted equilibrium falls below $0.05$. The results are obtained by averaging the corresponding quantities over $20$ randomly-sampled trajectories; the detailed training configurations are given in Appendix~\ref{section harmonic}.
\begin{wrapfigure}{r}{6cm}
\vskip -0.18in
\centering
\includegraphics[width=0.44\textwidth]{comparison2.pdf}
\vskip -0.05in
\caption{\footnotesize {Comparison with existing methods.}}
\label{comparison}
\vskip -0.2in
\end{wrapfigure}
%
\textcolor{black}
{
We further provide a comparison of our newly proposed ES(+ICNN) with HDSCLF \citep{sarkar2020high}, BALSA \citep{fan2020bayesian}, and the classic LQR controller in controlling the harmonic linear oscillator. Both HDSCLF and BALSA are based on quadratic programming (QP), and they seek the control policy dynamically for each state in the control process. By contrast, our learned control policy is used directly in the control process; hence, our method is more efficient in practical control problems. The results are shown in
Figure~\ref{comparison} (see more details in Appendix~\ref{section harmonic}). As can be seen, our learning-based control method outperforms all the others.
}
\subsection{Stuart-Landau Equations}
In this subsection, we show that our frameworks are beneficial to realizing the control and the synchronization of complex networks. To this end, we consider the single Stuart–Landau oscillator which is governed by the following complex-valued ODE:
\begin{equation}\label{eq in stuart}
\dot{Z} = (\beta+{\rm i}\gamma+\mu\vert Z\vert^2)Z,\ \ Z\in\mathbb{C}.
\end{equation}
This equation is a paradigmatic model undergoing the so-called Andronov-Hopf bifurcation \citep{kuznetsov2013elements}: the stability of the equilibrium changes and a limit cycle emerges as a parameter passes some critical value. In what follows, we consider two cases based on system \eqref{eq in stuart}.
\paragraph{\fcircle[fill=black]{2.5pt} Case 1}
We set $\beta=-25,\ \gamma=1$, and $\mu=1$, so that system (\ref{eq in stuart}) has a stable equilibrium $\rho=0$ and an unstable limit cycle $\rho=5$, where $Z=x+{\rm i}y=\rho{e}^{{\rm i}\theta}$. The AS successfully steers the dynamics to the unstable limit cycle, as shown in Figure~\ref{hopf compare}, which displays the trajectories (the left column) and the phase orbits (the right column) of system \eqref{eq in stuart}, {initiated} from 30 randomly-selected initial states, without control (the upper panels) and with the AS control (the lower panels). The initial values inside (resp., outside) the limit cycle $\rho=5$ are indicated by the blue (resp., purple) pentagrams.
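In polar coordinates $Z=\rho e^{{\rm i}\theta}$, system \eqref{eq in stuart} decouples into $\dot{\rho}=\rho(\beta+\mu\rho^2)$ and $\dot{\theta}=\gamma$, so the stability claim can be checked in a few lines:
\begin{verbatim}
beta, mu = -25.0, 1.0
rho_dot = lambda r: r * (beta + mu * r**2)
# rho = 0 and rho = sqrt(-beta/mu) = 5 are the radial equilibria.
for r in (0.1, 4.9, 5.1):
    print(r, rho_dot(r))   # negative, negative, positive:
                           # rho = 0 attracts, rho = 5 repels
\end{verbatim}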
\paragraph{\fcircle[fill=black]{2.5pt} Case 2}
Next, we consider the synchronization problem of the coupled Stuart-Landau equations.
Successful deterministic methods have been systematically developed for realizing synchronization, including adaptive control with time delay \cite{stuart1} and the open-loop temporal network controller \cite{zhang2021designing}. These methods depend mainly on linearization in the vicinity of the synchronization manifold. Here, we show how to apply our framework to achieve synchronization in the coupled system. We set the corresponding Laplacian matrix $L=(L_{jk})_{n\times n}$ to satisfy $\sum_{k=1}^{n}L_{jk}=0$, which guarantees that the synchronization manifold is an invariant manifold of the coupled system \citep{pecora1998master}. Specifically, we set $n=20,\ \sigma=0.01,\ c_1=-1.8,\ c_2=4$, and $L_{jk} = \delta_{jk}-\frac{1}{n}$, where $\delta_{jk}$ is the Kronecker delta. Then, we apply the AS to this system and realize the stabilization of the synchronous manifold, as shown in Figure~\ref{stuart_20_test}.
\begin{figure}[t]
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{Hopf_03.pdf}
\caption{\footnotesize The trajectories (the left column)
and the phase orbits (the right column) of system \eqref{eq in stuart}.}
\label{hopf compare}
\end{minipage}%
~~~~~~
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.\textwidth]{stuart_20_test_Phase.pdf}
\caption{\footnotesize The dynamics of the first (resp., second) component of the coupled oscillators are shown in the panels in the left (resp., middle) column. The dynamics in the phase space (the right column).}
\label{stuart_20_test}
\end{minipage}
\vskip -0.25in
\end{figure}
\subsection{Data-Driven Pinning Control for Cell Fate Dynamics}
\begin{wrapfigure}{r}{7cm}
\vskip -0.15in
\centering
\includegraphics[width=0.5\textwidth]{cell_fate.pdf}
\caption{\footnotesize Pinning control for cell fate dynamics.}
\label{cell fate}
\vskip -0.15in
\end{wrapfigure}
Indeed, our frameworks can be extended to a model-free version by combining them with existing data-reconstruction methods. To be concrete, we show that our framework can be combined with Neural ODEs (NODEs) \citep{chen2018neural} to learn the control policy from time series data for the cell fate system \citep{tradeoff,laslo2006multilineage}, which describes the interaction between two suppressors during cellular differentiation for neutrophil and macrophage cell fate choices. The system $\dot{{\bm{x}}}=f({\bm{x}}),~{\bm{x}}=(x_1, ..., x_6)$ has three steady states, $P_{1,2,3}$, where $P_{2,3}$ correspond to different cell fates and are stable, while $P_1$ represents a critical expression level connecting the two fates and is unstable. The network structure of this $6$-D system is a tree, in which the root node $x_1$ stabilizes itself under the original dynamics. Hence, we choose the node $x_2$, which has the maximum out-degree, and add pinning control on it to stabilize the system at the unstable state $P_1$.
The original trajectory that converges to $P_2$ (left) and the controlled trajectory that converges to $P_1$ (right) are shown in Figure~\ref{cell fate}. The original trajectory is used to train the NODE to reconstruct the vector field $\hat{f}$; we then use samples of $\hat{f}$ as training data to learn our stochastic pinning control. We provide experimental details in Appendix~\ref{cell fate appendix}.
In addition to the above controlled systems, we include other illustrative examples, namely {the controlled inverted pendulum}, reservoir computing, and the controlled Lorenz system, in Appendices~\ref{experiment detail} and \ref{more experiment}.
\section{Conclusion and Future Works} \label{conclusion}
In this article, we have proposed two frameworks of neural stochastic control for stabilizing different types of dynamical systems, including SDEs. We have shown that neural stochastic control outperforms classic stochastic linear control in both the convergence time and the energy cost for typical systems. More importantly, using several representative physical systems, we have demonstrated the advantages of our frameworks and discussed some of the weaknesses that may emerge in real applications. We also present some limitations of the proposed frameworks in Appendix~\ref{limitations}.
Moreover, we suggest several directions for further investigations:
(i) acceleration of the training process of the ES, (ii) the basin stability of the neural stochastic control \citep{menck2013basin}, (iii) the trade-off between the deterministic controller $\bm{u}_{g}$ using the NNs and the stochastic controller $\bm{u}_{f}$ using the NNs, (iv) the safe learning in Robotic control with small disturbances \citep{berkenkamp2017safe}, and (v) the design of the purely data-driven stochastic neural control.
\section{Acknowledgments}
We thank the anonymous reviewers for their valuable and constructive comments that helped us to improve the work. Q.Z. is supported by the Shanghai Postdoctoral Excellence Program (No. 2021091) and by the STCSM (Nos. 21511100200 and 22ZR1407300).
W.L. is supported by the National Natural Science Foundation of China (No. 11925103) and by the STCSM (Nos. 19511101404, 22JC1402500, 22JC1401402, and 2021SHZDZX0103).
Solar-like pre-main sequence (PMS) stars and, in general, the low mass PMS stars (M$_* < 2 M_{\odot}$), or T~Tauri stars (TTSs), are complex dynamical systems made of two basic components, star and accretion disk, as well as a dynamical interface,
the stellar magnetosphere (see G\'omez de Castro 2013, for a recent review). Magnetic fields of few kG have been detected in the surface of the TTSs (Guenther et al 1999; Johns-Krull et al 1999; Johns-Krull 2007). The surface field is not bipolar but it has a rather complex structure as the Sun's field has (Johns-Krull et al 2004). Hence, though higher order, multi-polar components fall off more rapidly with radius than the dipolar field, it is difficult to track the final path followed by matter from the inner disk border to the stellar surface. In addition, the magnetosphere has its own dynamics and forcing due to the interaction with the disk.
Unfortunately, the characteristics of the TTSs extended atmospheres and magnetospheres still elude diagnosis (see Hartmann 2009). Little is known about them, apart from having a density of $\sim10^9-10^{11}$~cm$^{-3}$ and an electron temperature between some few thousand Kelvin and 100,000~K (G\'omez de Castro \& Verdugo 2007, hereafter GdCV2007). Recent attempts to derive the magnetospheric properties from optical lines have shown that, for instance, the H$\alpha$ profile is strongly dependent on the densities and temperatures assumed inside the magnetosphere and in the disk wind region (Lima et al. 2010). The line widths of typical atmospheric/magnetospheric tracers are about 200-300 km/s, which exceeds by far what is expected from thermal or rotational broadening, even if the lines are assumed to be formed in a magnetosphere that extends to some 4-5 stellar radii and corotates with the star.
As of today, it is still unclear whether the broadening is produced by unresolved macroscopic flows or by magnetic waves propagating on the magnetospheric field (Hartmann et al. 1982). In general, the broadening of magnetospheric tracers does not vary significantly in time, pointing out that the average motions are rather stable.
The combined effect of funnel flows and inclined magnetic rotators as simulated by Romanova et al (2004, 2012) is the current baseline for the numerical simulation of TTSs magnetospheres. The stellar field is assumed to be anchored in the inner part of the disk, creating a sheared layer between the rigid body rotation of the star and the Keplerian rotation of the disk. This interaction has a profound influence on the star and the accretion flow but also acts as a dynamo that transforms part of the angular momentum excess in the inner disk into magnetic field amplification, which self-regulates through quiescent periods of field build-up and eruptions when the energy excess is released (see G\'omez de Castro \& von Rekowsky 2011, for an evaluation of the UV output from such an interface). However, magnetospheric heating processes are poorly known and accretion shock models are unable to predict the observed line fluxes/broadenings (Johns-Krull 2009).
The atmospheric and magnetospheric energy output is released mainly in the ultraviolet. The richness of spectral tracers for a broad range of magnetospheric temperatures and densities is unmatched by any other spectral range. There are several studies of the atmospheric/magnetospheric properties of the TTSs based on low dispersion UV data (Lemmens et al. 1992, Hu\'elamo et al. 1998, Johns-Krull et al. 2000, Yang et al. 2012, G\'omez de Castro \& Marcos-Arenal, 2012, hereafter GdCMA2012). However, only the Cosmic Origins Spectrograph (COS) (see Green et al 2012 for a description of the instrument), on board the Hubble Space Telescope (HST), has been sensitive enough to gather high signal-to-noise ratio (SNR) profiles of hot plasma tracers such as the N~V [UV1] resonance multiplet or the He~II H$\alpha$ transition with a resolution above 10,000 in late K and M-type TTSs (see Penston \& Lago 1984 for C~IV profiles obtained with the IUE, Ardila et al 2002, for HST profiles obtained with the Goddard High Resolution Spectrograph, and Ayres 2005, for the CoolCat set based on HST data obtained with the Space Telescope Imaging Spectrograph, and compare them with those presented in Sect.~2).
Following a previous work where the magnetospheric properties of TTSs were examined based on low dispersion HST data (GdCMA2012), the high resolution profiles of the N~V [UV1], O~III] and He~II (1640 \AA ) transitions are analysed in this work with two main objectives. Firstly, we intend to determine whether it is feasible to discriminate the contributions to the high energy radiation flux from the high density atmospheric plasma and from the accretion flow.
Moreover, the unexpected correlation between the X-ray flux and the high energy UV tracers found by GdCMA2012 is re-examined in the light of the kinematic information contained in the high resolution HST/COS profiles.
In Sect.~2 the observations are described. The results are presented in Sect.~3. Two components are found to contribute to the He~II flux: a Low Density Component (LDC) associated with the accretion flow and a High Density Component (HDC) of more uncertain origin. The discussion on the possible source of the HDC, its association with accretion shocks, and the properties of the LDC is addressed in Sect.~4. The article concludes with a brief summary on the relevance of these results.
\section{Hubble Space Telescope (HST) observations}
The He~II profiles of TTSs in the Taurus-Aurigae star forming region have been retrieved from the HST archive. Only high resolution observations have been considered. Most of them have been obtained with the Cosmic Origins Spectrograph (COS) and the gratings G130M and G160M. The resolution is $\sim$24,000 and each target has been observed three times with slight offsets in the wavelength range to guarantee that the 18.1\AA\ gap between the two segments in the FUV detector is covered (see COS Handbook).
Moreover, some few TTSs have been observed with the Space Telescope Imaging Spectrograph (STIS), namely T~Tau, DR~Tau, and DF~Tau. Only the T~Tau profile had a high enough SNR in the He~II line to be considered for this work. The log of the observations is provided in Table 1; additional information on the HST programs that obtained these observations (program IDs 11533, 11616 and 8627) can also be found in the Table.
The observational strategy allowed us to search for variations in the profiles on time scales of $\sim 20$ minutes, but neither flux nor morphology were found to display significant changes. The one-dimensional spectra produced by the COS calibration pipeline (CALCOS v2.17.3) were aligned and co-added\footnote{Note that there are small wavelength windows in the spectra without flux measurements. This effect has been taken into account in the calculation of the average spectra from the, typically, three observations obtained per star.}. COS targets must be centred to within $0.1-0.2$ arcsec to achieve the nominal wavelength accuracy of $\pm 15$~km~s$^{-1}$. The (R(3)~1-7) 1489.636~\AA\ and (P(5)~1-7) 1504.845~\AA\ H$_2$ lines have been used to set the zero of the wavelength scale for the targets in Table~1. H$_2$ emission is dominated by the molecular disk around the stars in most sources (France et al. 2012). The lines have been selected to be strong (from Herczeg et al. 2002) and detectable in most of the sources. The profiles of the 1489.636~\AA\ line are plotted in Figure~1 (see also Figure~3 in France et al.~2012) and the He~II profiles are plotted in Figure~2. No shifts have been applied to the DN~Tau and IP~Tau observations because the H$_2$ emission was too weak to be used for this purpose. Neither have shifts been applied to HBC~427, LkCa~19 and LkCa~4 because H$_2$ emission was not detected. The H$_2$ profiles are sometimes asymmetric with respect to the rest wavelength, very especially in RW~Aur. In this case, the original zero from the CALCOS pipeline has been kept. Only a subset of the AA~Tau observations has been used since guide star acquisition failed (see Fig.~3). As a result, the first two exposures produced similar profiles while the last one produced a slightly broader and more red-shifted (by $\sim 0.1$\AA\ or 18~km~s$^{-1}$ at 1640 \AA ) profile. For this star, the last observation was rejected and only the first two observations were averaged to produce the profiles in Figs.~1 and 2.
The He~II profiles can be generically described as composed of a bright and narrow emission feature and a broad, weaker component that differs from one star to another. Notice that the He~II lines are very strong; this fact, together with the strong H$_2$ emission, contributes to the continuum jump in the low resolution Advanced Camera System on HST reported by GdCMA2012.
Close to the He~II line are the O~III]$_{1665}$ intercombination lines: a doublet with components at $\lambda \lambda 1660.802$ and 1666.156, which originate in transitions from the level 2s2p$^3$ $^5$S$_2$ to the term 2s$^2$2p$^2$ $^3$P with J=1 and J=2, respectively. The components should have an intensity ratio $\simeq$1:3, equal to the ratio of their transition probabilities (145~s$^{-1}$ and 426~s$^{-1}$). The transition is optically thin up to a critical density of $3.4 \times 10^{10}$~cm$^{-3}$. The O~III] profiles are represented in Fig.~4. The two lines of the multiplet are observed in GM~Aur, DF~Tau, HN~Tau, DR~Tau, SU~Aur, RW~Aur and DE~Tau; however, the flux of the weakest, 1660.802~\AA\ line, has only been measured for the strong sources. In all cases, the flux ratio between the two lines of the multiplet is 1:3 (within the error bars).
To complete the view of the distribution of hot plasma in the TTSs environment, the profiles of the N~V resonance multiplet UV~1 have also been retrieved from the HST archive (see Table~2). In the blue edge of the 1238.8~\AA\ line, the strongest in the doublet, there are some narrow emission lines produced by molecular hydrogen (lines: $\lambda 1237.918$\AA\ 1-2 P(8), $\lambda 1237.589$\AA\ 2-2 R(11)) that somewhat blur the profile. The zero of the wavelength scale has been set again resorting to H$_2$ emission lines. The (P(2)~0-4) 1338.63~\AA\ line has been used for this purpose, since it is strong in most of the stars and it is not blended with other features (see Figure~5 with the H$_2$ profiles). As mentioned above, the original zero of the wavelength scale has not been shifted for DN~Tau, IP~Tau, HBC~427, LkCa~19 and LkCa~4 because the H$_2$ lines were either very weak or absent. The N~V profiles can most generally be described by a single component that ranges from narrow, in stars like HBC~427, LkCa~19 and LkCa~4, to broad and asymmetric, as in AA~Tau or GM~Aur (see Fig.~6).
Some relevant properties of the TTSs to be used in Sect.~3 are gathered in Table~3. Notice that there are wide variations in the published values of important parameters such as the stellar luminosity or the extinction (see also comments in GdCMA2012); the data in Table~3 are gathered for reference for other researchers. The X-ray fluxes have been retrieved from the XMM-Newton extended survey of the Taurus molecular cloud (XEST) (Guedel et al. 2007). The He~II, the 1238.82~\AA\ N~V and the O~III] line fluxes have been measured after subtracting the local continuum. The fluxes of the 1489.636 and 1338.63~\AA\ H$_2$ lines have also been measured (see Sect.~5). The line fluxes are provided in Table~4; they are not extinction corrected. For some sources, the measurement of the 1238.82~\AA\ N~V flux has required subtracting the nearby H$_2$ features. In such cases, the H$_2$ flux has been subtracted by linear interpolation in the N~V profile. However, there are some few profiles where this linear interpolation was uncertain (see quality flags in Table 4).
\section{Results}
From figures 1 to 6 a generic trend can be inferred:
\begin{itemize}
\item The {\it weak-line TTSs (WTTSs)} in the sample, {\it i.e.} evolved TTSs with no evidence of mass infall (LkCa~19,
LkCa~4, HBC~427), do not display H$_2$ emission, neither nebular O~III] emission. They have only rather narrow He~II
and N~V lines.
\item {\it Intermediate objects (TTSs)} like IP~Tau, DN~Tau and V836~Tau have weak H$_2$ and no nebular O~III]
emission. Both He~II and N~V lines have narrow emission profiles.
\item {\it The Classical TTSs (CTTSs)} emit in all these tracers (see below).
\end{itemize}
CTTSs cover a broad range of profile morphologies. Strong H$_2$ emission is detected in all of them and the lines are narrow except in RW~Aur~A (see France et al 2012). Nebular O~III] emission is detected in all of them except in DM~Tau, UX~Tau~A and AA~Tau. O~III] is especially strong in T~Tau, DF~Tau, HN~Tau, DR~Tau and RW~Aur; SU~Aur and DE~Tau seem to be intermediate objects. Hints of possible O~III] emission at 1666~\AA\ are seen in GM~Aur; unfortunately, the S/N is low and the weakest component of the multiplet is not detected. A low S/N feature is detected at 1666~\AA
in DN~Tau, IP~Tau, AA~Tau and UX~Tau spectra. Observations of the C~III] intercombination transition line are only
available for three of the stars in the sample, namely, DE~Tau, T~Tau and RW~Aur and no significant differences have
been found between the O~III] and the C~III] profiles obtained either with the Goddard High Resolution Spectrograph
(GHRS) (compare Fig.~4 with Fig.1 in G\'omez de Castro et al 2001) and STIS (compare Fig.~4 with Fig.1 in
G\'omez de Castro et al 2003).
In general, the He~II profiles can be described by a narrow emission component superimposed on a broader contribution that mimics that observed in the O~III] nebular lines, whenever profiles with high enough S/N are available.
This effect is clearly apparent in HN~Tau and RW~Aur. Note that in DR~Tau and DF~Tau, the overlap is lost at
bluewards shifted velocities; there is bluewards shifted O~III] emission with no He~II counterpart. This bluewards
shifted excess could be caused by the contribution of an unresolved jet to the line emission. As shown for RY~Tau by
GdCV2003, the semiforbidden emission has two components: one associated with the jet and another
with the accretion flow, which were disentangled for this source thanks to their variability. The N~V profiles, however, do not follow the same morphological trend.
A rather narrow symmetric profile is observed in DE~Tau that becomes wider in DM~Tau, DF~Tau, UX~Tau~A, GM~Aur
and SU~Aur, all of them sources with absent or weak O~III] nebular emission. DR~Tau, HN~Tau, RW~Aur and AA~Tau
display very peculiar profiles that clearly indicate that N~V emission is not produced in the stellar atmosphere
but in another dynamical component. In particular, the N~V profiles of RW~Aur, HN~Tau and AA~Tau are asymmetric
extending from redwards shifted velocities to peak at bluewards shifted velocities; moreover, the RW~Aur profile peaks at the velocity of the optical jet (Hirth et al 1997), which was also detected in the C~III] intercombination line by G\'omez de Castro \& Verdugo (2003). This type of profile asymmetry has also been detected in the C~III] profile of RY~Tau by GdCV2007, who pointed out that the line could be formed in a pre-main sequence analogue of the Solar wind. In this context, it is worth remarking that the H$_2$ profile of RW~Aur has a completely different asymmetry: the flux peaks to the red of the line, suggesting the presence of infalling cold molecular gas similar
to that observed in the pre-main sequence close binary AK~Sco (see also G\'omez de Castro et al 2013).
These groups can be cleanly recognised in velocity dispersion diagrams. The characterisation of the underlying velocity field and of the thermal properties of the line emission region is complex in the TTSs environment. Instead of using the standard fitting to Gaussian or Voigt profiles, it is preferable to characterise the profile in terms of the standard Pearson statistical moments and measures, i.e., mean or centroid, dispersion, kurtosis and skewness. They provide a quantitative measurement of the deviation of the profile from that expected for a thermal plasma: a normal distribution convolved with the Line Spread Function (LSF) of COS (Kriss 2011).
Note that in this approach, the profile is assumed to be formed by the contribution to the line flux of independent gas parcels, {\it i.e.} the (background subtracted) profile is treated as a histogram of the flux emitted per parcel in the wavelength (velocity) space; a similar approach was followed by GdCV2007 to compare the observed C~III] and Si~III] profiles of RY~Tau with the theoretical predictions. This treatment permits characterising profiles that are formed in complex velocity fields. In Table~5, the dispersions of the He~II and N~V lines are provided together with those obtained for two control lines, the H$_2$ transitions at 1339\AA\ and 1489\AA ; RW~Aur, HN~Tau and AA~Tau are not included because their N~V profiles are peculiar. As shown in the bottom panel of Fig.~8, the average dispersion of the H$_2$ profiles is $31\pm 5$~km~s$^{-1}$ and $43\pm 13$~km~s$^{-1}$ for the 1339\AA\ and 1489\AA\ lines, respectively.
There is no correlation between the broadenings of these two H$_2$ lines, pointing out that the scatter of the dispersions in the diagram is related to random effects associated with the measurement process. A different trend is drawn from the N~V and He~II dispersions. The dispersions of the WTTSs and DN~Tau are comparable to those measured in the H$_2$ lines. In intermediate objects, like V836~Tau and IP~Tau, the dispersions in the He~II line are comparable to those measured in the H$_2$ lines but the N~V lines are significantly broader. Finally, in the CTTSs, both $\sigma (N~V)$ and $\sigma (He~II)$ are larger than $\sigma (H_2)$. Note that stellar rotation may contribute to the line dispersion; SU~Aur, the fastest rotator in the sample, has also the largest dispersion. However, a large dispersion can also be produced by the profile asymmetry. For instance, DM~Tau has dispersions comparable to those measured in SU~Aur and it is one of the slowest rotators in the sample (see Table~3). Two objects do not follow this trend: DR~Tau and DE~Tau; both have intermediate dispersions in the N~V lines and large dispersions in the He~II lines caused by the broad emission component. A quick inspection of the summary of the TTSs properties (see Table~3) indicates that the only possible cause of this discrepancy is the high accretion rate, as otherwise expected.
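As a concrete illustration, the moment-based characterisation described above amounts to the following computation (a sketch; removing the instrumental LSF broadening, e.g. in quadrature, is left out):
\begin{verbatim}
import numpy as np

def profile_moments(v, flux):
    # Pearson moments of a background-subtracted line profile treated
    # as a histogram of flux over velocity v (km/s).
    w = np.clip(flux, 0.0, None)
    w = w / w.sum()
    mean = (w * v).sum()                        # centroid
    sigma = np.sqrt((w * (v - mean)**2).sum())  # dispersion
    skew = (w * (v - mean)**3).sum() / sigma**3
    kurt = (w * (v - mean)**4).sum() / sigma**4 - 3.0
    return mean, sigma, skew, kurt
\end{verbatim}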
\subsection{Two hot plasma components in the TTSs}
From figures 2, 4 and 6, it is clearly inferred that there are, at least, two different plasmas contributing to the spectral lines under study:
\begin{itemize}
\item {\it A low density component (LDC)} that is most conspicuously traced by the O~III] line. The LDC also produces the He~II broad emission component observed in RW~Aur, HN~Tau, DR~Tau and DF~Tau. The critical density of the O~III] transition sets an upper limit to the electron density\footnote{ Note that for electron densities above the critical density, there may still be line emission though it damps rapidly.} of the plasma in the LDC of $\simeq 3.4 \times 10^{10}$cm$^{-3}$
(see also Sect.~4.2). The LDC profiles display a non-thermal broadening and draw a complex velocity field around the stars, i.e., the LDC is not associated with a simple standing atmospheric structure whose kinematics is dominated by stellar rotation. In fact, the LDC profiles seem rather to trace some kind of complex magnetospheric infalling pattern, high above the stellar surface. In some cases, they could also be associated with unresolved wind structures (G\'omez de Castro \& Ferro-Font\'an 2005), as reported for RY~Tau (GdCV2007). Notice that the co-existence of O~III] and He~II radiation from the same kinematic structure would point to an unrealistically high electron temperature for the line emission region ($\log T_e (K) \sim 5.4$), if collisional equilibrium at a single temperature is assumed and electron densities below the O~III] critical density are considered\footnote{Calculations made using the Chianti data base: www.chiantidatabase.org.}. In fact, its UV spectrum is reminiscent of that observed in photoionized nebulae (see also Sect.~4). To the current sensitivity, the contribution of this component to the N~V flux is negligible.
\item {\it A high density/temperature component (HDC)} that dominates the N~V emission. The O~III] profiles are very different from the N~V profiles, suggesting that the density of the N~V formation region is higher than the O~III] critical density.
\end{itemize}
Though the kinematics of the N~V emission region is clearly different from that traced by the LDC, the N~V flux is
correlated with the He~II flux as shown in Fig.~9; the Spearman rank correlation coefficient is $r_s = 0.87$ (with significance level, $\alpha = 0.001$, see Sachs 1982 for details) and, \\
$$\log \left(F(HeII)\right) = (0.8 \pm 0.1) \log \left( F(NV) \right) - (2.5 \pm 1.7)$$
\noindent
with RMS = 0.29 (see bottom panel in Fig.~9). Also the fluxes normalised to the stellar surface are correlated
with $r_s = 0.82$, with $\alpha = 0.002$ (see top panel in Fig.~9) and,\\
$$\log \left( \frac{F(HeII)}{F_{\rm bol}} \right) = (0.9 \pm 0.1) \log \left( \frac{F(NV)}{F_{\rm bol}} \right) + (0.0 \pm 0.6)$$
\noindent
with an RMS=0.31. The normalised flux is defined as the ratio $F_{He II}/F_{\rm bol,*}$ or $F_{N V}/F_{\rm bol,*}$
and was introduced by GdCMA2012 to provide a measure of the line emissivity weighted over an unknown thickness but corrected for the stellar radius and surface temperature. In this manner, the normalised fluxes compensate for scaling effects associated with the broad range of mass, luminosity and stellar radius covered by the TTSs. Stars whose He~II flux
has a significant contribution from the LDC are marked in the plot. Notice that they are evenly distributed
in the figure suggesting that He~II and N~V fluxes are correlated, independently of whether the He~II flux is dominated by the narrow
emission component.
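For reference, the statistics quoted above can be reproduced with a short script of the following form (a sketch; the input flux arrays are assumed to be already in logarithmic units):
\begin{verbatim}
import numpy as np
from scipy import stats

def rank_and_fit(log_f_nv, log_f_heii):
    # Spearman rank coefficient and least-squares line in log-log
    # space, plus the RMS of the residuals, as in Fig. 9.
    r_s, p_value = stats.spearmanr(log_f_nv, log_f_heii)
    slope, intercept = np.polyfit(log_f_nv, log_f_heii, 1)
    resid = log_f_heii - (slope * log_f_nv + intercept)
    return r_s, p_value, slope, intercept, np.sqrt(np.mean(resid**2))
\end{verbatim}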
\subsection{The connection between UV and the X-ray radiation from the TTSs}
Based on low resolution observations, GdCMA2012 pointed out that the normalised He~II flux anticorrelates with the strength of the X-ray flux as derived in the XEST survey (Guedel et al 2007), carried out with the XMM-Newton telescope in the 0.3-10.0~keV band. In Fig.~10, the normalised He~II fluxes from the high resolution COS/HST observations (see Table~3) are represented against the normalised X-ray luminosities as derived from the XEST survey. Also the Chandra/ACIS observations of LkCa~4, DE~Tau and GM~Aur from Yang et al. (2012) are used; they are integrated X-ray luminosities in the 0.3-10~keV range. The He~II flux has been extinction corrected according to Valencic et al (2004), assuming R=3.1 and the extinctions in Table~3. The low dispersion GdCMA2012 data have also been plotted for those sources with no available high dispersion data. There is a significant contribution to the He~II flux from the LDC in some few sources, SU~Aur, HN~Tau, DE~Tau and DM~Tau, which are marked in the figure.
A first inspection of Fig.~10 shows three groups. The WTTSs (HBC~427, LkCa~4, HD283472), UX~Tau and the fast rotator
SU~Aur are very close to the main sequence stars regression line. The CTTSs follow the generic trend pointed out
by GdCMA2012, the He~II flux increases as the X-ray flux decreases. Notice that the sources with strong LDCs are far from
this trend, excluding the fast rotator SU~Aur. If HN~Tau, DE~Tau and DM Tau are excluded, the Spearman rank correlation coefficient is $r_s=-0.591$,
with $\alpha = 0.033$, and the least square fit is,
$$\log \left( \frac{F(L_X)}{F_{\rm bol}} \right) = (-0.20 \pm 0.05) \log \left( \frac{F(HeII)}{F_{\rm bol}} \right) - (4.3 \pm 0.2)$$
\noindent
with RMS = 0.15. Uncertainties in the A$_V$ values may produce a shift towards large $f_{He II}$ in the plot, reinforcing the trend; note that the regression line runs nearly parallel to the extinction direction. To examine this effect, the same diagram has been plotted in the bottom panel of Fig.~10 using stellar luminosities, X-ray luminosities and A$_V$ values from the recent compilation by Yang et al (2012). As shown, the three groups are not so cleanly separated and the regression line is clearer; only DE~Tau is far from the trend and has been excluded from the calculation.
The results are similar though the negative slope is softer:
$$\log \left( \frac{F(L_X)}{F_{\rm bol}} \right) = (-0.17 \pm 0.07) \log \left( \frac{F(HeII)}{F_{\rm bol}} \right) - (4.1 \pm 0.1)$$
\noindent
with RMS = 0.15 and $r_s = -0.561$, with $\alpha = 0.036$.
In summary, though the clean separation between WTTSs and accreting TTSs may be an extinction-associated effect, the trend of accreting objects to release their high energy excess preferentially in the UV rather than in the X-ray channel holds, with the only possible exception of the sources with a strong LDC contribution to the He~II flux.
From the current data sets and observations, it cannot be ascertained whether there is a statistically meaningful deviation of the sources with strong LDCs from the main trend.
Unfortunately, there are no X-ray measurements of DR~Tau or DF~Tau, and RW~Aur was only detected to have a low soft X-ray flux with the EINSTEIN satellite (Damiani et al 1995).
If present, such a trend could indicate that X-ray radiation is dominated by different components in sources with strong LDCs (strong nebular component) and in sources with
weak or absent LDCs. The X-ray energy distribution of the TTSs is often modelled by two components: a soft component at $T_s \simeq (2-5)\times 10^6$~K and a hard component at $T_h \simeq (1.5-3)\times 10^7$~K (see e.g. Glassgold et al. 2000). The hard X-ray component is thought to be associated with magnetic energy release in the stellar coronae. The nature of the soft X-ray component is more uncertain and it has often been hypothesised that it could be formed in accretion shocks (Lamzin 1998, Gullbring et al 1998). Unfortunately, only three stars in our sample, namely T~Tau, SU~Aur and HBC~427, have a high enough count rate to allow a spectral fitting to two different optically thin plasmas, and non-conclusive results could be derived from the fits (see Table~6 with the two-component fits to the X-ray spectra of these sources, from Table~6 in Guedel et al. 2007). The X-ray spectrum of SU~Aur, a CTTS, is dominated by the low temperature component with T=5.22~MK; however, the soft and hard X-ray components have similar emission measures in HBC~427, a non-accreting WTTS. Moreover, the hard X-ray component dominates the X-ray spectrum of the CTTS T~Tau.
\section{Discussion}
TTSs are complex objects; they are convective PMS stars where a solar-like dynamo begins to set in while the fossil magnetic field is still diffusing. TTSs are also surrounded by an external dynamo that powers the stellar magnetosphere and makes it rise to the inner border of the molecular disk (see Romanova et al 2012 for recent simulations). Matter from the disk slides down onto the star along the magnetospheric field lines to end free-falling onto the open holes of the magnetic configuration. In this environment, the hot plasma radiating in the UV tracers studied in this work can be located in the magnetosphere, in the atmosphere, in the accretion shocks and also in the outflow (either solar-like or driven from the star-disk magnetic interface or the disk). Both the magnetosphere and the outflow have significantly lower densities than the stellar atmosphere or the accretion shocks; as a result, the spectral line radiation is dominated by radiative de-excitation processes and forbidden and semiforbidden transitions from this plasma are strong. In the dense atmosphere, collisional de-excitation is relevant and forbidden transitions are quenched. UV semiforbidden transitions cannot be observed from the accretion shock itself, because it is too hot and dense; however, the soft X-ray radiation produced in the shock front photoionizes the preshock gas, which has a density similar to that of the stellar magnetosphere and may produce forbidden line radiation (G\'omez de Castro \& Lamzin, 1999). Unfortunately, the high column density prevents the UV radiation from the photoionization cascade from escaping easily from the accretion column. Also, the profiles of some tracers, from the infrared He~I transition (Beristain et al. 2001, Fisher et al. 2008) to the UV lines, do not agree with the predictions of accretion shock models (Johns-Krull 2009). Within this context, the data presented in Sect.~3 provide some remarkable results:
\begin{itemize}
\item The He~II, O~III] and N~V fluxes do not depend on the spectral type. This confirms that {\it the line emission is not dominated by main sequence like atmospheric magnetic activity}, {\it i.e.} by the release of the magnetic energy produced by the stellar dynamo, since the latter depends on the spectral type (see e.g. Ayres et al 1995 and GdCMA2012).
\item The He~II and N~V fluxes correlate well even when the line profiles are very different, {\it i.e.}, even when the line emission is not produced in the same physical structure. This confirms that all the processes (accretion, atmospheric emission and outflow) are coupled, as otherwise expected (see Gomez de Castro 2013 for a recent review). In turn, this makes it difficult to find specific tracers of individual processes without kinematical information, {\it i.e.} without high resolution spectroscopy.
\item The high resolution profiles of the N~V line show a symmetric line broadening that increases from non-accreting to accreting stars, being significantly suprathermal in the latter sources (see Table~5). The profile shape and the density of the line formation region suggest an atmospheric origin (with the exceptions already mentioned in Sect.~3). The connection between line broadening and accretion suggests that the density and extent of the high atmospheric layers depend on the accretion rate, {\it i.e.} on the evolutionary state, as otherwise predicted by the theoretical models (D'Antona \& Mazzitelli 1997, Siess et al 2000). Transport of magnetic energy from the stellar interior to the surface is expected to occur at a different pace in accreting sources than in WTTSs. Moreover, the extended magnetosphere powered by the disk-star magnetic locking must affect the stellar atmosphere, introducing new sources of stirring and turbulence (see e.g. Kivelson \& Russell, 1995).
\item However, the physical source of the narrow emission component in the He~II profile keeps being uncertain. It could either be associated with accretion shocks or with atmospheric features.
\end{itemize}
In this section, the possible source of the He~II narrow emission component is analysed as well as some constraints on the extent of the
magnetosphere inferred from the semiforbidden line radiation.
\subsection{On the source of the narrow component of the He~II line: accretion shocks or bulk atmospheric phenomena?}
The kinematics of the region where the narrow component forms is clearly distinct from that of the N~V or the O~III] line formation regions (see Fig.~8). The dispersion of the narrow emission component of the He~II line ranges from $\sim 20$~km~s$^{-1}$ to $\sim 60$~km~s$^{-1}$, while the dispersion of the N~V line varies from $\sim 40$~km~s$^{-1}$ to $\sim 130$~km~s$^{-1}$ for the same sources (see Fig.~11). The dispersion of the narrow emission component in the He~II profile has been evaluated as above (see Sect.~3) but setting an upper wavelength cut-off to reject the contribution of the LDC. Note that even with this cut-off there is
an unknown contribution from the LDC to the flux.
Moreover, the narrow component seems to be slightly redshifted in {\it all} sources,
from 24~km~s$^{-1}$ in UX~Tau to 37~km~s$^{-1}$ in DE~Tau (see Appendix~1 and Table~7).
Hence, it would be tempting to suggest that the line is produced in accretion shocks.
Accretion shocks are produced by the impact of the free-falling material from the disk onto the stellar surface.
The kinetic energy of the infalling matter, with typical free-fall speeds of $\sim 300$~km~s$^{-1}$, is converted into gas heating at the shock front, which reaches temperatures of about 1~MK. The soft X-ray radiation from the shock front is expected to photoionize (pre-ionize) the infalling gas column (see e.g. Lamzin 1998,
G\'omez de Castro \& Lamzin 1999, Muzerolle et al 2001, Orlando et al 2009). Also, the shock front could
back illuminate the stellar surface, becoming a source of atmospheric photoionization to be added to the coronal
X-ray radiation. In this context, the narrow component of the He~II line could be produced very close to the shock front.
The small redshift could be interpreted as an indication of the line being formed in postshock material,
and the small width could be caused by thermal broadening; the thermal velocity of fully ionized, solar abundance
plasma at 50,000~K is 23~km~s$^{-1}$. However, this cannot be concluded from these data alone. Firstly, the apparent
redshift could be caused by the blending of the narrow component with the broad component which is asymmetric
in most sources, {\it i.e.}, with very low or absent flux at bluewards shifted velocities.
The line broadening is affected by the same problem, though not so dramatically, given the relative strengths of the
narrow and broad components. Unfortunately, unless very high S/N profiles ($\sim 100$)
of the semiforbidden and the He~II lines are obtained, any fitting is hampered by these uncertainties. Finally,
it is also intriguing that the broadening of the He~II emission line in non-accreting TTSs, such as LkCa~4, LkCa~19 and HBC~427, is comparable to that observed in UX~Tau or in V836~Tau, which are TTSs with low accretion rates.
In this respect, it is worth noticing that the correlation between the He~II flux and the accretion luminosity, as derived
from the U-band excess (Ingleby et al 2009, Gullbring et al 1998) is mild\footnote{Note that the He~II flux
and the accretion luminosity measurements are not simultaneous. However, the variability of the He~II flux and, in general, of the UV tracers (continuum, lines) is typically smaller than a factor of 2 (G\'omez de Castro \& Franqueira 1997; Hu\'elamo et al. 2000). Measurements of the accretion rate are based on the U-band excess, which also varies typically by this amount
(Gullbring et al 1998).}, as shown in Fig.~12 (see also GdCMA2012).
Taking into account all these facts, as well as the good correlation between the He~II and the N~V fluxes (Fig.~9)
and the convergence of the He~II and N~V line broadenings towards the non-accreting WTTSs, a contribution to the
narrow component of the He~II line from the stellar atmosphere cannot be neglected. In summary, the data analysed in
this work are inconclusive concerning the source of the narrow component of the He~II line. For all the reasons mentioned above,
it is most probable that both physical components, accretion shocks and stellar atmosphere, contribute to the flux.
\subsection{Properties of the LDC}
The profiles produced in the LDC cannot be ascribed to a simple kinematics shared by all sources, nor can the radiating plasma
be modelled by simple collisional plasma models. However, some constraints on its overall physical properties can be derived from the ratios of the intercombination lines of O~III], Si~III] and C~III]. The plasma density can be constrained from the Si~{\sc iii}]$/$C~{\sc iii}] ratio (G\'omez de Castro \& Verdugo 2001). The detection of O~{\sc iii}]$_{1661,1666}$ can be used to constrain the temperature since the O~{\sc iii}]$/$Si~{\sc iii}] ratio is very sensitive to it.
The Si~III] and C~III] lines have been observed with STIS for three sources in the sample: RW~Aur (G\'omez de Castro \& Verdugo 2003, hereafter GdCV2003), DE~Tau and T~Tau (G\'omez de Castro et al 2003). T~Tau profiles display a non-negligible contribution from the large scale jet. DE~Tau Si~III] and C~III] profiles are rather narrow and similar to the O~III] line.
From the RW~Aur study, GdCV2003 pointed out that the emitting volume is clumpy with a rather small filling factor, as otherwise expected if magnetospheric radiation is produced in plasma filaments and clumps.
This clumpy nature, together with the broad temperature range covered by the various spectral tracers, suggests that the excitation mechanism could be photoionization instead of collisional excitation.
GdCV2003 produced two grids of photoionization models to explore possible regimes for line excitation making
use of CLOUDY (Ferland 1996), a code designed to simulate emission line regions in astrophysical environments.
The first set assumed that the LDC had a belt-like geometry, being illuminated by the ambient X-ray radiation field:
soft and hard components, at $3.5\times 10^6$~K and 2.8$\times 10^7$~K respectively, with a total X-ray luminosity of
$3\times10^{29}$~erg~s$^{-1}$. This model turned out to be unable to reproduce comparable strengths of the three
spectral tracers. However, if the O~III], Si~III] and C~III] emission is assumed to be produced in dense gas
around small X-ray sources, such as reconnecting loops, the line ratios could be reproduced for electron densities
of $n_e \geq 10^{11}$~cm$^{-3}$. For soft X-ray sources, with $T_e = 10^6$~K, luminosities of $10^{27}$~erg~s$^{-1}$ and
radii of 10$^8$~cm, the inferred O~III] emissivity is $\sim 10^{-3.5}$~erg~s$^{-1}$~cm$^{-3}$. Using this as a fiducial
value, an estimate of the LDC volume, $V_{LDC}$, can be derived from the line strength as
$$\frac {V_{LDC}} {\eta} = \frac {F({\rm O~III]})\, 4 \pi d^2}{\epsilon _{\rm O~III]}} ,$$
where $F({\rm O~III]})$ is the reddening-corrected line flux, $d$ is the distance to the source and $\eta$ is the filling factor of the hot plasma. For filling factors of 10, and assuming that the emission
is concentrated in a spherical shell of radius R$_{LDC}$ and thickness 0.01R$_{LDC}$, LDC radii from 4 to 9 R$_{\odot}$
are inferred. These values are within a factor of 1.5 of the magnetospheric radii derived
from accretion luminosities for stars with known magnetic fields (Johns-Krull 2007, GdCMA2012). Unfortunately, the uncertainties in the plasma distribution prevent more detailed evaluations.
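As an illustrative, order-of-magnitude sketch only, the scaling above can be evaluated as follows; the flux, distance and filling factor below are assumed fiducial values chosen by us, not measurements from this work:
\begin{verbatim}
import math

# Assumed, illustrative inputs only -- not values measured in this paper
F_OIII = 1.0e-13            # reddening-corrected O III] flux [erg s^-1 cm^-2]
d      = 140.0 * 3.086e18   # ~140 pc (Taurus) in cm
eps    = 10.0**(-3.5)       # fiducial O III] emissivity [erg s^-1 cm^-3]
eta    = 10.0               # "filling factor" as quoted in the text, used literally

# V_LDC / eta = 4 pi d^2 F / eps
V_LDC = eta * 4.0 * math.pi * d**2 * F_OIII / eps

# Spherical shell of radius R and thickness 0.01 R: V = 4 pi R^2 * (0.01 R)
R = (V_LDC / (4.0 * math.pi * 0.01)) ** (1.0 / 3.0)
print(R / 6.957e10)         # R_LDC in solar radii; ~5.6 for these inputs
\end{verbatim}
For these assumed inputs the estimate falls within the quoted 4--9~R$_{\odot}$ range.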
\section{Conclusions}
The UV radiation from N~V, He~II and O~III] is dominated by the contribution of three main components of the
TTSs atmospheric/magnetospheric environment: the magnetospheric flow of infalling matter, the disturbed upper
atmospheric layers and the accretion shock.
The diffuse magnetospheric plasma (LDC) is best traced by the O~III] line: it produces asymmetric profiles,
preferentially red-shifted. He~II emission is observed from the same kinematical structures that radiate in O~III].
This plasma is not excited by thermal collisions at a single electron temperature. In fact, there are
indications that photoionization processes could be significant.
The hot, dense layers of the stellar atmosphere are best traced by the N~V line. However, the line dispersion
increases steadily with the accretion rate suggesting a connection between the disturbances in the upper
atmospheric layers and the accretion flow.
The He~II flux is dominated by a narrow emission component of uncertain origin. In this work, we have presented
evidence indicating that it may be formed in hot postshock material in accretion shocks, but also evidence
of its connection with atmospheric tracers. Both accretion shocks and the upper atmospheric layers seem to
contribute to the narrow component of the He~II profile.
The anticorrelation between X-ray and UV flux found by GdCMA2012 has been confirmed, suggesting that
the dissipation of magnetic energy proceeds in the TTSs differently than in main
sequence stars. The denser environment produced by mass accretion (see e.g. Petrov et al. 2011)
seems to favour the ultraviolet channel for the dissipation of the magnetic energy excess.
All the observations indicate that the UV radiation field during PMS evolution is much harder than that
usually implemented in the modelling of protostellar disk evolution. An example of its effect on
dust grain charging and on the charging profile can be found in Pedersen \& G\'omez de Castro (2011). Theoretical
modelling of protostellar disk chemistry and life-generating environments should take this fact into account.
\acknowledgments
Kevin France brought to my attention the HST/COS data corresponding to AA~Tau.
The analysis of the data pointed out that the 1640\AA\ jump in the HST/ACS data
was dominated by an unexpectedly strong He~II line. This article has grown from
extending this analysis to the TTSs observed with HST/COS. I would like to thank
an anonymous referee for suggesting the use of the H$_2$ lines to set the
zero of the wavelength scale. This work has been
partly funded by the Ministerio de Economia y Competitividad of Spain through grant
AYA2011-29754-C03-C01.
\section{Introduction}
The fundamental building blocks of projective geometry
are the theorems of Pappus of Alexandria, who lived in the
fourth century A.D., and the theorem of Desargues, published
in Paris in 1629 by Girard Desargues, a French architect, engineer
and mathematician.
The celebrated Desargues perspective theorem in the plane,
over any field or skew field, states that when two triangles
are in perspective the meets of corresponding sides are collinear.
The theorem and its converse
can be proven by using
coordinates or by invoking the principle of duality in projective geometry.
In Coxeter (\cite{coxeter}) the author writes:
\begin{itemize}
\item[]
Is it possible to develop a geometry having no circles,
no distances, no angles, no intermediacy (or ``betweenness''),
and no parallelism?
Surprisingly, the answer is Yes: what remains is projective geometry:
a beautiful and intricate series of propositions, simpler than
Euclid’s but not too simple to be interesting...
The original motivation for this kind of geometry
came from the fine arts. It was in 1425 that the
Italian architect Brunelleschi began to discuss the
geometrical theory of perspective which was consolidated
into a treatise by Alberti a few years later.
\end{itemize}
Vanishing points in drawing become mathematical points at infinity,
all lying on a line at infinity. For an elaboration of this
we refer to ``Art and Geometry'' by William Ivins,
a former curator of the Metropolitan Museum of Art~\cite{ivins}.
Crannell and Douglas~\cite{crannell}
and Lord~\cite{lord} also provide interesting and related
background.
A brief overview of the connection of the
Desargues theorem with axiomatics is as follows.
We start with a point $V$ and line $l$ which may or may not
be on $V$ in a projective plane and study a possible
``central collineation'' $T$ that fixes $V$, all lines
through $V$ and all points on $l$.
Let $A$ be a point unequal $V$ and not on $l$.
Then $T(A)$ has to be a point $D$ on the line $VA$.
Let $B$ be chosen with $B$ not on $l$ or $VA$.
$T(B)$ is a point $E$ on $VB$.
Using the corresponding pair $\{A,D\}$ we see that $E$
must be the intersection of $BV$ and $DZ$ where $AB$ meets $l$ in $Z$.
Similarly, let $C$ be a point, not on $l$, chosen so that the lines
$VA$, $VB$, $VC$ are distinct.
To find $T(C)=F$ we can use the pair $\{A,D\}$.
Then $F$ is the point of intersection of $VC$ and $DW$,
where $AC$ meets $l$ in $W$. We can also use the pair $\{B,E\}$.
Then $F$ must be the point where $CV$ and $EU$
meet where $BC$ meets $l$ in $U$. Both constructions must give
the same answer for $F$.
Thus, for the central collineation to exist, the Desargues theorem
must hold for the triangles $ABC$, $DEF$, in perspective from $V$,
since the intersections of the corresponding sides must all lie
on the line $l$.
Conversely, if for all triangles $ABC$, $DEF$ in perspective
from $V$, the intersections of corresponding lines lie on a
line then there exists such a central collineation $T$,
fixing $V$, all lines on $V$ and a line $l$ pointwise,
where $l$ contains the intersections of corresponding lines.
This suggests an indirect, and involved, proof of the
extended Desargues theorem in any dimension.
However, the emphasis here is on synthetic reasoning and the
resulting configurations. From~\cite[p.\ 141]{rota}:
``The proof of Desargues’ theorem of projective geometry
comes as close as a proof can to the Zen ideal.
It can be summarized in two words: `I see'.''
The topic of geometrical configurations has undergone a resurgence
in recent years: see for example
Conway and Ryba~\cite{conway},
Luotoniemi~\cite{luotoniemi}
and
G\'evay~\cite{gevay}.
In this paper, inter alia, we show that the analogue of the
Desargues theorem holds in all dimensions over infinite and
finite fields of sufficiently large order. The result is that
if two simplexes with no common points or faces are in perspective
from a point then the intersections of corresponding $t$-spaces
are $(t-1)$-spaces, all lying in a hyperplane $H$, for $t=1,2,\ldots,n-1$.
The simple, synthetic proof of this result, and the converse,
in Section~\ref{section:DesInNDim}, valid in all dimensions,
only assumes that pairs of corresponding edges meet in a point.
The second proof is based on arcs as in~\cite{BruenBruenMcQuillan}.
In the planar case, although several authors use a 5-point in
3 dimensions the proofs can still be quite complicated.
The elegant proof in~\cite{conway} uses a different approach -
see Section~\ref{section:fourthProof}.
Regarding previous work we mention that in
1916~\cite[p.\ 43-44]{veblenYoungVol1} the authors offer a
proof for $n=3$. The accompanying diagram for this 3-dimensional
case includes 15 points and many lines. There are many interesting
recent papers, too numerous to cite, in the general area.
The paper by G\'evay includes several references. The
papers by Luotoniemi on models of important configurations
such as the Desargues configuration and the double six
are informative
and very helpful for visualization.
A potential problem with synthetic proofs in geometry is
unanticipated coincidences of points or collinearities of lines.
Concerning the classical Hessenberg theorem, showing that
Pappus implies Desargues, we have the following in Pedoe -
see Introduction (\cite{pedoe}),
``It should interest those who may be disposed to believe
that all outstanding problems in classical projective geometry
have been solved to note that Pickert in
his {\em Projektive Ebenen} (Berlin 1955) lists eight defective
versions of the classical Hessenberg theorem.
Two of these defective proofs are by Hessenberg himself''.
In~\cite{veblenYoungVol1}, for the case $n=3$,
although they do not state it, the authors assume
that the two simplexes do not share a face.
If the simplexes have a face in common the above result
is false for the case $t=n-1$. Merely assuming perspectivity
from a point is not enough.
In \cite[p.\ 54, problem 26]{veblenYoungVol1},
the writers enunciate the extension of Desargues theorem
in $n$ dimensions. No proof is offered. They cite research
by A.~Cayley
``Sur quelques th\'eor\`emes de la g\'eom\'etrie de position'',
Crelle’s Journal, vol.~31, (1846): Collected papers,
Volume 1, p.~317, and also a paper by G.~Veronese,
``Behandlung der projectivischen Verh\"altnisse der R\"aume
von verschiedenen Dimensionen durch das Princip des
Projicirens und Schneidens'', {\em Math Annalen},
vol.~34, 1889 together with a paper by W.~B.\ Carver~\cite{carver}.
In this paper the author states that Cayley's paper
describes sections by the plane or 3-dimensional space of
the complete $n$-point in higher dimensions and that the
Veronese paper is concerned with ``the configurations
thus obtained in $r$ dimensions''. In a footnote Carver
refers to the Desargues theorem in the plane as an
``incidental occurrence of these configurations''.
On line~7 the author writes: ``both Cayley and Veronese
state that these same configurations can also be obtained
as projections of higher-dimensional figures''.
It appears that no proof of the extended Desargues Theorem
has ever been written down.
Apart from the sketch of the proof above, using
homogeneous coordinates, we offer two further,
different proofs of the extended theorem, valid for all $n$.
In the 1964 paper by S.~R.\ Mandan~\cite{mandan}
entitled ``Desargues Theorem in $n$-space'' the author
describes a result concerning two simplexes of size $n+2$ in
$(n+1)$-space which, between them, span a $2n$-space or a
$(2n+1)$-space. For $n=1$ the Desargues theorem in the plane
follows from one of the cases. However, the extended Desargues theorem
deals with two simplexes in an $n$-space which therefore
span just that $n$-space.
Thus, the result in~\cite{mandan}
does not apply to the result here.
In Rota~\cite[p.\ 145]{rota} referring to the first of the
6 volumes of H.~F.\ Baker's {\em Principles of Geometry}
he writes: ``After an argument that runs well over
one hundred pages, Baker shows that beneath the statement
of Desargues’ theorem, another far more interesting
geometric structure lies concealed. This structure is
nowadays called the Desargues configuration''.
The new results here on the configuration also form a
central part of this paper. We present the detailed structure
of the intersections of corresponding edges of the two
simplexes in perspective. Those points all lie in a hyperplane~$H$.
The consequence of the assumption that the simplexes do not
share a face plays a crucial part in our simple proof of
the extended Desargues theorem in Section~\ref{section:DesInNDim}.
For $n=3$ that configuration in the plane~$H$ consists of the 6 points
and 4 lines of a complete quadrilateral. We begin with two simplexes,
with no point or face in common, and the vertex of
perspective,
9 points in all. We then complete the configuration
by adjoining the set of six intersection points in~$H$ which are disjoint
from the simplexes and the vertex for a total of fifteen points.
Mirabile dictu, each and every one of these 15 points then
serves as the vertex of two simplexes in perspective with
no point or face in common, such that the intersections
of their corresponding edges are in the 15-point set and
form a complete quadrilateral lying in a plane.
The analogous result holds in all dimensions and this is
just the beginning of the fun. There is much more to be had!
As in~\cite{BruenBruenMcQuillan} over finite fields
we can also, in principle, enumerate the total number
of configurations of simplexes in perspective in $n$ dimensions
since arcs can be enumerated. The use of arcs clarifies
the classical idea of ``points in general position''.
Because of the method used in labelling points we know
that points in the configuration are distinct. There is
no concern about unexpected possible coincidences of points
or collinearities of triples such as those detailed in
Pedoe in the proof of the Hessenberg theorem.
The inherent combinatorial reciprocity in the configurations
can frequently be realized as a geometrical polarity,
at least when the characteristic is not~2.
Working over general fields, and not just the real or
complex numbers, adds fresh insights and opens up new problems.
For example the question of the number of self-conjugate
points also arises. In the plane there are at most 4 such
points: this is achieved only in
characteristic~3 \cite{BruenBruenMcQuillan},
\cite{bruenMcQGeomConfigs},
\cite{bruenMcQFourSC}.
The issues above will be discussed in a future paper.
\section{Preliminaries: Definitions and Notation.}
\label{section:preliminaries}
a. {\em Dimension.}
In this paper the dimension of a projective space
is the projective dimension which is one less than the rank
of the underlying vector space. For example a projective plane
has dimension~2 while the underlying vector space has rank~3.
The space generated by subspaces $A, B$ - their
union or join - or by a set of points $S$ is
denoted by $\langle A, B\rangle$ or $\langle S\rangle$ respectively.
The rank formula for vector subspaces states the
following:
$rank(E+F)= rank(E) + rank(F) - rank(E\cap F)$.
For projective subspaces, with $Dim$ denoting dimension,
we have a similar dimension formula as follows:
$Dim \langle E, F\rangle = Dim\,E + Dim\,F - Dim (E\cap F)$, where
$\cap$ denotes the intersection of the spaces $E,F$.
[A word of caution: if the intersection of the underlying vector spaces is the
zero vector, its vector rank is zero. To make the formula work in the
projective case, the dimension of the empty intersection is taken to be
one less than zero, i.e., $-1$. A test case is afforded by a
set of 2 skew lines in 3 dimensions.]
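For instance, for two skew lines $E,F$ in $\Sigma_3$ the intersection is empty and the formula correctly gives $Dim \langle E, F\rangle = 1+1-(-1)=3$: two skew lines generate the whole 3-space.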
b. {\em Simplex.}
In $\Sigma_n$, the projective space of dimension $n$, a
simplex is a set $X$ of $n+1$ points which generate the space.
This implies that each $t$-subset of $X$ generates a $(t-1)$-space.
Dually, we can also describe a simplex as a set of $n+1$
hyperplanes or faces, that is, subspaces of
dimension $n-1$ generated by subsets of $X$ of size $n$.
In the plane a simplex is a set of 3 non-collinear points.
The faces are the 3 lines joining pairs of points.
The 3 points or 3 lines are different descriptions of
what is really the same object, i.e., a triangle with
its points and lines. A simplex is a self-dual concept.
The line joining a pair of points of the simplex is called an edge.
Each subset of $t$ points of $X$ generates a subspace of
dimension $t-1$ for $t$ lying between 2 and $n$.
Dually, the intersection of m faces is a subspace of
dimension $n-1-[m-1]$, i.e., of dimension $n-m$, $m=1,2,\ldots$.
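For example, in dimension 3 two faces meet in a line ($m=2$, dimension $3-2=1$) and three faces meet in a point ($m=3$, dimension $3-3=0$).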
In dimension 3 a simplex is a set $X$ of 4 points,
$X=\{1,2,3,4\}$, not all lying in a plane.
The four faces are the 4 triangles generated by the
sets $\{1,2,3\},\{1,2,4\},\{1,3,4\}, \{2,3,4\}$.
The simplex can be visualized by drawing a figure
where the point 4 is above the plane generated by $1,2,3$.
It represents a tetrahedron and, as mentioned, is
self-dual - \cite{maxwell}.
A reciprocity can be set up by mapping each point to a face.
For example we can map 1 to the opposite
face $\langle234\rangle$, 2 to $\langle 134\rangle$, 3
to $\langle 124\rangle$ and 4 to $\langle 123\rangle$.
Then $\langle1,2\rangle$ maps to the intersection
of $\langle 234\rangle$ with $\langle1,3,4\rangle$,
i.e., to $\langle 34\rangle$, and so on.
\section{The Desargues theorem in $n$ dimensions.}
\label{section:DesInNDim}
We start with two simplexes
$$A= \{A_1,A_2,\ldots,A_n,A_{n+1}\}\hbox{ and }
B= \{B_1, B_2,\ldots,\allowbreak B_{n+1}\}$$
in $\Sigma_n$, an $n$-space.
$A, B$ are perspective from the vertex $V$ if there are $n+1$ lines
on $V$ with each line containing the points
$A_i, B_i$ for $i=1,2,\ldots,n, n+1$.
\begin{theorem}
Let $A,B$ denote two simplexes in $\Sigma_n$ which
do not share a point or face. Assume there is a
correspondence between the points $A_i, B_i$ such
that the edge joining $A_i$ to $A_j$ intersects
the edge joining $B_i$ to $B_j$ in a point, for $1\leq i<j\le n+1$.
Then $A, B$ are in perspective from a vertex $V$.
\label{theorem:simplexesInPersp}
\end{theorem}
\begin{proof}
A priori, a pair of corresponding edges might be the same
line or they might be skew. This is ruled out by the hypothesis.
The result holds for $n=2$ by the planar Desargues theorem.
We assume that $n\ge 3$.
By assumption, the edges $A_1A_2$ and $B_1B_2$ are
distinct and meet in a point. Thus the lines
$A_1B_1$ and $A_2B_2$ meet in a point $V$.
Let $A_3,B_3$ be any remaining pair of corresponding points.
We claim that triangles $A_1A_2A_3$ and $B_1B_2B_3$ are not coplanar.
For suppose they both lie in a plane $\pi$. Let $C_4$ be the point
of intersection of the lines $A_3A_4$ and $B_3B_4$.
If $C_4$ were in $\pi$ then $A_4$, lying on the line $A_3C_4$, would be in $\pi$ [as would $B_4$].
This contradicts the simplex property.
Thus the sets of points $\{A_1,A_2,A_3,A_4\}$ and
$\{B_1,B_2,B_3,B_4\}$ both lie in the same 3-dimensional
space generated by $A_1,A_2,A_3,C_4$.
Iterating this we end up with two corresponding faces
namely
$$\{A_1,A_2,\ldots,A_n\}\hbox{ and }
\{B_1,B_2,\ldots,B_n\}$$
lying in the same $(n-1)$-space.
Then $A,B$ have a common face. But this contradicts the hypothesis.
Thus the triangles are not coplanar and lie in two different planes.
Since the line $A_i A_j$ intersects the line $B_iB_j$
for $1\le i<j\le 3$, the triangles are perspective from the line of
intersection of the two planes. Thus the triangles
$A_1A_2A_3$ and $B_1B_2B_3$ are in perspective from
a point~\cite[2.31]{coxeter}.
Therefore $V$ lies on $A_3B_3$.
But $A_3$, $B_3$ are an arbitrary pair of corresponding points.
Thus all lines $A_i B_i$ pass through $V$, for $i=1,2,\ldots,n,n+1$.
\end{proof}
\begin{lemmaN}
The point $P= P_{ij}=A_iA_j \cap B_iB_j$ is distinct
from the point $P_{rs}$, defined similarly
with $r,s$ replacing $i,j$ unless $\{i,j\}= \{r,s\}$.
\label{lemma:pijDistinctFromPrs}
\end{lemmaN}
\begin{proof}
If $i=r$ and $j$ is not equal to $s$ then the points
$A_i,A_j,A_s$ are collinear contradicting the fact
that they are part of a simplex.
If $\{i,j,r,s\}$ consists of 4 distinct numbers
this means that the 4 points $A_i, A_j, A_r,A_s$ all
lie in the plane containing $P$ and the 2 lines $A_i A_j$
and $A_r A_s$. But this contradicts the fact that the 4 points
form a simplex if $n=3$, or a partial simplex for $n=4, 5,\ldots$.
\end{proof}
In what follows $\alpha_k$, $\beta_k$ are corresponding faces of $A,B$.
\begin{theorem}
\label{theorem:simplexesIntersection}
Let $A,B$ denote simplexes in the $n$-dimensional space $\Sigma_n$ as in
Theorem~\ref{theorem:simplexesInPersp}.
Then
\begin{enumerate}[a.]
\item
\label{theorem:simplexesIntersectionParta}
The intersections of corresponding $t$-spaces are $(t-1)$-spaces,
for $t=1,2,\ldots,n-1$.
\item
\label{theorem:simplexesIntersectionPartb}
The spaces $\alpha_i\cap \alpha_j$ and $\beta_i \cap \beta_j$
together generate an $(n-1)$-space, a hyperplane of
$\Sigma_n$,
where $i,j$ are distinct.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of a.]
The result holds, by hypothesis, for $t=1$.
Let $t=2$. We have 2 triangles $A_1,A_2,A_3$ and $B_1,B_2,B_3$.
Each edge of the triangle from $A$ meets the corresponding edge
from $B$ in a point. The line of intersection of the planes of
the triangles is a line containing 3 distinct
points $P_{ij}$, $1\le i,j \le 3$.
Let $t=3$. We have two tetrahedra $A,B$ with 4 pairs of corresponding faces.
Each face contains a triangle. Each corresponding pair of faces
intersect in a line.
From Lemma~\ref{lemma:pijDistinctFromPrs}
no two of the lines are the same.
Thus, for $t=3$, the dimension of the
intersection is at least two.
To establish the result we induct on $t$.
Assume, by induction, that the intersection of two $t$-spaces
has dimension $t-1$. The intersection of two
corresponding $(t+1)$-spaces contains the union of
the intersections of corresponding pairs of $t$-spaces.
Each such pair intersects in a $(t-1)$-space.
Distinct pairs yield distinct intersections from
Lemma~\ref{lemma:pijDistinctFromPrs}.
Thus the dimension of the intersection is at least~$t$.
Since each of the two corresponding $(t+1)$-spaces has dimension $t+1$,
the dimension formula then shows that the space they generate has
dimension at most $2(t+1)-t=t+2$.
From the argument in the proof of
Theorem~\ref{theorem:simplexesInPersp}
the dimension of that union of the two subspaces is at least $t+2$.
We conclude that the dimension of the union is exactly $t+2$ and that of
the intersection is exactly $t$, completing the induction. Hence the
intersection of corresponding $t$-spaces has dimension $t-1$ for all
values of $t$ between~1 and $n-1$.
\end{proof}
\begin{proof}[Proof of b.]
$Dim (\alpha_i\cap \alpha_j)$ and $Dim (\beta_i\cap \beta_j)$
are $n-2$, as each of these intersections is generated by the $n-1$
points common to the two faces.
From part~a the dimension of their intersection is $n-3$.
From the dimension formula the dimension of their union is $n-1$.
\end{proof}
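For instance, for $n=3$ the spaces $\alpha_i\cap\alpha_j$ and $\beta_i\cap\beta_j$ are corresponding edge lines; they meet in a point ($n-3=0$) and generate a plane, a hyperplane of $\Sigma_3$.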
\begin{theorem}
\label{theorem:twoSimplexesFourParts}
Let $A,B$ denote two simplexes in perspective from a
vertex $V$ in $\Sigma_n$ which have no common points or faces.
Then
\begin{enumerate}[a.]
\item
The intersection points of corresponding edges form a set $S$ of
$n+1 \choose 2$
distinct points in $\Sigma_n$.
None of these points lie in~$A$ or~$B$ or are equal to~$V$.
\item
The intersection of corresponding $t$-spaces of $A,B$
is a $(t-1)$-space in $\Sigma_n$ for $t=1,2,\ldots,n-1$.
\item
The intersection of two corresponding faces of $A,B$
is an $(n-2)$-space which lies in a fixed hyperplane $v$ of $\Sigma_n$.
\item
In particular the set $S$ lies in $v$.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $A,B$ have no common points the lines $A_iB_i, A_jB_j$,
for distinct $i,j$, exist, are distinct and contain $V$.
It follows that lines $A_i A_j$ and $B_i B_j$ intersect in a
unique point which cannot be $V$ and
which does not lie in~$A$ or~$B$. From
Lemma~\ref{lemma:pijDistinctFromPrs},
$S$ contains ${n+1 \choose 2}$ distinct points.
This proves part~a.
Since corresponding edges meet in a point,
part~b follows from
Theorem~\ref{theorem:simplexesIntersection}
part~\ref{theorem:simplexesIntersectionParta}.
The dual of
Theorem~\ref{theorem:simplexesInPersp}
asserts that the intersection of pairs of corresponding
faces of $A, B$ lies in a fixed hyperplane $v$ provided that
Theorem~\ref{theorem:simplexesIntersection},
part~\ref{theorem:simplexesIntersectionPartb},
holds.
Since
Theorem~\ref{theorem:simplexesIntersection}
applies, in particular, to simplexes $A, B$ in perspective
from a point this proves
part~c.
Each point of $S$ lies in the intersection of (several pairs of)
corresponding faces of $A, B$. This proves part~d.
\end{proof}
We have now shown a (strong) converse to
Theorem~\ref{theorem:twoSimplexesFourParts},
as follows.
\begin{theorem}
Let $A,B$ be simplexes which do not share a common point or face.
Assume that there is a correspondence between points $A_i, B_i$
such that lines $A_i A_j$, $B_i B_j$ meet in a point.
Then $A,B$ are perspective from a point and parts a, b, c, d of
Theorem~\ref{theorem:twoSimplexesFourParts},
hold.
\end{theorem}
\section{Arcs and coordinate systems}
\label{section:ArcsAndCoordSystems}
In an $n$-dimensional projective space $\Sigma_n$ a
{\em coordinate system} is a set of $n+2$ points such that any
subset of size $n+1$ is a simplex~\cite{hirschfeld}.
A simplex and a coordinate system are examples of arcs.
An {\em arc} in $\Sigma_n$ is a set of $m$ points, with $m$ at least $n+1$,
with the property that every subset of size $n+1$ forms a simplex.
This implies that any subset of the arc of size $t$ generates a
subspace of dimension $t-1$ when $t$ is at most $n+1$.
If the underlying field is finite, of order $q$, it is conjectured that,
if $n \le q$, the maximum size of an arc in $\Sigma_n$
is $q+1$ if $q$ is odd, and $q+1$ or $q+2$ if $q$ is even:
see~\cite{BruenThasBlokhuis}.
\begin{theorem}
For each $n$ and each given hyperplane $H$ of $\Sigma_n$ there exists a
coordinate system $\Gamma_{n+2}$ in $\Sigma_n$ none of whose points lies
in $H$, so long as the underlying field $F$ has order greater than~2.
\end{theorem}
\begin{proof}
Using homogeneous coordinates $(x_1,x_2,\ldots x_n, x_{n+1})$ let $K$
be the hyperplane with equation $x_1+x_2+x_3+\cdots+x_n+ x_{n+1}=0$.
We choose the first $n+1$ points $P_i$ of the arc $\Gamma_{n+2}$ to
have 1 in position $i$ and zeros elsewhere, $i=1,2,3,\ldots,n, n+1$.
These points form a simplex. Point number $n+2$ is $(1,1,1,\ldots,1,z)$,
where $z$ is any non-zero number in $F$. This point
is off $K$ provided $n+z$ is non-zero. Such a $z$ can always be chosen
if $F$ has order greater than~2. These $n+2$ points form a coordinate system
with no point in $K$.
Finally we use a collineation of $\Sigma_n$ mapping $K$ to the
given hyperplane $H$. This will map the set of $n+2$ points
above to a coordinate system having none of its points on $H$.
\end{proof}
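For instance, for $n=2$ over the field of 3 elements the constraints are $z\neq 0$ and $2+z\neq 0$, i.e., $z\neq 1$; taking $z=2$ gives the point $(1,1,2)$, whose coordinate sum is $4=1\neq 0$, so that all 4 points of the coordinate system avoid $K$.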
\begin{remarkN}
If $F$ has order~2 then
Theorem~\ref{theorem:simplexesInPersp}
is false. To see this let $n=3$. Let $W$ be a 5-arc in
$\Sigma_3=PG(3,2)$. Suppose the result holds.
Then as in~\cite{BruenBruenMcQuillan}
the section by $PG(2,2)$ yields a configuration with 10 points.
But $PG(2,2)$ only has 7 points.
\end{remarkN}
Henceforth we assume that $F$ has order greater than~2.
\section{The section of a coordinate system in $\Sigma_{n+1}$ by a
hyperplane $H=\Sigma_n$.}
\label{section:secCoordSysByHyperplane}
We start with $\Sigma_n$ embedded as a hyperplane $H$ in $\Sigma_{n+1}$.
Let $\Gamma=\Gamma_{n+3}$ denote a coordinate system in
$\Sigma_{n+1}$ consisting of $n+3$ points $1,2,\ldots,n+2, n+3$
where none of these points lie in the hyperplane $H$.
By joining pairs of points of $\Gamma$ we
generate ${n+3 \choose 2}$ lines in $\Sigma_{n+1}$.
The section of the line joining points $i,j$ of the
arc by $H$ is the point denoted by the unordered pair $(i,j)$.
All told this yields ${n+3\choose 2}$ points in $H$, with $i,j$
lying between 1 and $n+3$.
We claim that these points are distinct. From the arc property
no 3 points of $\Gamma$ are collinear.
Further, suppose that the
line joining~1 to~2 and the line joining~3 to~4, say, were to meet
$H$ in the same point. Then the 4 points $1,2,3,4$ would lie in a plane,
which is forbidden by the arc property.
A triangle formed from 3 points of $\Gamma$, say $\{1,2,3\}$
has as its section by $H$ three collinear points $(1,2),(1,3),(2,3)$.
In general, the points $(i,j)$ and $(k,l)$ are collinear with a third
point of the section if and only if they have a symbol in common, in
which case they lie on a line of the section containing exactly 3 of its points.
\begin{exampleN}
\label{example:completeQuadrilateral}
Suppose we section the figure formed from 4 of the 6 points in~$\Gamma$,
say the points in $S=\{1,2,3,4\}$, by $H$.
These points form a tetrahedron with 4 points,
6 lines and 4 planes in $\Sigma_3$.
The section of the figure has
${4 \choose 2}$, i.e.,
6 points and 4 lines. These lines come from the 4 triangles
formed by the 4 triples in $S$.
Any pair of triples share a pair of points.
Thus any 2 of these 4 lines in $H$ meet in a point,
and no 3 lines are concurrent.
This figure lies in a plane in H and is a
complete quadrilateral \cite[p.\ 7]{coxeter}, \cite[p.\ 7]{lord}.
If we project the tetrahedron to $H$ from a general point of
$\Sigma_3$ we end up with a planar figure having,
dually, 6 lines and 4 points which is known as a
complete quadrangle \cite[p.\ 7]{coxeter}, \cite[p.\ 18]{lord}.
If we project just the points in the set $\{2,3,4\}$
from the point~1 to $H$ we obtain a triangle with vertices
$(1,2),(1,3),(1,4)$.
From these 3 points we generate 3 more,
namely $(2,3),(2,4),(3,4)$.
Thus we end up with the same set of six points as the section of
$\{1,2,3,4\}$ by $H$.
\end{exampleN}
\begin{theorem}
\label{theorem:t+1arcGamman+3}
Let $S$ denote a subset of size $t+1$ of an
arc $\Gamma_{n+3}$ in $\Sigma_{n+1}$. Then
\begin{enumerate}[a.]
\item
$S$ generates a subspace of dimension $t$ in the space
$\Sigma_{n+1}$.
\item
The section of $\langle S\rangle$ by the hyperplane
$H=\Sigma_n$ generates a subspace $M$ of dimension $t-1$ in $H$.
$M$ is also generated by the projection of $S$ to $H$ from
any point of $S$.
\end{enumerate}
In summary the section of (the space generated by) a set $S$
of $t+1$ points of the $(n+3)$-arc in
$\Sigma_{n+1}$ by $H$ is a space of dimension $t-1$ in
$H=\Sigma_n$.
\end{theorem}
\begin{proof}
Part~a follows from the definition of an arc.
For Part~b, let
$S=\{1,2,3,\ldots,t, t+1\}$.
Denote by $L$ the subspace generated by
$\{2,3,\ldots,t, t+1\}$.
$L$ has dimension $t-1$.
Projecting
$L$ from the point~1,
which is not in $L$, we obtain a $(t-1)$-dimensional space $M$
generated by the $t$ points $(1,2),(1,3),\ldots,(1,t), (1,t+1)$
in $\Sigma_n$. Since $(1,2),(1,3)$ are in $M$
the point $(2,3)$ on the line joining them is also in $M$.
Then the points $(2,1)$ [$=(1,2)$], $(2,3), (2,4),\ldots,(2,t), (2,t+1)$
are in $M$. Proceeding, we see that $M$ contains all points
$(i,j)$ where $i, j$ lie between 1 and $t+1$ so $M$
is generated by the section of $\langle S\rangle$.
We generate the same space $M$ by using any point in $S$
instead of the point~1.
Alternative proof of~b.
As shown in
Example~\ref{example:completeQuadrilateral}
for the case $n=3$,
$M$ is generated by
$\{(1,2), \ldots, (1,t), (1,t+1)\}$, and has
dimension $x$ say.
Since the point~1 is not in
$\Sigma_n$ it is not in $M$. Thus
$K=\langle 1,M\rangle$ has dimension $x+1$.
Since $K$ contains the points 1 and $(1,j)$ it contains
the line joining them and so the point $j$, for
$j=2,3,\ldots,t+1$.
Thus $K$ contains the $t+1$ arc points $1,2,\ldots,t+1$ and has dimension $t$.
Therefore $t=x+1$, so $x=t-1$.
Thus $M$, the section of the $(t+1)$-set $S$ of the arc
by $H$ generates a space of dimension $t-1$.
\end{proof}
From the above the section of the space generated by a
$(t+1)$-set of the arc $\Gamma$ by $H$ is a
$(t-1)$-space.
We also have the following result.
\begin{theorem}
Given the arc $\Gamma$ with $n+3$ points in
$\Sigma_{n+1}$ the section by the hyperplane
$H=\Sigma_n$ has ${n+3\choose 2}$ points, ${n+3\choose 3}$ lines,
${n+3\choose 4}$ planes, $\ldots$, and
${n+3\choose n+1}$ spaces of dimension $n-1$
i.e. hyperplanes in $H=\Sigma_n$.
\end{theorem}
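As an illustrative aside (not needed for the proofs), the incidence rules above are purely combinatorial and easily checked by machine. The following Python sketch, in which all names are our own, verifies the counts and the symbol rule for $n=2$, where the section is the classical Desargues configuration of 10 points and 10 lines:
\begin{verbatim}
from itertools import combinations
from math import comb

n = 2
symbols = range(1, n + 4)                   # the n+3 = 5 arc points
points = list(combinations(symbols, 2))     # section points (i,j)
lines  = [set(combinations(t, 2))           # lines from triples {i,j,k}
          for t in combinations(symbols, 3)]

assert len(points) == comb(n + 3, 2) == 10
assert len(lines)  == comb(n + 3, 3) == 10
# two points are collinear with a third iff they share a symbol;
# each line carries 3 points and each point lies on 3 lines
assert all(len(l) == 3 for l in lines)
assert all(sum(p in l for l in lines) == 3 for p in points)
\end{verbatim}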
\section{From arcs to simplexes in perspective.}
\label{section:arcsToSimplexesInPersp}
We continue with the same notation.
$H=\Sigma_n$ is a hyperplane in
$\Sigma_{n+1}$.
$\Gamma=\Gamma_{n+3} = \{1,2,3,\ldots,n+3\}$
is an arc of $n+3$ points in the space $\Sigma_{n+1}$.
\begin{theorem}
\label{theorem:n+1PointsSimplex}
Let $S=\{(1,3), (1,4),\ldots,(1,n+2), (1,n+3)\}$.
Then the $n+1$ points of $S$
generate a subspace of dimension $n$
and form a simplex in~$H$.
\end{theorem}
\begin{proof}
This follows as in the proof of
Theorem~\ref{theorem:t+1arcGamman+3}.
\end{proof}
Similarly we have:
\begin{theorem}
\label{theorem:similarlyPointsSimplex}
Let $T= \{(2,3), (2,4),\ldots,(2,n+2),(2,n+3)\}$.
Then the points in $T$ form a simplex
in $H$.
\end{theorem}
\begin{theorem}
\label{theorem:STAugmentedBy12Arc}
The sets $S,T$, augmented by the point $(1,2)$,
form a coordinate system yielding an arc of
size $n+2$ in $H$ in each case.
\end{theorem}
\begin{proof}
$S$ is a simplex. We show that any $n$-subset $Z$ of $S$,
when augmented by $(1,2)$, is a simplex; by symmetry it suffices to treat one such subset.
Let $Z=\{(1,3),(1,4),\ldots,(1,n+2)\}$.
We must show that $X=\{(1,2),(1,3),\ldots,(1,n+2)\}$
is a simplex. $X$ is the projection of
$U=\{2,3,4,\ldots,n+2\}$ from the point 1 onto $H$.
Since $\langle U\rangle$ has dimension $n$
we conclude that $X$ is a set of $n+1$ points
that generates a space of dimension $n$
i.e. $X$ is a simplex in $H$.
Similarly $T$, augmented by $(1,2)$, forms a coordinate system.
\end{proof}
\begin{theorem}
\label{theorem:simplexesPointsFaces}
\begin{enumerate}[a.]
\item
The sets $S,T$ are simplexes which are in perspective from
the vertex $V=(1,2)$.
\item
$S,T$ have no points in common.
\item
$S$ and $T$ have no faces in common.
\end{enumerate}
\end{theorem}
\begin{proof}
From
Theorem~\ref{theorem:n+1PointsSimplex}
and
Theorem~\ref{theorem:similarlyPointsSimplex},
$S$ and $T$ are simplexes.
They are in perspective from the point $V=(1,2)$ because $(1,2)$
is collinear with points $(1,i)$ and $(2,i)$ for $i$
between 3 and $n+3$.
The labels of their points show that $S$ and $T$ have no points in common.
For part~c, suppose that $S$ and $T$ have a face
$\Lambda$ in common.
The lines joining corresponding points $(1,i)$, $(2,i)$ lying in
$\Lambda$ all contain the vertex $V$, so that $V$ lies in $\Lambda$.
Then the face of $S$ lying in $\Lambda$, augmented by $V$, is not a
simplex in $H$, contradicting
Theorem~\ref{theorem:STAugmentedBy12Arc}.
\end{proof}
\section{From simplexes in perspective to arcs.}
\label{section:simplexesToArcs}
Our goal now is to show that two simplexes in perspective
which have no common points or faces arise from an arc as
developed in
Section~\ref{section:arcsToSimplexesInPersp}.
\begin{theorem}
\label{theorem:twoSimplexesNoPtNoFaceArcHyperplane}
In $\Sigma_n$ let
$$A =\{A_3,A_4,\ldots,A_{n+2}, A_{n+3}\}$$
and
$$B=\{B_3,B_4,\ldots,B_{n+2}, B_{n+3}\}$$
denote two simplexes in $\Sigma_n$
which are in perspective from a point $V$
and have no point in common.
Assume also that the two simplexes have no face
in common or, equivalently, that
$V$ does not lie on a face of $A$ or of $B$.
Then, as in
Theorem~\ref{theorem:simplexesPointsFaces},
$A,B$ arise from the section of the
space generated from an arc of size $n+3$ in
$\Sigma_{n+1}$ by $H=\Sigma_n$, an $n$-dimensional
hyperplane of $\Sigma_{n+1}$.
\end{theorem}
\begin{proof}
We choose a line $l$ on $V$ not lying in
$H$ and on it we choose two points
labelled $1,2$ which are different from $V$.
Define a set $\Gamma_{n+3}$ as follows:
Point~$i$ is the intersection of the lines $1A_i$ and $2B_i$,
$3\le i\le n+3$.
This is well-defined since, for each $i$, those two lines
lie in the plane spanned by the two distinct lines on $V$, namely the
line joining $V$ to the point~$1$ and the line joining $V$ to the point
$A_i$; hence the lines $1A_i$ and $2B_i$ are coplanar and meet in a point.
We claim that $\Gamma_{n+3} = \{1,2 ,3, 4,\ldots,n+2, n+3\}$
is an arc of size $(n+1) +2$ yielding a coordinate system
in $\Sigma_{n+1}$ such that none of its points lie in $\Sigma_n$.
We must show that every $(n+2)$-subset of $\Gamma_{n+3}$
generates the space $\Sigma_{n+1}$.
Such a subset must contain either 1, or 2, or both.
Case 1.
We show that the $(n+2)$-set
$S= \{1,3,4,\ldots,n+2, n+3\}$
generates $\Sigma_{n+1}$ as follows.
The $n+1$ points $\{A_3,A_4,\ldots,A_{n+3}\}$ of the simplex $A$
generate the $n$-space $\Sigma_n$.
This $n$-space is the projection of the space
$U=\langle \{3, 4,\ldots,n+2,n+3\}\rangle$
from the point~1.
Since projection from a point cannot increase dimension, and $U$ is
generated by $n+1$ points, $U$ has dimension exactly $n$.
Moreover the point~1 is not in $U$: otherwise $U$ would contain each
line $1i$ and so every $A_i$, hence $\Sigma_n$ together with the
point~1, forcing $Dim\, U\geq n+1$.
When we adjoin~1, the enlarged space, generated by
$\{1,3, 4,\ldots,n+2, n+3\}$, has dimension~$n+1$.
[Alternatively: the simplex $A$ spans the $n$-space $\Sigma_n$.
Since the point~$1$ is not in $\Sigma_n$,
adjoining it generates an
$(n+1)$-space, namely the space generated by
$\{1,3,4,\ldots,n+2,n+3\}$.]
Case 2.
The $(n+2)$-set $\{2,3,4,\ldots,n+2,n+3\}$
also generates $\Sigma_{n+1}$.
The proof is the same as for
Case~1 upon interchanging points $1, 2$.
Case 3.
We show that the set $\{1,2,3,\ldots,n+2\}$
generates $\Sigma_{n+1}$.
The $n$ points
$\{A_3, A_4,\ldots,A_{n+2}\}$
form a face of $A$ and generate a hyperplane $K$ of
dimension $n-1$ in
$\Sigma_n$.
By hypothesis, the point $V$ is not in $K$, so
$\langle K,V\rangle$ has dimension $n$.
Adjoining the point~$1$ to $\langle K,V\rangle$
yields an $(n+1)$-dimensional space $W$ in
$\Sigma_{n+1}$.
The points $1,2$ and $V$ are collinear.
So $W$
contains $1$ and $2$.
Since the points $1,i$ and $A_i$ are collinear,
$W$ also contains points $3,4,\ldots,n+1,n+2$.
Since $\langle 1,2,3,\ldots,n+1,n+2\rangle$
contains $K,V$ and the point~$1$ it is an $(n+1)$-space
contained in $W$ so it must be $W$.
In summary, the set
$\Gamma_{n+3}$ above is an $(n+3)$-arc in
$\Sigma_{n+1}$.
Moreover, no point of it lies in $\Sigma_n$.
To see this, note first that the points $1,2$ lie outside $\Sigma_n$.
Suppose that a point~$i$ other than the points $1,2$
lies in $\Sigma_n$.
Since $A_i$ lies in $\Sigma_n$ and the points
$1,i, A_i$ are collinear, this implies that the
point~$1$ is in $\Sigma_n$, which is a contradiction.
We examine the section of the arc
$\Gamma_{n+3}$ in
$\Sigma_{n+1}$ by the hyperplane $H=\Sigma_n$,
assigning new labels to the two simplexes, and to $V$, as follows.
$V$ is on the line joining points $1$ and $2$ and is relabelled $(1,2)$.
$A_i$ is on the line joining points $1$ and $i$ so it becomes $(1,i)$.
Similarly $B_i$ is now the point $(2,i)$ for
$i= 3,4,\ldots,n+3$.
Because the points $(1,i),(2,i)$ and $(1,2)$
are collinear we have that $A_i$ and $B_i$
are in perspective from the vertex $(1,2)$.
In summary the two simplexes $A, B$
are contained in the section of the arc
$\Gamma_{n+3}$ in $\Sigma_{n+1}$
which yields a set $X$ of
${n+3\choose 2}$
points in $\Sigma_n$.
$X$ contains two simplexes,
$A=\{(1,3), (1,4),\ldots,(1,n+3)\}$, $B=\{(2,3), (2,4),\ldots,(2,n+3)\}$.
They are in perspective from $(1,2)$ and they do not
share a point. The vertex of perspective
which is the point $(1,2)$, does not lie on
any face of $A$ or $B$.
Thus $A$ and $B$ do not share a face.
This proves
Theorem~\ref{theorem:twoSimplexesNoPtNoFaceArcHyperplane}.
\end{proof}
\section{An extension of the Desargues theorem.}
\label{section:extensionOfDesargues}
It is time to reap the benefit of the work in
Sections~\ref{section:ArcsAndCoordSystems}-\ref{section:simplexesToArcs}.
This section contains a second proof of the extended Desargues theorem
in all dimensions.
\begin{definitionN}
Let $A, B$ be simplexes of $\Sigma_n$ which have
no common points or faces. They are defined to be
in perspective from a hyperplane $v$ if there is
a correspondence between the points of $A,B$ (and
therefore the subspaces of $A,B$) such that the following holds:
the intersection of corresponding $t$-spaces of $A,B$
is a $(t-1)$-space lying in a fixed hyperplane $v$ for
$t=1,2,\ldots,n-1$.
\end{definitionN}
\begin{theorem}
\label{twoSimplexesNoPointNoHyperplaneTwoInPerspective}
Let $A, B$ denote two simplexes in the space
$\Sigma_n$ with no common point or hyperplane.
Then, if $A, B$ are in perspective from a point
they are in perspective from a hyperplane.
\end{theorem}
\begin{proof}
From
Theorem~\ref{theorem:twoSimplexesNoPtNoFaceArcHyperplane}
we may assume that $A,B$ are as follows:
$$A= \{(1,3), (1,4),\ldots,(1,n+3)\},$$
$$B=\{(2,3), (2,4),\ldots,(2,n+3)\}.$$
They are in perspective from the vertex $(1,2)$
since the line joining corresponding points $(1,i)$ and $(2,i)$
contains the vertex $(1,2)$,
$3\le i\le n+3$.
The edge of $A$ joining $(1,i)$ to $(1,j)$ contains the point $(i,j)$.
The edge of $B$ joining $(2,i)$ to $(2,j)$ contains the point $(i,j)$.
Moreover each point $(i,j)$ arises as the intersection of
two corresponding edges,
$3\le i,j\le n+3$.
Thus the set $S$ of points $(i,j)$, which is
the set of all intersections of corresponding edges of $A$ with $B$,
is found as the section of the configuration generated
by the $(n+1)$-set $U=\{3,4,\ldots,n+3\}$ in $\Sigma_{n+1}$
by the hyperplane $H=\Sigma_n$.
$U$ is an $(n+1)$-subset of the arc $\Gamma_{n+3}$ in $\Sigma_{n+1}$.
From
Theorem~\ref{theorem:t+1arcGamman+3} part~b,
$Dim(\langle S\rangle)= n-1$ and $S$ generates a hyperplane
in $H=\Sigma_n$.
The number of points in $S$ is ${n+1\choose 2}$.
More generally, let $Q,R$ denote sets of $t+1$
corresponding pairs of points of $A,B$.
For example let $Q=\{(1,3),(1,4),\ldots,(1,t+3)\}$
and let $R=\{(2,3),(2,4),\ldots,(2,t+3)\}$.
Then $Q,R$ each generate a $t$-space.
As above, in the case when $t=n$, the set of intersections of
all pairs of corresponding edges consists of the points $(i,j)$
for $i,j$ lying between $3$ and $t+3$ with $i\neq j$.
These points generate a space of dimension $t-1$ in $H$.
\end{proof}
The dual of
Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective},
where we use the reciprocity that interchanges points
and faces as mentioned in Section~\ref{section:preliminaries},
provides a converse to it as follows.
\begin{theorem}
Let $A, B$ be two simplexes in a
projective space $\Sigma_n$ with no common point
and no common hyperplane. Then if $A,B$ are in perspective
from a hyperplane they are in perspective from a point.
\end{theorem}
\section{The Configurations.}
\label{section:theConfigs}
To recap, we are working in $\Sigma_n$ which is contained
as a hyperplane $H$ in $\Sigma_{n+1}$.
The arc $\Gamma_{n+3}$ in $\Sigma_{n+1}$,
when sectioned by $H$, yields a set $W$ of
${n+3\choose 2}$ points
in $H$. As in
Section~\ref{section:arcsToSimplexesInPersp},
$W$ contains two simplexes $A, B$ accounting for $2(n+1)$ points.
The vertex of perspective is one point. Then, from
Section~\ref{section:extensionOfDesargues},
the intersections of pairs of corresponding edges yield
${n+1\choose 2}$
additional points in a hyperplane of
$H=\Sigma_n$.
This accounts for all points of the configuration!
The proof follows from the following identity.
\begin{equation}
{n+3 \choose 2} =2(n+1) +1 +{n+1 \choose 2}.
\end{equation}
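Indeed, the right-hand side equals $2n+3+\frac{n(n+1)}{2}=\frac{n^2+5n+6}{2}=\frac{(n+2)(n+3)}{2}={n+3 \choose 2}$.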
\bigskip
From the manner of assigning pairs $(i,j)$ to points in $H$,
different pairs yield different points. Thus the points of the
simplexes, the vertex and the edge-intersections
are distinct sets of points.
\begin{theorem}
\label{theorem:n+3choose2pointsVertexPersp}
Each of the ${n+3\choose 2}$ points
of the configuration is a vertex of perspectivity for a
pair of simplexes having no points or faces in common.
\end{theorem}
\begin{proof}[Proof 1.]
Choose any point $(i,j)$ in $H$.
It lies on the line joining points $i$ and $j$ of the arc.
It will be the vertex. The section by $H$ of the lines
joining $i,j$ to the other points of the existing arc
yields two simplexes in perspective from the point $(i,j)$.
The intersections of corresponding edges of the new pair
of simplexes also lie in the original configuration.
\end{proof}
\begin{proof}[Proof 2.]
Any permutation $T$ of $\{1,2,3,\ldots,n+3\}$ when applied to
the points of the configuration yields two simplexes in
perspective from a vertex, with the intersections
of corresponding edges lying in a hyperplane.
This is so because two points $(x,y)$ and $(z,w)$
lie on a line of the configuration, i.e., share a symbol,
if and only if their images under $T$ share a symbol.
\end{proof}
\begin{unnamed}
\label{unnamed:initialArcInSigma5}
The case $n=4$.
The underlying initial arc in $\Sigma_5$
has $5+2$, i.e. $7$, points named $1,2,3,\ldots,7$.
The section of lines joining pairs of points of the arc
by the hyperplane $H=\Sigma_4$ yields a set of 21 distinct points.
From
Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective}
the section of the lines joining pairs of points of
the 5-subset $X=\{3,4,5,6,7\}$ by $H$ generates a
$\Sigma_3$ subspace of $H$ denoted by $v$.
The section of (the lines generated by) $X$ contains
10 points $(i,j)$ where $i$ and $j$ are different elements of $X$.
These 10 points are the intersections of pairs of
corresponding edges of the given simplexes $A, B$ in $H$
as in Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective}.
\end{unnamed}
We mention some facts on the structure of these ten points.
Choose any of the ten points to be a vertex, say the point $(3,4)$.
Then we have two triangles namely $\{(3,5), (3,6), (3,7)\}$ and
$\{(4,5),(4,6),(4,7)\}$ which are in perspective from $(3,4)$.
From the arc property the triangles lie in different planes.
[If, for example, $(4,7)$ were in the plane
containing $(3,5),(3,6),(3,7)$
then the arc points $3,4,5,6,7$
would only generate a 3-space.]
The intersections of the corresponding triangle edges
lie on the line of intersection of the two planes.
The intersection points are $\{(5,6),(5,7),(6,7)\}$.
The two planes of the triangles
in perspective meet in a line in $v$.
Each line in $v$ descends from a 3-subset of the arc in $\Sigma_5$.
A 3-set such as $\{3,4,5\}$ lies in two 4-subsets of
$\{3,4,5,6,7\}$. Thus each line in $v$ lies in just 2 planes of $v$.
Each point such as $(3,4)$ lies on 3 lines formed from the triples
$\{345\},\{346\}, \{347\}$ and on 3 planes formed from the
4-sets $\{3456\} ,\{3457\},\{3467\}$.
The structure of the 10 points is symmetric in the
sense that any point is the vertex of perspective of
two triangles such that the intersection of pairs
of corresponding edges lie on a line, the axis of perspectivity.
As mentioned above each of the 21 points serves as a
vertex of perspective of two simplexes in $\Sigma_4$.
The intersections
of corresponding edges yield a set of 10 points
as described above. So we have 21 such sets of ten points.
\section{Self replication of configurations.}
\label{section:selfReplicationOfConfigs}
For want of better terminology we first
define what is meant by a ``semi-simplex''.
\begin{unnamed}
In $\Sigma_n$ a semi-simplex is defined to be a set
of $n$ points which generate an $(n-1)$-space.
A simplex is a set of $n+1$ points which generate the $\Sigma_n$.
We examine the configuration of a pair of semi-simplexes
in perspective and the intersections of corresponding edges.
$n=1$.
Here a semi-simplex pair is a set of 2 points $A,B$ on a line $l$.
Let $V$ be another point on $l$.
Then $A,B$ are in perspective from $V$.
So the configuration is a line with 3 points $A,B$ and $V$.
$n=2$.
In the plane a semi-simplex is a pair of points.
Let $A_1, A_2$ form a line $L_1$ and let $B_1, B_2$ form another line $L_2$ in the plane.
Let $V$ denote the intersection of $A_1B_1$ with $A_2B_2$.
We now have two semi-simplexes in perspective from $V$.
Next, the intersection of corresponding edges is the point
$C_3$ where the lines $A_1A_2$ and $B_1B_2$ meet.
In summary we have a complete quadrilateral with 6 points.
$n=3$.
A semi-simplex is a triangle. A pair of semi-simplexes
in perspective is simply a pair of triangles
in perspective
from a vertex. In our situation,
because of the arc property,
the triangles will not be coplanar. The general result is as follows.
\end{unnamed}
\begin{theorem}
\label{theorem:twoSimplexesArcPointsn+3choose3}
In $\Sigma_n$,
let $A,B$ denote two simplexes in perspective from a point $V$
such that $A,B$ share no points or hyperplanes. Let
$\Gamma= \Gamma_{n+3}$ denote the underlying arc in
$\Sigma_{n+1}$ giving rise to $A,B$ as in
Section~\ref{section:simplexesToArcs}.
Let $X_n$ denote the ${n+3 \choose 2}$
points $(i,j)$ in $\Sigma_n$ which are the section of lines joining points
$i,j$ of the arc $\Gamma$. Then
\begin{enumerate}[a.]
\item
$X_n$ consists of two simplexes $A, B$ in perspective,
the vertex of perspective, and a subset $Y_n$ of $X_n$
consisting of the intersections of pairs of corresponding edges of $A,B$.
\item
The points of $Y_n$ form a semi-simplex pair $C,D$
in perspective from a vertex in $\Sigma_{n-1}$.
This pair, the vertex of perspective and the intersections
of pairs of corresponding edges
of $C,D$ account for all the points in $Y_n$.
\item
The intersections of pairs of corresponding edges of $C,D$
form a semi-simplex pair $E,F$
in
$\Sigma_{n-2}$
which are in perspective from a vertex.
\end{enumerate}
\end{theorem}
\begin{proof}
The general case is analogous to the configuration of 10 points in
$\Sigma_3$ for the case $n=4$ in~\ref{unnamed:initialArcInSigma5}.
$X_n$ consists of the ${n+3 \choose 2}$
points $(i,j)$ for $i,j$ between $1$ and $n+3$.
As in
Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective}
the set of intersections of pairs of corresponding edges is
the subset $Y_n$ of $X_n$
consisting of the points $(i,j)$ with $i,j$ lying between $3$ and $n+3$.
$Y_n$ lies in $K$, with $K$ a hyperplane of
$\Sigma_n$ of dimension $n-1$, and contains ${n+1 \choose 2}$
points. If we choose, say, the point $(3,4)$ as vertex
we have two semi-simplexes in perspective from it,
namely $\{(3,5),(3,6),\ldots,(3,n+3)\}$ and
$\{(4,5),(4,6),\ldots,(4,n+3)\}$.
The set $Y_n$, lying in $\Sigma_{n-1}$,
contains these 2 semi-simplexes in $\Sigma_{n-1}$,
each having $n-1$ points,
along with the point $(3,4)$ as a vertex of perspective
and contains also ${n-1 \choose 2}$
points of intersection of corresponding edges which lie in a
$\Sigma_{n-2}$.
This accounts for all points in $Y_n$
as follows from the following identity:
\begin{equation}
{n+1 \choose 2} = 2(n-1) +1 + {n-1 \choose 2}.
\end{equation}
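Indeed, the right-hand side equals $2n-1+\frac{(n-1)(n-2)}{2}=\frac{n^2+n}{2}={n+1 \choose 2}$.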
This identity is simply
Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective}
with $n$ replaced by $n-2$.
Part~c follows from an iteration of the above procedure.
\end{proof}
\section{Some further extensions of Desargues theorem.}
\label{section:furtherExtensionsOfDes}
Using the above notation,
we consider 3 semi-simplexes $A, B,C$ in $\Sigma_n$
such that each pair is in perspective from one of
three vertices of perspective that lie on a line in $X_n$.
Recall that $X_n$ is the set of ${n+3 \choose 2}$
points obtained from the arc $\Gamma_{n+3}$
consisting of the points $1,2,\ldots,n+2,n+3$.
Without loss of generality the 3 vertices are $(1,2)$, $(1,3)$
and $(2,3)$. Neither one of the pair of semi-simplexes
in perspective from $(1,2)$ is allowed to contain $(1,3)$ or $(2,3)$.
The pair $A,B$ of semi-simplexes, perspective from $(1,2)$
must look like $\{(1,4),(1,5),\ldots,(1,n+2) ,(1,n+3)\}$,
$\{(2,4),(2,5),\ldots,(2,n+2), (2,n+3)\}$.
If the simplex $C$ is in perspective with $B$ from the point $(2,3)$ then
$C=\{(3,4), (3,5),\ldots,(3,n+2), (3,n+3)\}$.
The pair $A,C$ are then
in perspective from the point $(1,3)$.
We now have the following result which is shown for the
case $n=3$ in \cite[p.\ 64]{lord}.
\begin{theorem}
Let $A,B ,C$ be three semi-simplexes in
$\Sigma_n$ such that each of the 3 pairs are
in perspective from one of three collinear points.
Then each pair of semi-simplexes is perspective
from the same hyperplane in $\Sigma_n$.
\end{theorem}
\begin{proof}
As in the proof of
Theorem~\ref{twoSimplexesNoPointNoHyperplaneTwoInPerspective}
the intersections of corresponding pairs of edges for each
pair of semi-simplexes lie in a hyperplane $Z$
generated by the set $\{(i,j)\}$
where $i,j$ lie between $4$ and $n+3$.
Alternatively
as in the proof of
Theorem~\ref{theorem:t+1arcGamman+3},
$Z=\langle \{(4,5), (4,6),\ldots,(4,n+2),(4,n+3)\}\rangle$.
\end{proof}
\section{A fourth proof of an extended Desargues Theorem.}
\label{section:fourthProof}
\begin{theorem}
Let $\mathbf{A},\mathbf{B}$ be simplexes in $\Sigma_n$
which are in perspective from a point and share no points or faces.
Then the intersections of corresponding edges of
$\mathbf{A},\mathbf{B}$ lie in a hyperplane of $\Sigma_n$.
\end{theorem}
\begin{proof}
We use the method in~\cite{conway} in the case $n=2$.
In detail, let $A_1,A_2,A_3$ and $B_1,B_2,B_3$ form triangles
in the plane $\pi$ which are in perspective from $V$
and share no vertices or edges.
Choose any point $W$ of the 3-space containing $\pi$
that is not in $\pi$.
Let $A_2^*$ be a point on the line $WA_2$, distinct from $W$ and $A_2$.
Then $A_2^*$ is not in $\pi$.
The plane $\sigma$ formed from $W,A_2,B_2$ also contains
$A_2^*$ and $V$.
We define $B_2^*$ as the intersection of $VA_2^*$ and $WB_2$.
The triangles $A_1,A_2^*,A_3$ and $B_1,B_2^*,B_3$
are in perspective from $V$ and do not lie in a plane.
Thus, the intersections of corresponding lines of these two triangles
lie in a line $l^*$ which is the intersection of the
planes of the two triangles.
$W$ projects the two triangles to the plane $\pi$.
It projects lines $A_1A_2^*, B_1B_2^*$ to lines $A_1A_2,B_1B_2$.
The intersection of $A_1A_2^*$ and $B_1B_2^*$
is projected by $W$ to the point of intersection in $\pi$
of lines $A_1A_2,B_1B_2$.
Similarly the intersection point of lines $A_2^*A_3,B_2^*B_3$
is projected to the intersection point of lines $A_2A_3$
and $B_2B_3$.
Since $A_1,A_3,B_1,B_3$ are in $\pi$
the point $A_1A_3\cap B_1B_3$ is projected to itself.
In summary, $W$ projects $l^*$ to the line $l$ in $\pi$
containing the 3 points of intersection of corresponding
lines of the triangles $A_1A_2A_3$, $B_1B_2B_3$.
This proves the planar theorem.
In $\Sigma_n$ we have two simplexes $\mathbf{A},\mathbf{B}$
in perspective from a point $V$ with
$\mathbf{A}=\{A_1,A_2,\ldots,A_{n+1}\}$,
$\mathbf{B}=\{B_1,B_2,\ldots,B_{n+1}\}$.
$\mathbf{A},\mathbf{B}$ share no points or faces.
As above, a point $W$ of $\Sigma_{n+1}$ not in $\Sigma_n$ lifts
$A_2,B_2$ to points $A_2^*,B_2^*$ in $\Sigma_{n+1}$,
not in $\Sigma_n$.
The lifted simplexes $\mathbf{A^*}=\{A_1,A_2^*,A_3,\ldots,A_{n+3}\}$
and $\mathbf{B^*}=\{B_1,B_2^*,B_3,\ldots,B_{n+3}\}$
lie in distinct hyperplanes
$H_1,H_2$ of $\Sigma_{n+1}$.
The intersections of corresponding edges lie in
$L^*=H_1\cap H_2$.
As above,
$W$ projects $L^*$ to a hyperplane $L$ of
$\Sigma_n$ containing the intersections
of corresponding edges of $\mathbf{A},\mathbf{B}$
in $\Sigma_n$. This proves the theorem.
\end{proof}
\section{Concluding Remarks.}
\bigskip\noindent
{\bf
Acknowledgement:}
The author acknowledges the support of the
Natural Sciences and Engineering Research Council of Canada
over the last fifty years. He is also grateful to the
National Research Council of Italy for supporting his work
over many of these fifty years.
He thanks Professor James McQuillan of Western Illinois University
for his insights and assistance with this work.
\bigskip
\section{Introduction}
\label{sec:intro}
The top quark is the most massive elementary particle in the Standard Model (SM).
Its mass is close to the scale of electroweak symmetry breaking, implying a unique sensitivity to interactions beyond the SM.
The production of top quarks at the Large Hadron Collider (LHC) is dominated by pair production of top and antitop quarks ($\ttbar$) via the strong interaction.
Possible new phenomena beyond the SM can modify the kinematic properties of the $\ttbar$ system.
Thus measurements of these distributions provide a means of testing the SM prediction at the TeV scale.
In addition, more accurate and detailed knowledge of top quark pair production
is an essential component of the wide-ranging LHC physics program,
since $\ttbar$ events are the dominant background to many searches for new physics as well as Higgs boson measurements.
The large $\ttbar$ production cross-section at the LHC leads to a large number of $\ttbar$ pairs, allowing precise inclusive and differential measurements in a wide kinematic range.
The inclusive $\ttbar$ production cross-section ($\ensuremath{\sigma_{\ttbar}}$) has been measured
in proton-proton ($pp$) collisions at \rts{} = 7 TeV, 8 TeV and 13 TeV by the ATLAS
and CMS experiments~\cite{TOPQ-2013-04, CMS-TOP-11-005, CMS-TOP-12-007, CMS-TOP-13-004, TOPQ-2015-09, CMS-TOP-15-003},
with a best reported precision of 3.6\% (3.7\%) at 7 (8) TeV~\cite{CMS-TOP-13-004}.
Measurements of the $\ttbar$ differential cross-section as a function of the kinematic properties of the top quark or the $\ttbar$ pair have also been performed by ATLAS~\cite{TOPQ-2011-07, TOPQ-2012-08, TOPQ-2013-07, TOPQ-2014-15, TOPQ-2015-06}
and CMS~\cite{CMS-TOP-11-013, CMS-TOP-12-028, CMS-TOP-14-012, CMS-TOP-14-018}.
This paper presents measurements of the normalized differential \ttbar{} cross-sections
as a function of the invariant mass ($\ensuremath{m_{\ttbar}}$), the transverse momentum ($\ensuremath{\pt^{\ttbar}}$),
and the rapidity ($\ensuremath{|y_{\ttbar}|}$) of the \ttbar{} system
in $pp$ collisions at \rts{} = 7 TeV and 8 TeV
recorded by the ATLAS detector~\cite{PERF-2007-01}.
The dilepton $\ttbar$ decay mode used in this measurement yields a clean signal and thus provides an accurate test for the modeling of $\ttbar$ production.
This paper complements other ATLAS measurements that use the lepton+jets ($\ell$+jets) $\ttbar$ decay mode~\cite{TOPQ-2011-07, TOPQ-2012-08, TOPQ-2013-07, TOPQ-2014-15, TOPQ-2015-06}.
A top quark pair is assumed to decay into two $W$ bosons and two $b$-quarks with a branching ratio of 100\%.
The dilepton decay mode of $\ttbar$ used in this analysis refers to the mode
where both $W$ bosons decay into a charged lepton (electron or muon) and a neutrino.
Events in which the $W$ boson decays into an electron or a muon through a $\tau$ lepton decay
are also included.
Dileptonic $\ttbar$ events are selected by requiring two leptons (electron or muon) and at least two jets,
where at least one of the jets is identified as containing a $b$-hadron.
The specific decay modes refer to the $\diel$, $\dimu$, and $\emu$ channels.
In the 8 TeV measurement, one lepton must be an electron and the other must be a muon (the $\emu$ channel).
This channel provides a
data sample large enough for the measurement to be limited by systematic uncertainties at 8 TeV.
In the 7 TeV analysis,
where the integrated luminosity is smaller,
events containing same-flavor electron or muon pairs (the $\diel$ and $\dimu$ channels) are also selected
in order to maximize the size of the available dataset.
\section{ATLAS detector}
\label{sec:detector}
The ATLAS detector\footnote{ATLAS uses a right-handed coordinate system with
its origin at the nominal interaction point (IP) in the center of the detector
and the $z$-axis along the beam pipe.
The $x$-axis points from the IP to the center of the LHC ring,
and the $y$-axis points upward.
Cylindrical coordinates $(r,\phi)$ are used in the transverse plane,
$\phi$ being the azimuthal angle around the beam pipe.
The pseudorapidity is defined in terms of the polar angle $\theta$
as $\eta = -\ln{\tan(\theta/2)}$,
and transverse momentum and energy are
defined as $p_{\mathrm{T}} = p \sin\theta$ and $E_{\mathrm{T}} = E \sin\theta$.
Distances in ($\eta$, $\phi$) space are denoted by
$\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$.}
is a general-purpose,
cylindrically symmetric detector
with a barrel and two endcap components.
The inner detector (ID) is closest to the interaction point
and provides precise reconstruction of charged-particle tracks.
It is a combination of high-resolution silicon pixel and strip detectors, and a straw-tube tracking detector.
The ID covers a range of $|\eta| < 2.5$ and is surrounded by
a superconducting solenoid that produces a 2 T axial field within the ID.
Surrounding the ID are electromagnetic and hadronic sampling calorimeters.
The liquid argon (LAr) sampling electromagnetic calorimeter covers the pseudorapidity range of $|\eta| < 3.2$ with high granularity.
The hadronic sampling calorimeters use steel/scintillator-tiles in $|\eta| < 1.7$ and LAr technology for $1.5 < |\eta| < 4.9$.
The muon spectrometer is the outermost subdetector and is composed
of three layers of chambers.
It is designed for precision measurement and detection of muons
exploiting the track curvature in the toroidal magnetic field.
The trigger system involves a combination of hardware- and software-based triggers at three levels to reduce the raw trigger rate of 20 MHz to 400 Hz.
\section{Data and simulation samples}
\label{sec:samples}
The datasets used in this analysis were collected from LHC $pp$ collisions
at $\sqrt{s}$ = 7 and 8 TeV in 2011 and 2012.
The total integrated luminosities are
4.6\,\ifb{} with an uncertainty of 1.8\% at \rts{} = 7 TeV and
20.2\,\ifb{} with an uncertainty of 1.9\% at \rts{} = 8 TeV.
The luminosity was measured using techniques
described in Refs.~\cite{DAPR-2011-01,DAPR-2013-01}.
The average number of $pp$ interactions per bunch crossing (pileup)
is about 9 for the 7 TeV dataset and increases to about 21 for the 8 TeV dataset.
The data sample was collected using single-lepton triggers.
The $\sqrt{s}=7$ TeV dataset uses a single-muon trigger requiring at least one muon with transverse momentum $p_{\mathrm{T}}$ above 18$\GeV$ and a single-electron trigger requiring at least one electron with a $p_{\mathrm{T}}$ threshold of either 20 or 22$\GeV$, with the $p_{\mathrm{T}}$ threshold being increased during data-taking to cope with increased luminosity.
In the $\sqrt{s}=8$ TeV dataset, the logical OR of two triggers is used in order to increase the efficiency for isolated leptons at low transverse momentum, for each lepton type.
For electrons the two $p_{\mathrm{T}}$ thresholds are 24 GeV and 60 GeV,
and for muons the thresholds are 24 GeV and 36 GeV,
where only the lower-$p_{\mathrm{T}}$ triggers impose lepton isolation requirements.
Samples of Monte Carlo (MC) simulated events are used
to characterize the detector response and efficiency
for reconstructing \ttbar{} events,
to estimate systematic uncertainties, and to predict the background contributions
from various physics processes.
The samples were processed through the {\sc Geant4}~\cite{Agostinelli:2002hh}
simulation of the ATLAS detector~\cite{SOFT-2010-01}
and the ATLAS reconstruction software.
For the evaluation of some systematic uncertainties,
generated samples are passed through
a fast simulation using a parameterization of the performance of the ATLAS electromagnetic
and hadronic calorimeters~\cite{ATL-SOFT-PUB-2014-01}.
The simulated events include pileup interactions to emulate the multiple $pp$ interactions present in each data event.
The nominal signal \ttbar{} sample, {\sc Powheg+Pythia}, is generated
using the {\sc Powheg}
({\sc Powheg}-hvq patch4, revision 2330, version 3.0)
\cite{Nason:2004rx, Frixione:2007vw, Alioli:2010xd, Frixione:2007nw}
generator,
which is based on next-to-leading-order (NLO) QCD matrix element calculations.
The CT10~\cite{CT10} parton distribution functions (PDF) are
employed and the top quark mass ($m_{t}$) is set to 172.5 GeV.
The $h_{\rm damp}$ parameter in {\sc Powheg},
which controls the $p_{\mathrm{T}}${} of the first additional emission beyond the Born configuration,
is set to infinity for the 7 TeV sample and set to $m_{t}$ for the 8 TeV sample.
The main effect of this parameter is to regulate the high-$p_{\rm T}$ emission against which the top quark pair system recoils.
In studies~\cite{ATL-PHYS-PUB-2015-002,ATL-PHYS-PUB-2015-011}
using data from $\sqrt{s}$ = 7 TeV ATLAS $\ttbar$ differential cross-section
measurements in the $\ell$+jets channel~\cite{TOPQ-2012-08},
$h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ was shown to give a better description of data
than $h_{\rm damp}=\infty$,
especially in the $\ensuremath{\pt^{\ttbar}}$ spectrum~\cite{ATL-PHYS-PUB-2015-002,ATL-PHYS-PUB-2015-011}.
Thus, the {\sc Powheg} $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ sample was generated at 8 TeV as the nominal sample.
At 7 TeV, while only the {\sc Powheg} $h_{\rm damp}=\infty$ full MC sample is available,
the generated parton-level distributions with $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ can be accessed and are used for comparison to the results.
Parton showering and hadronization are simulated with {\sc Pythia}~\cite{Sjostrand:2006za}
(version 6.427)
using the Perugia 2011C (P2011C) set of tuned parameters (tune)~\cite{Skands:2010ak}
and the corresponding leading-order (LO) CTEQ6L1 PDF set~\cite{Pumplin:2002vw}.
The effects of
the choice of generator and parton showering model
are studied with predictions from
{\sc MC@NLO}~\cite{Frixione:2002ik, Frixione:2003ei}
(version 4.01)
interfaced to {\sc Herwig}
\cite{Corcella:2000bw}
(version 6.520)
for parton showering and hadronization, and to {\sc Jimmy}
\cite{Butterworth:1996zw}
(version 4.31)
for modeling multiple parton scattering in the underlying event
using the ATLAS AUET2 tune~\cite{ATL-PHYS-PUB-2011-008} and the CT10 PDFs,
and predictions from {\sc Powheg} interfaced to {\sc Herwig}.
The uncertainties
in the modeling of extra QCD radiation in $t\bar{t}$ events
are estimated with samples generated using
{\sc Alpgen} (version 2.14)~\cite{ALPGEN} with CTEQ5L~\cite{cteq5} PDFs interfaced to {\sc Pythia}
with varied radiation settings
and {\sc MC@NLO} interfaced to {\sc Herwig}
with varied renormalization and factorization scales
(\rts{} = 7 TeV),
or {\sc Powheg} interfaced to {\sc Pythia}
(\rts{} = 8 TeV) in which the parton shower parameters
are varied to span the ranges compatible with the results of measurements of
\ttbar{} production in association with jets~\cite{TOPQ-2011-21, TOPQ-2012-03, ATL-PHYS-PUB-2015-002}.
All \ttbar{} samples are normalized to the NNLO+NNLL cross-sections
\cite{Cacciari:2011hy,Baernreuther:2012ws,Czakon:2012pz,Czakon:2012zr,Czakon:2013goa,Czakon:2011xx}:
$\ensuremath{\sigma_{\ttbar}}=177.3^{+10}_{-11}$~pb at $\sqrt{s}=7$ TeV and
$\ensuremath{\sigma_{\ttbar}}=253^{+13}_{-15}$~pb at $\sqrt{s}=8$ TeV.
Backgrounds with two real prompt leptons from decays of $W$ or $Z$ bosons (including those produced via leptonic $\tau$ decays) include
$Wt$ single-top production,
$Z$+jets production,
and diboson ($WW$, $WZ$, and $ZZ$)+jets production.
The largest background in this analysis, $Wt$ production, is modeled
using {\sc Powheg (Powheg}-st\_wtch)~\cite{Re:2010bp}
with the CT10 PDF set
and showered with {\sc Pythia}
using the Perugia 2011C tune
and the corresponding CTEQ6L1 PDF set.
The baseline $Wt$ sample uses the ``diagram removal'' scheme to remove interference terms involving \ttbar{} production,
and an alternative method using the ``diagram subtraction'' scheme~\cite{Frixione:2008yi}
is used to cross-check the validity of the prediction from the diagram removal scheme
and to assess systematic uncertainties.
The cross-section employed for $Wt$ single-top event generation is 15.7$\pm$1.2~pb (\rts{} = 7 TeV)
and 22.4$\pm$1.5~pb (\rts{} = 8 TeV),
as obtained from NLO+NNLL calculations~\cite{Kidonakis:2010ux}.
The $Z(\rightarrow\ell\ell)$+jets
background is modeled using {\sc Alpgen} with the CTEQ6L1 PDFs,
interfaced either to {\sc Herwig} and {\sc Jimmy} with the ATLAS AUET2 tune and the CT10 PDFs (\rts{} = 7 TeV)
or to {\sc Pythia6} with the Perugia P2011C tune and the CTEQ6L1 PDFs, including LO matrix elements for $Zb\bar{b}$ and $Zc\bar{c}$ production (\rts{} = 8 TeV).
Inclusive $Z$ boson cross-sections are set to the NNLO predictions from FEWZ~\cite{fewz},
but the normalizations of $Z(\rightarrow\diel/\dimu)$+jets in the \rts{} = 7 TeV analysis are determined from data
using the same procedure used in Refs.~\cite{TOPQ-2010-01,TOPQ-2011-01}.
The diboson background is modeled using
{\sc Alpgen} with the CTEQ6L1 PDFs interfaced to {\sc Herwig} and {\sc Jimmy} with the AUET2 tune and the CT10 PDFs, and the cross-sections are normalized to NLO QCD calculations~\cite{Campbell:2011}.
Background processes where one or more of the reconstructed lepton candidates are nonprompt or misidentified (referred to as ``fake leptons'')
arise from
\ttbar{} production,
$W$+jets production,
and single-top production in the $t$-channel or $s$-channel.
The \rts{} = 7 TeV analysis uses a matrix method~\cite{TOPQ-2010-01} to estimate the fake-lepton background directly from data, while the \rts{} = 8 TeV analysis uses event samples of same-sign leptons in both data and simulations to estimate the fake-lepton contributions in these processes~\cite{TOPQ-2013-04}.
The fake-lepton contributions from $\ttbar$ production are simulated from the same baseline $\ttbar$ signal sample,
which includes the $\ell$+jets decay channel,
and $\ttbar$+$V$ samples where $V=W$ or $Z$, modeled by {\sc Madgraph}~\cite{madgraph} interfaced to {\sc Pythia} with the Perugia P2011C tune and the CTEQ6L1 PDFs.
The $W$+jets production is simulated using {\sc Alpgen} with the CTEQ6L1 PDFs interfaced to {\sc Pythia6} with the Perugia P2011C tune and the CTEQ6L1 PDFs, including LO matrix elements for $Wb\bar{b}$, $Wc\bar{c}$, and $Wc$ processes.
The $t$-channel single-top production is modeled using the {\sc AcerMC}~\cite{ACERMC} generator,
while {\sc Powheg} is used for the production in the $s$-channel,
and both generators are interfaced to {\sc Pythia6} using the Perugia P2011C tune and the CTEQ6L1 PDFs.
Different methods are used in the two datasets due to the different trigger conditions and because the 7 TeV analysis uses all three dilepton channels.
Other backgrounds
are negligible after the event selections used in this analysis.
Table~\ref{tab:mcsamples} summarizes the baseline signal and background MC simulated samples used in the 7 TeV and 8 TeV analyses.
\begin{table}
\begin{center}
\begin{tabular}{l|cc}
\toprule
Physics process & 7 TeV analysis & 8 TeV analysis \\
\midrule
\ttbar{} & {\sc Powheg+Pythia} ($h_{\rm damp}=\infty$) & {\sc Powheg+Pythia} ($h_{\rm damp}=m_{t}$) \\
$Wt$ & {\sc Powheg+Pythia} & {\sc Powheg+Pythia} \\
$Z(\rightarrow\tau\tau)$+jets & {\sc Alpgen+Herwig} & {\sc Alpgen+Pythia} \\
$Z(\rightarrow\diel/\dimu)$+jets & {\sc Alpgen+Herwig} and data & - \\
Diboson+jets & {\sc Alpgen+Herwig} & {\sc Alpgen+Herwig} \\
Fake leptons & Data & Various MC samples and data \\
\bottomrule
\end{tabular}
\caption{List of baseline MC samples used in the 7 TeV and 8 TeV analyses.
The $Z(\rightarrow\diel/\dimu)$+jets process is not included in the 8 TeV analysis as the analysis uses only the $\emu$ channel.
}
\label{tab:mcsamples}
\end{center}
\end{table}
\section{Object and event selection}
\label{sec:selection}
\subsection{Object definition}
\label{subsec:object}
Electron candidates are reconstructed as charged-particle tracks in the inner detector associated with energy deposits in the electromagnetic calorimeter, and must satisfy tight identification criteria~\cite{PERF-2013-03}.
Electron candidates are required to have transverse energy $\ensuremath{E_{\rm T}} > 25 \GeV$ and pseudorapidity $|\eta|<2.47$, while excluding the transition region between the barrel and the endcap calorimeters ($1.37 < | \eta | < 1.52$).
Isolation requirements on calorimeter and tracking variables are used to reduce
the background from nonprompt electrons.
The calorimeter isolation variable is based on the energy sum
of cells within a cone of size
$\Delta R=0.2$ around the direction of each electron candidate.
This energy sum excludes cells associated with the electron cluster and is corrected for leakage from the electron
cluster itself and for energy deposits from pileup.
The tracking isolation variable is based on the track
$p_{\mathrm{T}}$ sum around the electron in a cone of size
$\Delta R=0.3$, excluding the electron track.
In every $p_{\mathrm{T}}$ bin,
both requirements are chosen to result separately in a 90\% (98\%) electron selection efficiency for prompt electrons
from $Z\rightarrow\diel$ decays in the 7 TeV (8 TeV) analysis.
Muon candidates are identified by matching track segments in the muon spectrometer with tracks in the
inner detector, and are required to be in the region $|\eta| < 2.5$ and have $p_{\mathrm{T}} > 20\ (25)\GeV$ in the 7 TeV (8 TeV) analysis.
To reduce the background from muons originating from heavy-flavor decays inside jets,
muons are required to be separated by
$\Delta R=0.4$ from the nearest jet, and to be isolated.
In the 7 TeV analysis,
the isolation of muons requires the calorimeter transverse energy within a cone of fixed size $\Delta R = 0.2$ and the sum of track $p_{\mathrm{T}}$ within a cone of fixed size $\Delta R=0.3$ around the muon, excluding the contribution from the muon itself, to be less than 4$\GeV$ and 2.5$\GeV$, respectively.
In the 8 TeV analysis, muons are required to satisfy
$I^{\ell}<0.05$, where the isolation variable $I^{\ell}$ is the ratio of
the sum of the $p_{\mathrm{T}}$ of tracks, excluding the muon, in a cone of variable size
$\Delta R = 10 \GeV / p_{\mathrm{T}}(\mu)$ to the $p_{\mathrm{T}}$ of the muon~\cite{Rehermann:2010vq}.
Both isolation requirements result in an efficiency of about 97\% for prompt muons from $Z\rightarrow\dimu$ decays.
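The variable-cone isolation can be summarized in a few lines of code. The following Python sketch is an illustration of the definition above, not part of the ATLAS software; the input containers and helper names are hypothetical:
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Angular distance in (eta, phi) space, with phi wrapped to [-pi, pi].
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def is_mini_isolated(mu_pt, mu_eta, mu_phi, tracks, cut=0.05):
    # Variable-cone isolation: sum the pT of tracks inside a cone of size
    # dR = 10 GeV / pT(mu) and require the ratio to pT(mu) to be < cut.
    # `tracks` is an iterable of (pt, eta, phi) tuples in GeV, already
    # excluding the muon's own track.
    cone = 10.0 / mu_pt
    sum_pt = sum(pt for (pt, eta, phi) in tracks
                 if delta_r(eta, phi, mu_eta, mu_phi) < cone)
    return sum_pt / mu_pt < cut

# Example: a 40 GeV muon gives a cone of size dR = 0.25.
print(is_mini_isolated(40.0, 0.1, 1.2, [(1.5, 0.15, 1.25), (3.0, 1.0, -2.0)]))
\end{verbatim}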
Jets are reconstructed by the anti-$k_{t}${} algorithm~\cite{akt2}
with a radius parameter $R=0.4$ using
calorimeter energy clusters~\cite{topocluster},
which are calibrated at the electromagnetic energy scale for the \rts{} = 7 TeV dataset,
or using the local cluster weighting method for \rts{} = 8 TeV~\cite{PERF-2011-03}.
The energies of jets are then calibrated
using an energy- and $\eta$-dependent simulation-based
calibration scheme
with {\it in situ} corrections based on data.
Different calibration procedures were used for the 7 TeV and 8 TeV datasets due to the different pileup conditions.
The effects of pileup on the jet energy calibration at 8 TeV are further reduced using the jet area method as described in Ref.~\cite{ATLAS-CONF-2013-083}.
Jets with $p_{\mathrm{T}} > 25\GeV$ and $|\eta| < 2.5$ are accepted.
To suppress jets from pileup,
a requirement on the jet vertex fraction (JVF),
the ratio of the sum of the $p_{\mathrm{T}}$ of tracks associated with both the jet and the primary vertex
to the sum of the $p_{\mathrm{T}}$ of all tracks associated with the jet,
is imposed based on the different pileup conditions in the \rts{} = 7 TeV and \rts{} = 8 TeV datasets~\cite{TOPQ-2013-04}. At 7 TeV, jets are required to satisfy $|{\rm JVF}| > 0.75$, while at 8 TeV, jets with $p_{\mathrm{T}} < 50\GeV$ and $|\eta| < 2.4$ are required to satisfy $|{\rm JVF}| > 0.5$.
To prevent double-counting of electron energy deposits as jets,
the jet closest to a reconstructed electron is removed
if it lies within $\Delta R < 0.2$ of the electron; finally,
a lepton lying within $\Delta R < 0.4$ of a selected jet is discarded
to reject leptons from heavy-flavor decays.
The purity of $\ttbar$ events in the selected sample is improved by tagging jets containing $b$-hadrons (``$b$-tagging'').
Information from the track impact parameters, secondary vertex position, and decay topology
is combined in a multivariate discriminant (MV1)~\cite{ATLAS-CONF-2012-043,ATLAS-CONF-2014-046}.
Jets are defined to be $b$-tagged if the MV1 discriminant value is larger than a threshold (operating point)
corresponding to an average 70\% efficiency for tagging $b$-quark jets from top quark decays in $\ttbar$ events,
with about 1\% and 20\% probability of misidentifying light-flavor jets and charm-jets, respectively.
The missing transverse momentum $E_{\mathrm{T}}^{\rm miss}$ is derived
from the vector sum of calorimeter cell energies within $|\eta|<4.9$
associated with physics objects (electrons, muons, and jets) and corrected with their dedicated calibrations,
as well as the transverse energy deposited in the calorimeter cells not associated with these objects~\cite{PERF-2011-07}.
\subsection{Event selection}
\label{subsec:event}
Events in the 7 TeV and 8 TeV analyses are selected based on the above definitions of reconstructed objects and the event quality.
All events are required to
have at least one primary vertex\footnote{The primary vertex is defined to be the reconstructed vertex with the highest $\sum p_{\mathrm{T}}^2$ of the associated tracks in the event.} reconstructed from at least five tracks with $p_{\mathrm{T}}>0.4$ GeV,
and events compatible with cosmic-ray interactions are rejected.
All jets are required to pass jet quality and timing requirements, and at least one lepton is required to
match, in ($\eta$, $\phi$) space, the particle(s) that triggered the event.
The dilepton event sample is selected by requiring exactly two charged leptons (electrons or muons) with opposite-sign charge
and at least two jets, including at least one that is $b$-tagged.
To suppress backgrounds from Drell-Yan and multijet processes in the $\diel$ and $\dimu$ channels in the 7 TeV analysis,
the missing transverse momentum $E_{\mathrm{T}}^{\rm miss}$
is required to be greater than 60 GeV,
and the dilepton invariant mass $\mbox{$m_{\ell\ell}$}$ is required to be outside the $Z$ boson mass window: $|\mbox{$m_{\ell\ell}$} - 91\GeV| > 10\GeV$.
The dilepton invariant mass is also required to be above 15 GeV in the $\diel$ and $\dimu$ channels to reject backgrounds from bottom-quark pair and vector-meson decays.
No $E_{\mathrm{T}}^{\rm miss}$ or $\mbox{$m_{\ell\ell}$}$ requirements are applied in the $\emu$ channel,
but a reconstructed variable, $\HT$, defined as the scalar sum of the $p_{\mathrm{T}}$ of all selected leptons and jets in an event,
is required to be greater than 130 GeV to suppress remaining background from $Z/\gamma^*$+jets processes at 7 TeV.
In the 8 TeV analysis the $\HT$ requirement is not applied, since the improvement is negligible given the higher muon $p_{\mathrm{T}}$ requirement than in the 7 TeV analysis.
In the 7 TeV analysis, an additional requirement using the invariant mass of a jet and a lepton
is also applied to reject events where the reconstructed jet does not originate from the $\ttbar$ decay (wrong-jet events).
Exploiting the kinematics of top quark decay with the constraint from the top quark mass $\ensuremath{m_{\mathrm{top}}}$,
the invariant mass of the jet with the second highest value of the $b$-tagging discriminant $j_{2}$ and either of the leptons $\ell^{+}/\ell^{-}$
is required to be less than 0.8 of $\ensuremath{m_{\mathrm{top}}}$ ($m_{j_{2}\ell^{+}}/\ensuremath{m_{\mathrm{top}}}<0.8$ OR $m_{j_{2}\ell^{-}}/\ensuremath{m_{\mathrm{top}}}<0.8$).
This cut value was optimized to provide about 94\% selection efficiency while rejecting about 16\% of the wrong-jet events in the simulated $\ttbar$ dilepton event sample.
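As an illustration only (the four-vectors are hypothetical $(E, p_x, p_y, p_z)$ tuples in GeV, and the helper names are ours, not analysis code), the requirement can be sketched as:
\begin{verbatim}
def pass_mjl_cut(j2, lep_pos, lep_neg, m_top=172.5):
    # Invariant mass of the jet with the second-highest b-tagging
    # discriminant (j2) with either lepton; keep the event if at least
    # one combination satisfies m_jl / m_top < 0.8.
    def minv(a, b):
        e = a[0] + b[0]
        p = [a[k] + b[k] for k in (1, 2, 3)]
        return max(e**2 - p[0]**2 - p[1]**2 - p[2]**2, 0.0) ** 0.5
    return (minv(j2, lep_pos) / m_top < 0.8 or
            minv(j2, lep_neg) / m_top < 0.8)
\end{verbatim}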
Table \ref{tab:eventselection} shows a summary of the event selections for the 7 TeV and 8 TeV analyses.
The numbers of events that fulfill all selection requirements are shown in Table \ref{tab:eventtable}.
\begin{table}[htbp]
\centering
\sisetup{round-mode=places, round-precision=0, retain-explicit-plus=true}
{
\begin{tabular}{l|c|c|c|c}
\toprule
& \multicolumn{3}{c|}{7$\TeV$} & 8$\TeV$ \\
\midrule
Selection & \hspace{1.3cm}$\diel$\hspace{1.3cm} & \hspace{1.25cm}$\dimu$\hspace{1.25cm} & $\emu$ & $\emu$ \\
\midrule
Leptons & \multicolumn{4}{c}{Exactly 2 leptons, opposite-sign charge, isolated} \\
& \multicolumn{4}{c}{Electrons: $E_{\mathrm{T}} > 25 \GeV$, $|\eta| < 2.47$, excluding $1.37 < |\eta| < 1.52$} \\
& \multicolumn{3}{c|}{Muons: $p_{\mathrm{T}} > 20 \GeV$, $|\eta| < 2.5$} & $p_{\mathrm{T}} > 25 \GeV$, $|\eta| < 2.5$ \\
Jets & \multicolumn{4}{c}{$\geq$ 2 jets, $\pt > 25 \GeV$, $|\eta| < 2.5$} \\
& \multicolumn{4}{c}{$\geq$ 1 $b$-tagged jet at $\epsilon_b$ = 70\%} \\
$\mbox{$m_{\ell\ell}$}$ & \multicolumn{2}{c|}{$|\mbox{$m_{\ell\ell}$}-91\GeV| > 10 \GeV$, $\mbox{$m_{\ell\ell}$} > 15 \GeV$} & None & None \\
$\MET$ or $\HT$ & \multicolumn{2}{c|}{$\MET>60\GeV$} & $\HT > 130\GeV$ & None \\
$m_{j\ell}$ & \multicolumn{3}{c|}{$m_{j_{2}\ell^{+}}/\ensuremath{m_{\mathrm{top}}}<0.8$ OR $m_{j_{2}\ell^{-}}/\ensuremath{m_{\mathrm{top}}}<0.8$} & None \\
\bottomrule
\end{tabular}
}
\caption{Summary of the event selections for the 7 TeV and 8 TeV analyses.}
\label{tab:eventselection}
\end{table}
\begin{table}[htbp]
\centering
\sisetup{round-mode=places, round-precision=0, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
{
\begin{tabular}{lc@{ $\pm$ }cc@{ $\pm$ }cc@{ $\pm$ }cc@{ $\pm$ }c}
\toprule
& \multicolumn{6}{c}{7$\TeV$} & \multicolumn{2}{c}{8$\TeV$} \\
\midrule
Channel & \multicolumn{2}{c}{$\diel$} & \multicolumn{2}{c}{$\dimu$} & \multicolumn{2}{c}{$\emu$} & \multicolumn{2}{c}{$\emu$} \\
\midrule
$\ttbar$ & \numRF{484.61}{2} & \numRF{36.24}{1} & \numRF{1421.35}{3} & \numRF{57.84}{1} & \numRF{3738.53}{3} & \numRF{169.749}{2} & \numRF{26715}{3} & \numRF{803}{1} \\
$Wt$ & \numRF{20.38}{2} & \numRF{4.34}{1} & \numRF{57.93}{2} & \numRF{15.17}{2} & \numRF{154.78}{3} & \numRF{23.03}{2} & \numRF{1283}{3} & \numRF{108}{2} \\
Fake leptons & \numRF{12.17}{2} & \numRF{6.08}{1} & \numRF{11.44}{3} & \numRF{3.43}{2} & \numRF{50.22}{2} & \numRF{20.09}{2} & \numRF{232}{2} & \numRF{107}{2} \\
$Z(\rightarrow \tau\tau)$+jets & \numRF{0.43}{2} & \numRF{0.33}{2} & \numRF{2.56}{2} & \numRF{1.18}{2} & \numRF{5.76}{2} & \numRF{1.23}{2} & \numRF{80}{2} & \numRF{34}{2} \\
$Z(\rightarrow \diel/\dimu)$+jets & \numRF{2.15}{2} & \numRF{1.00}{2} & \numRF{5.77}{1} & \numRF{4.33}{1} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\
Diboson+jets & \numRF{1.03}{3} & \numRF{0.31}{2} & \numRF{3.22}{2} & \numRF{0.96}{1} & \numRF{8.95}{2} & \numRF{2.37}{2} & \numRF{77}{2} & \numRF{31}{2} \\
\midrule
Predicted & \numRF{520.76}{2} & \numRF{38.8274}{1} & \numRF{1502.27}{3} & \numRF{62.3895}{1} & \numRF{3958.25}{3} & \numRF{179.581}{2} & \numRF{28449}{3} & \numRF{819}{1} \\
\midrule
Observed & \multicolumn{2}{c}{\num{532}} & \multicolumn{2}{c}{\num{1509}} & \multicolumn{2}{c}{\num{4038}} & \multicolumn{2}{c}{\num{28772}} \\
\bottomrule
\end{tabular}
}
\caption{Predicted event yields and uncertainties for $\ttbar$ signal and backgrounds compared to observed event yields in the 7 TeV and 8 TeV analyses.
The uncertainties include all systematic uncertainties discussed in Section~\ref{sec:uncertainties} except $\ttbar$ modeling.}
\label{tab:eventtable}
\end{table}
\section{Reconstruction}
\label{sec:reconstruction}
To reconstruct the $\ttbar$ system the two jets identified as most likely to contain $b$-hadrons are used.
This choice improves the resolution of the $\ttbar$-system observables as the jets are more likely to have originated from top quark decay. In both the 7 TeV and 8 TeV analyses, the fractional resolution for $\ensuremath{m_{\ttbar}}$ is typically below 20\%, while for $\ensuremath{\pt^{\ttbar}}$ the fractional resolution is 35\% at 100 GeV and improves as a function of $\ensuremath{\pt^{\ttbar}}$. The resolution for $\ensuremath{|y_{\ttbar}|}$ is on average 17\%.
An approximate four-momentum of the \ttbar{} system
is reconstructed from two leptons,
two jets, and missing transverse momentum \MET{}
as:
\begin{eqnarray}
E_{\rm total} &=& E(\ell_1) + E(\ell_2) + E(j_1) + E(j_2) + E_{\mathrm{T}}^{\rm miss} \nonumber \\
p_{x} &=& p_{x}(\ell_1) + p_{x}(\ell_2) + p_{x}(j_1) + p_{x}(j_2) + E_{x}^{\rm miss} \nonumber \\
p_{y} &=& p_{y}(\ell_1) + p_{y}(\ell_2) + p_{y}(j_1) + p_{y}(j_2) + E_{y}^{\rm miss} \nonumber \\
p_{z} &=& p_{z}(\ell_1) + p_{z}(\ell_2) + p_{z}(j_1) + p_{z}(j_2) \nonumber
\label{eq:ttreco}
\end{eqnarray}
where $E$ denotes the energy of the corresponding object,
$p_{x,y,z}$ is the momentum along the $x$-, $y$-, or $z$-axis,
and the indices $\ell_1$, $\ell_2$, $j_1$, and $j_2$ denote the two leptons and the two jets, respectively.
The $\ttbar$-system observables in consideration (invariant mass, transverse momentum, and rapidity) are obtained from this four-momentum.
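As a concrete illustration of these formulas, the following Python sketch (with hypothetical four-vector inputs; not the analysis code itself) builds the approximate $\ttbar$ four-momentum and derives the three observables:
\begin{verbatim}
import numpy as np

def ttbar_observables(leptons, jets, met_x, met_y):
    # leptons, jets: lists of four-vectors (E, px, py, pz) in GeV for the
    # two leptons and the two jets with the highest b-tagging discriminant.
    # The magnitude of ETmiss enters the energy sum, its x/y components
    # enter px and py, and pz receives no neutrino contribution.
    objs = list(leptons) + list(jets)
    E  = sum(o[0] for o in objs) + np.hypot(met_x, met_y)
    px = sum(o[1] for o in objs) + met_x
    py = sum(o[2] for o in objs) + met_y
    pz = sum(o[3] for o in objs)
    m_tt  = np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))
    pt_tt = np.hypot(px, py)
    y_tt  = 0.5 * np.log((E + pz) / (E - pz))   # rapidity
    return m_tt, pt_tt, abs(y_tt)
\end{verbatim}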
Figures~\ref{fig:ttbar_system_opt2} and \ref{fig:cplots} show the distributions
of the reconstructed $\ensuremath{m_{\ttbar}}$, $\ensuremath{\pt^{\ttbar}}$, and $\ensuremath{|y_{\ttbar}|}$ together with the MC predictions at 7 TeV and 8 TeV, respectively.
The bottom panel shows the ratio of the data to the total prediction.
Overall there is satisfactory agreement between data and prediction.
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.325\textwidth]{fig_01a}
\includegraphics[width=0.325\textwidth]{fig_01b}
\includegraphics[width=0.325\textwidth]{fig_01c}
\label{fig:ttbar_mass_opt2}
}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.325\textwidth]{fig_01d}
\includegraphics[width=0.325\textwidth]{fig_01e}
\includegraphics[width=0.325\textwidth]{fig_01f}
\label{fig:ttbar_pt_opt2}
}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.325\textwidth]{fig_01g}
\includegraphics[width=0.325\textwidth]{fig_01h}
\includegraphics[width=0.325\textwidth]{fig_01i}
\label{fig:abs_ttbar_rapidity_opt2}
}
\caption{
Distributions of
\protect\subref{fig:ttbar_mass_opt2} the invariant mass,
\protect\subref{fig:ttbar_pt_opt2} the transverse momentum,
and
\protect\subref{fig:abs_ttbar_rapidity_opt2} the rapidity of the $\ttbar$ system
at the reconstruction level
obtained from the $\sqrt{s}$ = 7 TeV data
compared with the total signal and background predictions,
in the $\diel$ (left), $\dimu$ (center) and $\emu$ (right) channels.
The bottom panel shows the ratio of data to prediction.
The error band includes all systematic uncertainties except $\ttbar$ modeling uncertainties.
The {\sc Powheg}+{\sc Pythia} with $h_{\rm damp}=\infty$ sample is used for the signal $\ttbar$ and is normalized to NNLO+NNLL calculations.
}
\label{fig:ttbar_system_opt2}
\end{figure}
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.325\textwidth]{fig_02a}
\label{fig:cplots_Mtt}
}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.325\textwidth]{fig_02b}
\label{fig:cplots_pTtt}
}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.325\textwidth]{fig_02c}
\label{fig:cplots_ytt}
}
\caption{
Distributions of
\protect\subref{fig:cplots_Mtt} the invariant mass,
\protect\subref{fig:cplots_pTtt} the transverse momentum,
and
\protect\subref{fig:cplots_ytt} the rapidity of the $\ttbar$ system
at the reconstruction level
obtained from the $\sqrt{s}$ = 8 TeV data
compared with the total signal and background predictions.
The bottom panel shows the ratio of data to prediction.
The error band includes all systematic uncertainties except $\ttbar$ modeling uncertainties.
The {\sc Powheg}+{\sc Pythia} with $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ sample is used for the signal $\ttbar$ and is normalized to NNLO+NNLL calculations.
}
\label{fig:cplots}
\end{figure}
\section{Differential cross-section determination}
\label{sec:unfolding}
The normalized differential cross-sections with respect to the $\ttbar$-system observables,
denoted as $X$, are obtained as follows.
The estimated background contributions are subtracted from the observed number of events for each bin in the distribution of the reconstructed observable.
The background-subtracted distributions are then corrected for detector acceptance and resolution effects (unfolded)
and the efficiency to pass the event selection,
thus extrapolated to the full phase space of $\ttbar$ production at parton level.
The differential cross-sections are finally normalized by the total $\ttbar$ cross-section,
obtained by integrating over all bins for each observable.
The differential cross-section is obtained from
\begin{equation}
\frac{d\sigma_{\ttbar}}{dX_{i}} = \frac{1}{\Delta X_{i}\cdot {\cal L}\cdot \sum_{\alpha}({\cal B}^{\alpha} \cdot \epsilon_{i}^{\alpha})} \sum_{\alpha}\sum_{j}({\cal M}^{-1}_{ij})^{\alpha}(N^{{\rm obs}, \alpha}_{j}-N^{{\rm bkg}, \alpha}_{j}),
\label{eq:unfold}
\end{equation}
where
$i$ ($j$) indicates the bin for the observable $X$ at parton (detector) level,
$N^{\rm obs}_{j}$ is the number of observed events in data,
$N^{\rm bkg}_{j}$ is the estimated number of background events,
${\cal M}^{-1}_{ij}$ is the inverse of the migration matrix representing the correction for detector resolution effects,
$\epsilon_{i}$ is the event selection efficiency with respect to the channel,
${\cal B}$ is the branching ratio of the $\ttbar$ decays in the dilepton channel,
${\cal L}$ is the integrated luminosity,
$\Delta X_{i}$ is the bin width,
and $\alpha$ is the dilepton channel being considered,
where $\alpha$ = $\diel$, $\dimu$ or $\emu$ for 7$\TeV$
and $\alpha$ = $\emu$ for 8$\TeV$.
The measured cross-section in each bin $i$ represents the bin-averaged value.
The normalized differential cross-section is obtained as
$1/\sigma_{\ttbar} \cdot d\sigma_{\ttbar}/dX_{i}$,
where $\sigma_{\ttbar}$ is the inclusive \ttbar{} cross-section.
The unfolding from reconstruction level to parton level
is carried out using the \verb|RooUnfold| package~\cite{Adye:2011gm}
with an iterative method inspired by Bayes' theorem~\cite{bayesianUnfold}.
The number of iterations used in the unfolding procedure balances the goodness of fit and statistical uncertainties.
The smallest number of iterations with $\chi^2$/NDF ($\chi^2$ between the unfolded and parton-level spectra over number of degrees of freedom) less than one is chosen for the distribution.
In the 7 TeV analysis, two to four iterations are used depending on the observable;
in the 8 TeV analysis, four iterations are used for all observables.
The effect of varying the number of iterations by one was tested and confirmed to be negligible.
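A schematic implementation of the iterative procedure may help fix the notation. The sketch below (plain {\tt numpy}, not the \verb|RooUnfold| implementation; the variable names are ours) applies Bayes' theorem with the current prior at each iteration and then performs the efficiency and bin-width corrections of Eq.~(\ref{eq:unfold}), exploiting the fact that luminosity and branching ratio cancel in the normalized cross-section:
\begin{verbatim}
import numpy as np

def bayes_unfold(n_obs, M, prior, n_iter=4):
    # n_obs : background-subtracted counts per reconstructed bin j.
    # M     : migration matrix, M[i, j] = P(reco bin j | parton bin i),
    #         with rows normalized to unity.
    # prior : starting parton-level spectrum (e.g. the MC prediction).
    t = np.asarray(prior, dtype=float)
    for _ in range(n_iter):
        joint = t[:, None] * M                 # t_i * M_ij
        p_parton_given_reco = joint / joint.sum(axis=0)
        t = p_parton_given_reco @ n_obs        # updated parton-level spectrum
    return t

def normalized_diff_xsec(t_unfolded, eff, widths):
    # Correct for per-bin selection efficiency and bin width, then
    # normalize to unit area.
    dxs = t_unfolded / (np.asarray(eff) * np.asarray(widths))
    return dxs / np.sum(dxs * np.asarray(widths))
\end{verbatim}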
The detector response is described using a migration matrix
that relates the generated parton-level distributions to the measured distributions.
The migration matrix ${\cal M}$ is determined using $\ttbar$ Monte Carlo samples,
where the parton-level top quark is defined as the top quark
after radiation and before decay.\footnote{
The generator status code for the top or antitop quark is required to be 3 in {\sc Pythia} and 155 in {\sc Herwig}.}
Figure~\ref{fig:resp} presents the migration matrices of $\ensuremath{\pt^{\ttbar}}$ for both 7 TeV and 8 TeV in the $\emu$ channel.
The matrix ${\cal M}_{ij}$ represents the probability for an event generated at parton level with $X$ in bin $i$ to have a reconstructed $X$ in bin $j$,
so the elements of each row add up to unity (within rounding uncertainties).
The probability for the parton-level events to remain in the same bin in the measured distribution is shown in the diagonal, and the off-diagonal elements represent the fraction of parton-level events that migrate into other bins.
The fraction of events in the diagonal bins is highest for $\ensuremath{\pt^{\ttbar}}$,
while for the other observables more significant migrations are present due to the effect of the $p_{z}$ of the undetected neutrinos in the reconstruction. In the 7 TeV analysis, the effects of bin migrations in the $\diel$ and $\dimu$ channels are similar to those in the $\emu$ channel.
In the 8 TeV analysis, the bin boundaries for $\ensuremath{m_{\ttbar}}$ and $\ensuremath{|y_{\ttbar}|}$ are determined separately for the parton-level and reconstruction-level observables, based on the migrations between them.
The event selection efficiency $\epsilon_i$ for each bin $i$ is evaluated as the ratio of the parton-level spectra before and after implementing the event selection at the reconstruction level.
In both the 7 TeV and 8 TeV analyses,
the efficiencies generally increase towards higher $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$,
while at high values of $\ensuremath{|y_{\ttbar}|}$
the efficiency decreases
due to leptons and jets falling outside the required pseudorapidity range for reconstructed leptons and jets.
The efficiencies are
typically in the range of 15--20\%
for the $\emu$ channel at both 7 and 8 TeV,
and
3--5\% and 8--13\%
for the $\diel$ and $\dimu$ channels, respectively, in the 7 TeV analysis.
The lower values in the same-flavor channels are due to the rejection cuts for Drell-Yan and $Z\rightarrow\ell\ell$ events in these channels,
while the isolation requirements, which are more restrictive for electrons than for muons in the 7 TeV analysis,
further reduce the efficiency in the $\diel$ channel.
The bin width for each observable is determined
by considering the resolution of the observable and the statistical precision in each bin.
In the 7 TeV analysis,
the bin widths are set to be the same as the ones used in the previous 7$\TeV$ ATLAS measurement in the $\ell$+jets channel~\cite{TOPQ-2012-08} due to comparable resolutions for each observable,
and to enable a direct comparison of the results between the two channels.
For the 8 TeV analysis,
the determined bin widths are generally finer than the bin widths for the 7 TeV analysis
due to the larger dataset available.
Possible biases due to the use of the MC generator in the unfolding procedure are assessed by altering the shape of the parton-level spectra in simulation using continuous functions.
The altered shapes studied cover the difference observed between the default MC and data for each observable.
These studies verify that the altered shapes are recovered by the unfolding based on the nominal migration matrices within statistical uncertainties.
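Schematically, such a stress test can be written as follows (a sketch under our own naming, reusing the \verb|bayes_unfold| function above; \verb|weight_fn| stands for whatever smooth reweighting covers the data/MC difference):
\begin{verbatim}
import numpy as np

def stress_test(M, truth_mc, eff, weight_fn, centers, n_iter=4):
    # Reweight the parton-level MC spectrum with a smooth function of the
    # bin centers, fold it through the nominal response, unfold with the
    # *nominal* migration matrix, and compare to the reweighted truth.
    truth_alt = np.asarray(truth_mc) * weight_fn(np.asarray(centers))
    reco_alt = (truth_alt * np.asarray(eff)) @ M   # select, then migrate
    unfolded = bayes_unfold(reco_alt, M, prior=truth_mc, n_iter=n_iter)
    unfolded = unfolded / np.asarray(eff)          # efficiency correction
    return (unfolded - truth_alt) / truth_alt      # residual bias per bin
\end{verbatim}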
A multichannel combination is performed in the 7$\TeV$ analysis
by summing over channels the background-subtracted observed events
corrected by the migration matrix and the event selection efficiency.
The results obtained from the combined dilepton channel are consistent with those from the individual channels.
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{\pt^{\ttbar}}$ for 7$\TeV$ analysis in $\emu$ channel]{
\includegraphics[width=0.33\textwidth]{fig_03a}
\label{fig:resp_pttt_emu_7TeV}
}
\subfloat[$\ensuremath{\pt^{\ttbar}}$ for 8$\TeV$ analysis in $\emu$ channel]{
\includegraphics[width=0.33\textwidth]{fig_03b}
\label{fig:resp_pttt_emu_8TeV}
}
\caption{The migration matrix of
$\ensuremath{\pt^{\ttbar}}$, represented as probabilities, for (a) 7$\TeV$ and (b) 8$\TeV$ in the $\emu$ channel,
obtained from $\ttbar$ simulation with the {\sc Powheg+Pythia} generator.
Different $h_{\rm damp}$ parameters are used at 7 TeV ($h_{\rm damp}=\infty$) and 8 TeV ($h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$) in the {\sc Powheg+Pythia} sample;
the effect of the different $h_{\rm damp}$ values on the migration matrix is negligible.
Elements in each row add up to unity.
Empty elements indicate either a probability lower than 0.5\% or that no events are present.
}
\label{fig:resp}
\end{figure}
\section{Uncertainties}
\label{sec:uncertainties}
Various sources of systematic uncertainty affect the measurement and are discussed below.
The systematic uncertainties due to signal modeling and detector modeling
affect the estimation of the detector response and the signal reconstruction efficiency.
The systematic uncertainties due to the background estimation and the detector modeling
affect the background subtraction.
The covariance matrix due to the statistical and systematic uncertainties for each normalized unfolded spectrum
is obtained by evaluating the correlations between the bins for each uncertainty contribution.
In particular,
the correlations due to statistical fluctuations are evaluated from an ensemble of pseudoexperiments, each obtained by
varying the data event counts independently in each bin and propagating the variations through the unfolding procedure.
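For the statistical component, the pseudoexperiment procedure amounts to the following sketch (the \verb|unfold| callable is hypothetical and stands for the full chain of background subtraction, unfolding, and normalization):
\begin{verbatim}
import numpy as np

def statistical_covariance(n_obs, unfold, n_toys=1000, seed=1):
    # Fluctuate each reconstructed bin independently with Poisson
    # statistics, push every pseudoexperiment through the full unfolding
    # chain, and take the covariance of the resulting spectra as the
    # bin-to-bin statistical covariance matrix.
    rng = np.random.default_rng(seed)
    toys = np.array([unfold(rng.poisson(n_obs)) for _ in range(n_toys)])
    return np.cov(toys, rowvar=False)
\end{verbatim}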
\subsection{Signal modeling uncertainties}
The signal modeling uncertainties are estimated by repeating the full analysis procedure,
using an alternative MC sample to derive the migration matrix
and the corrections for selection efficiency.
The differences between the results obtained using the alternative and nominal MC samples
are taken as systematic uncertainties.
At $\sqrt{s}$ = 7 TeV, the uncertainties due to the choice of generator are estimated by comparing
{\sc Powheg+Pythia} and {\sc MC@NLO+Herwig} signal MC samples.
The uncertainty is found to be up to 2\% in $\ensuremath{m_{\ttbar}}$ and $\ensuremath{|y_{\ttbar}|}$, and in the range of 2--19\% in $\ensuremath{\pt^{\ttbar}}$, increasing with $\ensuremath{\pt^{\ttbar}}$ due to the difference at parton level between the two MC $\ttbar$ samples in the high-$\ensuremath{\pt^{\ttbar}}$ region.
At \rts = 8 TeV, the uncertainties related to the generator
are estimated using {\sc Powheg+Herwig} and {\sc MC@NLO+Herwig} signal MC samples,
and the uncertainties due to parton shower and hadronization
are estimated using {\sc Powheg+Pythia} and {\sc Powheg+Herwig} signal MC samples.
These uncertainties are typically less than 10\% (3\%) in $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$ ($\ensuremath{|y_{\ttbar}|}$),
and increase to 20\% at large $\ensuremath{m_{\ttbar}}$ in the case of generator uncertainty.
The effects due to modeling of extra radiation in $\ttbar$ events
are assessed at both the matrix element and parton shower levels.
At $\sqrt{s}$ = 7 TeV, the uncertainty due to matrix element renormalization and factorization scales
is evaluated using {\sc MC@NLO+Herwig} samples with varied renormalization/factorization scales,
and the uncertainty due to parton showering in different initial-state and final-state radiation (ISR/FSR) conditions
is estimated using two different {\sc Alpgen+Pythia} samples
with varied radiation settings.
The overall effects in both cases are less than 1\% in $\ensuremath{|y_{\ttbar}|}$ and up to 6\% for $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$ with the larger values towards higher values of $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$.
At $\sqrt{s}$ = 8 TeV, the treatment of these uncertainties was improved by using {\sc Powheg+Pythia} samples with
tuned parameters to span the variations in radiation compatible with the ATLAS $\ttbar$ gap fraction measurements
at $\sqrt{s}$ = 7 TeV~\cite{TOPQ-2011-21} as discussed in detail in Ref.~\cite{ATL-PHYS-PUB-2014-005}.
The samples have varied renormalization/factorization scales and $h_{\rm damp}$ parameter values,
resulting in either more or less radiation than the nominal signal sample.
The overall impact is typically less than 2\% for all observables, and up to 4\% towards higher values of $\ensuremath{\pt^{\ttbar}}$.
The uncertainties due to the choice of PDFs,
which affect most significantly the signal selection efficiency,
are estimated
based on the PDF4LHC recommendations~\cite{Botje:2011sn}
using the {\sc MC@NLO+Herwig} sample with three different NLO PDF sets:
CT10~\cite{CT10}, MSTW2008nlo68cl~\cite{MSTW}, and NNPDF2.3~\cite{NNPDF}.
An intra-PDF uncertainty is obtained for each PDF set
by following its respective prescription while an inter-PDF
uncertainty is computed as the envelope of the three intra-PDF uncertainties.
The overall effect is less than 2\% for all observables in both the 7 TeV and 8 TeV measurements
(except for the highest $\ensuremath{|y_{\ttbar}|}$ bin at 8 TeV where the effect is up to 8\%).
The dependence of the $\ttbar$-system observables on the top quark mass $\ensuremath{m_{\mathrm{top}}}$ is evaluated at $\sqrt{s}=$ 7 TeV by unfolding the data with $\ttbar$ samples generated at mass points of 170 GeV and 175 GeV;
the difference between the results at the two mass points is divided by the mass difference $\Delta \ensuremath{m_{\mathrm{top}}}$
to extract the change of the differential cross-section per GeV of $\ensuremath{m_{\mathrm{top}}}$.
These studies show that the dependence of the differential cross-sections on $\ensuremath{m_{\mathrm{top}}}$ is
no more than 1\% per GeV for all kinematic observables.
These variations are not included in the total uncertainty.
\subsection{Background modeling uncertainties}
Uncertainties arising from the background estimates are evaluated by repeating
the full analysis procedure, varying the background contributions by $\pm 1\sigma$
from the nominal values.
The differences between the results obtained using the nominal and the varied background estimations
are taken as systematic uncertainties.
The uncertainties due to the $Wt$ background modeling are estimated
by comparing the inclusive ``diagram removal'' and inclusive ``diagram subtraction'' samples.
The uncertainty is typically below 1\%,
except for high $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$ bins where the uncertainty is up to about 5\% and 2\%, respectively.
The relative uncertainties of 7.7\% (7 TeV) and 6.8\% (8 TeV) in the predicted cross-section of $Wt$ production are applied in all bins of the differential cross-sections.
An uncertainty of 5\% is assigned to the predicted diboson cross-section, with an additional uncertainty of 24\% per additional selected jet added in quadrature to account for the assumption that the ($W$ + $n$ + 1 jets)/($W$ + $n$ jets) ratio is constant~\cite{Alwall:2007fs,TOPQ-2010-01}.
The overall impact of these uncertainties is less than 1\%.
For the $Z$+jets background, in the $\emu$ channel only the $Z(\rightarrow\tau\tau)$+jets process contributes,
while the $Z(\rightarrow\diel)$+jets ($Z(\rightarrow\dimu)$+jets) process contributes only to the $\diel$ ($\dimu$) channel.
An inclusive uncertainty of 4\% is assigned to the predicted cross-section of $Z(\rightarrow\tau\tau)$+jets,
with an additional uncertainty of 24\% per additional selected jet added in quadrature.
The $Z(\rightarrow\diel/\dimu)$+jets background is estimated by a data-driven method~\cite{TOPQ-2010-01,TOPQ-2011-01} that uses a control region populated with $Z$ events. The uncertainty is evaluated by varying the control region (defined by $|m_{\ell\ell}-m_Z|<10\GeV$ and $E_{\mathrm{T}}^{\rm miss}>30\GeV$) by $\pm$5 GeV in $E_{\mathrm{T}}^{\rm miss}$.
The overall impact of these uncertainties is less than 1\% in both the 7 TeV and 8 TeV measurements.
The fake-lepton contribution is estimated directly from data,
using a matrix method~\cite{TOPQ-2010-01} in 7 TeV data
and the same-sign dilepton events in the 8 TeV data sample~\cite{TOPQ-2013-04}.
In the 7 TeV analysis,
the uncertainty of the fake-lepton background is evaluated by
considering the uncertainties in the real- and fake-lepton efficiency measurements
and by comparing results obtained from different matrix methods.
In the 8 TeV analysis a conservative uncertainty of 50\% is assigned to the fake-lepton background~\cite{TOPQ-2013-04}.
The impact of the uncertainty is typically less than 1\% in all observables, except in high-$\ensuremath{m_{\ttbar}}$ and high-$\ensuremath{\pt^{\ttbar}}$ bins where it is up to
5\%.
\subsection{Detector modeling uncertainties}
The uncertainties due to the detector modeling
are estimated for each bin based on the methods described in Ref.~\cite{TOPQ-2013-04}.
They affect the detector response including signal reconstruction efficiency
and the estimation of background events that passed all event selections and their kinematic distribution.
The full analysis procedure is repeated with the varied detector modeling, and the difference between the results using the nominal and the varied modeling is taken as a systematic uncertainty.
The lepton reconstruction efficiency in simulation
is calibrated by correction factors derived from measurements
of these efficiencies in data using control regions enriched in $Z\rightarrow\ell\ell$ events.
The lepton trigger and reconstruction efficiency correction factors,
energy scale, and resolution
are varied within the uncertainties in the $Z\to \ell\ell$ measurements~\cite{PERF-2010-04,ATLAS-CONF-2013-088}.
The jet energy scale (JES) uncertainty is derived using a combination of simulations, test beam data and
{\it in situ} measurements~\cite{PERF-2012-01,PERF-2011-03,PERF-2011-05}.
Additional contributions from the jet flavor composition, calorimeter response
to different jet flavors, and pileup are taken into account.
Uncertainties in the jet energy resolution are
obtained with an {\it in situ} measurement of the jet response balance
in dijet events~\cite{PERF-2011-04}.
The difference in $b$-tagging efficiency between data and MC simulation is estimated
in lepton+jets $\ttbar$ events
with the selected jet containing a $b$-hadron on the leptonic side~\cite{ATLAS-CONF-2012-097}.
Correction factors are also applied for jets originating from light hadrons that are
misidentified as jets containing $b$-hadrons.
The associated systematic uncertainties are computed by varying the correction factors
within their uncertainties.
The uncertainty associated with \MET{}
is calculated by propagating the energy scale and resolution
systematic uncertainties to all jets and leptons in the \MET{} calculation.
Additional \MET{} uncertainties arising from energy deposits not associated
with any reconstructed objects are also included~\cite{PERF-2011-07}.
The uncertainty due to the finite size of the MC simulated samples is evaluated
by varying the content of the migration matrix according to a Poisson distribution.
The standard deviation of the ensemble of results unfolded with the varied matrices is taken as the uncertainty.
The effect is more significant in the 7 TeV analysis (up to 3\% in high-$\ensuremath{m_{\ttbar}}$ and high-$\ensuremath{\pt^{\ttbar}}$ bins),
due to the smaller size of the MC simulation sample available at 7 TeV.
In the 8 TeV analysis, while the MC statistical uncertainty is less significant (sub-percent overall),
an additional uncertainty is included to account for the bias introduced by the unfolding procedure
due to the observed deviation between data and the predicted $\ttbar$ events.
The typical size of the bias is less than 1\%, and increases towards higher $\ensuremath{m_{\ttbar}}$, $\ensuremath{\pt^{\ttbar}}$, and $\ensuremath{|y_{\ttbar}|}$ up to about 4\%.
The bias in the 7 TeV analysis is taken into account
by choosing an unfolding parameter based on the level of bias for an observable,
which is reflected in the data statistical uncertainty and thus not included as a systematic uncertainty.
The uncertainty in the integrated luminosity
is estimated to be
1.8\% for \rts = 7\,\TeV{}~\cite{DAPR-2011-01}
and 1.9\% for \rts = 8\,\TeV{}~\cite{DAPR-2013-01}.
The effect of the uncertainty is substantially reduced in the normalized differential cross-sections due to large bin-to-bin correlations.
\subsection{Summary of the main sources of systematic uncertainty}
For $\ensuremath{m_{\ttbar}}$, the largest systematic uncertainties come from
signal modeling (including generator choice,
parton showering and hadronization,
and extra radiation),
JES,
and $Wt$ background modeling (at large $\ensuremath{m_{\ttbar}}$).
The uncertainty due to signal modeling in $\ensuremath{m_{\ttbar}}$ is generally smaller at 7 TeV because the requirement on the jet-lepton invariant mass, which reduces the fraction of wrong-jet events used to reconstruct the $\ttbar$ system, is applied in the 7 TeV analysis but not in the 8 TeV analysis.
For $\ensuremath{\pt^{\ttbar}}$,
the uncertainty from signal modeling (including generator choice,
parton showering and hadronization,
and extra radiation)
is the largest, followed by JES.
The main uncertainties for $\ensuremath{|y_{\ttbar}|}$ come from PDF and signal generator choice.
\section{Results}
\label{sec:result}
The unfolded parton-level normalized differential cross-sections for $\sqrt{s}=7\TeV$ and $\sqrt{s}=8\TeV$ are shown
in Table~\ref{tab:summary_dxs_norm_ll}
and Table~\ref{tab:xsec_summary_norm}, respectively.
The total inclusive $\ttbar$ cross-sections, evaluated by integrating the spectra before the normalization, agree with the theoretical calculations and other inclusive measurements within uncertainties at both energies.
The estimated uncertainties include all sources discussed in Section~\ref{sec:uncertainties}.
Comparisons of the data distributions with different SM predictions
are quantified by computing $\chi^2$ values
and inferring $p$-values (the probability of obtaining a $\chi^2$ larger than or equal to the observed value)
from the $\chi^2$ values and the number of degrees of freedom (NDF).
The $\chi^2$ is defined as
\begin{equation}
\chi^{2}=V^{\rm T}\cdot {\rm Cov}^{-1}\cdot V
\end{equation}
where $V$ is the vector of the differences between the data and the theoretical predictions,
and ${\rm Cov}^{-1}$ is the inverse of the full bin-to-bin covariance matrix.
Due to the normalization constraint in the derivation of the normalized differential cross-sections,
the NDF and the rank of the covariance matrix
are reduced by one unit to $N_{\rm b}-1$,
where $N_{\rm b}$ is the number of bins in the spectrum being considered.
Consequently, one of the $N_{\rm b}$ elements in $V$,
together with the corresponding row and column of the $N_{\rm b} \times N_{\rm b}$ full covariance matrix ${\rm Cov}$,
is discarded;
the resulting $(N_{\rm b}-1) \times (N_{\rm b}-1)$ submatrix is invertible,
allowing the $\chi^2$ to be computed.
The $\chi^2$ value does not depend on which element is discarded in forming the vector $V_{N_{\rm b}-1}$
and the corresponding submatrix ${\rm Cov}_{N_{\rm b}-1}$.
The evaluation of $\chi^2$ under the normalization constraint
follows the same procedure as described in Refs.~\cite{TOPQ-2012-08,TOPQ-2015-06}.
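In practice the computation reduces to a few lines; the following Python sketch (our own illustration of the procedure described above, not analysis code) drops one bin, inverts the reduced covariance matrix, and returns the $\chi^2$:
\begin{verbatim}
import numpy as np

def chi2_normalized(data, pred, cov, drop=-1):
    # Because the bins of a normalized spectrum sum to a fixed total, the
    # full Nb x Nb covariance matrix is singular: drop one element of the
    # difference vector and the matching row/column of Cov before
    # inverting.  The resulting chi2 is independent of which bin is dropped.
    V = np.delete(np.asarray(data) - np.asarray(pred), drop)
    C = np.delete(np.delete(np.asarray(cov), drop, axis=0), drop, axis=1)
    return float(V @ np.linalg.solve(C, V))   # V^T Cov^-1 V
\end{verbatim}
The corresponding $p$-value can then be obtained as, e.g., \verb|scipy.stats.chi2.sf(x2, n_bins - 1)|, i.e. the upper tail of a $\chi^2$ distribution with $N_{\rm b}-1$ degrees of freedom.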
The comparisons of the measured normalized distributions to
predictions from different MC generators of $\ttbar$ production are shown graphically in
Figure~\ref{fig:dxs_norm_mc_ll} for $\sqrt{s}=7\TeV$
and
Figure~\ref{fig:Normalized} for $\sqrt{s}=8\TeV$, with the corresponding $p$-values, comparing the measured spectra to the predictions from the MC generators, given in Table~\ref{tab:chi2_dxs_norm_ll} and Table~\ref{tab:chi2_norm}.
Predictions from
{\sc Powheg+Pythia} with $h_{\rm damp} = \ensuremath{m_{\mathrm{top}}}$,
{\sc MC@NLO+Herwig},
{\sc Powheg+Pythia} with $h_{\rm damp} = \infty$,
and {\sc Powheg+Herwig}
are used for comparison with data.
In the 7 TeV analysis, {\sc Alpgen+Herwig} is also used for the comparison,
as it was the default sample used in the differential measurement in the $\ell$+jets channel by ATLAS~\cite{TOPQ-2012-08}.
Both NLO generators ({\sc Powheg} and {\sc MC@NLO}) use the NLO CT10~\cite{CT10} PDF set, while {\sc Alpgen+Herwig} uses the LO CTEQ6L1~\cite{cteq} PDF set.
Most of the generators agree with data in a wide kinematic range of the distributions.
The $\ensuremath{m_{\ttbar}}$ spectrum is well described by most of the generators
at both 7 TeV and 8 TeV,
except for {\sc Powheg+Pythia}
in the highest $\ensuremath{m_{\ttbar}}$ bin in the 7 TeV analysis.
For $\ensuremath{\pt^{\ttbar}}$, the agreement with {\sc Powheg+Pythia} with $h_{\rm damp} = \infty$ is particularly poor,
as it predicts a harder $\ensuremath{\pt^{\ttbar}}$ spectrum than observed in data at both 7 TeV and 8 TeV.
Better agreement with data is obtained from {\sc Powheg+Pythia} with $h_{\rm damp} = \ensuremath{m_{\mathrm{top}}}$.
This is consistent with the studies in Refs.~\cite{ATL-PHYS-PUB-2015-002,ATL-PHYS-PUB-2015-011}
using data from the $\sqrt{s}$ = 7 TeV ATLAS parton-level measurement in the $\ell$+jets channel~\cite{TOPQ-2012-08}.
In both the 7 TeV and 8 TeV analyses, {\sc MC@NLO+Herwig} also describes the $\ensuremath{\pt^{\ttbar}}$ spectrum well.
Similar good agreement is also observed in 7 TeV and 8 TeV parton-level measurements by ATLAS in the $\ell$+jets channel~\cite{TOPQ-2012-08,TOPQ-2015-06}.
For $\ensuremath{|y_{\ttbar}|}$, all the generators show fair agreement with data in the 7 TeV analysis,
while at 8 TeV, none of the generators provides an adequate description of $\ensuremath{|y_{\ttbar}|}$.
This difference in the level of agreement is due to the improved statistical precision and finer binning in $\ensuremath{|y_{\ttbar}|}$ for the 8 TeV analysis.
The increasing discrepancy between data and MC prediction with increasing $\ensuremath{|y_{\ttbar}|}$ is also observed at the reconstructed level for both energies, as shown in Figure~\ref{fig:ttbar_system_opt2} and Figure~\ref{fig:cplots}.
This observation is also consistent with the results of the ATLAS differential cross-section measurements in the $\ell$+jets channel, at both 7 and 8 TeV~\cite{TOPQ-2012-08,TOPQ-2015-06}.
Figure~\ref{fig:Normalized4PDF} shows the normalized differential cross-sections at $\sqrt{s}=8\TeV$
compared with the predictions of {\sc MC@NLO+Herwig} reweighted with different PDF sets:
CT10, MSTW2008nlo68cl, NNPDF2.3, and HERAPDF15NLO.
The hatched bands show the uncertainty of each PDF set.
All predictions are compatible with the measured cross-sections within the uncertainties in the
cases of $\ensuremath{m_{\ttbar}}$ and $\ensuremath{\pt^{\ttbar}}$.
However, for $\ensuremath{|y_{\ttbar}|}$,
the {\sc MC@NLO+Herwig} sample with the CT10 PDF set does not agree with the measured cross-sections
at $\ensuremath{|y_{\ttbar}|} \sim 1.6$.
Using NNPDF or HERAPDF significantly improves the agreement.
The corresponding $p$-values are shown in Table~\ref{tab:chi2_norm_pdf}.
Figure~\ref{fig:Normalized3IFSR} and Table~\ref{tab:chi2_norm_ifsr} show the comparison of the measured normalized differential cross-sections at $\sqrt{s}=8\TeV$
to {\sc Powheg+Pythia} with different levels of radiation.
The nominal sample (with $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$) and two other samples, one with lower radiation ($h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ and $\mu=2.0$) and one with higher radiation ($h_{\rm damp}=2.0\ensuremath{m_{\mathrm{top}}}$ and $\mu=0.5$) than the nominal one, are used in the comparison.
The $\ensuremath{\pt^{\ttbar}}$ spectrum, particularly sensitive to radiation activity, shows that
the nominal sample has better agreement with data.
This observation is also consistent with the studies in Refs.~\cite{ATL-PHYS-PUB-2015-002,ATL-PHYS-PUB-2015-011}.
The parton-level measured distributions are also compared to fixed-order QCD calculations.
Figure~\ref{fig:thll_dxs_norm_mtt_pttt} and Figure~\ref{fig:Normalized_NLO} show
the comparison with theoretical QCD NLO+NNLL predictions
for $\ensuremath{m_{\ttbar}}$~\cite{NLO_NNLL_mttbar}
and $\ensuremath{\pt^{\ttbar}}$~\cite{NLO_NNLL_ptttbar1,NLO_NNLL_ptttbar2}
distributions at $\sqrt{s}=7\TeV$ and $\sqrt{s}=8\TeV$, respectively,
and the corresponding $p$-values are given in Table~\ref{tab:chi2_dxs_norm_ll_th}.
The predictions are calculated using the mass of the $\ttbar$ system as the dynamic scale of the process
and the MSTW2008nnlo PDF~\cite{MSTW} set.
The NLO+NNLL calculation shows good agreement with the $\ensuremath{m_{\ttbar}}$ spectrum
and a large discrepancy at high values of $\ensuremath{\pt^{\ttbar}}$ at both $\sqrt{s}=7\TeV$ and $\sqrt{s}=8\TeV$.
Figure~\ref{fig:Normalized_NNLO} shows
the comparison of a full NNLO calculation~\cite{Czakon:2015owf} to the $\ensuremath{m_{\ttbar}}$ and $|\ensuremath{y_{\ttbar}}|$ measurements at $\sqrt{s}$ = 8 TeV.
The full NNLO calculation is evaluated using
the fixed
scale $\mu=\ensuremath{m_{\mathrm{top}}}$
and the MSTW2008nnlo PDF~\cite{MSTW}.
The range of the NNLO prediction does not fully cover the highest bins in $\ensuremath{m_{\ttbar}}$ and $|\ensuremath{y_{\ttbar}}|$ and thus no prediction is shown in those bins.
The $\sqrt{s}=7\TeV$ results,
together with previous results reported in $\ell$+jets channel by ATLAS~\cite{TOPQ-2012-08},
are summarized with the SM predictions in Figure~\ref{fig:summary_all_dxs_norm}.
This direct comparison is possible because the same binning of the $\ttbar$-system observables is used in both analyses.
All distributions are plotted as ratios with respect to dilepton channel results.
The normalized results from both the dilepton and $\ell$+jets channels are consistent with each other in all $\ttbar$-system variables within the uncertainties of the measurements.
\begin{table}[htbp]
\centering
\sisetup{round-mode=figures, round-precision=2, retain-explicit-plus=true, separate-uncertainty, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\begin{tabular}{cccc}
\toprule
{$\ensuremath{m_{\ttbar}}$ [GeV]} & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{m_{\ttbar}}}$ [$10^{-3}\GeV^{-1}$]} & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
250--450 & \numRP{2.41426}{2}$\pm$\numRP{0.0802242}{2} & $\pm$\num{1.59179} & $\pm$\num{2.91686} \\
450--550 & \numRP{2.78669}{2}$\pm$\numRP{0.0485375}{2} & $\pm$\num{1.40859} & $\pm$\num{1.02449} \\
550--700 & \numRP{1.09332}{2}$\pm$\numRP{0.0604522}{2} & $\pm$\num{3.09392} & $\pm$\num{4.58259} \\
700--950 & \numRP{0.251502}{3}$\pm$\numRP{0.0231462}{3} & $\pm$\num{5.73509} & $\pm$\num{7.19774} \\
950--2700 & \numRP{0.00663185}{4}$\pm$\numRP{0.00141049}{4} & $\pm$\num{16.2267} & $\pm$\num{13.7493} \\
& & & \\
\toprule
{$\ensuremath{\pt^{\ttbar}}$ [GeV]} & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{\pt^{\ttbar}}}$ [$10^{-3}\GeV^{-1}$]} & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
0--40 & \numRP{13.492}{1}$\pm$\numRP{0.653202}{1} & $\pm$\num{1.18917} & $\pm$\num{4.6931} \\
40--170 & \numRP{3.14436}{2}$\pm$\numRP{0.167031}{2} & $\pm$\num{1.48227} & $\pm$\num{5.10108} \\
170--340 & \numRP{0.268961}{3}$\pm$\numRP{0.0327834}{3} & $\pm$\num{6.14172} & $\pm$\num{10.5284} \\
340--1000 & \numRP{0.00883629}{4}$\pm$\numRP{0.00256648}{4} & $\pm$\num{18.6101} & $\pm$\num{22.2994} \\
& & & \\
\toprule
{$|\ensuremath{y_{\ttbar}}|$ } & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{|y_{\ttbar}|}}$ } & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
0--0.5 & \numRP{0.825575}{3}$\pm$\numRP{0.0194582}{3} & $\pm$\num{1.89434} & $\pm$\num{1.40235} \\
0.5--1 & \numRP{0.64306}{3}$\pm$\numRP{0.0179726}{3} & $\pm$\num{1.84211} & $\pm$\num{2.10187} \\
1--2.5 & \numRP{0.177121}{3}$\pm$\numRP{0.00729457}{3} & $\pm$\num{2.8409} & $\pm$\num{2.9817} \\
\bottomrule
\end{tabular}
\caption{Normalized $\ttbar$ differential cross-sections for the different $\ttbar$ kinematic variables at $\sqrt{s}$ = 7 TeV.
The cross-sections in the last bins include events (if any) beyond the bin edges.
The uncertainties quoted in the second column represent the statistical and systematic uncertainties added in quadrature.
}
\label{tab:summary_dxs_norm_ll}
\end{table}
\begin{table}[htbp]
\begin{center}
\sisetup{round-mode=figures, round-precision=2, retain-explicit-plus=true, separate-uncertainty, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\begin{tabular}{cccc}
\toprule
{$\ensuremath{m_{\ttbar}}$ [GeV]} & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{m_{\ttbar}}}$ [$10^{-3}\GeV^{-1}$]} & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
250--450 & \numRP{2.4074325}{2}$\pm$\numRP{0.06527}{2} & $\pm$\num{1.13651} & $\pm$\num{6.0409} \\
450--570 & \numRP{2.55699}{2}$\pm$\numRP{0.05376}{2} & $\pm$\num{1.0528} & $\pm$\num{1.8587} \\
570--700 & \numRP{0.967977}{2}$\pm$\numRP{0.08342}{2} & $\pm$\num{1.6335} & $\pm$\num{8.4166} \\
700--850 & \numRP{0.345951}{2}$\pm$\numRP{0.04515}{2} & $\pm$\num{2.53026} & $\pm$\num{12.6573} \\
850--1000 & \numRP{0.128619}{3}$\pm$\numRP{0.02249}{3} & $\pm$\num{3.55797} & $\pm$\num{16.9644} \\
1000--2700 & \numRP{0.00861843}{4}$\pm$\numRP{0.00243}{4} & $\pm$\num{6.58544} & $\pm$\num{23.4147} \\
& & & \\
\toprule
{$\ensuremath{\pt^{\ttbar}}$ [GeV]} & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{\pt^{\ttbar}}}$ [$10^{-3}\GeV^{-1}$]} & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
0--30 & \numRP{14.3409}{1}$\pm$\numRP{1.0038}{1} & $\pm$\num{1.15801} & $\pm$\num{6.8662} \\
30--70 & \numRP{7.59835}{2}$\pm$\numRP{0.1596}{2} & $\pm$\num{1.08914} & $\pm$\num{1.8510} \\
70--120 & \numRP{2.94231}{2}$\pm$\numRP{0.2793}{2} & $\pm$\num{1.75359} & $\pm$\num{9.2875} \\
120--180 & \numRP{1.13949}{2}$\pm$\numRP{0.11514}{2} & $\pm$\num{2.65273} & $\pm$\num{9.4565} \\
180--250 & \numRP{0.423467}{2}$\pm$\numRP{0.0441}{2} & $\pm$\num{4.01751} & $\pm$\num{9.7130} \\
250--350 & \numRP{0.142997}{3}$\pm$\numRP{0.01806}{3} & $\pm$\num{6.0417} & $\pm$\num{11.4005} \\
350--1000 & \numRP{0.00986377}{4}$\pm$\numRP{0.00147}{4} & $\pm$\num{8.93817} & $\pm$\num{11.6486} \\
& & & \\
\toprule
{$\ensuremath{|y_{\ttbar}|}$} & {$\frac{1}{\sigma} \frac{d\sigma}{d\ensuremath{|y_{\ttbar}|}}$} & {Stat. [\%]} & {Syst. [\%]} \\
\midrule
0.0--0.4 & \numRP{0.821127}{3}$\pm$\numRP{0.02134938}{3} & $\pm$\num{1.29998} & $\pm$\num{2.2403} \\
0.4--0.8 & \numRP{0.721364}{3}$\pm$\numRP{0.018034}{3} & $\pm$\num{1.34046} & $\pm$\num{2.1394} \\
0.8--1.2 & \numRP{0.499483}{3}$\pm$\numRP{0.01298648}{3} & $\pm$\num{1.64534} & $\pm$\num{2.0287} \\
1.2--2.0 & \numRP{0.206433}{3}$\pm$\numRP{0.0061929}{3} & $\pm$\num{2.38768} & $\pm$\num{1.8787} \\
2.0--2.8 & \numRP{0.0225804}{4}$\pm$\numRP{0.00230316}{4} & $\pm$\num{8.26282} & $\pm$\num{9.878} \\
\bottomrule
\end{tabular}
\caption{Normalized $\ttbar$ differential cross-sections for the different $\ttbar$ kinematic variables at $\sqrt{s}$ = 8 TeV.
The uncertainties quoted in the second column represent the statistical and systematic uncertainties added in quadrature.
}
\label{tab:xsec_summary_norm}
\end{center}
\end{table}
\clearpage
\begin{figure}[htbp]
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_04a}
\label{fig:dxs_mc_norm_ll_mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_04b}
\label{fig:dxs_mc_norm_ll_pttt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.45\textwidth]{fig_04c}
\label{fig:dxs_mc_norm_ll_absytt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:dxs_mc_norm_ll_mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
\protect\subref{fig:dxs_mc_norm_ll_pttt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
and \protect\subref{fig:dxs_mc_norm_ll_absytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$)
of the $\ttbar$ system at $\sqrt{s}=7$ TeV
measured in the dilepton channel
compared to theoretical predictions from MC generators.
All generators use the NLO CT10~\cite{CT10} PDF, except for {\sc Alpgen+Herwig}, which uses the LO CTEQ6L1 PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:dxs_norm_mc_ll}
\end{figure}
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_05a}
\label{fig:Normalized_Mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_05b}
\label{fig:Normalized_pTtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.45\textwidth]{fig_05c}
\label{fig:Normalized_ytt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:Normalized_Mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
\protect\subref{fig:Normalized_pTtt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
and \protect\subref{fig:Normalized_ytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$)
of the $\ttbar$ system at $\sqrt{s}=8$ TeV
measured in the dilepton $\emu$ channel
compared to theoretical predictions from MC generators.
All generators use the NLO CT10~\cite{CT10} PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:Normalized}
\end{figure}
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_06a}
\label{fig:Normalized_PDF_Mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_06b}
\label{fig:Normalized_PDF_pTtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.45\textwidth]{fig_06c}
\label{fig:Normalized_PDF_ytt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:Normalized_PDF_Mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
\protect\subref{fig:Normalized_PDF_pTtt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
and \protect\subref{fig:Normalized_PDF_ytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$)
of the $\ttbar$ system at $\sqrt{s}=8$ TeV
measured in the dilepton $\emu$ channel
compared to different PDF sets.
The {\sc MC@NLO+Herwig} generator is reweighted using the PDF sets to produce the different predictions.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:Normalized4PDF}
\end{figure}
\begin{figure}[htbp]
\captionsetup[subfloat]{farskip=2pt,captionskip=1pt}
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_07a}
\label{fig:Normalized_IFSR_Mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_07b}
\label{fig:Normalized_IFSR_pTtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.45\textwidth]{fig_07c}
\label{fig:Normalized_IFSR_ytt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:Normalized_IFSR_Mtt} invariant mass ($\ensuremath{m_{\ttbar}}$),
\protect\subref{fig:Normalized_IFSR_pTtt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$),
and \protect\subref{fig:Normalized_IFSR_ytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$),
of the $\ttbar$ system at $\sqrt{s}=8$ TeV
measured in the dilepton $\emu$ channel
compared to theoretical predictions from MC generators.
The {\sc Powheg+Pythia} generator with different levels of radiation
is used for the predictions.
All samples use the NLO CT10~\cite{CT10} PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:Normalized3IFSR}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_08a}
\label{fig:thll_dxs_norm_mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_08b}
\label{fig:thll_dxs_norm_pttt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:thll_dxs_norm_mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
and \protect\subref{fig:thll_dxs_norm_pttt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
of the $\ttbar$ system at $\sqrt{s}=7$ TeV
measured in the dilepton channel
compared with theoretical QCD calculations at NLO+NNLL level.
The predictions are calculated using the MSTW2008nnlo PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:thll_dxs_norm_mtt_pttt}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_09a}
\label{fig:Normalized_NLO_Mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_09b}
\label{fig:Normalized_NLO_pTtt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:Normalized_NLO_Mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
and \protect\subref{fig:Normalized_NLO_pTtt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
of the $\ttbar$ system at $\sqrt{s}=8$ TeV
measured in the dilepton $\emu$ channel
compared with theoretical QCD calculations at NLO+NNLL level.
The predictions are calculated using the MSTW2008nnlo PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
}
\label{fig:Normalized_NLO}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.45\textwidth]{fig_10a}
\label{fig:Normalized_NNLO_Mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.45\textwidth]{fig_10b}
\label{fig:Normalized_NNLO_ytt}
}
\caption{
Normalized $\ttbar$ differential cross-sections as a function of the
\protect\subref{fig:Normalized_NNLO_Mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
and \protect\subref{fig:Normalized_NNLO_ytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$)
of the $\ttbar$ system at $\sqrt{s}=8$ TeV
measured in the dilepton $\emu$ channel
compared with theoretical QCD calculations at full NNLO accuracy.
The predictions are calculated using the MSTW2008nnlo PDF.
The bottom panel shows the ratio of prediction to data.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
The NNLO prediction does not cover the highest bins in $\ensuremath{m_{\ttbar}}$ and $|\ensuremath{y_{\ttbar}}|$.
}
\label{fig:Normalized_NNLO}
\end{figure}
\clearpage
\begin{table}[htbp]
\centering
\sisetup{round-mode=places, round-precision=2, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\scalebox{0.9}
{
\begin{tabular}{c|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{$\ensuremath{m_{\ttbar}}$} & \multicolumn{2}{c|}{$\ensuremath{\pt^{\ttbar}}$} & \multicolumn{2}{c}{$\ensuremath{|y_{\ttbar}|}$} \\
\midrule
MC generator & {$\chi^2$/NDF} & {$p$-value} & {$\chi^2$/NDF }& {$p$-value} & {$\chi^2$/NDF} & {$p$-value} \\
\midrule
PWG+PY6 CT10 $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ &
\numRP{4.69644/4}{1} & \num{0.319885} &
\numRP{2.23623/3}{1} & \num{0.524847} &
\numRP{1.31152/2}{1} & \num{0.519047} \\
PWG+PY6 CT10 $h_{\rm damp}=\infty$ &
\numRP{4.39173/4}{1} & \num{0.355579} &
\numRP{6.37951/3}{1} & \num{0.0945374} &
\numRP{1.28319/2}{1} & \num{0.526452} \\
MC@NLO+HW CT10 AUET2 &
\numRP{3.85227/4}{1} & \num{0.426368} &
\numRP{0.775003/3}{1} & \num{0.855436} &
\numRP{0.665769/2}{1} & \num{0.716853} \\
PWG+HW CT10 AUET2 &
\numRP{9.07335/4}{1} & \num{0.0592921} &
\numRP{1.85955/3}{1} & \num{0.602063} &
\numRP{1.17449/2}{1} & \num{0.555856} \\
ALPGEN+HW CTEQ6L1 AUET2 &
\numRP{4.27043/4}{1} & \num{0.370642} &
\numRP{3.30766/3}{1} & \num{0.346578} &
\numRP{0.453504/2}{1} & \num{0.797118} \\
\bottomrule
\end{tabular}
}
\caption{
Comparisons between the measured normalized cross-sections and the MC predictions at $\sqrt{s}$ = 7 TeV.
For each variable and prediction a $\chi^2$ and a $p$-value are calculated using the covariance matrix of each measured spectrum.
The number of degrees of freedom is equal to one less than the number of bins $(N_{\rm b}-1)$.
The abbreviations PWG, PY and HW correspond to {\sc Powheg}, {\sc Pythia} and {\sc Herwig} respectively.
}
\label{tab:chi2_dxs_norm_ll}
\end{table}
\begin{table}[htbp]
\begin{center}
\sisetup{round-mode=places, round-precision=2, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\scalebox{0.9}{
\begin{tabular}{c|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{$\ensuremath{m_{\ttbar}}$} & \multicolumn{2}{c|}{$\ensuremath{\pt^{\ttbar}}$} & \multicolumn{2}{c}{$\ensuremath{|y_{\ttbar}|}$} \\
\midrule
MC generator & {$\chi^2$/NDF} & {$p$-value} & {$\chi^2$/NDF }& {$p$-value} & {$\chi^2$/NDF} & {$p$-value} \\
\midrule
PWG+PY6 CT10 $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ &
\numRP{1.29354/5}{1} & \num{0.935595} &
\numRP{4.07882/6}{1} & \num{0.666011} &
\numRP{38.2019/4}{1} & \num{< 0.01}\\
PWG+PY6 CT10 $h_{\rm damp}=\infty$ &
\numRP{1.14644/5}{1} & \num{0.949912} &
\numRP{16.6705/6}{1} & \num{0.0105737} &
\numRP{39.3264/4}{1} & \num{< 0.01}\\
MC@NLO+HW CT10 AUET2 &
\numRP{1.98943/5}{1} & \num{0.850606} &
\numRP{0.424909/6}{1} & \num{0.998636} &
\numRP{29.8026/4}{1} & \num{< 0.01}\\
PWG+HW CT10 AUET2 &
\numRP{1.19053/5}{1} & \num{0.945782} &
\numRP{3.32858/6}{1} & \num{0.766619} &
\numRP{36.9976/4}{1} & \num{< 0.01}\\
\bottomrule
\end{tabular}
}
\caption{
Comparisons between the measured normalized cross-sections and the MC predictions at $\sqrt{s}$ = 8 TeV.
For each variable and prediction a $\chi^2$ and a $p$-value are calculated using the covariance matrix of each measured spectrum.
The number of degrees of freedom is equal to one less than the number of bins $(N_{\rm b}-1)$.
The abbreviations PWG, PY and HW correspond to {\sc Powheg}, {\sc Pythia} and {\sc Herwig} respectively.
}
\label{tab:chi2_norm}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\sisetup{round-mode=places, round-precision=2, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\scalebox{0.9}{
\begin{tabular}{c|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{$\ensuremath{m_{\ttbar}}$} & \multicolumn{2}{c|}{$\ensuremath{\pt^{\ttbar}}$} & \multicolumn{2}{c}{$\ensuremath{|y_{\ttbar}|}$} \\
\midrule
PDF & {$\chi^2$/NDF} & {$p$-value} & {$\chi^2$/NDF }& {$p$-value} & {$\chi^2$/NDF} & {$p$-value} \\
\midrule
CT10 NLO &
\numRP{1.98943/5}{1} & \num{0.850606} &
\numRP{0.424909/6}{1} & \num{0.998636} &
\numRP{29.8026/4}{1} & \num{< 0.01}\\
MSTW2008nlo &
\numRP{2.13175/5}{1} & \num{0.830632} &
\numRP{0.583443/6}{1} & \num{0.99667} &
\numRP{11.5989/4}{1} & \num{0.0205972}\\
NNPDF23nlo &
\numRP{2.2889/5}{1} & \num{0.807896} &
\numRP{0.421544/6}{1} & \num{0.998666} &
\numRP{3.1781/4}{1} & \num{0.528475}\\
HERAPDF15NLO &
\numRP{2.37959/5}{1} & \num{0.794509} &
\numRP{2.27062/6}{1} & \num{0.893204} &
\numRP{5.63515/4}{1} & \num{0.228103}\\
\bottomrule
\end{tabular}
}
\caption{
Comparisons between the measured normalized cross-sections and the {\sc MC@NLO+Herwig} predictions with varied PDF sets at $\sqrt{s}$ = 8 TeV.
For each variable and prediction a $\chi^2$ and a $p$-value are calculated using the covariance matrix of each measured spectrum.
The number of degrees of freedom is equal to one less than the number of bins $(N_{\rm b}-1)$. }
\label{tab:chi2_norm_pdf}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\sisetup{round-mode=places, round-precision=2, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\scalebox{0.9}{
\begin{tabular}{c|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{$\ensuremath{m_{\ttbar}}$} & \multicolumn{2}{c|}{$\ensuremath{\pt^{\ttbar}}$} & \multicolumn{2}{c}{$\ensuremath{|y_{\ttbar}|}$} \\
\midrule
MC generator & {$\chi^2$/NDF} & {$p$-value} & {$\chi^2$/NDF }& {$p$-value} & {$\chi^2$/NDF} & {$p$-value} \\
\midrule
PWG+PY6 CT10 $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ &
\numRP{1.29354/5}{1} & \num{0.935595} &
\numRP{4.07882/6}{1} & \num{0.666011} &
\numRP{38.2019/4}{1} & \num{< 0.01}\\
PWG+PY6 CT10 $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$, $\mu=2\ensuremath{m_{\mathrm{top}}}$ &
\numRP{0.933353/5}{1} & \num{0.96776} &
\numRP{14.4779/6}{1} & \num{0.0247303} &
\numRP{39.8503/4}{1} & \num{< 0.01}\\
PWG+PY6 CT10 $h_{\rm damp}=2.0\ensuremath{m_{\mathrm{top}}}$, $\mu=0.5\ensuremath{m_{\mathrm{top}}}$ &
\numRP{1.64397/5}{1} & \num{0.895881} &
\numRP{9.69229/6}{1} & \num{0.138223} &
\numRP{33.8276/4}{1} & \num{< 0.01}\\
\bottomrule
\end{tabular}
}
\caption{
Comparisons between the measured normalized cross-sections and the {\sc Powheg+Pythia} predictions with different levels of radiation at $\sqrt{s}$ = 8 TeV.
For each variable and prediction a $\chi^2$ and a $p$-value are calculated using the covariance matrix of each measured spectrum.
The number of degrees of freedom is equal to one less than the number of bins $(N_{\rm b}-1)$.
The abbreviations PWG and PY correspond to {\sc Powheg} and {\sc Pythia} respectively.
}
\label{tab:chi2_norm_ifsr}
\end{center}
\end{table}
\begin{table}[htbp]
\centering
\sisetup{round-mode=places, round-precision=2, retain-explicit-plus=true, group-integer-digits=true, group-decimal-digits=false, group-four-digits=true}
\scalebox{0.9}
{
\begin{tabular}{c|cc|cc}
\toprule
& \multicolumn{2}{c|}{$\ensuremath{m_{\ttbar}}$} & \multicolumn{2}{c}{$\ensuremath{\pt^{\ttbar}}$} \\
\midrule
QCD calculation & {$\chi^2$/NDF} & {$p$-value} & {$\chi^2$/NDF }& {$p$-value} \\
\midrule
NLO+NNLL ($\sqrt{s}$ = 7 TeV) &
\numRP{4.99566/4}{1} & \num{0.287743} &
\numRP{14.3025/3}{1} & \num{< 0.01} \\
NLO+NNLL ($\sqrt{s}$ = 8 TeV) &
\numRP{5.86843/5}{1} & \num{0.319232} &
\numRP{121.534/6}{1} & \num{< 0.01} \\
\bottomrule
\end{tabular}
}
\caption{
Comparisons between the measured normalized cross-sections and the QCD NLO+NNLL calculations at $\sqrt{s}$ = 7 TeV and $\sqrt{s}$ = 8 TeV.
The NLO+NNLL predictions are calculated using the MSTW2008nnlo PDF.
For each variable and prediction a $\chi^2$ and a $p$-value are calculated using the covariance matrix of each measured spectrum.
The number of degrees of freedom is equal to one less than the number of bins $(N_{\rm b}-1)$.
}
\label{tab:chi2_dxs_norm_ll_th}
\end{table}
\clearpage
\begin{figure}[htbp]
\centering
\subfloat[$\ensuremath{m_{\ttbar}}$]{
\includegraphics[width=0.42\textwidth]{fig_11a}
\label{fig:summary_ljll_dxs_norm_mtt}
}
\hspace{8pt}
\subfloat[$\ensuremath{\pt^{\ttbar}}$]{
\includegraphics[width=0.42\textwidth]{fig_11b}
\label{fig:summary_ljll_dxs_norm_pttt}
}
\hspace{8pt}
\subfloat[$\ensuremath{|y_{\ttbar}|}$]{
\includegraphics[width=0.42\textwidth]{fig_11c}
\label{fig:summary_ljll_dxs_norm_absytt}
}
\caption{
Ratio of different theoretical predictions and the lepton+jets measurement~\cite{TOPQ-2012-08} to the measurement of the normalized $\ttbar$ differential cross-sections in the dilepton channel for
\protect\subref{fig:summary_ljll_dxs_norm_mtt} invariant mass ($\ensuremath{m_{\ttbar}}$)
\protect\subref{fig:summary_ljll_dxs_norm_pttt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$)
and \protect\subref{fig:summary_ljll_dxs_norm_absytt} absolute value of the rapidity ($\ensuremath{|y_{\ttbar}|}$)
of the $\ttbar$ system at $\sqrt{s}=7$ TeV.
Theoretical QCD calculations at NLO+NNLL level
are also included
in $\ensuremath{m_{\ttbar}}$
and $\ensuremath{\pt^{\ttbar}}$.
All generators use the NLO CT10~\cite{CT10} PDF, except for {\sc Alpgen+Herwig}, which uses the LO CTEQ6L1 PDF.
The NLO+NNLL calculations use the MSTW2008nnlo PDF.
The light (dark) gray band includes the total (data statistical) uncertainty in the data in each bin.
The uncertainties on the two data measurements do not account for the correlations of the systematic uncertainties between the two channels.
}
\label{fig:summary_all_dxs_norm}
\end{figure}
\FloatBarrier
\section{Conclusions}
\label{sec:conclusion}
Normalized differential $\ttbar$ production cross-sections
have been measured
as a function of the invariant mass, the transverse momentum, and the rapidity of the $\ttbar$ system
in \rts{} = 7 TeV and 8 TeV proton-proton collisions
using the dilepton channel.
The data correspond to an integrated luminosity of 4.6\,\ifb{} and 20.2\,\ifb{}
for \rts{} = 7 TeV and 8 TeV, respectively,
collected by the ATLAS detector at the CERN LHC.
The results complement the other ATLAS measurements in the lepton+jets channel
using the 7 TeV
and 8 TeV
datasets.
The predictions from Monte Carlo and QCD calculations generally agree with data in a wide range of the kinematic distributions.
Most of the generators describe the $\ensuremath{m_{\ttbar}}$ spectrum fairly well in 7 TeV and 8 TeV data.
The $\ensuremath{\pt^{\ttbar}}$ spectrum in both 7 TeV and 8 TeV data is well described by
{\sc Powheg+Pythia} with $h_{\rm damp}=\ensuremath{m_{\mathrm{top}}}$ and {\sc MC@NLO+Herwig},
but is particularly poorly described by {\sc Powheg+Pythia} with $h_{\rm damp}=\infty$.
For $\ensuremath{|y_{\ttbar}|}$, all of the generators predict higher cross-sections at large $\ensuremath{|y_{\ttbar}|}$
than observed in data,
and the level of agreement is improved when using
NNPDF2.3 and HERAPDF1.5 PDF sets
instead of CT10.
The fixed-order QCD calculations agree well with data in the $\ensuremath{m_{\ttbar}}$ spectrum
at both NLO+NNLL and NNLO accuracy,
while a large discrepancy in $\ensuremath{\pt^{\ttbar}}$ is seen at NLO+NNLL accuracy at both $\sqrt{s}=7\TeV$ and $\sqrt{s}=8\TeV$.
The results at both 7 TeV and 8 TeV are consistent with the other ATLAS measurements in the lepton+jets channel.
\section*{Acknowledgements}
We honor the memory of our colleague Irene Vichou, who made a large contribution to this work, but died shortly before its completion.
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF, I-CORE and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, the Canada Council, CANARIE, CRC, Compute Canada, FQRNT, and the Ontario Innovation Trust, Canada; EPLANET, ERC, FP7, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, R{\'e}gion Auvergne and Fondation Partager le Savoir, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; BSF, GIF and Minerva, Israel; BRF, Norway; Generalitat de Catalunya, Generalitat Valenciana, Spain; the Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-GEN-PUB-2016-002}.
\printbibliography
\newpage \input{atlas_authlist}
\end{document}
\section{Introduction}
\blfootnote{\footnotesize{Website: \url{https://simongiebenhain.github.io/NPHM}}}
Human faces and heads lie at the core of human visual perception, and hence are key to creating digital replicas of a person's identity, likeness, and appearance.
In particular, 3D reconstruction of human heads from sparse inputs, such as point clouds, is central to a wide range of applications in the context of gaming, augmented and virtual reality, and digitization in our modern digital era.
One of the most successful lines of research addressing this challenging problem is parametric face models, which represent both shape identities and expressions through a low-dimensional parametric space.
These blendshape and 3D morphable models have achieved remarkable success, since they can be fitted to sparse inputs, regularize out noise, and provide a compact 3D representation.
As a result, many practical settings could be realized, ranging from face tracking and 3D avatar creation to facial-reenactment applications~\cite{zollhofer2018state}.
Traditionally, these parametric models, such as blendshapes and 3D morphable models (3DMMs), are based on a low-rank approximation of the underlying 3D mesh geometry.
To this end, a given template mesh with a fixed topology is non-rigidly registered to a series of 3D scans of human faces at training time.
From this template registration, a parametric model can be computed using dimensionality reduction methods such as principal component analysis (PCA).
The quality of the resulting parametric space depends strongly on the quality of the 3D scans, their registration, and the ability to properly disentangle shape identity and expression parameters.
While these PCA-based models are excellent at regularizing out noise when fitting to noisy input point clouds, their inherent limitation lies in their inability to represent local surface detail and the reliance on a template mesh of fixed topology.
As a result, fitted test-time models lack high-frequency surface detail and are typically limited to the frontal face region, excluding, for instance, the hair of the human head.
In this work, we propose neural parametric head models (NPHM), which represent complete human head geometry in a canonical space using an SDF, and morph the resulting geometry to posed space using a forward deformation field.
By decoupling the human head representation into these two spaces, we are able to learn disentangled latent spaces -- one of the core concepts of 3DMMs.
Furthermore, we decompose the implicit geometry representation in canonical space into an ensemble of local MLPs to encode high-frequency geometric detail.
Each part is represented by a small MLP that operates in a local coordinate system centered around face keypoints.
Additionally, we exploit face symmetry by sharing network weights of symmetric regions.
This decomposition into separate parts imposes a strong geometric prior on our model, and helps to both improve generalization and provide higher levels of detail.
In order to train our model, we capture a new high-fidelity head dataset with a high-end capture rig, which is composed of over 2200~ 3D head scans from 124~ different people.
After rigidly aligning all scans in a canonical coordinate system, we train our identity network on scans in canonical expression.
In order to train the deformation network, we non-rigidly register each scan against a template mesh, which we in turn use as training data for our neural deformation model.
At inference time, we can then fit our model to a given input point cloud by optimizing for the latent code parameters for both expression and identity.
In a series of experiments, we demonstrate that our neural parametric model captures significantly more detail than state-of-the-art models, including fine-scale geometric features.
\smallskip\noindent
In sum, our contributions are as follows:
\vspace{-0.1cm}
\begin{itemize}
\setlength\itemsep{-.3em}
\item We introduce a novel 3D dataset captured with a high-end capture rig, including over 2200~ 3D scans of human heads from 124~ different identities.
\item We propose a new neural-field-based parametric head representation, which facilitates high-fidelity local details through an ensemble of local implicit models.
\item We demonstrate that our neural parametric head model can be robustly fit to range data, regularize out noise, and outperform existing models by a significant margin in terms of fitting accuracy.%
\end{itemize}
\section{Related Work}
\noindent{\bf{3D morphable face and head models.}}
The seminal work of Blanz and Vetter~\cite{blanz1999morphable} was one of the first to introduce a model-based approach to represent variations in human faces.
The model was built upon PCA using 200 face scans, where correspondences were established via optical flow.
Since the scans were captured in constrained environments, the expressiveness of the model was relatively limited.
As such, improvements in the registration~\cite{paysan20093d} as well as use of data captured in the wild~\cite{booth20173d,booth20163d,ploumpis2019combining} led to significant advances.
Thereafter, more advanced face models were introduced, including multilinear models of identity and expression~\cite{bolkart2015groupwise,brunton2014multilinear}, as well as models that combined linear shape spaces with articulated head parts~\cite{FLAME}.
With the advent of deep learning, various works focused on extending face and head 3DMMs beyond linear spaces.
To this end, convolutional neural network based architectures have been proposed to both regress the model parameters and reconstruct the face~\cite{tran2018nonlinear,tran2019learning,tran2019towards}.
At the same time, graph convolutions~\cite{bouritsas2019neural,gong2019spiralnet++}
and attention modules~\cite{chen2021learning} have been proposed to model the head mesh geometry.
\noindent{\bf{Neural field representations.}}
Neural field-based networks have emerged as an efficient way to implicitly represent 3D scenes.
In contrast to explicit representations (e.g., meshes or voxel grids), neural fields are well-suited to represent geometries of arbitrary topology.
Park et al.~\cite{park2019deepsdf} proposed to represent a class-specific SDF with an MLP that is conditioned on a latent variable.
Similarly, Mescheder et al.~\cite{mescheder2019occupancy} implicitly define a surface as the decision boundary of a binary classifier and Mildenhall et al.~\cite{mildenhall2021nerf} represent a radiance field using an MLP by supervising a photometric loss on the rendered images.
\begin{figure*}[htb]
\centering
\vspace{-0.3cm}
\includegraphics[width=\textwidth]{figures/dataset_small_3.pdf}
\vspace{-0.6cm}
\caption{3D head scans from our newly-captured dataset: for each person (rows), we first capture a neutral pose, followed by several scans in different expressions (columns). We aim to keep expressions consistent across different identities. Overall, our dataset has more than 2200~ 3D scans from 124~ people.%
}
\label{fig:dataset_figure}
\end{figure*}
Building upon these approaches, a series of works focus on modeling deformations.
These methods use a separate network to model the deformations that occur in a sequence (e.g.,~\cite{park2021nerfies,hypernerf}), and have been successfully applied to animation of human bodies~\cite{liu2021neural, li2022tava} and heads~\cite{IMavatars}.
Following this paradigm, a number of neural parametric models have been proposed for bodies~\cite{gDNA,NPM,SPAM}, faces~\cite{ImFace}, and ---most closely related to our work--- heads~\cite{i3DMM,wang2022morf,ramon2021h3d}.
For instance, H3D-Net~\cite{ramon2021h3d} and MoRF~\cite{wang2022morf} proposed 3D generative models of heads, but do not account for expression-specific deformations.
Recently, neural parametric models for human faces~\cite{i3DMM,ImFace} and bodies~\cite{NPM,SPAM,chen2021snarf, gDNA} have explored combinations of SDFs and deformation fields, to produce complex non-linear deformations, while maintaining the flexibility of an implicit geometry representation.
Our work is greatly inspired by these lines of work; the key difference, however, is that we tailor our neural field representation specifically to human heads through an ensemble of local MLPs. In this regard, our work is also related to local conditioning methods for neural fields~\cite{genova2020local, peng2020convolutional, AIR-Nets, DeepLocalShapes}.
\section{Dataset Acquisition}
\label{sec:data_capture}
\input{tables/tab_dataset}
Our dataset comprises 124~ subjects, 25\% of whom are female, and contains over 2200~ 3D scans; see Table~\ref{tab:dataset}.
Our 3D head scans show great levels of detail and completeness, as shown in Fig.~\ref{fig:dataset_figure}.
Additionally, we do not require participants to wear a bathing cap or similar contraption, allowing for the capture of natural hair styles to a certain degree.
\subsection{Capture Setup}
Our setup is composed of two Artec Eva scanners~\cite{sivanandan2017assessing}, running the latest software including their upsampling algorithm, which are rotated 360° around a subject's head using a robotic actuator.
Each scan takes only 6 seconds, which is crucial to keep involuntary, non-rigid facial movements to a minimum.
The scanners operate at 16 FPS; the individual frames are aligned over the scanning sequence and fused into a single mesh reconstruction. Each fused scan contains approximately 1.5M vertices and 3.5M triangles.
During a capture session, we ask each participant to perform 20 different expressions, which are adopted from the FACS-coded expressions proposed in FaceWarehouse~\cite{FaceWarehouse}.
Most importantly, we capture a neutral expression with the mouth open, which later serves as canonical pose, as described in Section~\ref{sec:method}.
\subsection{Registration Pipeline}
\label{sec:registration}
Registering all head scans against a common template is a key requirement to effectively train our parametric head model.
First, we start with a rigid alignment into our canonical coordinate system; second, we non-rigidly register all scans to a common template. %
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/pipeline.pdf}
\vspace{-0.5cm}
\caption{Method overview: at the core of our neural parametric head model lies a neural field representation that parameterizes shape and expressions in disentangled latent spaces. Specifically, we propose a local MLP ensemble that is anchored at face keypoints (left). We train this model by leveraging a set of high-fidelity 3D scans from our newly-captured dataset, comprising various expressions per identity (middle). In order to obtain the ground truth deformation samples, we non-rigidly register all scans to a common template (right).}
\label{fig:pipeline}
\end{figure*}
\subsubsection{Rigid Alignment}
We leverage 2D face landmark detectors to obtain a rigid transformation into the canonical coordinate system of the FLAME model~\cite{FLAME}.
To this end, we deploy the MediaPipe~\cite{MediaPipe} face mesh detector and back-project a subset of 48 landmarks corresponding to iBUG68 annotations~\cite{sagonas2013300} to the 3D scan.
Since not all viewing angles of the scanner's trajectories are suited for 2D facial landmark detection, we instead use frontal renderings of the colored meshes, which yields robust detection quality.
Note that the initial landmark detection is the only time we use the scanner's color images.
We then calculate a similarity transform using \cite{umeyama} to transform the detected landmarks to the average face of FLAME. %
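A minimal numpy sketch of this similarity estimation (our own illustration, assuming corresponding 3D landmark sets as input; not the actual pipeline code) could look as follows:
\begin{verbatim}
import numpy as np

def umeyama_similarity(src, dst):
    # Least-squares similarity transform (s, R, t) such that
    # dst ~ s * R @ src + t, following Umeyama (1991).
    # src, dst: (N, 3) arrays of corresponding 3D landmarks.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0     # resolve the reflection ambiguity
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * (R @ mu_s)
    return s, R, t
\end{verbatim}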
\subsubsection{Non-Rigid Registration}
As a prior for the non-rigid registration, we first constrain the deformation to the FLAME parameter space, before optimizing an offset for each vertex.
Additionally, we back-project 2D hair segmentation masks obtained by FaRL~\cite{FaRL} to mask out the respective areas of the scans.
\paragraph{Initialization.}
Given the 20 expression scans $\{ \mathcal{S}_{j}\}_{j=1}^{20}$ of a subject, we jointly estimate identity parameters $\mathbf{z}^{\text{id}} \in \mathbb{R}^{100}$, expression parameters $\{ \mathbf{z}^{\text{ex}}_j\}_{j=1}^{20}$, and jaw poses $\{ \theta_j\}_{j=1}^{20}$ of the FLAME model, as well as a shared scale $s \in \mathbb{R}$ and per-scan rotation and translation corrections $\{R_j\}_{j=1}^{20}$ and $\{t_j\}_{j=1}^{20}$.
Updating the initial similarity transform is crucial to obtaining a more consistent canonical alignment.
Let $\Phi_j$ denote all parameters affecting the $j$-th FLAME model and $V(\Phi_j)$ its vertices.
We jointly optimize for these parameters by minimizing
\begin{equation}
\argmin_{\Phi_1, \dots \Phi_{20}}\! \sum_{j=1}^{20}\! \left[\Vert L_{j}\! -\! \hat{L}_j \Vert_1\!+\! \lambda_d\cdot\sum_{\small{v \in V(\Phi_j)}}\! d(v, \mathcal{S}_j) \! +\! \mathcal{R}(\Phi_j)\right],
\label{eq:flame_fitting}
\end{equation}
where $L_j \in \mathbb{R}^{48 \times 3}$ denotes the back-projected 3D landmarks, $\hat{L}_j$ are the corresponding 3D landmarks from $V(\Phi_j)$, and $d(v, \mathcal{S}_j)$ is the point-to-plane distance from $v$ to its nearest neighbor in scan $\mathcal{S}_j$. We refer to the supplemental for details of the regularization term $\mathcal{R}(\Phi)$.
\paragraph{Fine tuning.}
Once the initial alignment has been obtained, we upsample the mesh resolution by a factor of 16 for the face region, and perform non-rigid registration using ARAP~\cite{ARAP} for each scan individually.
Let $V$ be the upsampled vertices, which we aim to register to the scan $\mathcal{S}$.
We seek vertex-specific offsets $\{\delta_v\}_{v \in V}$ and auxiliary, vertex-specific rotations $\{R_v\}_{v \in V}$ from the ARAP term. Therefore, we solve
\begin{equation}
\argmin_{\substack{\{\delta_{v}\}_{v\in V} \\ \{R_v\}_{v \in V}}} \sum_{v \in V}\! \left[ d(\hat{v}, \mathcal{S})\! +\! \sum_{u \in \mathcal{N}_v}\! \Vert R_v(v\! -\! u)\! -\! (\hat{v}\! -\! \hat{u})\Vert_2^2 \right],\!
\label{eq:nrr}
\end{equation}
using the L-BFGS optimizer, where $\hat{v} = v + \delta_v$ (and analogously $\hat{u} = u + \delta_u$), $\mathcal{N}_v$ denotes all neighboring vertices, and $d(\hat{v}, \mathcal{S})$ is defined as before. See the supplemental for more details.
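The structure of this optimization can be illustrated with the following PyTorch sketch; for brevity, nearest-neighbor correspondences are assumed to be fixed and the per-vertex rotations are frozen to the identity, whereas our pipeline updates both during optimization (all names are ours):
\begin{verbatim}
import torch

def arap_register(V, E, scan_pts, scan_nrm, nn_idx, iters=20):
    # V: (N, 3) template vertices, E: (M, 2) edge list,
    # scan_pts/scan_nrm: scan points and normals,
    # nn_idx: index of the nearest scan point per vertex.
    delta = torch.zeros_like(V, requires_grad=True)
    opt = torch.optim.LBFGS([delta], max_iter=iters)

    def closure():
        opt.zero_grad()
        V_hat = V + delta
        # Point-to-plane distance to the nearest scan points.
        diff = V_hat - scan_pts[nn_idx]
        data = ((diff * scan_nrm[nn_idx]).sum(-1) ** 2).sum()
        # ARAP term with R_v = I: deformed edges should stay
        # close to the rest-pose edges.
        e_rest = V[E[:, 0]] - V[E[:, 1]]
        e_def = V_hat[E[:, 0]] - V_hat[E[:, 1]]
        loss = data + ((e_rest - e_def) ** 2).sum()
        loss.backward()
        return loss

    opt.step(closure)
    return (V + delta).detach()
\end{verbatim}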
\section{Neural Parametric Head Models}
\label{sec:method}
Our neural parametric head model separately represents geometry in a canonical space and facial expression as forward deformations; see Sections \ref{sec:iden_repr} and \ref{sec:expr_repr}, respectively.
\subsection{Identity Representation}
\label{sec:iden_repr}
We represent a person's identity-specific geometry implicitly in its canonical space as an SDF.
Compared to template-mesh-based approaches, this offers the flexibility required to model a complete head with hair.
In accordance with related work on human body modeling, \eg \cite{NPM, SPAM, gDNA}, we choose a canonical expression with an open mouth to avoid topological issues.
While a canonical coordinate system already reduces the dimensionality of the learning problem at hand, we further tailor our neural identity representation to the domain of human heads; see below.
\subsubsection{Local Decomposition}
Instead of globally conditioning the SDF network on a specific identity, we exploit the structure of the human face to impose two important geometric priors.
First, we embrace the fixed composition of human faces by decomposing the SDF network into an ensemble of several smaller local MLP-based networks, which are defined around certain facial anchors, as shown in Fig.~\ref{fig:pipeline}.
Thereby, we reduce the learning problem into smaller, more tractable ones; \eg a network specialized on the corners of an eye can generalize faster, and with more detail, than a global one.
We choose facial anchor points as a trade-off between the relevance of an area and spatial uniformity.
Second, we exploit the symmetry of the face by only learning SDFs on the left side of the face, which are shared with right half after flipping spatial coordinates accordingly.
More specifically, we divide the face into $K=2K_{\text{symm}} + K_{\text{middle}}$ regions, which are centered at facial anchor points $\mathbf{a} \in \mathbb{R}^{K\times 3}$.
We use $\mathcal{M}$ to denote the index set of anchors lying on the symmetry axis, and $\mathcal{S}$ and $\mathcal{S}^*$ for the symmetric regions on the left and right side, respectively, such that for each $k \in \mathcal{S}$ there is a $k^* \in \mathcal{S}^*$ that corresponds to the symmetric anchor point.
In addition to a global latent vector $\mathbf{z}_{\text{glob}} \in \mathbb{R}^{d_{\text{glob}}}$, the $k$-th region is equipped with a local latent vector $\mathbf{z}^{\text{id}}_k \in \mathbb{R}^{d_{\text{loc}}}$.
Together, the $k$-th region is represented by a small MLP
\begin{align}
f_k: \mathbb{R}^{d_{\text{glob}} + d_{\text{loc}} + 3} &\rightarrow \mathbb{R} \\
(x, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_k) &\mapsto \operatorname{MLP}_{\theta_k}([x - \mathbf{a}_k, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_k]),
\end{align}
where $[\cdot]$ denotes the concatenation operator. %
In order to exploit face symmetry, we share the network parameters and mirror the coordinates for each pair $(k, k^*)$ of symmetric regions:
\begin{equation}
f_{k^*}(x, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_{k^*}) := f_{k}(\operatorname{flip}(x - a_{k^*}), \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_{k^*}),
\end{equation}
where $\operatorname{flip}(\cdot)$ represents a flip of the coordinates along the face symmetry axis.
\subsubsection{Global Blending}
In order to facilitate a decomposition that helps generalization, it is crucial that reliable anchor positions $\mathbf{a}$ are available.
To this end, we train a small $\operatorname{MLP}_{\text{pos}}$ that predicts $\mathbf{a}$ from the global latent $\mathbf{z}_{\text{glob}}^{\text{id}}$.
Since each local SDF focuses on a specific semantic region of the face, as defined by the anchors $\mathbf{a}$, we additionally introduce $f_0(x, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_0) = \operatorname{MLP}_0(x, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_0)$, which operates in the global coordinate system, hence covering all SDF values far away from any anchor in $\mathbf{a}$.
To clarify the notation, we set $a_0 := \mathbf{0} \in \mathbb{R}^3$.
Subsequently, we blend all local fields $f_k$ into a global field
\begin{equation}
\mathscr{F}_{\text{id}}(x) = \sum_{k=0}^{K} w_k(x, a_k) f_k(x, \mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_k),
\end{equation}
using Gaussian kernels, similar to \cite{genova2020local}, where
\begin{equation}
w_k^*(x, a_k) =
\begin{cases}
e^{\frac{-\Vert x - a_k\Vert_2}{2\sigma}}, & \text{if } k > 0,\\
c, & \text{if } k = 0,
\end{cases}
\end{equation}
\begin{equation}
\text{and} \quad w_k(x, a_k) = \frac{w_k^*(x, a_k)}{\sum_{k^{\prime}} w_{k^{\prime}}^*(x, a_{k^{\prime}})}.
\end{equation}
We use a fixed isotropic kernel with standard deviation $\sigma$ and a constant response $c$ for $f_0$.
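Putting these pieces together, the blended evaluation can be summarized by the following simplified PyTorch sketch (names and shapes are our assumptions; the weight sharing and coordinate flipping for symmetric regions are omitted for brevity):
\begin{verbatim}
import torch

def blended_sdf(x, anchors, f_loc, f0, z_glob, z_loc, sigma, c):
    # x: (B, 3) query points, anchors: (K, 3) predicted anchors,
    # f_loc: list of K local MLPs, f0: global MLP,
    # z_glob: (1, d_glob), z_loc: (K + 1, d_loc), row 0 for f0.
    B = x.shape[0]
    sdf_k, w_k = [], []
    for k, f in enumerate(f_loc):
        x_loc = x - anchors[k]               # local coordinates
        inp = torch.cat([x_loc, z_glob.expand(B, -1),
                         z_loc[k + 1].expand(B, -1)], dim=-1)
        sdf_k.append(f(inp).squeeze(-1))
        w_k.append(torch.exp(-x_loc.norm(dim=-1) / (2 * sigma)))
    # Global field with a constant blending response c.
    inp0 = torch.cat([x, z_glob.expand(B, -1),
                      z_loc[0].expand(B, -1)], dim=-1)
    sdf_k.append(f0(inp0).squeeze(-1))
    w_k.append(torch.full((B,), c))
    sdf = torch.stack(sdf_k)                 # (K + 1, B)
    w = torch.stack(w_k)
    w = w / w.sum(0, keepdim=True)           # normalized weights
    return (w * sdf).sum(0)
\end{verbatim}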
\subsection{Expression Representation}
\label{sec:expr_repr}
In contrast to our local geometry representation, we model expressions only with a globally conditioned deformation field; \eg a smile will affect the cheeks, the corners of the mouth, and the eye region.
In this context, we define $\mathbf{z}^{\text{ex}} \in \mathbb{R}^{d_{\text{ex}}}$ as a latent expression description.
Since such a deformation field is defined in the ambient Euclidean space, it is crucial to additionally condition the deformation network with an identity feature.
By imposing an information bottleneck on the latent expression description, the deformation network is then forced to learn a disentangled representation of expressions.
More formally, we model deformations using an MLP
\begin{equation}
\mathscr{F}_{\text{ex}}(x, \mathbf{z}^{\text{ex}}, \hat{\mathbf{z}}^{\text{id}}): \mathbb{R}^{3 + d_{\text{ex}} + d_{\text{id-ex}}} \rightarrow \mathbb{R}^{3}.
\end{equation}
Rather than feeding all identity information into $\mathscr{F}_{\text{ex}}$ directly, we first project it to a lower-dimensional representation
\begin{equation}
\hat{\mathbf{z}}^{\text{id}} = W[\mathbf{z}_{\text{glob}}^{\text{id}}, \mathbf{z}^{\text{id}}_0, \dots \mathbf{z}^{\text{id}}_K, \mathbf{a}_1, \dots, \mathbf{a}_K],
\end{equation}
using a single linear layer $W$, where $d_{\text{id-ex}}$ denotes the dimensionality of the interdependence of identity and expression.
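A minimal PyTorch sketch of this design could look as follows (layer sizes and names are our assumptions and do not reflect the exact architecture):
\begin{verbatim}
import torch
import torch.nn as nn

class ExpressionField(nn.Module):
    def __init__(self, d_ex=64, d_id_full=1600, d_id_ex=64, h=512):
        super().__init__()
        # Single linear layer W: identity information bottleneck.
        self.W = nn.Linear(d_id_full, d_id_ex, bias=False)
        self.mlp = nn.Sequential(
            nn.Linear(3 + d_ex + d_id_ex, h), nn.ReLU(),
            nn.Linear(h, h), nn.ReLU(),
            nn.Linear(h, 3))                 # 3D forward deformation

    def forward(self, x, z_ex, z_id_full):
        # x: (B, 3) points, z_ex: (1, d_ex),
        # z_id_full: concatenated identity codes and anchors.
        z_id_hat = self.W(z_id_full)         # projected identity code
        h = torch.cat([x, z_ex.expand(len(x), -1),
                       z_id_hat.expand(len(x), -1)], dim=-1)
        return self.mlp(h)
\end{verbatim}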
\subsection{Training Strategy}
\label{sec:training}
Our training strategy closely follows NPMs~\cite{NPM} and sequentially trains the identity and expression networks in an auto-decoder fashion.
\medskip
\noindent{\bf{Identity Representation}}
For the identity space, we jointly train latent codes $\mathbf{Z}^{\text{id}}_j := \{\mathbf{z}^{\text{id}}_{\text{glob}, j}, \mathbf{z}^{\text{id}}_{0, j}, \dots, \mathbf{z}^{\text{id}}_{K, j} \}$ for each $j$ in the set of training indices $J$ and network parameters $\theta_{\text{pos}}$ and $\theta_0, \dots, \theta_K$, by minimizing
\begin{equation}
\mathcal{L}_{\text{id}} = \sum_{j \in J} \mathcal{L}_{\text{IGR}} + \lambda_{a}\Vert \hat{\mathbf{a}}_j - \mathbf{a}_j\Vert_2^2 + \lambda_{\text{sy}}\mathcal{L}_{\text{sy}} + \lambda_{\text{reg}}^{\text{id}}\Vert \mathbf{Z}^{\text{id}}_j \Vert_2^2,
\label{eq:loss_id}
\end{equation}
where $\mathcal{L}_{\text{IGR}}$ is the loss introduced in \cite{IGR}, which enforces SDF values to be zero on the surface, ensures consistency between surface normals and SDF gradients, and contains an Eikonal regularization term, in a similar spirit to \cite{SIREN}.
For training, we directly sample points and surface normals from our ground truth scans.
Additionally, we supervise the anchor predictions $\hat{\mathbf{a}}_j$ using the corresponding vertices from our registrations.
The last two terms serve regularization purposes, where
\begin{equation}
\mathcal{L}_{\text{sy}} = \sum_{k \in \mathcal{S}}\Vert \mathbf{z}^{\text{id}}_k - \mathbf{z}^{\text{id}}_{k^*} \Vert_2^2
\end{equation}
enforces the local latent description of symmetric regions to be close, and the final term encourages a well-behaved distribution of both global and local latent descriptions centered around zero.
\medskip
\noindent{\bf{Expression Representation}}
Once the identity representation is learned, we optimize for network parameters $\theta_{\text{ex}}$, $W$, and latent expression codes $\{\mathbf{z}^{\text{ex}}_{j, l}\}_{j \in J, l \in L}$, where $j$ indexes identity and $l$ indexes expressions.
The deformation loss
\begin{equation}
\mathcal{L}_{\text{ex}}\! =\! \sum_{\substack{j \in J,\, l \in L \\ x \in X_{j, l} }}\! \Vert \mathscr{F}_{\text{ex}}(x, \mathbf{z}^{\text{ex}}_{j, l}, \hat{\mathbf{z}}_{j}^{\text{id}}) - \delta_{j, l}(x)\Vert_2^2\! +\! \lambda_{\text{reg}}^{\text{ex}}\Vert \mathbf{z}^{\text{ex}}_{j, l} \Vert_2^2
\end{equation}
directly supervises the deformation field using samples $x~\in~X_{j, l}$ and deformation targets $\delta_{j, l}(x)$, which have been precomputed from the registration. See the supplemental for more details. %
\section{Results}
\subsection{Single-View Depth Map Reconstruction}
In this experiment, we evaluate how well our method generalizes from our training dataset of 87 identities to new identities and their unique expressions.
Our test dataset consists of 6 female and 12 male heads.
We fit our model to frontal single view depth maps, which are generated by rendering the unseen validation meshes. For ablations with respect to the number of points and noise level, we refer to our supplemental.
In our evaluation, we isolate the reconstruction of identity and expression. The respective experiments are described in the following.
We evaluate against the Basel Face Model (BFM) and FLAME as representatives of PCA-based approaches, and against NPMs as a representative of neural-field-based morphable models.
For the former two, we additionally provide the 68 facial landmarks that we obtained in our registration process, as described in Section~\ref{sec:registration}.
\medskip
\noindent{\bf Metrics.}
To evaluate the quality of the reconstructions, we report the $L_1$-Chamfer distance, normal consistency (N.C.), and the F-Score at a threshold of 1.5\,mm.
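The Chamfer distance and F-Score can be computed as in the following sketch (our own illustration; point sets are assumed to be sampled from the reconstructed and ground-truth surfaces, with coordinates in meters so that the 1.5\,mm threshold becomes 1.5e-3). Normal consistency is computed analogously from the normals of nearest-neighbor pairs.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def l1_chamfer_fscore(P, Q, tau=1.5e-3):
    # P, Q: (N, 3) points sampled from prediction and ground truth.
    d_pq = cKDTree(Q).query(P)[0]   # nearest-neighbor distances
    d_qp = cKDTree(P).query(Q)[0]
    chamfer = 0.5 * (d_pq.mean() + d_qp.mean())
    precision = (d_pq < tau).mean()
    recall = (d_qp < tau).mean()
    fscore = 2 * precision * recall / max(precision + recall, 1e-9)
    return chamfer, fscore
\end{verbatim}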
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/results_reconstruction_identity_frame_fixed.pdf}
\begin{tabularx}{\textwidth}{P{0.14\textwidth}p{0.12\textwidth}P{0.145\textwidth}P{0.135\textwidth}P{0.145\textwidth}P{0.155\textwidth}}
Depth Observation & BFM~\cite{gerig2018morphable} & FLAME~\cite{FLAME} & NPM~\cite{NPM} & Ours & GT Scan\\
\end{tabularx}
\vspace{-0.2cm}
\caption{Model fitting: at inference time, we fit our model to sparse, partial input point clouds from single depth maps. We compare our method to widely-used state-of-the-art parametric face models, including the Basel face model (BMF)~\cite{gerig2018morphable}, the FLAME model~\cite{FLAME}, and neural parametric models (NPM)~\cite{NPM}. Note that our parametric model has significantly more surface detail and covers the entire human head, including the hair region.}
\label{fig:results_comparison_identity}
\end{figure*}
\medskip
\noindent{\bf {Identity Reconstruction.}}
To evaluate the quality of our identity space, we fit against a single neutral expression for each identity, which we assume to be aligned in our canonical coordinate system.
For the PCA-based baselines, we found an ICP loss in combination with a landmark reconstruction term to work best, while for our method, we minimize
\begin{equation}
\sum_{x \in X} | \mathscr{F}_{\text{id}}(x) | + \lambda_{\text{glob}}^{\text{fit}}\Vert \mathbf{z}^{\text{id}}_{\text{glob}} \Vert_2^2\!+\! \lambda_{\text{loc}}^{\text{fit}}\sum_{k=1}^K\Vert \mathbf{z}^{\text{id}}_{k} \Vert_2^2 + \lambda_{\text{sy}}^{\text{fit}}\mathcal{L}_{\text{sy}},
\label{eq:fitting_id}
\end{equation}
where $X$ is the observed point cloud. For NPMs, we simply omit the symmetry regularizer $\mathcal{L}_{\text{sy}}$ and the local regularizer.
Figure~\ref{fig:results_comparison_identity} and Table~\ref{tab:results_identity} present qualitative and quantitative results, respectively.
We observe that both neural field based methods achieve much more accurate reconstructions than the PCA-based models.
We further argue that our local conditioning allows us to model details better and capture statistically unlikely elements more reliably, \eg see the beard of the second identity in Figure \ref{fig:results_comparison_identity}.
\input{tables/tab_identity}
\medskip
\noindent{\bf {Expression Reconstruction.}}
\begin{figure*}
\centering
\vspace{-0.3cm}
\includegraphics[width=\textwidth]{figures/results_reconstruction_expression_3rows.pdf}
\begin{tabularx}{\textwidth}{P{0.14\textwidth}p{0.12\textwidth}P{0.145\textwidth}P{0.135\textwidth}P{0.145\textwidth}P{0.155\textwidth}}
Depth Observation & BFM~\cite{gerig2018morphable} & FLAME~\cite{FLAME} & NPM~\cite{NPM} & Ours & GT Scan\\
\end{tabularx}
\vspace{-0.2cm}
\caption{Comparison on fitting expressions to sparse input point clouds: from a sparse set of depth observations from a frontal view (left), we compare against the Basel face model (BFM)~\cite{gerig2018morphable}, the FLAME model~\cite{FLAME}, neural parametric models (NPM)~\cite{NPM}, and our method against the respective ground truth scans. Note that our model is able to reconstruct significantly more surface detail than the baselines.}
\label{fig:results_comparison_expression}
\end{figure*}
After fitting the identity, we optimize for all expression parameters, assuming the identity to be known. This time, the ICP loss was used for all methods; for more details, we refer to the supplementary material. Figure~\ref{fig:results_comparison_expression} and Table~\ref{tab:results_expression} show qualitative and quantitative comparisons with our baselines, respectively.
\input{tables/tab_expression}
\subsection{Real-World Tracking}
\label{sec:tracking}
Additionally, we evaluate our model in a real-world face tracking scenario.
For this purpose, we fit our model against a depth video captured with a Kinect Azure, a commodity depth sensor.
Figure \ref{fig:results_tracking} shows our results for a single frame and a comparison to the FLAME model. For details on our tracking approach and the full video, we refer to the supplemental.
\begin{figure}[htb!]
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{figures/res_tracking.png}
\begin{tabularx}{\columnwidth}{P{0.21\columnwidth}P{0.2\columnwidth}P{0.2\columnwidth}P{0.18\columnwidth}}
Depth Map & FLAME~\cite{FLAME} & Ours & RGB\\
\end{tabularx}
\vspace{-0.3cm}
\caption{Real-world tracking. For a single frame we show from left to right: the depth map obtained from a commodity depth sensor, FLAME and our reconstructions, and an image as reference.}
\label{fig:results_tracking}
\vspace{-0.3cm}
\end{figure}
\subsection{Ablations}
We ablate two main contributions of the proposed identity representation. First, we analyze the effect of the number of regions $K$ in our local ensemble of SDFs by comparing against NPM~\cite{NPM}, which effectively is an ensemble of size 1, and against versions with 12 and 26 regions and an adjusted number of latent dimensions.
Note that a much smaller number of regions $K$ would likely require further architectural changes, e.g., deeper MLPs, since the regions that have to be covered are larger.
Additionally, we confirm the benefit of sharing weights for symmetric keypoints by running experiments with and without symmetry constraints.
Table~\ref{tab:results_ablation} shows a quantitative evaluation of these two ablations supporting our design choices.
\input{tables/tab_ablations}
\subsection{Limitations}
In our experiments, we show that NPHM can reconstruct high-quality human heads; however, at the same time, we believe that there are still several limitations and opportunities for future work.
For instance, we focus solely on the geometry of heads while omitting any information about appearance. This makes our model ill-suited for fitting to RGB images using dense photometric terms.
Here, an interesting future avenue would be to explore learning appearance, anchored on top of the geometric base model.
In fact, as part of our dataset we also provide the RGB frames captured during the 3D scanning process, which should facilitate learning such a texture model.
Another limitation is that currently we do not capture open hair, which limits general diversity; however, compared to other existing face models such as 3D morphable models, we significantly expand the application domain by covering the entirety of the human head.
In the future, we still would like to cover a broader range of hairstyles.
\section{Conclusion}
We have introduced neural parametric head models, a neural representation which disentangles identity and expressions of human heads, by representing geometry in canonical space and modelling expressions as forward deformations.
For our identity representation, we have proposed and validated a local representation that is tailored towards human heads.
To train our model, we introduce a new dataset of over 2200 high-fidelity 3D scans.
Once trained, our model can be fitted to sparse input point clouds, for instance, captured by a commodity range sensor.
Compared to existing methods, such as widely-used PCA-based techniques, our model represents significantly more detail while being able to regularize out noise of the underlying point cloud inputs.
Overall, we believe that our method is an important step towards high-fidelity face capture and our newly-introduced dataset opens up opportunities to further explore learning priors for neural face models.
\subsubsection*{Acknowledgements}
\footnotesize
This work was supported by the ERC Starting Grant Scan2CAD (804724), the German Research Foundation (DFG) Grant ``Making Machine Learning on Static and Dynamic 3D Data Practical'', and the German Research Foundation (DFG) Research Unit ``Learning and Simulation in Visual Computing''. We would like to thank Maximilian Knörl and Tim Walter for the help with scanning, and Angela Dai for the video voice over.
\normalsize
{\small
\bibliographystyle{ieee_fullname}
\section{Additional Ablations}
\label{sec:ablations_supp}
The experiments in the main paper were restricted to single view depth maps with 5000 points.
Here, we present a thorough evaluation with respect to the number of input points and with respect to artificial Gaussian noise.
\paragraph{Number of Points:}
Figure \ref{fig:plot_num_points} shows how the number of observed points affects the reconstructions quantitatively. We evaluate on 250, 500, 1000, 2500, 5000, and 10000 points, respectively. Figure~\ref{fig:qual_num_points} illustrates the effect qualitatively.
\paragraph{Noise:}
Similarly, we ablate against additive Gaussian noise with standard deviations of 0.0mm, 0.3mm, 0.75mm and 1.5mm. Quantitative and qualitative results are presented in Figures \ref{fig:plot_noise} and \ref{fig:qual_noise}, respectively.
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figures/supplemental/plot_ablation_number_of_points.pdf}
\caption{Ablation with respect to the number of points in the input point cloud.}
\label{fig:plot_num_points}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figures/supplemental/plot_ablation_noise.pdf}
\caption{Robustness of our method to noise in the input point cloud.}
\label{fig:plot_noise}
\end{figure}
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figures/supplemental/supp_ablation_comparison_noise.pdf}
\begin{tabularx}{\linewidth}{P{0.14\linewidth}p{0.15\linewidth}P{0.15\linewidth}P{0.12\linewidth}P{0.18\linewidth}}
Noise Level& 0.75 mm & 0.30 mm & 0 mm & GT Scan\\
\end{tabularx}
\caption{Qualitative comparison of NPMs~\cite{NPM} and our method with respect to noise in the input point cloud. We perturb the points by applying random Gaussian noise with different standard deviations. }
\label{fig:qual_noise}
\end{figure}
\begin{figure}[tbh!]
\centering
\includegraphics[width=\linewidth]{figures/supplemental/supp_ablation_comparison_n_points.pdf}
\begin{tabularx}{\linewidth}{P{0.16\linewidth}p{0.12\linewidth}P{0.15\linewidth}P{0.12\linewidth}P{0.18\linewidth}}
\#points & 500 & 1000 & 5000 & GT Scan\\
\end{tabularx}
\caption{Qualitative comparison of NPMs~\cite{NPM} and our method with respect to the number of points in the input point cloud.}
\label{fig:qual_num_points}
\end{figure}
\subsection{Deformation Consistency}
Furthermore, we illustrate the behaviour of our expression network $\mathscr{F}_{\text{ex}}$ in Figure~\ref{fig:def_con} by assigning a distinctive UV map as colors to each vertex. More specifically, we assign vertex colors by projecting a UV map parallel to the ``depth'' dimension. We then fix the vertex colors and deform the mesh using $\mathscr{F}_{\text{ex}}$. The results show that semantic consistency is preserved well, which is a direct consequence of our training strategy. i3DMM~\cite{i3DMM} and ImFace~\cite{ImFace} exhibit less consistent correspondences, since they model backward deformations and do not rely on direct supervision of the deformations.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=\linewidth]{figures/supplemental/uv_map.pdf}
\caption{Deformation Consistency: We show surface correspondences between neutral and posed meshes from our test set.}
\label{fig:def_con}
\end{figure*}
\section{Dataset}
\label{sec:dataset_supp}
High quality data is of fundamental importance for every learning algorithm. We therefore decided to capture a high quality dataset of 3D head scans. In the following, we provide details about our custom capture set-up and the dataset.
For more samples of our dataset, we refer to Figure~\ref{fig:dataset_supp}.
\subsection{Capture Set-Up}
Figure~\ref{fig:capture_set_up} shows our custom capture set-up, which is built inside an aluminium cube with an edge length of two meters. We use a robotic actuator\footnote{We use an actuator of the TUAKA series of Sumitomo Drive Technologies: \url{https://us.sumitomodrive.com/en-us/actuators}} to rotate an inverted U-shape around a participant's head.
We place two Artec Eva scanners opposite each other, with complementary viewing angles, on the ends of the inverted U-shape. The height and angles of the scanners are adjusted to obtain optimal coverage while avoiding extremely steep angles, which decrease scanning accuracy.
\subsection{Details}
During the six seconds of a 360° rotation, each scanner produces roughly 95 frames. Each frame captures range measurements obtained by analyzing a structured light projection using a stereo camera pair.
Additionally, a third camera captures RGB images every fifth frame, as depicted in Figure \ref{fig:capture_set_up}. Note that we currently do not use the captured RGB input, except for face landmark detection.
We process the individual 3D measurements of each frame using the software provided by Artec. First, we align the individual frames of the upper and lower scanner using a global registration algorithm. The individual frames are then fused into a single 3D mesh. Subsequently, we apply a hole-filling algorithm and remove disconnected parts, for simplicity. However, the unprocessed fused meshes will additionally be released.
The RGB data, including camera parameters and poses, will be released alongside the captured 3D scans. We believe that the raw images can be of value to the community and future research projects, e.g., for creating textured 3DMMs or (multi-)image reconstruction tasks.
\subsection{Expressions}
As mentioned in the main paper, our 20 facial expressions are adapted from FaceWarehouse~\cite{FaceWarehouse}. We illustrate the different expressions that we capture in Figure~\ref{fig:dataset_supp_expressions}. As mentioned before, the neutral, open-mouthed expression is of special importance, since it serves as our canonical expression.
\subsection{GDPR}
Due to the sensitivity of the captured data, all participants in our dataset signed an agreement form compliant with GDPR. Please note that GDPR compliance includes the right for every participant to request the timely deletion of their data. We will enforce these rights in the distribution of our dataset.
\section{Implementation Details}
\label{sec:details_supp}
We implement our approach -- including registration, training, and inference -- in PyTorch and, unless otherwise mentioned, run all heavy computations on the GPU, for which we use an Nvidia RTX 3090.
\subsection{Non-Rigid Registration}
\label{sec:non_rigid_registration}
In Equations \ref{eq:flame_fitting} and \ref{eq:nrr} of the main paper, we use the point-to-plane distance $d(v, \mathcal{S})$ from a point $v \in \mathbb{R}^3$ to a surface $\mathcal{S} \subset \mathbb{R}^3$. To make our energy terms more robust, we filter this distance based on a distance threshold $\delta_d$ and a normal threshold $\delta_n$, such that
\begin{equation}
d^*(v, \mathcal{S}) =
\begin{cases}
0, \qquad \quad \text{if } d(v, \mathcal{S}) > \delta_d,\\
0, \qquad \quad \text{if } \langle n(v), n(s) \rangle < \delta_n, \\
d(v, \mathcal{S}), \quad \text{otherwise},
\end{cases}
\end{equation}
where
\begin{equation}
d(v, \mathcal{S}) = \underset{s\in \mathcal{S}}{\text{min}} | \langle v - s, n(s)\rangle |
\end{equation}
is the unfiltered point-to-plane distance, and $n(v)$ and $n(s)$ denote the vertex normal of $v$ in the template mesh and the normal of its nearest neighbor in the target $\mathcal{S}$, respectively.
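A sketch of this filtered distance, approximating the minimizing $s \in \mathcal{S}$ by the nearest Euclidean neighbor on a sampled surface, could look as follows (function and variable names are ours, not the actual implementation):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

# Sketch; surf_pts/surf_nrm are (M, 3) samples of S with unit normals.
def filtered_p2p(v, n_v, surf_pts, surf_nrm, delta_d, delta_n):
    _, j = cKDTree(surf_pts).query(v)              # nearest surface sample
    d = abs(np.dot(v - surf_pts[j], surf_nrm[j]))  # point-to-plane residual
    if d > delta_d or np.dot(n_v, surf_nrm[j]) < delta_n:
        return 0.0                                 # correspondence filtered
    return d
\end{verbatim}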
\paragraph{FLAME Fitting}
We regularize our optimization in FLAME parameter space using
\begin{align}
\mathcal{R}(\Phi_j) = \lambda_{\text{id}}\frac{\Vert \mathbf{z}^{\text{id}}\Vert_2^2}{20} &+
\lambda_{\text{ex}}\Vert \mathbf{z}^{\text{ex}}_j\Vert_2^2 + \lambda_{\text{jaw}} \Vert \theta_j\Vert_2^2 \nonumber \\
&+
\lambda_{\text{rigid}} (\Vert R_j\Vert_2^2 + \Vert t_j\Vert_2^2).
\end{align}
We use $\lambda_{\text{id}}= 1/5000$ and $\lambda_{\text{ex}} = 1/3000$ to regularize the identity and expression parameters, respectively. For the jaw angle and the rigid parameters, we regularize with $\lambda_{\text{jaw}} = 1/10$ and $\lambda_{\text{rigid}} = 1/10$. Since the point-to-plane distance initially gives an unreliable signal despite our filtering, we down-weight it with $\lambda_d=1/15$ for the first 300 iterations. For the remaining iterations (2000 in total), we set $\lambda_d=1$. We solve Equation~\ref{eq:flame_fitting} using the Adam~\cite{adam} optimizer with a learning rate of $4e^{-3}$, which is decayed by a factor of 5 for the final 500 iterations.
\paragraph{Finetuning}
We exponentially decay the weight $\lambda_{\text{ARAP}}$ of the ARAP~\cite{ARAP} term with a factor of $0.99$. We start with $\lambda_{\text{ARAP}} = 10.0$, but do not decay below $\lambda_{\text{ARAP}} = 0.1$. On average our unoptimized implementation converges after 400-500 iterations of the L-BFGS optimizer and takes roughly 4 minutes on a single GPU.
\subsection{Data Preparation and Training}
\label{sec:train_data}
\paragraph{Identity Training}
To train $\mathscr{F}_{\text{id}}$, we use the loss
\begin{align}
\label{eq:IGR_loss}
\mathcal{L}_{\text{IGR}}\! =\! &\sum_{x \in \delta X} \lambda_s | \mathscr{F}_{\text{id}}(x)|\! +\! \lambda_n\left(1\!-\! \langle \nabla \mathscr{F}_{\text{id}}(x), n(x)\rangle\right) \\
+ &\sum_{x \in X \cup \delta X} \lambda_{\text{eik}}\left(\Vert \nabla \mathscr{F}_{\text{id}}(x)\Vert_2\! -\! 1\right)^2 + \lambda_{0}\sum_{x \in X}\exp(-\alpha |\mathscr{F}_{\text{id}}(x)|) \nonumber
\end{align}
introduced in \cite{IGR} and \cite{SIREN}, where we omit the conditioning of $\mathscr{F}_{\text{id}}$ for simplicity. Here $\delta X$ denotes samples on the surface and $X$ denotes samples in space. We choose $\lambda_s = 2$, $\lambda_n = 0.3$, $\lambda_{\text{eik}}= 0.1$ and $\lambda_0=0.01$. For the additional hyperparameters mentioned in Equation~\ref{eq:loss_id} we set $\lambda_{\text{reg}}^{\text{id}}=0.005$, $\lambda_a = 7.5$ and $\lambda_{\text{sy}} = 0.005$.
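A compact PyTorch sketch of this loss is given below; gradients are obtained via autograd, the conditioning on latent codes is omitted as above, and the value of $\alpha$ as well as the exact reductions are assumptions on our part.
\begin{verbatim}
import torch

def _grad(F, x):
    x = x.detach().requires_grad_(True)
    y = F(x)
    g, = torch.autograd.grad(y.sum(), x, create_graph=True)
    return y, g

# Sketch; alpha is an assumed value, latent conditioning is omitted.
def igr_loss(F_id, x_surf, n_surf, x_space, lam_s=2.0, lam_n=0.3,
             lam_eik=0.1, lam_0=0.01, alpha=100.0):
    f_s, g_s = _grad(F_id, x_surf)
    f_o, g_o = _grad(F_id, x_space)
    sdf = lam_s * f_s.abs().mean()                       # zero on surface
    nrm = lam_n * (1.0 - (g_s * n_surf).sum(-1)).mean()  # normal alignment
    eik = lam_eik * ((torch.cat([g_s, g_o]).norm(dim=-1) - 1.0) ** 2).mean()
    off = lam_0 * torch.exp(-alpha * f_o.abs()).mean()   # off-surface term
    return sdf + nrm + eik + off
\end{verbatim}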
Furthermore, we train for $15,000$ epochs with a learning rate of $0.0005$ and $0.001$ for the network parameters and latent codes, respectively. Both learning rates are decayed by a factor of $0.5$ every $3,000$ epochs. We use a batch size of $16$ and $|\delta X| = 500$ points sampled on the surface. Samples $X$ are obtained by adding Gaussian noise with $\sigma = 0.01$ to surface points and adding some points sampled uniformly in a bounding box. Additionally, we use gradient clipping with a cut-off value of $0.1$ and weight decay with a factor of $0.01$.
Since this loss only requires samples on the surface directly, we precompute $2,000,000$ points sampled uniformly on the surface of the 3D scans, after removing the lower part of the scan, which we determine using a plane spanned by three vertices on the neck of our registered template mesh. Since our focus lies on the front part of the face, 80\% of these points are sampled on the front and 20\% on the back and neck. The frontal area is determined by a region on our registered meshes, which covers the face, ears, and forehead. We additionally sample surface normals.
Training the identity network takes about 12 hours until convergence on a single GPU.
\paragraph{Expression Training}
For the training of $\mathscr{F}_{\text{ex}}$, we follow NPMs~\cite{NPM} and precompute samples of the deformation field, which can be used for direct supervision of $\mathscr{F}_{\text{ex}}$.
More specifically, let $\mathcal{M}$ and $\mathcal{M}^{\prime}$ be a neutral and an expression scan. For a point $x \in \mathcal{M}$, we determine the corresponding point $x^{\prime} \in \mathcal{M}^{\prime}$ using barycentric coordinates and construct samples of the deformation $\delta(x) = x^{\prime} - x$. While, strictly speaking, the deformation is only defined for points on the surface, we compute field values close to the surface by offsetting along the normal direction, \ie $\delta(x + \alpha n(x)) = x^{\prime} + \alpha n(x^{\prime}) - (x + \alpha n(x))$, where we sample the scalar offset $\alpha \sim \mathcal{N}(0, \tau_{i}^2)$ twice, with standard deviations $\tau_1 = 0.02$ and $\tau_2 = 0.004$. Overall, we sample $2,000,000$ points per expression.
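This sample construction can be sketched as follows, given corresponding surface points and normals (array names are illustrative):
\begin{verbatim}
import numpy as np

# Sketch; x, x_prime are corresponding points on neutral/expression scans,
# n, n_prime their unit normals, tau the offset standard deviation.
def deformation_samples(x, x_prime, n, n_prime, tau):
    alpha = np.random.randn(len(x), 1) * tau   # scalar normal offsets
    q = x + alpha * n                          # query points near surface
    target = (x_prime + alpha * n_prime) - q   # deformation field values
    return q, target
\end{verbatim}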
For the expression training we use $\lambda_{\text{reg}}^{\text{ex}} = 5e^{-5}$ and a learning rate of $5e^{-4}$ and $1e^{-3}$ for the network and latent codes, respectively. We train for $2,000$ epochs with a learning rate decay of $0.5$ every $600$ epochs, gradient clipping at $0.025$ and weight decay strength $5e^{-4}$. We use $1000$ samples to compute $\mathcal{L}_{\text{ex}}$ and a batch size of 32.
Training the expression network until convergence takes about 8 hours on a single GPU.
\subsection{Architectural Details}
\subsubsection{NPMs}
In the main paper, we compare our proposed method against our implementation of NPMs~\cite{NPM}.
NPMs replace our identity network $\mathscr{F}_{\text{id}}$ with the original architecture of DeepSDF~\cite{park2019deepsdf}: 8 layers, a hidden dimensionality of 1024, and $d_{\text{id}}=512$ dimensions for the latent vector.
The expression latent dimension is $d_{\text{ex}} = 200$ and the MLP has 6 hidden layers with 512 hidden units.
\subsubsection{NPHMs}
\label{sec:nphm}
Our default choice for the number of anchor points is $K=39$, of which $K_{\text{symm}}=16$ are symmetric. This leads to $7$ anchor points lying directly on the symmetry axis, and hence parameters of $16+7=23$ local DeepSDFs have to be optimized. Figure~\ref{fig:anchor_layout} depicts the arrangement of the anchor points.
The identity latent space is composed of the shared global part $\mathbf{z}_{\text{glob}}^{\text{id}} \in \mathbb{R}^{d_{\text{glob}}}$ with $d_{\text{glob}}=64$ and local latent vectors $\mathbf{z}^{\text{id}}_{k} \in \mathbb{R}^{d_{\text{loc}}}$ with $d_{\text{loc}}=32$.
Our local MLPs have 4 hidden layers with 200 hidden units each and follow the DeepSDF~\cite{park2019deepsdf} architecture.
Note that the total number of latent identity dimensions is $d_{\text{id}} = (K+1)\cdot d_{\text{loc}} + d_{\text{glob}} = 1344$.
Furthermore, we use $\sigma=0.1$ and $c=e^{-0.2/\sigma^2}$ to blend the ensemble of local MLPs. Figure \ref{fig:anchor_layout} illustrates the resulting influence that the individual local MLPs have on the final prediction.
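Since the blending scheme is only summarized here, the sketch below shows one plausible reading: a normalized Gaussian kernel around each predicted anchor, with the constant $c$ acting as the weight of a global/default component. The role of $c$ and the exact kernel are assumptions, not an exact specification of our implementation.
\begin{verbatim}
import numpy as np

# Hedged sketch of Gaussian blending weights; the role of c is assumed.
def blend_weights(x, anchors, sigma=0.1):
    c = np.exp(-0.2 / sigma ** 2)
    d2 = ((x[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # (N, K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w = np.concatenate([w, np.full((len(x), 1), c)], axis=1)   # (N, K+1)
    return w / w.sum(axis=1, keepdims=True)
\end{verbatim}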
\paragraph{Anchor Points}
In the main paper, we ablated the number of face anchor points. Figure~\ref{fig:anchor_layout} shows a comparison of the different anchor layouts that we ablated. For a lower number of anchors, we increase $d_{\text{loc}}$ such that $d_{\text{id}}$ is roughly preserved.
For the ablation of our symmetry prior, we keep the exact same anchor layout; however, we do not share network weights for symmetric anchors and do not mirror coordinates.
\subsection{Evaluation}
\label{sec:eval}
Since we quantitatively compare models that represent vastly different regions of the human head, we restrict the calculations of our metrics to the face region.
This also aligns with the fact that each model only observes a single, frontal depth map, i.e., other parts of the head can only be estimated roughly.
To this end, we determine the facial area by all points which are closer than 1cm to a region defined on our registered template mesh. Within this region we sample 1,000,000 points with their corresponding normals on the ground truth as well as on each reconstruction. Using these sampled points and normals, we compute all of our metrics. %
\section{Fitting}
\label{sec:fitting_supp}
\subsection{Mesh-Based Models} For both BFM~\cite{gerig2018morphable} and FLAME~\cite{FLAME}, we optimize Equation \ref{eq:flame_fitting}, \ie we resort to facial landmarks and jointly optimize identity and expression parameters over multiple expressions.
\subsection{Field-Based Model, Identity Fitting}
For our field-based methods, we optimize Equation~\ref{eq:fitting_id} directly for identity parameters using the Adam optimizer for $400$ iterations. The optimization procedure starts with a learning rate of $0.01$, which is decayed by a factor of $5$ after iterations $150$, $300$, and $350$.
\paragraph{NPHM}
For our model, we use $\lambda_{\text{glob}}^{\text{fit}} = 0.004$ and $\lambda_{\text{loc}}^{\text{fit}} = 0.01$ to regularize the global and local identity components, respectively. Additionally, we encourage symmetry with $\lambda_{\text{sy}}^{\text{fit}} = 1.0$ for the first half of the iterations and then set $\lambda_{\text{sy}}^{\text{fit}} = 0.0$.
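The resulting latent-only optimization can be sketched as follows; the signature of $\mathscr{F}_{\text{id}}$ and the code shapes are assumptions, while the schedule follows the numbers above.
\begin{verbatim}
import torch

# Sketch; z_glob, z_loc are leaf tensors with requires_grad=True.
def fit_identity(F_id, points, z_glob, z_loc, sym_pairs,
                 lam_glob=0.004, lam_loc=0.01, iters=400):
    opt = torch.optim.Adam([z_glob, z_loc], lr=0.01)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[150, 300, 350], gamma=0.2)  # decay by factor 5
    for it in range(iters):
        opt.zero_grad()
        loss = F_id(points, z_glob, z_loc).abs().sum()
        loss = loss + lam_glob * z_glob.pow(2).sum()
        loss = loss + lam_loc * z_loc.pow(2).sum()
        if it < iters // 2:  # symmetry prior only during the first half
            loss = loss + sum((z_loc[k] - z_loc[ks]).pow(2).sum()
                              for k, ks in sym_pairs)
        loss.backward()
        opt.step()
        sched.step()
    return z_glob, z_loc
\end{verbatim}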
\paragraph{NPM}
We use the exact same hyperparameters as for our model. However, the local regularization and symmetry prior have no effect.
\subsection{Field-Based Model, Expression Fitting}
After fitting the identity code $\mathbf{z}^{\text{id}}$ of a person from a neutral scan, we optimize an ICP-style loss for the expression parameters
\begin{equation}
\argmin_{\mathbf{z}^{\text{ex}}} \sum_{x \in \mathcal{S}} \left|\mathscr{F}_{\text{ex}}(x, \mathbf{z}^{\text{ex}}, \mathbf{z}^{\text{id}})\right| + \lambda^{\text{fit}}_{\text{ex}} \Vert \mathbf{z}^{\text{ex}} \Vert_2^2,
\end{equation}
where $\mathcal{S}$ are $100,000$ points sampled uniformly on the surface $\{ x \in \mathbb{R}^3 : \mathscr{F}_{\text{id}}(x, \mathbf{z}^{\text{id}}) = 0 \}$, which we extract using marching cubes. Here we use $\lambda^{\text{fit}}_{\text{ex}} = 0.005$ and an initial learning rate of $0.001$ for the Adam optimizer. We optimize for 400 epochs and decay the learning rate by a factor of $10$ after epochs $200$ and $300$.
\subsection{Tracking}
For our tracking results on a commodity depth sensor, we include a total variation prior along the temporal axis over estimated head pose and expression parameters. More specifically, we add
\begin{equation}
\mathcal{L}_{TV}(\phi) = \sum_{t=1}^{T-1} \Vert \phi(t+1) - \phi(t) \Vert
\end{equation}
to the respective optimization problems, where $\phi(t)$ denotes any of the time dependent optimization parameters, \ie expression and pose.
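In code, this prior amounts to penalizing first differences of the stacked per-frame parameters, e.g.:
\begin{verbatim}
import torch

def tv_prior(phi):
    # phi: (T, D) stacked per-frame parameters (pose or expression codes)
    return (phi[1:] - phi[:-1]).norm(dim=-1).sum()
\end{verbatim}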
Otherwise, we follow the same strategy as before, \ie we optimize for identity on a single frame and then optimize pose and expression parameters for each time step. To align the coordinate system of the back-projected depth map with our canonical coordinate system, we calculate the similarity transform using \cite{umeyama} from detected landmarks to the landmarks of the average FLAME face.
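For reference, the closed-form similarity transform of \cite{umeyama} can be sketched as follows (NumPy; here src would be the detected landmarks and dst the FLAME landmarks):
\begin{verbatim}
import numpy as np

# Least-squares similarity transform: dst_i ~ s * R @ src_i + t.
def umeyama(src, dst):
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # rule out reflections
    R = U @ S @ Vt
    var_src = ((src - mu_s) ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
\end{verbatim}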
To stabilize the optimization, we also include landmarks at the mouth and eye corners, as well as on the top and bottom of the lips, which we denote as $\mathbf{a}_t \in \mathbb{R}^{8 \times 3}$ for each time step.
For the identity fitting on a chosen frame $t_{\text{can}}$, the landmarks serve as additional supervision for $\mathbf{z}^{\text{id}}_{\text{glob}}$ including the term
$$
\Vert \text{MLP}_{\text{pos}}(\mathbf{z}^{\text{id}}_{\text{glob}}) - \mathbf{a}_{t_{\text{can}}}\Vert_1.
$$ In this stage, we additionally estimate normals using a Sobel filter and use them as an additional supervision signal, as in Equation~\ref{eq:IGR_loss}.
During expression fitting, we instead use
\begin{equation}
\sum_{t=1}^T \Vert \mathscr{F}_{\text{ex}}(\text{MLP}_{\text{pos}}(\mathbf{z}^{\text{id}}_{\text{glob}}), \mathbf{z}^{\text{ex}}_t, \mathbf{z}^{\text{id}}_{\text{glob}}) - \mathbf{a}_t \Vert_1.
\end{equation}
Let $F_{n,d}$ denote the vector space of real homogeneous forms
$p(x_1,\dots,x_n)$ of degree $d$.
A blender is a closed convex cone in $F_{n,d}$
which is also closed under linear changes of variable. Blenders were
introduced in \cite{Re1} to help describe several different familiar
cones of polynomials, but that memoir was mainly concerned with the
cones of psd and sos forms and their duals, and the discussion of
blenders {\it per se} was scattered there (pp.\! 36-50, 119-120,
140-142). This paper is devoted to a general discussion of blenders
and their properties, as well as considering the extremal elements of
some particular blenders not discussed in \cite{Re1}.
Non-trivial blenders will only occur when $d = 2r$ is an even integer.
Choi and Lam \cite{CL1,CL2} named the cone of {\it psd} forms:
\begin{equation}
P_{n,2r}:= \{p \in F_{n,2r} : u \in {\mathbb R}^n \implies p(u) \ge 0\},
\end{equation}
and the cone of {\it sos} forms:
\begin{equation}
\Sigma_{n,2r}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s h_k^2,\ h_k
\in F_{n,r}\biggr\}.
\end{equation}
Other blenders of interest in \cite{Re1} are the cone of sums of $2r$-th powers:
\begin{equation}
Q_{n,2r}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s (\alpha_{k1}x_1 +
\cdots + \alpha_{kn}x_n)^{2r},\ \alpha_{kj} \in {\mathbb R}\biggr \}
\end{equation}
and the ``Waring blenders'': suppose $r = uv$, $u,v \in {\mathbb N}$ and let:
\begin{equation}
W_{n,(u,2v)}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s h_k^{2v},\ h_k
\in F_{n,u}\biggr \}.
\end{equation}
Note that $W_{n,(r,2)} = \Sigma_{n,2r}$ and $W_{n,(1,2r)} =
Q_{n,2r}$.
The Waring blenders generalize. If $d = 2r$ and $\sum_{i=1}^m u_iv_i =
r$, let
\begin{equation}
W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}}:= \biggl\{p \in F_{n,2r} : p =
\sum_{k=1}^s h_{k,1}^{2v_1}\cdots h_{k,m}^{2v_m} ,\ h_{k,i}
\in F_{n,u_i} \biggr\}.
\end{equation}
There has been recent interest in the cones of convex forms:
\begin{equation}\label{E:kn2r}
K_{n,2r}:= \{p \in F_{n,2r} : p \ \text{is convex}\}.
\end{equation}
We shall use the two equivalent definitions of ``convex'' (see
e.g. \cite[Thm.4.1,4.5]{Ro}): under the
{\it line segment} definition, $p$ is convex if for all $u, v
\in {\mathbb R}^n$ and $\lambda \in [0,1]$,
\begin{equation}
p(\lambda u + (1 - \lambda) v) \le \lambda p(u) + (1-\lambda)p(v).
\end{equation}
The {\it Hessian} definition says that if
\begin{equation}\label{E:hes}
Hes(p;u,v):= \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^2p}{\partial
x_i \partial x_j}(u) v_iv_j,
\end{equation}
then $p$ is convex provided $Hes(p;u,v) \ge 0$ for all $u, v \in
{\mathbb R}^n$. The cone $K_{n,m}$ appeared in \cite{Re1}, but as
$N_{n,m}$ (see Corollary 4.5). Pablo Parrilo asked
whether every convex form is sos; that is, is $K_{n,2r} \subseteq
\Sigma_{n,2r}$? This question
has been answered by Greg Blekherman \cite{B} in the negative. For
fixed $n$, the ``probability'' that a convex form is sos goes to 0
as $r \to \infty$. No examples of $p \in
K_{n,2r} \setminus \Sigma_{n,2r}$ are yet known.
We now make the definition of blender more precise.
Suppose $n \ge 1$ and $d \ge 0$. The index set for monomials in
$F_{n,d}$ consists of $n$-tuples of non-negative integers:
\begin{equation}
\mathcal I(n,d) = \biggl\lbrace i=(i_1,\dots,i_n): \sum\limits_{k=1}^n
i_k = d\biggr\rbrace.
\end{equation}
Write $N(n,d) = \binom {n+d-1}{n-1} = |\mathcal I(n,d)|$ and for $i
\in \mathcal I(n,d)$, let
$c(i) = \frac{d!}{i_1!\cdots i_n!}$ be the associated multinomial coefficient.
The abbreviation $u^i$ means $u_1^{i_1}\dots u_n^{i_n}$,
where $u$ may be an $n$-tuple of constants or variables.
Every $p \in F_{n,d}$ can be written as
\begin{equation}
p(x_1,\dots,x_n)=\sum_{i\in\mathcal I(n,d)} c(i)a(p;i)x^i.
\end{equation}
The identification of $p$ with the $N(n,d)$-tuple $(a(p;i))$ shows that
$F_{n,d} \approx {\mathbb R}^{N(n,d)}$ as a vector space. The topology
placed on $F_{n,d}$ is the usual one: $p_m \to p$ means that for
every $i \in \mathcal I(n,d)$, $a(p_m;i) \to a(p;i)$.
For $\alpha \in {\mathbb R}^n$, define $(\alpha\cdot)^d \in F_{n,d}$ by
\begin{equation}
(\alpha\cdot)^d(x) = \biggl(\sum_{k=1}^n \alpha_kx_k\biggr)^d =
\sum_{i\in\mathcal I(n,d)} c(i)\alpha^ix^i.
\end{equation}
If $\alpha$ is regarded as a row vector and $x$ as a column vector,
then $(\alpha \cdot)^d(x) = (\alpha x)^d$.
If $M = [m_{ij}]\in Mat_n({\mathbb R})$ is a (not
necessarily invertible) real $n\times n$ matrix and $p \in F_{n,d}$, we
define $p\circ M \in F_{n,d}$ by
\begin{equation}
(p\circ M)(x_1,\dots,x_n)= p(\ell_1,\dots,\ell_n), \qquad
\ell_j(x_1,\dots,x_n) = \sum_{k=1}^nm_{jk}x_k.
\end{equation}
If $x$ is viewed as a column vector, then $(p\circ M)(x) =
p(Mx)$; $(\alpha\cdot)^d \circ M = (\alpha M \cdot)^d$.
Define $[[p]]$ to be $\{p \circ M: M \in Mat_n({\mathbb R})\}$,
the {\it closed orbit of $p$}. If $ p = q \circ M$ for {\it invertible}
$M$, we write $p \sim q$; invertibility implies that $\sim$ is an
equivalence relation.
\begin{lemma}
\smallskip
\
\noindent (i) If $p \in F_{n,d}$ and $d$ is odd, then $p \sim \lambda p$ for every $0
\neq \lambda \in {\mathbb R}$.
\noindent (ii) If $p \in F_{n,d}$ and $d$ is even, then $p \sim \lambda p$
for every $0 < \lambda \in {\mathbb R}$.
\noindent (iii) If $u, \alpha \in {\mathbb R}^n$, then there exists a (singular)
$M$ so that $p\circ M = p(u)(\alpha\cdot)^d.$
\end{lemma}
\begin{proof}
For (i), (ii), observe that $(p \circ (cI_n)) =
c^dp$ since $p$ is homogeneous, and $cI_n$ is invertible if $c
\neq 0$. For (iii), note that if $m_{jk} = u_j\alpha_k$
for $1 \le j,k \le n$, then
\begin{equation}
\ell_j(x) = u_j\sum_{k=1}^n \alpha_k x_k = (\alpha x)u_j \implies (p\circ
M)(x_1,\dots,x_n) = (\alpha x)^dp(u_1,\dots, u_n)
\end{equation}
by homogeneity.
\end{proof}
\begin{definition}
A set $B \subseteq F_{n,d}$ is a {\it blender} if these conditions hold:
\smallskip
\noindent (P1) If $p, q \in B$, then $p+q \in B$.
\noindent (P2) If $p_m \in B$ and $p_m \to p$, then $p \in
B$.
\noindent (P3) If $p \in B$ and $M \in Mat_n({\mathbb R})$, then $p
\circ M \in B$.
\end{definition}
Thus, a blender is a closed convex cone of forms which is also
a union of closed orbits. Lemma 1.1 makes it unnecessary
to specify in (P1) that $p \in B$ and $\lambda \ge 0$ imply $\lambda p \in
B$. Let $\mathcal B_{n,d}$ denote the set of blenders in $F_{n,d}$.
Trivially, $\{0\}, F_{n,d} \in \mathcal B_{n,d}$.
It is simple to see that $P_{n,2r}$ is a blender: conditions (P1) and
(P2) can be verified pointwise and if $p(u) \ge
0$ for every $u$, then the same will be true for $p(Mu)$.
Similarly, $K_{n,2r}$ is a blender because (P1) and (P2) follow from
the Hessian definition and (P3) follows from the line segment definition.
If $B_1, B_2 \in \mathcal B_{n,d}$, then $B_1 \cap B_2 \in \mathcal
B_{n,d}$. Define the {\it Minkowski sum}
\begin{equation}\label{E:b1+b2}
B_1+B_2:= \{p_1+p_2: p_i \in B_i\}.
\end{equation}
The smallest blender containing both $B_1$ and $B_2$ must
include $B_1+B_2$; this set is a blender (Theorem 3.5(i)), but it
requires an argument to prove (P2). It is not hard to see that
$\mathcal B_{n,d}$ is not always a chain. Let $(n,d) = (2,8)$ and let $B_1
=W_{2,\{(1,6),(1,2)\}}$ and $B_2 = W_{2,\{(1,4),(1,4)\}}$. Then $x^6y^2 \in
B_1$ and $x^4y^4 \in B_2$. If $x^6y^2 \in B_2$, then
\begin{equation}
x^6y^2 = \sum_{k=1}^s(\alpha_k x + \beta_k y)^4(\gamma_k x + \delta_k y)^4.
\end{equation}
A consideration of the coefficients of $x^8$ and $y^8$ shows that
$\alpha_k\gamma_k = \beta_k\delta_k=0$ for all $k$, hence the only non-zero
summands are positive multiples of $x^4y^4$. Thus $x^6y^2 \not\in
B_2$, and, similarly, $x^4y^4 \not\in B_1$, so $B_1 \setminus B_2$ and $B_2
\setminus B_1$ are both non-empty. It is not clear which octics belong to
$B_1 \cap B_2$ and $B_1 + B_2$.
If $B_1 \in \mathcal B_{n,d_1}$
and $B_2 \in \mathcal B_{n,d_2}$, define
\begin{equation}\label{E:b1*b2}
B_1*B_2:= \left\{\sum_{k=1}^s p_{1,k}p_{2,k}: p_{i,k} \in B_i \right\}.
\end{equation}
Again, this is a blender (Theorem 3.5(ii)), but (P2) is not trivial to prove.
We review some standard facts about convex cones; see \cite[Ch.2,3]{Re1}
and \cite{Ro}.
If $C \subset {\mathbb R}^N$ is a closed convex cone, then $u \in C$ is {\it
extremal} if $u = v_1 + v_2, v_i \in C$, implies that $v_i = \lambda_i
u$, $\lambda_i \ge 0$. The set of extremal elements in $C$ is denoted
$\mathcal E(C)$.
All cones $C \neq 0, {\mathbb R}^N$ in this
paper have the property that $x, -x \in C$ implies $x
= 0$. In such a cone, every element in $C$ is a sum of
extremal elements. (It will follow from Prop.\!\! 2.4 that if $B \in \mathcal
B_{n,d}$ and $p,-p \in B$ for some $p \neq 0$, then $B = F_{n,d}$.)
As usual, $u$ is {\it interior} to $C$ if $C$ contains a
non-empty open ball centered at $u$. The set of interior points of
$C$ is denoted $int(C)$, and the boundary of $C$ is denoted
$\partial(C)$. The next definition depends on the inner product. If $C$
is a closed convex cone, let
\begin{equation}
C^* = \{ v \in {\mathbb R}^N : [u,v] \ge 0\quad \text{for all}\quad u \in C\}.
\end{equation}
Then $C^* \subset {\mathbb R}^N$ is also a closed convex cone and $(C^*)^* =
C$; $C$ and $C^*$ are {\it dual} cones.
If $u \in C$ (and $\pm x \in C$ implies $x = 0$),
then $u \in int(C)$ if and only if $[u,v]>0$ for every
$0 \neq v \in C^*$ (see e.g. \cite[p.26]{Re1}). Thus, if $u
\in \partial(C)$ (in particular, if $u$ is extremal), then there
exists $v \in C^*$, $v \neq 0$ so that $[u,v] = 0$.
This discussion applies to blenders by identifying $p \in F_{n,d}$ with
the $N(n,d)$-tuple of its coefficients. For example, $p \in int(B)$ if
there exists $\epsilon >0$ so that if $|a(q;i)| < \epsilon$ for all $i
\in \mathcal I(n,d)$, then $p +
q \in B$. If $p \sim q \in B$, then $p$ and $q$ simultaneously belong
to (or do not belong to) $int(B), \partial(B), \mathcal E(B)$.
We shall discuss in section two the natural inner product
on $F_{n,d}$. It turns out that, under this inner product, $P_{n,2r}$
and $Q_{n,2r}$ are
dual cones (Prop.\!\! 3.8), as are $K_{n,2r}$ and
$W_{n,\{(1,2r-2),(1,2)\}}$ (Theorem 3.11).
The description of $\mathcal E(P_{n,2r})$ is extremely difficult if $n
\ge 3$. (See e.g.\ \cite{CL1, CL2, CLRsex, CLR, Ha, ReAGI, Re4}.) Every element of
$\mathcal E(\Sigma_{n,2r})$ obviously has the form $h^2$, but not
every square is extremal; e.g.,
\begin{equation}\label{E:h22}
(x^2+y^2)^2 = (x^2-y^2)^2 + (2xy)^2 =\frac1{18} \left((\sqrt 3\ x +
y)^4 + (\sqrt 3\ x - y)^4 + 16y^4 \right).
\end{equation}
We now describe the contents of this paper. Section two reviews the
relevant results from \cite{Re1} regarding
the inner product and its many properties. The
principal results are that if $B \in \mathcal B_{n,d}$ and $B \neq \{0\},
F_{n,d}$, then $d=2r$ is even and $Q_{n,2r} \subset \pm B \subset
P_{n,2r}$ (Prop.\!\! 2.5); the dual cone to a blender is also a
blender (Prop.\!\! 2.7). Section three begins with a number of
preparatory lemmas, mainly involving convergence. We show that if
$B_i$ are blenders, then so are $B_1+B_2$ and $B_1*B_2$ (Theorem 3.5)
and hence the Waring blenders and their generalizations are blenders
(Theorems 3.6, 3.7). We show that $P_{n,2r}$ and $Q_{n,2r}$ are dual
and give a description of $W_{n,(u,v)}^*$ (both from \cite{Re1}) and
show that $K_{n,2r}$ and $W_{n,\{(1,2r-2),(1,2)\}}$ are dual (Theorem
3.11). In section four, we consider $K_{n,2r}$. We show that it cannot
be decomposed non-trivially as $B_1*B_2$ (Corollary 4.2), and
that $K_{n,2r}=N_{n,2r}$ (c.f.\! \eqref{E:kn2r}, \eqref{E:nnd},
Corollary 4.5). We also show that if $p$ is positive definite, then $(\sum
x_i^2)^Np$ is convex for sufficiently large $N$ (Theorem
4.6). In section five, we show that (up to $\pm$) $\mathcal B_{2,4}$
consists of a one-parameter family of blenders $B_{\tau}$, $\tau \in
[-\frac 13, 0]$, where $\tau = \inf\{\lambda: x^4 + 6\lambda x^2y^2 + y^4 \in
B_{\tau}\}$, increasing from $Q_{2,4}=B_0$ to $P_{2,4}=B_{-\frac 13}$,
and that $B_{\tau}^* = B_{U(\tau)}$, where $U(\tau) =
-\frac{1+3\tau}{3-3\tau}$ (Theorem 5.7). In
section six, we review the results of $K_{2,4}$ and $K_{2,6}$ in
\cite{D1,D2,Re00} by Dmitriev and the author, and give some new
examples in $\partial(K_{2,2r})$. The
full analysis of $\mathcal E(K_{2,2r})$ seems intractable for $r \ge
4$. Finally, in section seven, we look at sums of 4th
powers of binary forms. Conjecture 7.1 states that $p \in W_{2,(u,4)}$
if and only if $p = f^2 + g^2$, where $f,g \in P_{2,2u}$. We show that
this is true for $u=1$ and for even symmetric octics $p$ (Theorems
7.3, 7.4). Our classification of even symmetric octics implies that
\begin{equation}\label{E:48}
x^8 + \alpha x^4y^4 + y^8 \in W_{2,(2,4)} \iff \alpha \ge - \tfrac {14}9.
\end{equation}
I would like to thank the organizers of BIRS
10w5007, Convex Algebraic Geometry, held at Banff in February, 2010,
for the opportunity to speak. I would also like to thank my fellow
participants for many stimulating conversations. Sections four and six were
particularly influenced by this meeting. I also thank Greg Blekherman for
very helpful email discussions. Special thanks to Peter Kuchment, a
classmate of V. I. Dmitriev, for trying to contact him for
me. Finally, I thank the editors for the opportunity to
contribute to this volume in memory of Prof.\ Borcea.
\section{The inner product}
For $p$ and $q$ in $F_{n,d}$, we define an inner product with deep
roots in 19th century algebraic geometry and analysis. Let
\begin{equation}\label{E:ip}
[p,q] = \sum_{i \in \mathcal I(n,d)} c(i)a(p;i)a(q;i).
\end{equation}
This is the usual Euclidean inner product, if $p \leftrightarrow
(c(i)^{1/2}a(p;i)) \in {\mathbb R}^N$. The many properties of this inner
product (see Props.\!\! 2.1, 2.6 and 2.9) strongly suggest that this
is the ``correct'' inner product for $F_{n,d}$. We present without
proof the following observations about the inner product.
\begin{proposition}\cite[pp.2,3]{Re1}
\
\noindent (i) $[p,q] = [q,p]$.
\noindent (ii) $j \in \mathcal I(n,d) \implies [p,x^j] = a(p;j)$.
\noindent (iii) $\alpha \in {\mathbb R}^n \implies [p,(\alpha\cdot)^d] = p(\alpha)$.
\noindent (iv) If $p_m \to p$, then $[p_m,q] \to [p,q]$ for every $q \in
F_{n,d}$.
\noindent (v) In particular, taking $q = (u \cdot)^d$, $p_m \to p \implies
p_m(u) \to p(u)$ for all $u \in {\mathbb R}^n$.
\end{proposition}
The orthogonal complement of a subspace $U$ of $F_{n,d}$,
\begin{equation}
U^\perp = \{ v \in F_{n,d}: [u,v] = 0\quad \text{for all}\quad u \in U\},
\end{equation}
is also a subspace of $F_{n,d}$ and $(U^\perp)^\perp = U$.
The following result is widely-known and has been frequently proved over the
last century, see e.g.\cite[p.30]{Re1}.
\begin{proposition}\cite[p.93]{Re1}
Suppose $S \subset {\mathbb R}^n$ has non-empty interior. Then
$F_{n,d}$ is spanned by $\{(\alpha\cdot)^d: \alpha \in S \}$.
\end{proposition}
\begin{proof}
Let $U$ be the subspace of $F_{n,d}$ spanned by $\{(\alpha\cdot)^d:
\alpha \in S \}$ and suppose
$q \in U^{\perp}$. Then $0 = [q,(\alpha\cdot)^d] = q(\alpha)$ for all $\alpha \in S$.
Since $q$ is a polynomial which vanishes on an open set, $q = 0$.
Thus, $U^{\perp} = \{0\}$, so $U = (U^\perp)^\perp = \{0\}^\perp = F_{n,d}$.
\end{proof}
\begin{proposition}[Biermann's Theorem]\cite[p.31]{Re1}
The set $\{(i \cdot)^d : i \in \mathcal I(n,d)\}$ is a basis for
$F_{n,d}$.
\end{proposition}
\begin{proof}
We note that there are $N(n,d)$ such forms, so it suffices to construct
a dual set $\{g_j : j \in \mathcal I(n,d)\} \subset F_{n,d}$
so that $[g_j,(i \cdot)^d] = 0$ if $j \neq i$ and $[g_i,(i \cdot)^d] > 0$. Let
\begin{equation}\label{E:bier}
g_j(x_1,\dots,x_n) = \prod_{k=1}^n \prod_{\ell =0}^{j_k-1} (d x_k - \ell(x_1 +
\cdots + x_n)).
\end{equation}
Each $g_j$ is a product of $\sum_k j_k = d$ linear factors, so $g_j
\in F_{n,d}$. The $(k,\ell)$ factor in \eqref{E:bier} vanishes
at any $x = i \in \mathcal I(n,d)$ for which $i_k = \ell$. Thus,
$[g_j,(i \cdot)^d] = g_j(i) = 0$ if $i_k \le j_k-1$ for any $k$. Since
$\sum_k i_k = \sum_k j_k$, it follows that $g_j(i) = 0$ if $j \neq i$.
A computation shows that $g_i(i) = d^d\prod_k (i_k !) = d^d d!/c(i)$.
\end{proof}
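For example, when $n = d = 2$, \eqref{E:bier} gives
\begin{equation}
g_{(2,0)} = 2x_1(x_1 - x_2), \qquad g_{(1,1)} = 4x_1x_2, \qquad
g_{(0,2)} = 2x_2(x_2 - x_1),
\end{equation}
and one checks directly that $g_{(2,0)}(2,0) = 8 = 2^2\cdot 2!\,0!$,
while $g_{(2,0)}(1,1) = g_{(2,0)}(0,2) = 0$, as claimed.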
Prop.\!\! 2.3 implies Prop.\!\! 2.2 directly, upon
mapping $\mathcal I(n,d)$ linearly into $S$.
\begin{proposition}\cite[p.141]{Re1}
Suppose $B \in \mathcal B_{n,d}$ and there exist $p,q \in
B$ and $u,v \in {\mathbb R}^n$ so that $p(u) > 0 > q(v)$. Then $B = F_{n,d}$.
\end{proposition}
\begin{proof}
By Lemma 1.1(iii), $\pm(\alpha\cdot)^d \in
B$ for $\alpha \in {\mathbb R}^n$, so by Prop.\!\! 2.2, $F_{n,d} \subseteq B$.
\end{proof}
This is the argument Ellison used in \cite[p.667]{E} to show that
every $p \in F_{n,u(2v+1)}$ is a sum of $(2v+1)$-st powers of $h_k
\in F_{n,u}$.
Let $-B = \{ -h: h \in B\}$. It is easy to check
that if $B$ is a blender, then so is $-B$.
\begin{proposition}\cite[p.141]{Re1}
If $B \neq \{0\}, F_{n,d}$ is a blender, then $d=2r$ is even
and for a suitable choice of sign, $Q_{n,2r} \subseteq \pm B
\subseteq P_{n,2r}$.
\end{proposition}
\begin{proof}
If $B \neq \{0\}$, then there exists $p \in B$ and
$a \in {\mathbb R}^n$ so that $p(a) \neq 0$. If $d$ is odd, then $p(-a) =
-p(a)$, and by Prop.\!\! 2.4, $B = F_{n,d}$. If $d$ is
even, by taking $-B$ if necessary, we may assume that $p(a)
\ge 0$. Thus, if $B \neq F_{n,2r}$, then $\pm B \subseteq
P_{n,2r}$. On the other hand,
Lemma 1.1 and (P1) imply that $Q_{n,2r} \subseteq \pm B$.
\end{proof}
Since $Q_{n,2} = P_{n,2}$, there are no ``interesting''
blenders of quadratic forms.
The inner product has a useful contravariant property.
\begin{proposition} \cite[p.32]{Re1}
Suppose $p$, $q\in F_{n,d}$ and $M \in Mat_n({\mathbb R})$. Then
\begin{equation}\label{E:contra}
[p\circ M,q]=[p,q\circ M^t].
\end{equation}
\end{proposition}
\begin{proof}
By Prop.\!\! 2.2, it suffices to prove \eqref{E:contra}
for $d$-th powers; note that $[p \circ M,q]
= [(\alpha M\cdot)^d,(\beta\cdot)^d] = (\alpha M \beta^t)^d =
(\alpha(\beta M^t)^t)^d = [(\alpha \cdot)^d, (\beta M^t\cdot)^d] = [p, q \circ M^t]$.
\end{proof}
\begin{proposition}\cite[p.46]{Re1}
If $B$ is a blender, then so is its dual cone $B^*$.
\end{proposition}
\begin{proof}
The dual of a closed convex cone is a closed convex cone, so
(P1) and (P2) are automatic. Suppose $p \in B, q \in
B^*$ and $M \in Mat_n({\mathbb R})$. Since $p\circ M^t \in
B$, we have
\begin{equation}
[p, q\circ M] = [q \circ M , p] = [q, p\circ M^t] = [p\circ M^t,q] \ge 0,
\end{equation}
and so $q \circ M \in B^*$. This verifies (P3).
\end{proof}
For $i\in\mathcal I(n,d)$, let $D^i = \prod (\frac
{\partial}{\partial x_k})^{i_k}$; let
$f(D) = \sum c(i)a(f;i)D^i$ be the $d$-th order differential operator
associated to $f \in F_{n,d}$. Since $\frac {\partial}{\partial x_k}$ and\
$\frac {\partial}{\partial x_\ell}$ commute, $D^iD^j = D^{i+j} =
D^jD^i$ for any $i \in \mathcal I(n,d)$ and $j\in\mathcal I(n,e)$. By
multilinearity, $(fg)(D) = f(D)g(D) = g(D)f(D)$ for forms $f$ and $g$
of any degree.
\begin{proposition}\cite[p.183]{Re2}
If $i, j \in \mathcal I(n,d)$ and $i \neq j$, then $D^i(x^j) =
0$ and $D^i x^i = \prod_k (i_k)! = d!/c(i)$.
\end{proposition}
\begin{proof}
We have
\begin{equation}
D^i(x^j) = \prod_{k=1}^n \biggl(\frac {\partial^{^{i_k}}}{\partial x_k^{i_k}}\biggr)
\prod_{k=1}^n x_k^{j_k} =
\prod_{k=1}^n \frac {\partial^{^{i_k}} (x_k^{j_k})}{\partial x_k^{i_k}}.
\end{equation}
If $i_k > j_k$, then the $k$-th factor above is zero. If $i \neq j$,
then this will happen for at least one $k$. Otherwise, $i=j$, and the
$k$-th factor is $i_k!$.
\end{proof}
We now connect the inner product with differential operators.
\begin{proposition}\cite[p.184]{Re2}
\smallskip
\noindent (i) If $p, q \in F_{n,d}$, then $p(D)q = q(D)p = d![p,q]$.
\noindent (ii) If $p, hf \in F_{n,d}$, where $f \in F_{n,k}$ and $h \in
F_{n,d-k}$,
then
\begin{equation}
d![p,hf] = (d-k)![h,f(D)p].
\end{equation}
\end{proposition}
\begin{proof}
For (i), we have by Prop.\!\! 2.8:
\begin{equation}
\begin{gathered}
p(D)q = \sum_{i \in \mathcal I(n,d)} c(i)a(p;i)D^i \biggl(\sum_{j \in
\mathcal I(n,d)} c(j)a(q;j)x^j\biggr) =
\\ \sum_{i \in \mathcal I(n,d)} \sum_{j \in \mathcal I(n,d)}
c(i)c(j)a(p;i)a(q;j)D^ix^j
= \sum_{i \in \mathcal I(n,d)} c(i)c(i)a(p;i)a(q;i)D^ix^i
\\ = \sum_{i \in \mathcal I(n,d)} c(i)^2a(p;i)a(q;i) \frac {d!}{c(i)} =
d![p,q] = d![q,p] = q(D)p.
\end{gathered}
\end{equation}
\noindent (ii) Two applications of (i) give
\begin{equation}
d![p,hf] = (hf)(D)p = h(D)f(D)p = h(D)(f(D)p) = (d-k)![h,f(D)p].
\end{equation}
\end{proof}
\begin{corollary}
If $p \in F_{n,2r}$, then $Hes(p;u,v) = 2r(2r-1)[p,(u\cdot)^{2r-2}(v\cdot)^2]$.
\end{corollary}
\begin{proof}
Apply Prop.\!\! 2.9 with $h = (u\cdot)^{2r-2}$, $f = (v\cdot)^2$, $d=2r$
and $k=2$. We have
\begin{equation}
f(x_1,\dots,x_n) = (v_1x_1 + \cdots + v_nx_n)^2 \implies
f(D) = \sum_{i=1}^n \sum_{j=1}^n v_iv_j
\frac{\partial^2}{\partial x_i \partial x_j},
\end{equation}
so that $[h,f(D)p] = Hes(p;u,v)$ by \eqref{E:hes} and Prop.\!\! 2.1(iii).
\end{proof}
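For example, if $p = x_1^4 \in F_{2,4}$, then $Hes(p;u,v) = 12u_1^2v_1^2$;
on the other hand, the coefficient of $x_1^4$ in $(u\cdot)^{2}(v\cdot)^2$ is
$u_1^2v_1^2$, so $[p,(u\cdot)^{2}(v\cdot)^2] = u_1^2v_1^2$ by
Prop.\!\! 2.1(ii), in agreement with $2r(2r-1) = 12$ for $r = 2$.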
\section{Convergence and duals}
We shall need some tools to prove that certain convex cones are
closed. The first one (see \cite[p.37]{Re1}) is an immediate
consequence of Prop.\!\! 2.2.
\begin{lemma}
Suppose $S \subset {\mathbb R}^n$ is bounded and has non-empty interior. Then for $i \in
\mathcal I(n,d)$ and $p \in F_{n,d}$,
$|a(p;i)| \le R_{n,d}(i)\cdot\sup\{|p(x)|: x \in S\}$ for some $ R_{n,d}(i)$.
\end{lemma}
\begin{proof}
Fix $i \in \mathcal I(n,d)$. By Prop.\!\! 2.2, there exist
$\lambda_k(i)$ and $\alpha_k \in S$ so that
\begin{equation}\label{E:bound}
x^i = \sum_{k=1}^{N(n,d)} \lambda_k(i) (\alpha_k\cdot)^d.
\end{equation}
Taking the inner product of \eqref{E:bound} with $p$, we find that
\begin{equation}
a(p;i) = [p,x^i] = \sum_{k=1}^{N(n,d)} \lambda_k(i) [p, (\alpha_k\cdot)^d ] =
\sum_{k=1}^{N(n,d)} \lambda_k(i) p(\alpha_k).
\end{equation}
Now set $R_{n,d}(i) = \sum_k |\lambda_k(i)|$.
\end{proof}
We define the norm on $F_{n,d}$ in the usual way, by
\begin{equation}
||p||^2 = [p,p] = \sum_{i \in \mathcal I(n,d)} c(i) a(p;i)^2.
\end{equation}
Given a sequence $(p_m) \in F_{n,d}$, the statement that
$(|a(p_m;i)|)$ is uniformly bounded for all $(i,m)$ is equivalent to the
statement that $(||p_m||)$ is bounded.
\begin{lemma}
Suppose $(p_{m,r}) \subset F_{n,d}$, $1 \le r \le N$, and suppose
that for all $(m,r)$, $|p_{m,r}(u)| \le M$ for $u \in S$, where $S$
is bounded and
has non-empty interior. Then there exist $p_r \in F_{n,d}$ and $m_k\to\infty$
so that simultaneously for each $r$, $p_{m_k,r} \to p_r$.
\end{lemma}
\begin{proof}
Identify each $p_{m,r}$ with the vector $(a(p_{m,r};i)) \in
{\mathbb R}^{N(n,d)}$; these vectors are uniformly bounded by Lemma 3.1.
Concatenate them to form a vector $v_m \in {\mathbb R}^{N*N(n,d)}$.
By Bolzano-Weierstrass, there is a convergent
subsequence $(v_{m_k})$. The corresponding subsequences of forms are then
convergent.
\end{proof}
Even when $(p_m)$ is unbounded, one can still find an interesting subsequence.
\begin{lemma}
Suppose $(p_m) \subset F_{n,d}$ and $||p_m||$ is unbounded. Then
there exists a subsequence $p_{m_k}$ and $\tau_k \to
\infty$ so that $\tau_k^{-1}p_{m_k} \to p$, where $p \neq 0$.
\end{lemma}
\begin{proof}
Let $\mu_m = \max\{|a(p_m;i)|\}$; by hypothesis, $(\mu_m)$ is unbounded.
Take a subsequence on which $\mu_m\to \infty$ and drop the subscripts. Let
$\bar p_m = \mu_m^{-1}p_m$. Then each $\bar p_m$ has at least one
coefficient $a(\bar p_m;i(m)) = \pm 1$. Since $\mathcal I(n,d)$ is finite,
there exists $i_0$ so that there is a subsequence on which $a(\bar
p_{m_k};i_0) = \pm 1$. Taking $-p_{m_k}$ if necessary and dropping the
subscripts, we have $a(\bar p_m;i_0) = 1$ and $|a(\bar p_m;i)|
\le 1$ for all $(m,i)$. By Lemma 3.2, $(\bar p_m)$ has a convergent subsequence
$\bar p_{m_k} \to p$, and $a(p;i_0) = 1$, so $p \neq 0$.
Since $\bar p_{m_k} = \mu_{m_k}^{-1} p_{m_k}$, this is the desired
subsequence.
\end{proof}
We state without proof a direct implementation of
Carath\'eodory's Theorem (see
e.g. \cite[p.27]{Re1}.). It is worth noting that in 1888 (when
Carath\'eodory was 15), Hilbert \cite{Hi} used this
argument with $N(3,6) = 28$ to show that $\Sigma_{3,6}$ is
closed.
\begin{proposition}[Carath\'eodory's Theorem]
If $r > N(n,d)$, and $h_k \in F_{n,d}$, then there exist $\lambda_k
\ge 0$ so that
\begin{equation}
\sum_{k=1}^r h_k = \sum_{k=1}^{N(n,d)} \lambda_k h_{n_k}.
\end{equation}
\end{proposition}
We use these lemmas to show that if $B_1$ and $B_2$ are
blenders, then so are $B_1+B_2$ (c.f. \eqref{E:b1+b2}) and $B_1*B_2$
(c.f. \eqref{E:b1*b2}). We may assume $B_i \neq 0$.
\begin{theorem}
\
\smallskip
\noindent (i) If $B_i \in \mathcal B_{n,2r}$, then $B_1 + B_2 \in
\mathcal B_{n,2r}$.
\noindent (ii) If $B_i \in \mathcal B_{n,2r_i}$ and $r=r_1+r_2$,
then $B_1*B_2 \in \mathcal B_{n,2r}$.
\end{theorem}
\begin{proof}
In each case, (P1) is automatic, and
since $(p_1+p_2) \circ M = p_1 \circ M + p_2 \circ M$ and $(p_1p_2)
\circ M = (p_1 \circ M)( p_2 \circ M)$, (P3) is verified. The issue is
(P2).
Suppose $B_i \in \mathcal B_{n,2r}$ have opposite ``sign'', say
$B_1 \subset P_{n,2r}$ and $B_2 \subset -P_{n,2r}$. Then Prop.\!\! 2.4
implies that $B_1 + B_2 = F_{n,2r}$. Otherwise, we may assume
that $B_i \subset P_{n,2r_i}$.
Suppose $p_{i,m} \in B_i$ and $p_{1,m} + p_{2,m} = p_m\to
p$. Let $S$ be the unit ball in ${\mathbb R}^n$.
If $\sup\{p(u) : u \in S\} = T$, then for $m \ge m_0$,
$\sup\{p_m(u) : u \in S\} \le T+1$, and since $p_{i,m}$ is psd, it
follows that $\sup\{p_{i,m}(u) : u \in S\} \le T+1$ as well. By
Lemma 3.2, there is a common subsequence so that $p_{i,m_k} \to p_i
\in B_i$, hence $p = \lim p_{m_k} = p_1+p_2 \in B_1 + B_2$.
Suppose now $B_i \in \mathcal B_{n,2r_i}$, and by taking $\pm B_i$,
assume $B_i \subset P_{n,2r_i}$.
By Prop.\!\! 3.4, a sum such as \eqref{E:b1*b2} can be compressed
into one in which
$s \le N(n,2r)$. Write
\begin{equation}
p_m = \sum_{k=1}^{N(n,2r)} p_{1,k,m}p_{2,k,m},\qquad p_{i,k,m} \in B_i,
\end{equation}
and suppose $p_m \to p$.
As above, since $p$ is bounded on $S$, so is the sequence $(p_m)$,
and since each $p_{i,k,m}$ is psd, it follows that the
sequence $(p_{1,k,m}p_{2,k,m})$ is bounded on $S$, and hence by
Lemma 3.2, a subsequence of $(p_{1,k,m}p_{2,k,m}) \to p_k$ for some $p_k \in
P_{n,2r}$. We need to show that $p_k$ can be written as a product
$q_{1,k}q_{2,k}$, where $q_{i,k} \in B_i$.
A complication is that the given sequence of factors might not both
converge (e.g.\!\! if $p_{1,k,m} = m q_{1,k}$ and $p_{2,k,m} = m^{-1}
q_{2,k}$), so we need to normalize.
First observe that if $p_k=0$, we are done. Otherwise, choose $v \in {\mathbb R}^n$
so that $p_k(v) =1$. Since $p_{1,k,m}(v)p_{2,k,m}(v) \to 1$,
$p_{1,k,m}(v)p_{2,k,m}(v) > 0$ for $m \ge m_0$. Drop the first
$m_0$ terms and define
\begin{equation}
q_{1,k,m}(x) = \frac {p_{1,k,m}(x)}{p_{1,k,m}(v)} \in B_1, \qquad
q_{2,k,m}(x) = \frac {p_{2,k,m}(x)}{p_{2,k,m}(v)} \in B_2.
\end{equation}
Then $(q_{1,k,m}q_{2,k,m}) \to p_k$ and $q_{i,k,m}(v) = 1$.
If each $(||q_{i,k,m}||)$ is bounded, then by
Lemma 3.2, there are convergent subsequences $q_{i,k,m} \to q_{i,k} \in
B_i$ and $p_k = q_{1,k}q_{2,k}$ as desired.
Suppose $(||q_{1,k,m}||)$ is unbounded and $(||q_{2,k,m}||)$ is
bounded. Taking the common convergent subsequences from
Lemmas 3.2 and 3.3, and dropping subscripts, we have $\tau_m \to
\infty$ and $q_{1,k,m} = \tau_m \bar q_{1,k,m}$ so that $\bar
q_{1,k,m} \to \bar q_{1,k} \in B_1$ (where $\bar q_{1,k} \neq 0$) and $q_{2,k,m} \to
q_{2,k} \in B_2$, where $q_{2,k}(v) = \lim q_{2,k,m}(v) = 1$,
so $q_{2,k} \neq 0$. But now
\begin{equation}
0 = \lim_{m \to \infty} \tau_m^{-1} q_{1,k,m}q_{2,k,m} = \lim_{m \to \infty} \bar
q_{1,k,m}q_{2,k,m} = \bar q_{1,k}q_{2,k},
\end{equation}
a contradiction. If both $(||q_{i,k,m}||)$'s are unbounded, we
can write $q_{2,k,m} = \nu_m \bar q_{2,k,m}$ with $\nu_m \to
\infty$ and $\bar q_{2,k,m} \to \bar q_{2,k}\neq 0$ and derive a similar
contradiction. It follows that the first case holds for each $k$ and
so $B_1*B_2$ satisfies (P2).
\end{proof}
The following theorem was announced in \cite[p.47]{Re1}, but the proof
was not given.
\begin{theorem}
If $uv = r$, then $W_{n,(u,2v)}$ is a blender.
\end{theorem}
\begin{proof}
As we have already seen, (P1) and (P3) are immediate. Suppose $p_m \in
W_{n,(u,2v)}$ and $p_m \to p$. Prop.\!\! 3.4 says that we can write
\begin{equation}
p_m = \sum_{k=1}^{N(n,2r)}h_{k,m}^{2v}, \qquad h_{k,m} \in F_{n,u}.
\end{equation}
As before, $p$ (and so $(p_m)$) is bounded on $S$, and the
summands are psd so $(h_{k,m}^{2v})$ and thus also $(|h_{k,m}|) =
((h_{k,m}^{2v})^{1/(2v)})$ are bounded on $S$. Taking a convergent
subsequence, suppose $(h_{k,m}) \to
h_k$. Then $(h_{k,m}^{2v}) \to h_k^{2v}$. Taking a common subsequence
for each of the $N(n,2r)$ summands, we see that $p \in W_{n,(u,2v)}$.
\end{proof}
In particular, $\Sigma_{n,2r}$ and $Q_{n,2r}$ are blenders; see \cite[p.46]{Re1}.
\begin{theorem}
If $\sum_iu_iv_i = r$, then
$W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}} \in \mathcal B_{n,2r}$.
\end{theorem}
\begin{proof}
Note that $W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}} = W_{n,(u_1,2v_1)} * \cdots
* W_{n,(u_m,2v_m)}$.
\end{proof}
\begin{proposition}\cite[p.38]{Re1}
$P_{n,2r}$ and $Q_{n,2r}$ are dual blenders.
\end{proposition}
\begin{proof}
We have $p \in Q_{n,2r}^*$ if and only if $p \in F_{n,2r}$ and, whenever $\lambda_k
\ge 0$ and $\alpha_k \in {\mathbb R}^n$,
\begin{equation}
0 \le \left[ p, \sum_{k=1}^r \lambda_k (\alpha_k\cdot)^{2r} \right] =
\sum_{k=1}^r \lambda_k p(\alpha_k).
\end{equation}
This is true iff $p(\alpha) \ge 0$ for all $\alpha \in {\mathbb R}^n$;
that is, iff $p \in P_{n,2r}$.
\end{proof}
It was a commonplace by the time of \cite{Hi} that
$P_{n,2r} = \Sigma_{n,2r}$ when $n=2$ or $2r=2$. Hilbert proved there that
$P_{3,4} = \Sigma_{3,4}$ and that strict inclusion is true for other
$(n,2r)$ (see \cite{Re3}.)
We say that $p \in P_{n,2r}$ is
{\it positive definite} or {\it pd} if $p(u) = 0$ only for $u=0$.
It follows that $p \in int(P_{n,2r})$ if and only if $p$ is pd.
Blenders are cousins of orbitopes. An {\it orbitope}
is the convex hull of an orbit of a compact algebraic group $G$ acting
linearly on a real vector space; see \cite[p.1]{SSS}. The key
differences from blenders are that it is a single orbit, and that $G$ is
compact. One object which is both a blender and an orbitope is
$Q_{n,2r}$, which is named $\mathcal V_{n,2r}$ (and called the {\it
Veronese orbitope}) in \cite{SSS}.
The duals of the Waring blenders can be explicitly given.
\begin{proposition}\cite[p.47]{Re1}
Given $p \in F_{n,2uv}$, define the form $H_p(t) \in F_{N(n,u),2v}$, in
variables $\{t(\ell)\}$ indexed by $\{\ell \in \mathcal I(n,u)\}$, by
\begin{equation}\label{E:wardual}
H_p(t) = \sum_{\ell_1 \in \mathcal I(n,u)}\cdots
\sum_{\ell_{2v} \in \mathcal I(n,u)} a(p;\ell_1 + \cdots + \ell_{2v})t(\ell_1)\cdots
t(\ell_{2v}).
\end{equation}
Then $p \in W_{n,(u,2v)}^*$ if and only if $H_p \in P_{N(n,u),2v}$.
\end{proposition}
\begin{proof}
We have $p \in W_{n,(u,2v)}^*$ if and only if, for every form $g \in
F_{n,u}$, $[p,g^{2v}] \ge 0$. Writing
$g \in F_{n,u}$ with coefficients $\{t(\ell): \ell \in \mathcal I(n,u)\}$,
we have:
\begin{equation}
\begin{gathered}
g(x) = \sum_{\ell \in \mathcal I(n,u)} t(\ell) x^\ell \implies \\
g^{2v}(x) = \sum_{\ell_1 \in \mathcal I(n,u)}\cdots\sum_{\ell_{2v} \in \mathcal I(n,u)}
t(\ell_1)\cdots t(\ell_{2v}) x^{\ell_1 + \cdots+ \ell_{2v}}.
\end{gathered}
\end{equation}
It follows from \eqref{E:ip} and \eqref{E:wardual} that $[p,g^{2v}] =
H_p(t)$.
\end{proof}
If $u=1$, then $\mathcal I(n,1) = \{e_i\}$ and, upon writing $t(e_i) =
y_i$, $H_p(y_1,\dots,y_n) = p(y)$; we recover $Q_{n,2r}^* = P_{n,2r}$.
If $v=1$, then $H_p$ becomes the classical catalecticant quadratic form and
\begin{equation}\label{E:cata}
p \in \Sigma_{n,2r}^*
\iff H_p(t) = \sum_{i \in \mathcal I(n,r)}\sum_{j \in \mathcal I(n,r)}
a(p;i+j)t(i)t(j)\ \text{ is psd}.
\end{equation}
This shows that $\Sigma_{n,2r}$ is a spectrahedron (see \cite[p.27]{SSS}).
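To make \eqref{E:cata} concrete in the binary case, here is a minimal
verification sketch, assuming Python with sympy (the helper name
{\tt catalecticant} is ours, not standard); it builds the catalecticant
matrix of a binary form and tests positive semidefiniteness:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def catalecticant(p, r):
    # a_k with p = sum_k binom(2r,k) a_k x^(2r-k) y^k
    a = [sp.Poly(p, x, y).coeff_monomial(x**(2*r-k)*y**k) / sp.binomial(2*r, k)
         for k in range(2*r + 1)]
    return sp.Matrix(r + 1, r + 1, lambda i, j: a[i + j])

# f_lam = x^4 + 6*lam*x^2*y^2 + y^4 lies in Q_{2,4} iff this matrix is
# psd; for lam = 1/2 the eigenvalues are 3/2, 1/2, 1/2 >= 0.
lam = sp.Rational(1, 2)
C = catalecticant(x**4 + 6*lam*x**2*y**2 + y**4, 2)
print(C)
print(all(ev >= 0 for ev in C.eigenvals()))  # True
\end{verbatim}
Compare the matrix printed here with the one displayed in the proof of
Theorem 5.6 below.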
\begin{theorem}
If $\sum v_i = r$, then $W_{2,\{(1,2v_1),\dots,(1,2v_m)\}} = P_{2,2r}$
iff $m=r$ and $v_i=1$.
\end{theorem}
\begin{proof}
If $p \in P_{2,2r} = \Sigma_{2,2r}$, then $p = f_1^2 + f_2^2$, where $f_i
\in F_{2,r}$. Factor $\pm f_i$ into a product of linear and pd
quadratic factors (themselves a sum of two squares):
\begin{equation}
f_i = \prod_j \ell_{1,j} \prod_k (\ell_{2,k}^2 + \ell_{3,k}^2).
\end{equation}
Then, using \eqref{E:h22} and expanding the product below, we see that
\begin{equation}
f_i^2 = \prod_j \ell_{1,j}^2 \prod_k \bigl((\ell_{2,k}^2 -
\ell_{3,k}^2)^2 + (2\ell_{2,k}\ell_{3,k})^2 \bigr) \in W_{2,\{(1,2),\dots,(1,2)\}}.
\end{equation}
The converse inclusion follows from Prop.\!\! 2.5.
Suppose now that $m < r$ and that
\begin{equation}
\prod_{\ell=1}^r (x - \ell y)^2 = \sum_{k=1}^s h_{k,1}^{2v_1}\cdots
h_{k,m}^{2v_m} ,\quad h_{k,i}(x,y) = \alpha_{k,i}x + \beta_{k,i}y \in F_{2,1}.
\end{equation}
Then for each $k$, we have
\begin{equation}
\prod_{\ell=1}^r (x - \ell y)\ \bigg \vert \ \prod_{i=1}^m (\alpha_{k,i}x + \beta_{k,i}y);
\end{equation}
since $m < r$, the right-hand side must be 0; thus every summand
vanishes, and we have a contradiction.
\end{proof}
Finally, we have a simple expression for $K_{n,2r}^*$; this
seems to be implicit in \cite{B}.
\begin{theorem}
$K_{n,2r}$ and $W_{n,\{(1,2r-2),(1,2)\}}$ are dual blenders.
\end{theorem}
\begin{proof}
By Corollary 2.10 and the Hessian definition, $p$ is convex if and
only if $0 \le Hes(p;u,v) = 2r(2r-1)[p, (u\cdot)^{2r-2}(v\cdot )^2]$
for all $u,v \in {\mathbb R}^n$; that is, $K_{n,2r} =
W_{n,\{(1,2r-2),(1,2)\}}^*$. Since $W_{n,\{(1,2r-2),(1,2)\}}$ is a
closed convex cone, taking duals again gives $K_{n,2r}^* =
W_{n,\{(1,2r-2),(1,2)\}}$.
\end{proof}
It follows from Theorems 3.10 and 3.11 that
$K_{2,4}^* = W_{2,\{(1,2),(1,2)\}} = P_{2,4}$, so $K_{2,4} =
Q_{2,4}$. For $r \ge 3$, $K_{2,2r}^* = W_{2,\{(1,2r-2),(1,2)\}}
\subsetneq P_{2,2r}$, so $K_{2,2r} \supsetneq Q_{2,2r}$. We
return to this topic in section six.
\section{$K_{n,2r}$: convex forms}
In this section, we prove some general results for $K_{n,2r}$.
Since $p \in K_{n,2r}$ if and only if $Hes(p;u,v)$ is psd and
$Hes(p;u,u) = 2r(2r-1)p(u)$, we get an alternative proof that
$K_{n,2r} \subseteq P_{n,2r}$. We also know from Theorem 3.11 that $p
\in int(K_{n,2r})$ if and only if $[p,q] > 0$ for $0 \neq q \in
W_{n,\{(1,2r-2),(1,2)\}}$; accordingly, $int(K_{n,2r})$ is the set of $p
\in K_{n,2r}$ so that $Hes(p;u,v)$ is positive definite as a
bihomogeneous form in the variables $u \in {\mathbb R}^n$ and $v \in
{\mathbb R}^n$. Equivalently, $p \in K_{n,2r}$ is in $\partial(K_{n,2r})$ if and only if
there exist $u_0\neq 0, v_0 \neq 0$ such that $Hes(p;u_0,v_0) = 0$.
Although the psd and sos properties are preserved under homogenization and
dehomogenization, this is not true for convexity. For example, $t^2-1$
is a convex polynomial which cannot be homogenized to a convex form,
because it is not definite. As a pd polynomial in one variable,
$t^4 + 12 t^2 + 1$ is convex, but if $p(x,y) = x^4 +
12x^2y^2 + y^4$, then $Hes(p;(1,1),(v_1,v_2)) = 36v_1^2 +
96v_1v_2 + 36v_2^2$ is not psd, so $p$ is not convex.
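This example is easy to check mechanically; a minimal sketch, assuming
Python with sympy:
\begin{verbatim}
import sympy as sp

x, y, v1, v2 = sp.symbols('x y v1 v2')
p = x**4 + 12*x**2*y**2 + y**4

# Hes(p; u, v) at u = (1,1), as a quadratic form in (v1, v2)
H = sp.hessian(p, (x, y)).subs({x: 1, y: 1})
hes = sp.expand((sp.Matrix([[v1, v2]]) * H * sp.Matrix([v1, v2]))[0, 0])
print(hes)                        # 36*v1**2 + 96*v1*v2 + 36*v2**2
print(hes.subs({v1: 1, v2: -1}))  # -24 < 0, so p is not convex
\end{verbatim}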
\begin{proposition}
If $p \in K_{n,2r}$, then there is a pd form $q$ in $\le n$ variables
and $\bar p \sim p$ such that $\bar p(x) = q(x_k,\dots, x_n)$.
\end{proposition}
\begin{proof}
If $p$ is pd, there is nothing to prove. Otherwise, we can assume that
$p \sim \bar p$, where $\bar p$ is convex and
$\bar p(e_1) = 0$. We shall show that $\bar p = \bar
p(x_2,\dots,x_n)$. Repeated application of this argument then proves the result.
Suppose otherwise that $x_1$ appears in some term of $\bar p$, and let
$m \ge 1$ be the largest power of $x_1$ occurring in $\bar p$; write
the associated terms in $\bar p$ as $x_1^mh(x_2,\dots,x_n)$. After
an additional invertible linear change involving $(x_2,\dots,x_n)$,
we may assume that
one of these terms is $x_1^mx_2^{2r-m}$. We then have
\begin{equation}
\bar p(x_1,x_2,0,\dots,0) = x_1^m x_2^{2r-m} + \text{lower order terms in $x_1$}
\end{equation}
which implies that
\begin{equation}\label{E:binhess}
\begin{gathered}
\frac{\partial^2 \bar p}{\partial x_1^2}\frac{\partial^2 \bar
p}{\partial x_2^2}
- \left(\frac{\partial^2 \bar p}{\partial x_1\partial x_2}\right)^2 = \\
-(2r-1)m(2r-m) x_1^{2m-2}x_2^{4r-2m-2} + \text{lower order terms in $x_1$}.
\end{gathered}
\end{equation}
Since $r \ge 1$ and $1 \le m \le 2r-1$, \eqref{E:binhess} cannot be
psd, and this contradiction shows that $x_1$ does not occur in $\bar p$.
\end{proof}
\begin{corollary}
There do not exist $B_i \in \mathcal B_{n,2r_i}$, $r_i \ge 1$, so that
$K_{n,2r_1+2r_2} = B_1 * B_2$.
\end{corollary}
\begin{proof}
It follows from Prop.\!\! 2.5 that $x_i^{2r_i} \in B_i$, hence
$x_1^{2r_1}x_2^{2r_2} \in B_1*B_2$, but by Prop.\!\! 4.1, this form
is not convex.
\end{proof}
The next theorem connects $K_{n,2r}$ with the blender $N_{n,2r}$ defined
in \cite[p.119-120]{Re1}. Let $E = \langle e_1,\dots,e_n\rangle$ be a real
$n$-dimensional vector space. We say that $f$ is a {\it
norm-function} on $E$ if, after defining
\begin{equation}\label{E:nf}
||x_1e_1+ \dots + x_ne_n|| = f(x_1,\dots,x_n),
\end{equation}
the pair $(E,||\cdot||)$ is a Banach space. Let
\begin{equation}\label{E:nnd}
N_{n,d}:= \{p \in F_{n,d}: p^{1/d} \text{ is a norm function} \}.
\end{equation}
A necessary condition is that $f = p^{1/d} \ge 0$, hence $d=2r$ is
even and $p \in P_{n,2r}$.
For example, if $p(x) = \sum_k x_k^2$, then \eqref{E:nf} with $f=p^{1/2}$
gives ${\mathbb R}^n$ with the Euclidean norm. If $(E,||\cdot||)$ is isometric to a
subspace of some $L_{2r}(X,\mu)$, then $f^{2r} \in Q_{n,2r}$.
The following result was proved in the author's thesis; see \cite{Re0,Re00}.
\begin{proposition}\cite[Thm.1]{Re00}
If $p \in P_{n,2r}$, then $p \in N_{n,2r}$ iff for all $u,v \in {\mathbb R}^n$,
$p(u_1+tv_1,\dots,u_n+tv_n)^{1/d}$ is a convex function of $t$.
\end{proposition}
It is not obvious that $N_{n,2r}$ is a blender; in fact,
$N_{n,2r} = K_{n,2r}$! The connection is a proposition whose
provenance is unclear. It appears in Rockafellar's monograph
\cite[Cor.15.3.1]{Ro}, where it is attributed to Lorch \cite{Lo}, although the
derivation is not transparent.
V. I. Dmitriev (see section 6) attributes the result to an
observation by his advisor S. G. Krein in 1969. Note below that $q$ is
{\it not} homogeneous.
\begin{proposition}
Suppose $p \in P_{n,2r}$ and $p(1,0,\dots,0) > 0$. Let
\begin{equation}
q(x_2,\dots,x_n) = p(1,x_2,\dots,x_n).
\end{equation}
Then $p \in K_{n,2r}$ if and only if $q^{1/(2r)}(x_2,\dots,x_n)$ is
convex.
\end{proposition}
\begin{corollary}
$K_{n,2r} = N_{n,2r}$.
\end{corollary}
\begin{proof}[Proof of Prop.\!\! 4.4]
A function is convex if and only if it is convex when restricted
to all two-dimensional subspaces.
Consider all $a \in {\mathbb R}^n$ with $a_1 = 1$. Suppose we can show that
$Hes(p;a,u)$ is psd in $u$ if and only if $q^{1/(2r)}$ is
convex at $(a_2,\dots,a_n)$. By homogeneity, this
occurs if and only if $Hes(p;a,u)$ is psd in $u$ for every $a$ with $a_1
\neq 0$ and by continuity, this holds if and only
if $Hes(p;a,u)$ is psd for all $a,u$. Thus, it suffices to
set $a_1 = 1$ and prove the equivalence pointwise.
Fix $(a_2,\dots,a_n)$ and let
\begin{equation}
\begin{gathered}
\tilde p(x_1,x_2\dots, x_n) = p(x_1, x_2 + a_2 x_1, \dots, x_n + a_nx_1),\\
\tilde q(x_2,\dots,x_n) = \tilde p(1,x_2,\dots, x_n) = q(x_2+a_2,\dots,x_n+a_n).
\end{gathered}
\end{equation}
Then $p$ and $q^{1/(2r)}$ are convex at $a$ and $(a_2,\dots,a_n)$ iff
$\tilde p$ and
$\tilde q$ are convex at $e_1$ and 0, and we can drop the tildes and
assume that
$a_k = 0$ for $k \ge 2$, so $a = e_1$.
Since it suffices to look at all two-dimensional
subspaces containing $e_1$, we make one more change of variables in
$(x_2,\dots,x_n)$, and assume this
subspace is $\{(x_1,x_2,0,\dots,0)\}$.
Suppose now that
\begin{equation}
h(x_1,x_2) = p(x_1,x_2,0,\dots,0) = a_0 x_1^{2r} + \binom {2r}1 a_1
x_1^{2r-1}x_2 + \binom {2r}2 a_2x_1^{2r-2}x_2^2 + \dots.
\end{equation}
Then
\begin{equation}
Hes(h;(1,0),(v_1,v_2)) = 2r(2r-1)(a_0 v_1^2 + 2a_1 v_1v_2 + a_2 v_2^2),
\end{equation}
and since $a_0 = p(e_1) > 0$, this is psd iff $a_0a_2 \ge a_1^2$. On
the other hand,
\begin{equation}
q(t) = p(1,t) = a_0 + \binom {2r}1 a_1 t + \binom {2r}2 a_2 t^2 + \dots
\end{equation}
and a routine computation shows that
\begin{equation}
(q^{(1/(2r))})''(0) = (2r-1)a_0^{-2+1/(2r)}(a_0a_2 - a_1^2).
\end{equation}
Thus the two conditions hold simultaneously.
\end{proof}
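The displayed value of $(q^{1/(2r)})''(0)$ can be confirmed with a
computer algebra system; a minimal sketch, assuming Python with sympy
(the higher-order terms of $q$ do not affect the second derivative at
$t=0$):
\begin{verbatim}
import sympy as sp

t, a0, a1, a2, r = sp.symbols('t a0 a1 a2 r', positive=True)

q = a0 + 2*r*a1*t + r*(2*r - 1)*a2*t**2   # binom(2r,2) = r(2r-1)
d2 = sp.diff(q**(1/(2*r)), t, 2).subs(t, 0)
expected = (2*r - 1) * a0**(-2 + 1/(2*r)) * (a0*a2 - a1**2)
print(sp.simplify(d2 - expected))         # 0
\end{verbatim}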
A more complicated proof computes the Hessian of $p$, uses the Euler
PDE ($2rp = \sum x_i \frac{\partial p}{\partial x_i}$ and
$(2r-1)\frac{\partial p}{\partial x_i} = \sum x_j\frac{\partial^2
p}{\partial x_i\partial x_j} $) to replace partials involving $x_1$
with partials involving only the other variables.
The discriminant of this Hessian with respect to $u_1$ (after a
change of variables) becomes a positive multiple of the Hessian of $q^{1/(2r)}$.
We next prove a peculiar result which implies that
every pd form is, in a computable way, the restriction of a convex
form on $S^{n-1}$.
\begin{theorem}
Suppose $p \in P_{n,2r}$ is pd, and let $p_N:= (\sum_j x_j^2)^N p$.
Then there exists $N$ so that $p_N \in K_{n,2r+2N}$.
\end{theorem}
\begin{proof}
Since $p$ is pd, it is bounded away from 0 on $S^{n-1}$ and so there are
uniform upper bounds $T$ for $ |p(x)^{-1}\nabla_u(p)(x)|$ and $U$
for $|p(x)^{-1} \nabla^2_u(p)(x)|$, for $x, u \in
S^{n-1}$. Since $\sum x_i^2$ is rotation-invariant, once again it suffices to
show that $p_N$ is convex at $(1,0,\dots,0)$, given $x_3 = \cdots = x_n
= 0$. We claim that if $N > (T^2 + U)/2$, then $p_N$
is convex. By Prop.\!\! 4.4, it suffices to show that
$p^{1/(2N+2r)}_N(1,t,0,\dots,0)$ is convex at $t=0$.
Writing down the relevant Taylor series, this becomes
\begin{equation}
(1 + t^2)^{N/(2N+2r)} (1 + \alpha t + \tfrac 12 \beta t^2 + \dots )^{1/(2N+2r)},
\end{equation}
where $|\alpha| \le T$ and $|\beta|\le U$. By expanding the product, a
standard computation shows that the second derivative at $t=0$ is
\begin{equation}
\frac {N}{N+r} + \frac 1{2N+2r}\cdot \beta - \frac
{2N+2r-1}{(2N+2r)^2}\cdot \alpha^2
\ge \frac 1{2N+2r} \left(2N - U - T^2 \right) \ge 0.
\end{equation}
\end{proof}
Greg Blekherman pointed out to the author's chagrin in Banff that
Theorem 4.6
follows from \cite[Thm.3.12]{R2}: if $p$ is pd, then there exists $N$
so that $p_N \in Q_{n,2r+2N}$. This was used in \cite{R2} to show that
$p_N\in \Sigma_{n,2r+2N}$; it also implies that $p_N \in K_{n,2r+2N}$.
The proof of \cite[Thm.3.12]{R2} is much less elementary.
We conclude this section with a computational illustration of the
proof of Theorem 4.6.
If $a > 0$, then $x^2 + a y^2$ is convex, but if $r \ge 1$ and
$(x^2 + y^2)^r(x^2 + a y^2) \in K_{2,2r+2}$ for all $a>0$, then by (P2),
$x^2(x^2 + y^2)^r$ would be convex, violating Prop.\!\! 4.1.
\begin{theorem}
For $a > 0$,
\begin{equation}
(x^2 + y^2)^r(x^2 + a y^2) \in K_{2,2r+2} \iff a + 1/a \le 8r + 18 + 8/r.
\end{equation}
\end{theorem}
\begin{proof}
Let $p(x,y) = (x^2 + y^2)^r(x^2 + a y^2)$. A computation shows that
\begin{equation}
\begin{gathered}
\frac{\partial^2p}{\partial x^2}\frac{\partial^2p}{\partial y^2} -
\left(\frac{\partial^2p}{\partial x\partial y}\right)^2 =
4(2r+1)(x^2+y^2)^{2r-2} q(x,y), \quad \text{where} \quad
q(x,y) = \\ (1 + r) (a + r)x^4 + ( 2 a - r + 6 a r - a^2 r + 2 a
r^2)x^2y^2 + a (1 + r) (1 + a r)y^4.
\end{gathered}
\end{equation}
Another computation shows that
\begin{equation}\label{E:mess}
\begin{gathered}
4(1+r)(a+r)q(x,y)\\ = (2(1+r)(a+r)x^2 + ( 2 a - r + 6 a r - a^2 r + 2 a
r^2)y^2)^2 \\ + a r^2(a-1)^2\bigl((8r + 18 + 8/r)-(a+1/a)\bigr)y^4.
\end{gathered}
\end{equation}
If $a + 1/a \le 8r + 18 + 8/r$, then \eqref{E:mess} shows that $q$
is psd. Suppose $a + 1/a > 8r + 18 + 8/r$. Observe that
$ 2 a - r + 6 a r - a^2 r + 2 ar^2 \ge
0$ if and only if $(a + 1/a) \le 2r + 6 + 2/r$, so in this case, $ 2
a - r + 6 a r - a^2 r + 2 ar^2 < 0$ and we can choose $(x,y) = (x_0,y_0) \neq
(0,0)$ to make the first square in \eqref{E:mess} equal to zero. It then
follows that $4(1+r)(a+r)q(x_0,y_0)< 0$.
\end{proof}
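The identity \eqref{E:mess} is mechanical to verify; a minimal sketch,
assuming Python with sympy:
\begin{verbatim}
import sympy as sp

x, y, a, r = sp.symbols('x y a r', positive=True)

q = ((1 + r)*(a + r)*x**4
     + (2*a - r + 6*a*r - a**2*r + 2*a*r**2)*x**2*y**2
     + a*(1 + r)*(1 + a*r)*y**4)
lhs = 4*(1 + r)*(a + r)*q
rhs = ((2*(1 + r)*(a + r)*x**2
        + (2*a - r + 6*a*r - a**2*r + 2*a*r**2)*y**2)**2
       + a*r**2*(a - 1)**2*((8*r + 18 + 8/r) - (a + 1/a))*y**4)
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
\end{verbatim}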
In particular, $(x^2 + y^2)(x^2 + a y^2) \in K_{2,4} \iff
17 - 12 \sqrt 2 \le a \le 17 + 12 \sqrt 2$.
\section{$\mathcal B_{2,4}$: binary quartic blenders}
In view of Prop.\!\! 2.5, the simplest non-trivial opportunity to
classify blenders comes with the binary quartics. Throughout this section, we
choose a sign for $\pm B \in \mathcal B_{2,4}$ and assume that $B
\subset P_{2,4}$. We shall show that $\mathcal B_{2,4}$
is a one-parameter nested family of blenders increasing from $Q_{2,4}$ to
$P_{2,4}$. It is also convenient to let $Z_{2,4}$ denote the set of $p \in
P_{2,4}$ which are neither pd nor a 4th power; if $p \in Z_{2,4}$,
then $p = \ell^2 h$, where $\ell$ is linear and $h$ is a psd quadratic
form relatively prime to $\ell$.
\begin{lemma}
If $B \in \mathcal B_{2,4}$ and $0 \neq p \in B \cap Z_{2,4}$, then $B =P_{2,4}$.
\end{lemma}
\begin{proof}
We have $p \sim q$, where
$q(x,y) = x^2(ax^2 + 2bxy + cy^2) \in B$, $ac - b^2\ge 0$ and $c > 0$. But
\begin{equation}
x^2(ax^2 + 2bxy + cy^2) =
x^2\bigl( \bigl(\tfrac{ac - b^2}c\bigr) x^2 + c\bigl(\tfrac bc x + y
\bigr)^2\bigr) \sim x^2(d x^2 + c y^2),
\end{equation}
and $d \ge 0$. Next, $(x,y) \mapsto (\epsilon x, \epsilon^{-1}y)$
shows that $\epsilon^2 dx^4 + c x^2y^2 \in B$, so $x^2y^2 \in B$ by (P2) and
$\ell_1^2\ell_2^2 \in B$ by (P3). Thus, $W_{2,\{(1,2), (1,2)\}} = P_{2,4}
\subseteq B$ by Theorem 3.10.
\end{proof}
This lemma illustrates one difference between
blenders and orbitopes. If $G = SO(2)$ and $p(x,y) = x^2(x^2+y^2)$,
then the image of $p$ under the action of $G$ will be $\{(\cos t x + \sin t
y)^2(x^2+y^2)\}$, so even taking scalar multiples into account, the
convex hull will not contain the 4th powers or the square of an
indefinite quadratic.
A binary quartic of particular importance is
\begin{equation}\label{E:flam}
f_{\lambda}(x,y) := x^4 + 6\lambda x^2y^2 + y^4;
\end{equation}
we also define
\begin{equation}
g_{\lambda}(x,y):= f_{\lambda}(x+y,x-y) = (2+6\lambda)x^4 + (12 - 12\lambda)x^2y^2 + (2+6\lambda)y^4.
\end{equation}
We shall need two special fractional linear transformations. Let
\begin{equation}\label{E:TU}
T(z):= \frac{1-z}{1+3z}, \qquad U(z) := - \frac{1+3z}{3-3z}.
\end{equation}
It follows from \eqref{E:flam} that $g_{\lambda} = (2+6\lambda)f_{T(\lambda)}$, hence for
$\lambda \neq -\frac 13$, $f_{\lambda} \sim f_{T(\lambda)}$. Note that $T(T(z))
= z$, $T(0) = 1$, $T(\frac 13) = \frac 13$, and
$T(-\frac 13) = \infty$ (corresponding to $(x^2-y^2)^2 \sim x^2y^2$);
$T$ gives a 1-1
decreasing map between $[\frac 13,\infty)$ and $(-\frac 13,\frac 13]$.
We also have
\begin{equation}\label{E:apo}
[f_{\lambda},g_{\mu}] = (2+6\mu) + \lambda(12-12\mu) + (2+6\mu) =
4(1+3\lambda+3\mu-3\lambda\mu).
\end{equation}
Note that $U(U(z)) = z$, $U(0) = -\tfrac 13$, $U$ gives a
1-1 decreasing map from $[-\frac 13,0]$ to itself, and
\begin{equation}\label{E:fglam}
[f_{\lambda},g_{U(\lambda)+\tau}] = 12(1-\lambda)\tau.
\end{equation}
It follows from \eqref{E:fglam} that $[f_{\lambda},g_{U(\lambda)}] = 0$, and
if $\lambda < 1$ and $\mu < U(\lambda)$, then $[f_{\lambda},g_{\mu}] < 0$.
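These facts about $T$, $U$ and the pairing are easily machine-checked;
a minimal sketch, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

z, lam, mu, tau = sp.symbols('z lam mu tau')

T = lambda w: (1 - w) / (1 + 3*w)
U = lambda w: -(1 + 3*w) / (3 - 3*w)
print(sp.simplify(T(T(z)) - z), sp.simplify(U(U(z)) - z))  # 0 0

# (E:apo) as a function of (lam, mu), checked against (E:fglam)
ip = 4*(1 + 3*lam + 3*mu - 3*lam*mu)
print(sp.simplify(ip.subs(mu, U(lam) + tau) - 12*(1 - lam)*tau))  # 0
\end{verbatim}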
It is easy to see directly from \eqref{E:flam} that $f_{\lambda}$ is psd iff $\lambda
\in [-\frac 13,\infty)$, and pd iff $\lambda \in (-\frac 13,\infty)$, and
from (P3) that, if $B \in \mathcal B_{2,4}$, then
\begin{equation}
f_{\lambda} \in B \iff f_{T(\lambda)} \in B.
\end{equation}
By (P1), if $-\frac 13 < \lambda \le \frac 13$, then $f_{\lambda}
\in B$ implies that $f_{\mu} \in B$ for $\mu \in [\lambda,T(\lambda)]$.
It is classically known that a ``general'' binary quartic can be put
into the shape $f_{\lambda}$ for some $\lambda$ after an invertible linear
transformation. However there is no guarantee that the coefficients of
the transformation are real, and the result is not universal: $x^4
\not \sim f_{\lambda}$. The following first appeared in \cite[Thm.6]{PR}.
\begin{proposition}
If $p \in P_{2,4}$ is pd, then $p \sim f_{\lambda}$ for some $\lambda \in
(-\frac 13,\frac 13]$.
\end{proposition}
\begin{proof}
Suppose first $p = g^2$. Then $g$ is pd, so $g
\sim x^2+y^2$ and $p \sim f_{\frac 13}$.
If $p$ is not a perfect square, then it is a product of two pd
quadratic forms; we may assume that $p(x,y) = (x^2+y^2)q(x,y)$, with
\begin{equation}
q(x,y) = ax^2 + 2bxy +cy^2.
\end{equation}
A ``rotation of axes'' fixes $x^2+y^2$ and takes $q$ into
$d x^2 + ey^2$ with $d,e > 0$, $d \neq e$, so $p
\sim(x^2+y^2)(dx^2+ey^2)$. Now,
$(x,y) \mapsto (d^{-1/4}x,e^{-1/4}y)$ gives $p
\sim f_{\mu}$, where $\mu = \frac 16(\gamma + \gamma^{-1}) > \frac 13$ for $\gamma =
\sqrt{d/e}\neq 1$. Thus, $p \sim f_{T(\mu)}$ where $T(\mu) \in
(-\frac 13,\frac 13)$.
\end{proof}
We need some results from classical algebraic geometry.
Suppose
\begin{equation}
p(x,y) = \sum_{k=0}^4\binom 4k a_k(p) x^{4-k}y^k.
\end{equation}
The two ``fundamental invariants'' of $p$ are
\begin{equation}
\begin{gathered}
I(p) = a_0(p)a_4(p) - 4a_1(p)a_3(p)+3a_2(p)^2, \\
J(p) = \det
\begin{pmatrix}
a_0(p) & a_1(p) & a_2(p) \\
a_1(p) & a_2(p) & a_3(p)\\
a_2(p)& a_3(p) & a_4(p)
\end{pmatrix}.
\end{gathered}
\end{equation}
(Note $J(p)$ is the determinant of the catalecticant matrix $H_p$.)
We have $I(f_{\lambda}) = 1 + 3\lambda^2$ and $J(f_{\lambda})= \lambda - \lambda^3$, but
$I(x^4)=J(x^4)=0$.
It follows from Prop.\!\! 5.2 that if $p$ is pd, then $I(p) > 0$.
It is easily
checked that if $q(x,y) = p(ax + by,cx+dy)$, then
\begin{equation}\label{E:inv}
I(q) = (ad-bc)^4 I(p), \qquad J(q) = (ad - bc)^6 J(p).
\end{equation}
Let
\begin{equation}\label{E:inv2}
K(p) := \frac {J(p)}{I(p)^{3/2}}.
\end{equation}
It follows from \eqref{E:inv} and \eqref{E:inv2} that, if $p \sim q$,
then $K(q) = K(p)$.
In particular,
\begin{equation}
p \sim f_{\lambda} \implies K(p) = K(f_{\lambda}) = \phi(\lambda):=
\frac{\lambda-\lambda^3}{(1+3\lambda^2)^{3/2}}.
\end{equation}
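The invariants of $f_{\lambda}$ are quickly recomputed; a minimal
sketch, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x**4 + 6*lam*x**2*y**2 + y**4

a = [sp.Poly(f, x, y).coeff_monomial(x**(4-k)*y**k) / sp.binomial(4, k)
     for k in range(5)]
I = a[0]*a[4] - 4*a[1]*a[3] + 3*a[2]**2
J = sp.Matrix(3, 3, lambda i, j: a[i + j]).det()
print(sp.simplify(I - (1 + 3*lam**2)), sp.simplify(J - (lam - lam**3)))
# 0 0
\end{verbatim}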
\begin{lemma}
If $p$ is pd, then $p \sim f_{\lambda}$, where $\lambda$ is the unique
solution in $(-\frac 13, \frac 13]$ to $K(p) = \phi(\lambda)$. If $p \in
Z_{2,4}$, then $K(p) = \phi(-\frac 13)$.
\end{lemma}
\begin{proof}
By Proposition 5.2, $p \sim f_{\lambda}$ for some $\lambda \in (-\frac 13,
\frac 13]$. A routine computation shows that $\phi'(\lambda) =
(1-9\lambda^2)(1+3\lambda^2)^{-5/2}$ is positive on $(-\frac 13, \frac 13)$,
hence $\phi$ is strictly increasing. By Lemma 5.1, if $p \in Z_{2,4}$,
then $p \sim q$, where $q(x,y) = dx^4 + 6e x^2y^2$ for some $e > 0$.
Since $I(q) =3e^2$ and $J(q) = -e^3$, $K(p) = K(q) = -3^{-3/2} = \phi(-\tfrac 13)$.
\end{proof}
\begin{theorem}
Suppose $r,s \in [-\frac 13,0]$, and suppose $1+3r+3s-3rs = 0$; that
is, $s = U(r)$. If $ p
\in [[f_r]]$ and $q \in [[f_s]]$, then $[p,q] \ge 0$.
\end{theorem}
\begin{proof}
Suppose $p = f_r \circ M_1$ and $q = f_s \circ M_2$. Then
\begin{equation}
[p,q] = [f_r\circ M_1,f_s \circ M_2] = [f_r, f_s \circ M_2M_1^t],
\end{equation}
hence it suffices to show that for all $a,b,c,d$,
\begin{equation}
\Psi(a,b,c,d;r,s):= [f_r(x,y),f_s(ax+by,cx+dy)] \ge 0.
\end{equation}
A calculation shows that
\begin{equation}
\begin{gathered}
\Psi(a,b,c,d;r,s) = a^4+b^4+c^4+d^4 + \\ 6r(a^2b^2 + c^2d^2) +
6s(a^2c^2+b^2d^2) + 6rs(a^2d^2 + 4abcd + b^2c^2).
\end{gathered}
\end{equation}
When $s = U(r)$, a sos expression can be found:
\begin{equation}
\begin{gathered}
2(1-r)\Psi(a,b,c,d;r,U(r)) = (1+r)(1+3r)(a^2+b^2-c^2-d^2)^2 \\
-4 r (a^2 + c^2 - b^2 - d^2)^2 + (1 + r) (1 -
3 r) (a^2 + d^2 - b^2 - c^2)^2 \\ - 8 r (1 + 3 r) (a b + c d)^2,
\end{gathered}
\end{equation}
which is non-negative when $r \in [-\frac 13, 0]$.
Note that $\Psi(1,1,1,-1;r,U(r)) = 0$, reaffirming that $[f_r,g_{U(r)}] = 0$.
\end{proof}
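The sos expression for $2(1-r)\Psi(a,b,c,d;r,U(r))$ can be confirmed
symbolically; a minimal sketch, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

a, b, c, d, r = sp.symbols('a b c d r')
s = -(1 + 3*r) / (3 - 3*r)              # s = U(r)

Psi = (a**4 + b**4 + c**4 + d**4
       + 6*r*(a**2*b**2 + c**2*d**2) + 6*s*(a**2*c**2 + b**2*d**2)
       + 6*r*s*(a**2*d**2 + 4*a*b*c*d + b**2*c**2))
sos = ((1 + r)*(1 + 3*r)*(a**2 + b**2 - c**2 - d**2)**2
       - 4*r*(a**2 + c**2 - b**2 - d**2)**2
       + (1 + r)*(1 - 3*r)*(a**2 + d**2 - b**2 - c**2)**2
       - 8*r*(1 + 3*r)*(a*b + c*d)**2)
print(sp.simplify(2*(1 - r)*Psi - sos))  # 0
\end{verbatim}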
\begin{theorem}
Suppose $r,s \in [-\frac 13, 0]$. If $s \ge U(r)$, $p \in [[f_r]]$ and
$q \in [[f_s]]$, then $[p,q] \ge 0$. If $s < U(r)$, then there exist $p
\in [[f_r]]$ and $q \in [[f_s]]$ so that $[p,q] < 0$.
\end{theorem}
\begin{proof}
If $0 \ge s \ge U(r)$, then $s \in [U(r),T(U(r))]$, hence
$f_s$ is a convex combination of $f_{U(r)}$ and $f_{T(U(r))}$,
and each $f_s \circ M$ is a convex combination of $f_{U(r)}\circ
M$ and $f_{T(U(r))}\circ M$. By Theorem 5.4,
$[f_r,f_s \circ M]$ is a convex combination of non-negative numbers
and is non-negative.
If $U(r) > s \ge -\frac 13$, then $[f_r,g_s] < 0$ by \eqref{E:fglam}.
\end{proof}
We now have the tools to analyze $B \in \mathcal B_{2,4}$.
If $Q_{2,4} \subseteq B \subseteq P_{2,4}$, let
\begin{equation}
\Delta(B) = \{ \lambda \in {\mathbb R} : f_{\lambda} \in B \}.
\end{equation}
\begin{theorem}
If $Q_{2,4} \subseteq B \subseteq P_{2,4}$ is a blender, then $\Delta(B) =
[\tau,T(\tau)]$ for some $\tau \in [-\frac 13,0]$.
\end{theorem}
\begin{proof}
By (P1) and (P2), $\Delta(B)$ is a closed interval.
We have seen that $\Delta(P_{2,4}) = [-\frac 13,\infty)$. Since $Q_{2,4} =
P_{2,4}^* = \Sigma_{2,4}^*$, by \eqref{E:cata}, $f_{\lambda} \in Q_{2,4}$ if and
only if
$\left( \begin{smallmatrix}
1 & 0 & \lambda \\
0 & \lambda & 0\\
\lambda & 0 & 1\\
\end{smallmatrix}\right)
$
is psd; that is, $\Delta(Q_{2,4}) = [0,1]$.
Otherwise, let $\tau = \inf \{ \lambda : f_{\lambda} \in B \}$. Since $Q_{2,4}
\subsetneq B \subsetneq P_{2,4}$, $\tau \in (-\frac 13, 0)$.
By (P2),
$f_{\tau} \in B$ and by (P3), $f_{T(\tau)} \in B$,
and by convexity, $f_{\nu} \in B$ for $\nu \in
[\tau,T(\tau)]$. If $\nu < \tau$, then $f_{\nu}\not \in B$ by
definition. If $\nu > T(\tau)$ and $f_{\nu} \in B$,
then $f_{T(\nu)} \in B$ and $T(\nu) < T(T(\tau)) = \tau$, a
contradiction.
\end{proof}
Now, for $\tau \in [-\frac 13,0]$, let
\begin{equation}
B_{\tau}:= \bigcup_{\tau \le \lambda \le \frac 13} [[f_\lambda]] = \{p: p \sim
f_{\lambda}, \tau \le \lambda \le \tfrac 13 \} \cup \{(\alpha
x +\beta y)^4: \alpha, \beta \in {\mathbb R} \}.
\end{equation}
\begin{theorem}
If $B \in \mathcal B_{2,4}$, then $B =
B_{\tau}$ for some $\tau \in [-\frac 13, 0]$ and $B_\tau^* = B_{U(\tau)}$.
\end{theorem}
\begin{proof}
Suppose $B$ is a blender and $Q_{2,4} \subsetneq B \subsetneq
P_{2,4}$. Then $\Delta(B) = [\tau,T(\tau)]$ by Theorem 5.6, so $B =
B_{\tau}$ by Prop.\!\! 5.2. We need to show that each such $B_{\tau}$
is a blender. Since $B_0 = Q_{2,4}$ and $B_{-\frac 13} = P_{2,4}$ are blenders,
we may assume $\tau > -\frac 13$ and all $p \in B_{\tau}$ are pd.
Clearly, (P3) holds in $B_{\tau}$.
Suppose $p_m \in B_{\tau}$ and
$p_m \to p$. If $p$ is a 4th power, then $p \in B_{\tau}$. If $p$ is
pd, then $K(p_m) \to K(p)$ by \eqref{E:inv},
\eqref{E:inv2} and continuity. In any case, $K(p_m) \ge \phi(\tau)$,
so $K(p) \ge \phi(\tau)$ and $p \in B_{\tau}$. Finally, if $p \in Z_{2,4}$,
then $K(p_m) \ge \phi(\tau) > \phi(-\frac 13) = K(p)$ by Lemma 5.3, and this
contradiction completes the proof of (P2).
We turn to (P1). Suppose $p, q \in B_{\tau}$ and $p+q \not\in
B_{\tau}$. Since $p+q$ is pd, $p+q \sim f_{\lambda}$ for some $\lambda < \tau$,
and so there exists $M$ so that $p\circ M + q \circ M = f_{\lambda}$. But
now, \eqref{E:apo} and Theorem 5.5 give a contradiction:
\begin{equation}
0 > [f_{\lambda},g_{U(\tau)}] = [p\circ M,g_{U(\tau)}] + [q \circ M
,g_{U(\tau)}] \ge 0.
\end{equation}
Thus, $p+q \in B_{\tau}$ and (P1) is satisfied, showing that
$B_{\tau}$ is a blender. It follows from Prop.\!\! 2.7 and Theorem 5.5
that $B_{\tau}^* = B_{\nu}$ for some $\nu$. But by Theorem 5.5,
$B_{U(\tau)} \subseteq B_{\tau}^*$ and if $\lambda < U(\tau)$, then
$f_{\lambda} \notin B_{\tau}^*$, thus $B_{\tau}^* = B_{U(\tau)}$.
\end{proof}
A computation shows that $\phi^2(\lambda) + \phi^2(U(\lambda)) = \frac 1{27}$,
and this gives an alternate way of describing the dual
cones. Regrettably, this result was garbled in \cite[p.141]{Re1}
into the statement that
$B_{\tau}^* = B_{\nu}$, where $\tau^2 + \nu^2 = \frac 19$. The
self-dual blender $B_{\nu_0} =B_{\nu_0}^*$ occurs for $\nu_0 = 1 -
\sqrt{4/3}$. We know of no other interesting properties of $B_{\nu_0}$.
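The identity $\phi^2(\lambda) + \phi^2(U(\lambda)) = \frac 1{27}$ is
also a one-line machine check; a minimal sketch, assuming Python with
sympy:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lam')
phi = lambda w: (w - w**3) / (1 + 3*w**2)**sp.Rational(3, 2)
U = lambda w: -(1 + 3*w) / (3 - 3*w)
print(sp.simplify(phi(lam)**2 + phi(U(lam))**2 - sp.Rational(1, 27)))
# 0
\end{verbatim}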
\section{$K_{2,2r}$: binary convex forms}
The author's Ph.D. thesis, submitted in 1976 and
published as \cite{Re0,Re00} in 1978 and 1979,
discussed $N_{n,2r}$. (The identification of
$N_{n,2r}$ and $K_{n,2r}$ was not made there.)
Unbeknownst to him, V. I. Dmitriev had earlier worked on
similar questions at Kharkov University. In
1969, S. Krein, Dmitriev's advisor, had asked about the extreme elements of
$K_{2,2r}$. Dmitriev wrote \cite{D1} in 1973 and \cite{D2} in 1991.
Dmitriev writes in \cite{D2}: ``I am not aware of any articles on this
topic, except \cite{D1}.'' We have seen \cite{D2} both in its
original Russian and in the English translation. We have not yet seen
\cite{D1} (although UI Interlibrary Loan is still trying!), and our
comments on \cite{D1} are based on references in \cite{D2}. There are at
least two mathematicians named V. I. Dmitriev in MathSciNet; the
author of \cite{D1,D2} is
affiliated with Kursk State Technical University.
Let
\begin{equation}\label{E:qlam}
q_{\lambda}(x,y) = x^6 + 6\lambda x^5y+ 15\lambda^2 x^4y^2 + 20 \lambda^3 x^3y^3 + 15
\lambda^2 x^2y^4 + 6\lambda x y^5 + y^6.
\end{equation}
In the language of this paper, the four relevant results from
\cite{D1,Re00,D2} are these:
\begin{proposition}
\
\smallskip
\noindent (i) $K_{2,4} = Q_{2,4}$.
\noindent (ii) $Q_{2,2r} \subsetneq K_{2,2r}$ for $r \ge 3$.
\noindent (iii) The elements of $\mathcal E(K_{2,6})$ are
$[[q_{\lambda}]]$, where $0 < | \lambda | \le \frac 12$.
\noindent (iv) $Q_{3,4} \subsetneq K_{3,4}$; specifically,
$x^4+y^4+z^4+6x^2y^2+6x^2z^2+2y^2z^2 \in K_{3,4} \setminus
Q_{3,4}$.
\end{proposition}
According to \cite{D2}, \cite{D1} gave a proof of (i) and (ii) (for
even $r$); \cite{D2} gave a proof of (iii).
All four appeared in \cite{Re00}; (iii) was announced
without proof. (The results from
\cite{Re00} were in the author's thesis, except
that (iv) was proved there by an extremely long perturbation argument.)
Note that (i) and (ii) follow from Prop.\!\! 3.8 and Theorems 3.10
and 3.11. Since $P_{n,m} = \Sigma_{n,m}$ if $n = 2$ or $(n,m) = (3,4)$,
these examples are not helpful in resolving Parrilo's question about
convex forms which are not sos.
The rest of this section discusses $\partial(K_{2,2r})$, mostly for small $r$.
Let
\begin{equation}\label{E:pdef}
p(x,y) = \sum_{i=0}^{2r} \binom{2r}i a_ix^{2r-i}y^i,
\end{equation}
and define
\begin{equation}
\begin{gathered}
\Theta_p(x,y):= \sum_{m=0}^{4r-4} b_mx^{4r-4-m}y^m, \quad
\text{where} \\
b_m := \sum_{j=0}^{2r-1}
\left(\binom{2r-2}j\binom{2r-2}{m-j}
-\binom{2r-2}{j-1}\binom{2r-2}{m-j+1}\right)
a_ja_{m+2-j},
\end{gathered}
\end{equation}
with the convention that $a_i=0$ if $i < 0$ or $i > 2r$.
\begin{proposition}\cite[Prop.B]{D2}
Suppose $p \in P_{2,2r}$. Then
$p \in K_{2,2r}$ if and only if $\Theta_p \in P_{2,4r-2}$ and
$p \in \partial(K_{2,2r})$ if and only if $\Theta_p$ is psd but not pd.
\end{proposition}
\begin{proof}
A direct computation shows that
\begin{equation}
\frac{\partial^2p}{\partial x^2}\frac{\partial^2p}{\partial y^2} -
\left(\frac{\partial^2p}{\partial x\partial y}\right)^2 =
(2r)^2(2r-1)^2\Theta_p(x,y).
\end{equation}
Since $Hes(p;u,u) = 2r(2r-1)p(u) \ge 0$, the first assertion is proved.
Further, $p \in \partial(K_{2,2r})$ if and only if $Hes(p;u_0,v_0) =
0$ for some $u_0 \neq 0, v_0 \neq 0$.
\end{proof}
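The identity in the last proof can be verified for any fixed $r$; a
minimal sketch for $2r = 4$, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
a = sp.symbols('a0:5')                  # a0, ..., a4
r = 2

p = sum(sp.binomial(2*r, i)*a[i]*x**(2*r-i)*y**i for i in range(2*r+1))
minor = sp.diff(p, x, 2)*sp.diff(p, y, 2) - sp.diff(p, x, y)**2

def A(i):
    return a[i] if 0 <= i <= 2*r else 0

def b(m):
    return sum((sp.binomial(2*r-2, j)*sp.binomial(2*r-2, m-j)
                - sp.binomial(2*r-2, j-1)*sp.binomial(2*r-2, m-j+1))
               * A(j)*A(m+2-j) for j in range(2*r))

Theta = sum(b(m)*x**(4*r-4-m)*y**m for m in range(4*r-3))
print(sp.expand(minor - (2*r)**2*(2*r-1)**2*Theta))   # 0
\end{verbatim}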
Observe that
$\Theta_{(\alpha\cdot)^{2r}} =0$, and it may be checked that if $q(x,y)
= p(ax+by,cx+dy)$, then $\Theta_q(x,y) =
(ad-bc)^2\Theta_p(ax+by,cx+dy)$. Thus, if $q
\in \partial(K_{2,2r})$, we may assume that $q \sim p$,
where $\Theta_p(1,0) = 0$, so that
\begin{equation}\label{E:zero}
0 = b_0 = a_0a_2 - a_1^2;\qquad 0 = b_1 = (2r-2)(a_0a_3 - a_1a_2).
\end{equation}
We give a proof that $K_{2,4} = Q_{2,4}$, using the argument of
\cite{Re00} and, presumably, \cite{D1}.
\begin{proposition}
$K_{2,4} = Q_{2,4}$.
\end{proposition}
\begin{proof}
Suppose $q \in \mathcal E(K_{2,4})$. Then $q \in \partial(K_{2,4})$ and
$q \sim p$ where $\Theta_p$ is psd, but $\Theta_p(1,0) = 0$. If $a_0 = 0$,
then $p(1,0) = 0$, so by Prop.\!\! 4.1, $p(x,y) = a_4 y^4$ is a 4th
power. Otherwise, $a_0 > 0$, and if we write $a_1 = ra_0$, then
by \eqref{E:zero}, we have $a_2 = r^2a_0$ and $a_3 = r^3 a_0$. Write $a_4 =
r^4a_0 + s$. A computation shows that $\Theta_p(x,y) = a_0 s
y^2(x+ry)^2$, hence $s \ge 0$ and $p(x,y) = a_0(x + ry)^4 + s
y^4$. Since $Q_{2,4} \subset K_{2,4}$ and $s \ge 0$, it follows
that $p \in \mathcal E(K_{2,4})$ if and only if $s=0$. Thus any $p \in
K_{2,4}$, being a sum of extremal elements, is a sum of 4th powers.
\end{proof}
If $2r=6$, then we shall need $\Theta_p(x,y)$ in full bloom:
\begin{equation}\label{E:theta}
\begin{gathered}
\Theta_p(x,y) = (a_0a_2-a_1^2)x^8 +4(a_0a_3-a_1a_2)x^7y + (6a_0a_4 +
4a_1a_3 - 10a_2^2) x^6y^2 \\
+ 4(a_0a_5 + 4a_1a_4 - 5a_2a_3) x^5 y^3 +
(a_0a_6+14a_1a_5+5a_2a_4-20a_3^2) x^4y^4\\ + 4(a_1a_6 + 4a_2a_5 -
5a_3a_4) x^3 y^5 + (6a_2a_6 +
4a_3a_5 - 10a_4^2) x^2y^6 \\+ 4(a_3a_6-a_4a_5)xy^7 + (a_4a_6-a_5^2)y^8.
\end{gathered}
\end{equation}
\begin{lemma}
If $p \in K_{2,6}$ and $\Theta_p(x,y) = \ell^2(x,y)B_p(x,y)$, where
$\ell$ is linear and $B_p$ is a
pd sextic, then $p \notin \mathcal E(K_{2,6})$.
\end{lemma}
\begin{proof}
After a linear change, we may assume $\ell(x,y) = y$, and assume $p$
is given by \eqref{E:pdef}, so that \eqref{E:theta} holds. If
$a_0=p(1,0) = 0$, then as
in Prop.\!\! 6.3, $p(x,y) = a_6y^6$ and $\Theta_p(x,y) = 0$. Otherwise,
we again have $a_1 = ra_0$, $a_2 = r^2a_0$ and $a_3 = r^3 a_0$. A
computation shows that
\begin{equation}\label{E:bp}
\begin{gathered}
B_p(x,y) = 6a_0(a_4-r^4a_0)x^6 + 4a_0(a_5+4ra_4-5r^5a_0)x^5y \\ +
a_0(a_6+14ra_5+5r^2a_4-20r^6a_0) x^4y^2\\ + 4ra_0(a_6 + 4ra_5 -
5r^2a_4) x^3 y^3+ (6r^2a_0a_6 +
4r^3a_0a_5 - 10a_4^2) x^2y^4 \\ + 4(r^3a_0a_6-a_4a_5)xy^5 + (a_4a_6-a_5^2)y^6.
\end{gathered}
\end{equation}
Observe that if $p_{\lambda} = p + \lambda y^6$, then $a_6$ is replaced above
by $a_6 + \lambda$ and
\begin{equation}
\begin{gathered}
B_{p_{\lambda}} = B_p+ \lambda (a_0 x^4y^2 + 4r a_0 x^3y^3 + 6r^2
a_0 x^2y^4 + 4r^3 a_0 xy^5 + a_4 y^6).
\end{gathered}
\end{equation}
Since $B_p$ is pd, there exists sufficiently small $\epsilon$ so that
$B_{p_{_{\pm \epsilon}}}$ is psd, so $p_{\pm \epsilon} \in K_{2,6}$.
But then $p = \frac 12(p_{\epsilon} + p_{-\epsilon})$ is not extremal.
\end{proof}
\begin{proof}[Proof of Prop.\!\! 6.1(iii)]
By Prop.\!\! 6.2 and Lemma 6.4, we may assume that $\Theta_p = y^2B_p$ and $B_p$
is psd, but not pd. If $B_p(1,0) = 0$, then by \eqref{E:bp}, $a_4 = r^4a_0$
and $a_5 = r^5a_0$ and, as before, if $a_6 = r^6a_0 + t$, then $\Theta_p =
a_0 t\, y^4(x+ry)^4$, so $t \ge 0$ and $p \in \mathcal E(K_{2,6})$ if and
only if $t=0$, so $p$ is a 6th power.
If $B_p(e,1) = 0$, let $\tilde p(x,y) = p(x+ey,y)$; then
$\Theta_{\tilde p}(x,y) = 0$ at $(x,y) = (1,0), (0,1)$, and by dropping the
tilde, we may assume from \eqref{E:theta} that $0 = a_4a_6 - a_5^2 = a_3a_6
- a_4a_5$. Again, $a_6 = p(0,1) \ge 0$, and if $a_6=0$, then $p$ is a
6th power. Otherwise, we set $a_5 = s a_6$, so that $a_4 = s^2a_6$ and
$a_3 = s^3 a_6$; recall that $a_3 = r^3 a_0$ as well. If $s=0$, then
$a_3 = 0$, so $r=0$ and $p(x,y) = a_0x^6 + a_6y^6$, which is only
extremal if it is a 6th power. Thus $s \neq 0$, and similarly, $r \neq 0$.
Letting $t = s^{-1}$, we obtain the formulation of \cite{D2}:
\begin{equation}
p(x,y) = a_0(x^6 + 6r x^5y + 15 r^2 x^4y^2 + 20 r^3 x^3y^3 + 15 r^3t
x^2y^4 + 6 r^3t^2 xy^5 + r^3t^3 y^6)
\end{equation}
Finally, send $(x,y) \mapsto (a_0^{-1/6}x,
a_0^{-1/6}(rt)^{-1/2} y)$ and set $\lambda = \sqrt{r/t} = \sqrt{rs}$ to obtain
$q_{\lambda}$.
A calculation shows that
\begin{equation}
\begin{gathered}
\Theta_{q_{\lambda}}(x,y) = (1-\lambda^2)x^2y^2 C_{\lambda}(x,y),\quad \text{where} \\
C_{\lambda}(x,y) = 6\lambda^2(x^4+y^4) + (4\lambda +
20\lambda^3)(x^3y+xy^3) + (1+15\lambda^2+20\lambda^4)x^2y^2.
\end{gathered}
\end{equation}
Note that
\begin{equation}
\begin{gathered}
D_{\lambda}(x,y):= C_{\lambda}(x+y,x-y)= (1 + \lambda) (1 + 2 \lambda) (1 + 5 \lambda + 10
\lambda^2)x^4\\ -2
(1-\lambda^2)(1-20 \lambda^2)x^2y^2 + (1 -\lambda) (1 - 2 \lambda) (1 - 5 \lambda + 10 \lambda^2)y^4.
\end{gathered}
\end{equation}
If $\Theta_{q_{\lambda}}$ is psd, then $6\lambda^2(1-\lambda^2) \ge 0$, so $|\lambda| \le
1$. Under this assumption, it suffices to determine when $D_{\lambda}$ is
psd. Since $D_{\lambda}(1,0), D_{\lambda}(0,1) \ge 0$, $|\lambda| \le \frac 12$.
If $D_{\lambda}(x,y) = E_{\lambda}(x^2,y^2)$, then
the discriminant of $E_{\lambda}$ is
$-128\lambda^2(1-\lambda^2)(1-10\lambda^2)$, hence $D_{\lambda}$ is psd if $0 \le \lambda^2
\le \frac 1{10}$. But, if $\frac 1{20} \le \lambda^2 \le \frac 14$, then
$D_{\lambda}$ is a sum of psd monomials. Thus $D_{\lambda}$ is psd
if $|\lambda| \le \frac 12$, and hence this is also true for
$C_{\lambda}$ and thus for $\Theta_{q_{\lambda}}$, so $q_{\lambda} \in K_{2,6}$.
\end{proof}
Since $\Theta_{q_\lambda}$ has two zeros when $|\lambda| < \frac 12$, but
$\Theta_{q_{1/2}} = \frac 98 x^2y^2(x+y)^2(x^2+xy+y^2)$ has
three, one expects that the algebraic patterns for $\Theta_p$ will
be variable for $ p \in \mathcal E(K_{2,2r})$ for $r \ge 3$ and that
$\mathcal E(K_{2,2r})$ will be hard to analyze.
Note also that
\begin{equation}\label{E:even}
\begin{gathered}
q_{\lambda}(x+y,x-y) =
2(1+\lambda)(1+5\lambda+10\lambda^2)x^6 + 30(1-\lambda^2)(1+2\lambda)x^4y^2 \\ +
30(1-\lambda^2)(1-2\lambda)x^2y^4
+ 2(1-\lambda)(1-5\lambda+10\lambda^2)y^6.
\end{gathered}
\end{equation}
One of the two boundary examples is $q_{-1/2}(x+y,x-y)= x^6 + 45 x^2 y^4 + 18
y^6$, which scales to $x^6 + 15\alpha x^2 y^4 + y^6$, where $\alpha^3 =
\frac 1{12}$.
We now consider the sections of $P_{2,6}=\Sigma_{2,6}$,
$Q_{2,6}$ and $K_{2,6}$ consisting of forms
\begin{equation}
g_{A,B}(x,y) = x^6 + \binom 62 A x^4y^2 + \binom 64 B x^2y^4 + y^6,
\end{equation}
and identify $g_{A,B}$ with the point $(A,B)$ in the plane.
If $g_{A,B}$ is on the boundary of the $P_{2,6}$ section, then it
is not pd, and we may assume $(x + r y)^2\ |\ g_{A,B}$ for some $r \neq
0$. Thus, $(x-ry)^2\ |\ g_{A,B}$ as well, and
since the remaining factor must be even, the coefficients of $x^6,y^6$ force it
to be $x^2 + \frac 1{r^4} y^2$. Thus, the boundary forms for the
section of $P_{2,6}$ are
\begin{equation}
(x^2-r^2y^2)^2(x^2 + \tfrac 1{r^4}y^2) = x^6 +( \tfrac 1{r^4} -
2r^2)x^4y^2 + (r^4 - \tfrac 2{r^2})x^2y^4 + y^6.
\end{equation}
The parameterized boundary curve
\begin{equation}
(A,B) = \tfrac 1{15}( \tfrac 1{r^4} - 2r^2, r^4 - \tfrac 2{r^2})
\end{equation}
is strictly decreasing as we move from left to right, and is a
component of the curve $500(A^3+B^3) = 1875(AB)^2 + 150AB - 1$.
By \eqref{E:cata}, $g_{A,B}$ is in $Q_{2,6} = \Sigma^*_{2,6}$, iff
$\left(
\begin{smallmatrix} 1 & 0 & A & 0 \\ 0 & A & 0 & B \\ A & 0 & B & 0 \\ 0 &
B & 0 & 1
\end{smallmatrix}\right)$
is psd iff $A \ge B^2$ and $B \ge A^2$, so the section is the
familiar region between these two parabolas.
Except for the fortuitous identity \eqref{E:even},
it would have been very challenging to determine the section for $K_{2,6}$.
Scale $x$ and $y$ in \eqref{E:even} to get $g_{A,B}$: the
parameterization of the boundary is
$(\psi(\lambda),\psi(-\lambda))$, where
\begin{equation}
\psi(\lambda) = \frac
{(1-\lambda)^{2/3}(1+\lambda)^{1/3}(1+2\lambda)}{(1+5\lambda+10\lambda^2)^{2/3}(1-5\lambda+10\lambda^2)^{1/3}}.
\end{equation}
The intercepts occur when $\lambda = \pm \frac 12$ and are
$(12^{-\frac 13},0)$ and $(0,12^{-\frac 13})$. The point $(1,1)\ (\lambda = 0)$ is
smooth but of infinite curvature. The Taylor series of $\psi(\lambda)$ at
$\lambda=0$ begins $1 + \frac {16}3 \lambda^3 - 48 \lambda^4$, so locally, $x-y
\approx \frac
{32}3 \lambda^3$ and $x+y-2 \approx -96 \lambda^4$, hence
\begin{equation*}
x+y-2 \approx
-\tfrac{3^{7/3}}{2^{5/3}}(x-y)^{4/3}.
\end{equation*}
The maximum value of $\psi(\lambda)$ is
$5^{-5/3}(1565+496\sqrt{10})^{1/3} \approx 1.000905$ at $\lambda =
\frac{2\sqrt{10}-5}{15} \approx .0883$; this was asserted without
proof in \cite[p.232]{Re00}.
At this point, we punt
and present some trinomials in $\partial(K_{2,2r})$. Suppose
$1 \le v \le 2r-1$, $a,c > 0$ and suppose
\begin{equation}
h(x,y) = a x^{2r} + b x^{2r-v}y^v + c y^{2r} \in K_{2,2r}.
\end{equation}
An examination of the end terms of $\Theta_h$ shows that $v$ must be
even and $b \ge 0$. If $b=0$, then $h \in Q_{2,2r}$, so we assume $b >
0$, and wish to find the largest possible value of $b$. Calculations,
which we omit, show that if
\begin{equation}\label{E:hrk}
\begin{gathered}
h_{r,k}(x,y) := (r-k)(2(r-k)-1)^2 x^{2r}\\
+ r(2r-1)(2k-1)(2r-2k-1)x^{2r-2k}y^{2k} +
k(2k-1)^2 y^{2r},
\end{gathered}
\end{equation}
then $\Theta_{h_{r,k}}(x,y)= x^{2r-2-2k}y^{2k-2}(x^2-y^2)^2g(x,y)$,
where $g$ is a (psd) sum
of even terms with positive coefficients, and that
if $c > 0$ and $g_{r,k,c} = h_{r,k} + c x^{2r-2k}y^{2k}$,
then $\Theta_{g_{r,k,c}}(1,1) < 0$. Given $(a,c)$, there exist
$(\alpha,\beta)$ so that the coefficients of $x^{2r}$ and
$y^{2r}$ in $h_{r,k}(\alpha x, \beta y)$ are both 1, and we get the examples in
\cite[Prop.1]{Re00}. In particular,
\begin{equation}
h_{2k,k}(x,y) \sim x^{4k} + (8k-2)x^{2k}y^{2k} + y^{4k}\in \partial(K_{2,4k}).
\end{equation}
Similar methods show that
\begin{equation}
x^{6k} + (6k-1)(6k-3)x^{4k}y^{2k} + (6k-1)(6k-3)x^{2k}y^{4k} + y^{6k}
\in \partial(K_{2,6k}).
\end{equation}
We have been unable to analyze $K_{2,8}$ completely, but have found
this interesting element in $\mathcal E(K_{2,8})$:
\begin{equation}
p(x,y) = (x^2+y^2)^4 + \tfrac 8{\sqrt 7}\ x y (x^2 - y^2)(x^2+y^2)^2,
\end{equation}
for which $\Theta_p(x,y)= 3072 x^2 (x - y)^2 y^2 (x + y)^2 (x^2 + y^2)^2$.
\section{Sums of 4th powers and octics}
Hilbert's 17th Problem asks whether $p \in P_{n,2r}$ must be a sum of
squares of rational functions: does there always exist $h = h_p \in
F_{n,d}$ (for some $d$) so that $h^2p \in \Sigma_{n,2r+2d} = W_{n,(r+d,2)}$? Artin
proved that the
answer is ``yes''. (See \cite{R2,Re3}.) Becker \cite{Be} investigated the
question for higher even powers. His result implies that if $p \in
P_{2,2kr}$ and all real linear factors of $p$ (if any) occur to an exponent
which is a multiple of $2k$, then there exists $h = h_p \in F_{2,d}$
(for some $d$) so that $h^{2k}p \in W_{2,(r+d,2k)}$.
For example, by Becker's criterion, $f_{\lambda}$ (cf. \eqref{E:flam})
is a sum of
4th powers of rational functions if and only if it is pd; that is,
$\lambda \in (-\frac 13,\infty)$. As we have seen, $f_{\lambda}
\in Q_{2,4} = W_{2,(1,4)}$ if and only if $\lambda \in [0,1]$. If $\ell$ is linear and
$\ell^4f = \sum_k h_k^4 \in W_{2,(2,4)}$, then $\ell | h_k$, so if $f_{\lambda}
\notin Q_{2,4}$ and $h^4f \in W_{2,(1+d,4)}$, then $\deg h = d \ge
2$. The identity
\begin{equation}
\begin{gathered}
3(3x^4 - 4x^2y^2 + 3y^4)(x^2 + y^2)^4 \\ = 2 ( (x-y)^4 + (x+y)^4)
(x^8 + y^8) + 5x^{12} + 11x^8y^4 + 11x^4y^8 + 5y^{12}
\end{gathered}
\end{equation}
shows that $(x^2+y^2)^4f_{\lambda} \in W_{2,(3,4)}$ for $\lambda \in [-\frac
29, \frac{11}3]$, since $T(-\frac{2}9) = \frac{11}3$, cf. \eqref{E:TU}.
We know of no alternate characterization of
$W_{2,(u,4)}$, but offer the following conjecture:
\begin{conjecture}
If $p \in P_{2,4u}$, then $p \in W_{2,(u,4)}$ if and only if there
exist $f,g \in P_{2,2u}$ so that $p = f^2 + g^2$.
\end{conjecture}
It follows from \eqref{E:h22} that
the square of a psd binary form is a sum of three 4th
powers. Conjecture 7.1 thus implies that any sum of 4th powers of
polynomials is a sum of six 4th powers of polynomials.
Any sum of $s$ 4th powers will be a sum of $s$
squares of psd forms; the conjecture asserts that $p$ is a
sum of {\it two} such squares.
If $p \in W_{2,(u,4)}$, then $p \in P_{2,4u} =
\Sigma_{2,4u}$, so $p = f^2 + g^2$ for some $f,g \in F_{2,2u}$;
the conjecture says that there is a representation
in which $f$ and $g$ are themselves psd.
This seems related to a result in \cite{CLPR} about sums of 4th powers
of rational functions over real closed fields. If $p = \sum h_k^4$ and
$\ell | p$ for a linear form, then $\ell^{4t} | p$ for some $t$
and $\ell^t | h_k$, so we may assume $p$ is pd. The following is a
special case of \cite[Thm.4.12]{CLPR}, referring to
sums of 4th powers of non-homogeneous rational functions.
\begin{proposition}
Suppose $p \in {\mathbb R}[x]$ is pd. Then $p$ is a sum of 4th powers in
${\mathbb R}(x)$ if and only if there exist pd $f,g,h$ in ${\mathbb R}[x]$,
$\deg f = \deg g$, such that $h^2p = f^2 +g^2$.
\end{proposition}
It follows that a sum of 4th powers in ${\mathbb R}(x)$ is a sum of at most six
4th powers.
\begin{theorem}
Conjecture 7.1 is true for $p \in W_{2,(1,4)} = Q_{2,4}$.
\end{theorem}
\begin{proof}
We have seen that if $p \in W_{2,(1,4)}$, then $p \sim f_{\lambda}$ for
$\lambda \in [0,1]$. If $\lambda \in (\frac 13, 1]$, then $T(\lambda) \in [0,\frac
13)$, so it suffices to find a representation for $F_{\lambda}$ with $\lambda
\in [0,\frac 13]$. Such a representation is
$f_{\lambda}(x,y) = (x^2 + 3\lambda y^2)^2 + (1-9\lambda^2)(y^2)^2$.
\end{proof}
\begin{theorem}
Conjecture 7.1 is true for even symmetric octics.
\end{theorem}
It will take some work to get to the proof of Theorem 7.4.
For the rest of this section, write $W:= W_{2,(2,4)}$. We first characterize
$\partial(W^*)$.
\begin{theorem}
If $p \in \partial(W^*)$, then $p = (\alpha\cdot)^8$ or $p \sim q$, where
\begin{equation}\label{E:bdry}
q(x,y) = d_0 x^8 + 8d_1 x^7y + 28d_2 x^6y^2 + 28 d_6 x^2y^6 + 8 d_7 x
y^7 + d_8 y^8,
\end{equation}
and
\begin{equation}\label{E:disc}
(6d_2u^2 + 6d_6 w^2)(d_0 u^4
+ 4d_2u^3w + 4d_6uw^3 + d_8 w^4)-( 2d_1u^3+2d_7w^3)^2
\end{equation}
is psd.
\end{theorem}
\begin{proof}
Consider a typical element $q \in W^*$,
\begin{equation}\label{E:octdef}
q(x,y) = \sum_{k=0}^8 \binom 8k d_k x^{8-k}y^k.
\end{equation}
Then as in Prop.\!\! 3.9,
\begin{equation}\label{E:Hoct}
\begin{gathered}
H_q(u,v,w):= [q,(u x^2 + v x y + w y^2)^4] = d_0u^4 + 4 d_1 u^3 v + d_2(6 u^2 v^2
+ 4 u^3 w) \\ +
d_3( 4 u v^3 + 12 u^2 v w) + d_4(v^4 + 12 u v^2 w +
6 u^2 w^2) + d_5(4 v^3 w + 12 u v w^2) \\ + d_6(6 v^2 w^2 +
4 u w^3) + 4 d_7v w^3 + d_8 w^4
\end{gathered}
\end{equation}
is a psd ternary quartic in $u,v,w$.
If $ q \in \partial(W^*)$, then $[q,h^4] = 0$ for some non-zero
quadratic $h$. Since $\pm h \sim x^2, xy, x^2+y^2$,
it suffices by Prop.\!\! 2.6 to consider three cases: $[q,x^8]=0,
[q,x^4y^4]=0$ and $[q,(x^2+y^2)^4] = 0$. Since
\begin{equation}\label{E:h24}
420(x^2+y^2)^4 = 256(x^8+y^8) + \sum_{\pm} (x \pm \sqrt 3 y)^8 +
( \sqrt 3 x \pm y)^8,
\end{equation}
$[q,(x^2+y^2)^4] = 0$ implies that $q(1,0) = q(0,1) = q(1, \pm \sqrt
3) = q(\sqrt3 , \pm 1) = 0$; since $q$ is psd, $q=0$. (An alternate
proof derives
this result from $(x^2 + y^2)^4 \in int(Q_{2,8})$ by
\cite[Thm.8,15(ii)]{Re1}, so $(x^2+y^2)^4 \in int(W)$.)
Suppose $[q,(x^2)^4]=0$; that is,
$H_q(1,0,0) = 0$. Then
$d_0 =0$, and since $H_q$ is now at most quadratic in $u$, it follows
that $d_1=d_2 = 0$. This implies that the coefficient of $u^2$ in
$H_q$ is $12d_3 vw + 6d_4w^2$, hence $d_3=0$ and
\begin{equation}
\begin{gathered}
H_q(u,v,w) = u^2(6d_4w^2) + 2u(2d_6w^3 + 6d_5vw^2+6d_4v^2w) \\ + (d_8w^4
+ 4d_7 w^3v+6d_6w^2v^2+4d_5 wv^3+ d_4v^4).
\end{gathered}
\end{equation}
Since $H_q$ is psd if and only if its discriminant with respect to $u$ is
psd in $v,w$, and this discriminant is $-30d_4^2 v^4w^2 + $ lower terms in $v$,
$d_4=0$. Since $H_q$ cannot be linear in $u$, it follows that
$d_5=d_6=0$ and $H_q(u,v,w) = d_8w^4 + 4d_7w^3v$, which is only psd
if $d_7=0$, so that $q(x,y) = d_8y^8$ is an 8th power.
Finally, suppose $[q,x^4y^4] = 0$; that is, $H_q(0,1,0) = d_4 =
0$. Since $H_q$ is at most
quadratic in $v$, it follows that $d_3=d_5 = 0$ as well, so $q$ has the
shape \eqref{E:bdry} and
\begin{equation}
\begin{gathered}
H_q(u,v,w) = v^2(6d_2u^2 + 6d_6 w^2) \\ + 2v( 2d_1u^3+2d_7w^3) + d_0 u^4
+ 4u^3w d_2 + 4uw^3 d_6 + d_8 w^4;
\end{gathered}
\end{equation}
$H_q$ is psd if and only if its discriminant with respect to $v$,
namely \eqref{E:disc}, is psd.
\end{proof}
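The identity \eqref{E:h24} used above is easily machine-checked; a
minimal sketch, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
s3 = sp.sqrt(3)
rhs = (256*(x**8 + y**8)
       + (x + s3*y)**8 + (x - s3*y)**8
       + (s3*x + y)**8 + (s3*x - y)**8)
print(sp.expand(420*(x**2 + y**2)**4 - rhs))   # 0
\end{verbatim}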
It should be possible to characterize $\mathcal E(W^*)$, though we do
not do so here. One family of extremal elements is parameterized by
$\alpha \in {\mathbb R}$:
\begin{equation}
\omega_{\alpha}(x,y):= x^8 + 28 x^2 y^6 + 24 \alpha x y^7 + 3(1 + 2\alpha^2) y^8
\in \mathcal E(W^*).
\end{equation}
In this case,
\begin{equation}
\begin{gathered}
H_{\omega_{\alpha}}(u,v,w) = 6 v^2 w^2 + 12 \alpha v w^3 + u^4 + 4 u w^3 + (3 +
6\alpha^2)w^4 \\ = 6 (v w + \alpha w^2)^2 + (u+w)^2(u^2-2uw+3w^2)
\end{gathered}
\end{equation}
is psd; $H_{\omega_\alpha}(0,1,0) = H_{\omega_\alpha}(1,\alpha,-1) = 0$,
and $H_{\omega_\alpha}(u,v,0)=u^4$ has a 4th order zero at $(0,1,0)$.
It is unclear whether $\omega_{\alpha}$ has other interesting algebraic properties.
We now simplify matters by limiting our attention to even symmetric
octics. Let
\begin{equation}\label{E:tildeF}
\widetilde F = \{ ((A,B,C)):= A x^8 + B x^6y^2 + C x^4y^4 + B x^2y^6 +
A y^8\ : \ A,
B, C \in {\mathbb R}\}
\end{equation}
denote the cone of even symmetric octics, and let
\begin{equation}
\widetilde W = W \cap \widetilde F.
\end{equation}
Then $\widetilde W$ is no longer a blender, because (P3) fails
spectacularly. However, it is still a closed convex cone. We give
the inner product explicitly:
\begin{equation}\label{E:esoip}
p_i = ((A_i,B_i,C_i)) \implies [p_1,p_2] = A_1A_2 + \tfrac{B_1B_2}{28}
+ \tfrac{C_1C_2}{70} + \tfrac{B_1B_2}{28} + A_1A_2.
\end{equation}
Let $(\widetilde W)^* \subset \widetilde F$
denote the dual cone to $\widetilde W$. Here is a special
case of \cite[p.142]{Re1}.
\begin{theorem}
$(\widetilde W)^* = W^* \cap \widetilde F$.
\end{theorem}
\begin{proof}
Suppose $p \in \widetilde W$
and $q \in W^* \cap \widetilde F$. Then $p \in W$ and $q \in W^*$ imply
$[p,q] \ge 0$, so $q \in (\widetilde W)^*$. Suppose now that $q \in
(\widetilde W)^*$; we wish to show that $q \in W^*$. Pick $r \in W$,
and let $r_1 = r$, $r_2(x,y) = r(x,-y)$, $r_3(x,y) = r(y,x)$ and
$r_4(x,y) = r(y,-x)$. Since $q \in \widetilde F$, $[r_j,q] = [r,q]$
for $1 \le j \le 4$, and since $p = r_1+r_2+r_3+r_4 \in \widetilde W$,
$0 \le [p,q] = 4[r,q]$. Thus, $[r,q] \ge 0$ as desired.
\end{proof}
We need not completely analyze $(\widetilde W)^*$ to determine
$\widetilde W$. The following suffices.
\begin{lemma}
If $q =((1,0,0))$,
$((4,28,0))$ or $((6-4\lambda^2+3\lambda^4, 28(6-\lambda^2), 420))$, $\lambda \in
{\mathbb R}$, then $q \in W^*$.
\end{lemma}
\begin{proof}
Using the notation of \eqref{E:octdef}, suppose
\begin{equation}
q(x,y) = ((d_0,28d_2,70d_4)) = d_0x^8 + 28 d_2x^6y^2 + 70 d_4 x^4y^4 +
28 d_2 x^2y^6 + d_0 y^8.
\end{equation}
Comparison with \eqref{E:esoip} shows that
\begin{equation}\label{E:wstar}
q \in \widetilde W^* \iff ((A,B,C)) \in \widetilde W \implies 2d_0 A +
2d_2 B +d_4 C \ge 0.
\end{equation}
On the other hand, \eqref{E:Hoct} and Theorem 7.6 imply that $q \in
\widetilde W^*$ if and only if
\begin{equation}
H_q(u,v,w) = d_0(u^4+w^4) + d_2(u^2+w^2)(6v^2 + 4uw) + d_4(v^4 +
12uv^2w +6u^2w^2)
\end{equation}
is psd. If $(d_0,d_2,d_4) = (1,0,0)$, then $H_q(u,v,w) =
u^4+w^4$, which is psd,
and if $(d_0,d_2,d_4) = (4,1,0)$, then
\begin{equation}
H_q(u,v,w) = 4(u+w)^2(u^2-uw+w^2) + 6(u^2+w^2)v^2.
\end{equation}
Finally, if $(d_0,d_2,d_4) = (6-4\lambda^2+3\lambda^4, 6-\lambda^2, 6)$, then a
computation gives
\begin{equation}
\begin{gathered}
2H_q(u,v,w) = 2(6-4\lambda^2+3\lambda^4)(u^4+w^4) \\ +
2(6-\lambda^2)(u^2+w^2)(6v^2 + 4uw) +
12(v^4 + 12uv^2w +6u^2w^2) \\
= 48(u+w)^2v^2 + 4\lambda^2(u+w)^4 + 3\lambda^4(u^2-w^2)^2 \\ +
3(2v^2 + 2(u+w)^2 - \lambda^2(u^2+w^2))^2.
\end{gathered}
\end{equation}
Note that $H_q(1,\pm \lambda, -1) = 0$.
\end{proof}
An important family of elements in $\widetilde W$ is
\begin{equation}
\begin{gathered}
\psi_\lambda(x,y) : = \tfrac 12\left( (x^2 + \lambda xy - y^2)^4 + (x^2 - \lambda xy -
y^2)^4 \right)\\ =
((1,\ 6\lambda^2-4,\ \lambda^4-12\lambda^2+6))
\end{gathered}
\end{equation}
\begin{theorem}
The extremal elements of $\widetilde W$ are $x^4y^4$ and $\{\psi_{\lambda} :
\lambda \ge 0\}$. Hence $p =((A,B,C)) \in \widetilde W$ if and only if
\begin{equation}\label{E:cond}
\begin{gathered}
A = B=0,\ C \ge 0,\quad \text{or}\quad A > 0,\ B \ge - 4A,\ 36AC \ge
B^2 - 64AB - 56A^2.
\end{gathered}
\end{equation}
\end{theorem}
\begin{proof}
By Lemma 7.7 and \eqref{E:wstar}, if $p \in \widetilde W$, then $A\ge
0$, $A + 4B \ge 0$ and
\begin{equation}\label{E:hcond}
2(6 - 4\lambda^2 + 3\lambda^4)A + 2(6 - \lambda^2) B + 6C \ge 0.
\end{equation}
We have $A = p(1,0)=p(0,1)\ge 0$, and if $A=0$ and $p = \sum h_k^4$, then $xy
| h_k$, hence $p = ((0,0,C))$ with $C \ge 0$. Otherwise, assume that $A =
1$, so that \eqref{E:cond} becomes
\begin{equation}
B \ge -4, \quad C \ge \tfrac 1{36}(B^2 - 64B -56).
\end{equation}
The first inequality follows from $((4,28,0)) \in \widetilde W^*$, and
we can thus write $B =
6\alpha^2 - 4$, where $\alpha = \sqrt{\frac {B+4}6}$. Put $\lambda = \alpha$ in
\eqref{E:hcond} to obtain
\begin{equation}\label{E:parab}
C \ge \alpha^4 - 12\alpha^2 + 6 = \tfrac 1{36}(B^2 - 64B -56).
\end{equation}
Conversely, suppose $p=((A,B,C))$ satisfies \eqref{E:cond}. If $A = 0$, then
$p = c x^4y^4 \in \widetilde W$. If $A > 0$, then we can take $A=1$ and
substitute $B = 6\alpha^2 - 4$, so that, by \eqref{E:parab},
\begin{equation}
p = ((1,B,C)) = ((1,6\alpha^2-4, \alpha^4-12\alpha^2+6)) + ((0,0,\gamma)) =
\psi_{\alpha}(x,y) + \gamma x^4y^4
\end{equation}
for some $\gamma \ge 0$, hence $p \in \widetilde W$.
\end{proof}
Taking $(A,B) = (1,0)$, we obtain \eqref{E:48}. Suppose $\lambda, \mu \ge
-\frac 13$. Then Theorem 7.8 implies that (cf. \eqref{E:flam})
$f_{\lambda}(x,y)f_{\mu}(x,y) \in W$ if and only if
\begin{equation}
(17 -12\sqrt 2) (3\lambda+1) \le 3\mu+1 \le (17 + 12\sqrt 2) (3\lambda+1).
\end{equation}
There is a peculiar resonance with the example after Theorem 4.7.
\begin{proof}[Proof of Theorem 7.4]
Suppose the even symmetric octic
$((A,B,C))$ satisfies \eqref{E:cond}. If $A=0$, then $((0,0,C)) =
C(x^2y^2)^2$. Otherwise, again suppose $A=1$ and write $B = 6\alpha^2-4$, so
\begin{equation}
B = 6\alpha^2-4, \quad C = \tfrac 1{36}(B^2 - 64B -56) + T =
\alpha^4 - 12\alpha^2 + 6 + T, \quad T \ge 0.
\end{equation}
Observe that
\begin{equation}
\begin{gathered}
(x^4 + (3\alpha^2-2) x^2y^2 + y^4)^2 + (T- 8\alpha^4)(x^2y^2)^2 \\=
((1,6\alpha^2-4,9\alpha^4 - 12 \alpha^2 + 6)) + ((0,0,T-8\alpha^4)) = ((1,B,C)),
\end{gathered}
\end{equation}
so if $T \ge 8\alpha^4$, then we are done. Otherwise, $0 \le T \le 8\alpha^4$.
Finally, note that
\begin{equation}
\begin{gathered}
\tfrac12 \left( \bigl((x^2 - \sqrt{\lambda} x y - y^2)^2 + \mu x^2 y^2\bigr)^2 +
\bigl((x^2 + \sqrt{\lambda} x y - y^2)^2 + \mu x^2 y^2\bigr)^2\right)\\ = ((1,
6\lambda+2\mu-4,
6-12\lambda+\lambda^2-4\mu+2\lambda\mu+\mu^2))
\end{gathered}
\end{equation}
is a sum of two squares of psd forms if $\mu \ge 0$. One
solution to the system
\begin{equation}
\begin{gathered}
6\alpha^2-4= 6\lambda+2\mu-4, \alpha^4 - 12\alpha^2 + 6 + T =
6-12\lambda+\lambda^2-4\mu+2\lambda\mu+\mu^2
\end{gathered}
\end{equation}
is
\begin{equation}
\begin{gathered}
\lambda = \frac{ 3\alpha^2 - \sqrt{\alpha^4+T}}2, \quad
\mu = \frac{3(\sqrt{\alpha^4+T} - \alpha^2)}2.
\end{gathered}
\end{equation}
Evidently, $\mu \ge 0$; since $T \le 8\alpha^4$, $\lambda \ge 0$, so
$\sqrt{\lambda}$ is real.
\end{proof}
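The last two displayed computations can be confirmed symbolically; a
minimal sketch, assuming Python with sympy:
\begin{verbatim}
import sympy as sp

x, y, lam, mu = sp.symbols('x y lam mu', nonnegative=True)
s = sp.sqrt(lam)
P = (x**2 - s*x*y - y**2)**2 + mu*x**2*y**2
Q = (x**2 + s*x*y - y**2)**2 + mu*x**2*y**2

B = 6*lam + 2*mu - 4
C = 6 - 12*lam + lam**2 - 4*mu + 2*lam*mu + mu**2
target = x**8 + B*x**6*y**2 + C*x**4*y**4 + B*x**2*y**6 + y**8
print(sp.expand((P**2 + Q**2)/2 - target))   # 0
\end{verbatim}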
\section{Introduction and overview}
Let $F_{n,d}$ denote the vector space of real homogeneous forms
$p(x_1,\dots,x_n)$ of degree $d$.
A blender is a closed convex cone in $F_{n,d}$
that is also closed under linear changes of variable. Blenders were
introduced in \cite{Re1} to help describe several different familiar
cones of polynomials, but that memoir was mainly concerned with the
cones of psd and sos forms and their duals, and the discussion of
blenders {\it per se} was scattered (pp.\! 36-50, 119-120,
140-142). This paper is devoted to a general discussion of blenders
and their properties, as well as the extremal elements of
some particular blenders not discussed in \cite{Re1}.
We shall see that
non-trivial blenders only occur when $d = 2r$ is an even integer.
Choi and Lam \cite{CL1,CL2} named the cone of {\it psd} forms:
\begin{equation}
P_{n,2r}:= \{p \in F_{n,2r} : u \in {\mathbb R}^n \implies p(u) \ge 0\},
\end{equation}
and the cone of {\it sos} forms:
\begin{equation}
\Sigma_{n,2r}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s h_k^2,\ h_k
\in F_{n,r}\biggr\}.
\end{equation}
Other blenders of interest in \cite{Re1} are the cone of sums of $2r$-th powers:
\begin{equation}
Q_{n,2r}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s (\alpha_{k1}x_1 +
\cdots + \alpha_{kn}x_n)^{2r},\ \alpha_{kj} \in {\mathbb R}\biggr \}
\end{equation}
and the ``Waring blenders''. Suppose $r = uv$, $u,v \in {\mathbb N}$ and let:
\begin{equation}
W_{n,(u,2v)}:= \biggl\{p \in F_{n,2r} : p = \sum_{k=1}^s h_k^{2v},\ h_k
\in F_{n,u}\biggr \}.
\end{equation}
Note that $W_{n,(r,2)} = \Sigma_{n,2r}$ and $W_{n,(1,2r)} =
Q_{n,2r}$.
The Waring blenders generalize. If $d = 2r$ and $\sum_{i=1}^m u_iv_i =
r$, let
\begin{equation}
W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}}:= \biggl\{p \in F_{n,2r} : p =
\sum_{k=1}^s h_{k,1}^{2v_1}\cdots h_{k,m}^{2v_m} ,\ h_{k,i}
\in F_{n,u_i} \biggr\}.
\end{equation}
There has been recent interest in the cones of convex forms:
\begin{equation}\label{E:kn2r}
K_{n,2r}:= \{p \in F_{n,2r} : p \ \text{is convex}\}.
\end{equation}
We shall use the two equivalent definitions of ``convex'' (see
e.g. \cite[Thm.4.1,4.5]{Ro}): under the
{\it line segment} definition, $p$ is convex if for all $u, v
\in {\mathbb R}^n$ and $\lambda \in [0,1]$,
\begin{equation}
p(\lambda u + (1 - \lambda) v) \le \lambda p(u) + (1-\lambda)p(v).
\end{equation}
The {\it Hessian} definition says that if
\begin{equation}\label{E:hes}
Hes(p;u,v):= \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^2p}{\partial
x_i \partial x_j}(u) v_iv_j,
\end{equation}
then $p$ is convex provided $Hes(p;u,v) \ge 0$ for all $u, v \in
{\mathbb R}^n$. The cone $K_{n,m}$ appeared in \cite{Re1}, but as
$N_{n,m}$ (see Corollary 4.5). Pablo Parrilo asked
whether every convex form is sos; that is, is $K_{n,2r} \subseteq
\Sigma_{n,2r}$? This question
has been answered by Greg Blekherman \cite{B} in the negative: for
fixed degree $2r \ge 4$, the ``probability'' that a convex form is sos
goes to 0 as $n \to \infty$. No examples of $p \in
K_{n,2r} \setminus \Sigma_{n,2r}$ are yet known.
We now give the formal definition of blender.
Suppose $n \ge 1$ and $d \ge 0$. The index set for monomials in
$F_{n,d}$ consists of $n$-tuples of non-negative integers:
\begin{equation}
\mathcal I(n,d) = \biggl\lbrace i=(i_1,\dots,i_n): \sum\limits_{k=1}^n
i_k = d\biggr\rbrace.
\end{equation}
Write $N(n,d) = \binom {n+d-1}{n-1} = |\mathcal I(n,d)|$ and for $i
\in \mathcal I(n,d)$, let
$c(i) = \frac{d!}{i_1!\cdots i_n!}$ be the associated multinomial coefficient.
The abbreviation $u^i$ means $u_1^{i_1}\dots u_n^{i_n}$,
where $u$ may be an $n$-tuple of constants or variables.
Every $p \in F_{n,d}$ can be written as
\begin{equation}
p(x_1,\dots,x_n)=\sum_{i\in\mathcal I(n,d)} c(i)a(p;i)x^i.
\end{equation}
The identification of $p$ with the $N(n,d)$-tuple $(a(p;i))$ shows that
$F_{n,d} \approx {\mathbb R}^{N(n,d)}$ as a vector space. The topology
placed on $F_{n,d}$ is the usual one: $p_m \to p$ means that for
every $i \in \mathcal I(n,d)$, $a(p_m;i) \to a(p;i)$.
For $\alpha \in {\mathbb R}^n$, define $(\alpha\cdot)^d \in F_{n,d}$ by
\begin{equation}
(\alpha\cdot)^d(x) = \biggl(\sum_{k=1}^n \alpha_kx_k\biggr)^d =
\sum_{i\in\mathcal I(n,d)} c(i)\alpha^ix^i.
\end{equation}
If $\alpha$ is regarded as a row vector and $x$ as a column vector,
then $(\alpha \cdot)^d(x) = (\alpha x)^d$.
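For example, when $n = d = 2$,
\begin{equation}
(\alpha\cdot)^2(x) = \alpha_1^2x_1^2 + 2\alpha_1\alpha_2x_1x_2 +
\alpha_2^2x_2^2,
\end{equation}
so that $a((\alpha\cdot)^2;i) = \alpha^i$, with $c(2,0) = c(0,2) = 1$ and
$c(1,1) = 2$.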
If $M = [m_{ij}]\in Mat_n({\mathbb R})$ is a (not
necessarily invertible) real $n\times n$ matrix and $p \in F_{n,d}$, we
define $p\circ M \in F_{n,d}$ by
\begin{equation}
(p\circ M)(x_1,\dots,x_n)= p(\ell_1,\dots,\ell_n), \qquad
\ell_j(x_1,\dots,x_n) = \sum_{k=1}^nm_{jk}x_k.
\end{equation}
If $x$ is viewed as a column vector, then $(p\circ M)(x) =
p(Mx)$; $(\alpha\cdot)^d \circ M = (\alpha M \cdot)^d$.
Define $[[p]]$ to be $\{p \circ M: M \in Mat_n({\mathbb R})\}$,
the {\it closed orbit of $p$}. If $ p = q \circ M$ for {\it invertible}
$M$, we write $p \sim q$; $\sim$ is an
equivalence relation.
\begin{lemma}
\smallskip
\
\noindent (i) If $p \in F_{n,d}$ and $d$ is odd, then $p \sim \lambda p$ for every $0
\neq \lambda \in {\mathbb R}$.
\noindent (ii) If $p \in F_{n,d}$ and $d$ is even, then $p \sim \lambda p$
for every $0 < \lambda \in {\mathbb R}$.
\noindent (iii) If $u, \alpha \in {\mathbb R}^n$, then there exists a (singular)
$M$ so that $p\circ M = p(u)(\alpha\cdot)^d.$
\end{lemma}
\begin{proof}
For (i), (ii), observe that $(p \circ (cI_n)) =
c^dp$ since $p$ is homogeneous, and $cI_n$ is invertible if $c
\neq 0$. For (iii), note that if $m_{jk} = u_j\alpha_k$
for $1 \le j,k \le n$, then
\begin{equation}
\ell_j(x) = u_j(\alpha x) \implies (p\circ
M)(x_1,\dots,x_n) = (\alpha x)^dp(u_1,\dots, u_n)
\end{equation}
by homogeneity.
\end{proof}
\begin{definition}
A set $B \subseteq F_{n,d}$ is a {\it blender} if these conditions hold:
\smallskip
\noindent (P1) If $p, q \in B$, then $p+q \in B$.
\noindent (P2) If $p_m \in B$ and $p_m \to p$, then $p \in
B$.
\noindent (P3) If $p \in B$ and $M \in Mat_n({\mathbb R})$, then $p
\circ M \in B$.
\end{definition}
Thus, a blender is a closed convex cone of forms which is also
a union of closed orbits. Lemma 1.1 makes it unnecessary
to specify in (P1) that $p \in B$ and $\lambda \ge 0$ imply $\lambda p \in
B$. Let $\mathcal B_{n,d}$ denote the set of blenders in $F_{n,d}$.
Trivially, $\{0\}, F_{n,d} \in \mathcal B_{n,d}$.
It is simple to see that $P_{n,2r}$ is a blender: conditions (P1) and
(P2) can be verified pointwise and if $p(u) \ge
0$ for every $u$, then the same will be true for $p(Mu)$.
Similarly, $K_{n,2r}$ is a blender because (P1) and (P2) follow from
the Hessian definition and (P3) follows from the line segment definition.
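Likewise, for $\Sigma_{n,2r}$, (P1) is immediate and (P3) follows from the
identity
\begin{equation}
\Bigl(\sum_{k=1}^s h_k^2\Bigr)\circ M = \sum_{k=1}^s (h_k \circ M)^2;
\end{equation}
(P2), however, requires a convergence argument, and is subsumed in Theorem 3.5.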
If $B_1, B_2 \in \mathcal B_{n,d}$, then $B_1 \cap B_2 \in \mathcal
B_{n,d}$. Define the {\it Minkowski sum}
\begin{equation}\label{E:b1+b2}
B_1+B_2:= \{p_1+p_2: p_i \in B_i\}.
\end{equation}
The smallest blender containing both $B_1$ and $B_2$ must
include $B_1+B_2$; this set is a blender (Theorem 3.4(i)), but it
requires an argument to prove (P2). It is not hard to see that
$\mathcal B_{n,d}$ is not always a chain. Let $(n,d) = (2,8)$ and let $B_1
=W_{2,\{(1,6),(1,2)\}}$ and $B_2 = W_{2,\{(1,4),(1,4)\}}$. Then $x^6y^2 \in
B_1$ and $x^4y^4 \in B_2$. If $x^6y^2 \in B_2$, then
\begin{equation}
x^6y^2 = \sum_{k=1}^s(\alpha_k x + \beta_k y)^4(\gamma_k x + \delta_k y)^4.
\end{equation}
The coefficients of $x^8$ and $y^8$ show that
$\alpha_k\gamma_k = \beta_k\delta_k=0$ for all $k$, hence the only non-zero
summands are positive multiples of $x^4y^4$. Thus $x^6y^2 \not\in
B_2$, and, similarly, $x^4y^4 \not\in B_1$, so $B_1 \setminus B_2$ and $B_2
\setminus B_1$ are both non-empty. We do not know simple descriptions of
$B_1 \cap B_2$ and $B_1 + B_2$.
If $B_1 \in \mathcal B_{n,d_1}$
and $B_2 \in \mathcal B_{n,d_2}$, define
\begin{equation}\label{E:b1*b2}
B_1*B_2:= \left\{\sum_{k=1}^s p_{1,k}p_{2,k}: p_{i,k} \in B_i \right\}.
\end{equation}
Again, this is a blender (Theorem 3.4(ii)), but (P2) is not trivial to prove.
We review some standard facts about convex cones; see \cite[Ch.2,3]{Re1}
and \cite{Ro}.
If $C \subset {\mathbb R}^N$ is a closed convex cone, then $u \in C$ is {\it
extremal} if $u = v_1 + v_2, v_i \in C$, implies that $v_i = \lambda_i
u$, $\lambda_i \ge 0$. The set of extremal elements in $C$ is denoted
$\mathcal E(C)$.
All cones $C \neq 0, {\mathbb R}^N$ in this
paper have the property that $x, -x \in C$ implies $x
= 0$. In such a cone, every element in $C$ is a sum of
extremal elements. (It will follow from Prop.\! 2.4 that if $B \in \mathcal
B_{n,d}$ and $p,-p \in B$ for some $p \neq 0$, then $B = F_{n,d}$.)
As usual, $u$ is {\it interior} to $C$ if $C$ contains a
non-empty open ball centered at $u$. The set of interior points of
$C$ is denoted $int(C)$, and the boundary of $C$ is denoted
$\partial(C)$. The next definition depends on the choice of inner
product for ${\mathbb R}^N$. For a closed convex cone $C$, we define the {\it dual} cone
\begin{equation}
C^* = \{ v \in {\mathbb R}^N : [u,v] \ge 0\quad \text{for all}\quad u \in C\}.
\end{equation}
Then $C^* \subset {\mathbb R}^N$ is also a closed convex cone and $(C^*)^* =
C$.
If $u \in C$ (and $\pm x \in C$ implies $x = 0$),
then $u \in int(C)$ if and only if $[u,v]>0$ for every
$0 \neq v \in C^*$ (see e.g. \cite[p.26]{Re1}). Thus, if $u
\in \partial(C)$ (in particular, if $u$ is extremal), then there
exists $v \in C^*$, $v \neq 0$ so that $[u,v] = 0$.
This discussion applies to blenders by identifying $p \in F_{n,d}$ with
the $N(n,d)$-tuple of its coefficients. For example, $p \in int(B)$ if
there exists $\epsilon >0$ so that if $|a(q;i)| < \epsilon$ for all $i
\in \mathcal I(n,d)$, then $p +
q \in B$. If $p \sim q \in B$, then $p$ and $q$ simultaneously belong
to (or do not belong to) $int(B), \partial(B), \mathcal E(B)$.
We shall discuss in section two the natural inner product
on $F_{n,d}$. It turns out that, under this inner product, $P_{n,2r}$
and $Q_{n,2r}$ are
dual cones (Prop.\! 3.7), as are $K_{n,2r}$ and
$W_{n,\{(1,2r-2),(1,2)\}}$ (Theorem 3.10).
The description of $\mathcal E(P_{n,2r})$ is extremely difficult if $n
\ge 3$. (See e.g.\ \cite{CL1, CL2, CLRsex,CLR, Ha, ReAGI,Re4}.) Every element of
$\mathcal E(\Sigma_{n,2r})$ obviously has the form $h^2$, but not
every square is extremal; e.g.,
\begin{equation}\label{E:h22}
(x^2+y^2)^2 = (x^2-y^2)^2 + (2xy)^2 =\tfrac1{18} \left((\sqrt 3\ x +
y)^4 + (\sqrt 3\ x - y)^4 + 16y^4 \right).
\end{equation}
We now describe the contents of this paper. Section two reviews the
relevant material from \cite{Re1} regarding
the inner product and its many properties. The
principal results are that if $B \in \mathcal B_{n,d}$ and $B \neq \{0\},
F_{n,d}$, then $d=2r$ is even and $Q_{n,2r} \subset \pm B \subset
P_{n,2r}$ (Prop.\! 2.5); the dual cone to a blender is also a
blender (Prop.\! 2.7). Section three begins with a number of
preparatory lemmas, mainly involving convergence. We show that if
$B_i$ are blenders, then so are $B_1+B_2$ and $B_1*B_2$ (Theorem 3.4)
and hence the Waring blenders and their generalizations are blenders
(Theorems 3.5, 3.6). We show that $P_{n,2r}$ and $Q_{n,2r}$ are dual
and give a description of $W_{n,(u,2v)}^*$ (both from \cite{Re1}) and
show that $K_{n,2r}$ and $W_{n,\{(1,2r-2),(1,2)\}}$ are dual (Theorem
3.10). In section four, we consider $K_{n,2r}$. We show that it cannot
be decomposed non-trivially as $B_1*B_2$ (Corollary 4.2), and
that $K_{n,2r}=N_{n,2r}$ (c.f.\! \eqref{E:kn2r}, \eqref{E:nnd},
Corollary 4.5). We also show that if $p$ is positive definite, then $(\sum
x_i^2)^Np$ is convex for sufficiently large $N$ (Theorem
4.6). In section five, we show that (up to $\pm$) $\mathcal B_{2,4}$
consists of a one-parameter family of blenders $B_{\tau}$, $\tau \in
[-\frac 13, 0]$, where $\tau = \inf\{\lambda: x^4 + 6\lambda x^2y^2 + y^4 \in
B_{\tau}\}$, increasing from $Q_{2,4}=B_0$ to $P_{2,4}=B_{-\frac 13}$,
and that $B_{\tau}^* = B_{U(\tau)}$, where $U(\tau) =
-\frac{1+3\tau}{3-3\tau}$ (Theorem 5.7). In
section six, we review the results on $K_{2,4}$ and $K_{2,6}$ from
\cite{D1,D2,Re00} by Dmitriev and the author, and give some new
examples in $\partial(K_{2,2r})$. The
full analysis of $\mathcal E(K_{2,2r})$ seems intractable for $r \ge
4$. Finally, in section seven, we look at sums of 4th
powers of binary forms. Conjecture 7.1 states that $p \in W_{2,(u,4)}$
if and only if $p = f^2 + g^2$, where $f,g \in P_{2,2u}$. We show that
this is true for $u=1$ and for even symmetric octics $p$ (Theorems
7.3, 7.4). Our classification of even symmetric octics implies that
\begin{equation}\label{E:48}
x^8 + \alpha x^4y^4 + y^8 \in W_{2,(2,4)} \iff \alpha \ge - \tfrac {14}9.
\end{equation}
I would like to thank the organizers of BIRS
10w5007, Convex Algebraic Geometry, held at Banff in February, 2010,
for the opportunity to speak. I would also like to thank my fellow
participants for many stimulating conversations. Sections four and six were
particularly influenced by this meeting. I also thank Greg Blekherman for
very helpful email discussions. Special thanks to Kathy Danner and the
Interlibrary Loan Staff of the University of Illinois Library for
their persistence in retrieving copies of the original papers of
V. I. Dmitriev and to Peter Kuchment for
trying to contact Dmitriev for me. I also thank the referee, who gave a very
thoughtful and helpful report.
Finally, I thank the editors of this volume for the opportunity to
contribute to this memorial volume in memory of Prof. Borcea.
\section{The inner product}
For $p$ and $q$ in $F_{n,d}$, we define an inner product with deep
roots in 19th century algebraic geometry and analysis. Let
\begin{equation}\label{E:ip}
[p,q] = \sum_{i \in \mathcal I(n,d)} c(i)a(p;i)a(q;i).
\end{equation}
This is the usual Euclidean inner product, if $p \leftrightarrow
(c(i)^{1/2}a(p;i)) \in {\mathbb R}^N$. The many properties of this inner
product (see Props.\! 2.1, 2.6 and 2.9) strongly suggest that this
is the ``correct'' inner product for $F_{n,d}$. We present without
proof the following observations about the inner product.
\begin{proposition}\cite[pp.2,3]{Re1}
\
\noindent (i) $[p,q] = [q,p]$.
\noindent (ii) $j \in \mathcal I(n,d) \implies [p,x^j] = a(p;j)$.
\noindent (iii) $\alpha \in {\mathbb R}^n \implies [p,(\alpha\cdot)^d] = p(\alpha)$.
\noindent (iv) If $p_m \to p$, then $[p_m,q] \to [p,q]$ for every $q \in
F_{n,d}$.
\noindent (v) In particular, taking $q = (u \cdot)^d$, $p_m \to p \implies
p_m(u) \to p(u)$ for all $u \in {\mathbb R}^n$.
\end{proposition}
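As a small worked instance of (ii) and (iii), let $p(x_1,x_2) =
x_1^2x_2^2 \in F_{2,4}$, so that $a(p;(2,2)) = \frac 16$; then
\begin{equation}
[p,(\alpha\cdot)^4] = c(2,2)\cdot\tfrac 16\cdot\alpha_1^2\alpha_2^2 =
\alpha_1^2\alpha_2^2 = p(\alpha).
\end{equation}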
The orthogonal complement of a subspace $U$ of $F_{n,d}$,
\begin{equation}
U^\perp = \{ v \in F_{n,d}: [u,v] = 0\quad \text{for all}\quad u \in U\},
\end{equation}
is also a subspace of $F_{n,d}$ and $(U^\perp)^\perp = U$.
The following result is widely known and has been proved frequently over the
last century; see e.g.\ \cite[p.30]{Re1}.
\begin{proposition}\cite[p.93]{Re1}
Suppose $S \subset {\mathbb R}^n$ has non-empty interior. Then
$F_{n,d}$ is spanned by $\{(\alpha\cdot)^d: \alpha \in S \}$.
\end{proposition}
\begin{proof}
Let $U$ be the subspace of $F_{n,d}$ spanned by $\{(\alpha\cdot)^d:
\alpha \in S \}$ and suppose
$q \in U^{\perp}$. Then $0 = [q,(\alpha\cdot)^d] = q(\alpha)$ for all $\alpha \in S$.
Since $q$ is a form which vanishes on an open set, $q = 0$.
Thus, $U^{\perp} = \{0\}$, so $U = (U^\perp)^\perp = \{0\}^\perp = F_{n,d}$.
\end{proof}
\begin{proposition}[Biermann's Theorem]\cite[p.31]{Re1}
The set $\{(i \cdot)^d : i \in \mathcal I(n,d)\}$ is a basis for
$F_{n,d}$.
\end{proposition}
\begin{proof}
It suffices to construct
a dual basis $\{g_j : j \in \mathcal I(n,d)\} \subset F_{n,d}$ of
$N(n,d)$ forms
satisfying $[g_j,(i \cdot)^d] = 0$ if $j \neq i$ and $[g_j,(j
\cdot)^d] > 0$. Let
\begin{equation}\label{E:bier}
g_j(x_1,\dots,x_n) = \prod_{k=1}^n \prod_{\ell =0}^{j_k-1} (d x_k - \ell(x_1 +
\cdots + x_n)).
\end{equation}
Each $g_j$ is a product of $\sum_k j_k = d$ linear factors, so $g_j
\in F_{n,d}$. The $(k,\ell)$ factor in \eqref{E:bier} vanishes
at any $x = i \in \mathcal I(n,d)$ for which $i_k = \ell$. Thus,
$[g_j,(i \cdot)^d] = g_j(i) = 0$ if $i_k \le j_k-1$ for any $k$. Since
$\sum_k i_k = \sum_k j_k$, it follows that $g_j(i) = 0$ if $j \neq i$.
A computation shows that $g_j(j) = d^d\prod_k (j_k !) = d^d d!/c(j)$.
\end{proof}
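To illustrate \eqref{E:bier}, take $n = d = 2$: the dual basis is
\begin{equation}
g_{(2,0)} = 2x_1(x_1 - x_2), \qquad g_{(1,1)} = 4x_1x_2, \qquad
g_{(0,2)} = 2x_2(x_2 - x_1),
\end{equation}
and, for example, $g_{(1,1)}$ vanishes at $(2,0)$ and $(0,2)$, while
$g_{(1,1)}(1,1) = 4 = 2^2\,2!/c(1,1)$.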
Prop.\! 2.3 implies Prop.\! 2.2 directly, by finding an affine copy of
$\mathcal I(n,d)$ in $S$.
\begin{proposition}\cite[p.141]{Re1}
Suppose $B \in \mathcal B_{n,d}$ and there are forms $p,q \in
B$ and points $u,v \in {\mathbb R}^n$ so that $p(u) > 0 > q(v)$. Then $B = F_{n,d}$.
\end{proposition}
\begin{proof}
By Lemma 1.1(iii), $\pm(\alpha\cdot)^d \in
B$ for $\alpha \in {\mathbb R}^n$, so by Prop.\! 2.2, $F_{n,d} \subseteq B$.
\end{proof}
This is the argument Ellison used in \cite[p.667]{E} to show that
every form in $ F_{n,u(2v+1)}$ is a sum of $(2v+1)$-st powers of forms
of degree $u$.
For $B \in \mathcal B_{n,d}$, let $-B = \{ -h: h \in B\}$; it is easy
to check that $-B \in \mathcal B_{n,d}$.
Since $Q_{n,2} = P_{n,2}$, the following proposition shows that
there are no ``interesting'' blenders of quadratic forms.
\begin{proposition}\cite[p.141]{Re1}
If $B \neq \{0\}, F_{n,d}$ is a blender, then $d=2r$ is even
and for a suitable choice of sign, $Q_{n,2r} \subseteq \pm B
\subseteq P_{n,2r}$.
\end{proposition}
\begin{proof}
If $B \neq \{0\}$, then there exists $p \in B$ and
$a \in {\mathbb R}^n$ so that $p(a) \neq 0$. If $d$ is odd, then $p(-a) =
-p(a)$, and by Prop.\! 2.4, $B = F_{n,d}$. If $d$ is
even, by taking $-B$ if necessary, we may assume that $p(a)
\ge 0$. Thus, if $B \neq F_{n,2r}$, then $\pm B \subseteq
P_{n,2r}$. On the other hand,
Lemma 1.1 and (P1) imply that $Q_{n,2r} \subseteq \pm B$.
\end{proof}
The inner product has a useful contravariant property.
\begin{proposition} \cite[p.32]{Re1}
Suppose $p$, $q\in F_{n,d}$ and $M \in Mat_n({\mathbb R})$. Then
\begin{equation}\label{E:contra}
[p\circ M,q]=[p,q\circ M^t].
\end{equation}
\end{proposition}
\begin{proof}
By Prop.\! 2.2, it suffices to prove \eqref{E:contra}
for $d$-th powers; note that $[p \circ M,q]
= [(\alpha M\cdot)^d,(\beta\cdot)^d] = (\alpha M \beta^t)^d =
(\alpha(\beta M^t)^t)^d = [(\alpha \cdot)^d, (\beta M^t\cdot)^d] = [p, q \circ M^t]$.
\end{proof}
\begin{proposition}\cite[p.46]{Re1}
If $B$ is a blender, then so is its dual cone $B^*$.
\end{proposition}
\begin{proof}
The dual of a closed convex cone is a closed convex cone, so
(P1) and (P2) are clear. Suppose $p \in B, q \in
B^*$ and $M \in Mat_n({\mathbb R})$. Since $p\circ M^t \in
B$, we have
\begin{equation}
[p, q\circ M] = [q \circ M , p] = [q, p\circ M^t] = [p\circ M^t,q] \ge 0,
\end{equation}
and so $q \circ M \in B^*$. This verifies (P3).
\end{proof}
For $i\in\mathcal I(n,d)$, let $D^i = \prod (\frac
{\partial}{\partial x_k})^{i_k}$; let
$f(D) = \sum c(i)a(f;i)D^i$ be the $d$-th order differential operator
associated to $f \in F_{n,d}$. Since $\frac {\partial}{\partial x_k}$ and\
$\frac {\partial}{\partial x_\ell}$ commute, $D^iD^j = D^{i+j} =
D^jD^i$ for any $i \in \mathcal I(n,d)$ and $j\in\mathcal I(n,e)$. By
multilinearity, $(fg)(D) = f(D)g(D) = g(D)f(D)$ for forms $f$ and $g$
of any degree.
\begin{proposition}\cite[p.183]{Re2}
If $i, j \in \mathcal I(n,d)$ and $i \neq j$, then $D^i(x^j) =
0$ and $D^i x^i = \prod_k (i_k)! = d!/c(i)$.
\end{proposition}
\begin{proof}
We have
\begin{equation}
D^i(x^j) = \prod_{k=1}^n \biggl(\frac {\partial^{^{i_k}}}{\partial x_k^{i_k}}\biggr)
\prod_{k=1}^n x_k^{j_k} =
\prod_{k=1}^n \frac {\partial^{^{i_k}} (x_k^{j_k})}{\partial x_k^{i_k}}.
\end{equation}
If $i_k > j_k$, then the $k$-th factor above is zero. If $i \neq j$,
then this will happen for at least one $k$. Otherwise, $i=j$, and the
$k$-th factor is $i_k!$.
\end{proof}
We now connect the inner product with differential operators.
\begin{proposition}\cite[p.184]{Re2}
\smallskip
\noindent (i) If $p, q \in F_{n,d}$, then $p(D)q = q(D)p = d![p,q]$.
\noindent (ii) If $p, hf \in F_{n,d}$, where $f \in F_{n,k}$ and $h \in
F_{n,d-k}$,
then
\begin{equation}
d![p,hf] = (d-k)![h,f(D)p].
\end{equation}
\end{proposition}
\begin{proof}
For (i), we have by Prop.\! 2.8:
\begin{equation}
\begin{gathered}
p(D)q = \sum_{i \in \mathcal I(n,d)} c(i)a(p;i)D^i \biggl(\sum_{j \in
\mathcal I(n,d)} c(j)a(q;j)x^j\biggr) =
\\ \sum_{i \in \mathcal I(n,d)} \sum_{j \in \mathcal I(n,d)}
c(i)c(j)a(p;i)a(q;j)D^ix^j
= \sum_{i \in \mathcal I(n,d)} c(i)c(i)a(p;i)a(q;i)D^ix^i
\\ = \sum_{i \in \mathcal I(n,d)} c(i)^2a(p;i)a(q;i) \frac {d!}{c(i)} =
d![p,q] = d![q,p] = q(D)p.
\end{gathered}
\end{equation}
\noindent (ii) Two applications of (i) give
\begin{equation}
d![p,hf] = (hf)(D)p = h(D)f(D)p = h(D)(f(D)p) = (d-k)![h,f(D)p].
\end{equation}
\end{proof}
\begin{corollary}
If $p \in F_{n,2r}$, then $Hes(p;u,v) = 2r(2r-1)[p,(u\cdot)^{2r-2}(v\cdot)^2]$.
\end{corollary}
\begin{proof}
Apply Prop.\! 2.9 with $h = (u\cdot)^{2r-2}$, $f = (v\cdot)^2$, $d=2r$
and $k=2$. We have
\begin{equation}
f(x_1,\dots,x_n) = (v_1x_1 + \cdots + v_nx_n)^2 \implies
f(D) = \sum_{i=1}^n \sum_{j=1}^n v_iv_j
\frac{\partial^2}{\partial x_i \partial x_j},
\end{equation}
so that $[h,f(D)p] = Hes(p;u,v)$ by \eqref{E:hes} and Prop.\! 2.1(iii).
\end{proof}
\section{Convergence and duality}
Throughout this section $S$ will denote
the (solid) unit ball in ${\mathbb R}^n$. (The referee generously suggested a more
general and much more geometric approach to the results of the first
part of this section, using the fact that if $C$ is a compact
convex set not containing 0, then the conical hull of $C$ is closed,
and a consideration of the behavior of bases of convex cones under
Cartesian products.)
\begin{lemma}
For $i \in \mathcal I(n,d)$, there exists $R_{n,d}(i) > 0$ so that
if $p \in F_{n,d}$, then
$|a(p;i)| \le R_{n,d}(i)\cdot\sup\{|p(x)|: x \in S\}$.
\end{lemma}
\begin{proof}
By Prop.\! 2.2, there exist $\alpha_k \in S$, so that for every
$i \in \mathcal I(n,d)$, we have
\begin{equation}\label{E:bound}
x^i = \sum_{k=1}^{N(n,d)} \lambda_k(i) (\alpha_k\cdot)^d
\end{equation}
for some $\lambda_k(i) \in {\mathbb R}$.
Taking the inner product of \eqref{E:bound} with $p$, we find that
\begin{equation}
a(p;i) = [p,x^i] = \sum_{k=1}^{N(n,d)} \lambda_k(i) [p, (\alpha_k\cdot)^d ] =
\sum_{k=1}^{N(n,d)} \lambda_k(i) p(\alpha_k).
\end{equation}
Now set $R_{n,d}(i) = \sum_k |\lambda_k(i)|$.
\end{proof}
We define a norm on $F_{n,d}$ by
\begin{equation}
||p||^2 = p(D)p = d! [p,p] = d!\sum_{i \in \mathcal I(n,d)} c(i) a(p;i)^2.
\end{equation}
This norm satisfies a remarkable inequality due to
Beauzamy, Bombieri, Enflo and Montgomery \cite{BBEM} (see
\cite{Re1.5} for this formulation): if $p \in F_{n,d_1}$ and $q \in
F_{n,d_2}$, then
\begin{equation}\label{E:BBEM}
||pq||\ \ge ||p|| \cdot ||q||.
\end{equation}
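(As a quick check: if $p = q = x^2+y^2$, then $||p||^2 = 2!\,(1+1) = 4$,
while $p^2 = x^4+2x^2y^2+y^4$ gives $||p^2||^2 = 4!\,(1 + 6\cdot\tfrac 19
+ 1) = 64$, and indeed $8 \ge 2\cdot 2$.)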
Given a sequence $(p_m) \in F_{n,d}$, the statement that
$(|a(p_m;i)|)$ is uniformly bounded for all $(i,m)$ is equivalent to the
statement that $(||p_m||)$ is bounded.
\begin{lemma}
Suppose $(p_{m,r}) \subset F_{n,d}$, $1 \le r \le N$, and suppose
that for all $(m,r)$, $|p_{m,r}(u)| \le M$ for $u \in S$. Then there
exist $p_r \in F_{n,d}$ and a common subsequence $m_k\to\infty$
so that $p_{m_k,r} \to p_r$ for each $r$.
\end{lemma}
\begin{proof}
Identify $p_{m,r}$ with the vector $(a(p_{m,r};i)) \in
{\mathbb R}^{N(n,d)}$; these are uniformly bounded by Lemma 3.1.
Concatenate them to form a vector $v_m \in {\mathbb R}^{N*N(n,d)}$.
By Bolzano-Weierstrass, there is a convergent
subsequence $(v_{m_k})$. The corresponding subsequences of forms are then
convergent.
\end{proof}
We state without proof a direct implementation of
Carath\'eodory's Theorem (see
e.g.\ \cite[p.27]{Re1}). It is worth noting that in 1888 (when
Carath\'eodory was 15), Hilbert \cite{Hi} used this
argument with $N(3,6) = 28$ to show that $\Sigma_{3,6}$ is
closed.
\begin{proposition}[Carath\'eodory's Theorem]
If $r > N(n,d)$, and $h_k \in F_{n,d}$, then there exist $\lambda_k
\ge 0$ so that
\begin{equation}
\sum_{k=1}^r h_k = \sum_{k=1}^{N(n,d)} \lambda_k h_{n_k}.
\end{equation}
\end{proposition}
We use these lemmas to show that if $B_1$ and $B_2$ are
blenders, then so are $B_1+B_2$ (c.f. \eqref{E:b1+b2}) and $B_1*B_2$
(c.f. \eqref{E:b1*b2}). We may assume $B_i \neq 0$.
\begin{theorem}
\
\smallskip
\noindent (i) If $B_i \in \mathcal B_{n,2r}$, then $B_1 + B_2 \in
\mathcal B_{n,2r}$.
\noindent (ii) If $B_i \in \mathcal B_{n,2r_i}$ and $r=r_1+r_2$,
then $B_1*B_2 \in \mathcal B_{n,2r}$.
\end{theorem}
\begin{proof}
In each case, (P1) is automatic, and
since $(p_1+p_2) \circ M = p_1 \circ M + p_2 \circ M$ and $(p_1p_2)
\circ M = (p_1 \circ M)( p_2 \circ M)$, (P3) is verified. The issue is
(P2).
Suppose $B_i \in \mathcal B_{n,2r}$ have opposite ``sign'', say
$B_1 \subset P_{n,2r}$ and $B_2 \subset -P_{n,2r}$. Then Prop.\! 2.4
implies that $B_1 + B_2 = F_{n,2r}$. Otherwise, we may assume
that $B_i \subset P_{n,2r_i}$.
Suppose $p_{i,m} \in B_i$ and $p_{1,m} + p_{2,m} = p_m\to
p$. If $\sup\{p(u) : u \in S\} = T$, then for $m \ge m_0$,
$\sup\{p_m(u) : u \in S\} \le T+1$, and since $p_{i,m}$ is psd, it
follows that $\sup\{p_{i,m}(u) : u \in S\} \le T+1$ as well. By
Lemma 3.2, there is a common subsequence so that $p_{i,m_k} \to p_i
\in B_i$, hence $p = \lim p_{m_k} = p_1+p_2 \in B_1 + B_2$.
The proof for products is more complicated; the example
$(mp_1)(m^{-1}p_2) = p_1p_2$ shows that the factors might need to be normalized.
By taking $\pm B_i$,
assume $B_i \subset P_{n,2r_i}$. Suppose first that $p_{i,m} \in B_i$ and
$p_{1,m}p_{2,m} \to p \in P_{n,2r_1+2r_2}$. If $p=0$, then $p \in B_1
* B_2$. Otherwise, assume that
$p_{i,m} \neq 0$. Let $\lambda_m = (||p_{1,m}||/||p_{2,m}||)^{1/2}$, $q_{1,m} = \lambda_m^{-1}
p_{1,m}$ and $q_{2,m}= \lambda_m p_{2,m}$. Then $q_{i,m} \in B_i$,
$q_{1,m}q_{2,m} \to p$ and $||q_{1,m}|| = ||q_{2,m}||$. It follows
from \eqref{E:BBEM} that $\limsup ||q_{i,m}|| \le ||p||^{1/2}$, hence
the $q_{i,m}$'s have bounded norm and again, there exists $m_k$
so that $q_{i,m_k} \to q_i \in B_i$ and $p = q_1q_2$.
By Prop.\! 3.3, a sum such as \eqref{E:b1*b2} can be compressed
into one in which
$s \le N(n,2r)$. Write
\begin{equation}
p_m = \sum_{k=1}^{N(n,2r)} p_{1,k,m}p_{2,k,m},\qquad p_{i,k,m} \in B_i,
\end{equation}
and suppose $p_m \to p$. Since $p$ is bounded on $S$, so is $(p_m)$,
and since each $p_{i,k,m}$ is psd, it follows that the
sequence $(p_{1,k,m}p_{2,k,m})$ is bounded on $S$, and hence by
Lemma 3.2, a subsequence of $(p_{1,k,m}p_{2,k,m}) \to p_k$ for some $p_k \in
P_{n,2r}$; without loss of generality, we may drop the subscripts as
assume that $(p_{1,k,m}p_{2,k,m}) \to p_k$. We now apply the argument
of the previous paragraph to complete the proof.
\end{proof}
The following theorem was announced without proof in \cite[p.47]{Re1}.
\begin{theorem}
If $uv = r$, then $W_{n,(u,2v)}$ is a blender.
\end{theorem}
\begin{proof}
As we have seen, (P1) and (P3) are immediate. Suppose $p_m \in
W_{n,(u,2v)}$ and $p_m \to p$. Prop.\! 3.3 says that we can write
\begin{equation}
p_m = \sum_{k=1}^{N(n,2r)}h_{k,m}^{2v}, \qquad h_{k,m} \in F_{n,u}.
\end{equation}
As before, $p$ is bounded on $S$, so the $p_m$'s
are bounded, hence so are the sequences $(h_{k,m}^{2v})$ and
$(|h_{k,m}|) = ((h_{k,m}^{2v})^{1/(2v)})$. Thus, there is a common
convergent subsequence so that
$(h_{k,m_{\ell}}) \to h_k$, hence $(h_{k,m_{\ell}}^{2v}) \to
h_k^{2v}$ and $p \in W_{n,(u,2v)}$.
\end{proof}
In particular, $\Sigma_{n,2r}$ and $Q_{n,2r}$ are blenders; see \cite[p.46]{Re1}.
\begin{theorem}
If $\sum_iu_iv_i = r$, then
$W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}} \in \mathcal B_{n,2r}$.
\end{theorem}
\begin{proof}
Note that $W_{n,\{(u_1,2v_1),\dots, (u_m,2v_m)\}} = W_{n,(u_1,2v_1)} * \cdots
* W_{n,(u_m,2v_m)}$.
\end{proof}
\begin{proposition}\cite[p.38]{Re1}
$P_{n,2r}$ and $Q_{n,2r}$ are dual blenders.
\end{proposition}
\begin{proof}
We have $p \in Q_{n,2r}^*$ if and only if $p \in F_{n,2r}$ and $\lambda_k
\ge 0$ and $\alpha_k \in {\mathbb R}^n$ imply
\begin{equation}
0 \le \left[ p, \sum_{k=1}^s \lambda_k (\alpha_k\cdot)^{2r} \right] =
\sum_{k=1}^s \lambda_k p(\alpha_k).
\end{equation}
This holds if and only if $p(\alpha) \ge 0$ for $\alpha \in {\mathbb R}^n$;
that is, if and only if $p \in P_{n,2r}$.
\end{proof}
It was a commonplace by the time of \cite{Hi} that
$P_{n,2r} = \Sigma_{n,2r}$ when $n=2$ or $2r=2$. Hilbert proved there that
$P_{3,4} = \Sigma_{3,4}$ and that strict inclusion is true for other
$(n,2r)$ (see \cite{Re3}.)
We say that $p \in P_{n,2r}$ is
{\it positive definite} or {\it pd} if $p(u) = 0$ only for $u=0$.
It follows that $p \in int(P_{n,2r})$ if and only if $p$ is pd.
Blenders are cousins of orbitopes. An {\it orbitope}
is the convex hull of an orbit of a compact algebraic group $G$ acting
linearly on a real vector space; see \cite[p.1]{SSS}. The key
differences from blenders are that it is a single orbit, and that $G$ is
compact. One object which is both a blender and an orbitope is
$Q_{n,2r}$, which is named $\mathcal V_{n,2r}$ (and called the {\it
Veronese orbitope}) in \cite{SSS}.
\begin{proposition}\cite[p.47]{Re1}
Given $p \in F_{n,2uv}$, define the form $H_p(t) \in F_{N(n,u),2v}$, in
variables $\{t(\ell)\}$ indexed by $\{\ell \in \mathcal I(n,u)\}$, by
\begin{equation}\label{E:wardual}
H_p(\{t(\ell_j)\}) = \sum_{\ell_1 \in \mathcal I(n,u)}\cdots
\sum_{\ell_{2v} \in \mathcal I(n,u)} a(p;\ell_1 + \cdots + \ell_{2v})t(\ell_1)\cdots
t(\ell_{2v}).
\end{equation}
Then $p \in W_{n,(u,2v)}^*$ if and only if $H_p \in P_{N(n,u),2v}$.
\end{proposition}
\begin{proof}
We have $p \in W_{n,(u,2v)}^*$ if and only if, for every form $g \in
F_{n,u}$, $[p,g^{2v}] \ge 0$. Writing
$g \in F_{n,u}$ with coefficients $\{t(\ell): \ell \in \mathcal I(n,u)\}$,
we have:
\begin{equation}
\begin{gathered}
g(x) = \sum_{\ell \in \mathcal I(n,u)} t(\ell) x^\ell \implies \\
g^{2v}(x) = \sum_{\ell_1 \in \mathcal I(n,u)}\cdots\sum_{\ell_{2v} \in \mathcal I(n,u)}
t(\ell_1)\cdots t(\ell_{2v}) x^{\ell_1 + \cdots+ \ell_{2v}}.
\end{gathered}
\end{equation}
It follows from \eqref{E:ip} and \eqref{E:wardual} that $[p,g^{2v}] =
H_p(t(\ell))$.
\end{proof}
If $u=1$, then $\mathcal I(n,1) = \{e_i\}$ and, on writing $t(e_i) =
y_i$, $H_p(y_1,\dots,y_n) = p(y)$; i.e., $Q_{n,2r}^* = P_{n,2r}$.
If $v=1$, then $H_p$ becomes the classical catalecticant and
\begin{equation}\label{E:cata}
p \in \Sigma_{n,2r}^*
\iff H_p(t) = \sum_{i \in \mathcal I(n,r)}\sum_{j \in \mathcal I(n,r)}
a(p;i+j)t(i)t(j)\ \text{ is psd}.
\end{equation}
This shows that $\Sigma_{n,2r}$ is a spectrahedron (see \cite[p.27]{SSS}).
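Concretely, for a binary quartic written as $p = \sum_{k=0}^4 \binom 4k
a_k x^{4-k}y^k$, \eqref{E:cata} says that $p \in \Sigma_{2,4}^*$ if and
only if the $3\times 3$ matrix
\begin{equation}
H_p = \begin{pmatrix} a_0 & a_1 & a_2\\ a_1 & a_2 & a_3\\ a_2 & a_3 &
a_4\end{pmatrix}
\end{equation}
is psd; this matrix reappears in section five in the definition of $J(p)$.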
\begin{theorem}
If $\sum v_i = r$, then $W_{2,\{(1,2v_1),\dots,(1,2v_m)\}} = P_{2,2r}$
if and only if $m=r$ and $v_i=1$.
\end{theorem}
\begin{proof}
If $p \in P_{2,2r} = \Sigma_{2,2r}$, then $p = f_1^2 + f_2^2$, where $f_i
\in F_{2,r}$. Factor $\pm f_i$ into a product of linear and pd
quadratic factors (themselves a sum of two squares):
\begin{equation}
f_i = \prod_j \ell_{1,j} \prod_k (\ell_{2,k}^2 + \ell_{3,k}^2).
\end{equation}
Then, using \eqref{E:h22} and expanding the product below, we see that
\begin{equation}
f_i^2 = \prod_j \ell_{1,j}^2 \prod_k \bigl((\ell_{2,k}^2 -
\ell_{3,k}^2)^2 + (2\ell_{2,k}\ell_{3,k})^2 \bigr) \in W_{2,\{(1,2),\dots,(1,2)\}}.
\end{equation}
The converse inclusion follows from Prop.\! 2.5.
Suppose $m < r$ and suppose
\begin{equation}
\prod_{\ell=1}^r (x - \ell y)^2 = \sum_{k=1}^s h_{k,1}^{2v_1}\cdots
h_{k,m}^{2v_m} ,\quad h_{k,i}(x,y) = \alpha_{k,i}x + \beta_{k,i}y \in F_{2,1}.
\end{equation}
Then for each $k$, we have
\begin{equation}
\prod_{\ell=1}^r (x - \ell y)\ \bigg \vert \ \prod_{i=1}^m (\alpha_{k,i}x + \beta_{k,i}y);
\end{equation}
since $m < r$, the right-hand side is 0, and we have a contradiction.
\end{proof}
Finally, we have a simple expression for $K_{n,2r}^*$ which is
implicit in \cite{B}.
\begin{theorem}
$K_{n,2r}$ and $W_{n,\{(1,2r-2),(1,2)\}}$ are dual blenders.
\end{theorem}
\begin{proof}
By Corollary 2.10 and the Hessian definition, $p$ is convex if and
only if $0 \le Hes(p;u,v) = 2r(2r-1)[p, (u\cdot)^{2r-2}(v\cdot )^2]$
for all $u,v \in {\mathbb R}^n$.
\end{proof}
It follows from Theorems 3.9 and 3.10 that
$K_{2,4}^* = W_{2,\{(1,2),(1,2)\}} = P_{2,4}$, so $K_{2,4} =
Q_{2,4}$. For $r \ge 3$, $K_{2,2r}^* = W_{2,\{(1,2r-2),(1,2)\}}
\subsetneq P_{2,2r}$, so $K_{2,2r} \supsetneq Q_{2,2r}$. We
return to this topic in section six.
\section{$K_{n,2r}$: convex forms}
In this section, we prove some general results for $K_{n,2r}$.
Since $p \in K_{n,2r}$ if and only if $Hes(p;u,v)$ is psd and
$Hes(p;u,u) = 2r(2r-1)p(u)$, we get an alternative proof that
$K_{n,2r} \subseteq P_{n,2r}$. We also know from Theorem 3.10 that $p
\in int(K_{n,2r})$ if and only if $[p,q] > 0$ for $0 \neq q \in
W_{n,\{(1,2r-2),(1,2)\}}$; accordingly, $int(K_{n,2r})$ is the set of $p
\in K_{n,2r}$ so that $Hes(p;u,v)$ is positive definite as a
bihomogeneous form in the variables $u \in {\mathbb R}^n$ and $v \in
{\mathbb R}^n$. Equivalently, $p \in K_{n,2r}$ is in $\partial(K_{n,2r})$ if and only if
there exist $u_0\neq 0, v_0 \neq 0$ such that $Hes(p;u_0,v_0) = 0$.
Although psd and sos are preserved under homogenization and
dehomogenization, this is not true for convexity. For example, $t^2-1$
is a convex polynomial which cannot be homogenized to a convex form,
because it is not definite. As a pd polynomial in one variable,
$t^4 + 12 t^2 + 1$ is convex, but if $p(x,y) = x^4 +
12x^2y^2 + y^4$, then $Hes(p;(1,1),(v_1,v_2)) = 36v_1^2 +
96v_1v_2 + 36v_2^2$ is not psd, so $p$ is not convex.
\begin{proposition}
If $p \in K_{n,2r}$, then there is a pd form $q$ in $\le n$ variables
and $\bar p \sim p$ such that $\bar p(x) = q(x_k,\dots, x_n)$.
\end{proposition}
\begin{proof}
If $p$ is pd, there is nothing to prove. Otherwise, we can assume that
$p \sim \bar p$, where $\bar p$ is convex and
$\bar p(e_1) = 0$. We shall show that $\bar p = \bar
p(x_2,\dots,x_n)$. Repeated application of this argument then proves the result.
Suppose otherwise that $x_1$ appears in a term of $\bar p$ and let
$m \ge 1$ be the largest such power of $x_1$; write
the associated terms in $\bar p$ as $x_1^mh(x_2,\dots,x_n)$. After
an additional invertible linear change involving $(x_2,\dots,x_n)$,
we may assume that
one of these terms is $x_1^mx_2^{2r-m}$. We then have
\begin{equation}
\bar p(x_1,x_2,0,\dots,0) = x_1^m x_2^{2r-m} + \text{lower order terms in $x_1$}
\end{equation}
which implies that
\begin{equation}\label{E:binhess}
\begin{gathered}
\frac{\partial^2 \bar p}{\partial x_1^2}\frac{\partial^2 \bar
p}{\partial x_2^2}
- \left(\frac{\partial^2 \bar p}{\partial x_1\partial x_2}\right)^2 = \\
-(2r-1)m(2r-m) x_1^{2m-2}x_2^{4r-2m-2} + \text{lower order terms in $x_1$}.
\end{gathered}
\end{equation}
Since $r \ge 1$ and $1 \le m \le 2r-1$, \eqref{E:binhess} cannot be
psd, and this contradiction shows that $x_1$ does not occur in $\bar p$.
\end{proof}
\begin{corollary}
There do not exist $B_i \in \mathcal B_{n,2r_i}$, $r_i \ge 1$, so that
$K_{n,2r_1+2r_2} = B_1 * B_2$.
\end{corollary}
\begin{proof}
It follows from Prop.\! 2.5 that $x_i^{2r_i} \in B_i$, hence
$x_1^{2r_1}x_2^{2r_2} \in B_1*B_2$, but by Prop.\! 4.1, this form
is not convex.
\end{proof}
The next theorem connects $K_{n,2r}$ with the blender $N_{n,2r}$ defined
in \cite[p.119-120]{Re1}. Let $E = \langle e_1,\dots,e_n\rangle$ be a real
$n$-dimensional vector space. We say that $f$ is a {\it
norm-function} on $E$ if, after defining
\begin{equation}\label{E:nf}
||x_1e_1+ \dots + x_ne_n|| = f(x_1,\dots,x_n),
\end{equation}
the pair $(E,||\cdot||)$ is a Banach space. Let
\begin{equation}\label{E:nnd}
N_{n,d}:= \{p \in F_{n,d}: p^{1/d} \text{ is a norm function} \}.
\end{equation}
A necessary condition is that $f = p^{1/d} \ge 0$, hence $d=2r$ is
even and $p \in P_{n,2r}$.
For example, if $p(x) = \sum_k x_k^2$, then \eqref{E:nf} with $f=p^{1/2}$
gives ${\mathbb R}^n$ with the Euclidean norm. If $(E,||\cdot||)$ is isometric to a
subspace of some $L_{2r}(X,\mu)$, then $f^{2r} \in Q_{n,2r}$.
The following theorem was proved in the author's thesis; see \cite{Re0,Re00}.
\begin{proposition}\cite[Thm.1]{Re00}
If $p \in P_{n,2r}$, then $p \in N_{n,2r}$ if and only if for all
$u,v \in {\mathbb R}^n$, $p(u_1+tv_1,\dots,u_n+tv_n)^{1/(2r)}$ is a convex function of $t$.
\end{proposition}
It is not obvious that $N_{n,2r}$ is a blender, but in fact,
$N_{n,2r} = K_{n,2r}$! The connection is a proposition whose
provenance is unclear. It appears in Rockafellar
\cite[Cor.15.3.1]{Ro}, where it is attributed to Lorch \cite{Lo}, although the
derivation is not transparent.
V. I. Dmitriev (see section 6) attributes the result to an
observation by his advisor S. G. Krein in 1969. Note below that $q$ is
{\it not} homogeneous.
\begin{proposition}
Suppose $p \in P_{n,2r}$ and $p(1,0,\dots,0) > 0$. Let
\begin{equation}
q(x_2,\dots,x_n) = p(1,x_2,\dots,x_n).
\end{equation}
Then $p \in K_{n,2r}$ if and only if $q^{1/(2r)}(x_2,\dots,x_n)$ is
convex.
\end{proposition}
\begin{corollary}
$K_{n,2r} = N_{n,2r}$.
\end{corollary}
\begin{proof}[Proof of Prop.\! 4.4]
A function is convex if and only if it is convex when restricted
to all two-dimensional subspaces.
Consider all $a \in {\mathbb R}^n$ with $a_1 = 1$. Suppose we can show that
$Hes(p;a,u)$ is psd in $u$ if and only if $q^{1/(2r)}$ is
convex at $(a_2,\dots,a_n)$. By homogeneity, this
occurs if and only if $Hes(p;a,u)$ is psd in $u$ for every $a$ with $a_1
\neq 0$ and by continuity, this holds if and only
if $Hes(p;a,u)$ is psd for all $a,u$. Thus, it suffices to
set $a_1 = 1$ and prove the equivalence pointwise.
Fix $(a_2,\dots,a_n)$ and let
\begin{equation}
\begin{gathered}
\tilde p(x_1,x_2,\dots, x_n) = p(x_1, x_2 + a_2 x_1, \dots, x_n + a_nx_1),\\
\tilde q(x_2,\dots,x_n) = \tilde p(1,x_2,\dots, x_n) = q(x_2+a_2,\dots,x_n+a_n)
\end{gathered}
\end{equation}
Then $p$ and $q^{1/(2r)}$ are convex at $a$ and $(a_2,\dots,a_n)$ if
and only if
$\tilde p$ and
$\tilde q$ are convex at $e_1$ and 0, and we can drop the tildes and
assume that
$a_k = 0$ for $k \ge 2$, so $a = e_1$.
Since it suffices to look at all two-dimensional
subspaces containing $e_1$, we may assume it is
$\{(x_1,x_2,0,\dots,0)\}$, after another change of variables.
Suppose now that
\begin{equation}
h(x_1,x_2) = p(x_1,x_2,0,\dots,0) = a_0 x_1^{2r} + \binom {2r}1 a_1
x_1^{2r-1}x_2 + \dots.
\end{equation}
Then
\begin{equation}
Hes(h;(1,0),(v_1,v_2)) = 2r(2r-1)(a_0 v_1^2 + 2a_1 v_1v_2 + a_2 v_2^2),
\end{equation}
and since $a_0 = p(e_1) > 0$, this is psd if and only if $a_0a_2 \ge a_1^2$. On
the other hand,
\begin{equation}
q(t) = p(1,t) = a_0 + \binom {2r}1 a_1 t + \binom {2r}2 a_2 t^2 + \dots
\end{equation}
and a routine computation shows that
\begin{equation}
(q^{(1/(2r))})''(0) = (2r-1)a_0^{-2+1/(2r)}(a_0a_2 - a_1^2).
\end{equation}
Thus the two conditions hold simultaneously.
\end{proof}
A more complicated proof computes the Hessian of $p$ and uses the Euler
PDE ($2rp = \sum x_i \frac{\partial p}{\partial x_i}$ and
$(2r-1)\frac{\partial p}{\partial x_i} = \sum x_j\frac{\partial^2
p}{\partial x_i\partial x_j} $) to replace partials involving $x_1$
with partials involving $x_j$, $j \ge 2$.
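As a pointwise illustration of Prop.\! 4.4 (anticipating the notation
of section five), let $p = x^4 + 6\lambda x^2y^2 + y^4$, so that at $a =
e_1$ we have $a_0 = 1$, $a_1 = 0$ and $a_2 = \lambda$. The condition
$a_0a_2 \ge a_1^2$ from the proof becomes $\lambda \ge 0$, a necessary
condition for convexity which is consistent with $K_{2,4} = Q_{2,4}$ and
$\Delta(Q_{2,4}) = [0,1]$ in section five.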
We next prove a peculiar result which implies that
every pd form is, in a computable way, the restriction of a convex
form on $S^{n-1}$.
\begin{theorem}
Suppose $p \in P_{n,2r}$ is pd, and let $p_N:= (\sum_j x_j^2)^N p$.
Then there exists $N$ so that $p_N \in K_{n,2r+2N}$.
\end{theorem}
\begin{proof}
Since $p$ is pd, it is bounded away from 0 on $S^{n-1}$ and so there are
uniform upper bounds $T$ for $ |p(x)^{-1}\nabla_u(p)(x)|$ and $U$
for $|p(x)^{-1} \nabla^2_u(p)(x)|$, for $x, u \in
S^{n-1}$. Since $\sum x_i^2$ is rotation-invariant, once again it suffices to
show that $p_N$ is convex at $(1,0,\dots,0)$, given $x_3 = \cdots = x_n
= 0$. We claim that if $N > (T^2 + U)/2$, then $p_N$
is convex. By Prop.\! 4.4, it suffices to show that
$p^{1/(2N+2r)}_N(1,t,0,\dots,0)$ is convex at $t=0$.
Writing down the relevant Taylor series, this becomes
\begin{equation}
(1 + t^2)^{N/(2N+2r)} (1 + \alpha t + \tfrac 12 \beta t^2 + \dots )^{1/(2N+2r)},
\end{equation}
where $|\alpha| \le T$ and $|\beta|\le U$. By expanding the product, a
standard computation shows that the second derivative at $t=0$ is
\begin{equation}
\frac {N}{N+r} + \frac 1{2N+2r}\cdot b - \frac
{2N+2r-1}{(2N+2r)^2}\cdot a^2
\ge \frac 1{2N+2r} \left(2N - U - T^2 \right) \ge 0.
\end{equation}
\end{proof}
Greg Blekherman pointed out to the author's chagrin in Banff that
Theorem 4.6
follows from \cite[Thm.3.12]{R2}: if $p$ is pd, then there exists
computable $N$
so that $p_N \in Q_{n,2r+2N}$. This was used in \cite{R2} to show that
$p_N\in \Sigma_{n,2r+2N}$; it also implies that $p_N \in K_{n,2r+2N}$.
The proof of \cite[Thm.3.12]{R2} is much less elementary.
We conclude this section with a computational illustration of Theorem 4.6.
If $a \ge 0$, then $x^2 + a y^2$ is convex, but if $r \ge 1$ and
$(x^2 + y^2)^r(x^2 + a y^2) \in K_{2,2r+2}$ for all $a>0$, then by (P2),
$x^2(x^2 + y^2)^r$ would be convex, violating Prop.\! 4.1.
\begin{theorem}
\begin{equation}
(x^2 + y^2)^r(x^2 + a y^2) \in K_{2,2r+2} \iff a + 1/a \le 8r + 18 + 8/r.
\end{equation}
\end{theorem}
\begin{proof}
Let $p(x,y) = (x^2 + y^2)^r(x^2 + a y^2)$. Then
$\frac{\partial^2p}{\partial x^2}\frac{\partial^2p}{\partial y^2} -
(\frac{\partial^2p}{\partial x\partial y})^2$ equals
\begin{equation}
\begin{gathered}
4(2r+1)(x^2+y^2)^{2r-2} q(x,y), \quad \text{where} \quad
q(x,y) = \\ (1 + r) (a + r)x^4 + ( 2 a - r + 6 a r - a^2 r + 2 a
r^2)x^2y^2 + a (1 + r) (1 + a r)y^4.
\end{gathered}
\end{equation}
Another computation shows that
\begin{equation}\label{E:mess}
\begin{gathered}
4(1+r)(a+r)q(x,y)\\ = (2(1+r)(a+r)x^2 + ( 2 a - r + 6 a r - a^2 r + 2 a
r^2)y^2)^2 \\ + a r^2(a-1)^2\bigl((8r + 18 + 8/r)-(a+1/a)\bigr)y^4.
\end{gathered}
\end{equation}
If $a + 1/a \le 8r + 18 + 8/r$, then \eqref{E:mess} shows that $q$
is psd. Suppose $a + 1/a > 8r + 18 + 8/r$. Observe that
$ 2 a - r + 6 a r - a^2 r + 2 ar^2 \ge
0$ if and only if $(a + 1/a) \le 2r + 6 + 2/r$, so in this case, $ 2
a - r + 6 a r - a^2 r + 2 ar^2 < 0$ and we can choose $(x,y) = (x_0,y_0) \neq
(0,0)$ to make the first square in \eqref{E:mess} equal to zero. It then
follows that $4(1+r)(a+r)q(x_0,y_0)< 0$.
\end{proof}
In particular, $(x^2 + y^2)(x^2 + a y^2) \in K_{2,4} \iff
17 - 12 \sqrt 2 \le a \le 17 + 12 \sqrt 2$.
\section{$\mathcal B_{2,4}$: binary quartic blenders}
In view of Prop.\! 2.5, the simplest non-trivial opportunity to
classify blenders comes with the binary quartics. Throughout this section, we
choose a sign for $\pm B \in \mathcal B_{2,4}$ and assume that $B
\subset P_{2,4}$. We shall show that $\mathcal B_{2,4}$
is a one-parameter nested family of blenders increasing from $Q_{2,4}$ to
$P_{2,4}$. Let $Z_{2,4}$ denote the set of $p \in
P_{2,4}$ which are neither pd nor a 4th power; if $p \in Z_{2,4}$,
then $p = \ell^2 h$, where $\ell$ is linear and $h$ is a psd quadratic
form relatively prime to $\ell$.
\begin{lemma}
If $B \in \mathcal B_{2,4}$ and $0 \neq p \in B \cap Z_{2,4}$, then $B =P_{2,4}$.
\end{lemma}
\begin{proof}
We have $p \sim q$, where
$q(x,y) = x^2(ax^2 + 2bxy + cy^2) \in B$, $ac - b^2\ge 0$ and $c > 0$. But
\begin{equation}
x^2(ax^2 + 2bxy + cy^2) =
x^2\bigl( \bigl(\tfrac{ac - b^2}c\bigr) x^2 + c\bigl(\tfrac bc x + y
\bigr)^2\bigr) \sim x^2(d x^2 + c y^2),
\end{equation}
and $d \ge 0$. Next, $(x,y) \mapsto (\epsilon x, \epsilon^{-1}y)$
shows that $\epsilon^2 dx^4 + c x^2y^2 \in B$, so $x^2y^2 \in B$ by (P2) and
$\ell_1^2\ell_2^2 \in B$ by (P3). Thus, $W_{2,\{(1,2), (1,2)\}} = P_{2,4}
\subseteq B$ by Theorem 3.9.
\end{proof}
This lemma illustrates one difference between
blenders and orbitopes. If $G = SO(2)$ and $p(x,y) = x^2(x^2+y^2)$,
then the convex hull of the image of $p$ under $G$ will
be cvx$(\{(\cos t x + \sin t y)^2(x^2+y^2)\})$, which contains no 4th powers.
Two important families of binary quartics are:
\begin{equation}\label{E:flam}
f_{\lambda}(x,y) := x^4 + 6\lambda x^2y^2 + y^4;
\end{equation}
\begin{equation}
g_{\lambda}(x,y):= f_{\lambda}(x+y,x-y) = (2+6\lambda)x^4 + (12 - 12\lambda)x^2y^2 + (2+6\lambda)y^4.
\end{equation}
We shall need two special fractional linear transformations. Let
\begin{equation}\label{E:TU}
T(z):= \frac{1-z}{1+3z}, \qquad U(z) := - \frac{1+3z}{3-3z}.
\end{equation}
Thus, $g_{\lambda} = (2+6\lambda)f_{T(\lambda)}$, hence for
$\lambda \neq -\frac 13$, $f_{\lambda} \sim f_{T(\lambda)}$. Note that $T(T(z))
= z$, $T(0) = 1$, $T(\frac 13) = \frac 13$, and
$T(-\frac 13) = \infty$ (corresponding to $(x^2-y^2)^2 \sim x^2y^2$);
$T$ gives a 1-1
decreasing map between $[\frac 13,\infty)$ and $(-\frac 13,\frac 13]$.
A calculation shows that
\begin{equation}\label{E:apo}
[f_{\lambda},g_{\mu}] = (2+6\mu) + \lambda(12-12\mu) + (2+6\mu) =
4(1+3\lambda+3\mu-3\lambda\mu).
\end{equation}
Note that $U(U(z)) = z$, $U(0) = -\tfrac 13$, $U$ gives a
1-1 decreasing map from $[-\frac 13,0]$ to itself, and
\begin{equation}\label{E:fglam}
[f_{\lambda},g_{(U(\lambda)+\tau)}] = 12(1-\lambda)\tau.
\end{equation}
It follows from \eqref{E:fglam} that $[f_{\lambda},g_{U(\lambda)}] = 0$;
if $\lambda < 1$ and $\mu < U(\lambda)$, then $[f_{\lambda},g_{\mu}] < 0$.
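For the record, \eqref{E:fglam} is immediate from \eqref{E:apo}: since
$3U(\lambda)(1-\lambda) = -(1+3\lambda)$, we have
\begin{equation}
[f_{\lambda},g_{U(\lambda)+\tau}] = 4\bigl(1+3\lambda +
3(U(\lambda)+\tau)(1-\lambda)\bigr) = 12(1-\lambda)\tau.
\end{equation}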
It is easy to see directly from \eqref{E:flam} that $f_{\lambda}$ is psd
if and only if $\lambda
\in [-\frac 13,\infty)$, and pd if and only if $\lambda \in (-\frac 13,\infty)$, and
from (P3) that, if $B \in \mathcal B_{2,4}$, then
\begin{equation}
f_{\lambda} \in B \iff f_{T(\lambda)} \in B.
\end{equation}
By (P1), if $-\frac 13 < \lambda \le \frac 13$, then $f_{\lambda}
\in B$ implies that $f_{\mu} \in B$ for $\mu \in [\lambda,T(\lambda)]$.
Classically, a ``general'' real binary quartic can be put
into the shape $f_{\lambda}$ after an invertible linear
transformation. However, the coefficients of the transformation
might not be real, and there are singularities: $x^4
\not \sim f_{\lambda}$. The following result is \cite[Thm.6]{PR}.
\begin{proposition}
If $p \in P_{2,4}$ is pd, then $p \sim f_{\lambda}$ for some $\lambda \in
(-\frac 13,\frac 13]$.
\end{proposition}
\begin{proof}
Suppose first $p = g^2$. Then $g$ is pd, so $g
\sim x^2+y^2$ and $p \sim f_{\frac 13}$.
If $p$ is not a perfect square, then it is a product of two pd
quadratic forms; we may assume that $p(x,y) = (x^2+y^2)q(x,y)$, with
\begin{equation}
q(x,y) = ax^2 + 2bxy +cy^2.
\end{equation}
A ``rotation of axes'' fixes $x^2+y^2$ and takes $q$ into
$d x^2 + ey^2$ with $d,e > 0$, $d \neq e$, so $p
\sim(x^2+y^2)(dx^2+ey^2)$. Now,
$(x,y) \mapsto (d^{-1/4}x,e^{-1/4}y)$ gives $p
\sim f_{\mu}$, where $\mu = \frac 16(\gamma + \gamma^{-1}) > \frac 13$ for $\gamma =
\sqrt{d/e}\neq 1$. Thus, $p \sim f_{T(\mu)}$ where $T(\mu) \in
(-\frac 13,\frac 13)$.
\end{proof}
We need some results from classical algebraic geometry.
Suppose
\begin{equation}
p(x,y) = \sum_{k=0}^4\binom 4k a_k(p) x^{4-k}y^k.
\end{equation}
The two ``fundamental invariants'' of $p$ are
\begin{equation}
\begin{gathered}
I(p) = a_0(p)a_4(p) - 4a_1(p)a_3(p)+3a_2(p)^2, \\
J(p) = \det
\begin{pmatrix}
a_0(p) & a_1(p) & a_2(p) \\
a_1(p) & a_2(p) & a_3(p)\\
a_2(p)& a_3(p) & a_4(p)
\end{pmatrix}.
\end{gathered}
\end{equation}
(Here, $J(p)$ is the determinant of the catalecticant matrix $H_p$.)
We have $I(f_{\lambda}) = 1 + 3\lambda^2$ and $J(f_{\lambda})= \lambda - \lambda^3$, but
$I(x^4)=J(x^4)=0$.
It follows from Prop.\! 5.2 that if $p$ is pd, then $I(p) > 0$, and,
classically, if $q(x,y) = p(ax + by,cx+dy)$, then
\begin{equation}\label{E:inv}
I(q) = (ad-bc)^4 I(p), \qquad J(q) = (ad - bc)^6 J(p).
\end{equation}
Let
\begin{equation}\label{E:inv2}
K(p) := \frac {J(p)}{I(p)^{3/2}}.
\end{equation}
It follows from \eqref{E:inv} and \eqref{E:inv2} that, if $p \sim q$,
then $K(q) = K(p)$.
In particular,
\begin{equation}
p \sim f_{\lambda} \implies K(p) = K(f_{\lambda}) = \phi(\lambda):=
\frac{\lambda-\lambda^3}{(1+3\lambda^2)^{3/2}}.
\end{equation}
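In particular, $\phi(-\tfrac 13) = -3^{-3/2}$, $\phi(0) = 0$ and
$\phi(\tfrac 13) = 3^{-3/2}$; these values recur below.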
\begin{lemma}
If $p$ is pd, then $p \sim f_{\lambda}$, where $\lambda$ is the unique
solution in $(-\frac 13, \frac 13]$ to $K(p) = \phi(\lambda)$. If $p \in
Z_{2,4}$, then $K(p) = \phi(-\frac 13)$.
\end{lemma}
\begin{proof}
By Proposition 5.2, $p \sim f_{\lambda}$ for some $\lambda \in (-\frac 13,
\frac 13]$. A routine computation shows that $\phi'(\lambda) =
(1-9\lambda^2)(1+3\lambda^2)^{-5/2}$ is positive on $(-\frac 13, \frac 13)$,
hence $\phi$ is strictly increasing. By Lemma 5.1, if $p \in Z_{2,4}$,
then $p \sim q$, where $q(x,y) = dx^4 + 6e x^2y^2$ for some $e > 0$.
Since $I(q) =3e^2$ and $J(q) = -e^3$, $K(p) = K(q) = -3^{-3/2} = \phi(-\frac
13)$.
\end{proof}
\begin{theorem}
Suppose $r,s \in [-\frac 13,0]$, and suppose $1+3r+3s-3rs = 0$; that
is, $s = U(r)$. If $ p
\in [[f_r]]$ and $q \in [[f_s]]$, then $[p,q] \ge 0$.
\end{theorem}
\begin{proof}
Suppose $p = f_r \circ M_1$ and $q = f_s \circ M_2$. Then
\begin{equation}
[p,q] = [f_r\circ M_1,f_s \circ M_2] = [f_r, f_s \circ M_2M_1^t],
\end{equation}
hence it suffices to show that for all $a,b,c,d$,
\begin{equation}
\Psi(a,b,c,d;r,s):= [f_r(x,y),f_s(ax+by,cx+dy)] \ge 0.
\end{equation}
A calculation shows that
\begin{equation}
\begin{gathered}
\Psi(a,b,c,d;r,s) = a^4+b^4+c^4+d^4 + \\ 6r(a^2b^2 + c^2d^2) +
6s(a^2c^2+b^2d^2) + 6rs(a^2d^2 + 4abcd + b^2c^2).
\end{gathered}
\end{equation}
When $s = U(r)$, a sos expression can be found:
\begin{equation}
\begin{gathered}
2(1-r)\Psi(a,b,c,d;r,U(r)) = (1+r)(1+3r)(a^2+b^2-c^2-d^2)^2 \\
-4 r (a^2 + c^2 - b^2 - d^2)^2 + (1 + r) (1 -
3 r) (a^2 + d^2 - b^2 - c^2)^2 \\ - 8 r (1 + 3 r) (a b + c d)^2,
\end{gathered}
\end{equation}
which is non-negative when $r \in [-\frac 13, 0]$.
Note that $\Psi(1,1,1,-1;r,U(r)) = 0$, reaffirming that $[f_r,g_{U(r)}] = 0$.
\end{proof}
\begin{theorem}
Suppose $r,s \in [-\frac 13, 0]$. If $s \ge U(r)$, $p \in [[f_r]]$ and
$q \in [[f_s]]$, then $[p,q] \ge 0$. If $s < U(r)$, then there exist $p
\in [[f_r]]$ and $q \in [[f_s]]$ so that $[p,q] < 0$.
\end{theorem}
\begin{proof}
If $0 \ge s \ge U(r)$, then $s \in [U(r),T(U(r))]$, hence
$f_s$ is a convex combination of $f_{U(r)}$ and $f_{T(U(r))}$,
and each $f_s \circ M$ is a convex combination of $f_{U(r)}\circ
M$ and $f_{T(U(r))}\circ M$. By Theorem 5.4,
$[f_r,f_s \circ M]$ is a convex combination of non-negative numbers
and is non-negative.
If $U(r) > s \ge -\frac 13$, then $[f_r,g_s] < 0$ by \eqref{E:fglam}.
\end{proof}
We now have the tools to analyze $B \in \mathcal B_{2,4}$.
If $Q_{2,4} \subseteq B \subseteq P_{2,4}$, let
\begin{equation}
\Delta(B) = \{ \lambda \in {\mathbb R} : f_{\lambda} \in B \}.
\end{equation}
\begin{theorem}
If $B \subset F_{2,4}$ is a blender, then $\Delta(B) =
[\tau,T(\tau)]$ for some $\tau \in [-\frac 13,0]$.
\end{theorem}
\begin{proof}
By (P2), $\Delta(B)$ is a closed interval.
We have seen that $\Delta(P_{2,4}) = [-\frac 13,\infty)$. Since $Q_{2,4} =
P_{2,4}^* = \Sigma_{2,4}^*$, by \eqref{E:cata}, $f_{\lambda} \in Q_{2,4}$ if and
only if
$\left( \begin{smallmatrix}
1 & 0 & \lambda \\
0 & \lambda & 0\\
\lambda & 0 & 1\\
\end{smallmatrix}\right)
$
is psd; that is, $\Delta(Q_{2,4}) = [0,1]$.
Otherwise, let $\tau = \inf \{ \lambda : f_{\lambda} \in B \}$. Since $Q_{2,4}
\subsetneq B \subsetneq P_{2,4}$, $\tau \in (-\frac 13, 0)$.
By (P2),
$f_{\tau} \in B$ and by (P3), $f_{T(\tau)} \in B$,
and by convexity, $f_{\nu} \in B$ for $\nu \in
[\tau,T(\tau)]$. If $\nu < \tau$, then $f_{\nu}\not \in B$ by
definition. If $\nu > T(\tau)$ and $f_{\nu} \in B$,
then $f_{T(\nu)} \in B$ and $T(\nu) < T(T(\tau)) = \tau$, a
contradiction.
\end{proof}
If $M$ is singular, then $f_{\lambda} \circ M$ is a 4th power;
accordingly, for $\tau \in [-\frac 13,0]$, let
\begin{equation}
B_{\tau}:= \bigcup_{\tau \le \lambda \le \frac 13} [[f_\lambda]] = \{p: p \sim
f_{\lambda}, \tau \le \lambda \le \tfrac 13 \} \cup \{(\alpha
x +\beta y)^4: \alpha, \beta \in {\mathbb R} \}.
\end{equation}
\begin{theorem}
If $B \in \mathcal B_{2,4}$, then $B =
B_{\tau}$ for some $\tau \in [-\frac 13, 0]$ and $B_\tau^* = B_{U(\tau)}$.
\end{theorem}
\begin{proof}
Suppose $B$ is a blender and $Q_{2,4} \subsetneq B \subsetneq
P_{2,4}$. Then $\Delta(B) = [\tau,T(\tau)]$ by Theorem 5.6, so $B =
B_{\tau}$ by Prop.\! 5.2. We need to show that each such $B_{\tau}$
is a blender. Since $B_0 = Q_{2,4}$ and $B_{-\frac 13} = P_{2,4}$ are blenders,
we may assume $\tau > -\frac 13$ and all $p \in B_{\tau}$ which are
not 4th powers are pd.
Clearly, (P3) holds in $B_{\tau}$.
Suppose $p_m \in B_{\tau}$ and
$p_m \to p$. If $p$ is a 4th power, then $p \in B_{\tau}$. If $p$ is
pd, then $K(p_m) \to K(p)$ by \eqref{E:inv},
\eqref{E:inv2} and continuity. In any case, $K(p_m) \ge \phi(\tau)$,
so $K(p) \ge \phi(\tau)$ and $p \in B_{\tau}$. Finally, if $p \in Z_{2,4}$,
then $K(p_m) \ge \phi(\tau) > \phi(-\frac 13) = K(p)$ by Lemma 5.3, and this
contradiction completes the proof of (P2).
We turn to (P1). Suppose $p, q \in B_{\tau}$ and $p+q \not\in
B_{\tau}$. Since $p+q$ is pd, $p+q \sim f_{\lambda}$ for some $\lambda < \tau$,
and so there exists $M$ so that $p\circ M + q \circ M = f_{\tau}$. But
now, \eqref{E:apo} and Theorem 5.5 give a contradiction:
\begin{equation}
0 > [f_{\lambda},g_{U(\tau)}] = [p\circ M,g_{U(\tau)}] + [q \circ M
,g_{U(\tau)}] \ge 0.
\end{equation}
Thus, $p+q \in B_{\tau}$ and (P1) is satisfied, showing that
$B_{\tau}$ is a blender. It follows from Prop.\! 2.7 and Theorem 5.5
that $B_{\tau}^* = B_{\nu}$ for some $\nu$. But by Theorem 5.5,
$B_{U(\tau)} \subseteq B_{\tau}^*$ and if $\lambda < U(\tau)$, then
$f_{\lambda} \notin B_{\tau}^*$, thus $B_{\tau}^* = B_{U(\tau)}$.
\end{proof}
A computation shows that $\phi^2(\lambda) + \phi^2(U(\lambda)) = \frac 1{27}$,
and this gives an alternate way of describing the dual
cones. This result was garbled in \cite[p.141]{Re1}
into the statement that
$B_{\tau}^* = B_{\nu}$, where $\tau^2 + \nu^2 = \frac 19$. The
self-dual blender $B_{\nu_0} =B_{\nu_0}^*$ occurs for $\nu_0 = 1 -
\sqrt{4/3}$. We know of no other interesting properties of $B_{\nu_0}$.
\section{$K_{2,2r}$: binary convex forms}
The author's Ph.D. thesis, submitted in 1976 and
published as \cite{Re0,Re00} in 1978 and 1979,
discussed $N_{n,2r}$. (The identification of
$N_{n,2r}$ with $K_{n,2r}$ was not made there.)
Unbeknownst to him, V. I. Dmitriev had earlier worked on
similar questions at Kharkov University. In
1969, S. Krein, Dmitriev's advisor, asked about the extreme elements of
$K_{2,2r}$. Dmitriev wrote \cite{D1} in 1973 and \cite{D2} in 1991.
Dmitriev writes in \cite{D2}: ``I am not aware of any articles on this
topic, except \cite{D1}.'' We have seen both \cite{D1} and \cite{D2} in
Russian and \cite{D2} in its English translation, thanks to the
diligence of the Interlibrary Loan Staff of the
University of Illinois Library. To complicate matters, there are at
least two mathematicians named V. I. Dmitriev in MathSciNet; the
author of \cite{D1,D2} is
affiliated with Kursk State Technical University.
Let
\begin{equation}\label{E:qlam}
q_{\lambda}(x,y) = x^6 + 6\lambda x^5y+ 15\lambda^2 x^4y^2 + 20 \lambda^3 x^3y^3 + 15
\lambda^2 x^2y^4 + 6\lambda x y^5 + y^6.
\end{equation}
In the language of this paper, the four relevant results from
\cite{D1,Re00,D2} are these:
\begin{proposition}
\
\smallskip
\noindent (i) $K_{2,4} = Q_{2,4}$.
\noindent (ii) $Q_{2,2r} \subsetneq K_{2,2r}$ for $r \ge 3$.
\noindent (iii) The elements of $\mathcal E(K_{2,6})$ are the elements of
$[[q_{\lambda}]]$, where $0 < | \lambda | \le \frac 12$.
\noindent (iv) $Q_{3,4} \subsetneq K_{3,4}$; specifically,
$x^4+y^4+z^4+6x^2y^2+6x^2z^2+2y^2z^2 \in K_{3,4} \setminus
Q_{3,4}$.
\end{proposition}
Dmitriev \cite{D1} gave a proof of (i) and (ii) for
even $r$ (using $(x^4 + y^4)^{r/2}$ as the counterexample); his
\cite{D2} gave a proof of (iii).
Prop.\! 6.1 appeared in \cite{Re00}, but (iii) was announced
without proof. (The results from
\cite{Re00} were in the author's thesis.)
Note that (i) and (ii) follow from Prop.\! 3.7 and Theorems 3.9
and 3.10. Since $P_{n,m} = \Sigma_{n,m}$ if $n = 2$ or $(n,m) = (3,4)$,
these examples are not helpful in resolving Parrilo's question about
convex forms which are not sos.
The rest of this section discusses $\partial(K_{2,2r})$, mostly for small $r$.
For
\begin{equation}\label{E:pdef}
p(x,y) = \sum_{i=0}^{2r} \binom{2r}i a_ix^{2r-i}y^i,
\end{equation}
we define a form $\Theta_p$ which is, up to a constant, the determinant
of the Hessian of $p$ at $(x,y)$. Let
\begin{equation}
\begin{gathered}
\Theta_p(x,y):= \sum_{m=0}^{4r-4} b_mx^{4r-4-m}y^m, \quad
\text{where} \\
b_m := \sum_{j=0}^{2r-1}
\left(\binom{2r-2}j\binom{2r-2}{m-j}
-\binom{2r-2}{j-1}\binom{2r-2}{m-j+1}\right)
a_ja_{m+2-j},
\end{gathered}
\end{equation}
with the convention that $a_i=0$ if $i < 0$ or $i > 2r$.
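For orientation, when $2r = 4$ the definition unpacks to
\begin{equation}
\begin{gathered}
\Theta_p(x,y) = (a_0a_2-a_1^2)x^4 + 2(a_0a_3-a_1a_2)x^3y +
(a_0a_4+2a_1a_3-3a_2^2)x^2y^2 \\ + 2(a_1a_4-a_2a_3)xy^3 +
(a_2a_4-a_3^2)y^4,
\end{gathered}
\end{equation}
which is the computation underlying Prop.\! 6.3.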
\begin{proposition}\cite[Prop.B]{D2}
Suppose $p \in P_{2,2r}$. Then
$p \in K_{2,2r}$ if and only if $\Theta_p \in P_{2,4r-4}$ and
$p \in \partial(K_{2,2r})$ if and only if $\Theta_p$ is psd but not pd.
\end{proposition}
\begin{proof}
A direct computation shows that
\begin{equation}
\frac{\partial^2p}{\partial x^2}\frac{\partial^2p}{\partial y^2} -
\left(\frac{\partial^2p}{\partial x\partial y}\right)^2 =
(2r)^2(2r-1)^2\Theta_p(x,y).
\end{equation}
Since $Hes(p;u,u) = 2r(2r-1)p(u) \ge 0$, the first assertion is proved.
Further, $p \in \partial(K_{2,2r})$ if and only if $Hes(p;u_0,v_0) =
0$ for some $u_0 \neq 0, v_0 \neq 0$.
\end{proof}
Observe that
$\Theta_{(\alpha\cdot)^{2r}} =0$, and if $q(x,y)
= p(ax+by,cx+dy)$, then it may be checked that $\Theta_q(x,y) =
(ad-bc)^2\Theta_p(ax+by,cx+dy)$. Thus, if $q
\in \partial(K_{2,2r})$, we may assume that $q \sim p$,
where $\Theta_p(0,1) = 0$, so that
\begin{equation}\label{E:zero}
0 = b_0 = a_0a_2 - a_1^2;\qquad 0 = b_1 = (2r-2)(a_0a_3 - a_1a_2).
\end{equation}
We prove that $K_{2,4} = Q_{2,4}$, using the argument of
\cite{Re00} and, essentially, \cite{D1}.
\begin{proposition}
$K_{2,4} = Q_{2,4}$.
\end{proposition}
\begin{proof}
Suppose $q \in \mathcal E(K_{2,4})$. Then $q \in \partial(K_{2,4})$ and
$q \sim p$ where $\Theta_p$ is psd, but $\Theta_p(1,0) = 0$. If $a_0 = 0$,
then $p(1,0) = 0$, so by Prop.\! 4.1, $p(x,y) = a_4 y^4$ is a 4th
power. Otherwise, $a_0 > 0$, and if we write $a_1 = ra_0$, then
by \eqref{E:zero}, we have $a_2 = r^2a_0$ and $a_3 = r^3 a_0$. Write $a_4 =
r^4a_0 + s$. A computation shows that $\Theta_p(x,y) = a_0 s
y^2(x+ry)^2$, hence $s \ge 0$ and $p(x,y) = a_0(x + ry)^4 + s
y^4$. Since $Q_{2,4} \subset K_{2,4}$ and $s \ge 0$, it follows
that $p \in \mathcal E(K_{2,4})$ if and only if $s=0$. Thus every $p \in
K_{2,4}$, being a sum of extremal elements, is a sum of 4th powers.
\end{proof}
If $2r=6$, then we shall need $\Theta_p(x,y)$ in full bloom:
\begin{equation}\label{E:theta}
\begin{gathered}
\Theta_p(x,y) = (a_0a_2-a_1^2)x^8 +4(a_0a_3-a_1a_2)x^7y + (6a_0a_4 +
4a_1a_3 - 10a_2^2) x^6y^2 \\
+ 4(a_0a_5 + 4a_1a_4 - 5a_2a_3) x^5 y^3 +
(a_0a_6+14a_1a_5+5a_2a_4-20a_3^2) x^4y^4\\ + 4(a_1a_6 + 4a_2a_5 -
5a_3a_4) x^3 y^5 + (6a_2a_6 +
4a_3a_5 - 10a_4^2) x^2y^6 \\+ 4(a_3a_6-a_4a_5)xy^7 + (a_4a_6-a_5^2)y^8.
\end{gathered}
\end{equation}
\begin{lemma}
If $p \in K_{2,6}$ and $\Theta_p(x,y) = \ell^2(x,y)B_p(x,y)$, where
$\ell$ is linear and $B_p$ is a
pd sextic, then $p \notin \mathcal E(K_{2,6})$.
\end{lemma}
\begin{proof}
After a linear change, we may assume $\ell(x,y) = y$, and assume $p$
is given by \eqref{E:pdef}, so that \eqref{E:theta} holds. Our goal is
to show that $B_p$ being pd implies that $p \pm \epsilon y^6$ is convex for
small $\epsilon$, which contradicts $p$ being extremal. If
$a_0=p(1,0) = 0$, then as
in Prop.\! 6.3, $p(x,y) = a_6y^6$ and $\Theta_p(x,y) = 0$. Otherwise,
we again have $a_1 = ra_0$, $a_2 = r^2a_0$ and $a_3 = r^3 a_0$. A
computation shows that
\begin{equation}\label{E:bp}
\begin{gathered}
B_p(x,y) = 6a_0(a_4-r^4a_0)x^6 + 4a_0(a_5+4ra_4-5r^5a_0)x^5y \\ +
a_0(a_6+14ra_5+5r^2a_4-20r^6a_0) x^4y^2\\ + 4ra_0(a_6 + 4ra_5 -
5r^2a_4) x^3 y^3+ (6r^2a_0a_6 +
4r^3a_0a_5 - 10a_4^2) x^2y^4 \\ + 4(r^3a_0a_6-a_4a_5)xy^5 + (a_4a_6-a_5^2)y^6.
\end{gathered}
\end{equation}
Observe that if $p_{\lambda} = p + \lambda y^6$, then $a_6$ is replaced above
by $a_6 + \lambda$ and
\begin{equation}
\begin{gathered}
B_{p_{\lambda}} = B_p+ \lambda (a_0 x^4y^2 + 4r a_0 x^3y^3 + 6r^2
a_0 x^2y^4 + 4r^3 a_0 xy^5 + a_4 y^6).
\end{gathered}
\end{equation}
Since $B_p$ is pd, there exists sufficiently small $\epsilon$ so that
$B_{p_{_{\pm \epsilon}}}$ is psd, so $p_{\pm \epsilon} \in K_{2,6}$.
But then $p = \frac 12(p_{\epsilon} + p_{-\epsilon})$ is not extremal.
\end{proof}
\begin{proof}[Proof of Prop.\! 6.1(iii)]
By Prop.\! 6.2 and Lemma 6.4, we may assume that $\Theta_p = y^2B_p$ and $B_p$
is psd, but not pd. If $B_p(1,0) = 0$, then by \eqref{E:bp}, $a_4 = r^4a_0$
and $a_5 = r^5a_0$ and, as before, if $a_6 = r^6a_0 + t$, then $\Theta_p =
a_0 t\, y^4(x+ry)^4$, so $t \ge 0$ and $p \in \mathcal E(K_{2,6})$ if and
only if $t=0$, so $p$ is a 6th power.
If $B_p(0,1) = 0$, then $\Theta_p$ already vanishes at $(x,y) = (1,0), (0,1)$;
otherwise $B_p(1,e) = 0$ for some $e \neq 0$, and if $\tilde p(x,y) = p(x+y,ey)$,
then $\Theta_{\tilde p}(x,y) = 0$ at $(x,y) = (1,0), (0,1)$. In either case, dropping the
tilde, we may assume from \eqref{E:theta} that $0 = a_4a_6 - a_5^2 = a_3a_6
- a_4a_5$. Again, $a_6 = p(0,1) \ge 0$, and if $a_6=0$, then $p$ is a
6th power. Otherwise, we set $a_5 = s a_6$, so that $a_4 = s^2a_6$ and
$a_3 = s^3 a_6$; recall that $a_3 = r^3 a_0$ as well. If $s=0$, then
$a_3 = 0$, so $r=0$ and $p(x,y) = a_0x^6 + a_6y^6$, which is only
extremal if it is a 6th power. Thus $s \neq 0$, and similarly, $r \neq 0$.
Letting $t = s^{-1}$ and $a_0 = 1$, we obtain the formulation of \cite{D2}:
\begin{equation}
p(x,y) = x^6 + 6r x^5y + 15 r^2 x^4y^2 + 20 r^3 x^3y^3 + 15 r^3t
x^2y^4 + 6 r^3t^2 xy^5 + r^3t^3 y^6
\end{equation}
Send $(x,y) \mapsto (a_0^{-1/6}x,
a_0^{-1/6}(rt)^{-1/2} y)$ and set $\lambda = \sqrt{r/t} = \sqrt{rs}$ to obtain
$q_{\lambda}$.
We still need to show that $q_{\lambda}$ is convex! A calculation shows that
\begin{equation}
\Theta_{q_{\lambda}}(x,y) = (1-\lambda^2)x^2y^2 C_{\lambda}(x,y),
\end{equation}
where
\begin{equation}
\begin{gathered}
C_{\lambda}(x,y) \\ = 6\lambda^2(x^4+y^4) + (4\lambda +
20\lambda^3)(x^3y+xy^3) + (1+15\lambda^2+20\lambda^4)x^2y^2.
\end{gathered}
\end{equation}
Note that
\begin{equation}
\begin{gathered}
D_{\lambda}(x,y):= C_{\lambda}(x+y,x-y)= (1 + \lambda) (1 + 2 \lambda) (1 + 5 \lambda + 10
\lambda^2)x^4\\ -2
(1-\lambda^2)(1-20 \lambda^2)x^2y^2 + (1 -\lambda) (1 - 2 \lambda) (1 - 5 \lambda + 10 \lambda^2)y^4.
\end{gathered}
\end{equation}
If $\Theta_{q_{\lambda}}$ is psd, then $6\lambda^2(1-\lambda^2) \ge 0$, so $|\lambda| \le
1$. Under this assumption, it suffices to determine when $D_{\lambda}$ is
psd. Since $D_{\lambda}(1,0), D_{\lambda}(0,1) \ge 0$, $|\lambda| \le \frac 12$.
If $D_{\lambda}(x,y) = E_{\lambda}(x^2,y^2)$, then
the discriminant of the quadratic $E_{\lambda}$ is
$-128\lambda^2(1-\lambda^2)(1-10\lambda^2)$, hence $D_{\lambda}$ is psd if $0 \le \lambda^2
\le \frac 1{10}$. But, if $\frac 1{20} \le \lambda^2 \le \frac 14$, then
$D_{\lambda}$ is a sum of psd terms. Thus $D_{\lambda}$ is psd
if $|\lambda| \le \frac 12$; this is also true for
$C_{\lambda}$ and $\Theta_{q_{\lambda}}$, so $q_{\lambda} \in K_{2,6}$.
\end{proof}
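The computations behind this psd analysis are easy to reproduce; a short sympy
sketch (variable names ours):
\begin{verbatim}
# Check of D_lam = C_lam(x+y, x-y) and of the discriminant of E_lam.
from sympy import symbols, expand, factor, discriminant

x, y, u, w, L = symbols('x y u w lam')
C = (6*L**2*(x**4 + y**4) + (4*L + 20*L**3)*(x**3*y + x*y**3)
     + (1 + 15*L**2 + 20*L**4)*x**2*y**2)
D = expand(C.subs([(x, x + y), (y, x - y)], simultaneous=True))
print(factor(D.coeff(x, 4).coeff(y, 0)))  # (1+lam)(1+2lam)(1+5lam+10lam^2)
E = D.subs(x**2, u).subs(y**2, w)
print(factor(discriminant(E, u)))
# -128 lam^2 (1-lam^2)(1-10 lam^2): nonpositive iff lam^2 <= 1/10
# (within |lam| <= 1), matching the psd range of D_lam
\end{verbatim}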
Note that $\Theta_{q_\lambda}$ has two double zeros when $|\lambda| < \frac 12$, but
$\Theta_{q_{1/2}}$ has three double zeros; it is $\frac 98
x^2y^2(x+y)^2(x^2+xy+y^2)$. It seems likely that for $r \ge 3$,
the structure of $\Theta_p$ for $p \in \mathcal E(K_{2,2r})$
will be complicated and $\mathcal E(K_{2,2r})$ will be hard to analyze.
Note also that
\begin{equation}\label{E:even}
\begin{gathered}
q_{\lambda}(x+y,x-y) =
2(1+\lambda)(1+5\lambda+10\lambda^2)x^6 + 30(1-\lambda^2)(1+2\lambda)x^4y^2 \\ +
30(1-\lambda^2)(1-2\lambda)x^2y^4
+ 2(1-\lambda)(1-5\lambda+10\lambda^2)y^6.
\end{gathered}
\end{equation}
One of the two boundary examples is $q_{-1/2}(x+y,x-y)= x^6 + 45 x^2 y^4 + 18
y^6$, which scales to $x^6 + 15\alpha x^2 y^4 + y^6$, where $\alpha^3 =
\frac 1{12}$.
In an attempt to visualize these blenders, we
now consider the sections of $P_{2,6}=\Sigma_{2,6}$,
$Q_{2,6}$ and $K_{2,6}$ consisting of the normalized even sextic forms
\begin{equation}
g_{A,B}(x,y) = x^6 + \binom 62 A x^4y^2 + \binom 64 B x^2y^4 + y^6,
\end{equation}
and identify $g_{A,B}$ with the point $(A,B)$ in the plane.
If $g_{A,B}$ is on the boundary of the $P_{2,6}$ section, then it is
psd but not pd, and we may assume $(x + r y)^2\ |\ g_{A,B}$ for some $r \neq
0$. Thus, $(x-ry)^2\ |\ g_{A,B}$ as well, and
since the remaining factor must be even, the coefficients of $x^6,y^6$ force it
to be $x^2 + \frac 1{r^4} y^2$. Thus, the boundary forms for the
section of $P_{2,6}$ are
\begin{equation}
(x^2-r^2y^2)^2(x^2 + \tfrac 1{r^4}y^2) = x^6 +( \tfrac 1{r^4} -
2r^2)x^4y^2 + (r^4 - \tfrac 2{r^2})x^2y^4 + y^6.
\end{equation}
The parameterized boundary curve
\begin{equation}
(A,B) = \tfrac 1{15}( \tfrac 1{r^4} - 2r^2, r^4 - \tfrac 2{r^2})
\end{equation}
is strictly decreasing as we move from left to right, and is a
component of the curve $500(A^3+B^3) = 1875(AB)^2 + 150AB - 1$.
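That the parameterization lies on this curve is a one-line symbolic check:
\begin{verbatim}
# Check that the parameterized boundary lies on the stated curve.
from sympy import symbols, simplify, Rational

r = symbols('r', positive=True)
A = Rational(1, 15) * (r**-4 - 2*r**2)
B = Rational(1, 15) * (r**4 - 2*r**-2)
print(simplify(500*(A**3 + B**3) - (1875*(A*B)**2 + 150*A*B - 1)))  # 0
\end{verbatim}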
By \eqref{E:cata}, $g_{A,B}$ is in $Q_{2,6} = \Sigma^*_{2,6}$ if and
only if
$\left(
\begin{smallmatrix} 1 & 0 & A & 0 \\ 0 & A & 0 & B \\ A & 0 & B & 0 \\ 0 &
B & 0 & 1
\end{smallmatrix}\right)$
is psd; since this matrix splits into the two blocks
$\left(\begin{smallmatrix} 1 & A \\ A & B \end{smallmatrix}\right)$ and
$\left(\begin{smallmatrix} A & B \\ B & 1 \end{smallmatrix}\right)$
on the odd- and even-indexed coordinates, this happens
if and only if $B \ge A^2$ and $A \ge B^2$, so the section is the
region between these two parabolas.
Except for the fortuitous identity \eqref{E:even},
it would have been very challenging to determine the section for $K_{2,6}$.
Scale $x$ and $y$ in \eqref{E:even} to get $g_{A,B}$: the
parameterization of the boundary is
$(\psi(\lambda),\psi(-\lambda))$, where
\begin{equation}
\psi(\lambda) = \frac
{(1-\lambda)^{2/3}(1+\lambda)^{1/3}(1+2\lambda)}{(1+5\lambda+10\lambda^2)^{2/3}(1-5\lambda+10\lambda^2)^{1/3}}.
\end{equation}
The intercepts occur when $\lambda = \pm \frac 12$ and are
$(12^{-\frac 13},0)$ and $(0,12^{-\frac 13})$. The point $(1,1)\ (\lambda = 0)$ is
smooth but has infinite curvature. The Taylor series of $\psi(\lambda)$ at
$\lambda=0$ begins $1 + \frac {16}3 \lambda^3 - 48 \lambda^4$, so $x-y
\approx \frac
{32}3 \lambda^3$ and $x+y-2 \approx -96 \lambda^4$, hence
\begin{equation*}
x+y-2 \approx
-\tfrac{3^{7/3}}{2^{5/3}}(x-y)^{4/3}.
\end{equation*}
The maximum value of $\psi(\lambda)$ is
$5^{-5/3}(1565+496\sqrt{10})^{1/3} \approx 1.000905$ at $\lambda =
\frac{2\sqrt{10}-5}{15} \approx .0883$; this was asserted without
proof in \cite[p.232]{Re00}.
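These numerical statements about $\psi$ are straightforward to reproduce; a
sketch (names ours):
\begin{verbatim}
# Numerics for psi: intercepts at lam = +-1/2 and the stated maximum.
from sympy import Rational, sqrt, N

def psi(l):
    return ((1 - l)**Rational(2, 3) * (1 + l)**Rational(1, 3) * (1 + 2*l)
            / ((1 + 5*l + 10*l**2)**Rational(2, 3)
               * (1 - 5*l + 10*l**2)**Rational(1, 3)))

print(N(psi(Rational(1, 2))), N(12**Rational(-1, 3)))  # both 0.43679...
print(psi(-Rational(1, 2)))                            # 0
l0 = (2*sqrt(10) - 5) / 15
print(N(l0), N(psi(l0)),
      N(5**Rational(-5, 3) * (1565 + 496*sqrt(10))**Rational(1, 3)))
# 0.0883..., 1.000905..., 1.000905...
\end{verbatim}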
We conclude with a description of the trinomials in $\partial(K_{2,2r})$. Suppose
$1 \le v \le 2r-1$, $a,c > 0$ and suppose
\begin{equation}
h(x,y) = a x^{2r} + b x^{2r-v}y^v + c y^{2r} \in K_{2,2r}.
\end{equation}
An examination of the end terms of $\Theta_h$ shows that $v$ must be
even and $b \ge 0$. If $b=0$, then $h \in Q_{2,2r}$, so we assume $b >
0$, and wish to find the largest possible value of $b$. Calculations,
which we omit, show that if
\begin{equation}\label{E:hrk}
\begin{gathered}
h_{r,k}(x,y) := (r-k)(2(r-k)-1)^2 x^{2r}\\
+ r(2r-1)(2k-1)(2r-2k-1)x^{2r-2k}y^{2k} +
k(2k-1)^2 y^{2r},
\end{gathered}
\end{equation}
then $\Theta_{h_{r,k}}(x,y)= x^{2r-2-2k}y^{2k-2}(x^2-y^2)^2g(x,y)$,
where $g$ is a (psd) sum
of even terms, and that
if $c > 0$ and $g_{r,k,c} = h_{r,k} + c x^{2r-2k}y^{2k}$,
then $\Theta_{g_{r,k,c}}(1,1) < 0$. Given $(a,c)$, there exist
$(\alpha,\beta)$ so that the coefficients of $x^{2r}$ and
$y^{2r}$ in $h_{r,k}(\alpha x, \beta y)$ are both 1, and we obtain the
examples given in
\cite[Prop.1]{Re00}. In particular,
\begin{equation}
h_{2k,k}(x,y) \sim x^{4k} + (8k-2)x^{2k}y^{2k} + y^{4k}\in \partial(K_{2,4k}).
\end{equation}
Similar methods show that
\begin{equation}
x^{6k} + (6k-1)(6k-3)x^{4k}y^{2k} + (6k-1)(6k-3)x^{2k}y^{4k} + y^{6k}
\in \partial(K_{2,6k}).
\end{equation}
We have been unable to analyze $K_{2,8}$ completely, but have found
this interesting element in $\mathcal E(K_{2,8})$:
\begin{equation}
p(x,y) = (x^2+y^2)^4 + \tfrac 8{\sqrt 7}\ x y (x^2 - y^2)(x^2+y^2)^2,
\end{equation}
for which $\Theta_p(x,y)= 3072 x^2 (x - y)^2 y^2 (x + y)^2 (x^2 + y^2)^2$.
\section{Sums of 4th powers and binary octic forms}
Hilbert's 17th Problem asks whether $p \in P_{n,2r}$ must be a sum of
squares of rational functions: does there exist $h = h_p \in
F_{n,d}$ (for some $d$) so that $h^2p \in \Sigma_{n,2r+2d} = W_{n,(r+d,2)}$? Artin
proved that the
answer is ``yes''. (See \cite{R2,Re3}.) Becker \cite{Be} investigated the
question for higher even powers. His result implies that if $p \in
P_{2,2kr}$ and all real linear factors of $p$ (if any) occur to an exponent
which is a multiple of $2k$, then there exists $h = h_p \in F_{2,d}$
(for some $d$) so that $h^{2k}p \in W_{2,(r+d,2k)}$.
By Becker's criteria, $f_{\lambda}$ (cf.\ \eqref{E:flam})
is a sum of
4th powers of rational functions if and only if it is pd; that is,
$\lambda \in (-\frac 13,\infty)$. As we have seen, $f_{\lambda}
\in Q_{2,4} = W_{2,(1,4)}$ if and only if $\lambda \in [0,1]$. If $\ell$ is linear and
$\ell^4f = \sum_k h_k^4 \in W_{2,(2,4)}$, then $\ell | h_k$, so if $f_{\lambda}
\notin Q_{2,4}$ and $h^4f \in W_{2,(1+d,4)}$, then $\deg h = d \ge
2$. The identity
\begin{equation}
\begin{gathered}
3(3x^4 - 4x^2y^2 + 3y^4)(x^2 + y^2)^4 \\ = 2 ( (x-y)^4 + (x+y)^4)
(x^8 + y^8) + 5x^{12} + 11x^8y^4 + 11x^4y^8 + 5y^{12}
\end{gathered}
\end{equation}
shows that $(x^2+y^2)^4f_{\lambda} \in W_{2,(3,4)}$ for $\lambda \in [-\frac
29, \frac{11}3]$, since $T(-\frac{2}9) = \frac{11}3$,
cf.\ \eqref{E:TU}.
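The displayed identity is a finite polynomial check; for instance:
\begin{verbatim}
# Check of the displayed identity.
from sympy import symbols, expand

x, y = symbols('x y')
lhs = 3*(3*x**4 - 4*x**2*y**2 + 3*y**4)*(x**2 + y**2)**4
rhs = (2*((x - y)**4 + (x + y)**4)*(x**8 + y**8)
       + 5*x**12 + 11*x**8*y**4 + 11*x**4*y**8 + 5*y**12)
print(expand(lhs - rhs))  # 0
\end{verbatim}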
We offer the following conjectural characterization of
$W_{2,(u,4)}$:
\begin{conjecture}
If $p \in P_{2,4u}$, then $p \in W_{2,(u,4)}$ if and only if there
exist $f,g \in P_{2,2u}$ so that $p = f^2 + g^2$.
\end{conjecture}
It follows from \eqref{E:h22} that
the square of a psd binary form is a sum of three 4th
powers. Conjecture 7.1 thus implies that any sum of 4th powers of
polynomials is a sum of six 4th powers of polynomials.
If $p \in W_{2,(u,4)}$, then $p \in P_{2,4u} =
\Sigma_{2,4u}$, so $p = f^2 + g^2$ for some $f,g \in F_{2,2u}$;
the conjecture says that there is a representation
in which $f$ and $g$ are themselves psd.
This seems related to a result in \cite{CLPR} about sums of 4th powers
of rational functions over real closed fields. If $p = \sum h_k^4$ and
$\ell | p$ for a linear form, then $\ell^{4t} | p$ for some $t$
and $\ell^t | h_k$, so we may assume $p$ is pd. The following is a
special case of \cite[Thm.4.12]{CLPR}, referring to
sums of 4th powers of non-homogeneous rational functions.
\begin{proposition}
Suppose $p \in {\mathbb R}[x]$ is pd. Then $p$ is a sum of 4th powers in
${\mathbb R}(x)$ if and only if there exist pd $f,g,h$ in ${\mathbb R}[x]$,
$\deg f = \deg g$, such that $h^2p = f^2 +g^2$.
\end{proposition}
It follows that a sum of 4th powers in ${\mathbb R}(x)$ is a sum of at most six
4th powers.
\begin{theorem}
Conjecture 7.1 is true for $p \in W_{2,(1,4)} = Q_{2,4}$.
\end{theorem}
\begin{proof}
We have seen that if $p \in W_{2,(1,4)}$, then $p \sim f_{\lambda}$ for
$\lambda \in [0,1]$. If $\lambda \in (\frac 13, 1]$, then $T(\lambda) \in [0,\frac
13)$, so it suffices to find a representation for $f_{\lambda}$ with $\lambda
\in [0,\frac 13]$. Such a representation is
$f_{\lambda}(x,y) = (x^2 + 3\lambda y^2)^2 + (1-9\lambda^2)(y^2)^2$.
\end{proof}
\begin{theorem}
Conjecture 7.1 is true for even symmetric octics.
\end{theorem}
It will take some work to get to the proof of Theorem 7.4.
For the rest of this section, write $W:= W_{2,(2,4)}$. We first characterize
$\partial(W^*)$.
\begin{theorem}
If $p \in \partial(W^*)$, then $p = (\alpha\cdot)^8$ or $p \sim q$, where
\begin{equation}\label{E:bdry}
q(x,y) = d_0 x^8 + 8d_1 x^7y + 28d_2 x^6y^2 + 28 d_6 x^2y^6 + 8 d_7 x
y^7 + d_8 y^8,
\end{equation}
and the following form is psd:
\begin{equation}\label{E:disc}
(6d_2u^2 + 6d_6 w^2)(d_0 u^4
+ 4d_2u^3w + 4d_6uw^3 + d_8 w^4)-( 2d_1u^3+2d_7w^3)^2.
\end{equation}
\end{theorem}
\begin{proof}
Consider a typical element $q \in W^*$,
\begin{equation}\label{E:octdef}
q(x,y) = \sum_{k=0}^8 \binom 8k d_k x^{8-k}y^k.
\end{equation}
Then as in Prop.\! 3.8,
\begin{equation}\label{E:Hoct}
\begin{gathered}[]
H_q(u,v,w):= [q,(u x^2 + v x y + w y^2)^4] = d_0u^4 + 4 d_1 u^3 v + d_2(6 u^2 v^2
+ 4 u^3 w) \\ +
d_3( 4 u v^3 + 12 u^2 v w) + d_4(v^4 + 12 u v^2 w +
6 u^2 w^2) + d_5(4 v^3 w + 12 u v w^2) \\ + d_6(6 v^2 w^2 +
4 u w^3) + 4 d_7v w^3 + d_8 w^4
\end{gathered}
\end{equation}
is a psd ternary quartic in $u,v,w$.
If $ q \in \partial(W^*)$, then $[q,h^2] = 0$ for some non-zero
quadratic $h$. Since $\pm h \sim x^2, xy, x^2+y^2$,
it suffices by Prop.\! 2.6 to consider three cases: $[q,x^8]=0,
[q,x^4y^4]=0$ and $[q,(x^2+y^2)^4] = 0$. Since
\begin{equation}\label{E:h24}
420(x^2+y^2)^4 = 256(x^8+y^8) + \sum_{\pm} (x \pm \sqrt 3 y)^8 +
( \sqrt 3 x \pm y)^8,
\end{equation}
$[q,(x^2+y^2)^4] = 0$ implies that $q(1,0) = q(0,1) = q(1, \pm \sqrt
3) = q(\sqrt3 , \pm 1) = 0$; since $q$ is psd, $q=0$. (An alternate
proof derives
this result from $(x^2 + y^2)^4 \in int(Q_{2,8})$ by
\cite[Thm.8.15(ii)]{Re1}, so $(x^2+y^2)^4 \in int(W)$.)
Suppose $[q,(x^2)^4]=0$; that is,
$H_q(1,0,0) = 0$. Then
$d_0 =0$, and since $H_q$ is now at most quadratic in $u$, it follows
that $d_1=d_2 = 0$. This implies that the coefficient of $u^2$ in
$H_q$ is $12d_3 vw + 6d_4w^2$, hence $d_3=0$ and
\begin{equation}
\begin{gathered}
H_q(u,v,w) = u^2(6d_4w^2) + 2u(2d_6w^3 + 6d_5vw^2+6d_4v^2w) \\ + (d_8w^4
+ 4d_7 w^3v+6d_6w^2v^2+4d_5 wv^3+ d_4v^4).
\end{gathered}
\end{equation}
Since $H_q$ is psd if and only if its discriminant with respect to $u$ is
psd in $v,w$, and this discriminant is $-30d_4^2 v^4w^2\, +$ lower terms in $v$,
we must have $d_4=0$. Since $H_q$ cannot be linear in $u$, it follows that
$d_5=d_6=0$ and $H_q(u,v,w) = d_8w^4 + 4d_7w^3v$, which is only psd
if $d_7=0$, so that $q(x,y) = d_8y^8$ is an 8th power.
Finally, suppose $[q,x^4y^4] = 0$; that is, $H_q(0,1,0) = d_4 =
0$. Since $H_q$ is at most
quadratic in $v$, it follows that $d_3=d_5 = 0$, so $q$ has the
shape \eqref{E:bdry} and
\begin{equation}
\begin{gathered}
H_q(u,v,w) = v^2(6d_2u^2 + 6d_6 w^2) \\ + 2v( 2d_1u^3+2d_7w^3) + d_0 u^4
+ 4u^3w d_2 + 4uw^3 d_6 + d_8 w^4;
\end{gathered}
\end{equation}
$H_q$ is psd if and only if its discriminant with respect to $v$,
namely \eqref{E:disc}, is psd.
\end{proof}
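The 8th-power identity \eqref{E:h24} used in the proof above is likewise a
finite polynomial check:
\begin{verbatim}
# Check of the 8th-power identity (E:h24).
from sympy import symbols, sqrt, expand

x, y = symbols('x y')
rhs = (256*(x**8 + y**8)
       + (x + sqrt(3)*y)**8 + (x - sqrt(3)*y)**8
       + (sqrt(3)*x + y)**8 + (sqrt(3)*x - y)**8)
print(expand(420*(x**2 + y**2)**4 - rhs))  # 0
\end{verbatim}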
It should be possible to characterize $\mathcal E(W^*)$, though we do not
do so here. One family of extremal elements in $\mathcal E(W^*)$ is
parameterized by $\alpha \in {\mathbb R}$:
\begin{equation}
\omega_{\alpha}(x,y):= x^8 + 28 x^2 y^6 + 24 \alpha x y^7 + 3(1 + 2\alpha^2) y^8
\in \mathcal E(W^*).
\end{equation}
In this case,
\begin{equation}
\begin{gathered}
H_{\omega_{\alpha}}(u,v,w) = 6 v^2 w^2 + 12 \alpha v w^3 + u^4 + 4 u w^3 + (3 +
6\alpha^2)w^4 \\ = 6 (v w + \alpha w^2)^2 + (u+w)^2(u^2-2uw+3w^2)
\end{gathered}
\end{equation}
is psd; $H_{\omega_\alpha}(0,1,0) = H_{\omega_\alpha}(1,\alpha,-1) = 0$,
and $H_{\omega_\alpha}(u,v,0)=u^4$ has a 4th order zero at $(0,1,0)$.
It is unclear whether $\omega_{\alpha}$ has other interesting algebraic properties.
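Both the displayed sum-of-squares form of $H_{\omega_\alpha}$ and its zero at
$(1,\alpha,-1)$ can be verified directly; a short sympy sketch (symbol names
ours):
\begin{verbatim}
# Check of the sum-of-squares form of H_{omega_alpha} and its zero.
from sympy import symbols, expand

u, v, w, A = symbols('u v w alpha')
H = 6*v**2*w**2 + 12*A*v*w**3 + u**4 + 4*u*w**3 + (3 + 6*A**2)*w**4
sos = 6*(v*w + A*w**2)**2 + (u + w)**2*(u**2 - 2*u*w + 3*w**2)
print(expand(H - sos))                    # 0
print(H.subs([(u, 1), (v, A), (w, -1)]))  # 0: the zero at (1, alpha, -1)
\end{verbatim}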
We now limit our focus to the section of even symmetric
octics. Let
\begin{equation}\label{E:tildeF}
\widetilde F = \{ ((A,B,C)):= A x^8 + B x^6y^2 + C x^4y^4 + B x^2y^6 +
A y^8\ : \ A,
B, C \in {\mathbb R}\}
\end{equation}
denote the cone of even symmetric octics, and let
\begin{equation}
\widetilde W = W \cap \widetilde F.
\end{equation}
Then $\widetilde W$ is no longer a blender, because (P3) fails
spectacularly. However, it is still a closed convex cone. We give
the inner product explicitly:
\begin{equation}\label{E:esoip}
p_i = ((A_i,B_i,C_i)) \implies [p_1,p_2] = A_1A_2 + \tfrac{B_1B_2}{28}
+ \tfrac{C_1C_2}{70} + \tfrac{B_1B_2}{28} + A_1A_2.
\end{equation}
Let $(\widetilde W)^* \subset \widetilde F$
denote the dual cone to $\widetilde W$. Here is a special
case of \cite[p.142]{Re1}.
\begin{theorem}
$(\widetilde W)^* = W^* \cap \widetilde F$.
\end{theorem}
\begin{proof}
Suppose $p \in \widetilde W$
and $q \in W^* \cap \widetilde F$. Then $p \in W$ and $q \in W^*$ imply
$[p,q] \ge 0$, so $q \in (\widetilde W)^*$. Suppose now that $q \in
(\widetilde W)^*$; we wish to show that $q \in W^*$. Choose $r \in W$,
and let $r_1 = r$, $r_2(x,y) = r(x,-y)$, $r_3(x,y) = r(y,x)$ and
$r_4(x,y) = r(y,-x)$. Since $q \in \widetilde F$, $[r_j,q] = [r,q]$
for $1 \le j \le 4$; since $p = r_1+r_2+r_3+r_4 \in \widetilde W$,
$0 \le [p,q] = 4[r,q]$. Thus, $[r,q] \ge 0$ as desired.
\end{proof}
We need not completely analyze $(\widetilde W)^*$ to determine
$\widetilde W$. The following suffices.
\begin{lemma}
If $q =((1,0,0))$,
$((4,28,0))$ or $((6-4\lambda^2+3\lambda^4, 28(6-\lambda^2), 420))$, $\lambda \in
{\mathbb R}$, then $q \in W^*$.
\end{lemma}
\begin{proof}
Using the notation of \eqref{E:octdef}, suppose
\begin{equation}
q(x,y) = ((d_0,28d_2,70d_4)) = d_0 x^8 + 28 d_2x^6y^2 + 70 d_4 x^4y^4 +
28 d_2 x^2y^6 + d_0 y^8.
\end{equation}
Comparison with \eqref{E:esoip} shows that
\begin{equation}\label{E:wstar}
q \in \widetilde W^* \iff ((A,B,C)) \in \widetilde W \implies 2d_0 A +
2d_2 B +d_4 C \ge 0.
\end{equation}
On the other hand, \eqref{E:Hoct} and Theorem 7.6 imply that $q \in
\widetilde W^*$ if and only if
\begin{equation}
\begin{gathered}
H_q(u,v,w) \\ = d_0(u^4+w^4) + d_2(u^2+w^2)(6v^2 + 4uw) + d_4(v^4 +
12uv^2w +6u^2w^2)
\end{gathered}
\end{equation}
is psd. If $(d_0,d_2,d_4) = (1,0,0)$, then $H_q(u,v,w) =
u^4+w^4$, which is psd,
and if $(d_0,d_2,d_4) = (4,1,0)$, then
\begin{equation}
H_q(u,v,w) = 4(u+w)^2(u^2-uw+w^2) + 6(u^2+w^2)v^2.
\end{equation}
Finally, if $(d_0,d_2,d_4) = (6-4\lambda^2+3\lambda^4, 6-\lambda^2, 6)$, then a
computation gives
\begin{equation}
\begin{gathered}
2H_q(u,v,w) = 2(6-4\lambda^2+3\lambda^4)(u^4+w^4) \\ +
2(6-\lambda^2)(u^2+w^2)(6v^2 + 4uw) +
12(v^4 + 12uv^2w +6u^2w^2) \\
= 48(u+w)^2v^2 + 4\lambda^2(u+w)^4 + 3\lambda^4(u^2-w^2)^2 \\ +
3(2v^2 + 2(u+w)^2 - \lambda^2(u^2+w^2))^2.
\end{gathered}
\end{equation}
Note that $H_q(1,\pm \lambda, -1) = 0$.
\end{proof}
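The sum-of-squares identity in the last display of this proof can be confirmed
symbolically:
\begin{verbatim}
# Check of the sum-of-squares identity for 2*H_q.
from sympy import symbols, expand

u, v, w, L = symbols('u v w lam')
lhs = (2*(6 - 4*L**2 + 3*L**4)*(u**4 + w**4)
       + 2*(6 - L**2)*(u**2 + w**2)*(6*v**2 + 4*u*w)
       + 12*(v**4 + 12*u*v**2*w + 6*u**2*w**2))
rhs = (48*(u + w)**2*v**2 + 4*L**2*(u + w)**4 + 3*L**4*(u**2 - w**2)**2
       + 3*(2*v**2 + 2*(u + w)**2 - L**2*(u**2 + w**2))**2)
print(expand(lhs - rhs))  # 0
\end{verbatim}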
An important family of elements in $\widetilde W$ is
\begin{equation}
\begin{gathered}
\psi_\lambda(x,y) : = \tfrac 12\left( (x^2 + \lambda xy - y^2)^4 + (x^2 - \lambda xy -
y^2)^4 \right)\\ =
((1,\ 6\lambda^2-4,\ \lambda^4-12\lambda^2+6)).
\end{gathered}
\end{equation}
\begin{theorem}
The extremal elements of $\widetilde W$ are $x^4y^4$ and $\{\psi_{\lambda} :
\lambda \ge 0\}$. Hence $p =((A,B,C)) \in \widetilde W$ if and only if
\begin{equation}\label{E:cond}
\begin{gathered}
A = B=0,\ C \ge 0, \text{ or } A > 0,\ B \ge - 4A,\ 36AC \ge
B^2 - 64AB - 56A^2.
\end{gathered}
\end{equation}
\end{theorem}
\begin{proof}
By Lemma 7.7 and \eqref{E:wstar}, if $p \in \widetilde W$, then $A\ge
0$, $A + 4B \ge 0$ and
\begin{equation}\label{E:hcond}
2(6 - 4\lambda^2 + 3\lambda^4)A + 2(6 - \lambda^2) B + 6C \ge 0.
\end{equation}
We have $A = p(1,0)=p(0,1)\ge 0$, and if $A=0$ and $p = \sum h_k^4$, then $xy
| h_k$, hence $p = ((0,0,C))$ with $C \ge 0$. Otherwise, assume that $A =
1$, so that \eqref{E:cond} becomes
\begin{equation}
B \ge -4, \quad C \ge \tfrac 1{36}(B^2 - 64B -56).
\end{equation}
The first inequality follows from $((4,28,0)) \in \widetilde W^*$, and
we can thus write $B =
6\alpha^2 - 4$, where $\alpha = \sqrt{\frac {B+4}6}$. Put $\lambda = \alpha$ in
\eqref{E:hcond} to obtain
\begin{equation}\label{E:parab}
C \ge \alpha^4 - 12\alpha^2 + 6 = \tfrac 1{36}(B^2 - 64B -56).
\end{equation}
Suppose $p=((A,B,C))$ satisfies \eqref{E:cond}. If $A = 0$, then
$p = c x^4y^4 \in \widetilde W$. If $A > 0$, take $A=1$ and
substitute $B = 6\alpha^2 - 4$, so that, by \eqref{E:parab},
\begin{equation}
p = ((1,B,C)) = ((1,6\alpha^2-4, \alpha^4-12\alpha^2+6)) + ((0,0,\gamma)) =
\psi_{\alpha}(x,y) + \gamma x^4y^4
\end{equation}
for some $\gamma \ge 0$, hence $p \in \widetilde W$.
\end{proof}
Taking $(A,B) = (1,0)$, we obtain \eqref{E:48}. Suppose $\lambda, \mu \ge
-2 $. Then Theorem 7.8 implies that (cf.\ \eqref{E:flam})
$f_{\lambda}(x,y)f_{\mu}(x,y) \in W$ if and only if
\begin{equation}
(17 -12\sqrt 2) (\lambda+2) \le \mu+2 \le (17 + 12\sqrt 2) (\lambda+2)
\end{equation}
There is a peculiar resonance with the example after Theorem 4.7.
\begin{proof}[Proof of Theorem 7.4]
Suppose $((A,B,C))$ satisfies \eqref{E:cond}. If $A=0$, then $B=0$ and
$((0,0,C)) =
C(x^2y^2)^2$. Otherwise, suppose $A=1$ and write $B = 6\alpha^2-4$, so
\begin{equation}
B = 6\alpha^2-4, \quad C = \tfrac 1{36}(B^2 - 64B -56) + T =
\alpha^4 - 12\alpha^2 + 6 + T, \quad T \ge 0.
\end{equation}
Observe that
\begin{equation}
\begin{gathered}
(x^4 + (3\alpha^2-2) x^2y^2 + y^4)^2 + (T- 8\alpha^4)(x^2y^2)^2 \\=
((1,6\alpha^2-4,9\alpha^4 - 12 \alpha^2 + 6)) + ((0,0,T-8\alpha^4)) = ((1,B,C)),
\end{gathered}
\end{equation}
so if $T \ge 8\alpha^4$, then we are done. If $0 \le T \le 8\alpha^4$, note that
\begin{equation}
\begin{gathered}
\tfrac12 \left( \bigl((x^2 - \sqrt{\lambda} x y - y^2)^2 + \mu x^2 y^2\bigr)^2 +
\bigl((x^2 + \sqrt{\lambda} x y - y^2)^2 + \mu x^2 y^2\bigr)^2\right)\\ = ((1,
6\lambda+2\mu-4,
6-12\lambda+\lambda^2-4\mu+2\lambda\mu+\mu^2))
\end{gathered}
\end{equation}
is a sum of two squares of psd forms if $\mu \ge 0$. One
solution to the system
\begin{equation}
\begin{gathered}
6\alpha^2-4= 6\lambda+2\mu-4, \\ \alpha^4 - 12\alpha^2 + 6 + T =
6-12\lambda+\lambda^2-4\mu+2\lambda\mu+\mu^2
\end{gathered}
\end{equation}
is
\begin{equation}
\begin{gathered}
\lambda = \frac{ 3\alpha^2 - \sqrt{\alpha^4+T}}2, \quad
\mu = \frac{3(\sqrt{\alpha^4+T} - \alpha^2)}2.
\end{gathered}
\end{equation}
Evidently, $\mu \ge 0$; since $T \le 8\alpha^4$, $\lambda \ge 0$, so
$\sqrt{\lambda}$ is real.
\end{proof}
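Both algebraic identities used in the proof above are finite checks; a sympy
sketch (names ours):
\begin{verbatim}
# Check of the two identities used in the proof of Theorem 7.4.
from sympy import symbols, sqrt, expand

x, y, a, T, L, m = symbols('x y alpha T lam mu')
ev = lambda A_, B_, C_: (A_*x**8 + B_*x**6*y**2 + C_*x**4*y**4
                         + B_*x**2*y**6 + A_*y**8)
B, C = 6*a**2 - 4, a**4 - 12*a**2 + 6 + T
lhs1 = (x**4 + (3*a**2 - 2)*x**2*y**2 + y**4)**2 + (T - 8*a**4)*(x**2*y**2)**2
print(expand(lhs1 - ev(1, B, C)))  # 0
lhs2 = (((x**2 - sqrt(L)*x*y - y**2)**2 + m*x**2*y**2)**2
        + ((x**2 + sqrt(L)*x*y - y**2)**2 + m*x**2*y**2)**2) / 2
rhs2 = ev(1, 6*L + 2*m - 4, 6 - 12*L + L**2 - 4*m + 2*L*m + m**2)
print(expand(lhs2 - rhs2))  # 0
\end{verbatim}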
\label{sec:PartitionFunctionCS}
The five-dimensional lift of Nekrasov's partition function of $SU({N_c})$ theory
is given by the summation over the set of ${N_c}$ Young diagrams (or colored partitions)
$\{ \Ya\lambda\a \}_{\a=1}^{{N_c}}$, as follows:
\begin{equation}
Z^{\mathrm{inst}}(\epsilon_1, \epsilon_2; a_\a, \Lambda)
= \sum_{ \{ \Ya\lambda\a \}} \frac{
\left(
e^{-\frac{\epsilon_1+ \epsilon_2}{2}}
\Lambda^2\right)^{{N_c} \cdot |\lambda|}}
{\prod_{\alpha, \beta=1}^{{N_c}}
N^{\{\lambda_{\alpha}\}}_{\alpha,\beta}\left(\epsilon_1,\epsilon_2;a_\alpha\right) }~.
\label{Klift-0}
\end{equation}
The product in the denominator is the equivariant Euler character of the tangent space to the instanton moduli
space $M({N_c},k)$ at a fixed point of the toric action, which is labeled by $\{ \Ya\lambda\a \}_{\a=1}^{{N_c}}$ with $|\lambda|=k$.
Each factor is given by
\begin{equation}
N^{\{\lambda_{\alpha}\}}_{\alpha,\beta}\left(\epsilon_1,\epsilon_2;a_\alpha\right)
=
\prod_{s \in \lambda_\alpha}
\left( 1 - e^{
\ell_{\lambda_\beta}(s)\epsilon_1 - (a_{\lambda_\alpha}(s) + 1)\epsilon_2
+ a_\alpha - a_\beta
}\right)
\prod_{s \in \lambda_\beta}
\left( 1 - e^{
-(\ell_{\lambda_\alpha}(s)+1)\epsilon_1 +a_{\lambda_\beta}(s)\epsilon_2
+ a_\alpha - a_\beta
}\right).
\end{equation}
The product
$\prod_{\alpha, \beta=1}^{{N_c}} N^{\{\lambda_{\alpha}\}}_{\alpha,\beta}\left(\epsilon_1,\epsilon_2;a_\alpha\right) $
consists of
$2{N_c} k$ factors, which agrees with the complex dimension of $M({N_c},k)$.
In \cite{NY2, GNY} the five-dimensional lift is mathematically identified as
the $K$-theoretic lift and it is computed as follows:
\begin{eqnarray}
& &Z_{m}^{\mathrm{inst}}(\epsilon_1, \epsilon_2; a_\a, \Lambda) \nonumber \\
&=& \sum_{k=0}^\infty
\left(
e^{-\frac{1}{2}({N_c}+m)(\epsilon_1+ \epsilon_2)}\Lambda^{2{N_c}} \right)^k
\sum_{i} (-1)^i {\mathrm{ch}} H^i(M({N_c},k), {\cal L}^{\otimes m}) \nonumber \\
&=& \sum_{\{\Ya\lambda\a\}} \frac{
\left(
e^{-\frac{1}{2}({N_c}+m)(\epsilon_1+ \epsilon_2)}\Lambda^{2{N_c}} \right)^{|\lambda|}}
{\prod_{\alpha, \beta} N^{\{\lambda_{\alpha}\}}_{\alpha,\beta}\left(\epsilon_1,\epsilon_2;a_\alpha\right) }
\cdot \exp \left(m \sum_\alpha \sum_{s \in \lambda_\alpha} (a_\alpha - \ell'(s)\epsilon_1 - a'(s) \epsilon_2) \right)~,
\label{Klift-m}
\end{eqnarray}
where $M({N_c},k)$ is the framed moduli space of rank ${N_c}$ torsion free sheaves $E$ on ${\mathbb P}^2$ with
$c_2(E)=k$. The line bundle ${\cal L}$ over $M({N_c},k)$ is defined by
\begin{equation}
{\cal L} := {\mathrm{det}} \left[ R^1 (p_{2})_* ({\cal E} \otimes (p_1)^*{\cal O}_{{\mathbb P}^2}(-\ell_\infty))\right]~,
\end{equation}
where ${\cal E}$ is the universal sheaf on ${\mathbb P}^2 \times M({N_c},k)$, and $p_{1}$ and $p_{2}$ are
the projections onto the first and the second factors.
Physically $Z_{m}^{\mathrm{inst}}$ is the instanton part of the partition function of
$SU({N_c})$ gauge theory on ${\mathbb R}^4 \times S^1$ with eight supercharges,
and the power $m \in {\mathbb Z}$ of the line bundle ${\cal L}$ is identified
as the coefficient of the five-dimensional Chern-Simons term \cite{Tac}.
Let $(q,t):=(e^{\epsilon_2},e^{-\epsilon_1})$,
${\bf e}_\alpha := e^{-a_\alpha}$ and
$Q_{\alpha,\beta} := \QQ\alpha\beta$;%
\footnote{The different conventions $(q,t):=(e^{-\epsilon_2},e^{\epsilon_1})$ and
$Q_{\beta,\alpha} := \QQ\alpha\beta$
are also used in the literature.}
then
$Z_{m}^{\mathrm{inst}}(\epsilon_1, \epsilon_2; a_\a, \Lambda)$
is written as
\begin{equation}
\ZQ{inst}{m}qt{{\bf e}_1,\cdots,{\bf e}_{{N_c}}}\Lambda
=
\sum_{\{\Ya\lambda\a\}}
{
\prod_{\a =1}^{{N_c}}
\left(v^{-{N_c}}\Lambda^{2{N_c}}
\left(-{\bf e}_\a \right)^{-m}\right)^{|\Ya\lambda\a |}
\fla{\Ya\lambda\a }qt ^{-m}
\over
\prod_{\a ,\b = 1}^{{N_c}}
\Nek{\Ya\lambda\a }{\Ya\lambda\b }{Q_{\a ,\b }}qt
}~,
\label{eq:NekrasovZinstM}
\end{equation}
with $v:=(q/t)^{1\over2}$ and
\begin{eqnarray}
\Nek{\lambda_\alpha}{\lambda_\beta}{Q_{\alpha,\beta}}qt
&:=&
N^{\{\lambda_{\alpha}\}}_{\beta,\alpha}
\left(\epsilon_1,\epsilon_2;a_\alpha\right)
\cr
&=&
\!\prod_{s \in \lambda_\alpha}
\left( 1- q^{a_{\lambda_\alpha}(s)} t^{\ell_{\lambda_\beta}(s) +1} {Q_{\alpha,\beta}} \right)
\!\prod_{s \in \lambda_\beta}
\left( 1- q^{- a_{\lambda_\beta}(s) - 1} t^{-\ell_{\lambda_\alpha}(s)} {Q_{\alpha,\beta}} \right),~~~
\end{eqnarray}
where
$|\lambda|$ is the number of boxes of $\lambda$ and
\begin{equation}
\fla\lambda qt
:=
\prod_{s\in\lambda}(-1) q^{a'(s) + {1\over2}} t^{ - \ell'(s) -{1\over2}}
=
\prod_{(i,j)\in\lambda}
(-1) q^{\lambda_i - j + {1\over2}} t^{ -\lambda^\vee_j + i -{1\over2}},
\end{equation}
is the framing factor
\footnote{
In terms of
$||\lambda||^2
:=
\sum_i \lambda_i^2
=
2\sum_{s\in\lambda}(a(s)+{1\over2})
$,
this is
$
\fla\lambda qt
=
(-1)^{|\lambda|} q^{{1\over2}||\lambda||^2} t^{ -{1\over2}||\lambda^\vee||^2}
$.
}
which has been proposed by Taki \cite{Taki}.
This is nothing but the $m$-dependent $(q,t)$ factor of
the partition function.
Note that the framing factor satisfies the following symmetry:
\begin{equation}
\fla\lambda qt
=
\fla\lambda{q^{-1}}{t^{-1}} ^{-1}
=
\fla{\lambda^\vee}tq ^{-1} . \label{f-symmetry}
\end{equation}
Let
$u = (qt)^{{1\over2}}$ and
$v =\left({q/t}\right)^{{1\over2}}$.
We have the following six equivalent expressions of $\Nek{\lambda}{\mu}Qqt $:
\\
{\bf Proposition.}
\begin{eqnarray}
\Nek{\lambda}{\mu}Qqt
&=&
\prod_{(i,j)\in\mu }
\left( 1 - Q\, q^{\lambda_i-j} t^{\mu^\vee_j-i+1} \right)
\prod_{(i,j)\in\lambda}
\left( 1 - Q\, q^{-\mu_i+j-1} t^{-\lambda^\vee_j+i } \right),
\label{eq:NekIp}
\\
\Nek{\lambda}{\mu}Qqt
&=&
\prod_{(i,j)\in\lambda}
\left( 1 - Q\, q^{\lambda_i-j} t^{\mu^\vee_j-i+1} \right)
\prod_{(i,j)\in\mu }
\left( 1 - Q\, q^{-\mu_i+j-1} t^{-\lambda^\vee_j+i } \right),
\label{eq:NekIm}
\end{eqnarray}
\begin{eqnarray}
\Nek{\lambda}{\mu}Qqt
&=&
\Pi_0} %{\widetilde\Pi\left(-v^{-1} Q\, q^{\lambda}t^{\rho},\ t^{\mu^\vee}q^{\rho}\right)
/
\Pi_0} %{\widetilde\Pi\left(-v^{-1} Q\, t^{\rho},\ q^{\rho}\right),
\label{eq:NekIIp}
\\
\Nek{\lambda}{\mu}Qqt
&=&
\Pi_0} %{\widetilde\Pi\left(-v^{-1} Q\, t^{-\lambda^\vee}q^{-\rho},\ q^{-\mu}t^{-\rho}\right)
/
\Pi_0} %{\widetilde\Pi\left(-v^{-1} Q\, q^{-\rho},\ t^{-\rho}\right),
\end{eqnarray}
\begin{eqnarray}
\Nek{\lambda}{\mu}Qqt
&=&
\Pi\left(Q\, q^\lambda t^\rho,q^{-\mu}t^{-\rho};q,t\right)
/
\Pi\left(Q\, t^\rho,t^{-\rho};q,t\right),
\label{eq:NekIIIp}
\\
\Nek{\lambda}{\mu}Qqt
&=&
\Pi\left(Q\, t^{\mu^\vee} q^\rho,t^{-\lambda^\vee}q^{-\rho};t^{-1},q^{-1}\right)
/
\Pi\left(Q\, q^\rho,q^{-\rho};t^{-1},q^{-1}\right),
\end{eqnarray}
where
\begin{eqnarray}
\Pi_0} %{\widetilde\Pi(-x,y)
& := &
\Exp{
-\sum_{n>0}
{1
\over n} p_n(x) p_n(y)
}
=
\prod_{i,j}(1- x_i y_j),
\label{eq:Pizero}
\\
\Pi(v x,y;q,t)
& := &
\Exp{
\sum_{n>0}{1\over n}
{
t^{{n\over 2}}-t^{-{n\over 2}}
\over
q^{{n\over 2}}-q^{-{n\over 2}}
}
p_n(x) p_n(y)
}
=
\left\{
\begin{array}{ll}
\displaystyle{
\prod_{i,j}
{(u x_i y_j ; q)_\infty
\over
(v x_i y_j ; q)_\infty},
}
\quad &|q|<1
\cr
\displaystyle{
\prod_{i,j}
{(u^{-1} x_i y_j ; q^{-1})_\infty
\over
(v^{-1} x_i y_j ; q^{-1})_\infty},
}
\quad &|q^{-1}|<1.
\end{array}
\right.
\end{eqnarray}
Here
$(x;q)_\infty$ is the $q$-shifted factorial
$
(x;q)_\infty := \prod_{k\in\bZ_{\geq 0}} (1-q^k x)
$.
Note that when $|q^{-1}|, |t^{-1}| < 1$,
(\ref{eq:NekIIp}) is written as
\begin{equation}
\prod_{i,j=1}^\infty
\left(1-Q\, q^{\lambda_i-j} t^{\mu^\vee_j-i+1}\right)
/
\left(1-Q\, t^{1-i} q^{-j}\right),
\end{equation}
and also when $|q|<1$,
(\ref{eq:NekIIIp}) is
\begin{equation}
\prod_{i,j=1}^\infty
{
\left(Q\, q^{\lambda_i-\mu_j} t^{j-i+1} ; q\right)_\infty
\over
\left(Q\, q^{\lambda_i-\mu_j} t^{j-i } ; q\right)_\infty
}
{
\left(Q\, t^{j-i } ; q\right)_\infty
\over
\left(Q\, t^{j-i+1} ; q\right)_\infty
}.
\end{equation}
The equivalence of these six expressions is proved in appendix {A}.
The first formula,
(\ref{eq:NekIp}),
is given by
\cite{rf:NakajimaYoshioka},
and
(\ref{eq:NekIIp})--(\ref{eq:NekIIIp})
by \cite{rf:NekrasovOkounkov}.
In this article, we mainly use
(\ref{eq:NekIm}) and (\ref{eq:NekIIp}).
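For small diagrams, the equivalence of (\ref{eq:NekIp}) and (\ref{eq:NekIm})
can also be tested in exact rational arithmetic. The following helpers (ours,
not from any standard package) encode the box products directly:
\begin{verbatim}
# Exact check that (eq:NekIp) and (eq:NekIm) agree on small diagrams.
from fractions import Fraction

def conj(lam):                       # conjugate partition
    m = lam[0] if lam else 0
    return [sum(1 for li in lam if li >= j) for j in range(1, m + 1)]

def part(lam, i):                    # lam_i, 1-based, zero outside
    return lam[i - 1] if 1 <= i <= len(lam) else 0

def nek(lam, mu, Q, q, t):           # box products as in (eq:NekIp)
    lc, mc = conj(lam), conj(mu)
    val = Fraction(1)
    for i in range(1, len(mu) + 1):
        for j in range(1, mu[i - 1] + 1):
            val *= 1 - Q * q**(part(lam, i) - j) * t**(part(mc, j) - i + 1)
    for i in range(1, len(lam) + 1):
        for j in range(1, lam[i - 1] + 1):
            val *= 1 - Q * q**(-part(mu, i) + j - 1) * t**(-part(lc, j) + i)
    return val

def nek2(lam, mu, Q, q, t):          # box products as in (eq:NekIm)
    lc, mc = conj(lam), conj(mu)
    val = Fraction(1)
    for i in range(1, len(lam) + 1):
        for j in range(1, lam[i - 1] + 1):
            val *= 1 - Q * q**(part(lam, i) - j) * t**(part(mc, j) - i + 1)
    for i in range(1, len(mu) + 1):
        for j in range(1, mu[i - 1] + 1):
            val *= 1 - Q * q**(-part(mu, i) + j - 1) * t**(-part(lc, j) + i)
    return val

Q, q, t = Fraction(1, 3), Fraction(2, 5), Fraction(3, 7)
print(nek([3, 1], [2, 2, 1], Q, q, t) == nek2([3, 1], [2, 2, 1], Q, q, t))
# True
\end{verbatim}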
\subsection{Another form of the partition function}
Let us transform Nekrasov's partition function $Z_{m}^{\mathrm{inst}}$ so that its structure becomes
transparent, and compare it with the amplitude constructed by the method of the topological vertex.
Using (\ref{eq:appsetFormulaII}), i.e.
\begin{equation}
\prod_{(i,j)\in\lambda}q^{\mu_i-j}
\prod_{(i,j)\in\mu}q^{-\lambda_i+j-1}
=
\prod_{(i,j)\in\mu}q^{\mu_i-j}
\prod_{(i,j)\in\lambda}q^{-\lambda_i+j-1},
\end{equation}
we can show,
from (\ref{eq:NekIp}), that
\begin{eqnarray}
\Nek{\mu}{\lambda}{Q^{-1}}qt
&=&
\Nek\lambda\mu{v^2Q}qt
Q^{-|\lambda|-|\mu|}
\fla\mu qt
/
\fla\lambda qt
\cr
&=&
\Nek{\mu^\vee}{\lambda^\vee}{Q}tq
(vQ)^{-|\lambda|-|\mu|}
\fla\mu qt
/
\fla\lambda qt~.
\label{inversion}
\end{eqnarray}
Here we use (\ref{eq:NDual}).
Hence we have
\begin{eqnarray}
\prod_{\a <\b }^{{N_c}}
\Nek{\Ya\lambda\b }{\Ya\lambda\a }{Q_{\b ,\a }}qt
&=&
\prod_{\a <\b }^{{N_c}}
\Nek{{\Yav\lambda\b }}{{\Yav\lambda \a }}{Q_{\a ,\b }}tq
\cr
&\times&
\prod_{\a =1}^{{N_c}}
\left(
v^{{N_c}-1}\prod_{\b =1}^{\a -1} Q_{\b ,\b +1}^\b \prod_{\b =\a }^{{N_c}-1}Q_{\b ,\b +1}^{{N_c}-\b }
\right)^{-|\Ya\lambda \a |}
\hskip -10pt
\fla{\Ya\lambda \a }qt ^{-{N_c}+2\a -1},~~~
\end{eqnarray}
and thus Nekrasov's formula (\ref{eq:NekrasovZinstM})
is rewritten as
\begin{equation}
\Zm{inst}{m}
=
\sum_{\Ya\lambda {1},\cdots,\Ya\lambda {{N_c}}}
{
\prod_{\a =1}^{{N_c}}
\Lambda_{\a ,m}{}^{|\Ya\lambda \a |}
\fla{\Ya\lambda \a }qt ^{{N_c}-m-2\a +1}
\over
\prod_{\a <\b }^{{N_c}}
\Nek{\Ya\lambda \a }{\Ya\lambda \b }{Q_{\a ,\b }}qt
\Nek{{\Yav\lambda \b }}{{\Yav\lambda \a }}{Q_{\a ,\b }}tq
\prod_{\a =1}^{{N_c}}
\Nek{\Ya\lambda \a }{\Ya\lambda \a }{1}qt
},
\end{equation}
where
\begin{equation}
\Lambda_{\a ,m}
:=
v^{-1}\Lambda^{2{N_c}} \left(-{\bf e}_\a \right)^{-m}
\prod_{\b =1}^{\a -1} Q_{\b ,\b +1}^\b \prod_{\b =\a }^{{N_c}-1}Q_{\b ,\b +1}^{{N_c}-\b }.
\label{eq:Lambdai}
\end{equation}
For example, for $SU(2)$ theory we have
\begin{eqnarray}
\Zm{inst}{m}
&=&
\sum_{\lambda_1, \lambda_2}
\frac{
\left(v^{-1}\Lambda^4 Q_{H} \right)^{|\lambda|}
(-\Q_1)^{-m|\lambda_1|} (-\Q_2)^{-m|\lambda_2|}
\fla{\lambda_1}qt^{ 1-m}
\fla{\lambda_2}qt^{-1-m} }
{
\Nek{\Ya\lambda 1 }{\Ya\lambda 2 }{Q_{H}}qt
\Nek{{\Yav\lambda 2 }}{{\Yav\lambda 1 }}{Q_{H}}tq
\Nek{\Ya\lambda 1 }{\Ya\lambda 1}{1}qt
\Nek{\Ya\lambda 2 }{\Ya\lambda 2}{1}qt
}
\nonumber \\
&=&
\sum_{\lambda_1, \lambda_2}
\frac{
\left(v^{-1}\Lambda^4 Q_{H} \right)^{|\lambda|}
(-\Q_1)^{-m|\lambda_1|} (-\Q_2)^{-m|\lambda_2|}
\fla{\lambda_1}qt^{ 1-m}
\fla{\lambda_2}qt^{-1-m} }
{\Nek{\Ya\lambda 1 }{\Ya\lambda 1}{1}qt
\Nek{\Ya\lambda 2 }{\Ya\lambda 2}{1}qt }
\cr
& &\times
{
\Pi_0} %{\widetilde\Pi\left(-v^{-1}Q_H t^\rho,\ q^\rho \right)
\Pi_0} %{\widetilde\Pi\left(-v Q_H q^\rho,\ t^\rho \right)
\over
\Pi_0} %{\widetilde\Pi\left(-v^{-1}Q_H q^{\Ya \lambda 1}t^\rho,\ t^{\Yav\lambda 2}q^\rho \right)
\Pi_0} %{\widetilde\Pi\left(-v Q_H t^{\Yav\lambda 2}q^\rho,\ q^{\Ya \lambda 1}t^\rho \right)
}~,
\label{SU2}
\end{eqnarray}
where
\begin{equation}
\Nek{\lambda}{\lambda}{1}qt
=
\prod_{s \in \lambda}
\left( 1-q^{a(s)} t^{\ell(s)+1} \right)
\left( 1- q^{- a(s)-1} t^{-\ell(s)}\right),
\end{equation}
and $Q_{H} = Q_{1,2}$. It is this form of Nekrasov's partition function that is
obtained from the refined topological vertex with formulas of the Macdonald
functions.
\subsection{Symmetry as a character of $Spin(4)$ }
If Nekrasov's partition function gives the generating function of refined BPS state counting
in the compactification of $M$ theory on local Calabi-Yau spaces, it has to be a character of
$Spin(4) \simeq SU(2)_L \times SU(2)_R$, since the spin of massive BPS particle in
five dimensions is a representation of $Spin(4)$.
In general, if a function $f(u, v)$ in two variables $(u,v)$ is invariant under
both $u \to u^{-1}$ and $v \to v^{-1}$, it is a linear combination of $Spin(4)$ characters
\begin{equation}
f(u,v) = \sum_{(s_L, s_R)} a_{(s_L, s_R)} \chi_{s_L}(u) \chi_{s_R}(v)~, \label{Spin-decomp}
\end{equation}
where
\begin{equation}
\chi_n (z) := z^n + z^{n-2} + \cdots + z^{-n+2} + z^{-n}
= \frac{z^{n+1} - z^{-n-1}}{z - z^{-1}}~,
\end{equation}
is the character of the irreducible representation of $SU(2)$ with spin $n/2$.
Hence, if the $k$-instanton part
$Z^{(k)}(q,t)$
of the partition function is invariant
under the transformations
$r_L : (q,t) \to (q^{-1},t^{-1})$ and $r_R : (q,t) \to (t,q)$,
then $Z^{(k)}(q,t)$ is expanded as
\begin{equation}
Z^{(k)}(q,t) = \sum_{(s_L, s_R)} a_{(s_L, s_R)}^{(k)} \chi_{s_L}(u) \chi_{s_R}(v)~, \label{character}
\end{equation}
with rational coefficients $a_{(s_L, s_R)}^{(k)}$. Recall that $u= \sqrt{qt}$ and $v=\sqrt{q/t}$.
Actually, we will find that an appropriate scaling of $Z^{(k)}(q,t)$ depending on the instanton number $k$ is
necessary for the genuine invariance under the above transformations.
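Concretely, a decomposition of the form \eqref{Spin-decomp} can be computed by
greedy top-weight peeling, since $\chi_{s_L}(u)\chi_{s_R}(v)$ has the unique
highest monomial $u^{s_L}v^{s_R}$; a minimal sketch (the dict encoding of
Laurent monomials is ours, and the procedure terminates for genuine character
sums):
\begin{verbatim}
# Greedy top-weight peeling of Spin(4) characters from a Laurent
# polynomial in (u, v), encoded as {(a, b): c} for the monomial u^a v^b.
def chi_decompose(coeffs):
    coeffs = dict(coeffs)
    result = {}
    while any(c for c in coeffs.values()):
        a, b = max(k for k, c in coeffs.items() if c)   # top weight
        c = coeffs[(a, b)]
        result[(a, b)] = c
        for i in range(-a, a + 1, 2):       # chi_a(u) = u^a + u^(a-2) + ...
            for j in range(-b, b + 1, 2):
                coeffs[(i, j)] = coeffs.get((i, j), 0) - c
    return result

# chi_1(u) chi_1(v) = (u + 1/u)(v + 1/v):
print(chi_decompose({(1, 1): 1, (1, -1): 1, (-1, 1): 1, (-1, -1): 1}))
# {(1, 1): 1}
\end{verbatim}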
If the partition function takes the form \eqref{character},
then the $k$-instanton part $F^{(k)}(q,t)$
of the free energy is also a linear combination of
$Spin(4)$ characters. Furthermore, if the pole structure of the free energy is appropriate, we can factor out
the character of the half-hypermultiplet and subtract the multicovering contributions
to obtain the expansion of the total free energy in the Gopakumar-Vafa form:
\begin{eqnarray}
F &=& \log Z = \sum_{k=0}^\infty F^{(k)} (Q_\beta;q,t) \nonumber \\
&=& \sum_{\beta \in H_2(X, {\mathbb Z})}
\sum_{(j_L, j_R)} \sum_{n=1}^\infty
\frac{N_\beta^{(j_L,j_R)} u^n v^n}{n(u^n v^n -1)(u^n- v^n)} \chi_{n\cdot j_L}(u) \chi_{n\cdot j_R}(v)
Q_\beta^n~. \label{GVrefined}
\end{eqnarray}
The coefficients $N_\beta^{(j_L,j_R)}$ of the expansion \eqref{GVrefined} are conjectured to be
nonnegative integers, since from the viewpoint of the Calabi-Yau compactification of $M$ theory
they are interpreted as multiplicities of the five-dimensional BPS particles arising from
$M2$ branes wrapping on a two-cycle $\beta \in H_2(X, {\mathbb Z})$ in the Calabi-Yau 3-fold $X$.
We have checked the integrality of the refined BPS state counting from the $SU(2)$ and $SU(3)$
partition functions up to instanton number $2$. The result is presented in appendix {C}.
Since the transformation
$(q,t) \to (t^{-1},q^{-1})$
is compensated by the transpose of colored partitions, we have
\begin{equation}
\prod_{\alpha, \beta=1}^{{N_c}}
\NY{\lambda_\alpha}{ \lambda_\beta} {q^{-1}}{t^{-1}}{Q_{\alpha,\beta}}
= \prod_{\alpha, \beta=1}^{{N_c}}
\NY{\lambda_\alpha^\vee}{\lambda_\beta^\vee}tq{Q_{\alpha,\beta}} ~.
\end{equation}
By \eqref{inversion}, we also find that
\begin{equation}
\prod_{\alpha, \beta=1}^{{N_c}}
\NY{\lambda_\alpha}{ \lambda_\beta} {t^{-1}}{q^{-1}}{Q_{\alpha,\beta}^{-1}}
= \left( \frac{q}{t} \right)^{{N_c} |\lambda|} \prod_{\alpha, \beta=1}^{{N_c}}
\NY{\lambda_\alpha}{ \lambda_\beta}tq{Q_{\alpha,\beta}} ~.
\end{equation}
Therefore, we obtain
\begin{eqnarray}
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta} {q^{-1}}{t^{-1}}{Q_{\alpha,\beta}} }
&=&
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta}tq{Q_{\alpha,\beta}} }~, \nonumber \\
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta}{t^{-1}}{q^{-1}}{Q_{\alpha,\beta}^{-1}} }
&=& \left( \frac{t}{q} \right)^{{N_c} k}
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta} tq{Q_{\alpha,\beta}} }~. \nonumber \\
\end{eqnarray}
Thus if we can prove
\begin{equation}
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta} tq{Q_{\alpha,\beta}^{-1}} }
=
\sum_{\{\lambda_\a\}, |\lambda| =k}
\frac{1}{ \prod_{\alpha, \beta=1}^{{N_c}} \NY{\lambda_\alpha}{ \lambda_\beta}tq{Q_{\alpha,\beta}} }~, \label{Qinv}
\end{equation}
the partition function
$Z^{\mathrm{inst}}_m$
is invariant under both of the reflections $r_L$ and $r_R$.
It is easy to see that the property \eqref{Qinv} is valid for ${N_c} =2$,
since the exchange of two partitions effectively induces ${\bf e}_\alpha \to {\bf e}_\alpha^{-1}$.
For ${N_c} > 2$ the validity of \eqref{Qinv} seems nontrivial. The overall reflection of the roots
${\bf e}_\alpha \to {\bf e}_\alpha^{-1}$ cannot be induced by any permutation of colored partitions.
However, we have checked by explicit computations that \eqref{Qinv} is true for ${N_c} = 3$ and $k=1,2$.
This is consistent with the computation in appendix {C}, where we obtain the results of
refined BPS state counting from Nekrasov's partition function of $SU(3)$ gauge theory.
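The $k=1$ instance of this check is small enough to reproduce exactly,
reusing the helper \texttt{nek} from the sketch in the previous section; here
$q$, $t$ and the ${\bf e}_\alpha$ are arbitrary exact test values (ours), and
the expected \texttt{True} reflects the check reported above:
\begin{verbatim}
# k = 1 instance of (Qinv) for N_c = 3, reusing nek() from the earlier sketch.
from fractions import Fraction
from itertools import product

def Zk(es, q, t, configs):
    total = Fraction(0)
    for lams in configs:
        val = Fraction(1)
        for a, b in product(range(3), repeat=2):
            val *= nek(lams[a], lams[b], es[a] / es[b], q, t)
        total += 1 / val
    return total

q, t = Fraction(2, 5), Fraction(3, 7)
es = [Fraction(2), Fraction(3), Fraction(5)]
one_box = [([1], [], []), ([], [1], []), ([], [], [1])]
print(Zk(es, q, t, one_box) == Zk([1 / e for e in es], q, t, one_box))
# True, in line with the N_c = 3 check reported above
\end{verbatim}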
The symmetry of Nekrasov's partition function with the Chern-Simons coupling
can also be derived easily.
Because of (\ref{eq:nDual}), the factor $\Nek{\lambda}{\mu}Qqt $
enjoys the following duality relations:
\begin{equation}
\Nek{\lambda}{\mu}{vQ}qt
= \Nek{\mu}{\lambda}{v^{-1} Q}{q^{-1}}{t^{-1}}
= \Nek{\mu^\vee}{\lambda^\vee}{v^{-1} Q}tq .
\label{eq:NDual}
\end{equation}
From the expression
(\ref{eq:NekIm}),
we also have
\begin{equation}
\Nek{\lambda}{\mu}{vQ}qt
=
\Nek{\mu}{\lambda}{v Q^{-1}}qt
Q^{|\lambda|+|\mu|}
\fla\lambda qt
/
\fla\mu qt .
\label{eq:NDualQ}
\end{equation}
Since
\begin{equation}
\Nek{\lambda}{\mu}{v^2 Q}qt
\Nek{\mu}{\lambda}{v^2 Q^{-1}}qt
=
\Nek{\lambda}{\mu}Qqt
\Nek{\mu}{\lambda}{Q^{-1}}qt
v^{2(|\lambda|+|\mu|)},
\end{equation}
we find that
\begin{eqnarray}
\Nek{\lambda}{\mu}Qqt
\Nek{\mu}{\lambda}{Q^{-1}}qt
v^{|\lambda|+|\mu|}
&=&
\Nek{\lambda}{\mu}{Q^{-1}}{q^{-1}}{t^{-1}}
\Nek{\mu}{\lambda}{Q }{q^{-1}}{t^{-1}}
v^{-|\lambda|-|\mu|}
\cr
&=&
\Nek{\lambda^\vee}{\mu^\vee}{Q^{-1}}tq
\Nek{\mu^\vee}{\lambda^\vee }{Q }tq
v^{-|\lambda|-|\mu|}.
\end{eqnarray}
Thus, Nekrasov's partition function $\Zm{inst}{m}$
has the following symmetries:
\begin{eqnarray}
\ZQ{inst}{m}qt{ \Q_1,\cdots,\Q_{{N_c}} }{\Lambda}
&=&
\ZQ{inst}{-m}{q^{-1}}{t^{-1}}{ \Q_1^{-1},\cdots,\Q_{{N_c}}^{-1} }{\Lambda}
\cr
&=&
\ZQ{inst}{-m}tq{ \Q_1^{-1},\cdots,\Q_{{N_c}}^{-1} }{\Lambda} .
\end{eqnarray}
\setcounter{equation}{0} \section{Geometric Engineering and Toric Geometry}
In this section following \cite{KKV, KMV, IK-P2, HIV}, we review the toric geometry that is
necessary for geometric engineering. Geometric engineering tells us how
to obtain ${\cal N}=2$ $SU(N_c)$ super Yang-Mills theory with $N_f$ fundamental matters
from type II(A) string theory on local Calabi-Yau manifold $K_S$,
the canonical bundle of a 4-cycle $S$. The (toric) geometry of the 4-cycle $S$
can be described by the (dual) toric diagram.
The prescription of the geometric engineering implies that the toric diagram of $S$
has $N_c$ horizontal internal edges (\lq\lq color\rq\rq\ $D5$ branes) and
$N_f$ horizontal external edges (\lq\lq flavor\rq\rq\ $D5$ branes).
For example, the vertical distance of \lq\lq color\rq\rq\ $D5$ branes represents
vacuum expectation values of the Higgs fields or the mass of $W$ bosons.
The matter fermions are given by
fundamental strings connecting a \lq\lq color\rq\rq\ $D5$ brane
and a \lq\lq flavor\rq\rq\ $D5$ brane.
The vertical distance of a \lq\lq color\rq\rq\ $D5$ brane
and a \lq\lq flavor\rq\rq\ $D5$ brane represents the mass of the corresponding matter fermion.
One of the properties of toric diagrams that arise from geometric engineering
is that each vertex has a unique horizontal edge. In the following
we will consider toric diagrams in which we specify the horizontal edges as distinguished.
In the computation by the method of topological vertex, we cut the internal horizontal edges.
Then the contribution of each component is given by an amplitude of
\lq\lq the vertex on a strip\rq\rq \cite{IK-P3}. By gluing these amplitudes
we obtain the partition function for the local toric Calabi-Yau manifold $K_S$.
In the compactification of type IIA string theory on local Calabi-Yau manifold,
${\cal N}=2$ supersymmetric $SU(N)$ gauge theory is geometrically engineered by ALE fibration
of $A_{N-1}$ type over the rational curve ${\bf P}^1$.
The fiber consists of a chain of $N-1$ rational curves whose intersection form is given by the minus of
the Cartan matrix of $A_{N\!-\!1}$. The holomorphic 2-cycles in the fiber are in one-to-one correspondence
with the positive roots of $A_{N-1}$.
The (dual) toric diagram takes the form of \lq\lq ladder\rq\rq\ diagram with
$N$ parallel horizontal edges. In the toric diagram the faces correspond to compact $4$-cycles (divisors).
\FigLadder
In the ladder diagram of ALE fibration over ${\bf P}^1$,
we find $N-1$ divisors, all of which are ${\bf P}^1$
fibration over ${\bf P}^1$, namely the Hirzebruch surfaces. The degree of the Hirzebruch
surface can be determined by the (relative) slopes of the vertical edges of the face.
We will denote the Hirzebruch surface of degree $n$ by ${\bf F}_n$.
It is known that for each $N$ there are $N+1$ types of such geometry, which we label by $m=0,1, \cdots, N$ \cite{IK-P2, HIV}.
The integer $m$ is related to the coupling constant of five-dimensional Chern-Simons coupling \cite{Tac}.
Let us call such geometry toric $SU(N)_m$ geometry.
We can characterize the toric $SU(N)_m$ geometry by saying that its compact $4$-cycles are
$\{ {\bf F}_{N-2+m}, {\bf F}_{N-4+m}, \cdots, {\bf F}_{-N+2+m} \}$.
\FigDivisor
The K\"ahler parameters of $SU(N)_m$ geometry are $T_B$ of the base space ${\bf P}^1$
and $T_{F_i}~(i=1, \cdots, N-1)$ of the fiber which is a chain of $(N-1)$ ${\bf P}^1$'s.
In the subdiagram of Figure 2, the rational curves of both side edges correspond to the
fiber of ${\bf F}_{N-2k+m}$, and their K\"ahler parameters are $T_{F_k}$. On the
other hand, if we denote the K\"ahler parameters of the upper and the lower edges by
$T_{B_k}$ and $T_{B_{k+1}}$, respectively, then the difference is related to the degree of
the Hirzebruch surface as follows:
\begin{equation}
T_{B_k} - T_{B_{k+1}} = (N-2k+m) T_{F_k}~. \label{recursion}
\end{equation}
From the recursion relation \eqref{recursion} we find, if $N+m = 2r+1$ is odd, that
\begin{eqnarray}
T_{B_{r+1}} &:=& T_B~, \nonumber \\
T_{B_i} &=& T_B + \sum_{j=i}^{r} (N+m-2j) T_{F_j}~, \quad (1 \leq i \leq r)~, \nonumber \\
T_{B_i} &=& T_B + \sum_{j=r+1}^{i-1} (2j -N -m) T_{F_j}~, \quad (r+1 \leq i \leq N)~, \label{odd}
\end{eqnarray}
and if $N+m = 2r$ is even,
\begin{eqnarray}
T_{B_r} &=& T_{B_{r+1}} := T_B~, \nonumber \\
T_{B_i} &=& T_B + \sum_{j=i}^{r-1} (N+m -2j) T_{F_{j}}~, \quad (1 \leq i \leq r-1)~, \nonumber \\
T_{B_i} &=& T_B + \sum_{j=r+1}^{i-1} (2j -N -m) T_{F_{j}}~, \quad (r+2 \leq i \leq N)~. \label{even}
\end{eqnarray}
In \eqref{odd} and \eqref{even} we take the first relations as initial conditions
in solving \eqref{recursion}.
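With these initial conditions, \eqref{odd} and \eqref{even} can be checked
against \eqref{recursion} symbolically; a sketch (helper and symbol names
ours):
\begin{verbatim}
# Symbolic check that (odd)/(even) solve the recursion
# T_{B_k} - T_{B_{k+1}} = (N - 2k + m) T_{F_k}.
from sympy import symbols, expand

def TB(i, N, m, T_B, TF):
    s = N + m
    r = s // 2
    if s % 2:                  # N + m odd: T_{B_{r+1}} = T_B
        if i <= r:
            return T_B + sum((s - 2*j) * TF[j] for j in range(i, r + 1))
        return T_B + sum((2*j - s) * TF[j] for j in range(r + 1, i))
    if i <= r:                 # N + m even: T_{B_r} = T_{B_{r+1}} = T_B
        return T_B + sum((s - 2*j) * TF[j] for j in range(i, r))
    return T_B + sum((2*j - s) * TF[j] for j in range(r + 1, i))

for N, m in [(4, 1), (5, 1)]:  # one odd and one even value of N + m
    T_B = symbols('T_B')
    TF = symbols('T_F0:%d' % (N + 1))
    print(all(expand(TB(k, N, m, T_B, TF) - TB(k + 1, N, m, T_B, TF)
                     - (N - 2*k + m) * TF[k]) == 0 for k in range(1, N)))
# True, True
\end{verbatim}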
From the slope of each edge in Figure 1, we can also compute the framing index
by the rule to be explained in section 4.2. Let us denote the index of left, right, upper and lower edges by
$n_{L,k}, n_{R,k}, n_{B,k}$ and $n_{B,k+1}$, respectively.
Then we compute
\begin{eqnarray}
n_{B, k} &=& (m-k+1,1) \wedge (-N+k, 1) = N+m-2k+1, \nonumber \\
n_{L,k} &=& (-1, 0) \wedge (N-k-2,-1) = 1, \nonumber \\
n_{R,k} &=& (m-k,1) \wedge (-1, 0) = 1.
\end{eqnarray}
Note that $n_{L,k}$ and $n_{R,k}$ are independent of $k$. By definition,
the framing index changes sign if we reverse the orientation of the edge
or replace the representation associated to the edge by its transpose.
We will use these framing indices in the computation of the partition function
by gluing the refined topological vertices.
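The wedge products behind these indices are immediate to reproduce; for
instance (symbol names ours):
\begin{verbatim}
# The wedge products behind the framing indices.
from sympy import symbols, expand

m, k, N = symbols('m k N')
wedge = lambda a, b: a[0]*b[1] - a[1]*b[0]
print(expand(wedge((m - k + 1, 1), (-N + k, 1))))  # N + m - 2*k + 1
print(wedge((-1, 0), (N - k - 2, -1)))             # 1
print(wedge((m - k, 1), (-1, 0)))                  # 1
\end{verbatim}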
\setcounter{equation}{0} \section{Refined Topological Vertex}
In \cite{AK},
we defined the refined topological vertex,
which is written not in terms of the Schur functions
but in terms of the Macdonald functions.
Here we slightly modify it by improving the framing factor.
\subsection{Refined topological vertex}
\label{sec:RefinedTV}
Let $P_{\lambda/\mu} (x;q,t)$ and $\langle P_\lambda|P_\lambda\rangle_{q,t}$
be the Macdonald function in the infinite number of variables
$x=(x_1,x_2,\cdots)$
and its scalar product, respectively,
defined in Appendix {B}.
We introduce an involution $\iota$ acting on the power sum function
$p_n(x)$ by
$\iota(p_n) = -p_n$.
For example,
\begin{equation}
\iota p_n(q^\lambda t^\rho)
=
- \sum_{i=1}^\infty (q^{n\lambda_i}-1)t^{n({1\over2}-i)}
- { 1\over t^{n\over 2} - t^{-{n\over 2}} }.
\label{eq:involution}
\end{equation}
Note that
$
\iota p_n(t^\rho)
=
-p_n(t^{\rho})
=
p_n(t^{-\rho}).
$
We define a vertex $V_{\mu \lambda}{}^\nu$ as follows:%
\footnote{
Although we will show that the Nekrasov formula is represented by our vertex $V_{\mu \lambda }{}^\nu$,
one can also produce it through the following vertex without the involution $\iota$
$$
U_{\mu\lambda}{}^\nu
=
P}%{\widetilde P_\lambda (t^\rho;q,t)
\sum_\sigmaP}%{\widetilde P_{\mu/\sigma}(q^{-\lambda } t^{-\rho};q^{-1},t) \
P}%{\widetilde P_{\nu/\sigma}(q^{\lambda } t^{\rho};q^{-1},t)
\langle P}%{\widetilde P_\sigma|P}%{\widetilde P_\sigma\rangle_{q,t}.
$$
}
\begin{equation}
V_{\mu \lambda}{}^\nu
:=
P}%{\widetilde P_\lambda(t^\rho;q,t)
\sum_\sigma
\iota P}%{\widetilde P_{\mu ^\vee /\sigma^\vee}(-t^{\lambda^\vee}q^{\rho};t,q) \
P}%{\widetilde P_{\nu /\sigma}(q^{\lambda} t^{\rho};q,t)
v^{|\sigma|},
\end{equation}
where
\begin{equation}
P_\lambda(t^{\rho};q,t)
=
\prod_{s\in\lambda}
{
(-1) t^{{1\over2}} q^{a(s)}
\over
1-q^{a(s)} t^{\ell(s)+1}
},
\qquad
P_{\lambda^\vee}(-q^{\rho};t,q)
=
\prod_{s\in\lambda}
{
(-1)q^{-{1\over2}} q^{-a(s)}
\over
1-q^{-a(s)-1} t^{-\ell(s)}
},
\label{eq:LargeNPrincipalSpecialization}
\end{equation}
which follows by substituting $Q=0$ into (\ref{eq:Specialization}).
From (\ref{eq:ACMac}),
$V_{\mu \lambda}{}^\nu $
is rewritten as
\begin{equation}
V_{\mu \lambda}{}^\nu
=
P}%{\widetilde P_\lambda(t^\rho;q,t)
\sum_\sigma
\iota P}%{\widetilde P_{\mu/\sigma}(q^{-\lambda}t^{-\rho};q,t) \
P}%{\widetilde P_{\nu /\sigma}(q^{\lambda} t^{\rho};q,t)
\langle P_\sigma|P_\sigma\rangle_{q,t}\
g_\mu(q,t),
\label{eq:RTV}
\end{equation}
with
\begin{equation}
g_\lambda(q,t)
:=
{
v^{|\lambda|}
\over
\langle P_\lambda|P_\lambda\rangle_{q,t}
}
=
\prod_{s\in\lambda}
\left({q\over t}\right)^{1\over2}
{
1-q^{a (s) } t^{ \ell (s)+1}
\over
1-q^{a (s)+1} t^{ \ell(s) }
},
\end{equation}
which satisfies
\begin{equation}
g_\lambda(q,t)
= g_\lambda(q^{-1},t^{-1})
= g_{\lambda^\vee}(t,q)^{-1}.
\end{equation}
From \eqn{eq:RTV}, \eqn{eq:ACprincipalMac} and \eqn{eq:Wsymm},
one can show the symmetry%
\footnote{
If we replace Macdonald functions
$P_{\lambda/\mu} (x;q,t)$'s
in
$V_{\mu \lambda}{}^\nu $
by ``normalized'' Macdonald functions
$
\widetilde P_{\lambda/\mu} (x;q,t) :=
P_{\lambda/\mu} (x;q,t)
\sqrt{g_\lambda(q,t)/g_\mu(q,t)}
$,
then the $g$ factors in
(\ref{eq:SymmVi})--(\ref{eq:SymmViii})
disappear. Because
$
\widetilde P_{\lambda} (x; q,t)
\widetilde P_{\lambda^\vee} (y; t,q)
=
P_{\lambda} (x; q,t)
P_{\lambda^\vee} (y; t,q)
$,
all results in this article remain the same even if we use the normalized Macdonald functions
$\widetilde P_{\lambda/\mu} (x;q,t)$.
}
\begin{eqnarray}
g_\lambda(q,t)^{-1}
V_{\lambda\bullet}{}^\bullet
&=& V_{\bullet\lambda}{}^\bullet
= V_{\bullet \bullet}{}^\lambda ,
\label{eq:SymmVi}
\\
g_\mu(q,t)^{-1}
V_{\mu \bullet}{}^\nu
&=&
g_\nu(q,t)^{-1}
V_{\nu \bullet}{}^\mu ,
\label{eq:SymmVii}
\\
V_{\bullet \lambda}{}^\nu
&=&
V_{\bullet \nu}{}^\lambda.
\label{eq:SymmViii}
\end{eqnarray}
Incorporating the framing factor,
we define our refined topological vertices $\Ciio\mu \lambda \nu{q}{t}$ and $\Cooi\mu \lambda\nu{q}{t}$
as follows:
\begin{eqnarray}
\Ciio\mu \lambda \nu{q}{t}
&:=&
V_{\mu\lambda }{}^\nu v^{-|\nu|} \fla\nu qt ^{-1}
\\
&=&
P}%{\widetilde P_\lambda (t^\rho;q,t)
\sum_\sigma\iota P}%{\widetilde P_{\mu^\vee /\sigma^\vee}(-t^{\lambda^\vee}
q^{\rho};t,q) \
P}%{\widetilde P_{\nu /\sigma}(q^{\lambda } t^{\rho};q,t)
v^{|\sigma|-|\nu|}
\fla\nu qt ^{-1},
\cr
\Cooi\mu \lambda\nu{q}{t}
&:=&
\Ciio{\mu^\vee}{\lambda^\vee}{\nu^\vee}{t}{q}
(-1)^{|\lambda|+|\mu|+|\nu|}
\\
&=&
P}%{\widetilde P_{\lambda^\vee}(-q^{\rho};t,q)
\sum_\sigma
P}%{\widetilde P_{\nu^\vee/\sigma^\vee}(-t^{\lambda^\vee} q^{\rho};t,q) \
\iota P}%{\widetilde P_{\mu /\sigma}(q^{\lambda } t^{\rho};q,t)
v^{-|\sigma|+|\nu|}
\fla\nu qt . \nonumber
\end{eqnarray}
The lower and the upper indices correspond to
the incoming and the outgoing representations,
respectively,
and the edges of the topological vertex are ordered clockwise.
Although only the refined vertices of the above types are mainly used in this article,
the following vertices may also be useful:
\begin{eqnarray}
\Coii\mu \lambda \nu{q}{t}
&:=&
\Ciio\mu\lambda\nu{q}{t}
v^{|\mu|+|\nu|}
\fla\mu qt \fla\nu qt
\\
&=&
P}%{\widetilde P_\lambda (t^\rho;q,t)
\sum_\sigma\iota P}%{\widetilde P_{\mu^\vee /\sigma^\vee}(-t^{\lambda^\vee}
q^{\rho};t,q) \
P}%{\widetilde P_{\nu /\sigma}(q^{\lambda } t^{\rho};q,t)
v^{|\sigma|+|\mu|}
\fla\mu qt ,
\cr
\Cioo\mu\lambda \nu{q}{t}
&:=&
\Cooi\mu \lambda \nu{q}{t}
v^{-|\mu|-|\nu|}
\fla\mu qt ^{-1}\fla\nu qt ^{-1}
=
\Coii{\mu^\vee}{\lambda^\vee}{\nu^\vee}{t}{q}
(-1)^{|\lambda|+|\mu|+|\nu|}
\\
&=&
P}%{\widetilde P_{\lambda^\vee}(-q^{\rho};t,q)
\sum_\sigma
P}%{\widetilde P_{\nu^\vee/\sigma^\vee}(-t^{\lambda^\vee} q^{\rho};t,q) \
\iota P}%{\widetilde P_{\mu/\sigma}(q^{\lambda } t^{\rho};q,t)
v^{-|\sigma|-|\mu|}
\fla\mu qt ^{-1}. \nonumber
\end{eqnarray}
\FigTV
Note that, when $q=t$,
the topological vertex in \cite{AKMV} is
\begin{equation}
C_{\mu\lambda \nu} (q)
=
s_\lambda (q^\rho)
\sum_\sigma s_{\mu/\sigma}(q^{\lambda^\vee + \rho}) \ s_{\nu^\vee
/\sigma}(q^{\lambda + \rho})
\prod_{s\in\nu} q^{a(s)-\ell(s)}.
\end{equation}
Since
$
s_{\mu/\sigma}(q^{\lambda^\vee + \rho }) =
\iota s_{\mu/\sigma}(q^{-\lambda - \rho})
$,
which follows from \eqn{eq:Maya},
our refined topological vertex
$
\lim_{t\rightarrow q}\Ciio\mu \lambda \nu{q}{t}
$
coincides with the topological vertex
$
C_{\mu\lambda \nu^\vee} (q)
$.
It is well-known that in the operator formalism the Schur functions are realized in terms of free fermions.
Although we have no fermionic realization of the Macdonald functions, they are described by bosons,
as shown in \cite{rf:AOS}. Consequently, our refined topological vertex admits a bosonic realization
through that of the Macdonald functions.
\subsection{Gluing rules}
Here we show our gluing rules for constructing the partition function from a web diagram.
Let us consider a graph with trivalent vertices and edges.
Each edge is associated with
an integer vector ${\mbox{\boldmath $v$}} = (v_{1},v_{2})\in\bZ^2$.
Hence the trivalent vertex with edges indexed by $(i,j,k)$ in the counterclockwise ordering
is associated with a triplet of integer vectors $({\mbox{\boldmath $v$}}_i,{\mbox{\boldmath $v$}}_j,{\mbox{\boldmath $v$}}_k)$.
If we choose these vectors to be outgoing, they should satisfy the following conditions
\begin{equation}
{\mbox{\boldmath $v$}}_i + {\mbox{\boldmath $v$}}_j + {\mbox{\boldmath $v$}}_k = {\mbox{\boldmath $0$}}}%{{\bf 0},
\qquad
{\mbox{\boldmath $v$}}_i\wedge{\mbox{\boldmath $v$}}_j = 1,
\qquad
(
{\mbox{\boldmath $v$}}_j\wedge{\mbox{\boldmath $v$}}_k =
{\mbox{\boldmath $v$}}_k\wedge{\mbox{\boldmath $v$}}_i =1), \label{normalize}
\end{equation}
with
${\mbox{\boldmath $v$}}_i \wedge {\mbox{\boldmath $v$}}_j := v_{i,1} v_{j,2} - v_{i,2} v_{j,1} $.
These correspond to the Calabi-Yau condition and the smoothness condition.
Since the refined topological vertex has no cyclic symmetry, we should specify a preferred direction.
Therefore one of these three vectors should be the preferred one and we denote it by white arrow.
Note that if we choose the middle edge as the preferred direction, ${\mbox{\boldmath $v$}}_j = (-1, 0)$, then
the condition \eqref{normalize} implies that ${\mbox{\boldmath $v$}}_i = (a,1), {\mbox{\boldmath $v$}}_k=(b,-1)$ with $a+b=1$.
\FigFeynman
Let
$({\mbox{\boldmath $v$}}_i,{\mbox{\boldmath $v$}}_j,{\mbox{\boldmath $v$}}_k)$ and
$({\mbox{\boldmath $v$}}_k,{\mbox{\boldmath $v$}}_{i'},{\mbox{\boldmath $v$}}_{j'})$ be the vectors associated with
the vertices at the origin and at the end of the vector ${\mbox{\boldmath $v$}}_k$ of the $k$th edge, respectively.
If we choose so that
${\mbox{\boldmath $v$}}_i$ and ${\mbox{\boldmath $v$}}_j$ are incoming and
${\mbox{\boldmath $v$}}_k$ and ${\mbox{\boldmath $v$}}_{i'}$ are outgoing,
then the framing index $n_k$ of the $k$th edge is defined by
\begin{equation}
n_k :=
{\mbox{\boldmath $v$}}_i\wedge{\mbox{\boldmath $v$}}_{i'}=
{\mbox{\boldmath $v$}}_j\wedge{\mbox{\boldmath $v$}}_{j'}.
\end{equation}
Each edge is associated also with
a Young diagram $\lambda$ and
a K\"ahler parameter $Q\in{\mathbb C}}%{{\bf C}$
so that the propagator for the $k$th edge is defined as
\begin{equation}
Q_k^{|\Ya\lambda k|} {\fla{\Ya\lambda k}qt}^{n_k},
\end{equation}
and we glue the amplitudes
by summing over the representation $\lambda$ on each edge.
\setcounter{equation}{0} \section{Four-Point Functions}
Here we show how to calculate the partition functions.
The building blocks for them are the following four-point functions.
\subsection{Building blocks}
Assume that each vertex has a horizontal edge, which we take as the preferred direction.
Fix the orientation of the preferred direction, say $(-1,0)$;
then we have four possibilities of the configuration of two horizontal edges [Fig. 5].
Although the slopes and directions of ``vertical," i.e. nonhorizontal, edges can be arbitrary,
we show in Figure 5 the simplest one whose internal edge is orthogonal to the preferred direction and
we tentatively take the orientation of ``vertical" edges from the top to the bottom.
The framing index is $1$, $0$, $0$ and $-1$, respectively.
They are independent of the slope of ``vertical" edges,
but change the sign according to the orientations.
\FigFourpoint
We order the three edges at each vertex in the clockwise direction such that
the preferred direction is in the middle position.
This fixes the ordering of three edges uniquely.
The lower and upper indices of the refined vertex correspond to
the incoming and the outgoing representation.
Then the following four-point functions are building blocks for the partition function:
\begin{eqnarray}
\Ziiio\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
&:=&
\sum_\eta
\Ciio\mu{\Ya\lambda 1}\eta qt
\Ciio\eta{\Ya\lambda 2}\nu qt
Q^{|\eta|} \fla\eta qt ,
\cr
\Ziioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
&:=&
\sum_{\eta}
\Ciio{\mu}{\Ya\lambda 1}{\eta}{q}{t}
\Cooi{\nu}{\Ya\lambda 2}{\eta}{q}{t}
Q^{|\eta|},
\cr
\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
&:=&
\sum_{\eta}
\Cooi{\eta}{\Ya\lambda 1}{\mu}{q}{t}
\Ciio{\eta}{\Ya\lambda 2}{\nu}{q}{t}
Q^{|\eta|},
\cr
\Ziooo\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
&:=&
\sum_\eta
\Cooi\eta{\Ya\lambda 1}\mu qt
\Cooi\nu{\Ya\lambda 2}\eta qt
Q^{|\eta|} \fla\eta qt ^{-1}.
\label{eq:FourPointFunctions}
\end{eqnarray}
Note that
\begin{eqnarray}
\Ziooo\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
&=&
\sum_{\eta^\vee}
\Ciio{\eta^\vee}{\Yav\lambda 1}{\mu^\vee} tq
\Ciio{\nu^\vee}{\Yav\lambda 2}{\eta^\vee} tq
(-1)^{|\mu|+|\Ya\lambda 1|+|\Ya\lambda 2|+|\nu|}
Q^{|\eta|} \fla\eta tq
\cr
&=&
\Ziiio{\nu^\vee}{\Yav\lambda 2}{\Yav\lambda 1}{\mu^\vee} Qtq
(-1)^{|\mu|+|\Ya\lambda 1|+|\Ya\lambda 2|+|\nu|}.
\end{eqnarray}
We will show that
$\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt$
is related to
$\Ziioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt$
by the flop.
If we take the orientation of ``vertical" edges from the bottom to the top,
the sign of the framing index changes.
The corresponding four-point functions are written by
$\Coii\mu\lambda\nu qt$'s and
$\Cioo\mu\lambda\nu qt$'s and
they are the same as
those in (\ref{eq:FourPointFunctions}) up to
the framing factors
$\fla\mu qt ^{\pm 1} \fla\nu qt ^{\pm 1}$
for the outer ``vertical" edges.
Although we have fixed a preferred direction in this article,
we can change it in some special cases.
Let
\begin{equation}
\Zoiio\bullet\nu\bullet\mu Qqt
:=
\sum_\eta
\Ciio\bullet\eta\mu qt
\Cooi\bullet\eta\nu qt
Q^{|\eta|},
\end{equation}
then from (\ref{eq:SymmViii}),
we have the following symmetry
\begin{equation}
\Ziioo\bullet\mu\nu\bullet Qqt
=
\Zoiio\bullet\nu\bullet\mu Qqt
v^{|\mu|-|\nu|}
\fla\mu qt
/
\fla\nu qt,
\label{eq:Slice}
\end{equation}
which changes the preferred direction.
\FigSlice
\subsection{OPE formula}
Next, we present some formulas for calculating the partition functions.
Let us denote a symmetric function $f$ in the set of variables
$(x^1_1,x^1_2,\cdots,x^2_1,x^2_2,\cdots,x^N_1,x^N_2,\cdots)$
by
$
f\left(x^1,x^2,\cdots,x^N\right)
$
or
$
f\left(\{x^i\}_{i=1}^N\right)
$.
To calculate the partition functions,
the essential part is the following Cauchy formula for the Macdonald function,
\begin{equation}
\sum_\lambda
P_\lambda(x;q,t)\, P_{\lambda^\vee} (y;t,q)
=
\Pi_0(x,y),
\label{eq:conjugateCauchy}
\end{equation}
or, more generally,
\begin{equation}
\sum_\lambda
P_{\lambda/\mu} (x;q,t)\, P_{\lambda^\vee/\nu^\vee} (y;t,q)
=
\Pi_0(x,y)
\sum_\lambda
P_{\mu^\vee/\lambda^\vee} (y;t,q)\, P_{\nu/\lambda} (x;q,t),
\label{eq:skewCauchy}
\end{equation}
with $\Pi_0(x,y)$ in (\ref{eq:Pizero}) and the adding formula
\begin{equation}
\sum_\mu
P_{\lambda/\mu} (x;q,t)
P_{\mu/\nu} (y;q,t)
=
P_{\lambda/\nu} (x,y;q,t).
\label{eq:addSkewMacdonald}
\end{equation}
Note that for $c\in{\mathbb C}$,
$\Pi_0(cx,y) = \Pi_0(x,cy)$,
and for our involution $\iota$ in (\ref{eq:involution}),
$\Pi_0(\iota x,y) = \Pi_0(x,\iota y) = \Pi_0(x,y)^{-1}$.
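At $t=q$ the Macdonald function reduces to the Schur function, and with two
variables on each side (\ref{eq:conjugateCauchy}) becomes a finite identity.
The following Python sketch checks this special case, under the assumption
that $\Pi_0(x,y)=\prod_{i,j}(1+x_iy_j)$, which is our reading of
(\ref{eq:Pizero}):
\begin{verbatim}
# Check of the conjugate Cauchy formula at t=q (Schur case) with
# two variables on each side.
from sympy import symbols, expand

x1, x2, y1, y2 = symbols('x1 x2 y1 y2')

def schur2(a, b, u, v):
    """Schur polynomial s_{(a,b)} in two variables, a >= b >= 0."""
    return (u*v)**b * sum(u**k * v**(a - b - k) for k in range(a - b + 1))

# partitions with at most two rows and two columns, with conjugates
pairs = [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((2, 0), (1, 1)),
         ((1, 1), (2, 0)), ((2, 1), (2, 1)), ((2, 2), (2, 2))]
lhs = sum(schur2(*lam, x1, x2) * schur2(*lamc, y1, y2)
          for lam, lamc in pairs)
rhs = (1 + x1*y1) * (1 + x1*y2) * (1 + x2*y1) * (1 + x2*y2)
assert expand(lhs - rhs) == 0
\end{verbatim}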
Using these we have the following lemma.
\\
{\bf Lemma.}
Let $x$, $y$, $z$ and $w$ be sets of variables
and $\alpha$, $\beta$, $\gamma\in{\mathbb C}$. Then
\begin{eqnarray}
&& \hskip -4pt
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
P_{{\mu^\vee}/{\Yav\sigma 1} } \left(x;t,q\right)
P_{{\eta}/{\Ya\sigma 1}} \left(y;q,t\right)
P_{{\eta^\vee}/{\Yav\sigma 2} } \left(z;t,q\right)
P_{{\nu}/{\Ya\sigma 2}} \left(w;q,t\right)
\alpha^{|\Ya\sigma 1|}
\beta^{|\eta|}
\gamma^{|\Ya\sigma 2|}
\cr
&=&
\sum_{\eta}
P_{{\mu^\vee}/{\eta^\vee} } \left(x,\alpha\beta z;t,q\right)
P_{{\nu}/{\eta}} \left(\beta\gamma y,w;q,t\right)
\left(\alpha\beta\gamma\right)^{|\eta|}
\Pi_0\left(y, \beta z \right).
\label{eq:FourPointFormula}
\end{eqnarray}
\noindent{\it Proof.\hskip10pt}
Let $\alpha=a/b$, $\beta=b/c$, $\gamma=c/d$;
then, from
(\ref{eq:scaleTrans}),
(\ref{eq:skewCauchy}) and
(\ref{eq:addSkewMacdonald}),
the left-hand side of the above equation is
\begin{eqnarray}
&&
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
P_{{\mu^\vee}/{\Yav\sigma 1} } \left({x\over a};t,q\right)
P_{{\eta}/{\Ya\sigma 1}} \left(b y;q,t\right)
P_{{\eta^\vee}/{\Yav\sigma 2} } \left({z\over c};t,q\right)
P_{{\nu}/{\Ya\sigma 2}} \left(d w;q,t\right)
a^{|\mu|} d^{-|\nu|}
\cr
&=&
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
P_{{\mu^\vee}/{\Yav\sigma 1} } \left({x\over a};t,q\right)
P_{{\Yav\sigma 1}/{\eta^\vee} } \left({z\over c};t,q\right)
P_{{\Ya\sigma 2}/{\eta}} \left(b y;q,t\right)
P_{{\nu}/{\Ya\sigma 2}} \left(d w;q,t\right)
a^{|\mu|} d^{-|\nu|}
\Pi_0\left(b y, {z\over c} \right)
\cr
&=&
\sum_{\eta}
P_{{\mu^\vee}/{\eta^\vee} } \left({x\over a},{z\over c};t,q\right)
P_{{\nu}/{\eta}} \left(b y,d w;q,t\right)
a^{|\mu|} d^{-|\nu|}
\Pi_0\left(b y, {z\over c} \right)
\cr
&=&
\sum_{\eta}
P_{{\mu^\vee}/{\eta^\vee} } \left(x,{a\over c}z;t,q\right)
P_{{\nu}/{\eta}} \left({b\over d}y,w;q,t\right)
\left({a\over d}\right)^{|\eta|}
\Pi_0\left(y, {b\over c}z \right),
\end{eqnarray}
and the lemma is proven.
\hfill\fbox{}
Successively using this lemma, we obtain the following OPE formula,
which is useful for calculating more general diagrams.
\\
{\bf Proposition.}
Let $x^i$'s be sets of variables,
$c_{i,i+1}\in{\mathbb C}$ and
$c_{i,j}:=\prod_{k=i}^{j-1} c_{k,k+1}$. Then
\begin{eqnarray}
\sum_{\{\Ya\lambda 1,\Ya\lambda 2,\cdots,\Ya\lambda{2N-1}\}}
\prod_{i=1}^N
P_{{\Yav\lambda{2i-2}}/{\Yav\lambda{2i-1}} } \left(x^{2i-1};t,q\right)
P_{{\Ya \lambda{2i }}/{\Ya \lambda{2i-1}} } \left(x^{2i };q,t\right)
\prod_{i=1}^{2N-1}
c_{i,i+1}^{|\Ya\lambda i|}
&&
\cr
=
\sum_{\eta}
P_{{\Yav\lambda{0}}/{\eta^\vee} }
\left(\{c_{1,2i-1}x^{2i-1}\}_{i=1}^N;t,q\right)
P_{{\Ya \lambda{2N }}/{\eta} }
\left(\{x^{2i} c_{2i,2N}\}_{i=1}^N;q,t\right)
c_{1,2N}^{|\eta|}
&&
\cr
\times
\prod_{1\leq i<j\leq N}\Pi_0\left( x^{2i}, \ c_{2i,2j-1} x^{2j-1}\right),
&&
\label{eq:OPEFormula}
\end{eqnarray}
for any integer $N\geq 2$.
Therefore the number of Young diagrams to be summed over reduces from $2N-1$ to one.
If ${\Ya\lambda 0}$ or ${\Ya\lambda {2N}}$ is the trivial representation,
then, since
$P_{\bullet/\lambda}(x;q,t)=\delta_{\bullet,\lambda}$,
no summation over Young diagrams remains.
The trace over ${\Ya\lambda 0}={\Ya\lambda {2N}}$ is also calculated
by the trace formula explained in the next section.
If we realize the Macdonald polynomials by bosons as in \cite{rf:AOS},
these OPE formulas come from the operator product expansion of vertex operators.
\subsection{Computations of four-point functions}
Here we apply the OPE formula (\ref{eq:FourPointFormula}) to the above building blocks.
Let $x^\a $ and $y^\a $ be the sets of variables
$x^\a = q^{\Ya\lambda\a } t^\rho $ and
$y^\a = t^{{\Yav\lambda\a }} q^{\rho}$, respectively.
Then
\begin{eqnarray}
&&
\Ziiio\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
=
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_{{\Ya\lambda 2} } \left(t^{\rho};q,t\right)
\fla\nu qt^{-1}
v^{-|\nu|}
\cr
&&
\times\hskip-7pt
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
\hskip-4pt
P_{{\mu}^\vee /{\Yav\sigma 1} } \left(-\iota y^1 ;t,q\right)
P_{\eta/\Ya\sigma 1} \left(x^1 ;q,t\right)
P_{{\eta}^\vee/{\Yav\sigma 2} } \left(-\iota y^2;t,q\right)
P_{\nu/\Ya\sigma 2} \left(x^2 ;q,t\right)
v^{|\Ya\sigma 1|+|\Ya\sigma 2|}
\left(v^{-1} Q \right)^{|\eta|},~
\cr
&&
\Ziioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
=
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_{{\Yav\lambda 2} } \left(-q^{\rho};t,q\right)
\cr
&&
\times\hskip-7pt
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
\hskip-4pt
P_{{\mu}^\vee /{\Yav\sigma 1} } \left(-\iota y^1 ;t,q\right)
P_{\eta/\Ya\sigma 1} \left(x^1 ;q,t\right)
P_{{\eta}^\vee/{\Yav\sigma 2} } \left(-y^2;t,q\right)
P_{\nu/\Ya\sigma 2} \left(\iota x^2 ;q,t\right)
v^{|\Ya\sigma 1|-|\Ya\sigma 2|}
Q^{|\eta|},
\cr
&&
\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
=
P_{{\Yav\lambda 1} } \left(-q^{\rho};t,q\right)
P_{\Ya\lambda 2} \left(t^{\rho};q,t\right)
\fla\mu qt
\fla\nu qt^{-1}
v^{|\mu|-|\nu|}
\cr
&&
\times\hskip-7pt
\sum_{\Ya\sigma 1,\eta,\Ya\sigma 2}
\hskip-4pt
P_{{\mu}^\vee /{\Yav\sigma 1} } \left(-y^1 ;t,q\right)
P_{\eta/\Ya\sigma 1} \left(\iota x^1 ;q,t\right)
P_{{\eta}^\vee/{\Yav\sigma 2} } \left(-\iota y^2;t,q\right)
P_{\nu/\Ya\sigma 2} \left(x^2 ;q,t\right)
v^{-|\Ya\sigma 1|+|\Ya\sigma 2|}
Q^{|\eta|}.
\end{eqnarray}
From (\ref{eq:FourPointFormula}), they reduce to
\begin{eqnarray}
\Ziiio\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
&=&
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_{{\Ya\lambda 2} } \left(t^{\rho};q,t\right)
\fla\nu qt^{-1}
v^{-|\nu|}
\cr
&& \hskip -65pt
\times
\sum_{\eta}
\iota P_{{\mu}^\vee /\eta^\vee } \left(-y^1,-Q y^2;t,q\right)
P_{\nu /\eta} \left(Q x^1 ,x^2 ;q,t\right)
(vQ) ^{|\eta|}
\Pi_0\left(-v^{-1}Q x^1, y^2 \right)^{-1},
\cr
\Ziioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
&=&
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_{{\Yav\lambda 2} } \left(-q^{\rho};t,q\right)
\cr
&& \hskip -65pt
\times
\sum_{\eta}
P_{{\mu}^\vee/{\eta}^\vee}
\left(- \iota y^1 , -v Q y^2;t,q\right)
P_{\nu/\eta} \left(v^{-1} Qx^1, \iota x^2 ;q,t\right)
Q^{|\eta|}
\Pi_0\left(-Q x^1,y^2\right),
\cr
\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
&=&
P_{{\Yav\lambda 1} } \left(-q^{\rho};t,q\right)
P_{\Ya\lambda 2} \left(t^{\rho};q,t\right)
\fla\mu qt
\fla\nu qt^{-1}
v^{|\mu|-|\nu|}
\cr
&& \hskip -65pt
\times
\sum_{\eta}
P_{{\mu}^\vee/{\eta}^\vee}
\left(-y^1 , -v^{-1} Q\iota y^2;t,q\right)
P_{\nu/\eta} \left(v Q\iota x^1,x^2 ;q,t\right)
Q^{|\eta|}
\Pi_0\left(-Q x^1,y^2\right).
\end{eqnarray}
Since
$
\Pi_0\left(-Q x^1,y^2\right)
/
\Pi_0\left(-Q t^\rho,q^\rho\right)
=
\Nek{\Ya\lambda 1}{\Ya\lambda 2}{vQ}qt
$,
the instanton part,
such as \break
$
\Ziiio\mu{\Ya\lambda 1}{\Ya\lambda 2}\nu Qqt
/
\Ziiio\bullet\bullet\bullet\bullet Qqt
$
is written not by
$\Pi_0\left(-v^{-1}Q x^1,y^2\right)$'s
but by
$\Nek{\Ya\lambda 1}{\Ya\lambda 2}{Q}qt $'s.
Note that
\begin{equation}
\Ziioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
=
\Ziioo{\nu}{\Ya\lambda 2}{\Ya\lambda 1}{\mu}Q{q^{-1}}{t^{-1}}
=
\Ziioo{{\nu}^\vee}{{\Yav\lambda 2} }{{\Yav\lambda 1} }{{\mu}^\vee}Qtq
(-1)^{|\Ya\lambda 1|+|\Ya\lambda 2|+|\mu|+|\nu|}.
\end{equation}
\subsection{Flop operation}
The flop invariance of the topological vertex is shown in \cite{IK-P3, KM}.
We can show the flop invariance of the refined topological vertex as follows.
First,
\begin{equation}
\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
=
\Ziioo{\mu}{\Ya\lambda 2}{\Ya\lambda 1}{\nu}{Q^{-1}}qt
Q^{|\mu|+|\nu|}
{
\Pi_0\left(-Q x^1,y^2\right)
\over
\Pi_0\left(-Q ^{-1}x^2,y^1\right)
}
{
\fla\mu qt
\over
\fla\nu qt
}.
\end{equation}
Next, from
(\ref{eq:NDual}) and
(\ref{eq:NDualQ}), we have
\begin{eqnarray}
{
\Pi_0\left(-Q x^1,y^2\right)
/
\Pi_0\left(-Q t^\rho,q^\rho\right)
\over
\Pi_0\left(-Q^{-1}x^2, y^1\right)
/
\Pi_0\left(-Q^{-1} t^\rho, q^\rho\right)
}
&=&
{
\Nek{\Ya\lambda 1}{\Ya\lambda 2}{vQ}qt
\over
\Nek{\Ya\lambda 2}{\Ya\lambda 1}{vQ^{-1}}qt
}
\cr
=
{
\Nek{\Ya\lambda 1}{\Ya\lambda 2}{vQ}qt
\over
\Nek{{\Yav\lambda 1}}{{\Yav\lambda 2} }{v^{-1} Q^{-1}}tq
}
&=&
Q^{|\Ya\lambda 1|+|\Ya\lambda 2|}
{\fla{\Ya\lambda 1}qt \over \fla{\Ya\lambda 2}qt }.~~
\end{eqnarray}
Thus we obtain the following flop invariance%
\footnote{
The flop invariance of $C_{\mu\nu\lambda}^{(IKV)}(t,q)$
has recently been discussed in \cite{Taki2}.}
\begin{eqnarray}
{
\Zioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}Qqt
\over
\Zioio{\mu'}{\bullet}{\bullet}{\nu'}Qqt
}
=
{
\Ziioo{\mu}{\Ya\lambda 2}{\Ya\lambda 1}{\nu}{Q^{-1}}qt
\over
\Ziioo{\mu'}{\bullet}{\bullet}{\nu'}{Q^{-1}}qt
}
Q^{|\Ya\lambda 1|+|\Ya\lambda 2|+|\mu|+|\nu|}
{\fla\mu qt \over \fla\nu qt}
&&\hskip-4pt
{\fla{\Ya\lambda 1}qt \over \fla{\Ya\lambda 2}qt }
\cr
\times
Q^{-|\mu'|-|\nu'|}
{ \fla{\nu'} qt \over \fla{\mu'} qt}
&&\hskip-4pt .
\label{eq:Flop}
\end{eqnarray}
The denominator corresponds to the perturbative part.
\FigFlop
Combining (\ref{eq:Flop}) with (\ref{eq:Slice}), we have
\begin{equation}
{
\Zioio{\bu}{\nu}{\mu}{\bu}Qqt
\over
\Zioio{\bu}{\bullet}{\bullet}{\bu}Qqt
}
Q^{-|\mu|-|\nu|}
=
{
\Ziioo{\bu}{\mu}{\nu}{\bu}{Q^{-1}}qt
\over
\Ziioo{\bu}{\bullet}{\bullet}{\bu}{Q^{-1}}qt
}
{\fla\nu qt \over \fla\mu qt}
=
{
\Zoiio{\bu}{\nu}\bu{\mu}{Q^{-1}}qt
\over
\Zoiio{\bu}{\bullet}{\bullet}{\bu}{Q^{-1}}qt
}
v^{|\mu|-|\nu|},
\end{equation}
which also changes the preferred direction.
\FigFlopSlice
\subsection{Finite $N$ Macdonald polynomial and homological invariants}
When
$\Ya\lambda 1$ or $\Ya\lambda 2$ is the trivial representation,
the amplitudes of the above diagrams are written in terms of Macdonald polynomials
with a finite number of variables.
Note that
$P_{\lambda} \left(a x^1, b\iota x^2 ;q,t\right)$
with
$x^\a = q^{\Ya\lambda\a } t^\rho $
and
$a,b\in{\mathbb C}$
is the Macdonald function in the power sum functions
\begin{equation}
p_n\left(a x^1\right)+\iota p_n\left(b x^2\right)
=
\sum_{i=1}^\infty
\left\{
a^n \left(q^{n\Yai\lambda 1i} - 1\right)
-
b^n \left(q^{n\Yai\lambda 2i} - 1\right)
\right\}
t^{n({1\over2}-i)}
+
{ a^n - b^n \over t^{n\over 2} - t^{-{n\over 2}} }.
\end{equation}
For $N\in{\mathbb N}$ and $N \geq \ell(\lambda)$,
\begin{equation}
p_n\left(q^\lambda t^\rho\right)+\iota p_n\left(t^{-N+\rho}\right)
=
\sum_{i=1}^\infty
\left(q^{n\lambda_i} - 1\right)
t^{n({1\over2}-i)}
+
{1 - t^{-n N} \over t^{n\over 2} - t^{-{n\over 2}} }
=
\sum_{i=1}^{N}
\left(
q^{\lambda_i} t^{{1\over2}-i}
\right)^n,
\end{equation}
which are the power sum symmetric polynomials in $N$ variables.
Therefore
$
P_{\lambda}
\left(q^\lambda t^\rho , t^{-N-\rho} ; q,t \right)
$
is the Macdonald polynomial in $N$ variables
$\{q^{\lambda_i} t^{{1\over2}-i}\}_{1\leq i\leq N}$.
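This reduction is easy to verify numerically; a minimal sketch:
\begin{verbatim}
# Check of the finite-N reduction of the power sums:
# sum_i (q^{n lam_i}-1) t^{n(1/2-i)} + (1-t^{-nN})/(t^{n/2}-t^{-n/2})
#   == sum_{i=1}^{N} (q^{lam_i} t^{1/2-i})^n   for N >= l(lam).
q, t, n, N = 0.3, 0.7, 3, 5
lam = (4, 2, 1)
lam_padded = lam + (0,) * (N - len(lam))
lhs = sum((q**(n*li) - 1) * t**(n*(0.5 - i))
          for i, li in enumerate(lam, start=1)) \
      + (1 - t**(-n*N)) / (t**(n/2) - t**(-n/2))
rhs = sum((q**li * t**(0.5 - i))**n
          for i, li in enumerate(lam_padded, start=1))
print(abs(lhs - rhs))   # ~ 1e-15
\end{verbatim}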
On the other hand, from
(\ref{eq:appsetFormulaI}) and
(\ref{eq:Specialization})
\begin{eqnarray}
\Nek{\lambda}{\bullet}{v Q}qt
&=&
\prod_{(i,j)\in\lambda}
\left( 1 - vQ q^{\lambda_i-j} t^{1-i} \right)
=
\prod_{(i,j)\in\lambda}
\left( 1 - vQ q^{j-1} t^{1-i} \right)
\cr
&=&
{
P_{\lambda^\vee}(q^\rho,vQq^{-\rho};t,q)
\over
P_{\lambda^\vee}(q^\rho;t,q)
}
=
{
P_\lambda(t^\rho,v^{-1}Q^{-1}t^{-\rho};q,t)
\over
P_\lambda(t^\rho;q,t)
}
Q^{|\lambda|}
{\fla\lambda qt},
\cr
\Nek{\bullet}{\lambda}{v Q}qt
&=&
\prod_{(i,j)\in\lambda}
\left( 1 - vQ q^{-\lambda_i+j-1} t^{i} \right)
=
\prod_{(i,j)\in\lambda}
\left( 1 - v^{-1} Q q^{1-j} t^{i-1} \right)
\cr
&=&
{
P_\lambda(t^\rho,v^{-1}Qt^{-\rho};q,t)
\over
P_\lambda(t^\rho;q,t)
}
=
{
P_{\lambda^\vee}(q^\rho,vQ^{-1}q^{-\rho};t,q)
\over
P_{\lambda^\vee}(q^\rho;t,q)
}
Q^{|\lambda|}
{\fla\lambda qt}^{-1}.
\end{eqnarray}
Therefore, some factors in
$
\Znoniioo{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}
/
\Znoniioo{\bullet}{\bullet}{\bullet}{\bullet}
$
and
$
\Znonioio{\mu}{\Ya\lambda 1}{\Ya\lambda 2}{\nu}
/
\Znonioio{\bullet}{\bullet}{\bullet}{\bullet}
$
might be written in terms of Macdonald polynomials
in a finite number of variables.
For example, if $\mu$ and one of the
$\Ya\lambda \alpha$ ($\alpha = 1$ or $2$) are trivial representations,
\begin{eqnarray}
{
\Ziioo{\bullet}{\lambda}{\bullet}{\nu}{Q^{-1}}qt
\over
\Ziioo{\bullet}{\bullet}{\bullet}{\bullet}{Q^{-1}}qt
}
&=&
P_\nu \left(q^\lambda t^\rho, v Qt^{-\rho} ;q,t\right)
P_{\lambda} \left(t^{\rho}, v^{-1}Qt^{-\rho};q,t\right)
v^{-|\nu|}
Q^{-|\lambda|-|\nu|}
\fla\lambda qt
,
\cr
{
\Zioio{\bullet}{\bullet}{\lambda}{\nu}Qqt
\over
\Ziioo{\bullet}{\bullet}{\bullet}{\bullet}Qqt
}
&=&
P_\nu \left(q^\lambda t^\rho, v Qt^{-\rho} ;q,t\right)
P_{\lambda} \left(t^{\rho}, v^{-1}Qt^{-\rho};q,t\right)
v^{-|\nu|} f_\nu(q,t)^{-1}.
\end{eqnarray}
When $v^{\pm1} Q=t^{-N}$ with $N\in{\mathbb N}$,
they are written by the Macdonald polynomials in $N$ variables.
These are candidates for the $SU(N)$ homological invariants.
Note that
\begin{equation}
{\cal W}_{\lambda,\nu} (q,t)
:=
\Ciio\bullet\lambda\nu qt
v^{|\nu|} \fla\nu qt
=
P_\lambda\left( t^\rho;q,t\right)
P_\nu \left(q^\lambda t^\rho;q,t\right)
\end{equation}
has a nice symmetry \cite{Mac}(Ch.\ VI.6):
\begin{equation}
{\cal W}_{\lambda,\nu} (q,t)
= {\cal W}_{\nu,\lambda} (q,t).
\end{equation}
When $t=q$,
${\cal W}_{\lambda,\nu} (q,q)$
gives a large $N$ limit of the Hopf link invariants.
\setcounter{equation}{0} \section{One-Loop Diagrams}
Some one-loop diagrams which correspond to the
trace of the vertex operators
can be calculated by the following trace formula.
\subsection{Trace formula}
First, we have:
\\
{\bf Lemma.}
Let $x$ and $y$ be sets of variables, $a$, $b\in{\mathbb C}$ and $c:=ab$.
If $|c|<1$, then
\begin{eqnarray}
\sum_{\lambda,\mu}
P_{\lambda^\vee/ \mu^\vee}\left(x;t,q\right)
P_{\lambda/ \mu}\left(y;q,t\right)
a^{|\lambda|}b^{|\mu|}
&=&
\prod_{k\geq 0}
{
\Pi_0\left(ac^k x,y\right)
\over
1-c^{k+1}
}
\cr
&=&
\exp\left\{-\sum_{n>0}{1\over n}{p_n(ax)p_n(-y)-c^n\over 1-c^n} \right\}.
\label{eq:TwoTraceFormula}
\end{eqnarray}
\noindent{\it Proof.\hskip10pt}
As in \cite{Mac}(Ch.\ I.5),
let $F(x,y)$
denote the left-hand side of the above equation.
Then it follows from the Cauchy formula (\ref{eq:skewCauchy}) that
\begin{eqnarray}
F(x,y)
&=&
\sum_{\lambda,\mu}
P_{\lambda^\vee/ \mu^\vee}\left(ax;t,q\right)
P_{\lambda/ \mu}\left(y;q,t\right)
(ab)^{|\mu|}
\cr
&=&
\sum_{\lambda,\mu}
P_{\mu^\vee/\lambda^\vee}\left(ax;t,q\right)
P_{\mu/\lambda}\left(y;q,t\right)
(ab)^{|\mu|}\Pi_0\left(ax,y\right)
\cr
&=&
\sum_{\lambda,\mu}
P_{\mu^\vee/\lambda^\vee}\left(abx;t,q\right)
P_{\mu/\lambda}\left(y;q,t\right)
a^{|\mu|}b^{|\lambda|}
\Pi_0\left(ax,y\right).
\end{eqnarray}
Therefore
\begin{equation}
F(x,y)=
F(cx,y)
\Pi_0\left(ax,y\right)
=
F(0,y)
\prod_{k\geq 0} \Pi_0\left(ac^k x,y\right),
\qquad |c|<1.
\end{equation}
But
\begin{equation}
F(0,y)
=
\sum_\lambda
P_{\lambda/\lambda} (y;q,t) c^{|\lambda|}
=
\sum_{\lambda}
c^{|\lambda|}
=
\prod_{n>0}
(1-c^n)^{-1},
\qquad |c|<1,
\end{equation}
and the lemma is proven.
\hfill\fbox{}
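The last equality is Euler's generating function for the number of partitions;
a minimal order-by-order check:
\begin{verbatim}
# Check sum_lambda c^{|lambda|} = prod_{n>0} (1-c^n)^{-1} up to c^10.
def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

N = 10
counts = [sum(1 for _ in partitions(m)) for m in range(N + 1)]

coeff = [1] + [0] * N            # expand prod_{n<=N} 1/(1-c^n)
for n in range(1, N + 1):
    for m in range(n, N + 1):
        coeff[m] += coeff[m - n]
assert counts == coeff           # 1,1,2,3,5,7,11,15,22,30,42
\end{verbatim}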
From the above lemma and
(\ref{eq:OPEFormula})
we obtain the following trace formula.
\\
{\bf Proposition.}
For $N\in{\mathbb N}$,
let $x^i=x^{2N+i}$'s be sets of variables,
$\Ya\lambda 0 = \Ya\lambda{2N}$,
$c_{i,i+1}=c_{2N+i,2N+i+1}\in{\mathbb C}$,
$c_{i,j}:=\prod_{k=i}^{j-1} c_{k,k+1}$ and
$c:=c_{1,2N+1}=\prod_{i=1}^{2N}c_{i,i+1}$.
If $|c|<1$, then
\begin{eqnarray}
&&\hskip -30pt
\sum_{\{\Ya\lambda 1,\Ya\lambda 2,\cdots,\Ya\lambda{2N}\}}
\prod_{i=1}^N
P_{{\Yav\lambda{2i-2}}/{\Yav\lambda{2i-1}} } \left(x^{2i-1};t,q\right)
P_{{\Ya \lambda{2i }}/{\Ya \lambda{2i-1}} } \left(x^{2i };q,t\right)
\cdot
\prod_{i=1}^{2N}
c_{i,i+1}^{|\Ya\lambda i|}
\cr
&=&
\prod_{k\geq 0} {1\over 1-c^{k+1}}
\prod_{i=1}^N \prod_{j=i+1}^{i+N}
\Pi_0\left( x^{2i}, \ c_{2i,2j-1} c^k x^{2j-1}\right)
\cr
&=&
\exp\left\{-\sum_{n>0}{1\over n}
{1\over 1-c^n}
\left\{
\sum_{i=1}^N \sum_{j=i+1}^{i+N}
c_{2i,2j-1}^n p_n\left(x^{2i}\right) p_n\left(-x^{2j-1}\right)
-c^n
\right\}\right\}.
\label{eq:TraceFormula}
\end{eqnarray}
\noindent{\it Proof.\hskip10pt}
From
(\ref{eq:OPEFormula}) and
(\ref{eq:TwoTraceFormula}),
the left-hand side of the above equation is
\begin{eqnarray}
&&
\sum_{\lambda,\eta}
P_{{\lambda^\vee}/{\eta^\vee} }
\left(\{c_{1,2i-1}x^{2i-1}\}_{i=1}^N;t,q\right)
P_{{\lambda}/{\eta} }
\left(\{x^{2i}c_{2i,2N}\}_{i=1}^N;q,t\right)
c_{1,2N}^{|\eta|} c_{2N,2N+1}^{|\lambda|}
\cr
&&\hskip 6truecm
\times
\prod_{1\leq i<j\leq N} \Pi_0\left( x^{2i}, \ c_{2i,2j-1} x^{2j-1}\right)
\cr
&=&
\prod_{1\leq i<j\leq N} \Pi_0\left( x^{2i}, \ c_{2i,2j-1} x^{2j-1}\right)
\cdot
\prod_{k\geq 0}
{
\Pi_0\left(
\{x^{2i}c_{2i,2N+1}\}_{i=1}^N ,
\{c_{1,2i-1}c^k x^{2i-1}\}_{i=1}^N
\right)
\over
1-c^{k+1}
},
\end{eqnarray}
where
$
\Pi_0\left(\{x^i\}_{i=1}^N ,\{y^j\}_{j=1}^M \right)
=
\prod_{i=1}^N \prod_{j=1}^M
\Pi_0(x^i,y^j)
$.
Then the left-hand side of (\ref{eq:TraceFormula}) reduces to
\begin{eqnarray}
&&\hskip-10pt
\prod_{1\leq i<j\leq N}
\Pi_0\left( x^{2i}, \ c_{2i,2j-1} x^{2j-1}\right)
\cdot
\prod_{k\geq 0}
{1\over 1-c^{k+1}}
\prod_{i,j=1}^N
\Pi_0\left(x^{2i},\ c_{2i,2N+2j-1} c^k x^{2N+2j-1}\right)
\cr
&=&
\prod_{k\geq 0}
{1\over 1-c^{k+1}}
\!\!\prod_{1\leq i<j\leq N}
\Pi_0\left( x^{2i}, \ c_{2i,2j-1} c^k x^{2j-1} \right)
\!\prod_{1\leq j\leq i\leq N}
\Pi_0\left( x^{2i}, \ c_{2i,2N+2j-1} c^k x^{2j-1}\right), \cr
& &
\end{eqnarray}
which equals the second line of (\ref{eq:TraceFormula}).
\hfill\fbox{}
From this trace formula, we can calculate one-loop diagrams
if the loop does not contain the preferred direction and
the framing factors cancel out.
\subsection{Examples for $N=2$ and $4$}
For an example of the trace formula for $N=2$, let
\begin{eqnarray}
Z_2
&:=&
\sum_{\lambda,\nu}
\Coii\nu\bullet\lambda qt
\Cioo\nu\bullet\lambda qt
\Lambda^{|\lambda|}
Q^{|\nu|}
\cr
&=&
\sum_{\lambda,\nu, \Ya\sigma 1, \Ya\sigma 2}
P_{\nu^\vee/\Yav\sigma 1}\left(-\iota q^\rho;t,q\right)
P_{\lambda /\Ya \sigma 1}\left( t^\rho;q,t\right)
P_{\lambda^\vee/\Yav\sigma 2}\left(- q^\rho;t,q\right)
P_{\nu /\Ya \sigma 2}\left( \iota t^\rho;q,t\right)
v^{|\Ya\sigma 1|-|\Ya\sigma 2|}
\Lambda^{|\lambda|}
Q^{|\nu|}.
\cr &&
\end{eqnarray}
Then from (\ref{eq:TraceFormula}) with
$(c_{1,2},c_{2,3},c_{3,4},c_{4,5})=(v,\Lambda,v^{-1},Q)$ and
$(x^1,x^2,x^3,x^4)=(-\iota q^\rho,t^\rho,-q^\rho,\iota t^\rho)$,
it follows that $c=Q\Lambda$ and
\begin{equation}
\begin{pmatrix}
c_{2,3} & c_{2,5} \cr
\ c_{4,5}\ &\ c_{4,7}\
\end{pmatrix}
=
\begin{pmatrix}
\Lambda & c/v \cr
\ Q \ &\ cv\
\end{pmatrix},
\end{equation}
and thus
\begin{equation}
Z_2
=
\prod_{k\geq 0}
{
\Pi_0\left(t^\rho,-\Lambda c^k q^\rho\right)
\Pi_0\left(t^\rho,-Q c^k q^\rho\right)
\over
\Pi_0\left(t^\rho,-v c^{k+1}q^\rho\right)
\Pi_0\left(t^\rho,-v^{-1}c^{k+1}q^\rho\right)
}
{1\over 1-c^{k+1}}.
\end{equation}
From (\ref{eq:powersum}), we obtain
\begin{equation}
Z_2
=
\exp\left\{
-\sum_{n>0} {1\over n} {1\over 1-c^n}
\left\{
{
(\Lambda^n + Q^n) - (v^n + v^{-n})c^n
\over
(t^{n\over 2} - t^{-{n\over 2}})
(q^{n\over 2} - q^{-{n\over 2}})
}
-c^n
\right\}\right\}.
\end{equation}
If we separate out the part
$
Z_2^{\rm pert}
:=
Z_2(\Lambda = 0)
=
\exp\left\{
-\sum_{n>0}
Q^n
/
(n
(t^{n\over 2} - t^{-{n\over 2}})
(q^{n\over 2} - q^{-{n\over 2}})
)
\right\},
$
then
$
Z_2^{\rm inst}
:=
Z_2/Z_2^{\rm pert}
$
is
\begin{equation}
Z_2^{\rm inst}
=
\exp\left\{
-\sum_{n>0} {1\over n} {\Lambda^n\over 1-c^n}
{
(Q^n - u^n)(Q^n - u^{-n})
\over
(t^{n\over 2} - t^{-{n\over 2}})
(q^{n\over 2} - q^{-{n\over 2}})
}
\right\}.
\label{eq:LoopChiy}
\end{equation}
Here the leftover $-c^n$ term has been absorbed by using
$(v^n + v^{-n})-(u^n + u^{-n})
=-(t^{n\over 2} - t^{-{n\over 2}})(q^{n\over 2} - q^{-{n\over 2}})$
with $u=(qt)^{{1\over2}}$.
As we will see in section 7.2, this gives the equivariant $\chi_y$ genus of
the Hilbert scheme of points on ${\mathbb C}^2$.
\FigTrace
As an example for $N=4$, let
\begin{eqnarray}
Z_4
&:=&
\sum_{\{\Ya\mu\alpha\}}
\prod_{\alpha =1}^4
\Ciio{\Ya\mu\alpha}\bullet{\Ya\mu{\alpha+1}} qt
\fla{\Ya\mu{\alpha+1}}qt
Q_\alpha^{|\Ya\mu\alpha|}
\cr
&=&
\sum_{\{\Ya\mu\alpha,\Ya\sigma\alpha\}}
\prod_{\alpha =1}^4
P_{\Yav\mu\alpha/\Yav\sigma\alpha}(-\iota q^\rho)
P_{\Yav\mu{\alpha+1}/\Yav\sigma\alpha}(t^\rho)
v^{|\Ya\sigma\alpha|-|\Ya\mu\alpha|}Q_\alpha^{|\Ya\mu\alpha|},
\end{eqnarray}
with $\Ya\mu 5 = \Ya\mu 1$.
Then from (\ref{eq:TraceFormula}) with
$(c_{2\a -1,2\a }, c_{2\a ,2\a +1})=(v, v^{-1} Q_\a )$ and
$(x^{2\a },x^{2\a -1})=(t^\rho,-\iota q^\rho)$,
it follows that
\begin{equation}
Z_4=
\prod_{k\geq 0} {1\over 1-c^{k+1}}
\prod_{i=1}^4 \prod_{j=i+1}^{i+4}
\Pi_0\left( t^\rho, \ -c^k c_{2i,2j-1} q^\rho\right)^{-1},
\end{equation}
where $c=Q_1 Q_2 Q_3 Q_4$ and
\begin{equation}
\begin{pmatrix}
c_{2,3} & c_{2, 5} & c_{2, 7} & c_{2, 9} \cr
c_{4,5} & c_{4, 7} & c_{4, 9} & c_{4,11} \cr
c_{6,7} & c_{6, 9} & c_{6,11} & c_{6,13} \cr
\ c_{8,9}\ &\ c_{8,11}\ &\ c_{8,13}\ &\ c_{8,15}\
\end{pmatrix}
=
v^{-1}
\begin{pmatrix}
\ Q_1\ &\ Q_1 Q_2\ &\ Q_1 Q_2 Q_3\ & c\ \cr
Q_2 & Q_2 Q_3 & Q_2 Q_3 Q_4 & c\cr
Q_3 & Q_3 Q_4 & Q_3 Q_4 Q_1 & c\cr
Q_4 & Q_4 Q_1 & Q_4 Q_1 Q_2 & c
\end{pmatrix}.
\end{equation}
Thus
\begin{equation}
Z_4
=
\exp\left\{
\sum_{n>0} {1\over n} {1\over 1-c^n}
\left\{
{
v^{-n}
\sum_{\a =1}^4
\left(
Q_{\a }^n +
Q_{\a }^n Q_{\a +1}^n +
Q_{\a }^n Q_{\a +1}^n Q_{\a +2}^n +
c^n
\right)
\over
(t^{n\over 2} - t^{-{n\over 2}})
(q^{n\over 2} - q^{-{n\over 2}})
}
-c^n
\right\}\right\},
\end{equation}
where $Q_{i+4} = Q_i$.
\setcounter{equation}{0} \section{$U(1)$ Partition Function, $\chi_y$ Genus and Elliptic Genus}
Nekrasov's $U(1)$ partition function,
the $\chi_y$ genus and the elliptic genus
are realized by our refined topological vertex,
as shown in \cite{AK}.
Since the diagrams for $U(1)$ theory have trivial framing,
the vertex in \cite{AK} and the improved vertex in the present paper
give the same answer.
\subsection{$U(1)$ partition function}
First, the $U(1)$ partition function is written as follows.
Let
\begin{equation}
Z
:=
\sum_\lambda\Lambda^{|\lambda|}
\Ciio\bullet\lambda\bullet{q}{t}
\Cooi\bullet\lambda\bullet{q}{t}.
\end{equation}
Then
\begin{eqnarray}
Z
&=&
\sum_\lambda\Lambda^{|\lambda|}
P_\lambda(t^\rho ;q,t) \ P_{\lambda^\vee}(-q^{\rho} ;t,q)
\cr
&=&
\sum_\lambda
\prod_{s\in\lambda}
v^{-1}\Lambda
{1\over
(1-q^{ a(s) } t^{ \ell(s)+1})
(1-q^{-a(s)-1} t^{-\ell(s) })},
\end{eqnarray}
from
(\ref{eq:LargeNPrincipalSpecialization}).
This agrees with Nekrasov's $U(1)$ formula
$\ZQ{inst}{0}qt{{\bf e}_1}{\Lambda^{1\over 2}}$
in (\ref{eq:NekrasovZinstM}).
Using the Cauchy formula (\ref{eq:conjugateCauchy})
we have
\begin{eqnarray}
Z
&=&
\Exp{-\sum_{n>0}{1\over n}
{\Lambda^n\over (t^{n\over 2} - t^{-{n\over 2}}) (q^{n\over 2} - q^{-{n\over 2}}) }
}
\cr
&=&
\Exp{\mp\sum_{n>0}{1\over n}\sum_{i,j}\left(\Lambda\,t^{{1\over2} -i}q^{\pm({1\over2} -j)}\right)^n},
\qquad |q^{\mp1}|, |t^{-1}| < 1
\cr
&=&
\prod_{i,j\geq 1}
(1-\Lambda\, t^{{1\over2} -i} q^{\pm({1\over2} -j)})^{\pm1},
\qquad |q^{\mp1}|, |t^{-1}| < 1.
\end{eqnarray}
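As a numerical sanity check, the following Python sketch compares the sum over
partitions, truncated at $|\lambda|\le 6$, with the exponential form, so the
agreement is up to that order in $\Lambda$:
\begin{verbatim}
# Truncated check of the U(1) partition function.
import math

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    return tuple(sum(1 for li in lam if li > j)
                 for j in range(lam[0])) if lam else ()

q, t, Lam = 0.3, 0.7, 0.05
v = (q / t) ** 0.5

Z = 0.0
for n in range(7):
    for lam in partitions(n):
        lamc, term = conjugate(lam), Lam ** n
        for i in range(1, len(lam) + 1):
            for j in range(1, lam[i - 1] + 1):
                a = lam[i - 1] - j       # arm length a(s)
                l = lamc[j - 1] - i      # leg length l(s)
                term /= v * (1 - q**a * t**(l + 1)) \
                          * (1 - q**(-a - 1) * t**(-l))
        Z += term

rhs = math.exp(-sum(Lam**n / (n * (t**(n/2) - t**(-n/2))
                                * (q**(n/2) - q**(-n/2)))
                    for n in range(1, 60)))
print(abs(Z - rhs))   # small, up to the truncation order
\end{verbatim}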
\FigUi
\subsection{$\chi_y$ genus}
Next, the $\chi_y$ genus is realized as follows.
Let
\begin{equation}
\widetilde Z :=
\sum_{\lambda,\nu }
Q^{|\nu |} \Lambda^{|\lambda|}
\Ciio\bullet\lambda\nu{q}{t}
\Cooi\bullet\lambda\nu{q}{t}.
\end{equation}
Then
\begin{equation}
\widetilde Z
=
\sum_{\lambda, \nu }
Q^{|\nu |}
\Lambda^{|\lambda|}
P_\lambda\left(t^{\rho};q,t\right)
P_{\lambda^\vee} \left(-q^{\rho};t,q\right)
P_\nu \left(q^{\lambda}t^{\rho};q,t\right)
P_{\nu^\vee} \left(-t^{\lambda^\vee}q^{\rho};t,q\right).
\end{equation}
From
(\ref{eq:LargeNPrincipalSpecialization}) and
(\ref{eq:conjugateCauchy}) we have
\begin{equation}
\widetilde Z
=
\sum_{\lambda}
\Pi_0(-Q q^{\lambda}t^{\rho},\, t^{\lambda^\vee}q^{\rho})
\prod_{s\in\lambda}
v^{-1}\Lambda
{1\over
(1-q^{ a(s) } t^{ \ell(s)+1})
(1-q^{-a(s)-1} t^{-\ell(s) })}.
\end{equation}
If we separate out the part
$
\tZm{pert}{}
:=
\sum_{\nu }
Q^{|\nu |}
\Ciio\bullet\bullet\nu{q}{t}
\Cooi\bullet\bullet\nu{q}{t}
=
\Pi_0(-Q t^{\rho},\, q^{\rho})
$,
which is independent of $\Lambda$,
then
$
\Zm{inst}{}
:= \widetilde Z/\tZm{pert}{}
$
is, from (\ref{eq:NekIIp}),
\begin{eqnarray}
\Zm{inst}{}
&=&
\sum_{\lambda}
\left(v^{-1}\Lambda\right)^{|\lambda|}
{
\Nek{\lambda}{\lambda}{vQ}qt
\over
\Nek{\lambda}{\lambda}1qt
}
\cr
&=&
\sum_{\lambda}
\prod_{s\in\lambda}
v^{-1}\Lambda
{
1-vQ q^{ a(s) } t^{ \ell(s)+1}\over
1- q^{ a(s) } t^{ \ell(s)+1}
}
{
1-vQ q^{-a(s)-1} t^{-\ell(s) }\over
1- q^{-a(s)-1} t^{-\ell(s) }
}.
\label{eq:chiy}
\end{eqnarray}
This agrees with the $\chi_y$ genus (20) of \cite{rf:LiLiuZhou}
with $vQ = y$, $v^{-1}\Lambda = Q^{\rm LLZ}$
and $(q,t)=(1/t_1,t_2)$ or $(1/t_2,t_1)$.
If our refined topological vertex had cyclic symmetry,
then this $\chi_y$ genus $\Zm{inst}{}$ would agree with $Z_2^{\rm inst}$ in section 6.2,
and hence the following identity should hold
\begin{eqnarray}
&&\hskip-60pt
\sum_{\lambda}
\Lambda^{|\lambda|}
\prod_{s\in\lambda}
{
1-Q q^{ a(s) } t^{ \ell(s)+1}\over
1- q^{ a(s) } t^{ \ell(s)+1}
}
{
1-Q q^{-a(s)-1} t^{-\ell(s) }\over
1- q^{-a(s)-1} t^{-\ell(s) }
}
\cr
&=&
\exp\left\{
\sum_{n>0} {1\over n} {\Lambda^n\over 1-\Lambda^n Q^n}
{
(1-t^n Q^n)(1-q^{-n}Q^n)
\over
(1-t^n)(1-q^{-n})
}
\right\}.
\label{eq:conjecture}
\end{eqnarray}
From (\ref{eq:Specialization}),
this
is close to the Cauchy formula for the Macdonald functions in power sums
$p_n = {(1-t^n Q^n)/(1-t^n)}$ and
$(-\Lambda)^n{(1-q^{-n}Q^n)/(1-q^{-n})}$, i.e.
\begin{eqnarray}
&&\hskip-60pt
\sum_{\lambda}
(-\Lambda)^{|\lambda|}
\prod_{s\in\lambda}
{
1-Q q^{ a'(s) } t^{1-\ell'(s)}\over
1- q^{ a(s) } t^{ \ell(s)+1}
}
{
1-Q q^{a'(s)-1} t^{-\ell'(s)}\over
1- q^{-a(s)-1} t^{-\ell(s) }
}
q^{-a'(s)} t^{\ell'(s)}
\cr
&=&
\exp\left\{
-\sum_{n>0} {1\over n} {\Lambda^n}
{
(1-t^n Q^n)(1-q^{-n}Q^n)
\over
(1-t^n)(1-q^{-n})
}
\right\}.
\end{eqnarray}
Although we have no proof for (\ref{eq:conjecture}),
computer calculations support that
$\Zm{inst}{}=Z_2^{\rm inst}$,
which strongly suggests a kind of symmetry of web diagrams.
See also the discussions in the recent papers \cite{IKS, PS}.
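A minimal version of such a check, comparing the two sides of
(\ref{eq:conjecture}) numerically for generic parameters, with the left-hand
side truncated at $|\lambda|\le 7$, is the following (the partition helpers
are repeated from the sketch in section 7.1 for self-containedness):
\begin{verbatim}
# Order-by-order numerical check of the conjectured identity.
import math

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    return tuple(sum(1 for li in lam if li > j)
                 for j in range(lam[0])) if lam else ()

q, t, Q, Lam = 0.3, 0.7, 0.4, 0.01

lhs = 0.0
for n in range(8):
    for lam in partitions(n):
        lamc, term = conjugate(lam), Lam ** n
        for i in range(1, len(lam) + 1):
            for j in range(1, lam[i - 1] + 1):
                a, l = lam[i - 1] - j, lamc[j - 1] - i
                term *= (1 - Q * q**a * t**(l + 1)) \
                        * (1 - Q * q**(-a - 1) * t**(-l))
                term /= (1 - q**a * t**(l + 1)) \
                        * (1 - q**(-a - 1) * t**(-l))
        lhs += term

rhs = math.exp(sum(Lam**n / (n * (1 - (Lam * Q)**n))
                   * (1 - t**n * Q**n) * (1 - q**(-n) * Q**n)
                   / ((1 - t**n) * (1 - q**(-n)))
                   for n in range(1, 80)))
print(abs(lhs - rhs))   # small, up to the truncation order
\end{verbatim}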
\subsection{Elliptic genus}
Finally, the elliptic genus is written as follows.
Let
\begin{equation}
\widetilde Z :=
\sum_{\lambda,\mu,\nu}
Q_1^{|\mu |}\Lambda^{|\lambda|} Q_2^{|\nu |}
\Ciio\mu\lambda\nu{q}{t}
\Cooi\mu\lambda\nu{q}{t}.
\end{equation}
Then
\begin{eqnarray}
\widetilde Z
&=&
\sum_{\lambda,\mu,\nu}
P_\lambda(t^\rho;q,t)
\sum_\sigma
\iota P_{\mu^\vee /\sigma^\vee}( -t^{\lambda^\vee} q^{\rho};t,q) \
P_{\nu /\sigma}(q^{\lambda} t^{\rho};q,t)
Q_1^{|\mu |} \Lambda^{|\lambda|} Q_2^{|\nu |}
\cr
&&\hskip8pt
\times
P_{\lambda^\vee}(-q^{\rho};t,q)
\sum_\eta
\iota P_{\mu /\eta}(q^{\lambda} t^{\rho};q,t) \
P_{\nu^\vee /\eta^\vee}(-t^{\lambda^\vee} q^{\rho};t,q)
v^{ |\sigma|-|\eta| }.
\end{eqnarray}
From
(\ref{eq:LargeNPrincipalSpecialization}) and
the trace formula (\ref{eq:TraceFormula}) with
$(c_{1,2},c_{2,3},c_{3,4},c_{4,5})=(v,Q_2,v^{-1},Q_1)$ and
$(x^1,x^2,x^3,x^4)=
(-\iota t^{\lambda^\vee}q^\rho, q^\lambda t^\rho, -t^{\lambda^\vee}q^\rho, \iota q^\lambda t^\rho)$,
it follows that
\begin{eqnarray}
\widetilde Z
&=&
\sum_{\lambda}
\prod_{s\in\lambda}
v^{-1}\Lambda
{1\over
(1-q^{ a(s) } t^{ \ell(s)+1})
(1-q^{-a(s)-1} t^{-\ell(s) })}
\cr
&&\hskip8pt\times
\prod_{k\geq 0}
{
\Pi_0(-Q_1c^k q^{\lambda}t^{\rho},\, t^{\lambda^\vee}q^{\rho})\
\Pi_0(-Q_2 c^k q^{\lambda}t^{\rho},\, t^{\lambda^\vee}q^{\rho})
\over
\Pi_0(-v^{-1} c^{k+1} q^{\lambda}t^{\rho},\, t^{\lambda^\vee}q^{\rho})
\Pi_0(-v c^{k+1} q^{\lambda}t^{\rho},\, t^{\lambda^\vee} q^{\rho})
}
{1\over 1-c^{k+1} },
\end{eqnarray}
with $c=Q_1 Q_2$ and $|c|<1$.
If we factor out the $\Lambda$-independent part
\begin{eqnarray}
\tZm{pert}{}
&:=&
\sum_{\mu,\nu}
Q_1^{|\mu |} Q_2^{|\nu |}
\Ciio\mu\bullet\nu{q}{t}
\Cooi\mu\bullet\nu{q}{t}
\cr
&=&
\prod_{k\geq 0}
{
\Pi_0(-Q_1c^k t^{\rho},\, q^{\rho})\
\Pi_0(-Q_2 c^k t^{\rho},\, q^{\rho})
\over
\Pi_0(-v^{-1} c^{k+1} t^{\rho},\, q^{\rho})
\Pi_0(-v c^{k+1} t^{\rho},\, q^{\rho})
}
{1\over 1-c^{k+1} },
\end{eqnarray}
then
$
\Zm{inst}{}
:= \widetilde Z/\tZm{pert}{}
$
is, from (\ref{eq:NekIIp}),
\begin{eqnarray}
\Zm{inst}{}
&=&
\sum_{\lambda}
\left(v^{-1}\Lambda\right)^{|\lambda|}
\prod_{k\geq 0}
{
\Nek{\lambda}{\lambda}{ v Q_1c^k }qt
\Nek{\lambda}{\lambda}{ v Q_2c^k }qt
\over
\Nek{\lambda}{\lambda}{ c^{k } }qt
\Nek{\lambda}{\lambda}{ v^2 c^{k+1} }qt
}
\cr
&=&
\sum_{\lambda}
\prod_{k\geq 0}
\prod_{s\in\lambda}
v^{-1}\Lambda
{
\left(1- vQ_1^{k+1} Q_2^{k }q^{ a(s) } t^{ \ell(s)+1}\right)
\left(1- vQ_1^{k } Q_2^{k+1}q^{ a(s) } t^{
\ell(s)+1}\right)\over
\left(1- Q_1^{k } Q_2^{k }q^{ a(s) } t^{
\ell(s)+1}\right)
\left(1-v^2 Q_1^{k+1} Q_2^{k+1}q^{ a(s) } t^{ \ell(s)+1}\right)
}
\cr
&&\hskip15mm\times
{
\left(1- vQ_1^{k+1} Q_2^{k }q^{-a(s)-1} t^{-\ell(s)}\right)
\left(1- vQ_1^{k } Q_2^{k+1}q^{-a(s)-1} t^{-\ell(s)}\right)
\over
\left(1- Q_1^{k } Q_2^{k }q^{-a(s)-1}
t^{-\ell(s)}\right)
\left(1-v^2 Q_1^{k+1} Q_2^{k+1}q^{-a(s)-1} t^{-\ell(s)}\right)
}.
\end{eqnarray}
This agrees with the elliptic genus (24) of \cite{rf:LiLiuZhou}
with $Q_1 Q_2 = p$, $v Q_1 = y$, $v^{-1}\Lambda = y^{-1} Q^{\rm LLZ}$
and $(q,t)=(t_1,1/t_2)$ or $(t_2,1/t_1)$.
\setcounter{equation}{0} \section{$SU({N_c})$ Partition Function}
Nekrasov's $SU({N_c})$ partition function
is also realized by our refined topological vertex,
as mentioned in \cite{AK}.
\subsection{Pure $SU(2)$ partition function}
The pure $SU(2)$ partition function without Chern-Simons couplings
is written as follows.
Let
\begin{eqnarray}
\ZlaQ{ \Ya\lambda 1,\Ya\lambda 2 }{ \Q_1,\Q_2 }qt
&:=&
\sum_\mu
\Ciio\bullet{\Ya\lambda 1}\mu{q}{t}
\Ciio\mu{\Ya\lambda 2}\bullet{q}{t}
Q_{1,2}^{|\mu |} \fla\mu qt
\cr
&=&
\sum_\mu
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_\mu \left(q^{\Ya\lambda 1} t^\rho ;q,t\right)
\iota P_{\mu^\vee} \left(-t^{{\Yav\lambda 2}} q^{\rho };t,q\right)
P_{\Ya\lambda 2} \left(t^{\rho};q,t\right)
\left(v^{-1} Q_{1,2}\right)^{|\mu |}
\cr
&=&
\Pi_0\left(-v^{-1} Q_{1,2}\, q^{\Ya\lambda 1} t^{\rho},\ t^{{\Yav\lambda 2}}
q^{\rho}\right)^{-1}
P_{\Ya\lambda 1} \left(t^{\rho};q,t\right)
P_{\Ya\lambda 2} \left(t^{\rho};q,t\right),
\end{eqnarray}
from (\ref{eq:conjugateCauchy}),
where $Q_{\alpha,\beta} := \QQ\alpha\beta$.
The dual part is
\begin{eqnarray}
\ZlaQ{ {\Yav\lambda 2},{\Yav\lambda 1} }{ \Q_2^{-1},\Q_1^{-1} }tq
&=&
\sum_\nu
\Ciio\bullet{{\Yav\lambda 2}}{\nu^\vee}{t}{q}
\Ciio{\nu^\vee}{{\Yav\lambda 1}}\bullet{t}{q}
Q_{1,2}^{|\nu |} \fla{\nu^\vee}tq
\cr
&=&
\sum_\nu
\Cooi\bullet{{\Ya\lambda 2}}{\nu}{q}{t}
\Cooi{\nu}{{\Ya\lambda 1}}\bullet{q}{t}
Q_{1,2}^{|\nu |} \fla{\nu}qt ^{-1}
(-1)^{|\Ya\lambda 1|+|\Ya\lambda 2|}.
\end{eqnarray}
Then, from
(\ref{eq:LargeNPrincipalSpecialization}) and
(\ref{eq:conjugateCauchy}),
it follows that
\begin{eqnarray}
\widetilde Z
&:=&
\sum_{\Ya\lambda 1,\Ya\lambda 2}
\ZlaQ{ \Ya\lambda 1,\Ya\lambda 2 }{ \Q_1,\Q_2 }qt
\ZlaQ{ {\Yav\lambda 2},{\Yav\lambda 1} }{ \Q_2^{-1},\Q_1^{-1} }tq
(\Lambda Q_{1,2})^{|\Ya\lambda 1|+|\Ya\lambda 2|}
\fla{\Ya\lambda 1}qt / \fla{\Ya\lambda 2}qt
\cr
&=&
\sum_{\Ya\lambda 1,\Ya\lambda 2}
\Pi_0\left(-v^{-1} Q_{1,2}\, q^{\Ya\lambda 1} t^{\rho},\ t^{{\Yav\lambda 2}}
q^{\rho}\right)^{-1}
\Pi_0\left(-v^{-1} Q_{2,1}\, t^{{\Yav\lambda 1}} q^{\rho},\ q^{\Ya\lambda 2}
t^{\rho}\right)^{-1}
\cr
&\times&
\prod_{s\in\Ya\lambda 2 }
v^{-1}\Lambda
{1\over \left(1-q^{a(s)}t^{\ell(s)+1}\right)\left(1-q^{-a(s)-1}t^{-\ell(s)}\right)}
\cr
&\times&
\prod_{s\in\Ya\lambda 1 }
v^{-1}\Lambda
{1\over \left(1-q^{a(s)}t^{\ell(s)+1}\right)\left(1-q^{-a(s)-1}t^{-\ell(s)}\right)}.
\label{eq:suiiZ}
\end{eqnarray}
If we factor out the $\Lambda$-independent part
$
\tZm{pert}{}
:=
\ZlaQ{\bullet,\bullet}{\Q_1,\Q_2}qt
\ZlaQ{\bullet,\bullet}{\Q_2^{-1},\Q_1^{-1}}tq
$,
then
$
\Zm{inst}{}
:= \widetilde Z/\tZm{pert}{}
$
agrees with the $SU(2)$ Nekrasov's formula
$\ZQ{inst}{0}qt{{\bf e}_1,{\bf e}_2}{\Lambda^{1\over 4}}$
in (\ref{eq:NekrasovZinstM}).
\FigSUii
\subsection{Pure $SU({N_c})$ partition function}
The pure $SU({N_c})$ partition function with Chern-Simons terms is written as follows.
Let
\begin{eqnarray}
&&
\hskip-20pt
\ZlaQ{\Ya\lambda {1},\cdots,\Ya\lambda {{N_c}}}{\Q_1,\cdots,\Q_{{N_c}}}qt
\cr
&:=&
\sum_{ \{\Ya\mu \a \} }
\prod_{\a =1}^{{N_c}}
\Ciio{\Ya\mu {\a -1}}{\Ya\lambda \a }{\Ya\mu \a }{q}{t}
\prod_{\a =1}^{{N_c}-1}
Q_{\a ,\a +1}^{|\Ya\mu \a |} \fla{\Ya\mu \a }qt
\cr
&=&
\sum_{ \{\Ya\mu \a \} }
\prod_{\a =1}^{{N_c}}\sum_{\Ya\sigma \a }
\iota P_{{\Yav\mu {\a -1}}/{\Yav\sigma \a }} \left(-t^{{\Yav\lambda \a }}
q^{\rho} ;t,q\right)
P_{\Ya\lambda \a } \left(t^{\rho};q,t\right)
P_{\Ya\mu \a /\Ya\sigma \a } \left(q^{\Ya\lambda \a } t^\rho ;q,t\right)
\prod_{\a =1}^{{N_c}-1}
v^{|\Ya\sigma \a |-|\Ya\mu \a |}
Q_{\a ,\a +1}^{|\Ya\mu \a |},
\cr
&&
\end{eqnarray}
with
$Q_{\a ,\b } = \QQ\a\b $
and
$\Ya\mu 0 = \Ya\mu {{N_c}} = 0$.
Note that $\Ya\sigma 1=\Ya\sigma {{N_c}}=0$.
From the OPE formula (\ref{eq:OPEFormula}), we have
\begin{equation}
\ZlaQ{\Ya\lambda {1},\cdots,\Ya\lambda {{N_c}}}{\Q_1,\cdots,\Q_{{N_c}}}qt
=
\prod_{\a <\b }
\Pi_0\left(-v^{-1} Q_{\a ,\b }\, q^{\Ya\lambda \a } t^{\rho},\ t^{{\Yav\lambda \b }}
q^{\rho}\right)^{-1}
\prod_{\a =1}^{{N_c}}
P_{\Ya\lambda \a } \left(t^{\rho};q,t\right).
\end{equation}
The dual part is
\begin{equation}
\ZlaQ{{\Yav\lambda {N_c}},\cdots,{\Yav\lambda 1}}{\Q_{N_c}^{-1},\cdots,\Q_1^{-1}}tq
=
\sum_{ \{\Ya\mu \a \} }
\prod_{\a =1}^{{N_c}}
\Cooi{\Ya\mu \a }{\Ya\lambda \a }{\Ya\mu {\a -1}}{q}{t}
\prod_{\a =1}^{{N_c}-1}
Q_{\a ,\a +1}^{|\Ya\mu \a |} \fla{\Ya\mu \a }qt ^{-1}
(-1)^{|\Ya\lambda \a |},
\end{equation}
with
$Q_{\alpha,\beta} := \QQ\alpha\beta$.
Then, using $\Lambda_{\a ,m}$ in
(\ref{eq:Lambdai}),
\begin{eqnarray}
\tZm{}{m}
&:=&
\sum_{\Ya\lambda {1},\cdots,\Ya\lambda {{N_c}}}
\ZlaQ{\Ya\lambda 1,\cdots,\Ya\lambda {N_c}}{\Q_1,\cdots,\Q_{{N_c}}}qt
\ZlaQ{{\Yav\lambda {N_c}},\cdots,{\Yav\lambda 1}}{\Q_{N_c}^{-1},\cdots,\Q_1^{-1}}tq
\prod_{\a =1}^{{N_c}}
\Lambda_{\a ,m}{}^{|\Ya\lambda \a |}
\fla{\Ya\lambda \a }qt ^{{N_c}-m-2\a +1}
\cr
&=&
\sum_{\Ya\lambda {1},\cdots,\Ya\lambda {{N_c}}}
\prod_{\a <\b }
\Pi_0\left(-v^{-1} Q_{\a ,\b }\, q^{\Ya\lambda \a } t^{\rho},\ t^{{\Yav\lambda \b }}
q^{\rho}\right)^{-1}
\Pi_0\left(-v^{-1} Q_{\b ,\a }\, t^{{\Yav\lambda \a }} q^{\rho},\ q^{\Ya\lambda \b }
t^{\rho}\right)^{-1}
\cr
&& \quad\times\quad
\prod_{\a =1}^{N_c}
\fla{\Ya\lambda \a }qt ^{-m}
\prod_{s\in\Ya\lambda \a }
{v^{-1}\Lambda^{2{N_c}}\left(-Q_\a \right)^{-m}
\over
\left(1-q^{a(s)}t^{\ell(s)+1}\right)\left(1-q^{-a(s)-1}t^{-\ell(s)}\right)},
\label{eq:suNZ}
\end{eqnarray}
with $\Ya\mu 0 = \Ya\mu {{N_c}} = \Ya\nu 0 = \Ya\nu {N_c} = 0$.
If we factor out the $\Lambda$-independent part
$
\tZm{pert}{}
:=
\ZlaQ{\bullet,\cdots,\bullet}{\Q_1,\cdots,\Q_{{N_c}}}qt
\ZlaQ{\bullet,\cdots,\bullet}{\Q_{N_c}^{-1},\cdots,\Q_1^{-1}}tq
$,
then
$
\Zm{inst}{m}
:= { \tZm{}{m}/\tZm{pert}{} }
$
agrees with the $SU({N_c})$ Nekrasov's formula
$\ZQ{inst}{m}qt{{\bf e}_1,\cdots,{\bf e}_{{N_c}}}{\Lambda}$
in (\ref{eq:NekrasovZinstM}).
\FigSUN
\setcounter{equation}{0} \section{$SU({N_c})$ with $N_f = 2 {N_c} $}
The partition functions with fundamental matters are also realized by
the refined topological vertex as follows.
Let
\begin{eqnarray}
&& \hskip -20pt
\ZlaQ{\Ya\lambda {1},\cdots,\Ya\lambda {2{N_c}-1}}{\Q_1,\cdots,\Q_{2{N_c}-1}}qt
\cr
&:=&
\sum_{ \{\Ya\mu \a \} }
\prod_{\a =1}^{{N_c}}
\Ciio{\Ya\mu {2\a -2}}{\Ya\lambda {2\a -1}}{\Ya\mu {2\a -1}}{q}{t}
\Cooi{\Ya\mu {2\a }}{\Ya\lambda {2\a }}{\Ya\mu {2\a -1}}{q}{t}
\prod_{\a =1}^{2{N_c}-1}
Q_{\a ,\a +1}^{|\Ya\mu \a |}
\cr
&=&
\sum_{ \{\Ya\mu \a \} }
\prod_{\a =1}^{{N_c}}
\sum_{\Ya\sigma {2\a -1}}
\iota P_{{\Yav\mu {2\a -2}}/{\Yav\sigma {2\a -1}}}
\left(-t^{{\Yav\lambda {2\a -1}}} q^{\rho} ;t,q\right)
P_{\Ya\lambda {2\a -1}} \left(t^{\rho};q,t\right)
P_{\Ya\mu {2\a -1}/\Ya\sigma {2\a -1}} \left(q^{\Ya\lambda {2\a -1}} t^\rho ;q,t\right)
\cr
&&\times
\sum_{\Ya\sigma {2\a }}
P_{{\Yav\mu {2\a -1}}/{\Yav\sigma {2\a }}}
\left(-t^{{\Yav\lambda {2\a }}} q^{\rho} ;t,q\right)
P_{{\Yav\lambda {2\a }}} \left(-q^{\rho};t,q\right)
\iota P_{\Ya\mu {2\a }/\Ya\sigma {2\a }} \left(q^{\Ya\lambda {2\a }} t^\rho ;q,t\right)
\cr
&&\times
\prod_{\a =1}^{{N_c}}
v^{|\Ya\sigma {2\a -1}|-|\Ya\sigma {2\a }|}
\prod_{\a =1}^{2{N_c}-1}
Q_{\a ,\a +1}^{|\Ya\mu \a |},
\end{eqnarray}
with
$\Ya\mu 0 = \Ya\mu {2{N_c}} = \Ya\sigma 0 = \Ya\sigma {2{N_c}} = 0$.
As in the pure $SU({{N_c}})$ case,
from
(\ref{eq:OPEFormula}) we have
\begin{eqnarray}
&& \hskip -20pt
\ZlaQ{\Ya\lambda {1},\cdots,\Ya\lambda {2{N_c}-1}}{\Q_1,\cdots,\Q_{2{N_c}-1}}qt
\cr
&=&
\prod_{\a <\b }
\Pi_0\left(-
v^{(-1)^\a + (-1)^\b \over 2}
Q_{\a ,\b }\, q^{\Ya\lambda \a } t^{\rho},\ t^{{\Yav\lambda \b }}q^{\rho}\right)^{(-1)^{\a +\b +1}}
\prod_{\a =1}^{{N_c}}
P_{ \Ya\lambda {2\a -1}} \left(t^{\rho};q,t\right)
P_{{\Yav\lambda {2\a }}} \left(-q^{\rho};t,q\right)
\cr
&=&
\prod_{\a <\b }
{
\Pi_0\left(-Q_{2\a ,2\b -1}\, q^{\Ya\lambda {2\a }} t^{\rho},\
t^{{\Yav\lambda {2\b -1}}} q^{\rho}\right)
\Pi_0\left(-Q_{2\a -1,2\b }\, q^{\Ya\lambda {2\a -1}} t^{\rho},\
t^{{\Yav\lambda {2\b }}} q^{\rho}\right)
\over
\Pi_0\left(-vQ_{2\a ,2\b }\, q^{\Ya\lambda {2\a }} t^{\rho},\
t^{{\Yav\lambda {2\b }}} q^{\rho}\right)
\Pi_0\left(-v^{-1} Q_{2\a -1,2\b -1}\, q^{\Ya\lambda {2\a -1}} t^{\rho},\
t^{{\Yav\lambda {2\b -1}}}q^{\rho}\right)
}
\cr
&&\times
\prod_{\a =1}^{{N_c}}
P_{ \Ya\lambda {2\a -1}} \left(t^{\rho};q,t\right)
P_{{\Yav\lambda {2\a }}} \left(-q^{\rho};t,q\right).
\end{eqnarray}
The dual part is
\begin{equation}
\ZlaQ{{\Yav\lambda {2{N_c}-1}},\cdots,{\Yav\lambda 1}}{{\Q'}_{2{N_c}-1}^{-1},\cdots,{\Q'}_1^{-1}}tq
=
\sum_{ \{\Ya\nu \a \} }
\prod_{\a =1}^{{N_c}}
\Cooi{\Ya\nu {2\a -1}}{\Ya\lambda {2\a -1}}{\Ya\nu {2\a -2}}{q}{t}
\Ciio{\Ya\nu {2\a -1}}{\Ya\lambda {2\a }}{\Ya\nu {2\a }}{q}{t}
\prod_{\a =1}^{2{N_c}-1}
{Q'}_{\a ,\a +1}^{|\Ya\nu \a |}
(-1)^{|\Ya\lambda {2\a -1}|},
\end{equation}
with $Q'_{\a ,\b } = \QpQp\a\b $ and $\Q'_{2\a -1} = \Q_{2\a -1}$.
When the
$\Ya\lambda {2\a }$ for even integers $2\a $ are trivial representations,
let
\begin{eqnarray}
\widetilde Z
&:=&
\sum_{\Ya\lambda 1,\Ya\lambda 3,\cdots,\Ya\lambda {2{N_c}-1}}
\ZlaQ{\Ya\lambda 1,\bullet,\Ya\lambda 3,\cdots,\bullet,\Ya\lambda {2{N_c}-1}}{\Q_1,\cdots,\Q_{2{N_c}-1}}qt
\ZlaQ{{\Yav\lambda {2{N_c}-1}},\bullet,\cdots,{\Yav\lambda 3},\bullet,{\Yav\lambda 1}}{{\Q'}_{2{N_c}-1}^{-1},\cdots,{\Q'}_1^{-1}}tq
\prod_{\a =1}^{{N_c}}
\Lambda_\a ^{|\Ya\lambda {2\a -1}|}
\fla{\Ya\lambda {2\a -1}}qt ^{-1},
\cr
\Lambda_\a
&:=&
v^{-1}\Lambda^{2{N_c}}
\prod_{\b =1}^{\a -1} { \Q _{2\b -1}\over \Q'_{2\b } }
\prod_{\b =\a }^{{N_c}} { \Q'_{2\b }\over \Q _{2\b -1} }.
\end{eqnarray}
In addition, let
$Z^{\rm inst} := \widetilde Z/\widetilde Z^{\rm pert}$
with
$
\widetilde Z^{\rm pert} :=
\ZlaQ{\bullet,\cdots,\bullet}{\Q_1,\cdots,\Q_{2{N_c}-1}}qt
\ZlaQ{\bullet,\cdots,\bullet}{{\Q'}_{2{N_c}-1}^{-1},\cdots,{\Q'}_1^{-1}}tq
$.
Then
\begin{eqnarray}
Z^{\rm inst}
&=&
\sum_{ \{ \Ya\lambda {2\a -1} \} }
{
\prod_{\a =1}^{{N_c}}
\Lambda_\a ^{|\Ya\lambda {2\a -1}|}
\fla{\Ya\lambda {2\a -1}}qt ^{-1}
\over
\prod_{\a <\b }
\left(
\Nek{\Ya\lambda \a }{\Ya\lambda \b }{Q_{\a ,\b }}qt
\Nek{{\Yav\lambda \b }}{{\Yav\lambda \a }}{Q'_{\a ,\b }}tq
\right)^{(-1)^{\a +\b }}
\prod_{\a =1}^{{N_c}}
\Nek{\Ya\lambda {2\a -1}}{\Ya\lambda {2\a -1}}1qt
}
\cr
&=&
\sum_{ \{ \Ya\lambda {2\a -1} \} }
{
\prod_{\a =1}^{{N_c}}
\Lambda^{2{N_c} |\Ya\lambda {2\a -1}|}
\over
\prod_{\a <\b }
\left(
\Nek{\Ya\lambda \a }{\Ya\lambda \b }{Q_{\a ,\b }}qt
\Nek{{\Ya\lambda \b }}{{\Ya\lambda \a }}{Q'_{\b ,\a }}qt
\right)^{(-1)^{\a +\b }}
\prod_{\a =1}^{{N_c}}
\Nek{\Ya\lambda {2\a -1}}{\Ya\lambda {2\a -1}}1qt
}, \cr
&&
\end{eqnarray}
gives the $SU({N_c} )$ partition function with $N_f=2{N_c} $.
\FigSUNiiN
\section*{Acknowledgments}
We would like to thank T. Eguchi, A. Iqbal, Y. Konishi, H. Konno, S. Minabe,
S. Moriyama, H. Nakajima, N. Nekrasov, H. Ochiai, N. Reshetikhin, J. Shiraishi,
M. Taki, K. Yoshioka and C. Vafa
for discussions and helpful correspondence.
In particular, we are grateful to M. Taki for sharing his result \cite{Taki2}
before he submitted the paper to arXiv.
Part of the results in this paper was presented at the following workshops:
``Infinite analysis
2005'' (27--30 September, 2005) at
Tambara Institute of Mathematical Sciences, University of Tokyo;
``Strings 2006'' (19--24 June, 2006) at
the Beijing Friendship Hotel; and
``Progress of String Theory and Quantum Field Theory''
(7--10 December, 2007) at Osaka City University.
We
would like to thank the organizers for the invitation to the workshops and for the hospitality.
The work of H.K. is supported in part by a Grant-in-Aid for Scientific Research
[\#19654007] from the Japan Ministry of Education, Culture, Sports, Science and Technology.
\setcounter{equation}{0} \section*{Appendix A : Proof of the Proposition in Sect. \ref{sec:PartitionFunctionCS} }
\renewcommand{\theequation}{A.\arabic{equation}}\setcounter{equation}{0}
\renewcommand{\thesubsection}{A.\arabic{subsection}}\setcounter{subsection}{0}
\subsection{Combinatorial identities}
We have the following formula for the Young diagrams,
which translates the summation over squares into a summation over rows:
\\
{\bf Lemma.}
For all integers
$\NNi\lambda\geq\ell(\lambda)$ and
$\NNi\mu\geq\ell(\mu)$,
\begin{equation}
(1-q)\sum_{(i,j)\in\lambda} q^{j-1} t^{-i+1}
=
\sum_{i=1}^{\NNi\lambda} \left(1-q^{\lambda_i}\right) t^{-i+1},
\label{eq:partitionFormulaI}
\end{equation}
\begin{equation}
(1-q)\sum_{(i,j)\in\mu } q^{\lambda_i-j} t^{\mu^\vee_j-i}
=
\left(
\sum_{i=1}^{\NNi\mu}
\sum_{j=i}^{\NNi\mu}
-t^{-1}
\sum_{i=1}^{\NNi\mu}
\sum_{j=i+1}^{\NNi\mu + 1}
\right)
q^{\lambda_i-\mu_j} t^{j-i}.
\label{eq:partitionFormulaII}
\end{equation}
\noindent{\it Proof.\hskip10pt}
(\ref{eq:partitionFormulaI}) follows from
$\sum_{j=1}^{\lambda_i} q^{j-1} = {(1-q^{\lambda_i})/(1-q)}$.
The left-hand side of (\ref{eq:partitionFormulaII}) reduces to
\begin{eqnarray}
(1-q)
\sum_{i=1}^{\ell(\mu)}
\sum_{k=0}^{\ell(\mu)-i}
t^k q^{\lambda_i-\mu_{i+k}}
\sum_{\ell=0}^{\mu_{i+k}-\mu_{i+k+1}-1} q^\ell
&=&
\sum_{i=1}^{\ell(\mu)}
\sum_{k=0}^{\ell(\mu)-i}
t^k \left(q^{\lambda_i-\mu_{i+k}} - q^{\lambda_i-\mu_{i+k+1}} \right)
\cr
&=&
\sum_{i=1}^{\NNi\mu}
\sum_{j=i}^{\NNi\mu}
t^{j-i} \left(q^{\lambda_i-\mu_j} - q^{\lambda_i-\mu_{j+1}} \right),
\end{eqnarray}
which equals the right-hand side of (\ref{eq:partitionFormulaII}).
\hfill\fbox{}
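Both identities are straightforward to test numerically; a minimal sketch,
with the partitions padded by zeros:
\begin{verbatim}
# Numerical check of the two combinatorial identities above.
q, t = 0.3, 0.7
lam, mu, N = (3, 1), (2, 2, 1), 4     # N >= l(lam), l(mu)

def conjugate(p):
    return tuple(sum(1 for pi in p if pi > j)
                 for j in range(p[0])) if p else ()

def part(p, i):                       # p_i with zero padding
    return p[i - 1] if i <= len(p) else 0

# first identity
lhs1 = (1 - q) * sum(q**(j - 1) * t**(1 - i)
                     for i in range(1, len(lam) + 1)
                     for j in range(1, lam[i - 1] + 1))
rhs1 = sum((1 - q**part(lam, i)) * t**(1 - i) for i in range(1, N + 1))

# second identity
muc = conjugate(mu)
lhs2 = (1 - q) * sum(q**(part(lam, i) - j) * t**(muc[j - 1] - i)
                     for i in range(1, len(mu) + 1)
                     for j in range(1, mu[i - 1] + 1))
rhs2 = sum(q**(part(lam, i) - part(mu, j)) * t**(j - i)
           for i in range(1, N + 1) for j in range(i, N + 1)) \
     - sum(q**(part(lam, i) - part(mu, j)) * t**(j - i - 1)
           for i in range(1, N + 1) for j in range(i + 1, N + 2))
print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))   # both ~ 1e-16
\end{verbatim}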
From
\begin{equation}
\sum_{1\leq i < j \leq N+1}
q^{\lambda_i-\mu_j} t^{j-i}
=
\sum_{1\leq i < j \leq N}
q^{\lambda_i-\mu_j} t^{j-i}
+
\sum_{1\leq i \leq N}
q^{\lambda_i-\mu_{N+1}} t^{N+1-i},
\end{equation}
(\ref{eq:partitionFormulaII}) is rewritten as
\begin{equation}
(1-q)\sum_{(i,j)\in\mu } q^{\lambda_i-j} t^{\mu^\vee_j-i+1}
=
(t-1)\sum_{1\leq i<j\leq {\NNi\mu}} q^{\lambda_i-\mu_j} t^{j-i} +
t\sum_{i=1}^{\NNi\mu} q^{\lambda_i} \left(q^{-\mu_i}- t^{N-i}\right).
\label{eq:partitionFormulaIII}
\end{equation}
Note that if $t=q$ and $\lambda= \mu$,
(\ref{eq:partitionFormulaIII}) reduces to
the formula of the Maya diagram:
the length from a black box to a white one or black one is
$(\lambda_i-i) + (\lambda^\vee_j-j)+1$ (the hook length) or
$(\lambda_i-i) - (\lambda_j-j)$, respectively:
\begin{equation}
\sum_{(i,j)\in\lambda}
q^{(\lambda_i-i) + (\lambda^\vee_j-j)+1}
+
\sum_{1\leq i<j\leq {\NNi\lambda} }
q^{(\lambda_i-i) - (\lambda_j-j)}
=
\sum_{1\leq i\leq {\NNi\lambda} }
\sum_{i< j\leq \lambda_i + {\NNi\lambda} }
q^{j-i}.
\end{equation}
By using (\ref{eq:partitionFormulaI}), we have:
\\
{\bf Lemma.}
For all integers
$\NNi\lambda \geq\ell(\lambda)$ and
$\NNii\lambda\geq\ell(\lambda^\vee)$,
\begin{equation}
\left( t^{{1\over2}}-t^{-{1\over2}}\right)
\sum_{i=1}^{\NNi\lambda} \left(q^{\lambda_i} -1\right)t^{{1\over2}-i}
+
\left(q^{{1\over2}}-q^{-{1\over2}}\right)
\sum_{i=1}^{\NNii\lambda} \left(t^{-\lambda^\vee_i}-1\right)q^{i-{1\over2} }
= 0.
\label{eq:ACsum}
\end{equation}
\noindent{\it Proof.\hskip10pt}
Similar to (\ref{eq:partitionFormulaI}),
for all integers
$\NNii\lambda\geq\ell(\lambda^\vee)$,
\begin{equation}
\sum_{i=1}^{\NNii\lambda} \left(1-t^{-\lambda^\vee_i}\right) q^{i-1}
=
(1-t^{-1})\sum_{(i,j)\in\lambda^\vee } t^{1-j} q^{i-1}
=
(1-t^{-1})\sum_{(i,j)\in\lambda} t^{1-i} q^{j-1}.
\end{equation}
Therefore, with (\ref{eq:partitionFormulaI}),
\begin{equation}
(1-q)
\sum_{i=1}^{\NNii\lambda} \left(1-t^{-\lambda^\vee_i}\right) q^{i-1}
=
(1-t^{-1})
\sum_{i=1}^{\NNi\lambda}\left(1-q^{\lambda_i}\right) t^{1-i}.
\end{equation}
\hfill\fbox{}
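A direct numerical check of (\ref{eq:ACsum}) for a sample partition:
\begin{verbatim}
# Check that the two regularized sums cancel.
q, t = 0.3, 0.7
lam = (3, 1, 1)
lamc = tuple(sum(1 for li in lam if li > j) for j in range(lam[0]))
Nr, Nc = len(lam) + 2, len(lamc) + 2        # any N >= l(lam), l(lam^vee)
s1 = sum((q**(lam[i - 1] if i <= len(lam) else 0) - 1) * t**(0.5 - i)
         for i in range(1, Nr + 1))
s2 = sum((t**(-(lamc[i - 1] if i <= len(lamc) else 0)) - 1) * q**(i - 0.5)
         for i in range(1, Nc + 1))
total = (t**0.5 - t**(-0.5)) * s1 + (q**0.5 - q**(-0.5)) * s2
print(abs(total))                            # ~ 1e-16
\end{verbatim}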
In terms of the power sum function (\ref{eq:powersum}),
(\ref{eq:ACsum}) is written as
\begin{equation}
\left( t^{{n\over 2}}-t^{-{n\over 2}}\right)
p_n \left(q^{\lambda}t^{\rho}, L t^{-\rho}\right)
+
\left( q^{{n\over 2}}-q^{-{n\over 2}}\right)
p_n \left(t^{-\lambda^\vee}q^{-\rho}, L q^{\rho}\right)
= 0,
\qquad
L\in{\mathbb C}.
\label{eq:ACsumP}
\end{equation}
Note that if $t=q$,
(\ref{eq:ACsum}) reduces to the formula of the Maya diagram:
the black boxes and the white ones are at
$\lambda_i -i +{1\over2}$ and $-(\lambda^\vee_i -i +{1\over2})$
of the Maya diagram, respectively:
\begin{equation}
\sum_{i=1}^{\NNi\lambda} q^{\lambda_i -i +{1\over2} }
+
\sum_{i=1}^{\NNii\lambda} q^{-\lambda^\vee_i +i -{1\over2} }
=
\sum_{i=1-{\NNi\lambda}}^{\NNii\lambda} q^{i-{1\over2} }.
\label{eq:Maya}
\end{equation}
Hence
$
\sum_{i\geq 1} q^{\lambda_i -i +{1\over2} }
+
\sum_{i\geq 1} q^{-\lambda^\vee_i +i -{1\over2} }
=
\sum_{i\in \bZ} q^{i-{1\over2} }
=
q^{-{1\over2} } \delta(q).
$
\subsection{Factors in Nekrasov's formula}
We have the following formula for the Young diagrams,
which implies the equivalence among several expressions of Nekrasov's formula.
\\
{\bf Proposition.}
The following
$\fnek{L_1}{L_2}i\pm{\lambda}{\mu}qt $'s ($i=1,2,3$)
are all the same.
\begin{eqnarray}
v\fnek{L_1}{L_2}1+{\lambda}{\mu}qt
&:=&
\sum_{(i,j)\in\mu } \left(q^{\lambda_i}-L_1\right) q^{{1\over2}-j} t^{\mu^\vee_j-i+{1\over2}}
+
\sum_{(i,j)\in\lambda} \left(q^{-\mu_i}-L_2\right) q^{j-{1\over2}} t^{-\lambda^\vee_j+i-{1\over2}},
\cr
v\fnek{L_1}{L_2}1-{\lambda}{\mu}qt
&:=&
\sum_{(i,j)\in\lambda} q^{\lambda_i-j+{1\over2}} t^{{1\over2}-i} \left(t^{\mu^\vee_j}-L_2\right)
+
\sum_{(i,j)\in\mu } q^{-\mu_i+j-{1\over2}} t^{i-{1\over2}} \left(t^{-\lambda^\vee_j}-L_1\right),
~~~~~~~~
\end{eqnarray}
\begin{eqnarray}
v\fnek{L_1}{L_2}2+{\lambda}{\mu}qt
&:=&
p_1\left(q^{ \lambda } t^{ \rho}, L_1 t^{-\rho}\right)
p_1\left(t^{ \mu^\vee } q^{ \rho}, L_2 q^{-\rho}\right)
-
p_1\left( t^{ \rho}, L_1 t^{-\rho}\right)
p_1\left( q^{ \rho}, L_2 q^{-\rho}\right),
\cr
v\fnek{L_1}{L_2}2-{\lambda}{\mu}qt
&:=&
p_1\left(t^{-\lambda^\vee} q^{-\rho}, L_1 q^{\rho}\right)
p_1\left(q^{-\mu } t^{-\rho}, L_2 t^{ \rho}\right)
-
[\ \lambda = \mu = 0\ ],
\end{eqnarray}
\begin{eqnarray}
v\fnek{L_1}{L_2}3+{\lambda}{\mu}qt
&:=&\left\{
p_1\left(q^{ \lambda } t^{ \rho},\ L_1 t^{-\rho}\right)
p_1\left(q^{-\mu } t^{-\rho},\ L_2 t^{ \rho}\right)
-
[\ \lambda = \mu = 0\ ]
\right\}{t^{-{1\over2}} - t^{{1\over2}}\over q^{1\over2} - q^{-{1\over2}}},
\cr
v\fnek{L_1}{L_2}3-{\lambda}{\mu}qt
&:=&\left\{
p_1\left(t^{-\lambda^\vee} q^{-\rho},\ L_1 q^{ \rho}\right)
p_1\left(t^{ \mu^\vee } q^{ \rho},\ L_2 q^{-\rho}\right)
-
[\ \lambda = \mu = 0\ ]
\right\}
{q^{1\over2} - q^{-{1\over2}}\over t^{-{1\over2}} - t^{{1\over2}}},
\cr
&&
\end{eqnarray}
with $v := (q/t)^{{1\over2}}$
and $L_1$, $L_2\in{\mathbb C}$.
Here
$p_1$ is the power sum function in (\ref{eq:powersum}) and
$
[\ \lambda = \mu = 0\ ]
$'s
stand for the foregoing terms with $\lambda = \mu = 0$ substituted.
\noindent{\it Proof.\hskip10pt}
It is clear that
\begin{eqnarray}
\fnek{L_1}{L_2}1\pm{\lambda}{\mu}qt
v
&=&
\fnek{L_2}{L_1}1\pm{\mu}{\lambda}{q^{-1}}{t^{-1}}
/v
=
\fnek{L_2}{L_1}1\mp{\mu^\vee}{\lambda^\vee}tq
/v,
\cr
\fnek{L_1}{L_2}2\pm{\lambda}{\mu}qt
v
&=&
\fnek{L_2}{L_1}2\mp{\mu}{\lambda}{q^{-1}}{t^{-1}}
/v
=
\fnek{L_2}{L_1}2\pm{\mu^\vee}{\lambda^\vee}tq
/v,
\cr
\fnek{L_1}{L_2}3\pm{\lambda}{\mu}qt
v
&=&
\fnek{L_2}{L_1}3\pm{\mu}{\lambda}{q^{-1}}{t^{-1}}
/v
=
\fnek{L_2}{L_1}3\mp{\mu^\vee}{\lambda^\vee}tq
/v.
\end{eqnarray}
Therefore, it suffices to show
$
\neknon 1+{\lambda}{\mu}qt
=\neknon 2+{\lambda}{\mu}qt
=\neknon 3+{\lambda}{\mu}qt
$.
First, applying (\ref{eq:ACsumP}) yields
$
\neknon 2+{\lambda}{\mu}qt
=\neknon 3+{\lambda}{\mu}qt
$.
Next, we prove that
$
\neknon 1+{\lambda}{\mu}qt
=\neknon 3+{\lambda}{\mu}qt
$.
Multiplying (\ref{eq:partitionFormulaII}) by $t$, we have,
for all integers
$\NNi{\lambda\mu}\geq\ell(\lambda)$, $\ell(\mu)$,
\begin{equation}
\sum_{(i,j)\in\mu } q^{\lambda_i-j} t^{\mu^\vee_j-i+1}
=
{1\over 1-q}
\left[\
t
\sum_{i=1}^{\NNi{\lambda\mu}}
\sum_{j=i}^{\NNi{\lambda\mu}}
-
\sum_{i=1}^{\NNi{\lambda\mu}}
\sum_{j=i+1}^{\NNi{\lambda\mu} + 1}
\ \right]
q^{\lambda_i-\mu_j} t^{j-i}.
\end{equation}
By replacing
$q$, $t$ and $\lambda$ in
(\ref{eq:partitionFormulaII}) with
$1/q$, $1/t$ and $\mu $, respectively,
\begin{equation}
\sum_{(i,j)\in\lambda } q^{-\mu_i+j-1} t^{-\lambda^\vee_j+i}
=
{1\over 1-q}
\left[\
t
\sum_{j=1}^{\NNi{\lambda\mu}}
\sum_{i=j+1}^{\NNi{\lambda\mu} + 1}
-
\sum_{j=1}^{\NNi{\lambda\mu}}
\sum_{i=j}^{\NNi{\lambda\mu}}
\ \right]
q^{\lambda_i-\mu_j} t^{j-i}.
\end{equation}
Adding these two equations, we have
\begin{eqnarray}
\fnek{L_1}{L_2}1+{\lambda}{\mu}qt
&+&
L_1\sum_{(i,j)\in\mu } q^{-j} t^{i}
+
L_2\sum_{(i,j)\in\lambda} q^{j-1} t^{1-i}
\cr
&=&
\sum_{(i,j)\in\mu } q^{\lambda_i-j} t^{\mu^\vee_j-i+1}
+\sum_{(i,j)\in\lambda } q^{-\mu_i+j-1} t^{-\lambda^\vee_j+i}
\cr
&=&
{1\over 1-q}
\left[\
t
\sum_{i=1}^{\NNi{\lambda\mu}+1}
\sum_{j=1}^{\NNi{\lambda\mu}}
-
\sum_{i=1}^{\NNi{\lambda\mu}}
\sum_{j=1}^{\NNi{\lambda\mu}+1}
\ \right]
q^{\lambda_i-\mu_j} t^{j-i}
\cr
&=&
{1\over 1-q}
\left[\
t
\sum_{i=1}^{\NNi{\lambda\mu}+1}
\sum_{j=1}^{\NNi{\lambda\mu}}
-
\sum_{i=1}^{\NNi{\lambda\mu}}
\sum_{j=1}^{\NNi{\lambda\mu}+1}
\ \right]
\left(q^{\lambda_i-\mu_j} - 1\right)t^{j-i}.
\label{eq:nekiiiNlambdamu}
\end{eqnarray}
Thus, the following lemma with
$\NNi{\lambda}=\NNi{\mu}=\NNi{\lambda\mu}$
shows that
$
\neknon 1+{\lambda}{\mu}qt
=\neknon 3+{\lambda}{\mu}qt
$.
\hfill\fbox{}
{\bf Lemma.}
For any integers
$\NNi\lambda \geq \ell(\lambda)$,
$\NNi\mu \geq \ell(\mu)$ and
$\NNii\mu \geq \ell(\mu^\vee)$,
\begin{eqnarray}
\fnek{L_1}{L_2}2+{\lambda}{\mu}qt
&=&
\sum_{i=1}^{\NNi\lambda} \sum_{j=1}^{\NNii\mu}
\left(q^{\lambda_i} t^{\mu^\vee_j} - 1\right)
t^{1-i} q^{-j}
\cr
&+&
\sum_{(i,j)\in\lambda}
q^{\lambda_i-j} t^{1-i}
\left(q^{-\NNii\mu}-L_2\right)
+
\sum_{(i,j)\in\mu }
t^{\mu^\vee_j-i+1} q^{-j}
\left(t^{-\NNi\lambda}-L_1\right),
\cr
\fnek{L_1}{L_2}3+{\lambda}{\mu}qt
&=&
{1\over 1-q}
\left[\ t
\sum_{i=1}^{\NNi\lambda + 1} \sum_{j=1}^{\NNi\mu } -
\sum_{i=1}^{\NNi\lambda } \sum_{j=1}^{\NNi\mu + 1}
\ \right]
\left(q^{\lambda_i-\mu_j} - 1\right) t^{j-i }
\cr
&-&
L_1\sum_{(i,j)\in\mu } q^{-j} t^{i}
-
L_2\sum_{(i,j)\in\lambda} q^{j-1} t^{1-i}.
\end{eqnarray}
\noindent{\it Proof.\hskip10pt}
\begin{eqnarray}
&&\hskip-10pt
t^{-1}\fnek{L_1}{L_2}2+{\lambda}{\mu}qt
\cr
&=&
\left(
\sum_{i=1}^{\NNi\lambda}
\left(q^{\lambda_i} -1\right) t^{-i} - {1-L_1 \over 1-t}
\right)\left(
\sum_{j=1}^{\NNii\mu}
\left(t^{\mu_j^\vee} -1\right) q^{-j} - {1-L_2 \over 1-q}
\right)
-
{1-L_1\over 1-t}
{1-L_2\over 1-q}
\cr
&=&
\left(
\sum_{i=1}^{\NNi\lambda}
q^{\lambda_i} t^{-i} - {t^{-\NNi\lambda}-L_1 \over 1-t}
\right)\left(
\sum_{j=1}^{\NNii\mu}
t^{\mu_j^\vee} q^{-j} - {q^{-\NNii\mu}-L_2 \over 1-q}
\right)
-
[\ \lambda = \mu = 0\ ]
\cr
&=&
\sum_{i=1}^{\NNi\lambda} \sum_{j=1}^{\NNii\mu}
\left(q^{\lambda_i} t^{\mu^\vee_j} - 1\right)
t^{1-i} q^{-j}
+
\sum_{i=1}^{\NNi\lambda} {1-q^{\lambda_i}\over 1-q} t^{1-i}
\left(q^{-\NNii\mu}-L_2\right)
+
\sum_{j=1}^{\NNii\mu} {t^{\mu_j^\vee}-1\over 1-t^{-1}} q^{-j}
\left(t^{-\NNi\lambda}-L_1\right).
\nonumber
\end{eqnarray}
\begin{eqnarray}
&&\hskip-12pt
{1-q \over t-1}
\fnek{L_1}{L_2}3+{\lambda}{\mu}qt
\cr
&=&
\left(
\sum_{i=1}^{\NNi\lambda}
\left(q^{\lambda_i} -1\right) t^{-i} - {1-L_1 \over 1-t}
\right)\left(
\sum_{j=1}^{\NNi\mu}
\left(q^{-\mu_j} -1\right) t^{j} - {1-L_2 \over 1-t^{-1}}
\right)
-
{1-L_1\over 1-t}
{1-L_2\over 1-t^{-1}}
\cr
&=&
\left(
\sum_{i=1}^{\NNi\lambda}
q^{\lambda_i} t^{-i} - {t^{-\NNi\lambda}-L_1\over 1-t}
\right)\left(
\sum_{j=1}^{\NNi\mu}
q^{-\mu_j} t^{j} - {t^{\NNi\mu}-L_2 \over 1-t^{-1}}
\right)
-
[\ \lambda = \mu = 0\ ]
\cr
&=&
\sum_{i=1}^{\NNi\lambda}
\sum_{j=1}^{\NNi\mu}
\left( q^{\lambda_i-\mu_j} - 1 \right) t^{j-i}
-
\sum_{j=1}^{\NNi\mu}
\left( q^{-\mu_j} -1 \right) t^j {t^{-\NNi\lambda}-L_1 \over 1-t}
-
\sum_{i=1}^{\NNi\lambda}
\left( q^{\lambda_i} -1 \right)t^{-i} {t^{\NNi\mu}-L_2 \over 1-t^{-1}}.
\nonumber
\end{eqnarray}
\hfill\fbox{}
Note that
$\neknon 2\pm{\lambda}{\mu}qt $ and
$\neknon 3\pm{\lambda}{\mu}qt $
are independent of $\NNi\lambda$'s,
if they are sufficiently large.
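These finite expressions are also convenient for machine verification;
the following sketch compares
$v\fnek{L_1}{L_2}1+{\lambda}{\mu}qt $ with
$v\fnek{L_1}{L_2}2+{\lambda}{\mu}qt $ numerically, with the regularized
$p_1$'s read off as in the proof above:
\begin{verbatim}
# Numerical check that v f^{1+} and v f^{2+} coincide.
q, t, L1, L2 = 0.3, 0.7, 0.2, 0.5
lam, mu = (3, 1), (2, 2, 1)

def conjugate(p):
    return tuple(sum(1 for pi in p if pi > j)
                 for j in range(p[0])) if p else ()

def part(p, i):
    return p[i - 1] if i <= len(p) else 0

lamc, muc = conjugate(lam), conjugate(mu)

vf1 = sum((q**part(lam, i) - L1) * q**(0.5 - j) * t**(muc[j-1] - i + 0.5)
          for i in range(1, len(mu) + 1) for j in range(1, mu[i-1] + 1)) \
    + sum((q**(-part(mu, i)) - L2) * q**(j - 0.5) * t**(-lamc[j-1] + i - 0.5)
          for i in range(1, len(lam) + 1) for j in range(1, lam[i-1] + 1))

def p1(parts, L, u):
    """Regularized p_1(u^{parts} u^rho, L u^{-rho}), u = t or q."""
    return sum((x - 1) * u**(0.5 - i)
               for i, x in enumerate(parts, start=1)) \
           + (1 - L) / (u**0.5 - u**(-0.5))

A = p1([q**li for li in lam], L1, t)
B = p1([t**mj for mj in muc], L2, q)
vf2 = A * B - p1([], L1, t) * p1([], L2, q)
print(abs(vf1 - vf2))   # ~ 1e-15
\end{verbatim}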
Let $\fnekrasov{L_1}{L_2}\lambda\mu qt := \fnek{L_1}{L_2}i\pm{\lambda}{\mu}qt $;
then it satisfies
\begin{equation}
\fnekrasov{L_1}{L_2}{\lambda}{\mu}qt
v
=
\fnekrasov{L_2}{L_1}{\mu}{\lambda}{q^{-1}}{t^{-1}}
/v
=
\fnekrasov{L_2}{L_1}{\mu^\vee}{\lambda^\vee}tq
/v.
\label{eq:nDual}
\end{equation}
Let
\begin{equation}
\fNek{L_1}{L_2}i{\pm}{\lambda}{\mu}{vQ}qt
:=
\Exp{-
\sum_{n>0}{Q^n\over n}v^n
\fnek{L_1^n}{L_2^n}i{\pm}{\lambda}{\mu}{q^n}{t^n}
},
\end{equation}
then we have:
\\
{\bf Corollary.}
The following
$\fNek{L_1}{L_2}i\pm{\lambda}{\mu}{Q}qt $'s ($i=1,2,3$)
are all the same.
\begin{eqnarray}
\fNek{L_1}{L_2}1+{\lambda}{\mu}{vQ}qt
&=&
\prod_{(i,j)\in\mu }
{
1 - Q\, q^{\lambda_i-j+{1\over2}} t^{\mu^\vee_j-i+{1\over2}}
\over
1 - QL_1\, q^{{1\over2}-j} t^{i-{1\over2}}
}
\prod_{(i,j)\in\lambda}
{
1 - Q\, q^{-\mu_i+j-{1\over2}} t^{-\lambda^\vee_j+i-{1\over2}}
\over
1 - QL_2\, q^{j-{1\over2}} t^{{1\over2}-i}
},
\cr
\fNek{L_1}{L_2}1-{\lambda}{\mu}{vQ}qt
&=&
\prod_{(i,j)\in\lambda}
{
1 - Q\, q^{\lambda_i-j+{1\over2}} t^{\mu^\vee_j-i+{1\over2}}
\over
1 - QL_2\, q^{j-{1\over2}} t^{{1\over2}-i}
}
\prod_{(i,j)\in\mu }
{
1 - Q\, q^{-\mu_i+j-{1\over2}} t^{-\lambda^\vee_j+i-{1\over2}}
\over
1 - QL_1\, q^{{1\over2}-j} t^{i-{1\over2}}
},
~~~~~~~
\end{eqnarray}
\begin{eqnarray}
\fNek{L_1}{L_2}2+{\lambda}{\mu}{vQ}qt
&=&
{
\Pi_0\left(
- Q\,
\left\{q^{\lambda}t^{\rho},L_1 t^{-\rho}\right\},
\
\left\{t^{\mu^\vee}q^{\rho}, L_2 q^{-\rho}\right\}
\right)
\over
\Pi_0\left(
- Q\,
\left\{t^{\rho},L_1 t^{-\rho}\right\},
\
\left\{q^{\rho}, L_2 q^{-\rho}\right\}
\right)
},
\cr
\fNek{L_1}{L_2}2-{\lambda}{\mu}{vQ}qt
&=&
{
\Pi_0\left(
- Q\,
\left\{t^{-\lambda^\vee}q^{-\rho},L_1 q^{\rho}\right\},
\
\left\{q^{-\mu}t^{-\rho}, L_2 t^{\rho}\right\}
\right)
\over
\Pi_0\left(
- Q\,
\left\{q^{-\rho},L_1 q^{\rho}\right\},
\
\left\{t^{-\rho}, L_2 t^{\rho}\right\}
\right)
},
\end{eqnarray}
\begin{eqnarray}
\fNek{L_1}{L_2}3+{\lambda}{\mu}{vQ}qt
&=&
{
\Pi\left(
vQ\,
\left\{q^{\lambda}t^{\rho},L_1 t^{-\rho}\right\},
\
\left\{q^{-\mu}t^{-\rho}, L_2 t^{\rho}\right\}
;q,t\right)
\over
\Pi\left(
vQ\,
\left\{t^{\rho},L_1 t^{-\rho}\right\},
\
\left\{t^{-\rho}, L_2 t^{\rho}\right\}
;q,t\right)
},
\cr
\fNek{L_1}{L_2}3-{\lambda}{\mu}{vQ}qt
&=&
{
\Pi\left(
vQ\,
\left\{t^{-\lambda^\vee}q^{-\rho},L_1 q^{\rho}\right\},
\
\left\{t^{\mu^\vee}q^{\rho}, L_2 q^{-\rho}\right\}
;t^{-1},q^{-1}\right)
\over
\Pi\left(
vQ\,
\left\{q^{-\rho},L_1 q^{\rho}\right\},
\
\left\{q^{\rho}, L_2 q^{-\rho}\right\}
;t^{-1},q^{-1}\right)
},
\end{eqnarray}
By setting $L_1 = L_2 = 0$,
we obtain the six expressions of $\Nek{\lambda}{\mu}Qqt $ in Nekrasov's formula.
This completes the proof of the proposition
in section \ref{sec:PartitionFunctionCS}.
\setcounter{equation}{0} \section*{Appendix B : Formula for the Macdonald Symmetric Function}
\renewcommand{\theequation}{B.\arabic{equation}}\setcounter{equation}{0}
\renewcommand{\thesubsection}{B.\arabic{subsection}}\setcounter{subsection}{0}
Here we recapitulate basic properties of
the Macdonald symmetric function \cite{Mac}.
\subsection{Definition for the Macdonald symmetric function}
Bases of the ring of symmetric functions in an infinite number of variables
$x=(x_1,x_2,\cdots)$
are indexed by Young diagrams,
i.e.\ partitions
$\lambda =(\lambda_1,\lambda_2,\cdots)$,
which are sequences of nonnegative integers such that
$\lambda_{i} \geq \lambda_{i+1}$ and
$|\lambda| = \sum_i \lambda_i < \infty$.
For example, the monomial symmetric function
is defined by
$
m_{\lambda}(x)=\sum_{\sigma}
x_1^{\lambda_{\sigma(1)}}
x_2^{\lambda_{\sigma(2)}}
\cdots ,
$
where the summation is over all distinct permutations of
$(\lambda_1,\lambda_2,\cdots )$.
The power sum symmetric function $p_{\lambda}(x)$ is
defined by
\begin{equation}
p_{\lambda}(x)=
p_{\lambda_1}(x)
p_{\lambda_2}(x)\cdots ,
\qquad
p_n(x)=\sum_{i=1}^{\infty}x_i^n.
\end{equation}
We introduce an inner product on the ring of symmetric functions
in the following manner:
for any symmetric functions $f$ and $g$, regarded as polynomials in the power sums $p_\lambda$,
\begin{equation}
\langle f(p) | g(p)\rangle_{q,t}
:= f(p^*)\, g(p)\,\vert_{{\rm constant\, part}},\qquad
p_n^* := n {1-q^n \over 1-t^n} {\partial \over \partial
p_n}.
\end{equation}
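For instance, since $p_n^*\, p_n = n\,{1-q^n \over 1-t^n}$, the power sums are orthogonal with respect to this inner product:
\begin{equation}
\langle p_\lambda | p_\mu \rangle_{q,t}
=
\delta_{\lambda\mu}\,
z_\lambda
\prod_{i\geq 1} {1-q^{\lambda_i} \over 1-t^{\lambda_i}},
\qquad
z_\lambda := \prod_{i\geq 1} i^{m_i}\, m_i! ,
\end{equation}
where $m_i$ denotes the multiplicity of the part $i$ in $\lambda$.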
The Macdonald symmetric function
$P_{\lambda}=P_{\lambda}(x;q,t)$
is uniquely specified by the following orthogonality and normalization:
\begin{eqnarray}
&&
\langle P_{\lambda} | P_{\mu}\rangle_{q,t} =0\qquad { \rm if } \;
\lambda\neq \mu,\\
%
&&
P_{\lambda}(x;q,t)
=
m_{\lambda}(x) + \sum_{\mu<\lambda} u_{\lambda\mu}m_{\mu}(x),
\quad
u_{\lambda\mu}\in {\mathbb Q}(q,t).
\end{eqnarray}
Here we used the dominance partial ordering on the Young diagrams defined as
$\lambda\geq\mu \Leftrightarrow |\lambda|=|\mu|$ and
$\lambda_1+\cdots+\lambda_i\geq\mu_1+\cdots+\mu_i$ for all
$i$.
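For example, among the partitions of $4$ this gives the total order $(4)>(3,1)>(2,2)>(2,1,1)>(1,1,1,1)$; for larger weights the order is only partial, e.g. $(3,1,1,1)$ and $(2,2,2)$ are incomparable.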
The scalar product is given by
\begin{equation}
\langle P_\lambda|P_\lambda\rangle_{q,t}
=
\prod_{s\in\lambda}
{
1-q^{a (s)+1} t^{ \ell(s) }
\over
1-q^{a (s) } t^{ \ell (s)+1}
},
\end{equation}
which satisfies
\begin{equation}
\langle P_\lambda|P_\lambda\rangle_{q,t}
=
\left({q\over t}\right)^{|\lambda|}
\langle P_\lambda|P_\lambda\rangle_{q^{-1},t^{-1}}
=
\langle P_{\lambda^\vee} |P_{\lambda^\vee} \rangle_{t,q}^{-1}.
\end{equation}
If we define
\begin{equation}
g_\lambda(q,t)
:=
{
v^{|\lambda|}
\over
\langle P_\lambda|P_\lambda\rangle_{q,t}
},
\end{equation}
with $v = (q/t)^{1\over2}$,
then
\begin{equation}
g_\lambda(q,t)=
g_\lambda(q^{-1},t^{-1})=
g_{\lambda^\vee}(t,q)^{-1}.
\end{equation}
The skew Macdonald symmetric function
$P_{\lambda/\mu}(x;q,t)$ is defined by
\begin{equation}
P_{\lambda/\mu}(x;q,t)
:=
g_\mu(q,t)
P_{\mu}^*\left(v^{-1} x;q,t\right) \, P_\lambda(x;q,t),
\end{equation}
where $*$ acts on the power sum as
$p_n^* := n {1-q^n \over 1-t^n} {\partial \over \partial p_n}$.
Finally let $\iota P_{\lambda/\mu}(x;q,t)$ be
the skew Macdonald function with the involution $\iota$
acting on the power sum $p_n$ as
$\iota(p_n) = -p_n$.
Let
$x=(x_1,x_2,\cdots)$ and
$y=(y_1,y_2,\cdots)$
be two sets of variables. Then we have
\begin{equation}
\sum_\mu
P_{\lambda/\mu} (x;q,t)
P_{\mu/\nu} (y;q,t)
=
P_{\lambda/\nu} (x,y;q,t),
\label{eq:AppaddSkewMacdonald}
\end{equation}
where
$P_{\lambda/\nu} (x,y;q,t)$
denotes the skew Macdonald function in the set of variables
\break
$(x_1,x_2,\cdots,y_1,y_2,\cdots)$.
\subsection{Symmetries and Cauchy formulas}
Next, we turn to the basic properties of
the (skew) Macdonald symmetric functions.
The Macdonald function enjoys the symmetries
\begin{equation}
P_{\lambda/\mu} (cx; q,t) =
c^{|\lambda|-|\mu|}
P_{\lambda/\mu} (x; q,t),
\qquad
c\in{\mathbb C} ,
\label{eq:scaleTrans}
\end{equation}
\begin{equation}
P_{\lambda/\mu} \left(x; q^{-1},t^{-1}\right)
=
P_{\lambda/\mu} (x; q,t),
\end{equation}
\begin{equation}
P_{\lambda^\vee/\mu^\vee} (vx; t,q)
=
{g_\lambda(q,t)\over g_\mu(q,t) }
\Endomega qt{} P_{\lambda/\mu} (x; q,t),
\label{eq:skewConjugate}
\qquad
\Endomega qt{} (p_n)
=
(-1)^{n-1}{1-q^n \over 1-t^n} p_n.
\end{equation}
When $t=q$, the Schur function has the extra symmetries
\begin{equation}
s_{\lambda^\vee}(x) = \iota s_\lambda(-x) = (-1)^{|\lambda|}\iota
s_\lambda(x).
\end{equation}
The following Cauchy formulas are especially important:
\begin{eqnarray}
\sum_\lambda
g_\lambda(q,t)
P_\lambda(x;q,t) P_\lambda(y;q,t)
=
\Pi(v x,y)
&:=&
\exp\left\{
\sum_{n>0}{ v^n \over n}{1-t^n \over 1-q^n} p_n(x) p_n(y)
\right\}
\cr
&=&
\prod_{k\geq 0}
\prod_{i,j}
{1-tv x_i y_j q^k
\over
1- v x_i y_j q^k },
\quad |q|<1.
\label{eq:AppCauchy}
\\
\sum_\lambda
P_\lambda(x;q,t) P_{\lambda^\vee} (y;t,q)
=
\Pi_0(x,y)
&:=&
\exp\left\{
\sum_{n>0}{(-1)^{n-1}\over n} p_n(x) p_n(y)
\right\}
\cr
&=&
\prod_{i,j}(1+ x_i y_j).
\label{eq:AppconjugateCauchy}
\end{eqnarray}
The Cauchy formulas for the skew Macdonald function are
\begin{eqnarray}
\sum_\lambda
{g_\lambda(q,t)\over g_\mu(q,t) }
P_{\lambda/\mu} (x;q,t) P_{\lambda/\nu} (y;q,t)
&=&
\Pi(v x,y)
\sum_\lambda
P_{\mu/\lambda} (y;q,t) P_{\nu/\lambda} (x;q,t)
{g_\nu(q,t)\over g_\lambda(q,t)} ,
\cr
\sum_\lambda
P_{\lambda/\mu} (x;q,t) P_{\lambda^\vee/\nu^\vee} (y;t,q)
&=&
\Pi_0(x,y)
\sum_\lambda
P_{\mu^\vee/\lambda^\vee} (y;t,q) P_{\nu/\lambda} (x;q,t).
\label{eq:AppskewCauchy}
\end{eqnarray}
If we denote by $\Endomega qtx$
the endomorphism $\Endomega qt{}$ on variables $x$, then
\begin{equation}
\Pi(v x,y;q,t)
=
\Pi(v^{-1} x,y;q^{-1},t^{-1})
=
\Endomega tqx \Endomega tqy\Pi(v^{-1} x,y;t,q).
\end{equation}
\subsection{Specialization formulas}
We denote
\begin{eqnarray}
p_n(c q^\lambda t^\rho)
&:=&
c^n\sum_{i=1}^\infty (q^{n\lambda_i}-1)t^{n({1\over2}-i)}
+ { c^n \over t^{n\over 2} - t^{-{n\over 2}} },
\qquad
c\in{\mathbb C},
\cr
p_n(c q^\lambda t^\rho, c L t^{-\rho} )
&:=&
p_n(c q^\lambda t^\rho) + p_n(c L t^{-\rho} ),
\cr
&=&
c^n\sum_{i=1}^\infty (q^{n\lambda_i}-1)t^{n({1\over2}-i)}
+ c^n {1 - L^n \over t^{n\over 2} - t^{-{n\over 2}} },
\qquad
c, L\in{\mathbb C}.
\end{eqnarray}
Then by using (\ref{eq:skewConjugate}) and (\ref{eq:ACsumP}), we obtain
\begin{equation}
P_{\mu^\vee/\nu^\vee}
\left(-t^{\lambda^\vee}q^{\rho},\ -L q^{-\rho};t,q\right)
=
{g_\mu(q,t)\over g_\nu(q,t)}
P_{\mu /\nu }
\left(q^{-\lambda}t^{-\rho},\ L t^{\rho};q,t\right).
\label{eq:ACMac}
\end{equation}
The Macdonald function evaluated at the power sums
$p_n = (1-L^n)/(t^{{n\over 2}}- t^{-{n\over 2}})$ is given by
\cite{Mac}(Ch.\ VI.6)
\begin{equation}
P_\lambda\left(t^\rho, L t^{-\rho};q,t\right)
=
\prod_{s\in\lambda}
(-1)t^{1\over2} q^{a'(s)}
{
1-L q^{-a'(s)} t^{\ell'(s)}
\over
1-q^{a(s)} t^{\ell(s)+1}
},
\label{eq:Specialization}
\end{equation}
for a generic $L\in{\mathbb C}$.
By replacing $(q,t)$ and $\lambda$
with $(t,q)$ and $\lambda^\vee$, respectively,
\begin{equation}
P_{\lambda^\vee}\left(q^\rho, L q^{-\rho};t,q\right)
=
\prod_{s\in\lambda} q^{-{1\over2}} q^{-a'(s)}
{
1-L q^{a'(s)} t^{-\ell'(s)}
\over
1-q^{-a(s)-1} t^{-\ell(s)}
}.
\end{equation}
Then we have
\begin{equation}
g_\lambda(q,t)
{
P_\lambda(t^\rho,L_1 t^{-\rho};q,t)
\over
P_{\lambda^\vee}(q^\rho,L_2 q^{-\rho};t,q)}
=
\prod_{s\in\lambda}q^{a'(s)}t^{-\ell'(s)}
{
1-L_1 q^{-a'(s)}t^{\ell'(s)}
\over
1-L_2 q^{a'(s)}t^{-\ell'(s)}},
\qquad L_1, L_2\in{\mathbb C}.
\label{eq:DualFormula}
\end{equation}
Note that
\begin{equation}
\iota P_\lambda\left(t^{-\rho}, L t^{\rho};q,t\right)
=
P_\lambda\left(t^{\rho}, L t^{-\rho};q,t\right),
\qquad L\in{\mathbb C}.
\end{equation}
If $L=t^{-N}$ with $N\in{\mathbb N}$, then
$
p_n(q^\lambda t^\rho, t^{-N-\rho} )
= \sum_{i=1}^N q^{n\lambda_i} t^{n({1\over2}-i)}
$
is the power sum symmetric polynomial in $N$ variables
$\{q^{\lambda_i}t^{{1\over2}-i}\}_{1\leq i \leq N}$,
hence
$
P_\lambda\left(t^\rho, t^{-N-\rho};q,t\right)
$
reduces to the Macdonald symmetric polynomial in $N$ variables.
Therefore
\begin{equation}
P_\lambda\left(t^\rho, t^{-N-\rho};q,t\right)
=
0,
\qquad
{\rm for}
\quad
\ell(\lambda) > N\in{\mathbb N}.
\end{equation}
Note that
\begin{equation}
{\cal W}_{\lambda,\mu} (q,t):=
P_\lambda\left( t^\rho,t^{-N-\rho};q,t\right)
P_\mu \left(q^\lambda t^\rho,t^{-N-\rho};q,t\right),
\qquad
N\in{\mathbb N},
\end{equation}
has a nice symmetry \cite{Mac}(Ch.\ VI.6):
\begin{equation}
{\cal W}_{\lambda,\mu} (q,t)
= {\cal W}_{\mu,\lambda} (q,t).
\label{eq:Wsymm}
\end{equation}
When $L=0$ (the case of principal specialization),
\begin{eqnarray}
P_\lambda\left(t^{\rho}; q,t\right)
\prod_{s\in\lambda} (-1)q^{-a(s)} t^{\ell(s)}
&=&
P_\lambda\left(t^{-\rho}; q,t\right)
=
P_{\lambda^\vee}\left(-q^{\rho}; t,q\right)
/g_\lambda(q,t)
\cr
&=&
\iota P_\lambda\left(t^{\rho}; q,t\right)
=
\iota P_{\lambda^\vee}\left(-q^{-\rho}; t,q\right)
/g_\lambda(q,t).
\label{eq:ACprincipalMac}
\end{eqnarray}
\section*{Appendix C : Refined BPS State Counting}
\renewcommand{\theequation}{C.\arabic{equation}}\setcounter{equation}{0}
\renewcommand{\thesubsection}{C.\arabic{subsection}}\setcounter{subsection}{0}
From the instanton expansion of Nekrasov's partition function,
\begin{equation}
Z_{Nek} = 1 + \sum_{k=1}^\infty \Lambda^k \Zk ktq{Q_\alpha}~,
\end{equation}
we can compute the refined Gopakumar-Vafa integer invariant $N_\beta^{(j_L,j_R)}$
as follows. We expect the following multicover structure of the partition function
\begin{equation}
Z_{Nek} = \exp \left( \sum_{n=1}^\infty \frac{\GV{}{}{t^n}{q^n}{Q_\alpha^n, Q_B^n}}{n} \right)~,
\end{equation}
from the Gopakumar-Vafa-type argument.
Assuming the scale parameter $\Lambda$ is proportional to the K\"ahler
parameter $Q_B$ of the base space ${\bf P}^1$ of the ALE fibration, we expand
\begin{equation}
\GV{}{}tq{Q_\alpha, Q_B} = \sum_{k=1}^\infty Q_B^k \GV k{}tq{Q_\alpha}~, \label{base-expansion}
\end{equation}
where
\begin{equation}
\GV k{}tq{Q_\alpha} = \sum_{\{\ell_\alpha\}} \sum_{(j_L,j_R)}
\frac{N_{k, \{\ell_\alpha\}}^{(j_L, j_R)}}{(q^{1/2} - q^{-1/2})(t^{1/2} - t^{-1/2})}
\chi_{j_L} (u) \chi_{j_R}(v) \prod_{\alpha=1}^{N-1} Q_\alpha^{\ell_\alpha}~, \label{fiber-expansion}
\end{equation}
and $\chi_j(x)$ is the irreducible character of $SU(2)$ with spin $j$.
We have introduced the notations $u^2 = t\cdot q$ and $v^2 = q/t$.
Comparing the coefficients of $\Lambda^k \sim Q_B^k$, up to $k=4$ we obtain
\begin{eqnarray}
\GV 1{}tq{Q_\alpha} &=& \Zk 1tq{Q_\alpha}, \nonumber \\
\GV 2{}tq{Q_\alpha} &=& \Zk 2tq{Q_\alpha} - \frac{1}{2} \left( \Zk 1tq{Q_\alpha} \right)^2
-\frac{1}{2} \Zk 1{t^2}{q^2}{Q_\alpha^2}, \nonumber \\
\GV 3{}tq{Q_\alpha} &=& \Zk 3tq{Q_\alpha} -\Zk 2tq{Q_\alpha} \Zk 1tq{Q_\alpha}
+ \frac{1}{3} \left( \Zk 1tq{Q_\alpha} \right)^3 -\frac{1}{3} \Zk 1{t^3}{q^3}{Q_\alpha^3}, \nonumber \\
\GV 4{}tq{Q_\alpha} &=& \Zk 4tq{Q_\alpha} - \Zk 3tq{Q_\alpha} \Zk 1tq{Q_\alpha}
-\frac{1}{2} \left( \Zk 2tq{Q_\alpha} \right)^2 \nonumber \\
& &~~~+ \Zk 2tq{Q_\alpha} \left( \Zk 1tq{Q_\alpha} \right)^2
-\frac{1}{4} \left( \Zk 1tq{Q_\alpha} \right)^4 -\frac{1}{2} \Zk 2{t^2}{q^2}{Q_\alpha^2} \nonumber \\
& &~~~~~+ \frac{1}{4} \left( \Zk 1{t^2}{q^2}{Q_\alpha^2} \right)^2.
\end{eqnarray}
There is a cancellation of $\Zk 1{t^4}{q^4}{Q_\alpha^4}$ in the computation of $\GV 4{}tq{Q_\alpha}$.
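These inversion formulas can be cross-checked symbolically for arbitrary coefficient functions. In the following minimal sympy sketch (ours, for illustration only), the placeholder functions \texttt{f1}, \texttt{f2}, \texttt{f3} stand for the true $\GV k{}tq{Q_\alpha}$'s; the partition function is built from the multicover structure and the relations are confirmed up to $k=3$:
\begin{verbatim}
import sympy as sp

t, q, Q = sp.symbols('t q Q')
f1, f2, f3 = (sp.Function(n) for n in ('f1', 'f2', 'f3'))

# F(t,q,Q) truncated at Q^3; f_k play the role of the true F_k(t,q)
def F(tt, qq, QQ):
    return QQ*f1(tt, qq) + QQ**2*f2(tt, qq) + QQ**3*f3(tt, qq)

# Z = exp(sum_n F(t^n,q^n,Q^n)/n); n > 3 and S^4/24 start at order Q^4
S = sum(F(t**n, q**n, Q**n)/n for n in (1, 2, 3))
Z = sp.expand(1 + S + S**2/2 + S**3/6)
Zk = [Z.coeff(Q, k) for k in range(4)]          # Z_0 = 1, Z_1, Z_2, Z_3

def scale(expr, n):                             # (t, q) -> (t^n, q^n)
    return expr.subs({t: t**n, q: q**n}, simultaneous=True)

F2 = Zk[2] - Zk[1]**2/2 - scale(Zk[1], 2)/2
F3 = Zk[3] - Zk[2]*Zk[1] + Zk[1]**3/3 - scale(Zk[1], 3)/3

assert sp.simplify(F2 - f2(t, q)) == 0
assert sp.simplify(F3 - f3(t, q)) == 0
\end{verbatim}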
In \cite{AK} we reported some results for $SU(2)$ theory with no Chern-Simons coupling.
This corresponds to the refined GV invariants for the local Hirzebruch surface ${\bf F}_0$.
For $SU(2)$ theory the expansion at instanton number $k$ becomes
\begin{equation}
\GV k{}tq{Q_F} = \sum_{n=0}^\infty \sum_{(j_L,j_R)}
\frac{N_{kB + nF}^{(j_L, j_R)}}{(q^{1/2} - q^{-1/2})(t^{1/2} - t^{-1/2})}
\chi_{j_L} (u) \chi_{j_R}(v) v^{2k} Q_F^{k+n}~,
\end{equation}
where $Q_F$ is the K\"ahler parameter of the fiber ${\bf P}^1$.
The analysis of the symmetry of Nekrasov's partition function made in section 2
instructs us to factor out $v^{2k} Q_F^k$ in computing $N_{kB + nF}^{(j_L, j_R)}$.
Our results are
\begin{equation}
N_{B + n F}^{(j_L, j_R)} = \delta_{j_L, 0} \delta_{j_R, n+\frac{1}{2}}~,
\end{equation}
for one instanton and
\begin{equation}
\bigoplus_{(j_L, j_R)} N_{2B + nF}^{(j_L, j_R)} \left( j_L, j_R \right) =
\bigoplus_{\ell=1}^n \bigoplus_{m=1}^{n-\ell+1} \left[\frac{m+1}{2} \right]
\left( \frac{\ell -1}{2}, \frac{3\ell +2m}{2}\right)~, \label{2inst-spin}
\end{equation}
for two instantons.
We have computed the invariants of $SU(2)$ theory with the Chern-Simons coupling
$m=1,2$, which are expected to give the refined GV invariants for local ${\bf F}_1$
and ${\bf F}_2$. It has been known that the GV invariants of
${\bf F}_0$ and ${\bf F}_2$ are simply related by a \lq\lq shift\rq\rq\ of the K\"ahler
parameters. We have found that this relation survives for the refined GV invariants up to
instanton number $3$. To describe the result neatly,
let $\GV k{(m)}tq{Q_F}$ be the coefficients of the
instanton expansion \eqref{base-expansion} for local ${\bf F}_m$. Then what we
have checked is
\begin{equation}
\GV k{{(2)}}tq{Q_F}
= Q_F^k \cdot
\GV k{{(0)}}tq{Q_F}, \quad (1\leq k \leq 3)~,
\end{equation}
which implies $N_{kB + nF}^{(j_L, j_R)}$ for local ${\bf F}_0$ is the same as
$N_{kB + (n+k) F}^{(j_L, j_R)}$ for local ${\bf F}_2$.
We would like to stress that this is a somewhat surprising result, since the refined
GV invariants are not BPS protected quantities and they may jump under the
deformation of complex structures\footnote{However, for local CY the deformation
of complex structure may not be well-defined, because of noncompactness of the
total space.}. For the GV invariants which are BPS protected, the
agreement of the invariants may be explained by the fact that ${\bf F}_2$ is obtained
from ${\bf F}_0$ by a deformation of complex structure\footnote{We thank
Y. Konishi and S. Minabe for discussion on this issue.}. However, for BPS nonprotected quantities
it is not certain if the same argument applies. In any case
what we have found supports the expectation
that on noncompact Calabi-Yau manifolds the refined GV invariants are
actually invariant under complex structure deformations, as pointed
out in \cite{HIV}.
For local ${\bf F}_1$ the invariants are qualitatively different from local ${\bf F}_0$
at one instanton. We have
\begin{equation}
N_{B + n F}^{(j_L, j_R)} = \delta_{j_L, 0} \delta_{j_R, n}~.
\end{equation}
For ${\bf F}_0$ and ${\bf F}_2$ the right spin $j_R$ at one instanton is always half-integer,
while for ${\bf F}_1$ it is integer. However, at two instantons our computation shows
that the refined GV invariants of ${\bf F}_1$
are related to ${\bf F}_0$ quite similarly to the relation between ${\bf F}_0$ and ${\bf F}_2$.
We have checked that
\begin{equation}
\GV {2k}{(1)}tq{Q_F} = Q_F^k \cdot \GV {2k}{(0)}tq{Q_F}, \quad (k=1)~. \label{F0F1}
\end{equation}
It has been pointed out that at even instanton
numbers the GV invariants of local ${\bf F}_0$ and local ${\bf F}_1$ are
expected to be related \cite{IK-P1}.
It is tempting to conjecture that the above relation is valid for any $k$.
For general values of the Chern-Simons coupling, our preliminary computation
shows that the refined invariants have no simple relation to those of
local ${\bf F}_{0,1,2}$. Even worse, the structure of the $Spin(4)$ characters seems to be
lost in this region. This may be related to the fact that the five-dimensional theory
is physically not well-defined for these Chern-Simons couplings.
For the $SU(3)$ case the computation of the refined invariants gets more involved.
The corresponding local toric Calabi-Yau geometry is the ALE fibration
of $A_2$ type over ${\bf P}^1$ and we have two K\"ahler parameters $Q_1 := e^{-t_{F_1}}$
and $Q_2 := e^{-t_{F_2}}$ for the fibers. The instanton expansion takes the form
\begin{equation}
\GV k{}tq{Q_i} = \sum_{n_1, n_2=0}^\infty \sum_{(j_L,j_R)}
\frac{N_{\beta(n_1,n_2)}^{(j_L, j_R)}}{(q^{1/2} - q^{-1/2})(t^{1/2} - t^{-1/2})}
\chi_{j_L} (u) \chi_{j_R}(v) v^{3k} Q_1^{k+n_1} Q_2^{k+n_2}~,
\end{equation}
where $\beta(n_1, n_2):=kB + n_1F_1+ n_2F_2$ represents the two-cycle wrapping $k$ times
around the base space. The analysis of the symmetry of Nekrasov's partition function
made in section 2 instructs us to factor out $v^{3k} (Q_1Q_2)^k$. At one instanton
we found that the spin contents for the homology class $B + n_1F_1+ n_2F_2$ are
\begin{equation}
(0, n_{\max}) \oplus (0, n_{\max}-1) \oplus \cdots \oplus (0, |n_1 - n_2|)~,
\end{equation}
where $n_{\max} := \max (n_1, n_2)$.
We note that the left spin always vanishes at one instanton.
When $n_1=0$ or $n_2=0$ the geometry reduces to local ${\bf F}_1$ and the
above result is consistent with the refined GV invariants of local ${\bf F}_1$.
At two instantons, since we cannot find any simple rule for the refined GV invariants,
we present a short list of our computations. When $n_1=0$ or $n_2=0$,
the result is again consistent with \eqref{2inst-spin} in view of the relation \eqref{F0F1}.
\vskip10mm
\begin{tabular}{| c | l |}
\hline
\rule[-12pt]{0pt}{32pt}
$(n_1, n_2)$
&~
spin contents \\
\hline
\kern75pt{}
&~\\
$ (1,0), (0,1), (1,1)$
&~
$\emptyset$ \\
&~\\
$ (2,0), (0,2)$
&~
$(0, \frac{5}{2})$ \\
&~\\
$ (2,1), (1,2)$
&~ $(0, \frac{5}{2}) \oplus (0,\frac{3}{2})$ \\
&~\\
$ (3,0), (0,3)$
&~
$(\frac{1}{2}, 4) \oplus (0, \frac{7}{2}) \oplus (0,\frac{5}{2})$ \\
&~\\
$ (2,2)$
&~
$(0, \frac{7}{2}) \oplus 2 (0,\frac{5}{2})
\oplus 2 (0, \frac{3}{2}) \oplus 2 (0,\frac{1}{2})$ \\
&~\\
$ (3,1), (1,3)$
&~
$(\frac{1}{2}, 4) \oplus (\frac{1}{2}, 3)
\oplus 2 (0, \frac{7}{2}) \oplus 3 (0,\frac{5}{2}) \oplus (0,\frac{3}{2})$ \\
&~\\
$ (4,0), (0,4)$
&~
$(1,\frac{11}{2})
\oplus (\frac{1}{2}, 5) \oplus (\frac{1}{2}, 4)
\oplus 2 (0, \frac{9}{2}) \oplus (0,\frac{7}{2}) \oplus (0,\frac{5}{2})$ \\
&~\\
$ (3,2), (2,3)$
&~
$(\frac{1}{2}, 4) \oplus (\frac{1}{2}, 3) \oplus (\frac{1}{2}, 2)$ \\
&~
$\oplus (0, \frac{9}{2}) \oplus 3 (0,\frac{7}{2}) \oplus 5 (0, \frac{5}{2})
\oplus 4 (0, \frac{3}{2}) \oplus 2 (0,\frac{1}{2})$ \\
&~\\
$ (4,1), (1,4)$
&~
$(1, \frac{11}{2}) \oplus (1, \frac{9}{2})
\oplus (\frac{1}{2}, 5) \oplus 3 (\frac{1}{2}, 4) \oplus (\frac{1}{2}, 3)$ \\
&~
$\oplus 3 (0, \frac{9}{2}) \oplus 5 (0,\frac{7}{2}) \oplus 3 (0, \frac{5}{2}) \oplus (0,\frac{3}{2})$ \\
&~\\
$ (5,0), (0,5)$
&~
$(\frac{3}{2}, 7) \oplus (1, \frac{13}{2}) \oplus (1, \frac{11}{2})
\oplus 2 (\frac{1}{2}, 6) \oplus (\frac{1}{2}, 5) \oplus (\frac{1}{2}, 4)$ \\
&~
$\oplus 2 (0, \frac{11}{2}) \oplus 2 (0, \frac{9}{2}) \oplus (0, \frac{7}{2}) \oplus (0, \frac{5}{2})$ \\
&~\\
$ (3,3)$
&~
$(\frac{1}{2}, 5) \oplus 2 (\frac{1}{2}, 4) \oplus 2 (\frac{1}{2}, 3)
\oplus 2 (\frac{1}{2}, 2) \oplus 2 (\frac{1}{2}, 1)$ \\
&~
$\oplus (0, \frac{11}{2}) \oplus 3 (0,\frac{9}{2}) \oplus 6 (0, \frac{7}{2})
\oplus 8 (0, \frac{5}{2}) \oplus 8 (0,\frac{3}{2}) \oplus 6 (0, \frac{1}{2})$ \\
&~\\
\hline
\end{tabular}
\newpage
\setcounter{equation}{0} \section*{Appendix D : $q$-Dunkl Operator Realization for the Refined Topological Vertex}
\renewcommand{\theequation}{D.\arabic{equation}}\setcounter{equation}{0}
\renewcommand{\thesubsection}{D.\arabic{subsection}}\setcounter{subsection}{0}
In this appendix,
we use the Macdonald polynomials $P^N_\lambda(x;q,t)$ in a finite number of variables
$x=(x_1,x_2,\cdots, x_N)$,
setting $x_{N+1}=x_{N+2}=\cdots=0$.
Here we assume that
$|q|$, $|t| > 1 $,
and define the following refined topological vertex (without framing factor)
\begin{eqnarray}
V_{\mu \lambda}{}^\nu
&:=&
\lim_{N\rightarrow \infty}
\sum_\sigma\iota P^N_{\mu ^\vee /\sigma^\vee}(-t^{\lambda^\vee} q^{\rho};t,q) \
P^N_{\nu /\sigma}(q^{\lambda} t^{\rho};q,t)
P^N_\lambda(t^\rho;q,t)
v^{|\sigma|},
\cr
V_{\mu }{}^{\lambda\nu}
&:=&
\lim_{N\rightarrow \infty}
\sum_\sigma
P^N_{\mu ^\vee/\sigma^\vee}(-t^{\lambda^\vee} q^{\rho};t,q) \
\iota P^N_{\nu /\sigma}(q^{\lambda} t^{\rho};q,t)
P^N_{\lambda^\vee}(-q^{\rho};t,q)
v^{|\sigma|}
=\iota V_{\mu \lambda}{}^\nu .
\label{eq:appRTV}
\end{eqnarray}
These also reproduce Nekrasov's partition function.
Let $Y_i$ ($i=1,\cdots,N$)
be the $q$-Dunkl operators
\cite{rf:Cherednik}
\cite{rf:KirillovNoumi}
acting on the variables
$x_i$ ($i=1,\cdots,N$):
\begin{eqnarray}
Y_i(x) &=& t^{-{N\over 2}}
T_i T_{i+1} \cdots T_{N-1}
\omega
T_1^{-1} \cdots T_{i-1}^{-1},
\cr
T_i &=& t^{1\over2} + t^{-{1\over2}} {1 - t x_i/x_{i+1} \over 1 -
x_i/x_{i+1}} (s_i - 1),
\end{eqnarray}
where
\begin{equation}
s_i = (i,i+1),
\qquad
\omega = \tau_N s_{N-1}\cdots s_1,
\qquad
\tau_N(x_i) = q^{\delta_{i,N}} x_i .
\end{equation}
They commute with each other,
\begin{equation}
[Y_i(x),Y_j(x)]=0,
\end{equation}
and the Macdonald polynomials are
eigenfunctions of any operator $f$ symmetric in them:
\begin{equation}
f(Y_1(x),\cdots,Y_N(x)) P^N_\lambda(x;q,t)
= f(q^{\lambda_1} t^{{1\over2} -1},\cdots,q^{\lambda_N} t^{{1\over2} -N})
P^N_\lambda(x;q,t).
\end{equation}
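For example, taking $f$ to be the first power sum gives
\begin{equation}
\sum_{i=1}^N Y_i(x)\, P^N_\lambda(x;q,t)
=
\Big(\sum_{i=1}^N q^{\lambda_i} t^{{1\over2}-i}\Big) P^N_\lambda(x;q,t).
\end{equation}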
Let $\widetilde Y_i(x)$ be the dual $q$-Dunkl operator
which is given by replacing $q$ with $t$ in $Y_i(x)$,
i.e.
\begin{equation}
f(\widetilde Y_1(x),\cdots,\widetilde Y_N(x)) P^N_\lambda(x;t,q)
= f(t^{\lambda_1} q^{{1\over2} -1},\cdots,t^{\lambda_N} q^{{1\over2} -N})
P^N_\lambda(x;t,q).
\end{equation}
Note that
$Y_i(x)$ and $\widetilde Y_i(x)$
may not commute with each other.
Using these (dual) $q$-Dunkl operators,
our vertices in (\ref{eq:appRTV})
are written as follows:
\begin{eqnarray}
V_{\mu\lambda}{}^\nu
&=&
\lim_{N\rightarrow \infty} \sum_\sigma
v^{|\sigma|}
P^N_{\nu /\sigma}(-Y(x);q,t) \
\Endomega qt{}
\left(\iota P^N_{\mu^\vee /\sigma^\vee}(\widetilde Y(x);t,q)
P^N_{\lambda^\vee}(x;t,q)\right)
\vert_{x=t^{\rho}},
\cr
V_{\mu}{}^{\lambda\nu}
&=&
\lim_{N\rightarrow \infty} \sum_\sigma
v^{|\sigma|}
P^N_{\mu^\vee /\sigma^\vee}(-\widetilde Y(x);t,q) \
\Endomega qt{}
\left(\iota P^N_{\nu /\sigma}(Y(x);q,t)
P^N_{\lambda}(-x;q,t)\right)
\vert_{x=q^{\rho}}.
\end{eqnarray}
Here $\Endomega qt{}$ is the involution in
(\ref{eq:skewConjugate}).
Therefore the summation over the Young diagrams in Nekrasov's formula
can be performed formally by using these $q$-Dunkl operators.
For example, the $SU(2)$ partition function
in (\ref{eq:suiiZ}) is
\begin{eqnarray}
\tZm{}{}
&=&
\lim_{N\rightarrow \infty}
\Pi_0\left(-Q_1 Y(x),\widetilde Y(z)\right)^{-1} \
\Pi_0\left(-Q_2 Y(w),\widetilde Y(y)\right)^{-1}
\cr
&&\qquad
\times
\Pi_0(-\Lambda x,y) \
\Pi_0(-\Lambda z,w)
\vert_{x=z=t^{\rho}, y=w=q^{\rho}}.
\end{eqnarray}
In the $SU({N_c})$ case,
let
\begin{eqnarray}
D_0
&:=&
\prod_{\a =1}^{N_c} \Pi_0\left(-\Lambda x^\a ,y^\a \right),
\cr
D_\a
&:=&
\prod_{\b =\a +1}^{N_c}
\Pi_0\left(-Q_{\a ,\b } Y(x^\a ),\widetilde Y(x^\b )\right)^{-1}
\Pi_0\left(-Q_{\b ,\a } Y(y^\b ),\widetilde Y(y^\a )\right)^{-1},
\quad
0<\a <{N_c},~~~~~~
\end{eqnarray}
and
$
D'_\a := D_\a
\Endomega tq{x^\a }\Endomega qt{y^\a }
$,
then the $SU({N_c})$ partition function in (\ref{eq:suNZ}) with $m=0$ is
\begin{eqnarray}
\tZm{}{0}
&=&
\sum_{\Ya\lambda \a ,\Ya\mu \a ,\Ya\nu \a }
V_{\bullet \Ya\lambda 1}{}^{\Ya\mu 1}
V_{\Ya\mu 1 \Ya\lambda 2}{}^{\Ya\mu 2}
\cdots
V_{\Ya\mu {{N_c}-2} \Ya\lambda {{N_c}-1}}{}^{\Ya\mu {{N_c}-1}}
V_{\Ya\mu {{N_c}-1} \Ya\lambda {{N_c} }}{}^{\bullet}
\cr
&&\hskip24pt\times
V_{\bullet }{}^{\Ya\lambda {{N_c} }\Ya\nu {{N_c}-1}}
V_{\Ya\nu {{N_c}-1}}{}^{\Ya\lambda {{N_c}-1}\Ya\nu {{N_c}-2}}
\cdots
V_{\Ya\nu 2}{}^{\Ya\lambda 2\Ya\nu 1}
V_{\Ya\nu 1}{}^{\Ya\lambda 1\bullet}
\cr
&&\hskip24pt\times
\prod_{\a =1}^{N_c}
Q_{B^\a }^{|\Ya\lambda \a |}
\prod_{\a =1}^{{N_c}-1}
v^{-|\Ya\mu \a |-|\Ya\nu \a |}
Q_{\a ,\a +1}^{|\Ya\mu \a |}
Q_{\a +1,\a }^{|\Ya\nu \a |}
\cr
&=&
D'_{{N_c}-1}\cdots
D'_2
D_1 D_0
\vert_{x^\a =t^{\rho}, y^\a =q^{\rho}}.
\end{eqnarray}
Since
$
\Endomega tqx\Endomega qty
\Pi_0(x,y)
=
\Pi_0(x,y)
$,
we have the following $q$-Dunkl operator realization for Nekrasov's formula
\begin{equation}
\tZm{}{0}
= D_{{N_c}-1} \cdots D_2 D_1 D_0
\vert_{x^\a =t^{\rho}, y^\a =q^{\rho}}.
\end{equation}
\setcounter{equation}{0} \section*{Appendix E : Notations and identities for Partitions}
\renewcommand{\theequation}{E.\arabic{equation}}\setcounter{equation}{0}
\renewcommand{\thesubsection}{E.\arabic{subsection}}\setcounter{subsection}{0}
For each square $s=(i,j)$ in the Young diagram of
a partition $\lambda = (\lambda_1,\lambda_2,\cdots)$, we define
\begin{equation}
a_\lambda(s) := \lambda_i - j, \quad \ell_\lambda(s) := \lambda_j^\vee - i,
\quad a'(s) := j -1, \quad \ell'(s) := i -1~,
\end{equation}
where $\lambda_j^\vee$ denotes the conjugate (dual) diagram.
They are called arm length, leg length, arm colength and leg colength,
respectively.
The hook length $h_\lambda(s)$ and the content $c(s)$ at $s$ are given by
\begin{equation}
h_\lambda(s) := a_\lambda(s) + \ell_\lambda(s) + 1~, \quad
c(s) := a'(s) - \ell'(s)~.
\end{equation}
The weight $|\lambda |$ and $||\lambda ||^2$ are
\begin{equation}
|\lambda |:=\sum_i \lambda_i,
\qquad
||\lambda ||^2:=\sum_i \lambda_i{}^2 =2 \sum_{s\in\lambda}(a(s)+{1\over2}).
\end{equation}
We also need the following integer
\begin{equation}
n(\lambda) := \sum_{s \in \lambda} \ell'(s) = \sum_{i=1}^\infty (i-1) \lambda_i
= \frac{1}{2} \sum_{i=1}^\infty \lambda_i^\vee ( \lambda_i^\vee -1)
= \sum_{s \in \lambda} \ell_\lambda(s)~.
\end{equation}
Similarly, we have
\begin{equation}
n(\lambda^\vee) := \sum_{s \in \lambda} a'(s) = \sum_{s \in \lambda} a_\lambda(s)~.
\end{equation}
They are related to the integer $\kappa(\lambda)$ as follows:
\begin{equation}
\kappa(\lambda) := 2 \sum_{s \in \lambda} (j-i) = 2(n(\lambda^\vee) - n(\lambda))
= |\lambda| + \sum_{i=1}^\infty \lambda_i (\lambda_i -2i)~.
\end{equation}
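These statistics are straightforward to generate on a computer; the following plain-Python sketch (ours, for illustration only) computes them for a sample partition and verifies the identities $n(\lambda)=\sum_s \ell_\lambda(s)$, $n(\lambda^\vee)=\sum_s a_\lambda(s)$ and $\kappa(\lambda)=2(n(\lambda^\vee)-n(\lambda))$:
\begin{verbatim}
def conjugate(lam):
    """Conjugate (dual) partition lam^vee."""
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def cells(lam):
    """Squares s = (i, j) of the Young diagram, 1-indexed."""
    return [(i, j) for i, li in enumerate(lam, 1)
                   for j in range(1, li + 1)]

lam = [4, 2, 1]
lv = conjugate(lam)                                  # [3, 2, 1, 1]
arm = {s: lam[s[0] - 1] - s[1] for s in cells(lam)}  # a_lam(s)
leg = {s: lv[s[1] - 1] - s[0] for s in cells(lam)}   # l_lam(s)
n_lam  = sum(i - 1 for (i, j) in cells(lam))         # n(lam)
n_conj = sum(j - 1 for (i, j) in cells(lam))         # n(lam^vee)
kappa  = 2 * sum(j - i for (i, j) in cells(lam))     # kappa(lam)

assert n_lam  == sum(leg.values())                   # n(lam)     = sum of legs
assert n_conj == sum(arm.values())                   # n(lam^vee) = sum of arms
assert kappa  == 2 * (n_conj - n_lam)
\end{verbatim}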
Note that, since
$
\{\lambda_i-j\}_{j=1}^{\lambda_i}
=
\{j-1\}_{j=1}^{\lambda_i},
$
\begin{equation}
\sum_{(i,j)\in\lambda}
f(\lambda_i-j , \ i)
=
\sum_{(i,j)\in\lambda}
f(j-1 ,\ i),
\label{eq:appsetFormulaI}
\end{equation}
for any function $f$.
Also since
\begin{equation}
\{\lambda_i-j\}_{j=1}^{\mu_i}
\cup
\{-\mu_i+j-1\}_{j=1}^{\lambda_i}
=
\{j\}_{j=-\mu_i}^{\lambda_i-1}
=
\{\lambda_i-j\}_{j=1}^{\lambda_i}
\cup
\{-\mu_i+j-1\}_{j=1}^{\mu_i},
\end{equation}
we have
\begin{equation}
\sum_{(i,j)\in\mu}(\lambda_i-j)-
\sum_{(i,j)\in\lambda}(\mu_i-j+1)
=
\sum_{(i,j)\in\lambda}(\lambda_i-j)-
\sum_{(i,j)\in\mu}(\mu_i-j+1).
\label{eq:appsetFormulaII}
\end{equation}
We list their relations:
$$
\begin{array}{c c c c c c c}
\hline
\rule[-17pt]{0pt}{42pt}
|\lambda|,||\lambda||^2, n, \kappa
&&
\sum_i
&&
\sum_{(i,j)\in\lambda}
&&
\sum_{s\in\lambda}
\cr
\hline
&&&&&&\cr
|\lambda|
&=&
\sum_i \lambda_i
&=&
\sum_{(i,j)\in\lambda} 1
&=&
\sum_{s\in\lambda} 1,
\cr
||&&&&&&\cr
|\lambda^\vee|
&=&
\sum_j \lambda^\vee_j.
&&
&&\cr
&&&&&&\cr
{1\over2}||\lambda||^2
&=&
{1\over2}\sum_i {\lambda_i}^2
&=&
\sum_{(i,j)\in\lambda} (\lambda_i-j+{1\over2})
&=&
\sum_{s\in\lambda} \left(a(s)+{1\over2}\right).
\cr
&&&&&&\cr
{1\over2}||\lambda^\vee||^2
&=&
{1\over2}\sum_j \lambda_j^{\vee 2}
&=&
\sum_{(i,j)\in\lambda} (\lambda^\vee_j-i+{1\over2})
&=&
\sum_{s\in\lambda} \left(\ell(s)+{1\over2}\right),
\cr
&&&&&&\cr
\hline
&&&&&&\cr
n(\lambda)
&=&
\sum_i (i-1){\lambda_i}
&=&
\sum_{(i,j)\in\lambda}(i-1)
&=&
\sum_{s\in\lambda} \ell'(s),
\cr
||&&&&&&\cr
{1\over2}( || \lambda^\vee||^2- |\lambda^\vee|)
&=&
{1\over2}\sum_j \lambda^\vee_j(\lambda^\vee_j -1)
&=&
\sum_{(i,j)\in\lambda}( \lambda^\vee_j-i)
&=&
\sum_{s\in\lambda} \ell(s).
\cr
&&&&&&\cr
n(\lambda^\vee)
&=&
\sum_j (j-1){\lambda^\vee_j}
&=&
\sum_{(i,j)\in\lambda}(j-1)
&=&
\sum_{s\in\lambda} a'(s),
\cr
||&&&&&&\cr
{1\over2}( || \lambda||^2- |\lambda|)
&=&
{1\over2}\sum_i \lambda_i (\lambda_i -1)
&=&
\sum_{(i,j)\in\lambda}( \lambda_i-j)
&=&
\sum_{s\in\lambda} a(s).
\cr
&&&&&&\cr
\hline
&&&&&&\cr
{1\over2}\kappa(\lambda)
&=&
{1\over2}\sum_i \lambda_i( \lambda_i+1-2i),
&&
&&
\sum_{s\in\lambda} c(s),
\cr
||&&&&&&||\cr
n(\lambda^\vee)-n(\lambda)
&=&
\sum_i i\left({\lambda^\vee_i}-{\lambda_i}\right)
&=&
\sum_{(i,j)\in\lambda} (j-i)
&=&
\sum_{s\in\lambda} (a'(s)-\ell'(s)),
\cr
||&&&&&&\cr
{1\over2}( || \lambda||^2 - ||\lambda^\vee||^2)
&=&
{1\over2}\sum_i \left({\lambda_i}^2 - \lambda_i^{\vee 2}\right)
&=&
\sum_{(i,j)\in\lambda} (\lambda_i-\lambda^\vee_j+i-j)
&=&
\sum_{s\in\lambda} (a(s)-\ell(s)).
\cr
&&&&&&\cr
n(\lambda^\vee)+n(\lambda)+|\lambda|
&=&
\sum_i \left(i-{1\over2}\right)\left({\lambda_i}+{\lambda^\vee_i}\right)
&=&
\sum_{(i,j)\in\lambda} (i+j-1)
&=&
\displaystyle{\sum_{s\in\lambda}} (a'(s)+\ell'(s)+1),
\cr
||&&&&&&\cr
{1\over2}( || \lambda||^2 + ||\lambda^\vee||^2)
&=&
{1\over2}\sum_i \left({\lambda_i}^2 +\lambda_i^{\vee 2}\right)
&=&
\displaystyle{\sum_{(i,j)\in\lambda}} (\lambda_i+\lambda^\vee_j-i-j+1)
&=&
\displaystyle{\sum_{s\in\lambda}} (a(s)+\ell(s)+1),
\cr
&&&&&&||\cr
&=&
{1\over2}\sum_i \lambda_i( \lambda_i-1+2i),
&&&&
\sum_{s\in\lambda} h(s).
\cr
&&&&&&\cr
\hline
\end{array}
$$
In our code, we employ the reaction-rate tables for the nucleon and electron/positron scatterings.
In order to ensure the detailed balance between the direct and inverse reactions between the initial and final states with the neutrino energies $E_\nu$ and $E^\prime_\nu$, respectively, we take the following method.
\begin{enumerate}
\item{$E_\nu \le E^\prime_\nu \le E_{\rm{max}}$}\\
The reaction rates for \textcolor{black}{up-scatterings} $E_\nu \le E^\prime_\nu$ are included in the table and we obtain $E^\prime_\nu$ by interpolating the tabulated data.
We use the modified reaction rate $\bar{R}$ instead of $R_{\rm{rec}}$ for convenience:
\begin{eqnarray}
&&\bar{R}\left(E_\nu,\Delta E,\cos{\psi}\right) = R_{\rm{rec}}\left(E_\nu,E^\prime_\nu,\cos{\psi}\right)\exp{\left(-\frac{E_\nu}{T}\right)},
\end{eqnarray}
with the energy difference $\Delta E \equiv E^\prime_\nu-E_\nu$.
The modified reaction rate is expressed in terms of the tabulated values $\bar{R}_{ij} \equiv \bar{R}\left(E_i,\Delta E_{ij},\cos{\psi}\right)$, where the on-grid energies bracket the actual ones as $E_1 \le E_\nu \le E_2$ and $E^\prime_1 \le E^\prime_\nu \le E^\prime_2$, and $\Delta E_{ij} \equiv E^\prime_j - E_i$ is the energy difference:
\begin{eqnarray}
\bar{R}\left(E_\nu,\Delta E,\cos{\psi}\right) = q_1k_1\bar{R}_{11} + q_1k_2\bar{R}_{12} + q_2k^\prime_1\bar{R}_{21} + q_2k^\prime_2\bar{R}_{22},
\end{eqnarray}
where the coefficients are defined as follows:
\begin{eqnarray}
&&q_1 = \frac{E_2-E_\nu}{E_2-E_1},\ \ q_2 = \frac{E_\nu-E_1}{E_2-E_1}, \\
&&k_1 = \frac{\Delta E_{12} - \Delta E}{\Delta E_{12} - \Delta E_{11}},\ \ k_2 = \frac{\Delta E - \Delta E_{11}}{\Delta E_{12} - \Delta E_{11}}, \\
&&k^\prime_1 = \frac{\Delta E_{22} - \Delta E}{\Delta E_{22} - \Delta E_{21}},\ \ k^\prime_2 = \frac{\Delta E - \Delta E_{21}}{\Delta E_{22} - \Delta E_{21}}.
\end{eqnarray}
\item{$E_{\rm{min}} \le E^\prime_\nu \le E_\nu$}\\
The reaction rates for \textcolor{black}{down-scatterings} $E_\nu \ge E^\prime_\nu$ are derived from the rates for \textcolor{black}{up-scatterings} $E_\nu \le E^\prime_\nu$ using the following relation:
\begin{eqnarray}
\bar{R}\left(E_\nu,E^\prime_\nu,\cos{\psi}\right) = \bar{R}\left(E^\prime_\nu,E_\nu,\cos{\psi}\right),
\end{eqnarray}
based on the detailed balance.
The modified reaction rate is described as
\begin{eqnarray}
&&\bar{R}\left(E^\prime_\nu,E_\nu,\cos{\psi}\right) = q_3k_3\bar{R}_{33} + q_3k_4\bar{R}_{34} + q_4k^\prime_3\bar{R}_{43} + q_4k^\prime_4\bar{R}_{44},
\end{eqnarray}
with the neutrino energy employed in the table $E_3 \le E^\prime_\nu \le E_4$ and $E^\prime_3 \le E_\nu \le E^\prime_4$, the energy difference $\Delta E^\prime \equiv E_\nu - E^\prime_\nu$ and the coefficients:
\begin{eqnarray}
&&q_3 = \frac{E_4-E^\prime_\nu}{E_4-E_3},\ \ q_4 = \frac{E^\prime_\nu-E_3}{E_4-E_3}, \\
&&k_3 = \frac{\Delta E_{34} - \Delta E^\prime}{\Delta E_{34} - \Delta E_{33}},\ \ k_4 = \frac{\Delta E^\prime - \Delta E_{33}}{\Delta E_{34} - \Delta E_{33}}, \\
&&k^\prime_3 = \frac{\Delta E_{44} - \Delta E^\prime}{\Delta E_{44} - \Delta E_{43}},\ \ k^\prime_4 = \frac{\Delta E^\prime - \Delta E_{43}}{\Delta E_{44} - \Delta E_{43}}.
\end{eqnarray}
\end{enumerate}
The total rate integrated over $E^\prime_\nu$ is
\begin{eqnarray}
A &\equiv& \int^{E_{\rm{max}}}_{E_{\rm{min}}} R \left(E_\nu,\bar{E}_\nu,\cos{\psi}\right) 2\pi \bar{E}^2_\nu d\bar{E}_\nu \nonumber \\
&=& \int^{E_{\rm{max}}}_{E_{\rm{min}}} \bar{R}\left(E_\nu,\bar{E}_\nu,\cos{\psi}\right)\exp{\left(\frac{E_\nu}{T}\right)}2\pi \bar{E}^2_\nu d\bar{E}_\nu \nonumber \\
&=& 2\pi \exp{\left(\frac{E_\nu}{T}\right)} \left[ \int^{E_\nu}_{E_{\rm{min}}} \bar{R}\left(E_\nu,\bar{E}_\nu,\cos{\psi}\right) \bar{E}^2_\nu d\bar{E}_\nu + \int^{E_{\rm{max}}}_{E_\nu} \bar{R}\left(\bar{E}_\nu,E_\nu,\cos{\psi}\right) \bar{E}^2_\nu d\bar{E}_\nu \right] \nonumber \\
&=& \frac{1}{4}\left(E^4_\nu-E^4_{\rm{min}}\right)A_{11} + \frac{1}{3} \left(E^3_\nu - E^3_{\rm{min}}\right)A_{12} \nonumber \\
&& \ + \frac{1}{5} \left(E^5_{\rm{max}}-E^5_\nu\right)B_{11} + \frac{1}{4} \left(E^4_{\rm{max}}-E^4_\nu\right)B_{12}+ \frac{1}{3} \left(E^3_{\rm{max}}-E^3_\nu\right)B_{13},
\end{eqnarray}
with the minimum and maximum energies $E_{\rm{min}}, E_{\rm{max}}$, at which the reaction rate drops to $10^{-5}$ of its peak value, and the coefficients:
\begin{eqnarray}
A_{11} &=& \frac{- \bar{R}_{11} + \bar{R}_{12}}{\Delta E_{12}-\Delta E_{11}}q_1 + \frac{- \bar{R}_{21} + \bar{R}_{22}}{\Delta E_{22} - \Delta E_{21}}q_2, \\
A_{12} &=& \frac{\left(\Delta E_{12} + E_\nu\right)\bar{R}_{11} - \left(\Delta E_{11} + E_\nu\right)\bar{R}_{12}}{\Delta E_{12}-\Delta E_{11}}q_1 \nonumber \\
&&\ \ \ \ \ + \frac{\left(\Delta E_{22} + E_\nu \right) \bar{R}_{21} - \left(\Delta E_{21} + E_\nu\right) \bar{R}_{22}}{\Delta E_{22} - \Delta E_{21}}q_2, \\
B_{11} &=& \frac{1}{E_4 - E_3} \left(\frac{-\bar{R}_{33}+\bar{R}_{34}}{\Delta E_{34} - \Delta E_{33}} + \frac{\bar{R}_{43}-\bar{R}_{44}}{\Delta E_{44} - \Delta E_{43}}\right),\\
B_{12} &=& \frac{1}{E_4 - E_3} \left(\frac{\bar{R}_{33}\left(E_4-\Delta E_{34} + E_\nu\right) - \bar{R}_{34} \left(E_4-\Delta E_{33} + E_\nu\right)}{\Delta E_{34} - \Delta E_{33}} \right. \nonumber \\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left. + \frac{\bar{R}_{43} \left(\Delta E_{44} - E_\nu -E_3\right) -\bar{R}_{44}\left(\Delta E_{43}-E_\nu-E_3\right)}{\Delta E_{44} - \Delta E_{43}}\right),\\
B_{13} &=& \frac{1}{E_4 - E_3} \left(\frac{\bar{R}_{33}E_4\left(\Delta E_{34} - E_\nu\right) + \bar{R}_{34} E_4\left(E_\nu - \Delta E_{33}\right)}{\Delta E_{34} - \Delta E_{33}} \right. \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \left. + \frac{\bar{R}_{43} E_3\left(E_\nu - \Delta E_{44}\right) + \bar{R}_{44}E_3\left(\Delta E_{43}-E_\nu\right)}{\Delta E_{44} - \Delta E_{43}}\right).
\end{eqnarray}
The neutrino energy after scattering, $E^\prime_\nu$, is determined from a random number $x$ in the range [0,1] by inverting the normalized cumulative spectrum $\int^{E^\prime_\nu}_{E_{\rm{min}}}R \left(E_\nu,\bar{E}_\nu,\cos{\psi}\right) 2\pi \bar{E}^2_\nu d\bar{E}_\nu/A$.
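Schematically, this inverse-CDF sampling can be written as follows (a Python sketch for illustration, not the production code; \texttt{rate} is a hypothetical callable returning $R(E_\nu,\bar{E}_\nu,\cos\psi)$, which in practice is the interpolated table described above):
\begin{verbatim}
import numpy as np

def sample_final_energy(rate, E_nu, cos_psi, E_min, E_max, rng, n_grid=512):
    """Draw E'_nu from the normalized spectrum by inverting its CDF."""
    Ebar = np.linspace(E_min, E_max, n_grid)
    dPdE = rate(E_nu, Ebar, cos_psi) * 2.0 * np.pi * Ebar**2
    cdf = np.concatenate(([0.0],
        np.cumsum(0.5 * (dPdE[1:] + dPdE[:-1]) * np.diff(Ebar))))  # trapezoid
    cdf /= cdf[-1]                  # normalization by the total rate A
    x = rng.random()                # uniform random number in [0, 1]
    return np.interp(x, cdf, Ebar)  # invert the cumulative spectrum

# toy usage with a hypothetical Gaussian-shaped rate around E_nu
rng = np.random.default_rng(42)
toy_rate = lambda E, Eb, c: np.exp(-(Eb - E)**2 / (2.0 * (0.05 * E)**2))
E_prime = sample_final_energy(toy_rate, 20.0, 0.5, 10.0, 30.0, rng)
\end{verbatim}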
\section{Numerical method of our MC code} \label{appendix2}
\subsection{Sample particles} \label{subch:MC}
In the MC method, we follow the tracks of sample particles, each of which represents a bundle of neutrinos, as they interact with matter.
The numbers of sample particles $N_{s}$ and physical neutrinos $N_{\nu}$ are related through the weight $W_{s}$ as follows:
\begin{eqnarray}
W_{s} = \frac{ N_{\nu} }{ N_{s} } .
\end{eqnarray}
In our simulations, the weight is constant in time and uniform over the calculation domain.
\subsection{Treatments of the transport of sample particles} \label{subch:trans}
Each sample particle carries six-dimensional information, a position $(r,\theta,\phi)$ and a momentum-space point $(E_{\nu},\theta_{\nu},\phi_{\nu})$, and we calculate its time evolution by solving geometric equations.
In order to calculate the transport of sample particles, we introduce three lengths: the ``reaction length'' $l_{\text{r}}$, the ``background length'' $l_{\text{b}}$ and the ``distribution length'' $l_{\text{f}}$.
\begin{enumerate}
\item{reaction length $l_{\text{r}}$ \\}
We define the ``reaction length'', the distance to the point where the sample particle next interacts with matter, via the optical depth:
\begin{eqnarray}
\tau(S,E_\nu) = \int_0^{S} \frac{1}{\lambda(r,E_\nu)} ds ,
\end{eqnarray}
using the local mean free path $\lambda$:
\begin{eqnarray}
\lambda(r,E_\nu) = \frac{1}{\sigma_{\text{tot}}},
\end{eqnarray}
with the total cross section $\sigma_{\text{tot}} = \sum_{\alpha}^{}\sigma_\alpha(r,E_\nu)$, where $\sigma_\alpha$ is the cross section of the $\alpha$-th type of reaction.
The reaction occurs at $\tau(l_{\text{r}},E_\nu) = \tau_{\text{max}}$, where $\tau_{\text{max}}$ is drawn randomly for each flight from the unit-mean exponential distribution, i.e., interactions along the path form a Poisson process with an average of one per unit optical depth.
\item{background length $l_{\text{b}}$ \\}
We employ the results of the dynamical SN simulations as a background for the neutrino transport calculations.
\textcolor{black}{We assume that the hydrodynamical values, i.e. the density, temperature and chemical potentials of matter, are uniform in each spatial zone.}
The ``background length'' is defined as the distance between the current position of a sample particle and the nearest spatial boundary of the hydrodynamical background.
\item{distribution length $l_{\text{f}}$\\}
The distribution functions of neutrinos change with time because of interactions with matter or advection.
We have to update them on an appropriate timescale, because the Fermi-blocking of neutrinos should be taken into account for neutrino reactions.
The ``distribution length'' is defined as $c\,dt_{\rm{f}}$, where $dt_{\rm{f}}$ (the ``distribution time'') is the remaining time until the next update of the distribution function.
\end{enumerate}
Sample particles propagate independently, but their global times have to coincide when the neutrino distribution function is updated.
We hence take the time step of the calculations to be the distribution time, $dt = dt_{\text{f}}$, and follow the individual evolution of each sample particle during the time step.
If the other two lengths are longer than the distribution length, the sample particle of interest just propagates freely during this time step.
If not, the sample particle undergoes the process corresponding to the shorter of the reaction and background lengths, after which we recalculate these lengths.
We repeat this cycle for each sample particle until the distribution time $dt_{\rm{f}}$ elapses (see the sketch below).
After calculating the evolutions of all sample particles individually, we update the distribution function as described in Section~\ref{subch:f}.
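The control flow of this cycle may be sketched as follows (schematic Python, not the production code; \texttt{mfp} and \texttt{zone\_edge} are hypothetical helpers returning the local mean free path and the distance to the nearest zone boundary, respectively):
\begin{verbatim}
import numpy as np

def advance_particle(pos, direction, E_nu, dt_f, mfp, zone_edge, rng, c=1.0):
    """Advance one sample particle over one distribution time step dt_f.
    pos and direction are 3-vectors (numpy arrays)."""
    remaining = c * dt_f                    # distribution length l_f
    tau_max = rng.exponential(1.0)          # optical depth to the next reaction
    while remaining > 0.0:
        lam = mfp(pos, E_nu)                # uniform within the current zone
        l_r = tau_max * lam                 # reaction length
        l_b = zone_edge(pos, direction)     # background length
        step = min(l_r, l_b, remaining)
        pos = pos + step * direction        # free flight
        remaining -= step
        tau_max -= step / lam               # optical depth consumed so far
        if step == l_r:                     # a reaction occurs at this point
            # select/perform the reaction here: absorption terminates the
            # track, scattering updates E_nu and direction (Appendix B.4)
            tau_max = rng.exponential(1.0)  # redraw for the next reaction
    return pos, direction, E_nu
\end{verbatim}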
\subsection{\textcolor{black}{Evaluation} of the neutrino distribution function} \label{subch:f}
In this calculation, we employ a spherically symmetric background and the neutrino distribution function is reduced to $f(r,E_{\nu},\theta_{\nu})$.
At every time step, we count the number of sample particles inside each volume element in a space and a phase space, and calculate the $i,j,k$-th discretized neutrino distribution function $f_{ijk}$:
\begin{eqnarray}
f_{ijk} = \frac{N_{ijk}W_{s}}{\textcolor{black}{V_{r,i}V_{m,jk}}},
\end{eqnarray}
where $i, j$ and $k$ label the grid in $r,\ E_\nu$ and $\theta_\nu$, respectively; $N_{ijk}$ is the total number of sample particles in the $i,j,k$-th volume element; $\textcolor{black}{V_{r,i}} = 4\pi\left(r^3_i-r^3_{i-1}\right)/3$ is the $i$-th spatial volume element and $\textcolor{black}{V_{m,jk}} = 2\pi\left(\cos{\theta_{\nu,k}}-\cos{\theta_{\nu,k-1}}\right)\left(E^3_{\nu,j}-E^3_{\nu,j-1}\right)/3$ the $j,k$-th phase space volume element.
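A minimal numpy sketch of this tally (the grid edges and array layout are illustrative) reads:
\begin{verbatim}
import numpy as np

def distribution_function(samples, r_edges, E_edges, mu_edges, W_s):
    """f_ijk from sample counts; samples has shape (N_s, 3) with columns
    (r, E_nu, cos(theta_nu)), and the bin edges increase monotonically."""
    N, _ = np.histogramdd(samples, bins=(r_edges, E_edges, mu_edges))
    V_r = 4.0 * np.pi / 3.0 * np.diff(r_edges**3)            # spatial volumes
    V_m = 2.0 * np.pi / 3.0 * np.outer(np.diff(E_edges**3),
                                       np.diff(mu_edges))    # phase-space volumes
    return N * W_s / (V_r[:, None, None] * V_m[None, :, :])
\end{verbatim}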
\subsection{Treatments of neutrino reactions} \label{subch:2.5}
Neutrinos interact with matter via several reactions inside stars (see Table~\ref{reac_MC}).
We divide the neutrino reactions into three processes, absorption, emission and scattering, and treat them differently in our MC code.
\subsubsection{Absorption and scattering}
Existing sample particles are absorbed or scattered by matter.
After the subsequent reaction point is determined by the reaction length, which is defined by the mean free path including all absorption and scattering processes taken into account, we choose which reaction actually occurs using a uniform random number $x$ in the range [0, 1].
If we get the random number in the range $\Sigma_{\alpha=1}^{i-1} \sigma_\alpha/\sigma_{\text{tot}} \leqq x < \Sigma_{\alpha=1}^{i} \sigma_{\alpha}/\sigma_{\text{tot}}$, the sample particle undergoes the $i$-th reaction \citep{1978ApJS...37..287T,Lucy:2003zx}.
If the $i$-th reaction is an absorption process, such as $\nu_e + n \rightarrow p + e^-$, we stop following the track of this sample particle at this point.
If the $i$-th reaction is a scattering process, such as $\nu + N \rightarrow \nu + N$, on the other hand, we calculate the angles and energy after the scattering, $\theta^\prime_\nu, \phi^\prime_\nu$ and $E^\prime_\nu$, with random numbers as mentioned in Section~\ref{new_MC} (see the sketch below).
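The selection step can be sketched as follows (illustrative Python; the cross sections are those evaluated at the interaction point):
\begin{verbatim}
import numpy as np

def select_reaction(sigmas, rng):
    """Pick reaction i with Sum_{a<i} sigma_a/sigma_tot <= x < Sum_{a<=i}."""
    cum = np.cumsum(sigmas) / np.sum(sigmas)
    x = rng.random()                                   # uniform in [0, 1]
    return int(np.searchsorted(cum, x, side='right'))  # 0-based reaction index

rng = np.random.default_rng(0)
i = select_reaction([0.2, 0.5, 0.3], rng)  # e.g. 0: ecp, 1: nsc(rec), 2: esc
\end{verbatim}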
\subsubsection{Emission}
The total number of neutrinos emitted during a time step $dt_{\rm{f}}$ in a unit spatial volume is calculated from the reaction rate $R_{i,\rm{ems}}$ and we add the corresponding number of sample particles uniformly in that volume element at the beginning of each time step.
The energies and angles of these sample particles are drawn from the distribution of the reaction rate.
We assign to each new sample particle a random distribution time in the range [0, $dt_{\rm{f}}$], so that the emission rate is constant over the time step, and then calculate its evolution in the same way as for the existing sample particles.
\section{Neutrino reactions} \label{appendix3}
\subsection{Electron/positron scatterings}
The reaction rates of the electron/positron scattering are derived from a \textcolor{black}{similar form} to that for the nucleon scattering in eqs.~(\ref{R_rec})-(\ref{R_rec_f}), if we change the coefficients $\beta$ as summarized in Table~\ref{coef_esc}, the target mass $m_N \rightarrow m_e$ and the chemical potential $\mu_N \rightarrow \mu_e, -\mu_e$ for electrons and positrons, respectively.
In this paper, we denote the total reaction rates of electron and positron scatterings as $R_{\rm{esc}}$.
Their cross section $\sigma_{\rm{esc}}$ and normalized spectra $P_\psi$ and $P_{E^\prime_\nu}$ are defined in the same way as those for the nucleon scattering.
Note that we should distinguish the reaction rates of $\nu_x$ and $\bar{\nu}_x$, but we adopt that of $\nu_x$ in this study.
\begin{table*}[htbp]
\caption{The coefficients for the reaction rates of electron and positron scatterings. In this expression, $C^\prime_{Ve}= C_{Ve} + 1$ and $C^\prime_{Ae} = C_{Ae} + 1$ with $C_{Ve} = -1/2 + 2\sin^2{\theta_w}$ and $C_{Ae} = 1/2$. \label{coef_esc}}
\begin{center}
\begin{tabular}{c|ccc} \hline
reaction & $\beta_1$ & $\beta_2$ & $\beta_3$ \\
\hline\hline
$\nu_e e^-$ / $\bar{\nu}_e e^+$ & $\left(C^\prime_{Ve} + C^\prime_{Ae}\right)^2$ & $\left(C^\prime_{Ve} - C^\prime_{Ae}\right)^2$ & $ C^{\prime2}_{Ae} - C^{\prime2}_{Ve} $ \\
$\nu_e e^+$ / $\bar{\nu}_e e^-$ & $\left(C^\prime_{Ve} - C^\prime_{Ae}\right)^2$ & $\left(C^\prime_{Ve} + C^\prime_{Ae}\right)^2$ & $ C^{\prime2}_{Ae} - C^{\prime2}_{Ve} $ \\
$\nu_x e^-$ & $\left(C_{Ve} + C_{Ae}\right)^2$ & $\left(C_{Ve} - C_{Ae}\right)^2$ & $ C^2_{Ae} - C^2_{Ve} $ \\
$\nu_x e^+$ & $\left(C_{Ve} - C_{Ae}\right)^2$ & $\left(C_{Ve} + C_{Ae}\right)^2$ & $ C^2_{Ae} - C^2_{Ve} $ \\
\end{tabular}
\end{center}
\end{table*}
\subsection{Electron capture on free proton and positron capture on free neutron}
The emission rates of EC's and PC's on free nucleons, $R_{\rm{EC,ems}}$ and $R_{\rm{PC,ems}}$, are calculated following \cite{1985ApJS...58..771B}:
\begin{eqnarray}
R_{\rm{EC,ems}} &=& \frac{{G_F}^2}{\pi\hbar c}\eta_{\mathrm{pn}}\left({g_V}^2+3{g_A}^2\right)\left(E_{\nu_e}+Q\right)^2 \nonumber \\
&&\times \sqrt{1-\frac{m_e^2}{\left(E_{\nu_e}+Q\right)^2}}f_e\left(E_{\nu_e}+Q\right), \\
R_{\rm{PC,ems}} &=& \frac{{G_F}^2}{\pi\hbar c}\eta_{\mathrm{np}}\left({g_V}^2+3{g_A}^2\right)\left(E_{\nu_e}-Q\right)^2 \nonumber \\
&&\times \sqrt{1-\frac{m_e^2}{\left(E_{\nu_e}-Q\right)^2}} f_{e^+}\left(E_{\nu_e}-Q\right) \nonumber\\
&&\times\Theta\left(E_{\nu_e} - Q - m_e\right),
\end{eqnarray}
in which nucleons are assumed to be non-relativistic and nucleon recoils are neglected.
The absorption rates are derived from the detailed balance relations, $R_{\rm{\ast,ems}} (1-f_{\ast,\rm{eq}}) = R_{\rm{\ast,abs}}f_{\ast,\rm{eq}}$, using the Fermi-Dirac distribution of electrons and positrons $f_{\ast,\rm{eq}}$ with the chemical potential $\mu_e$ for EC's and $-\mu_e$ for PC's; here $R_{\ast,\rm{abs}}$ stands for $R_{\rm{EC,abs}}$ and $R_{\rm{PC,abs}}$.
The cross sections are calculated as $\sigma_{\ast} = R_{\ast,\rm{abs}}$.
\subsection{Electron-positron pair annihilation}
We use the reaction rate of the electron-positron pair annihilation $R_{\rm{pair}}$\footnote{The reaction rate in \cite{2017ApJ...848...48K} is described in the natural unit ($c=\hbar=1$). In this paper, $R_{\rm{pair}}$ is defined by multiplying a factor $1/c\hbar$ to that in the previous paper.} described in \cite{2017ApJ...848...48K} (See eqs.~(1)-(9) in this paper).
The emission rate and cross section for neutrinos are derived from integrals of the reaction rate over the anti-neutrino phase space:
\begin{eqnarray}
R_{\rm{pair,ems}} &=& \int\int \frac{1}{2E_\nu \left(2\pi\right)^3} \frac{2\pi E^2_{\bar{\nu}}}{2E_{\bar{\nu}}\left(2\pi\right)^3} \nonumber \\
&&\ \ \ \ \ \ \ \times R_{\rm{pair}} \left(1-f_{\bar{\nu}}\right) d\cos{\psi} dE_{\bar{\nu}}, \\
\sigma_{\rm{pair}} &=& \int\int \frac{1}{2E_\nu \left(2\pi\right)^3} \frac{2\pi E^2_{\bar{\nu}}}{2E_{\bar{\nu}}\left(2\pi\right)^3} \nonumber \\
&& \ \ \ \ \ \ \ \times R_{\rm{pair}} f_{\bar{\nu}} d\cos{\psi} dE_{\bar{\nu}},
\end{eqnarray}
where $E_{\bar{\nu}}$ is the anti-neutrino energy, $\psi$ the angle between the four-momenta of the neutrino pair and $f_{\bar{\nu}}$ the anti-neutrino distribution function.
For anti-neutrinos, we integrate the reaction rate over $E_\nu$ instead of $E_{\bar{\nu}}$.
In this calculation, we employ the distribution function for the other neutrinos derived from the background CCSN simulations.
\subsection{Nucleon bremsstrahlung}
We calculate the reaction rate of the nucleon bremsstrahlung $R_{\rm{brem}}$ based on \cite{1979ApJ...232..541F,1987ApJ...316..691M}.
The emission and absorption rates $R_{\rm{brem,ems}}$, $R_{\rm{brem,abs}}$ and the cross section $\sigma_{\rm{brem}}$ are derived in the same way as those for pair annihilations.
\section{Introduction}
Core-collapse supernovae (CCSNe) are violent explosions of massive stars with $M_{\mathrm{ZAMS}} \gtrsim 8\ M_\odot$.
The explosion is instigated by the gravitational collapse of a central core, which is followed by the formation of a shock wave at core bounce.
If the shock wave passes through the central core and propagates through outer envelopes up to the stellar surface, these envelopes are ejected and a compact remnant is left behind at the center.
In numerical simulations, the shock wave stagnates inside the core and how to get the shock wave out of the core has been explored for a long time but has not been settled yet \citep[references therein]{2012ARNPS..62..407J,2012AdAst2012E..39K,2019arXiv190411067M}.
One of the favored mechanisms for shock revival is the heating by neutrinos emitted from a proto-neutron star (PNS) and is called the neutrino heating mechanism.
In multi-dimensional simulations, non-spherical matter motions, such as convection or the standing accretion shock instability (``SASI''), push up the shock wave and enhance the neutrino heating behind it \citep{2003ApJ...584..971B,2008ApJ...678.1207I}, and shock revival has been obtained in many recent simulations \citep{Skinner:2015uhw,Summa:2015nyk,2015ApJ...801L..24M,2015ApJ...807L..31L,2015ApJ...800...10D,Takiwaki:2016qgc,2016ApJ...831...98R,2017MNRAS.472..491M,2017ApJ...850...43R,OConnor:2015rwy,2018ApJ...855L...3O,2019MNRAS.482..351V,2019MNRAS.485.3153B,adam2020}.
Neutrino reaction rates are certainly important for SN explosion.
\cite{1985ApJS...58..771B} provided a comprehensive set of neutrino opacities, which have been widely incorporated in SN simulations.
Possible corrections to these rates have been investigated for the last 30 years.
For example, the important updates are summarized in \cite{2018ApJ...853..170K} (see also references therein).
They have been taken into account in numerical simulations of late \citep{2006A&A...447.1049B,2012ApJ...761...72M,2012ApJ...760...94L,2018ApJ...853..170K}.
Nucleon recoils in neutrino-nucleon scattering are one of them.
Since the energy exchange by nucleon recoils is only a few \% of the initial neutrino energy, the nucleon mass being much larger than the typical neutrino energy of $\lesssim$~100 MeV, nucleon recoils were considered to be less important in the spectral formation than electron scattering, in which the energy exchange is much more efficient, and were hence ignored in past SN simulations.
The cross section of nucleon scattering is much larger than that of electron scattering, however, and it is possible that neutrino spectra are changed by nucleon recoils, especially for heavy-lepton neutrinos, which interact with matter only via neutral current reactions.
As a matter of fact, the effects of nucleon recoils have already been investigated.
For example, \cite{Keil:2002in} used their Monte Carlo (MC) code for the assessment and demonstrated that the average neutrino energy is indeed decreased by nucleon recoils.
Their effects have been also studied by dynamical simulations of CCSNe \citep{2002A&A...396..361R,2006A&A...447.1049B,2009ApJ...694..664M,2010PhRvL.104y1101H,2012ApJ...760...94L,2012ApJ...761...72M,2015ApJ...808..188P,2015ApJ...807L..31L,Skinner:2015uhw,2017ApJ...850...43R,2018ApJ...853..170K,2018arXiv180905608B,2019MNRAS.482..351V,2019MNRAS.485.3153B,2019arXiv190110523R,2019ApJ...873...45G}.
They found that nucleon recoils reduce the opacity for neutrinos and accelerate the PNS cooling, which in turn increases neutrino luminosities, thus helping shock revival.
We revisit this issue from a somewhat different point of view.
In most CCSN simulations one employs a finite-difference method for neutrino transport.
In so doing, we normally cannot afford to deploy a sufficiently large number of energy bins to resolve the small energy exchange by nucleon recoils.
For example, only 20 energy bins are deployed to cover the range of 0-300 MeV in our CCSN simulations with full Boltzmann neutrino transport \citep{nagakura2018,2019ApJ...880L..28N,harada2019} and the widths of these energy bins are larger by an order of magnitude than the typical energy exchange through nucleon recoils.
Note that although in those simulations energy sub-grids are normally employed to evaluate the transfer rate from an energy cell to the next one \citep{2006A&A...447.1049B}, the resolution problem still remains, since the neutrino distribution within each energy bin still has to be assumed one way or another.
We will quantify the effects of the coarse energy grid and present a possible improvement in this paper.
We perform neutrino transport calculations with our own MC code for a static hydrodynamical background derived from our dynamical SN simulation.
Note that these MC simulations are free of the energy-resolution problem.
It is also mentioned that in this study we do not use the approximation given by \cite{Horowitz:2001xf} but employ the exact reaction rate for nucleon scattering\footnote{Note that we neglect the effect of weak magnetism, which is embedded in the form factor of the scattering kernel, in order to purely focus on the effects of nucleon recoils in this study. The incorporation of the weak magnetism in our MC code is straightforward, though.}.
After validating our MC code, we look into the effects of nucleon recoils on neutrino spectra, that is, how the spectra are thermalized with radius, comparing the contribution of nucleon recoils with those of other processes, particularly electron scattering, in detail.
We then introduce energy grids with different numbers of grid points, $N_{E_\nu}$ = 10 and 20, in our MC calculations to assess the energy-resolution issue.
Note that the latter energy grid is exactly the same as the one used in our CCSN simulations with the finite-difference Boltzmann solver.
In order to mimic the situation in the finite-difference methods, we repeatedly re-distribute by hand, in a couple of ways, the MC particles in each energy bin after periods given by the typical time step of CCSN simulations and see the effects on neutrino spectra.
The organization of the paper is as follows: the new features in our MC code are briefly described in Section \ref{ch2}, particularly the treatment of neutrino-nucleon scattering; several numerical tests for the validation of our new code are presented in Section \ref{ch3}; the effects of nucleon recoils on neutrino spectra are discussed in Section \ref{ch4}; the possible influence of energy resolution in the finite-difference methods is studied in Section \ref{ch5}, and finally we give a summary and discussions in Section \ref{ch6}.
\begin{table*}[htbp]
\caption{The neutrino reaction set included in our calculations. The base model incorporates the subset of neutrino reactions normally considered in dynamical supernova simulations.
The nucleon recoil in the nucleon scattering is taken into account in model r1 whereas the electron/positron scattering is also included in model e1.
\label{reac_MC}}
\begin{center}
\begin{tabular}{l|l|l|ccc} \hline
\multicolumn{3}{c|}{reactions} & base & r1 & e1 \\
\hline\hline
electron-positron pair annihilation & pair & $e^- + e^+ \longrightarrow \nu + \bar{\nu} $ & $\checkmark$ & $\checkmark$ &$\checkmark$\\
bremsstrahlung & brems & $N + N \longrightarrow N + N + \nu + \bar{\nu}$ & $\checkmark$ & $\checkmark$ &$\checkmark$\\
electron capture & ecp & $p + e^- \longleftrightarrow n + \nu_e$ & $\checkmark$ & $\checkmark$ &$\checkmark$\\
positron capture & pc & $n + e^+ \longleftrightarrow p + \bar{\nu}_e$ & $\checkmark$ & $\checkmark$ &$\checkmark$\\
\hline
nucleon scattering & nsc (Bruenn) & $N + \nu \longrightarrow N + \nu$ & $\checkmark$ & & \\
& nsc (rec) & & & $\checkmark$ &$\checkmark$\\
electron scattering & esc & $e^- + \nu \longrightarrow e^- + \nu$ & & &$\checkmark$\\
positron scattering & psc & $e^+ + \nu \longrightarrow e^+ + \nu$ & & &$\checkmark$\\
\end{tabular}
\end{center}
\end{table*}
\section{Numerical methods of MC transport} \label{ch2}
\subsection{MC method $\text{vs}$ finite-difference methods}
There are two \textcolor{black}{representative} approaches to the numerical solution of the radiation transport equation: the discretized methods and the MC method.
In the former approach, exemplified by the $S_N$ method (see e.g. \cite{2004rahy.book.....C}), we discretize the transport equation in phase space.
In the latter method, we follow the tracks of ``sample particles'', which represent a bundle of radiation particles interacting with matter.
The interactions are treated probabilistically and physical quantities, such as the distribution function of radiation, are obtained by collecting individual sample evolutions.
Each method has its own advantages and drawbacks.
In the discretized method, treating the entire system, which has both optically thick and thin regions, normally poses no problem.
The time-dependent coupling with hydrodynamics is also straightforward.
On the other hand, the numerical resolution is mainly determined by the number of mesh points one can afford and, as repeatedly mentioned, the number of energy-grid points cannot be very large, particularly in multiple spatial dimensions.
This may be particularly critical for the treatment of the small energy exchanges in the nucleon scattering, and special care, such as the employment of sub-grids, is normally taken \citep{2006A&A...447.1049B,2018arXiv180905608B}.
Recently, \cite{2019arXiv190405047S} showed that the Fokker-Planck approximation is also useful.
It is noted that even if such a measure is taken, the coarse-resolution problem may remain, since the neutrino energy spectrum is still represented on a rather small number of energy-grid points.
The MC method is mesh-free and hence favorable for multi-dimensional simulations.
Various reactions can be treated in a simple and direct way.
In fact, the small energy exchanges in the nucleon scattering pose no problem in this approach.
On the other hand, statistical errors inherent in the probabilistic description and the slow convergence, with errors scaling as $1/\sqrt{N}$ in the number of sample particles $N$, are big disadvantages of the MC method.
It is normally counted as another demerit that it is difficult to treat the optically thick regime and/or couplings with hydrodynamics (but see \cite{2012ApJ...755..111A,2017ApJ...847..133R}).
In this study, we employ the MC method for neutrino transport for two reasons.
First, we focus on nucleon recoils, which can be treated most accurately with the MC method as explained above.
Second, we are concerned with the thermalization of the neutrino spectrum via the nucleon scattering, and hence we do not need to worry about the high-density region, where the MC method performs poorly.
As a matter of fact, neutrinos are already thermalized by other processes well inside the neutrino sphere and we have only to impose the thermal distribution functions as the inner boundary condition (but see Section~\ref{subch:thermal_neutrino} for more details of our treatment).
\subsection{New features in our MC code} \label{new_MC}
Here we summarize some new features of our MC code worth particular mention.
Other information on the code is provided in Appendices \ref{appendix}-\ref{appendix3}.
The basics are essentially the same as in previous works \citep{1978ApJS...37..287T,1989A&AS...78..375J,Keil:2002in}.
The main difference in the neutrino transport from the photon transport is the Fermi-blocking at the final state.
For example, neutrino scatterings are suppressed by the blocking factor $1-f$, where the distribution function is denoted by $f$.
This makes the transport equation nonlinear and we need to update the distribution function at an appropriate rate during the MC simulation (see Appendices \ref{subch:trans} and \ref{subch:f}).
In our code, four emission and two scattering processes are implemented (see Table~\ref{reac_MC}).
Here we focus on the nucleon scattering, the key reaction in this paper.
As mentioned earlier, we treat this process as precisely as possible.
We do not use the commonly adopted approximate formula but employ the exact reaction rate, which is essentially the same as that for the electron scattering.
We store it in a table as $R_i(E_\nu, E^\prime_\nu, \psi)$ for various combinations of density, temperature and electron fraction.
In this expression, $E_\nu$ and $E^\prime_\nu$ are the neutrino energies before and after scattering, respectively; $\psi$ is the scattering angle, i.e., the angle that the incident and outgoing momenta make.
The table actually contains the reaction rates only for $E_\nu \le E^\prime_\nu$; those for the other case, $E_\nu > E^\prime_\nu$, are derived from the former so that the detailed balance relation is satisfied.
The detailed procedure is given in Appendix \ref{appendix}.
For a given incident energy $E_\nu$, the scattering angle $\psi(\theta^\prime_\nu, \phi^\prime_\nu)$ and the energy after scattering $E^\prime_\nu$ are determined probabilistically according to their normalized distributions $P_\psi$ and $P_{E^\prime_\nu}$, which are derived from the cumulative reaction rate $R_i(E_\nu,E^\prime_\nu,\psi)$ (see eqs. (\ref{Ppsi}) and (\ref{penu}) ).
The azimuth of the scattering direction $\Psi$ is determined randomly in the range of [0, 2$\pi$].
Then, the propagation direction of neutrinos after scattering, specified by the zenith and azimuth angles ($\theta^\prime_\nu, \phi^\prime_\nu$) measured from the local radial direction, is obtained from the angles ($\psi, \Psi$) by an appropriate coordinate transformation.
Note that the normalized distributions $P_\psi$ and $P_{E^\prime_\nu}$ do not include the blocking factor $1-f$ (see Section~\ref{NNscat}).
It is taken into account after $E^\prime_\nu$, $\theta^\prime_\nu$ and $\phi^\prime_\nu$ are determined in this way.
We throw the dice yet again to get a random number $z$ in the range of [0,~1].
If the condition $0 \leq z \leq 1 - f(r,E^\prime_\nu,\theta^\prime_\nu)$ is satisfied, we accept this scattering, whereas it is ``blocked'' otherwise and the energy and angles of the neutrino are not changed after all.
Note that this procedure correctly reproduces the mean free path in the presence of Fermi-blocking.
It has an advantage that the reaction table can be independent of the neutrino distribution.
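To make this sampling flow concrete, the following is a minimal Python sketch of a single scattering event, assuming the normalized cumulative distributions for the incident energy at hand have already been interpolated from the table; the names (\texttt{cdf\_psi}, \texttt{cdf\_Eprime}, \texttt{f\_local}) are hypothetical stand-ins rather than the actual interfaces of our code:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_nucleon_scattering(E_nu, mu_nu, cos_psi_grid, cdf_psi,
                              Eprime_grid, cdf_Eprime, f_local):
    # (1) scattering angle from the tabulated cumulative distribution
    cos_psi = np.interp(rng.random(), cdf_psi, cos_psi_grid)
    # (2) outgoing energy from the cumulative distribution at this angle
    E_prime = np.interp(rng.random(), cdf_Eprime, Eprime_grid)
    # (3) azimuth of the scattering direction, uniform in [0, 2 pi)
    Psi = 2.0 * np.pi * rng.random()
    # (4) rotate (psi, Psi) to the new zenith cosine measured from the
    #     local radial direction; the azimuth phi' follows analogously
    mu_prime = (mu_nu * cos_psi
                + np.sqrt(max(0.0, 1.0 - mu_nu ** 2))
                * np.sqrt(max(0.0, 1.0 - cos_psi ** 2)) * np.cos(Psi))
    # (5) Fermi blocking: accept with probability 1 - f(final state)
    if rng.random() <= 1.0 - f_local(E_prime, mu_prime):
        return E_prime, mu_prime   # scattering accepted
    return E_nu, mu_nu             # blocked: state left unchanged
\end{verbatim}
The inverse-CDF draws via \texttt{np.interp} rely on the monotonicity of the cumulative distributions; the update of $\phi^\prime_\nu$ is omitted for brevity.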
\subsection{Reaction rate of neutrino-nucleon scattering} \label{NNscat}
The reaction rate of the neutrino-nucleon scattering is given in essentially the same way as for the electron scattering \citep{1993ApJ...410..740M}:
\begin{eqnarray}
R_{\rm{rec}}\left(q,q^\prime\right) = \frac{G^2_F}{2\pi^2\hbar c} \frac{1}{E_\nu E_{\nu}^\prime}
\left[\beta_1I_1 + \beta_2 I_2 + \beta_3I_3\right]. \label{R_rec}
\end{eqnarray}
In the above expression, $G_F = 1.166364 \times 10^{-11} \rm{MeV}^{-2}$ is the Fermi coupling constant and $\beta$'s are the following combinations of the coupling constants:
$\beta_1 = \left(C_V - C_A \right)^2$, $\beta_2 = \left(C_V + C_A \right)^2$ and $\beta_3 = C^2_A - C^2_V$,
and $I$'s are functions of the energies $E_\nu$, $E^\prime_\nu$ of the incident and outgoing neutrinos and the angle $\psi$ between their momenta $q$ and $q^\prime$:
\begin{eqnarray}
I_1 &=& \frac{2\pi T}{\Delta^5} E^2_\nu E_{\nu}^{\prime2} (1-\cos{\psi})^2
\frac{1}{\exp{\left(\frac{E_\nu - E_\nu^\prime}{T}\right)}-1} \nonumber \\
&& \times \left[ AT^2\left(G_2(y_0) + 2y_0G_1(y_0) + y_0^2G_0(y_0)\right)\right. \nonumber \\
&& \left. + BT\left(G_1(y_0) + y_0G_0(y_0) \right) + CG_0(y_0) \right], \\
I_2 &=& I_1\left(-q,-q^\prime \right), \\
I_3 &=& \frac{2\pi T m_N^2}{\Delta}E_\nu E_\nu^\prime\left(1-\cos{\psi}\right)
\frac{G_0\left(y_0\right)}{\exp{\left(\frac{E_\nu - E_\nu^\prime}{T}\right)}-1},
\end{eqnarray}
with
\begin{eqnarray}
\Delta^2 &\equiv& E_\nu^2 + E_\nu^{\prime2}-2E_\nu E_\nu^\prime\cos{\psi}, \\
A &\equiv& E_\nu^2 + E_\nu^{\prime2} + E_\nu E_\nu^\prime\left(3+\cos{\psi}\right), \\
B &\equiv& E_\nu^\prime \left[ 2E_\nu^{\prime2} + E_\nu E_\nu^\prime\left(3-\cos{\psi}\right) \right. \nonumber \\
&&\ \ \ \left. - E_\nu^2\left(1+3\cos{\psi}\right)\right], \\
C &\equiv& E_\nu^{\prime2} \left[ \left(E_\nu^\prime-E_\nu\cos{\psi}\right)^2 - \frac{E_\nu^2}{2}\left(1-\cos^2{\psi}\right) \right. \nonumber \\
&&\ \ \ \left. - \frac{1}{2}\frac{1+\cos{\psi}}{1-\cos{\psi}}\frac{m_N^2}{E_\nu^{\prime2}}\Delta^2\right], \label{R_rec_fin}
\end{eqnarray}
and $y_0 = E_{N0}/T$, $\eta=\mu_N/T$, $\eta^\prime=\eta+(E_\nu-E^\prime_\nu)/T$ and $G_n(y) \equiv F_n(\eta^\prime-y) - F_n(\eta-y)$, in which the Fermi integral $F_n(z)$ is defined as
\begin{eqnarray}
F_n\left(z\right)=\int_0^\infty \frac{x^n}{e^{x-z}+1}dx ,
\end{eqnarray}
and $E_{N0}$ is expressed as
\begin{eqnarray}
E_{N0} = \frac{E_\nu-E_\nu^\prime}{2} + \frac{\Delta}{2}\sqrt{1+\frac{2m_N^2}{E_\nu E^\prime_\nu\left(1-\cos{\psi}\right)}}. \label{R_rec_f}
\end{eqnarray}
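As an illustration of these building blocks, a straightforward numerical evaluation of $F_n$, $G_n$ and $y_0$ may look as in the following sketch (production codes would tabulate these quantities instead, and the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def fermi_integral(n, z):
    # F_n(z) = int_0^inf x^n / (exp(x - z) + 1) dx; the occupation is
    # written with tanh for numerical stability at large x - z
    occ = lambda x: 0.5 * (1.0 - np.tanh(0.5 * (x - z)))
    val, _ = quad(lambda x: x ** n * occ(x), 0.0, np.inf, limit=200)
    return val

def G_n(n, y, eta, eta_prime):
    # G_n(y) = F_n(eta' - y) - F_n(eta - y)
    return fermi_integral(n, eta_prime - y) - fermi_integral(n, eta - y)

def y_0(E, Ep, cos_psi, m_N, T):
    # y_0 = E_N0 / T; forward scattering (cos_psi = 1) must be treated
    # separately since the expression below is singular there
    Delta = np.sqrt(E ** 2 + Ep ** 2 - 2.0 * E * Ep * cos_psi)
    E_N0 = 0.5 * (E - Ep) + 0.5 * Delta * np.sqrt(
        1.0 + 2.0 * m_N ** 2 / (E * Ep * (1.0 - cos_psi)))
    return E_N0 / T
\end{verbatim}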
\textcolor{black}{Assuming that} the energy exchange is much smaller than the neutrino energy before scattering $\Delta E/E_\nu \ll 1$ and the nucleon mass is infinitely large $m_N \rightarrow \infty$, one reproduces the reaction rate given by \cite{1985ApJS...58..771B}, which is commonly incorporated in SN simulations:
\begin{eqnarray}
&&\textcolor{black}{R_{\rm{Bruenn}}} = \frac{2\pi G^2_F}{\hbar c} \eta_{NN} \delta\left(E_\nu-E^\prime_\nu\right) \nonumber \\
&&\times \left\{ \left(h^N_V\right)^2 + 3\left(h^N_A\right)^2 + \left[\left(h^N_V\right)^2 - \left(h^N_A\right)^2 \right] \cos{\psi} \right\}, \ \ \ \label{bruenrate}
\end{eqnarray}
and $\eta_{NN}$ is defined as
\begin{eqnarray}
\eta_{NN} &\equiv& \int \frac{2d^3 p_N}{\left(2 \pi\right)^3} \tilde{F}_N\left(\tilde{E}\right) \left[1-\tilde{F}_N\left(\tilde{E}\right)\right],
\end{eqnarray}
where $\tilde{F}_N(\tilde{E}) = 1/\{1+\exp{[(\tilde{E}-\mu_N)/T]}\}$ is the Fermi-Dirac distribution of nucleons with the non-relativistic energy $\tilde{E}=p^2_N/2m_N$.
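For reference, $\eta_{NN}$ reduces to a one-dimensional integral, $\eta_{NN} = \pi^{-2}\int_0^\infty p_N^2\, \tilde{F}_N (1-\tilde{F}_N)\, dp_N$, which can be evaluated for instance as in the following sketch (natural units with $\hbar = c = 1$ are assumed here):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def eta_NN(m_N, mu_N, T):
    # eta_NN = (1 / pi^2) int_0^inf p^2 F (1 - F) dp in natural units
    # (hbar = c = 1, momenta in MeV); mu_N must be measured consistently
    # with the non-relativistic energy E = p^2 / (2 m_N), i.e. without
    # the rest mass
    def integrand(p):
        F = 0.5 * (1.0 - np.tanh(0.5 * (p ** 2 / (2.0 * m_N) - mu_N) / T))
        return p ** 2 * F * (1.0 - F) / np.pi ** 2
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val
\end{verbatim}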
The exact and (Bruenn's) approximate total cross sections are obtained by integrating the corresponding reaction rates $R_\ast = R_{\rm{rec}}, \textcolor{black}{R_{\rm{Bruenn}}}$:
\begin{eqnarray}
\sigma_{\rm{N}} = \int \tilde{R}_\ast d\cos{\psi},
\end{eqnarray}
with
\begin{eqnarray}
\tilde{R}_\ast = \frac{1}{\left(2\pi\right)^3} \int 2\pi E^{\prime2}_\nu R_\ast dE^\prime_\nu.
\end{eqnarray}
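On a tabulated kernel, these two integrations amount to nested quadratures; a minimal sketch with the trapezoidal rule (the array names are hypothetical) is:
\begin{verbatim}
import numpy as np

def total_cross_section(R_table, Eprime_grid, cos_psi_grid):
    # R_table[i, j] = R(E_nu, Eprime_grid[i], cos_psi_grid[j]) for one
    # incident energy; first the E'-integration giving R~(psi), then the
    # angular integration, both with the trapezoidal rule
    R_tilde = np.trapz(2.0 * np.pi * Eprime_grid[:, None] ** 2 * R_table,
                       Eprime_grid, axis=0) / (2.0 * np.pi) ** 3
    return np.trapz(R_tilde, cos_psi_grid)
\end{verbatim}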
The quantities after scattering $E^\prime_\nu$, $\cos{\theta^\prime_\nu}$ and $\phi^\prime_\nu$ are determined as follows.
We first determine the scattering angle $\psi$ according to the normalized cumulative distribution:
\begin{eqnarray}
&&P_\psi\left(\cos{\psi_k}; E_\nu\right) \nonumber \\
&& = \frac{\int_{-1}^{\cos{\psi_k}}\int 2\pi E^{\prime2}_\nu R_\ast\left(E_\nu, E^\prime_\nu,\cos{\psi}\right) dE^\prime_\nu d\cos{\psi} }{\int^1_{-1}\int 2\pi E^{\prime2}_\nu R_\ast\left(E_\nu,E^\prime_\nu,\cos{\psi}\right) dE^\prime_\nu d\cos{\psi}}.\ \ \ \ \label{Ppsi}
\end{eqnarray}
For the derived $\psi_k$, the energy after scattering is determined in the same way according to the following normalized cumulative distribution:
\begin{eqnarray}
&& P_{E^\prime_\nu} \left(E^\prime_{\nu,i}; \cos{\psi_k}, E_\nu\right) \nonumber \\
&& \ \ \ \ = \frac{\int_{E^\prime_{\rm{min}}}^{E^\prime_{\nu,i}} 2\pi E^{\prime2}_\nu R_\ast\left(E_\nu,E^\prime_{\nu},\cos{\psi_k}\right) dE_\nu^\prime}{\int_{E^\prime_{\rm{min}}}^{E^\prime_{\rm{max}}} 2\pi E^{\prime2}_\nu R_\ast\left(E_\nu, E^\prime_\nu,\cos{\psi_k}\right) dE^\prime_\nu}. \label{penu}
\end{eqnarray}
The minimum and maximum energies $E^\prime_{\rm{min}}$, $E^\prime_{\rm{max}}$ in the integration are determined so that the reaction rates there fall to $10^{-5}$ of the maximum rate.
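A minimal sketch of how such normalized cumulative distributions can be built on a tabulated grid is given below; the trapezoidal rule here is our assumption, not necessarily the quadrature used in the actual code:
\begin{verbatim}
import numpy as np

def normalized_cdf(weights, grid):
    # trapezoidal cumulative integral over the grid, normalized so that
    # the last entry is unity (discrete analogue of P_psi and P_E')
    segments = 0.5 * (weights[1:] + weights[:-1]) * np.diff(grid)
    cdf = np.concatenate(([0.0], np.cumsum(segments)))
    return cdf / cdf[-1]
\end{verbatim}
The angular distribution is built first from the $E^\prime_\nu$-integrated rate; for the drawn $\psi_k$, the energy distribution is then built from $2\pi E^{\prime2}_\nu R_\ast(E_\nu, E^\prime_\nu, \cos\psi_k)$, and both are inverted with uniform deviates as sketched in Section~\ref{new_MC}.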
The treatments of other reactions are summarized in Appendices~\ref{appendix2} and \ref{appendix3}.
\begin{figure}[htbp]
\center
\epsscale{1.4}
\plotone{hydro_20_100.eps}
\caption{The radial profiles of density, temperature, electron fraction and total mean free paths for different species of neutrinos in the progenitor model with $M_{\rm{ZAMS}}$ = 11.2~$M_\odot$ at 100 ms after core bounce \citep{2019ApJS..240...38N}. The mean free paths for each species are shown for $E_\nu$ = 5, 14, 24 and 40 MeV (from above) with the same color for the r1 set of neutrino reactions (see Table~\ref{reac_MC}). We focus on the regions painted in yellow in the comparison. \label{hydro}}
\end{figure}
\begin{figure*}[htbp]
\epsscale{0.9}
\plottwo{compair_nue.eps}{compair_nue_angle.eps}
\plottwo{compair_nueb.eps}{compair_nueb_angle.eps}
\plottwo{compair_nux.eps}{compair_nux_angle.eps}
\caption{The comparison of the energy spectra of $\nu_e$'s (top), $\bar{\nu}_e$'s (middle) and $\nu_x$'s (bottom) between the MC code and the finite-difference Boltzmann solver by \cite{Nagakura:2014nta} for some selected radii in region I (left) and II (right). Color lines represent the MC results and gray symbols show the results by the Boltzmann solver. In the left panels, different lines correspond to different radii and the scattering angle is fixed to $\cos{\theta_\nu}=0.973$, whereas in the right panels, the radius is fixed to $r$ = 34 km and the scattering angle is varied. \label{compair}}
\end{figure*}
\begin{figure}[htbp]
\center
\epsscale{1.4}
\plotone{therm_sm.eps}
\caption{The thermalization of neutrino spectrum by neutron recoils. In the upper half, the solid lines present the spectra at different times and the red dotted line gives the Fermi-Dirac distribution $f_{\rm{eq}}$ with $T = 9.96$ MeV and $\mu_\nu = -1.75$ MeV expected after thermalization. The lower half exhibits the mean free time of neutrinos as a function of the neutrino energy. \label{therm}}
\end{figure}
\begin{figure}[htbp]
\center
\epsscale{1.2}
\plotone{mp_change.eps}
\plotone{cross_comparison2.eps}
\plotone{omega_depend_new2.eps}
\caption{Top: the proton scattering rate as a function of proton mass: $m_p$ (red), $10\times m_p$ (blue) and $100\times m_p$ (green). The horizontal axis is the ratio of the lost energy to the initial energy. Middle: the cross sections of the proton scattering with (red) and without (blue) recoils as a function of neutrino energy.
Bottom: the angle dependence of the proton-scattering rates at $E_\nu = 40$ MeV with (red) and without (blue) recoils. \label{ef_rec}}
\end{figure}
\section{Code validation} \label{ch3}
In this section we present some of the test calculations we conducted for the validation of our MC code.
We first compare the results obtained with our MC code and those with another Boltzmann solver based on discretization \citep{Nagakura:2014nta,Nagakura:2016jcl,2019ApJ...878..160N} in Section~\ref{sub:compair}.
The numerical treatment of the detailed balance in the neutrino-nucleon scattering, a key ingredient in this paper, is then validated in the computation of the thermalization of neutrino spectrum via this process in a single spatial zone in Section~\ref{sub:thermalization}.
\subsection{Comparison with the finite-difference Boltzmann solver} \label{sub:compair}
We validate our MC code with another Boltzmann solver developed by \cite{Nagakura:2014nta,Nagakura:2016jcl,2019ApJ...878..160N}, which is a finite-difference code based on the $S_N$ method.
We take a similar strategy to that in \cite{2017ApJ...847..133R}: we employ a snapshot at 100 ms after bounce taken from our realistic one-dimensional dynamical SN simulation with $M_{\text{ZAMS}}$ = 11.2 $M_\odot$ \citep{2019ApJS..240...38N}; fixing the matter distribution so obtained, we run the two neutrino transport codes to obtain a steady neutrino distribution.
Note that the same background model is used for the later studies.
The top three panels of Figure~\ref{hydro} show the radial profiles of density, temperature and electron fraction in this model.
We focus on two regions: region I ($r$ = 20 -- 25 km) and region II ($r$ = 28 -- 34 km) painted in yellow.
In the former region, neutrinos are nearly in thermal equilibrium with matter, whereas in the latter region they get gradually out of equilibrium as the density decreases and their distribution starts to become anisotropic.
The set of neutrino reactions employed in this comparison is referred to as ``base'' in Table~\ref{reac_MC}.
Note that the nucleon recoil is not included.
We deploy $2\times10^6$ sample particles and adopt the time step of $dt_{\rm{f}}= 10^{-7}$ s, which is the same as the time step for updating the neutrino distribution function in this case (see Appendix~\ref{appendix2}).
We adopt exactly the same spatial grid as employed in the SN simulation and assume that hydrodynamical quantities are constant in each cell.
In order to set the inner and outer boundary conditions, we introduce ghost cells both inside and outside the active region and deploy sample particles uniformly there according to the distribution functions imposed at the boundaries.
Turning off all the interactions with matter, we follow the motions of these sample particles in the ghost cells to make the fluxes at the boundaries as close to the prescribed values as possible.
We follow the time evolution of the neutrino radiation field by MC simulations until the system settles down to a nearly steady state, in which the total number of sample particles does not change by more than 0.5\% from a reference value.
We then take the average over 8,000 time steps ($8\times10^{-4}$ s) after the steady state is achieved to reduce the statistical error, and evaluate the number spectra of neutrinos from the mean distribution function.
Note that neutrinos with $E_\nu \gtrsim 5$ MeV experience scatterings with nucleons more than 10 times during this period.
This may be understood from the total mean free path for the nucleon-scattering\footnote{Note that we use the exact reaction rate $R_{\rm{rec}}$ for the cross sections of nucleon scattering $\sigma_N$ in the bottom panel of Figure~\ref{hydro}.} in the bottom panel of Figure~\ref{hydro}.
Figure~\ref{compair} shows the comparison of the energy spectra $dN(r,E_\nu)/dE_\nu$:
\begin{eqnarray}
\frac{dN\left(r,E_\nu \right)}{dE_\nu} = \frac{1}{\left(2\pi \hbar c\right)^3} \int 2\pi E^2_\nu f\left(r,E_\nu,\theta_\nu\right) d\cos{\theta_\nu},\ \ \
\end{eqnarray}
for $\nu_e$'s (top), $\bar{\nu}_e$'s (middle) and $\nu_x$'s (bottom) between the MC code and the finite-difference Boltzmann solver.
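For reference, on a discrete angular grid this spectrum amounts to a single quadrature per energy; a minimal sketch (with $\hbar c$ in MeV\,cm, and array names of our choosing) is:
\begin{verbatim}
import numpy as np

def number_spectrum(E_nu, f_slice, cos_grid, hbar_c=197.327e-13):
    # dN/dE = 2 pi E^2 / (2 pi hbar c)^3 * int f(E, mu) dmu; with hbar c
    # in MeV cm the result is per cm^3 and per MeV; f_slice holds the
    # distribution function on the angular grid at this energy
    angular_integral = np.trapz(f_slice, cos_grid)
    return 2.0 * np.pi * E_nu ** 2 * angular_integral \
        / (2.0 * np.pi * hbar_c) ** 3
\end{verbatim}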
The left panels show the results in the region I.
Color lines correspond to the results of the MC calculation for $\cos{\theta_\nu} = 0.973$ at different radii.
We use the same energy and angle grids as those employed by the finite-difference Boltzmann solver to facilitate comparisons.
Gray symbols present the results obtained with the finite-difference Boltzmann solver.
We find a good agreement between the two methods.
In the right panels, on the other hand, we show the neutrino spectra at $r$ = 34 km in region II.
Different colors denote the different cosines of angles $\cos{\theta_\nu}$.
One can see that the angular distributions of neutrinos start to become forward-peaked with $\nu_x$ being the most anisotropic as expected.
The neutrino spectra given by our MC code are again in excellent agreement with those from the finite-difference Boltzmann solver in this somewhat outer region.
\subsection{Thermalization by nucleon recoils} \label{sub:thermalization}
In this paper, we focus on the effects of nucleon recoils, particularly the thermalization of neutrinos.
In so doing, the detailed balance should be satisfied in the numerical simulations:
\begin{eqnarray}
&&R_{\rm{rec}}\left(E_\nu,E^\prime_\nu,\cos{\theta_\nu} \right)f_{\rm{eq}}\left(E_\nu\right)\left(1-f_{\rm{eq}}\left(E^\prime_\nu\right)\right) \nonumber \\
&&\ \ \ = R_{\rm{rec}}\left(E^\prime_\nu,E_\nu,\cos{\theta_\nu} \right)\left(1-f_{\rm{eq}}\left(E_\nu\right)\right)f_{\rm{eq}}\left(E^\prime_\nu\right). \ \
\end{eqnarray}
This is ensured simply by calculating the reaction rates for $E_\nu \le E^\prime_\nu$ and obtaining those for the other case, $E_\nu > E^\prime_\nu$, from the former so that the detailed balance is guaranteed.
We tabulate the reaction rates obtained for the thermodynamical conditions encountered in the matter background.
The detailed procedure is described in Appendix \ref{appendix}.
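For reference, the relation above can be solved explicitly for the reverse rate: since $f_{\rm{eq}}(E)/\left[1-f_{\rm{eq}}(E)\right] = \exp{\left[-\left(E-\mu_\nu\right)/T\right]}$ for the Fermi-Dirac distribution, the neutrino chemical potential cancels out and one obtains
\begin{eqnarray}
R_{\rm{rec}}\left(E^\prime_\nu,E_\nu,\cos{\theta_\nu}\right) = e^{\left(E^\prime_\nu-E_\nu\right)/T} R_{\rm{rec}}\left(E_\nu,E^\prime_\nu,\cos{\theta_\nu}\right),
\end{eqnarray}
so that the reverse rate depends only on the matter temperature. This is consistent with the fact, noted in Section~\ref{new_MC}, that the reaction table can be kept independent of the neutrino distribution.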
Ignoring the spatial dependence, we now perform a one-zone calculation with $T=9.96$ MeV and the neutron chemical potential $\mu_n = 921$ MeV.
We follow the thermalization of neutrino spectra only by neutrino-neutron scatterings in this test.
We inject sample particles with a monochromatic energy, $E_\nu$ = 30 MeV, as the initial condition.
Figure~\ref{therm} shows the time evolution of the neutrino spectrum.
Different colors correspond to different time steps.
The expected thermal spectrum (red dotted) is obtained from the Fermi-Dirac distribution $f_{\rm{eq}}$ as
\begin{eqnarray}
\frac{dN\left(E_\nu\right)}{dE_\nu} = \frac{1}{\left(2\pi \hbar c\right)^3}\frac{4\pi E^2_\nu}{1+\exp{\left(\frac{E_\nu-\mu_\nu}{T}\right)}}.
\end{eqnarray}
Since the total number of neutrinos $N$ is conserved in this calculation, the chemical potential of neutrinos $\mu_\nu$ is determined by $N$ and $T$.
In this test, we set $N=10^{28}$, which leads to $\mu_\nu= -1.75$ MeV.
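As an illustration of this inversion, the following sketch solves $N(\mu_\nu; T) = N$ by bracketing root finding, with the one-zone volume and the $(2\pi\hbar c)^3$ factor assumed to be absorbed into the target value:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def equilibrium_number(mu, T):
    # int_0^inf 4 pi E^2 f_eq(E) dE, with the one-zone volume and the
    # (2 pi hbar c)^3 factor absorbed into the target value below
    occ = lambda E: 0.5 * (1.0 - np.tanh(0.5 * (E - mu) / T))
    val, _ = quad(lambda E: 4.0 * np.pi * E ** 2 * occ(E), 0.0, np.inf)
    return val

def solve_mu(N_target, T, lo=-100.0, hi=100.0):
    # N is monotone in mu, so a simple bracketing root finder suffices
    return brentq(lambda mu: equilibrium_number(mu, T) - N_target, lo, hi)
\end{verbatim}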
We find that the neutrino spectrum indeed approaches this distribution, and the two are in good agreement with each other at the end ($t = 9.95 \times 10^{-4}$ s) (see the red dotted and black solid lines in Figure~\ref{therm}).
This lends confidence to our treatment of the nucleon scattering for the detailed balance.
We also give in the bottom panel of the same figure the mean free time of neutrinos $t_{\rm{mfp}}$ for the neutrino-neutron scattering:
\begin{eqnarray}
t_{\rm{mfp}} \equiv \frac{\lambda_{n}}{c} = \frac{1}{\sigma_n c}.
\end{eqnarray}
The exact reaction rate $R_{\rm{rec},n}$ is used for the cross section $\sigma_n$ in the evaluation.
We find that the computation time is long enough to guarantee the thermalization except at the lowest end of energies, where the scattering occurs only rarely.
\begin{figure}[htbp]
\center
\epsscale{1.2}
\plotone{spe_radi_nue.eps}
\plotone{spe_radi_nueb.eps}
\plotone{spe_radi_nux.eps}
\caption{The energy spectra of neutrino for the ``base'' (dotted) and ``r1'' (solid) sets of neutrino reactions (see Table~\ref{reac_MC}). Line colors denote the radii. The top, middle and bottom panels show the spectra of $\nu_e$'s, $\bar{\nu}_e$'s and $\nu_x$'s, respectively.\label{spe:rec}}
\end{figure}
\section{Impacts of nucleon recoils on neutrino spectra} \label{ch4}
We apply the MC code to the thermalization of energy spectra as neutrinos propagate outwards in the post-shock region.
We pay particular attention to the relative importance of various processes including the nucleon recoil for different neutrino flavors.
\subsection{Iso-energy limit of nucleon scattering}
Before looking into the individual contributions of different processes to the thermalization of neutrino spectra in detail, it may be worthwhile to see the iso-energy limit of the nucleon scattering, which was derived by \cite{1985ApJS...58..771B} and has been employed in most SN simulations in the past.
The Bruenn rate (eq.~(\ref{bruenrate})) can be derived from the generic expression for the non-isoenergetic scattering (eqs.~(\ref{R_rec})-(\ref{R_rec_f})) by taking the limits $m_N \rightarrow \infty$ and $\Delta \epsilon/E_\nu \rightarrow 0$.
The top panel of Figure~\ref{ef_rec} shows the dependence on the proton mass of the reaction rate for the proton scattering.
The vertical axis is the reaction rate $R_{\rm{rec}, p}$ and the horizontal axis is the ratio of the energy change to the initial energy, $\Delta \epsilon/E_\nu$.
It is clear that as the proton mass increases, the energy exchange becomes smaller, making the reaction rate more sharply peaked at $\Delta \epsilon/E_\nu = 0$, the iso-energetic scattering limit.
Note that in these calculations of $R_{\rm{rec}, p}$ we modify the chemical potential of protons so that the number density is unchanged.
In addition to the energy re-distribution, the effect of proton recoils is the reduction of the reaction rate at high energies and/or at backward scattering-angles as shown in the middle and bottom panels of Figure~\ref{ef_rec}, respectively, for $T = 5.85$ MeV, $\rho = 10^{12}\ \rm{g/cm^3}$ and $\mu_p = 907$ MeV.
We find that the latter effectively modifies the angular dependence of the nucleon scattering, making it less backward-peaked.
\subsection{Sensitivity of neutrino spectra on recoils in the nucleon scattering} \label{subch:thermal_neutrino}
We assess the impact of nucleon recoils by comparing the energy spectra in MC simulations with/without the recoils on a realistic CCSN matter background.
We run the MC code to obtain steady-state solutions of the neutrino transport on the static matter background given by the same progenitor model employed in the code validation (see Figure~\ref{hydro}).
The inner and outer boundaries are put at 20 and 100 km, respectively.
The neutrino fluxes coming in from these boundaries are obtained automatically by setting the neutrino distribution functions on the ghost mesh points to the ones derived from the SN simulation.
As the first comparison, we adopt two sets of neutrino reactions: ``base'' and ``r1'' given in Table~\ref{reac_MC}.
In the r1 set, the nucleon recoil is taken into account in addition to the base set.
For both cases of calculations, we use $2\times10^6$ sample particles and take $dt_{\rm{f}} = 10^{-7}$ s for the distribution time.
Figure~\ref{spe:rec} shows the energy spectra of neutrino number densities obtained in the two calculations.
Colors denote the radii, at which the spectra are evaluated, and solid and dotted lines show the results for the r1 and base sets, respectively.
The spectra of $\nu_e$'s (top) and $\bar{\nu}_e$'s (middle) do not change by the inclusion of the nucleon recoil, whereas high-energy $\nu_x$'s are depleted and low-energy ones are increased due to down-scatterings by nucleons (bottom).
As a result, the average energy of $\nu_x$'s is reduced by $\sim$~15\% at the outer boundary as shown in Figure~\ref{ave_ene}.
Note that the maximum difference is $\sim$~30\% at $r\sim40$ km.
The number density of $\nu_x$'s is also decreased by $\sim$~7\% at the outer boundary.
This is due to the opacity reduction caused by the nucleon recoil itself as well as by the decrease of average energy.
In order to understand the different responses to the inclusion of the nucleon recoil among different flavors, we show the rates per volume for different reactions as a function of radius in the left panels of Figure~\ref{reac:rec}.
Line colors denote the different reactions.
The vertical axis shows the number of neutrinos that experience each neutrino reaction per unit time and volume, denoted as $n_s$.
One finds in the top panel that the electron capture dominates the other reactions for $\nu_e$'s.
This is the reason why the spectrum is not changed by the inclusion of the nucleon recoil.
Note that the number of nucleon scatterings itself is smaller than that of electron captures by a factor of $\sim$ 5.
The dominant reaction for $\nu_x$'s, on the other hand, is the nucleon scattering in the absence of charged-current reactions (see the bottom panel).
As a result, the spectrum is pinched by the inclusion of the nucleon recoil.
For $\bar{\nu}_e$'s (middle), the number of nucleon scatterings is larger than those of the other reactions.
Although this seems at first glance to contradict the previous result that the spectrum of $\bar{\nu}_e$'s is not affected by the nucleon recoil, it is simply explained by the small energy exchange in the nucleon scattering.
The right panels of Figure~\ref{reac:rec} demonstrate this.
They show the energies exchanged between neutrino and matter for different reactions.
The vertical axis is the exchanged energy per unit time and volume, and is denoted by $E_s$.
In the figure, the pair-annihilation and bremsstrahlung are put together into ``others''.
We find for $\nu_e$'s (top) and $\nu_x$'s (bottom) that the orders of lines in the right panels are unchanged from those in the corresponding left panel.
For $\bar{\nu}_e$'s (middle), on the other hand, the positron capture is dominant over the nucleon scattering in terms of the energy exchange although the opposite is true for the reaction rates.
This is, as mentioned above, due to the small energy exchange in the individual scattering on nucleons.
As a result, the nucleon recoil affects the spectrum of $\nu_x$'s but not of $\bar{\nu}_e$'s.
Note that our result is qualitatively consistent with the result in \cite{Keil:2002in}.
\begin{figure}[htbp]
\centering
\epsscale{1.2}
\plotone{ave_ene_withnonchemi.eps}
\caption{The radial profiles of the average energies of $\nu_e$'s (red), $\bar{\nu}_e$'s (blue) and $\nu_x$'s (green). Solid and dotted lines correspond to the ``r1'' and ``base'' sets of neutrino reactions, respectively (see Table~\ref{reac_MC}). \label{ave_ene}}
\end{figure}
\begin{figure*}[htbp]
\epsscale{1.0}
\plottwo{reaction_nue.eps}{energy_nue.eps}
\plottwo{reaction_nueb.eps}{energy_nueb.eps}
\plottwo{reaction_nux.eps}{energy_nux.eps}
\caption{Left: the radial profiles of the number of neutrinos, which experience interactions with matter per unit time and volume on each neutrino reaction for $\nu_e$'s (top), $\bar{\nu}_e$'s (middle) and $\nu_x$'s (bottom). Right: the radial profiles of the energy exchanged between neutrino and matter on each neutrino reaction. In the right panels, the pair-annihilation and bremsstrahlung are put together into ``others''. \label{reac:rec}}
\end{figure*}
We have so far omitted electron/positron scatterings on purpose.
\textcolor{black}{The energy exchange per scattering for electron and positron is much larger than that for nucleon because of the smaller mass of the former, $m_e = 0.511$ MeV.}
In the top panel of Figure~\ref{rec:esc}, we compare the energy exchanges between the two scatterings for the incident-neutrino energy $E_\nu = 25$ MeV and the scattering angle $\cos{\theta}_\nu = -1.0$.
Note that we show the case of $\bar{\nu}_e$'s for the electron/positron scattering.
The vertical and horizontal axes are the normalized reaction rate and the ratio of the energy exchange to the incident energy, respectively.
It is clear that the peak of the reaction rate for the electron/positron scattering is displaced from the iso-energy condition $\Delta \epsilon/E_\nu = 0$ by a large amount, which means that neutrinos transfer more energy to electrons/positrons than to nucleons on average.
In the bottom panel of Figure~\ref{rec:esc}, we show the total cross sections for the two scatterings as a function of the incident-neutrino energy.
For the electron/positron scattering, we give them separately for the three neutrino flavors.
We calculate these cross sections at \textcolor{black}{$T=5.85$ MeV, $\rho=1.01\times10^{12}\ \rm{g/cm^3}$, $Y_e = 0.10$, $\mu_p=907$ MeV, $\mu_n=924$ MeV and $\mu_e=19.6$ MeV.}
One finds that the nucleon scattering has larger cross sections at $E_\nu \gtrsim$ a few MeV because of the different energy dependences of the total cross sections: $\sigma \propto E_\nu^2$ for the nucleon scattering whereas $\sigma \propto E_\nu$ for the electron/positron scattering.
\begin{figure}[htbp]
\centering
\epsscale{1.1}
\plotone{out_dist.eps}
\plotone{compair_cross.eps}
\caption{Top: the normalized reaction rates of the electron/positron scattering for $\bar{\nu}_e$'s (green) and the nucleon scattering with recoils (red) as a function of the energy change normalized by the initial neutrino energy. Bottom: the total cross sections of the nucleon scattering (red solid) and the electron/positron scattering (dotted) for each neutrino species. \label{rec:esc}}
\end{figure}
We now rerun the MC code, this time with the e1 set of the neutrino reactions given in Table~\ref{reac_MC}, in which the electron/positron scattering is taken into account in addition to the r1 set.
The number of sample particles and the distribution time $dt_f$ are the same as those in the previous calculations.
This run is meant to see the relative importance of the two scatterings in thermalizing the neutrino spectra.
Figure~\ref{reac:esc} is the same as the right panels of Figure~\ref{reac:rec} except for the addition of the electron/positron scattering as shown in orange.
We find that apart from the charged-current reactions for $\nu_e$'s and $\bar{\nu}_e$'s, the accumulation of small recoils in the nucleon scattering is more important than a smaller number of large recoils in the electron/positron scattering in the thermalization of neutrinos at least for this particular model.
Indeed, we find that the energy spectra of neutrinos are almost identical to those without the electron/positron scattering\footnote{Note that the cross section of the electron/positron scattering for low energy neutrinos ($\sim$ a few MeV) is higher than that of the nucleon scattering. On the other hand, those low energy neutrinos have already decoupled from matter, and hence the energy spectra of neutrinos at low energy are less sensitive to the change of the cross sections.} (see Figure~\ref{spe:rec}).
Note also that \cite{2000PhRvC..62c5802T} calculated the thermalization of $\nu_x$'s in a uniform background matter with their own MC code and reached the same conclusion.
\begin{figure}[htbp]
\epsscale{1.2}
\plotone{energy_nue_esc.eps}
\plotone{energy_nueb_esc.eps}
\plotone{energy_nux_esc.eps}
\caption{The same as the right panels of Figure~\ref{reac:rec} except for the inclusion of the electron/positron scattering (orange). \label{reac:esc}}
\end{figure}
\section{Implications for the numerical implementation of nucleon recoils in the finite-difference method} \label{ch5}
The nucleon recoil affects the neutrino luminosity and dynamics of explosions as discussed in the literature \citep{2002A&A...396..361R,2006A&A...447.1049B,2009ApJ...694..664M,2010PhRvL.104y1101H,2012ApJ...760...94L,2012ApJ...761...72M,2015ApJ...808..188P,2015ApJ...807L..31L,Skinner:2015uhw,2017ApJ...850...43R,2018ApJ...853..170K,2018arXiv180905608B,2019MNRAS.482..351V,2019MNRAS.485.3153B,2019arXiv190110523R,2019ApJ...873...45G}.
Although the finite-difference method is normally employed for neutrino transport in CCSN simulations, we cannot afford to deploy the sufficiently large number of energy bins needed to resolve the small energy exchange via the nucleon recoil in each scattering.
Some sub-grid technique is hence adopted \citep{2006A&A...447.1049B}.
In this section we conduct some experimental MC runs to investigate possible consequences of such numerical implementations of the nucleon recoil in the finite-difference transport schemes such as the $S_N$ method.
We quantify the effects of coarse-energy grids on the energy spectrum of neutrino and present a possible improvement.
When the cell width of the energy grid is much larger than the typical energy exchange in the scattering, it is certainly inappropriate to use the cell-center values of energies and neutrino distribution functions to evaluate the rate of the scattering that transfers neutrinos from one energy cell to an adjacent one.
This is because only those neutrinos existing in the close vicinity of the energy-cell boundary can cross over to the next cell.
In the finite-difference method adopting such an energy grid, it is required to reconstruct the neutrino distribution inside the energy bin somehow to estimate the neutrino populations near the cell boundary and calculate the scattering rate based on them; once the neutrinos enter the next energy cell, they are mixed with others in the same cell and their individual energies are forgotten.
We mimic such a situation in the MC simulation by introducing the energy grid and re-distributing MC particles in each energy bin after a certain interval in a couple of ways and study how the results are affected.
We adopt three artificial ways of the re-distribution in each energy bin: ``flat'', ``linear+Ncons'' and ``linear+NEcons''.
The first one is the simplest but the coarsest reconstruction, in which we homogenize the distribution of sample particles in each energy bin.
In the second and third cases we introduce linear distributions.
The slope and intercept of the linear functions are determined in both cases so that the number of MC particles is unchanged; in the second case, the values at the two neighboring cells are employed in the interpolation.
In the third case, on the other hand, we impose the energy conservation in the reconstruction.
The distribution of sample particles in the $k$-th energy bin is given as follows:
\begin{eqnarray}
\frac{dN_{T,k}}{dE} = a_kE + b_k,
\end{eqnarray}
where $a_k$, $b_k$ and $N_{T,k}$ are the slope, the intercept and the total number of sample particles in the $k$-th energy bin, respectively.
We determine $a_k$ by the weighted average of two slopes $a_1$ and $a_2$,
\begin{eqnarray}
&& a_k = a_1\frac{E_{\nu,k+1}-E_{\nu m,k}}{E_{\nu,k}-E_{\nu,k-1}}
+ a_2 \frac{E_{\nu m,k}-E_{\nu,k-1}}{E_{\nu,k}-E_{\nu,k-1}}, \label{a_k} \\
&& a_1 = \frac{N_{T,k}/(E_{\nu,k+1}-E_{\nu,k})}{E_{\nu m,k+1} - E_{\nu m,k}}, \\
&& a_2 = \frac{N_{T,k}/(E_{\nu,k}-E_{\nu,k-1})}{E_{\nu m,k} - E_{\nu m,k-1}}, \ \ \ \
\end{eqnarray}
where $E_{\nu m,k}$ is the mid-point energy of the $k$-th energy bin, and obtain $b_k$ by solving the equation for $N_{T,k}$,
\begin{eqnarray}
N_{T,k} = \int_{E_{\nu,k-1}}^{E_{\nu,k}} \left(a_kE + b_k\right)\ dE, \label{N_TK}
\end{eqnarray}
for the second case, while we adopt eq.~(\ref{N_TK}) and the equation for the total energy of the $k$-th energy bin $E_{T,k}$,
\begin{eqnarray}
E_{T,k} = \int_{E_{\nu,k-1}}^{E_{\nu,k}} \left(a_kE + b_k\right)E\ dE,
\end{eqnarray}
to determine $a_k$ and $b_k$ in the third case.
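For concreteness, the linear+NEcons coefficients follow from a $2\times2$ linear system per bin, as in the sketch below (the variable names are ours); for linear+Ncons, by contrast, $a_k$ is fixed first by eq.~(\ref{a_k}) and only $b_k$ is obtained from eq.~(\ref{N_TK}):
\begin{verbatim}
import numpy as np

def linear_NEcons(E_lo, E_hi, N_k, E_Tk):
    # solve  N_k  = a (E_hi^2 - E_lo^2)/2 + b (E_hi - E_lo)
    #        E_Tk = a (E_hi^3 - E_lo^3)/3 + b (E_hi^2 - E_lo^2)/2
    A = np.array([[(E_hi ** 2 - E_lo ** 2) / 2.0, E_hi - E_lo],
                  [(E_hi ** 3 - E_lo ** 3) / 3.0,
                   (E_hi ** 2 - E_lo ** 2) / 2.0]])
    a_k, b_k = np.linalg.solve(A, np.array([N_k, E_Tk]))
    return a_k, b_k   # dN/dE = a_k * E + b_k within the bin
\end{verbatim}
In practice one would also have to guard against a negative $dN_{T,k}/dE$ at the bin edges, e.g., by clipping; this complication is omitted in the sketch.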
We introduce two energy grids with different numbers of cells: $N_{E_\nu}$ = 10 and 20 to cover the energy range of 0 -- 300 MeV.
Note that the latter is exactly the same as the energy grid employed in the Boltzmann solver by \cite{2019ApJS..240...38N}.
We focus on the spectra of $\nu_x$'s, which are affected most by the inclusion of nucleon recoils as shown in the previous sections.
The artificial re-distributions of sample particles in each energy bin are repeated on the time scale of a single time step of CCSN simulations to mimic their situation in the finite-difference method.
We adopt as a background the same hydrodynamical model as that employed in the previous sections (see Figure~\ref{hydro}) and deploy the same number of sample particles and use the same $dt_f$ as well.
We run the MC code, with the re-distribution implemented, starting from the spectrum obtained in the previous steady-state calculations.
The r1 set of neutrino reactions is adopted.
We take the average of the distribution function over 8,000 time steps after the steady-state is achieved.
Figure~\ref{check_inc} demonstrates the three different reconstructions of neutrino spectrum described above for the two energy grids with $N_{E_\nu}$ = 20 (top panel) and 10 (bottom panel).
The gray line is the original spectrum obtained by the MC calculation without re-distribution.
The lines with other colors denote the spectra reconstructed as explained above.
In the case of $N_{E_\nu}$ = 20, the linear+Ncons and linear+NEcons models give similar distributions (see the green and red lines), whereas they deviate more from each other for $N_{E_\nu}$ = 10.
This difference turns out to be important later.
Figure~\ref{compair_initial} shows the resultant steady-state spectra (upper half) and the deviations $\Delta$ from the original, supposedly correct ones (lower half) at $r$ = 20 (top), 60 (middle) and 100 km (bottom).
The color coding is the same as before.
In the case of $N_{E_\nu}$ = 20 presented in the left panels, we find that the flat re-distribution produces errors as large as $\sim$20\% near the average energy (see the orange lines).
This is because, thanks to the re-distribution, a larger number of sample particles can get across the boundaries of energy bins and move to the next cells, which may be regarded as an overestimation of the energy exchange via nucleon recoils.
In the two linear re-distribution models (the green and red lines) the error is reduced to a few\%.
We find smaller differences in the former model at lower and higher energies, whereas the latter model reproduces the peak of neutrino spectra better.
It is difficult to say which of the two is better from these results.
If we reduce the number of energy grids to $N_{E_\nu}$ = 10 (right panels), however, their results differ more from each other.
The error $\Delta$ in the linear+NEcons model increases but still stays within 10\% for almost all energies even at large radii.
The spectra for the linear+Ncons model, on the other hand, deviate from the correct ones by $\sim$ 20\%.
This difference is a consequence of the difference in the re-distributions, which we found becomes remarkable when the energy grid gets coarser.
Note that $N_{E_\nu}$ = 10 is not very low compared to that employed in current CCSNe simulations.
It is hence advisable to impose, if possible, energy conservation in reconstructing the neutrino distribution in each energy bin, in order to incorporate the effects of nucleon recoils in neutrino transport accurately, particularly when the energy resolution is not high.
This will be possible if not only the number but also the energy in each energy bin is stored in the transport.
\begin{figure}[htbp]
\epsscale{1.2}
\plotone{check_inc_all.eps}
\plotone{check_inc_all_10mesh.eps}
\caption{The energy spectra of $\nu_x$'s for $N_{E_\nu}$ = 20 (top) and 10 (bottom). The gray line is the original spectrum obtained without re-distribution, whereas the other lines correspond to the different re-distribution models: flat (orange), linear+Ncons (green) and linear+NEcons (red). \label{check_inc}}
\end{figure}
\begin{figure*}[htbp]
\epsscale{1.0}
\plottwo{compair_initial_all.eps}{compair_initial_all_10mesh.eps}
\caption{The energy spectra of $\nu_x$'s at $r$ = 20 (top), 60 (middle) and 100 km (bottom) obtained with three artificial neutrino re-distributions: flat (orange), linear+Ncons (green) and linear+NEcons (red). The left and right panels show the results with the number of grids $N_{E_\nu}$ = 20 and 10, respectively. The gray lines denote the original, supposedly correct neutrino spectra derived from the previous steady-state calculations without re-distribution. The relative errors $\Delta$ are shown for the same models in the lower half panels. \label{compair_initial}}
\end{figure*}
\section{Summary and discussions} \label{ch6}
The nucleon recoil in the neutrino-nucleon scattering is one of the important factors for the dynamics of supernova explosions and neutrino observations, and its effects have already been investigated in the literature.
In these studies the finite-difference method is normally adopted for neutrino transport.
In so doing, we cannot afford to deploy a sufficiently large number of energy bins needed to resolve the small energy exchange in the nucleon recoil.
In this paper we have performed neutrino transport calculations with our own MC code for a static matter background derived from a dynamical SN simulation to quantify the effects of the coarse energy grid and suggest a possible improvement in the sub-grid modeling.
We have first conducted two test calculations for the validation of our MC code.
We have compared steady-state solutions obtained with the MC code and those with our finite-difference Boltzmann solver, in which we employ a matter background computed from one of our recent CCSN simulations \citep{2019ApJS..240...38N}.
The nucleon recoil has been ignored in this comparison.
We have demonstrated that the two results are in excellent agreement with each other.
In order to confirm the detailed balance in our treatment of the nucleon recoil, we have done a one-zone calculation of the thermalization of the neutrino spectrum via the neutron scattering.
This is ensured by calculating the reaction rates only for $E_\nu \le E^\prime_\nu$ and deriving those for $E_\nu > E^\prime_\nu$ from them via the detailed balance relation.
We have confirmed indeed that the neutrino spectrum approaches a thermal distribution as expected.
We have then run the MC code to compute the thermalization of energy spectra as neutrinos propagate outwards in the post-shock region.
We have first studied the large proton mass limit of the proton scattering, in which it becomes iso-energetic, and have made clear three important effects of the recoil on its reaction rate: the broadening of neutrino spectra, the reduction of the cross section and the change of the angle dependence of the reaction rate.
We have then re-applied the MC code to the neutrino transport calculations on the same static matter background as that employed in the code validation but with the nucleon recoil being incorporated this time.
We have found a significant change in the spectra of $\nu_x$'s by the inclusion of the nucleon recoil.
High-energy $\nu_x$'s are depleted while low-energy ones are increased due to down-scatterings and their average energy is reduced by $\sim$15\%.
The spectra of $\nu_e$'s and $\bar{\nu}_e$'s, on the other hand, do not change much by the inclusion of the nucleon recoil.
These different responses to the nucleon recoil among different flavors of neutrinos are explained as follows.
The number of nucleon scatterings is smaller than that of electron captures by a factor $\sim$ 5 for $\nu_e$'s, whereas the dominant reaction for $\nu_x$'s is the nucleon scattering.
For $\bar{\nu}_e$'s the number of nucleon scatterings is larger than those of other reactions, which seems to contradict the result that the spectrum of $\bar{\nu}_e$'s is not changed by the nucleon recoil.
The reason is simply because the energy exchange in the nucleon scattering is much smaller.
Next, we have incorporated the electron/positron scattering in the MC code and compared the contributions to thermalization between the two scatterings.
The energy exchange per scattering for the electron/positron scattering is much larger than that for the nucleon scattering because of the smaller mass of the former, $m_e = 0.511$ MeV, whereas the cross section of the latter is larger than that of the former at $E_\nu \gtrsim$ a few MeV.
We have found that the accumulation of small recoils in the nucleon scattering is more important than a smaller number of large recoils in the electron/positron scattering in the thermalization of neutrinos at least for this particular model.
We have then conducted some experimental MC runs to investigate the implications for the numerical implementation of the nucleon recoil in the finite-difference transport schemes, which have been frequently employed in CCSN simulations.
The width of energy bins employed in these schemes is normally much larger than the typical energy exchanged via the nucleon recoil, and some sub-grid modeling is hence needed.
In order to mimic such situations, we have introduced energy grids in the experimental MC runs and artificially re-distributed sample particles repeatedly after a typical time interval in the CCSNe simulations.
We have considered three artificial distributions of neutrinos in each energy bin and referred to them as ``flat'', ``linear+Ncons'' and ``linear+NEcons''.
In this study, we have adopted two energy grids with different numbers of grid points: $N_{E_\nu}$ = 10 and 20.
Note that the latter grid is exactly the same as that employed in our axisymmetric CCSN simulations with the Boltzmann solver \citep{nagakura2018,2019ApJ...880L..28N}.
We have run the MC code with this re-distribution scheme implemented for the same matter background as that in the previous calculations without re-distributions.
We have found that the neutrino spectra in the flat model deviate from the correct ones by $\sim$20\% even at the higher energy resolution of $N_{E_\nu}$ = 20, whereas the difference is reduced to a few \% in the linear+Ncons and linear+NEcons models.
Both of the latter two models can reconstruct the original spectra equally accurately for $N_{E_\nu}$ = 20.
If we reduce the number of energy grid points to $N_{E_\nu}$ = 10, however, their results differ from each other.
Although the errors in the linear+NEcons model are still within 10\% at almost all energies even in the outer region, they rise up to $\sim$ 20\% in the linear+Ncons model.
Since the energy resolution typically employed in the finite-difference methods is rarely higher than the $N_{E_\nu}$ = 20 case in this paper, it is recommended to keep track of not only the number but also the energy in each energy bin and to use number and energy conservation to reconstruct the sub-grid distributions of neutrinos when dealing with the small energy exchange in the nucleon recoil.
Our next task is to actually implement these sub-grid models in the Boltzmann solver, which currently employs the reaction rate of \cite{1985ApJS...58..771B} for the nucleon scattering, and to perform CCSN simulations.
This will enable us to discuss the effects of the nucleon recoil, particularly its energy-resolution dependence, on the dynamics of explosion and PNS cooling quantitatively.
It should also be important from the observational point of view.
\acknowledgments{This work was partly supported by Research Fellowships of Japan Society for the Promotion of Science (JSPS). H.N. was supported by Princeton University through DOE SciDAC4 Grant DE-SC0018297 (subaward 00009650). Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan.}
\section{Introduction}\label{sec:introduction}
A brand new graph neural network named {\textsc{Graph-Bert}} (Graph based {\textsc{Bert}}) is introduced in \cite{zhang2020graph} for graph data representation learning. Different from conventional graph neural networks \cite{Kipf_Semi_CORR_16,Velickovic_Graph_ICLR_18,Li_Deeper_CORR_18,sun2019adagcn,DBLP:journals/corr/abs-1907-02586}, {\textsc{Graph-Bert}} uses linkless subgraph batching to redefine the conventional graph representation learning problem as target node instance representation learning within individual learning contexts. One of the great advantages of this new learning setting is that {\textsc{Graph-Bert}} can effectively get rid of many common learning effectiveness and efficiency problems of existing graph neural networks, e.g., suspended animation \cite{Zhang2019GResNetGR} and the difficulty of parallelization. It also enables the pre-training and fine-tuning of {\textsc{Graph-Bert}} across different learning tasks on the same graph dataset, which has transformative impacts on building functional model pipelines for graph learning.
In this paper, we will further explore the transfer of {\textsc{Graph-Bert}} across different graph datasets, which still remains a great challenge and an open problem to date. To be more precise, we propose to learn {\textsc{Graph-Bert}} with multiple different graph datasets, which have totally different properties, e.g., graph sizes, graph structures, input feature spaces and output label spaces. What is more, a {\textsc{Graph-Bert}} learned on one or several source graph dataset(s) can be further transferred as a pre-trained model to other graph dataset(s) suffering from a lack of training data. For each of these graph datasets, multiple different application tasks can also be studied concurrently, which may or may not have correlations with each other.
To address such a problem, a novel learning model, i.e., {\textsc{G5}}, will be introduced in this paper, where the five Gs correspond to the ``\underline{g}raph-to-\underline{g}raph transfer of a universal \textsc{\underline{G}raph-Bert} for \underline{g}raph representation learning across different \underline{g}raph datasets''. {\textsc{G5}} effectively extends the {\textsc{Graph-Bert}} model for the cross-graph representation learning, which brings about lots of new challenges and new opportunities at the same time.
On the one hand, to learn the {\textsc{G5}} model, we need to address many great challenges in handling the differences in graph properties and the different objectives of diverse learning tasks. To be more specific, {\textsc{G5}} introduces a pluggable model architecture: (a) each data source will be equipped with a unique input component for data pre-processing; (b) each output application task will also have a specific functional component for computing the output; and (c) all such diverse input and output components will be connected to a universal {\textsc{Graph-Bert}} core component in {\textsc{G5}} via an \textit{input size unification layer} \cite{zhang2020segmented} and an \textit{output representation fusion layer} \cite{zhang2020graph}, respectively.
On the other hand, in addition to building functional model pipelines across graphs for representation learning, a successfully learned {\textsc{G5}} will also allow us to explore some new yet challenging problems. Besides the model transfer to graph sources with limited training data, the architecture of {\textsc{G5}} also allows us to learn a supervised functional classifier for certain graph sources without any training data at all, which is formally named the \textit{Apocalypse Learning} (AL) problem in this paper. It should be easy to see that the \textit{apocalypse learning} task is different from the well-studied \textit{zero-shot learning} task \cite{NIPS2013_5027}. Here, we would like to further clearly illustrate their differences: (1) \textit{apocalypse learning} works with multiple datasets, whereas \textit{zero-shot learning} focuses on one dataset; (2) \textit{apocalypse learning} uses no training data in the target data source, whereas \textit{zero-shot learning} does use training data; and (3) \textit{apocalypse learning} requires no prior knowledge, whereas \textit{zero-shot learning} usually needs to know prior class representations or correlations in advance.
We summarize our contributions in this paper as follows:
\begin{itemize}
\item \textbf{A Universal GNN}: We introduce a new graph neural network model in this paper for multi-graph concurrent representation learning. To accommodate the diverse input and output configurations, as well as the differences in graph information distributions, {\textsc{G5}} introduces a pluggable model architecture which can be tied up with many different input and output components. All such diverse input and output components will be connected to a universal {\textsc{Graph-Bert}} core component in {\textsc{G5}} via the \textit{input size unification layer} and the \textit{output representation fusion layer}, respectively.
\item \textbf{Pre-Train \& Transfer \& Fine-Tune}: To learn various application task objectives, {\textsc{G5}} will be pre-trained on multiple graphs in a hybrid manner with multiple different learning tasks, which also define the output component pool involving various supervised and unsupervised learning tasks. Meanwhile, a pre-trained {\textsc{G5}} can also be transferred and applied to new graph data sources either directly or with necessary fine-tuning in a similarly hybrid manner. There are no specific correlation requirements on these fine-tuning tasks, which can actually be totally different from the pre-training tasks on the source graph(s).
\item \textbf{Apocalypse Learning}: Besides investigating the model transfer to graph sources with limited training data, in this paper we also introduce a new learning problem, i.e., \textit{apocalypse learning}, which aims to build a classifier on a certain target graph source without any training data at all. Based on the learning results of the hybrid tasks on other graph datasets, {\textsc{G5}} introduces two different strategies, i.e., Cross-Source Classification Consistency Maximization (CCCM) and Cross-Source Dynamic Routing (CDR), to reason about the labels in the target graph source.
\end{itemize}
The remaining parts of this paper are organized as follows. Definitions of several important terminologies and the formulation of the studied problem will be provided in Section~\ref{sec:formulate}. Detailed information about the {\textsc{G5}} model will be introduced in Section~\ref{sec:method}, and the two reasoning strategies to address the {apocalypse learning} problem will be discussed in Section~\ref{sec:analysis}. The effectiveness of {\textsc{G5}} will be tested in Section~\ref{sec:experiment}. Finally, we will introduce the related work in Section~\ref{sec:related_work} and conclude this paper in Section~\ref{sec:conclusion}.
\section{Notations, Terminology Definition and Problem Formulation}\label{sec:formulate}
In this section, we will first introduce the notations used in this paper. After that, we will provide the definitions of several important terminologies and the studied problem.
\subsection{Notations}\label{subsec:notation}
In the sequel of this paper, we will use the lower case letters (e.g., $x$) to represent scalars or mappings, lower case bold letters (e.g., $\mathbf{x}$) to denote column vectors, bold-face upper case letters (e.g., $\mathbf{X}$) to denote matrices, and upper case calligraphic letters (e.g., $\mathcal{X}$) to denote sets or high-order tensors. Given a matrix $\mathbf{X}$, we denote $\mathbf{X}(i,:)$ and $\mathbf{X}(:,j)$ as its $i_{th}$ row and $j_{th}$ column, respectively. The ($i_{th}$, $j_{th}$) entry of matrix $\mathbf{X}$ can be denoted as $\mathbf{X}(i,j)$. We use $\mathbf{X}^\top$ and $\mathbf{x}^\top$ to represent the transpose of matrix $\mathbf{X}$ and vector $\mathbf{x}$, respectively. For vector $\mathbf{x}$, we represent its $L_p$-norm as $\left\| \mathbf{x} \right\|_p = (\sum_i |\mathbf{x}(i)|^p)^{\frac{1}{p}}$. The Frobenius-norm of matrix $\mathbf{X}$ is represented as $\left\| \mathbf{X} \right\|_F = (\sum_{i,j} |\mathbf{X}(i,j)|^2)^{\frac{1}{2}}$. The element-wise product of vectors $\mathbf{x}$ and $\mathbf{y}$ of the same dimension is represented as $\mathbf{x} \otimes \mathbf{y}$, and their concatenation is represented as $\mathbf{x} \sqcup \mathbf{y}$.
\subsection{Terminology Definitions}\label{subsec:terminology_definition}
Several terminologies will be used in this paper to present the proposed method, which include \textit{graph}, \textit{multi-source graph set} and \textit{linkless subgraph}.
\begin{definition}
(Graph): Formally, we can represent the studied graph data as $G = (\mathcal{V}, \mathcal{E}, w, x, y)$, where $\mathcal{V}$ and $\mathcal{E}$ denote the sets of nodes and links, respectively. Mapping $w: \mathcal{E} \to \mathbbm{R}$ projects links to their weights; whereas mappings $x: \mathcal{V} \to \mathcal{X}$ and $y: \mathcal{V} \to \mathcal{Y}$ can project the nodes to their raw features and labels, respectively.
\end{definition}
Given a graph $G$, its size can be represented by the number of involved nodes, i.e., $|\mathcal{V}|$. Notations $\mathcal{X}$ and $\mathcal{Y}$ used in the above definition denote the feature space and label space, respectively. In this paper, they can also be represented as $\mathcal{X} = \mathbbm{R}^{d_x}$ and $\mathcal{Y} = \mathbbm{R}^{d_y}$ (with dimensions $d_x$ and $d_y$) for simplicity. For node $v_i$, we can also simplify its raw feature and label vector representations as $\mathbf{x}_i = x(v_i) \in \mathbbm{R}^{d_x \times 1}$ and $\mathbf{y}_i = y(v_i) \in \mathbbm{R}^{d_y \times 1}$. In this paper, we are studying the transfer of {\textsc{Graph-Bert}} across multiple graphs, and the studied graphs can be denoted as the multi-source graph set as follows.
\begin{definition}
(Multi-Source Graph Set): Formally, we can represent the set of $n$ different input graphs studied in this paper as $\mathcal{G} = \left\{G^{(1)}, G^{(2)}, \cdots, G^{(n)} \right\}$, some of which may have very limited or even no training data (i.e., labeled nodes). These $n$ input graphs can differ in many properties, e.g., graph sizes, graph structures, node feature spaces and label spaces.
\end{definition}
Given a node, e.g., $v_i^{(m)} \in \mathcal{V}^{(m)}$, in graph $G^{(m)} \in \mathcal{G}$, based on the approach introduced in \cite{zhang2020graph}, we can sample a unique linkless subgraph for it involving $v_i^{(m)}$ and its surrounding node context.
\begin{definition}
(Linkless Subgraph): Given an input graph $G^{(m)}$, we can denote the sampled linkless subgraph for each node $v_i^{(m)}$ in the graph as $g_i^{(m)} = (\mathcal{V}_i^{(m)}, \emptyset)$. Here, the node set $\mathcal{V}_i^{(m)} = \{v_i^{(m)}\} \cup \Gamma(v_i^{(m)}, k^{(m)})$ covers both $v_i$ and its top $k^{(m)}$ intimate nearby nodes, and the link set is empty. Furthermore, the batch of linkless subgraphs sampled for all the nodes in graph $G^{(m)}$ can be denoted as $\mathcal{G}^{(m)} = \left\{g_i^{(m)}\right\}_{v_i^{(m)} \in \mathcal{V}^{(m)}}$.
\end{definition}
Therefore, for all the graphs covered in $\mathcal{G}$, we can represent their sampled subgraph batches as $\left\{\mathcal{G}^{(1)}, \mathcal{G}^{(2)}, \cdots, \mathcal{G}^{(n)}\right\}$. According to the experimental studies provided in \cite{zhang2020graph}, different graphs may have different optimal parameters to control the sampled subgraph size, e.g., $k^{(m)}$ for graph $G^{(m)}$. Therefore, the subgraphs sampled in batch $\mathcal{G}^{(l)}$ will usually have different sizes from those in $\mathcal{G}^{(m)}, \forall l, m \in \{1, 2, \cdots, n\} \land l \neq m$.
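For readers unfamiliar with the sampling procedure, the following is a minimal sketch of intimacy-based linkless subgraph sampling in the spirit of \cite{zhang2020graph}; the damping factor value and all function names here are our own illustrative choices rather than fixed parts of the model:
\begin{verbatim}
import numpy as np

def sample_linkless_subgraphs(A, k, alpha=0.15):
    # A: (|V| x |V|) adjacency matrix; k: context size for this graph.
    # Intimacy matrix S = alpha * (I - (1 - alpha) * A_bar)^{-1},
    # with A_bar the column-normalized adjacency matrix.
    n = A.shape[0]
    col_sums = np.maximum(A.sum(axis=0, keepdims=True), 1.0)
    A_bar = A / col_sums
    S = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A_bar)
    subgraphs = []
    for i in range(n):
        scores = S[i].copy()
        scores[i] = -np.inf                  # exclude the target node itself
        context = np.argsort(-scores)[:k]    # top-k most intimate nodes
        subgraphs.append(np.concatenate([[i], context]))
    return subgraphs                         # links are simply discarded
\end{verbatim}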
\begin{figure*}[t]
\centering
\begin{minipage}{1.0\textwidth}
\includegraphics[width=\linewidth]{framework.pdf}
\end{minipage}
\caption{The Architecture of the {\textsc{G5}} Model.}
\label{fig:framework}
\end{figure*}
\subsection{Problem Formulation}\label{subsec:formulation}
Based on the above terminology definitions, we can define the problem studied in this paper as follows:
\noindent \textbf{Problem Statement}: Formally, given the multi-source graph set $\mathcal{G} = \{G^{(1)}, G^{(2)}, \cdots, G^{(n)}\}$ with $n$ different graphs, we aim to learn a shared representation mapping $f: \bigcup_{m=1}^n \mathcal{V}^{(m)} \to \mathbbm{R}^{d_h}$ that encodes the nodes of all these $n$ graphs concurrently. The learned node representations will be further utilized in various downstream application tasks for either pre-training or fine-tuning the model. Furthermore, depending on the learning settings, the mapping pre-trained on some graphs can also be transferred to other graphs with limited or even no supervision information, either directly or with necessary fine-tuning. In this way, it can hopefully help address the labeled data sparsity problem, or even the \textit{apocalypse learning} problem, for some input graph datasets.
\section{The Proposed Method}\label{sec:method}
In this section, we will introduce the {\textsc{G5}} model architecture in detail.
\subsection{The Key Challenges}
As introduced in Section~\ref{subsec:terminology_definition}, for the multi-source input graphs $\mathcal{G}$, a batch of linkless subgraphs can be sampled from them for target node representation learning. To enable the concurrent learning of the {\textsc{G5}} model with all these $n$ graphs in $\mathcal{G}$, several important differences among these graph datasets cannot be ignored:
\begin{itemize}
\item \textbf{Input Space Difference}: For any two nodes from two different graphs, their raw features can differ greatly in (1) data type: the feature vectors can be of totally different data types, e.g., image, text, or tags; (2) feature length: the vectors can have different lengths; (3) feature domain: features of the same type and dimension may still come from totally different domains and carry different information, e.g., medical images vs traffic images; and (4) feature distribution: even identical features in different graph sources may follow distinct distributions.
\item \textbf{Model Configuration Difference}: In addition to the aforementioned input feature space differences, different graph datasets may also favor very different configurations of the {\textsc{Graph-Bert}} component in {\textsc{G5}}. For instance, according to \cite{zhang2020graph}, the sampled subgraph size parameter $k$ can affect the learning performance of {\textsc{Graph-Bert}} considerably, and different graph datasets may prefer different values of $k$, which leads to different model configurations.
\item \textbf{Output Space Difference}: Meanwhile, the downstream application tasks to be studied in {\textsc{G5}} on the same/different graph datasets tend to have different output spaces, which may cast certain task-oriented requirements on the representation learning process. For instance, the node raw feature reconstruction and graph structure recovery tasks focus more on embedding node attributes and graph structures into the learned representations, respectively; whereas node classification aims at learning a classifier to project nodes to the label space instead.
\end{itemize}
To handle these differences properly, as illustrated in Figure~\ref{fig:framework}, we design the {\textsc{G5}} model with a pluggable architecture containing several key parts: (1) pluggable input dataset-wise processing components, (2) an input size unification interlayer, (3) the universal {\textsc{Graph-Bert}} model shared across graphs, (4) a representation fusion interlayer, (5) pluggable task-wise output components for each dataset, and (6) a reasoning component for \textit{apocalypse learning}. Each input graph has a unique input component that handles its initial embeddings based on its unique subgraph batch, which accommodates the input feature space differences and information distribution differences for {\textsc{G5}}. Meanwhile, each graph dataset will have several output components, since multiple pre-train/fine-tune tasks are studied concurrently, which handles the output space difference problem. The input size unification interlayer introduced in this paper effectively accommodates the configurations of diverse inputs from different datasets prior to feeding them into the universal {\textsc{Graph-Bert}} model; whereas the representation fusion interlayer aggregates the learned representations to generate the fused representations for the output components.
The {\textsc{G5}} model will be pre-trained on the graph datasets with sufficient supervision information, and can then be transferred, with fine-tuning, to graph datasets lacking enough labeled data. Furthermore, if a fine-tuning task on the target graph dataset contains no supervision information at all, the \textit{apocalypse learning} based component will be used for label reasoning. In this section, we will introduce the first five components of {\textsc{G5}}; the reasoning component for \textit{apocalypse learning} will be introduced in Section~\ref{sec:analysis} in detail.
\subsection{Input Accommodation Component}\label{subsec:input_component}
For presentation simplicity, in this part, we will first omit the graph superscript index in the notations. Formally, given the subgraph batch sampled from an input graph $G$, for a target node $v_i$ together with its learning context (of $k$ nodes), according to \cite{zhang2020graph}, we can represent its initial embedding vector as
\begin{equation}
\mathbf{h}_i^{(0)} = \mbox{Aggregate} \left( \mathbf{e}_i^{x}, \mathbf{e}_i^{r}, \mathbf{e}_i^{p}, \mathbf{e}_i^{d} \right),
\end{equation}
where $\mathbf{e}_i^{x}$, $\mathbf{e}_i^{r}$, $\mathbf{e}_i^{p}$ and $\mathbf{e}_i^{d} \in \mathbbm{R}^{(k+1)d_e \times 1 }$ denote the embedding vectors based on the raw features, WL based roles, relative positions and the hop-based distance as introduced in \cite{zhang2020graph}, respectively. In the notation, $k$ denotes the subgraph sampling parameter, and $d_e$ is the raw embedding feature vector dimension in the graph. The $\mbox{Aggregate}(\cdot)$ function aggregates the input vectors together and can be defined in different ways; in this paper, we follow the previous work and define it as simple vector summation.
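Under this summation choice, the initial embedding reduces to a one-line operation; the sketch below (in PyTorch, with illustrative names) makes this explicit:
\begin{verbatim}
import torch

def initial_embedding(e_x, e_r, e_p, e_d):
    # All four embeddings share the shape ((k+1)*d_e,);
    # Aggregate(.) is instantiated as plain element-wise summation.
    return e_x + e_r + e_p + e_d
\end{verbatim}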
It is easy to see that the raw embedding feature dimension, i.e., $d_e$, can differ from one graph dataset to another. The initial embedding features can also lie in different feature spaces for different graph datasets. Therefore, instead of directly feeding vector $\mathbf{h}_i^{(0)}$ to the universal {\textsc{Graph-Bert}} model, to accommodate the input feature space, {\textsc{G5}} introduces an input component for each graph dataset based on the graph-transformer to project the inputs to a shared feature space of dimension $d_h$ as follows:
\begin{equation}
\begin{cases}
\vspace{8pt}
\mathbf{H}_i^{(0)} & = \left[\mathbf{h}_i^{(0)}, \mathbf{h}_{i,1}^{(0)}, \cdots, \mathbf{h}_{i,k}^{(0)}\right]^\top ,\\
\mathbf{H}_i^{(l)} & = \mbox{G-Transformer} \left( \mathbf{H}_i^{(l-1)}\right), \forall l \in \{1, 2, \cdots, D\},
\end{cases}
\end{equation}
where $D$ denotes the input component depth and the nodes in set $\left\{v_{i,1}, v_{i,2}, \cdots, v_{i,k} \right\} = \Gamma(v_i, k)$ denote the learning context of $v_i$ in its sampled subgraph. Notation $\mbox{G-Transformer}(\cdot)$ denotes the graph-transformer layers consisting of both the transformer and graph residual terms as introduced in \cite{zhang2020graph}, which will also be defined in Equation~(\ref{equ:gtransformer}) in detail. Formally, the final representation matrix $\mathbf{H}_i^{(D)} \in \mathbbm{R}^{(k+1) \times d_h}$ for the subgraph $g_i$ will serve as the representation input of the subgraph to the follow-up universal {\textsc{Graph-Bert}} model.
According to the above descriptions, we can accommodate the input representations for all the sampled subgraphs from all the input graphs, i.e., $\mathcal{G} = \left\{G^{(1)}, G^{(2)}, \cdots, G^{(n)} \right\}$. For instance, by adding the graph index superscript back into the notations, we can represent the learned node representations from graph $G^{(m)}$ as $\left\{ \mathbf{H}_i^{(m, D^{(m)})} \right\}_{v_i^{(m)} \in \mathcal{V}^{(m)}}$, where $\mathbf{H}_i^{(m, D^{(m)})} \in \mathbbm{R}^{(k^{(m)}+1) \times d_h}$. One remark is in order: the input components for the different graphs in $\mathcal{G}$ do not share weight parameters, and they can also have different depths (i.e., $D^{(m)}$) depending on their unique requirements.
\subsection{Input Size Unification Interlayer}
As discussed in the previous subsection, differences in input feature dimension, feature domain and distribution can be effectively handled by the input components consisting of several graph-transformer layers. Meanwhile, it is easy to observe that the accommodated input representations for subgraphs from different graph sources still have different configurations, since the subgraph size parameters used for them are usually different, i.e., $k^{(l)} \neq k^{(m)}$ for $G^{(l)}, G^{(m)} \in \mathcal{G}$. Therefore, prior to feeding them to the universal {\textsc{Graph-Bert}} model, we introduce one more layer, called the \textit{input size unification interlayer}, to accommodate the input subgraph representation sizes from different graph datasets. Different input size unification approaches can be used, e.g., the \textit{full-input strategy}, \textit{padding/pruning strategy} and \textit{segment shifting strategy} introduced in \cite{zhang2020segmented}. We take the \textit{padding/pruning strategy} as an example here, but the other two strategies can be used as well depending on the specific learning settings.
Formally, we can denote the dimension of the inputs for the universal {\textsc{Graph-Bert}} model (to be introduced in the next subsection) as $\mathbbm{R}^{(k+1) \times d_h}$, where the parameter $k$ without superscript denotes the objective subgraph node context size desired by the universal {\textsc{Graph-Bert}} model. Meanwhile, depending on the input subgraph representations and their subgraph size parameters $k^{(m)}$ for graph $G^{(m)}$, the \textit{padding/pruning strategy} based size unification layer will handle them as follows:
\begin{itemize}
\item \textit{Pruning}: If $k^{(m)} > k$, the input has more feature entries than the universal {\textsc{Graph-Bert}} model can handle. Therefore, the size unification layer will prune the last $k^{(m)} - k$ vector entries from the input, which correspond to the context nodes least relevant to the target node.
\item \textit{No Action}: If $k^{(m)} = k$, the inputs can be handled by the universal {\textsc{Graph-Bert}} directly and no action needs to be performed at the size unification layer.
\item \textit{Padding}: If $k^{(m)} < k$, dummy vectors need to be padded to the inputs to increase the involved subgraph node number from $k^{(m)}$ to $k$. We use zero padding for simplicity in this paper, which does not dramatically affect the learning results according to \cite{zhang2020segmented}, although it can introduce extra learning time costs.
\end{itemize}
Formally, given the input representation matrix $\mathbf{H}_i^{(m,D^{(m)})}$ learned for subgraph $g_i^{(m)}$ from graph $G^{(m)}$, we can denote its size-unified output representations as
\begin{equation}
\mathbf{Z}_i^{(m, 0)} = \mbox{Unify} \left( \mathbf{H}_i^{(m,D^{(m)})} \right) \in \mathbbm{R}^{(k+1) \times d_h}.
\end{equation}
Similar operations can be applied to all the remaining subgraphs sampled from all these $n$ input graph datasets.
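A minimal sketch of the $\mbox{Unify}(\cdot)$ operator under the padding/pruning strategy is given below (PyTorch, illustrative names; rows are assumed to be ordered by decreasing intimacy to the target node, as produced by the sampling step):
\begin{verbatim}
import torch

def unify(H, k):
    # H: ((k_m + 1) x d_h) accommodated subgraph representation.
    # Returns a ((k + 1) x d_h) matrix matching the universal portal size.
    rows, d_h = H.shape
    target = k + 1
    if rows > target:      # pruning: drop the least intimate context nodes
        return H[:target]
    if rows < target:      # padding: append zero-valued dummy rows
        pad = torch.zeros(target - rows, d_h, dtype=H.dtype)
        return torch.cat([H, pad], dim=0)
    return H               # no action needed
\end{verbatim}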
\subsection{Universal {\textsc{Graph-Bert}}}\label{subsec:graph_bert}
The universal {\textsc{Graph-Bert}} model is shared for all the input graph datasets, which can learn the representations based on the inputs iteratively with several layers. Here, we can denote the inputs to {\textsc{Graph-Bert}} from the \textit{input size unification layer} as $\mathbf{Z}^{(0)} \in \mathbbm{R}^{(k+1) \times d_h}$ without indicating its node index or the graph index in the subscript/superscript. The representation learning component in {\textsc{Graph-Bert}} also contains several layers of the graph-transformers. Formally, at the $l_{th}$ layer, we can represent the learned representation as follows:
\begin{equation}\label{equ:gtransformer}
\begin{aligned}
\hspace{-5pt} &{\mathbf{Z}}^{(l)} = \mbox{G-Transformer} \left( \mathbf{Z}^{(l-1)}\right)\\
&= \mbox{softmax} \left(\frac{\mathbf{Q}^{(l)} (\mathbf{K}^{(l)})^\top}{\sqrt{d_h}} \right) \mathbf{V}^{(l)} + \mbox{G-Res} \left( \mathbf{Z}^{(l-1)}, \mathbf{X}\right),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{cases}
\vspace{3pt}
\mathbf{Q}^{(l)} & = \mathbf{Z}^{(l-1)} \mathbf{W}_Q^{(l)},\\
\vspace{3pt}
\mathbf{K}^{(l)} & = \mathbf{Z}^{(l-1)} \mathbf{W}_K^{(l)},\\
\vspace{3pt}
\mathbf{V}^{(l)} & = \mathbf{Z}^{(l-1)} \mathbf{W}_V^{(l)}.\\
\end{cases}
\end{equation}
In the above equation, $\mathbf{W}_Q^{(l)}, \mathbf{W}_K^{(l)}, \mathbf{W}_V^{(l)} \in \mathbbm{R}^{d_h \times d_h}$ denote the learnable variables in the $l_{th}$ layer. In this paper, to simplify the presentation and notations, the hidden representations at different layers in the universal {\textsc{Graph-Bert}} are assumed to have the identical dimension $d_h$ by default. Notation $\mbox{G-Res} \left( \mathbf{Z}^{(l-1)}, \mathbf{X}\right)$ denotes the graph residual term introduced in \cite{Zhang2019GResNetGR}, where $\mathbf{X}$ contains the raw features of all nodes in the subgraph. For both the shared universal {\textsc{Graph-Bert}} component and the individual graph input components introduced in Section~\ref{subsec:input_component}, we use the ``\textit{graph-raw}'' residual term in this paper by default. The universal {\textsc{Graph-Bert}} component involved in {\textsc{G5}} contains $D$ layers, and we denote the output of the $D_{th}$ layer as $\mathbf{Z}^{(D)} \in \mathbbm{R}^{(k+1) \times d_h}$.
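For concreteness, a single graph-transformer layer as in Equation~(\ref{equ:gtransformer}) can be sketched as follows (PyTorch; a single-head, bias-free simplification with illustrative names, where the exact form of the graph residual term is passed in and follows \cite{Zhang2019GResNetGR}):
\begin{verbatim}
import torch
import torch.nn as nn

class GTransformerLayer(nn.Module):
    def __init__(self, d_h):
        super().__init__()
        self.W_Q = nn.Linear(d_h, d_h, bias=False)
        self.W_K = nn.Linear(d_h, d_h, bias=False)
        self.W_V = nn.Linear(d_h, d_h, bias=False)
        self.d_h = d_h

    def forward(self, Z, g_res):
        # Z: ((k+1) x d_h) hidden states; g_res: graph residual term G-Res.
        Q, K, V = self.W_Q(Z), self.W_K(Z), self.W_V(Z)
        attn = torch.softmax(Q @ K.transpose(-2, -1) / self.d_h ** 0.5,
                             dim=-1)
        return attn @ V + g_res
\end{verbatim}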
\subsection{Output Representation Fusion Interlayer}
As illustrated in Figure~\ref{fig:framework}, one more fusion layer is stacked on the universal {\textsc{Graph-Bert}} model to fuse such learned representations to define the ultimate representation vector of the target node, which can be denoted as:
\begin{equation}
{\mathbf{z}}= \mbox{Fusion} \left( \mathbf{Z}^{(D)} \right) = \frac{1}{k+1} \sum_{i=1}^{k+1} \mathbf{Z}^{(D)}(i,:).
\end{equation}
Many advanced fusion strategies could also be used here, e.g., fusion with further node selection or weighted fusion based on certain attention scores. However, in this paper, we do not explore them, and a simple averaging function across all the nodes in the sampled subgraph is used to define the above fusion component. Based on the above descriptions, by bringing the node and graph index subscript/superscript back, we can represent the output representations of all the nodes in graph $G^{(m)}$ by the universal {\textsc{Graph-Bert}} component as $\left\{\mathbf{z}_i^{(m)} \right\}_{v_i^{(m)} \in \mathcal{V}^{(m)}}$, which will be fed to the following functional components to study various downstream application tasks.
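In code, the averaging fusion is essentially a one-liner (PyTorch; illustrative):
\begin{verbatim}
def fuse(Z):
    # Z: ((k+1) x d_h) output of the universal component.
    return Z.mean(dim=0)   # fused target-node representation z
\end{verbatim}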
\subsection{Output Application Components}\label{subsec:hybrid_training}
To learn such representations together with the model variables, a suitable optimization objective function is needed. In this paper, we introduce a hybrid learning task combo by following \cite{zhang2020graph}, which covers \textit{unsupervised node attribute reconstruction}, \textit{unsupervised graph structure recovery} and \textit{supervised node classification}.
\begin{itemize}
\item \textbf{Node Attribute Reconstruction}: Based on the learned node representations, via several fully connected layers (with necessary activation functions), we will be able to project the learned representation vectors to their raw features, i.e., the node raw attribute reconstruction. By minimizing the difference between nodes' original raw attributes versus the reconstructed ones, we will be able to learn the {\textsc{G5}} model.
\item \textbf{Graph Structure Recovery}: Given any two nodes from the same graph, based on their learned representations, via either fully connected layers or simple similarity metrics, we can project the node pair representation vectors to their corresponding link labels or similarity scores. By minimizing the differences between the learned link scores and the graph link ground truth, {\textsc{G5}} can also be effectively learned.
\item \textbf{Node Classification}: In some cases, the nodes are also attached with labels denoting their categories or certain properties. Based on the nodes' representations, we can project them to their desired labels via fully connected layers (with necessary activation functions). By comparing the predicted node labels against the ground truth label vectors, we can learn the {\textsc{G5}} model.
\end{itemize}
Considering that different graph datasets and different application tasks may require different learning parameter settings, instead of summing all the loss terms together for optimization, we introduce an iterative training mechanism for {\textsc{G5}} with the hybrid application tasks on all these multi-source graph inputs. To be more specific, for each graph source and in each iteration, we train {\textsc{G5}} for a number of epochs on the node attribute reconstruction task, then for several epochs on the graph structure recovery task, and finally for a certain number of epochs on the node classification task (the specific epoch numbers differ across graph datasets, and will be introduced in Section~\ref{subsec:parameter_setting} in detail). Such an iterative training process continues for several rounds on the graph source until no dramatic changes are observed as we shift between different learning tasks. After such a process, the {\textsc{G5}} model can be transferred and applied to certain graph sources for necessary fine-tuning.
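The following sketch summarizes this iterative hybrid training schedule (illustrative Python; \texttt{task\_loss} and \texttt{step} are placeholder hooks standing in for the actual loss computation and optimizer update, and the epoch budgets shown are hypothetical):
\begin{verbatim}
def pretrain_g5(model, graphs, rounds, epochs):
    # epochs[m][task]: per-source, per-task epoch budgets, e.g.
    # {"attr_rec": 50, "struct_rec": 50, "node_cls": 150}.
    for _ in range(rounds):              # repeat until losses stabilize
        for m, g in enumerate(graphs):   # one graph source at a time
            for task in ("attr_rec", "struct_rec", "node_cls"):
                for _ in range(epochs[m][task]):
                    loss = model.task_loss(g, source=m, task=task)
                    loss.backward()
                    model.step()         # optimizer update + grad reset
\end{verbatim}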
\begin{figure}[t]
\centering
\begin{minipage}{.48\textwidth}
\includegraphics[width=\linewidth]{reasoning_1.pdf}
\end{minipage}%
\caption{The reasoning process based on CCCM.}
\label{fig:reasoning1}
\end{figure}
\section{{\textsc{G5}} based Apocalypse Learning}\label{sec:analysis}
In this section, we study a special and novel learning task, i.e., the \textit{apocalypse learning} problem, which aims at learning a classifier without using any labeled data. Such an open problem was intractable before, but the {\textsc{G5}} model provides us with an opportunity to explore it in this paper. In this part, we introduce two different learning strategies, i.e., \textit{Cross-Source Classification Consistency Maximization} (CCCM) and \textit{Cross-Source Dynamic Routing} (CDR), to reason about the potential labels of the nodes in an input graph lacking supervision information.
\subsection{Reasoning Strategy \# 1: CCCM}
One approach proposed in this paper for potential label reasoning for nodes in a graph without supervision information is called \textit{cross-source classification consistency maximization} (CCCM). Formally, as illustrated in Figure~\ref{fig:reasoning1}, let us take a target graph $G^{(m)}$ as an example, which contains no node labels, and on which we study the \textit{node classification} task. Given the {\textsc{G5}} model pre-trained with several other graph datasets (containing supervised application functional components), via necessary fine-tuning with the other unsupervised learning tasks on $G^{(m)}$, e.g., \textit{node attribute reconstruction} and \textit{graph structure recovery}, we can still learn the representations of the nodes in the graph with {\textsc{G5}}, which can be represented as $\left\{\mathbf{z}_i^{(m)} \right\}_{v_i^{(m)} \in \mathcal{V}^{(m)}}$. Furthermore, for node $v_i^{(m)}$ with representation $\mathbf{z}_i^{(m)}$, via several fully connected layers, we can represent the node's label as
\begin{equation}\label{equ:label_inference}
\bar{\mathbf{y}}_i^{(m)} = \mbox{softmax} \left( \mbox{FC}^{(m)} \left( \mathbf{z}_i^{(m)} \right) \right).
\end{equation}
According to the previous descriptions, with the input processing components for each dataset, the learned node representations from different graphs actually lie in an identical feature space. Based on this intuition, via the classification components learned for the other graph datasets, e.g., $G^{(l)}$, given the node representation $\mathbf{z}_i^{(m)}$, we can also define the labels inferred by {\textsc{G5}} directly as $\left\{ \bar{\mathbf{y}}_i^{(l)} \right\}_{l = 1 \land l \neq m}^{n}$, where
\begin{equation}
\bar{\mathbf{y}}_i^{(l)} = \mbox{softmax} \left( \mbox{FC}^{(l)} \left( {\mathbf{z}}_i^{(m)} \right) \right).
\end{equation}
Meanwhile, based on the inferred label vector $\bar{\mathbf{y}}_i^{(m)}$, we propose to project it to the other graph datasets via several FC layers, and the projected label vectors in the other datasets can be denoted as $\left\{ \hat{\mathbf{y}}_i^{(l)} \right\}_{l = 1 \land l \neq m}^{n}$, where
\begin{equation}
\hat{\mathbf{y}}_i^{(l)} = \mbox{softmax} \left( \mbox{FC}^{(m \to l)} \left( \bar{\mathbf{y}}_i^{(m)} \right) \right).
\end{equation}
In this paper, we assume that the learned class labels of a node should carry consistent information across all these different graphs, since they are learned within the identical framework. Therefore, to learn the node label vectors in graph $G^{(m)}$ as well as the involved fully connected layers, we propose to minimize the following classification consistency loss term:
\begin{equation}
\min \sum_{v_i^{(m)} \in \mathcal{V}^{(m)}} \sum_{l = 1 \land l \neq m}^n \left\| \bar{\mathbf{y}}_i^{(l)} - \hat{\mathbf{y}}_i^{(l)} \right\|_2.
\end{equation}
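The CCCM objective can be sketched compactly as follows (PyTorch; all module names, e.g., \texttt{fc\_m}, \texttt{fc\_list} and \texttt{proj\_list}, are illustrative placeholders for the fully connected layers described above):
\begin{verbatim}
import torch

def cccm_loss(z, fc_m, fc_list, proj_list):
    # z: representation of one node in the target graph G^(m);
    # fc_m: target-graph classifier; fc_list[l]: source-l classifier;
    # proj_list[l]: FC projection from G^(m)'s label space to source l's.
    y_bar_m = torch.softmax(fc_m(z), dim=-1)
    loss = 0.0
    for fc_l, proj_l in zip(fc_list, proj_list):
        y_bar_l = torch.softmax(fc_l(z), dim=-1)          # source-l view
        y_hat_l = torch.softmax(proj_l(y_bar_m), dim=-1)  # projected view
        loss = loss + torch.norm(y_bar_l - y_hat_l, p=2)
    return loss
\end{verbatim}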
\subsection{Reasoning Strategy \# 2: CDR}
The CCCM approach needs to learn several fully connected layers for node label reasoning, based on the assumption that classification results are consistent across graphs for common representation inputs. In this part, we introduce another reasoning approach, based on the dynamic routing algorithm instead, which works very differently. Formally, for any node $v_i^{(m)}$ in graph $G^{(m)}$, we can denote its representation in $G^{(m)}$ as the vector $\mathbf{z}_i^{(m)}$. Furthermore, by feeding $\mathbf{z}_i^{(m)}$ as the input to the classifiers of the other graph sources, we can represent their learned label vectors as $\left\{ \bar{\mathbf{y}}_i^{(l)} \right\}_{l = 1 \land l \neq m}^{n}$, respectively. The \textit{cross-source dynamic routing} (CDR) approach reasons about the node labels in $G^{(m)}$ iteratively as follows:
\begin{equation}
\begin{cases}
\mathbf{c}_i &= \mbox{softmax} \left( \mathbf{b}_i \right), \\
\mathbf{u}_i^{(l \to m)} &= \mathbf{W}^{(l \to m)} \bar{\mathbf{y}}_i^{(l)} ,\\
\mathbf{s}_i &= \sum_{l} \mathbf{c}_i(l) \mathbf{u}_i^{(l \to m)}, \\
\mathbf{v}_i &= \frac{\left\| \mathbf{s}_i \right\|^2}{1+\left\| \mathbf{s}_i \right\|^2} \frac{\mathbf{s}_i}{ \left\| \mathbf{s}_i \right\|},\\
\mathbf{b}_i(l) &= \mathbf{b}_i(l) + \mathbf{v}_i^\top \mathbf{u}_i^{(l \to m)}.
\end{cases}
\end{equation}
where $\mathbf{W}^{(l \to m)} \in \mathbbm{R}^{d_y^{(m)} \times d_y^{(l)}}$ denotes the label vector dimension adjustment variable between graphs $G^{(l)}$ and $G^{(m)}$. Formally, the vector $\mathbf{v}_i$ output by this process represents the reasoned label vector of node $v_i^{(m)}$. By minimizing its difference from the label inferred by {\textsc{G5}}, i.e., as defined in Equation~(\ref{equ:label_inference}), we can represent the introduced reasoning loss function as follows:
\begin{equation}
\min \sum_{v_i^{(m)} \in \mathcal{V}^{(m)}} \left\| \bar{\mathbf{y}}_i^{(m)} - {\mathbf{v}}_i \right\|_2.
\end{equation}
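A compact sketch of the routing iteration is given below (PyTorch; the number of routing iterations is our own illustrative choice, as it is not fixed by the equations above):
\begin{verbatim}
import torch

def cdr_route(y_bars, W_list, iterations=3):
    # y_bars[l]: label vector inferred by source-l's classifier;
    # W_list[l]: dimension-adjustment matrix W^(l->m).
    u = [W @ y for W, y in zip(W_list, y_bars)]    # u_i^(l->m)
    b = torch.zeros(len(u))
    for _ in range(iterations):
        c = torch.softmax(b, dim=0)                # routing coefficients
        s = sum(c[l] * u[l] for l in range(len(u)))
        n2 = s.norm() ** 2
        v = (n2 / (1 + n2)) * (s / s.norm())       # squashing function
        b = b + torch.stack([v @ u_l for u_l in u])
    return v                                       # reasoned label vector
\end{verbatim}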
More information about the experimental studies of these two \textit{apocalypse learning} oriented reasoning strategies will be provided in the following section in detail.
\section{Experiments}\label{sec:experiment}
To test the effectiveness of {\textsc{G5}} on graph representation learning, in this section, we report some preliminary experimental results of {\textsc{G5}} obtained on three real-world benchmark graph datasets. More experimental results will be provided in the follow-up updated version of this paper.
\subsection{Dataset and Learning Settings}\label{subsec:parameter_setting}
The graph benchmark datasets used in the experiments include Cora, Citeseer and Pubmed \cite{YCS16}, which are used in most of the recent state-of-the-art graph neural network research works \cite{Kipf_Semi_CORR_16,Velickovic_Graph_ICLR_18,Li_Deeper_CORR_18,sun2019adagcn,DBLP:journals/corr/abs-1907-02586,Zhang2019GResNetGR}. For fair comparison, the experimental settings, e.g., the train/validation/test set partition, are identical to those in these existing research papers. Based on the input graph data, we first pre-compute the node intimacy scores, based on which subgraph batches are sampled subject to the subgraph size $k$ for each dataset. In addition, we also pre-compute the node pairwise hop distances and WL node codes. In this paper, we aim to examine the transfer of the universal {\textsc{Graph-Bert}} across different graph datasets based on the {\textsc{G5}} framework. Considering that different datasets have different learning settings, instead of summing the loss functions of all the datasets, we propose to train {\textsc{G5}} with multiple graph datasets iteratively. To be more specific, the pre-training of {\textsc{G5}} lasts for several iterations. In each iteration, we train the corresponding components in {\textsc{G5}} with Cora, Citeseer and Pubmed sequentially, subject to their unique parameter settings shown below. The default evaluation metric used in the experiments is Accuracy.
\noindent \textbf{Default Parameter Settings}: If not clearly specified, the results reported in this paper are based on the following parameter settings of {\textsc{G5}}: \textit{subgraph size}: $k=7$ (Cora), $k=5$ (Citeseer), $k=30$ (Pubmed); \textit{hidden size}: 32; \textit{attention head number}: 2; \textit{hidden layer number}: $D=2$; \textit{learning rate}: 0.01 (Cora), 0.001 (Citeseer), 0.001 (Pubmed); \textit{weight decay}: $5e^{-4}$; \textit{intermediate size}: 32; \textit{hidden dropout rate}: 0.5; \textit{attention dropout rate}: 0.3; \textit{graph residual term}: graph-raw; \textit{optimizer}: Adam; \textit{training epoch}: 150 (Cora), 500 (Pubmed), 2000 (Citeseer). For the universal {\textsc{Graph-Bert}}, we evaluate the learning performance by changing its parameter $k$ over the values $\{5, 7, 15, 30\}$ in the experiments, where $5$, $7$ and $30$ are the optimal parameters for Citeseer, Cora and Pubmed, respectively, and the value $15$ balances among all the datasets.
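For reference, these default settings can be collected into a single configuration sketch (an illustrative Python encoding of the values listed above):
\begin{verbatim}
G5_DEFAULTS = {
    "subgraph_size_k": {"cora": 7, "citeseer": 5, "pubmed": 30},
    "hidden_size": 32, "attention_heads": 2, "num_layers": 2,
    "learning_rate": {"cora": 0.01, "citeseer": 0.001, "pubmed": 0.001},
    "weight_decay": 5e-4, "intermediate_size": 32,
    "hidden_dropout": 0.5, "attention_dropout": 0.3,
    "graph_residual": "graph-raw", "optimizer": "Adam",
    "train_epochs": {"cora": 150, "citeseer": 2000, "pubmed": 500},
}
\end{verbatim}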
\noindent \textbf{Experiment Organization}: We intend to use the experiments to answer several questions that readers may have in mind:
\begin{itemize}
\item \textbf{Q1}: Can {\textsc{G5}} still work well for isolated graph input?
\item \textbf{Q2}: Can {\textsc{G5}} be applied to multiple graph inputs, all of which have abundant training data?
\item \textbf{Q3}: How will the pre-trained {\textsc{G5}} perform when being transferred to target graphs lacking enough training data?
\item \textbf{Q4}: How is the learning performance of two reasoning strategies in {\textsc{G5}} on addressing the apocalypse learning task?
\end{itemize}
The following experiments are designed to address these questions specifically.
\subsection{Isolated {\textsc{G5}} on Node Classification}
\begin{table}[t]
\caption{Learning performance of {\textsc{G5}} compared against existing baseline methods on node classification. The results of {\textsc{G5}} reported here denote the best observed scores obtained on each dataset in the isolated mode.}\label{tab:isolated_learning}
\centering
\begin{tabular}{l c c c c }
\toprule
\multirow{2}{*}{Methods} & \multicolumn{3}{c}{Datasets (Accuracy)} \\
\cline{2-4}
\addlinespace[0.05cm]
& \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\
\hline
\addlinespace[0.05cm]
{LP (\cite{ZGL03}) } &0.680 &0.453 &0.630 \\
{ICA (\cite{LG03})} &0.751 &0.691 &0.739 \\
{ManiReg (\cite{BNS06})} &0.595 &0.601 &0.707 \\
{SemiEmb (\cite{WRC08})} &0.590 &0.596 &0.711 \\
\hline
\addlinespace[0.05cm]
{DeepWalk (\cite{PAS14})} &0.672 &0.432 &0.653 \\
{Planetoid (\cite{YCS16})} &0.757 &0.647 &0.772 \\
{MoNet (\cite{MBMRSB16})} &0.817 &- &0.788 \\
\hline
\addlinespace[0.05cm]
{{\textsc{GCN}} (\cite{Kipf_Semi_CORR_16})} &0.815 &0.703 &\textbf{0.790} \\
{{\textsc{GAT}} (\cite{Velickovic_Graph_ICLR_18})} &\textbf{0.830} &\textbf{0.725} &\textbf{0.790} \\
{{\textsc{LoopyNet}} (\cite{loopynet})} &{0.826} &\textbf{0.716} &\textbf{0.792} \\
\hline
\addlinespace[0.05cm]
{\textsc{Graph-Bert}} (\cite{zhang2020graph}) &\textbf{0.843} &{0.712} &\textbf{0.793} \\
\hline
\addlinespace[0.05cm]
\multirow{2}{*}{{\textsc{G5}} (isolated)}
&\textbf{0.841} &\textbf{0.715} &{0.789} \\
\addlinespace[-0.05cm]
&($k=7$)&($k=5$)&($k=30$)\\
\bottomrule
\end{tabular}
\end{table}
Prior to showing the learning performance of {\textsc{G5}} across multiple graph datasets, we first provide in Table~\ref{tab:isolated_learning} the learning results of {\textsc{G5}} on node classification for each graph dataset in an isolated learning mode. The isolated version of {\textsc{G5}} is very similar to the {\textsc{Graph-Bert}} model studied in \cite{zhang2020graph}, except that {\textsc{G5}} has two more graph-transformer layers (i.e., the input processing component for each dataset) besides the shared universal {\textsc{Graph-Bert}} component. To make the comparison more complete, in addition to {\textsc{Graph-Bert}} \cite{zhang2020graph}, we also provide the learning results of several classic node classification methods, e.g., LP \cite{ZGL03}, ICA \cite{LG03}, ManiReg \cite{BNS06}, SemiEmb \cite{WRC08}, recent graph embedding methods, DeepWalk \cite{PAS14}, Planetoid \cite{YCS16}, MoNet \cite{MBMRSB16}, and the latest graph representation learning approaches, e.g., {\textsc{GCN}} \cite{Kipf_Semi_CORR_16}, {\textsc{GAT}} \cite{Velickovic_Graph_ICLR_18}, {\textsc{LoopyNet}} \cite{loopynet}. According to the results, the scores achieved by {\textsc{G5}} are very close to those of {\textsc{Graph-Bert}}, and much higher than the scores obtained by the other baseline methods.
\subsection{Results of {\textsc{G5}} on Mixed Graph Input}
\begin{table}[t]
\caption{Learning performance of {\textsc{G5}} with a mixed pre-training for node classification on multiple input graph datasets. Parameter $k$ denotes the input portal size of the universal {\textsc{Graph-Bert}} component.}\label{tab:mixed_learning}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c}{{Input Graphs} \& $k$ } & \multicolumn{3}{|c|}{Datasets (Accuracy)} \\
\hline
Graphs & $k$ & {\textbf{Cora}} & {\textbf{Citeseer}} & {\textbf{Pubmed}} \\
\hline
\hline
\multirow{4}{*}{Cora \& Citeseer}
&5 &{0.834} &{0.707} &$-$ \\
\cline{2-5}
&7 &\textbf{0.835} &\textbf{0.717} &$-$ \\
\cline{2-5}
&15 &0.828 &{0.702} &$-$ \\
\cline{2-5}
&30&{0.822} &{0.698} &$-$ \\
\hline
\hline
\multirow{4}{*}{Cora \& Pubmed}
&5 &\textbf{0.832} &$-$ &{0.772} \\
\cline{2-5}
&7 &0.828 &$-$ &{0.766} \\
\cline{2-5}
&15 &0.829 &$-$ &{0.782} \\
\cline{2-5}
&30 &{0.816} &$-$ &\textbf{0.791} \\
\hline
\hline
\multirow{4}{*}{Citeseer \& Pubmed}
&5 &$-$ &\textbf{0.705} &{0.772} \\
\cline{2-5}
&7 &$-$ &0.702 &{0.773} \\
\cline{2-5}
&15 &$-$ &0.683 &{0.787} \\
\cline{2-5}
&30&$-$ &{0.675} &\textbf{0.782} \\
\hline
\end{tabular}
\end{table}
\begin{table*}[t]
\caption{Learning performance of {\textsc{G5}} with model transfer. The source graphs are used for {\textsc{G5}} pre-training, and the target graph is used for {\textsc{G5}} evaluation with necessary fine-tuning. We focus on studying the effectiveness of transferring {\textsc{G5}} to a target graph with sparse training data, where the training data sampling ratio denotes the percentage of training data used for model fine-tuning. For comparison, we also report the learning performance of {\textsc{G5}} without any pre-training in the table.}\label{tab:transfer}
\centering
\setlength{\tabcolsep}{8pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c}{\textbf{Source Graph(s)} \& \textbf{Target Graph} \& $k$ } & \multicolumn{10}{|c|}{\textbf{Training Data Sampling Ratio (Accuracy)}} \\
\hline
\textbf{Source(s)} & \textbf{Target} & $k$ & \textbf{{5\%}} & \textbf{{10\%}} & \textbf{{15\%}} & \textbf{{20\%}} & \textbf{{25\%}} & \textbf{{30\%}} & \textbf{{35\%}} & \textbf{{40\%}} & \textbf{{45\%}} & \textbf{{50\%}} \\
\hline
\hline
\multirow{2}{*}{Cora}
&Citeseer &15 &0.418 &0.569 &0.541 &0.546 &0.557 &0.600 &0.593 &0.607 &0.623 &0.661 \\
\cline{2-13}
&Pubmed &15 &0.530 &0.649 &0.669 &0.692 &0.692 &0.687 &0.692 &0.697 &0.710 &0.743 \\
\hline
\hline
\multirow{2}{*}{Citeseer}
&Cora &15 &0.262 &0.420 &0.546 &0.619 &0.684 &0.662 &0.706 &0.727 &0.729 &0.748 \\
\cline{2-13}
&Pubmed &15 &0.524 &0.692 &0.697 &0.682 &0.723 &0.717 &0.736 &0.744 &0.740 &0.741 \\
\hline
\hline
\multirow{2}{*}{Pubmed}
&Cora &15 &0.317 &0.405 &0.551 &0.559 &0.740 &0.753 &0.747 &0.759 &0.804 &0.805 \\
\cline{2-13}
&Citeseer &15 &0.362 &0.583 &0.553 &0.553 &0.643 &0.626 &0.624 &0.620 &0.616 &0.667 \\
\hline
\hline
\multirow{1}{*}{Cora \& Citeseer}
&Pubmed &15 &0.501 &0.662 &0.643 &0.658 &0.655 &0.667 &0.664 &0.670 &0.659 &0.672 \\
\hline
\multirow{1}{*}{Cora \& Pubmed}
&Citeseer &15 &0.368 &0.571 &0.584 &0.573 &0.572 &0.586 &0.584 &0.590 &0.595 &0.698 \\
\hline
\multirow{1}{*}{Citeseer \& Pubmed}
&Cora &15 &0.300 &0.456 &0.544 &0.662 &0.746 &0.765 &0.778 &0.769 &0.787 &0.784 \\
\hline
\hline
\multirow{3}{*}{None (No Pre-train)}
&Cora &7 &0.299 &0.404 &0.480 &0.574 &0.701 &0.688 &0.706 &0.768 &0.777 &0.794 \\
\cline{2-13}
&Citeseer &5 &0.341 &0.567 &0.541 &0.553 &0.558 &0.580 &0.583 &0.582 &0.598 &0.637 \\
\cline{2-13}
&Pubmed &30 &0.485 &0.630 &0.638 &0.617 &0.604 &0.608 &0.608 &0.572 &0.599 &0.641 \\
\hline
\end{tabular}
\end{table*}
\begin{table}[t]
\caption{Reasoning performance of {\textsc{G5}} with different strategies for apocalypse learning (``Random'': random guess).}\label{tab:reasoning}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c}{\textbf{Source \& {Target Graph(s)}} } & \multicolumn{3}{|c|}{\textbf{Reasoning Strategies}}\\
\hline
\textbf{Source(s)} & \textbf{Target} & \textbf{CCCM} & \textbf{CDR} & \textbf{Random}\\
\hline
\hline
\multirow{2}{*}{Cora}
&Citeseer &0.280 &\textbf{0.312} &0.167 \\
\cline{2-5}
&Pubmed &\textbf{0.551} &{0.544} &0.333 \\
\hline
\multirow{2}{*}{Citeseer}
&Cora &0.323 &\textbf{0.358} &0.143 \\
\cline{2-5}
&Pubmed &0.505 &\textbf{0.515} &0.333 \\
\hline
\multirow{2}{*}{Pubmed}
&Cora &\textbf{0.342} &{0.304} &0.143 \\
\cline{2-5}
&Citeseer &0.323 &\textbf{0.331} &0.167 \\
\hline
{Cora \& Citeseer}
&Pubmed &0.516 &\textbf{0.519} &0.333 \\
\hline
{Cora \& Pubmed}
&Citeseer &0.318 &\textbf{0.332} &0.167 \\
\hline
{Citeseer \& Pubmed}
&Cora &\textbf{0.327} &{0.319} &0.143 \\
\hline
\end{tabular}
\end{table}
In Table~\ref{tab:mixed_learning}, we provide the learning results of {\textsc{G5}} learned with multiple graph inputs. To be more specific, given the input graphs, we pre-train {\textsc{G5}} with the hybrid application tasks on these graph datasets. The pre-trained {\textsc{G5}} model is then further fine-tuned on each graph for the node classification task. For each graph, the parameter $k$ of its input pre-processing component is assigned the default value introduced before. Meanwhile, for the universal {\textsc{Graph-Bert}} involved in {\textsc{G5}}, we vary its input size parameter $k$ over the values in $\{5, 7, 15, 30\}$, where $5$, $7$ and $30$ are the optimal parameters $k$ for Citeseer, Cora and Pubmed, respectively, and the value $15$ balances among these optimal parameters.
According to the results, we observe that training {\textsc{G5}} concurrently with multiple input graphs and hybrid application tasks has some minor impacts on its performance on the node classification task. In some cases, compared with Table~\ref{tab:isolated_learning}, the scores drop slightly, e.g., for {\textsc{G5}} on Cora. Meanwhile, in some other cases, the learning performance of {\textsc{G5}} remains very good, which is highlighted in the table. Moreover, the parameter $k$ of the universal {\textsc{Graph-Bert}} model does have an impact on the performance of {\textsc{G5}}: Cora and Citeseer favor small $k$, whereas Pubmed prefers larger $k$ instead. To achieve balanced performance, we set $k=15$ for the following studies on {\textsc{G5}} transfer across different graph datasets.
\subsection{Transfer of {\textsc{G5}} to Sparsely Labeled Graph}
In Table~\ref{tab:transfer}, we provide the learning results of {\textsc{G5}} on graphs with sparse labels. To be more specific, we pre-train {\textsc{G5}} on the source graphs with the hybrid application tasks and transfer the pre-trained model to the target graph(s) for evaluation. Since we focus on graphs with sparse labels, a small portion of the labeled data is sampled from the target graph for model fine-tuning, where the sampling ratio takes values in $\{5\%, 10\%, \cdots, 50\%\}$. Meanwhile, for comparison completeness, we also provide the results of {\textsc{G5}} without pre-training in the table, where the parameter $k$ of the universal component is assigned the optimal value favored by each graph dataset. According to the results, in most of the cases, {\textsc{G5}} with pre-training consistently out-performs {\textsc{G5}} without pre-training.
\subsection{Reasoning of {\textsc{G5}} for Apocalypse Learning}
In Table~\ref{tab:reasoning}, we provide the learning results of {\textsc{G5}} under the apocalypse learning settings, where the target graph has no labeled data at all. All the existing graph neural networks fail to work in such a learning setting. To enable {\textsc{G5}} to address the node classification problem on the target graph, we pre-train {\textsc{G5}} on the source graphs to learn the universal {\textsc{Graph-Bert}} component shared across graphs. Furthermore, the pre-trained {\textsc{G5}} is fine-tuned on the target graph with the unsupervised learning tasks, i.e., \textit{node attribute reconstruction} and \textit{graph structure recovery}, so as to learn the input component for the target graph in {\textsc{G5}}. Based on the CCCM and CDR reasoning strategies, {\textsc{G5}} is still able to reason about the potential labels of the nodes in the target graph. For comparison, we also provide the results of random guessing in the table; the scores achieved by {\textsc{G5}} with these two reasoning strategies are both much higher than random guessing.
\section{Related Work}\label{sec:related_work}
Several interesting research topics are related to this paper, including \textit{graph neural networks} and \textit{{\textsc{Bert}}}.
\noindent \textbf{Graph Neural Network}: In addition to the graph convolutional neural network \cite{Kipf_Semi_CORR_16} and its derived variants \cite{Velickovic_Graph_ICLR_18,sun2019adagcn,DBLP:journals/corr/abs-1907-02586}, many research works on graph neural networks for graph representation learning have appeared in recent years \cite{SPIGCN,Zhang2018AnED,Ivanov_Anonymous_18,xinyi2018capsule}. Many existing graph neural network models suffer from performance problems with deep architectures. In \cite{Zhang2019GResNetGR,Li_Deeper_CORR_18,sun2019adagcn,Huang_Inductive_19}, the authors explore building deep graph neural networks with residual learning, dilated convolutions, and recurrent networks, respectively. In \cite{zhang2020graph}, the authors introduce a new type of graph neural network based on the graph transformer and BERT, i.e., the {\textsc{Graph-Bert}} model. Different from node representation learning \cite{Kipf_Semi_CORR_16,Velickovic_Graph_ICLR_18}, GNNs proposed for graph representation learning aim at learning the representation for the entire graph instead \cite{Narayanan_Graph_17}. To handle the graph node permutation invariance challenge, solutions based on various techniques, e.g., attention \cite{Chen_Dual_19,Meltzer_Permutation_19}, pooling \cite{Meltzer_Permutation_19,ranjan2019asap,Jiang_Gaussian_18}, capsule nets \cite{Mallea_Capsule_19}, the Weisfeiler-Lehman kernel \cite{NIPS2016_6166} and sub-graph pattern learning and matching \cite{Meng_Isomorphic_NIPS_19}, have been proposed. To apply {\textsc{Graph-Bert}} to graph instance modeling and handle diverse graph instance sizes, \cite{zhang2020segmented} proposes several different graph instance size unification approaches.
\noindent \textbf{{\textsc{Bert}}}: {\textsc{Transformer}} \cite{Vaswani_Attention_17} and {\textsc{Bert}} \cite{Bert} based models have almost dominated NLP and related research areas in recent years due to their great representation learning power. Prior to that, the mainstream sequence transduction models in NLP were mostly based on complex recurrent \cite{Hochreiter_Long_Neural_97,DBLP:journals/corr/ChungGCB14} or convolutional neural networks \cite{kim-2014-convolutional}. However, as discussed in \cite{Vaswani_Attention_17}, their inherently sequential nature precludes parallelization within training examples. To address this problem, a brand new representation learning model solely based on attention mechanisms, i.e., the {\textsc{Transformer}}, is introduced in \cite{Vaswani_Attention_17}, which dispenses with recurrence and convolutions entirely. Based on the {\textsc{Transformer}}, \cite{Bert} further introduces {\textsc{Bert}} for deep language understanding, which obtains new state-of-the-art results on eleven natural language processing tasks. By extending {\textsc{Transformer}} and {\textsc{Bert}}, many newer {\textsc{Bert}} based models, e.g., T5 \cite{raffel2019exploring}, ERNIE \cite{Sun_ERNIE} and RoBERTa \cite{Liu_RoBERTa}, can even out-perform human baselines on many NLP benchmark datasets.
\section{Conclusion}\label{sec:conclusion}
In this paper, we have studied the graph-to-graph transfer of a universal {\textsc{Graph-Bert}} for graph representation learning across different graph datasets. To address the problem, we introduce a new learning model named {\textsc{G5}}, whose pluggable architecture contains several key parts, i.e., (1) pluggable input dataset-wise components, (2) an input size unification interlayer, (3) the universal {\textsc{Graph-Bert}} model shared across graphs, (4) a representation fusion interlayer, (5) pluggable task-wise output components for each dataset, and (6) a reasoning component for \textit{apocalypse learning}. Furthermore, based on the {\textsc{G5}} model, we also investigate a special and novel learning task, i.e., the \textit{apocalypse learning} problem, which aims at learning a classifier without using any labeled data. Two different reasoning strategies, i.e., CCCM and CDR, are proposed to reason about the potential labels of the nodes. To test the effectiveness of {\textsc{G5}}, some preliminary experiments have been done on real-world graph datasets, and the results demonstrate the effectiveness of both {\textsc{G5}} and the two proposed reasoning strategies.
\newpage
\bibliographystyle{abbrv}
\section{Introduction}
In the standard theory of gravity due to Einstein all conformally flat spacetimes are known. They are either the Schwarzschild interior metric in the case of no expansion or the Stephani universes when expansion is permitted \cite{steph1,steph2,hans1}. A similar result is not known in the more complicated Einstein--Gauss--Bonnet (EGB) theory, so this is one of the motivations behind this work. Conformal structures are important in gravitational field theory as conformal symmetries generate constants of the motion or conserved quantities along null geodesics. Conformal flatness is characterised by the vanishing of the Weyl tensor and physically this means that a spacetime is conformal to the Minkowski metric at spatial infinity. This is a reasonable restriction when it comes to modelling relativistic compact objects such as neutron stars, white dwarfs or cold planets. It is already known, both in the four dimensional general theory of relativity and in its extensions to higher dimensions, that the interior Schwarzschild metric is both a necessary and sufficient condition for conformal flatness \cite{hansrajjmp}. Whether this holds when higher curvature effects are at play will be discussed herein.
Recently it has been demonstrated by Dadhich {\it{et al}} \cite{dad-mol-khug} that the Schwarzschild interior metric is universal as a constant density solution in Lovelock gravity, of which the EGB theory is a special case. These authors claimed necessity and sufficiency. The assumption of the Schwarzschild interior spacetime does indeed generate a constant density fluid. However, the converse is not true in general. The assumption of constant density generates a solution which generalises the Schwarzschild interior spacetime. That is, the prescription of constant density is only a necessary but not sufficient condition for the Schwarzschild interior metric. The aforesaid authors argued that an integration constant must vanish in order to ensure regularity at the stellar centre and consequently the solution reduces to the Schwarzschild metric. We shall show that this is not required in the five dimensional constant density case, which possesses no central singularity; however, the six dimensional incompressible hypersphere metric does have a persistent singularity at the stellar centre. Nevertheless, in neither case is there a basis to remove integration constants arbitrarily. These constants are necessary and may be determined through the boundary conditions.
These aspects are being mentioned in the context of conformal flatness because the Schwarzschild solution is conformally flat, but it needs to be checked whether all conformally flat static metrics are necessarily the Schwarzschild interior solution. It must be remarked that ordinarily it is not advisable to set integration constants to zero arbitrarily, since these must be settled by matching of the interior and exterior spacetime across a common hypersurface. Deciding the value of constants of integration early in modelling runs the risk of over-determining the model. This option may only succeed when there is no boundary and a universe model is in evidence. This is the case for the isothermal spherically symmetric universe of \cite{saslaw}, which was clearly over-determined but successful. In this same spirit, Hansraj and Moodly \cite{hans-moodly} showed that the dual requirement of conformal flatness and spacetime being of embedding class one gives a null result.
At the outset, we sketch motivations for considering higher dimensional spacetimes and the importance of the EGB theory in gravitational field theory.
Interest in higher dimensional spacetimes originated with the seminal works of Kaluza \cite{kaluza} and Klein \cite{klein}. A five dimensional manifold was introduced in the context of the Einstein--Maxwell equations, and there were now 15 nontrivial components as opposed to 10 in the standard four dimensions. Four of the components of the metric tensor were associated with the electromagnetic field while ten were connected to the usual 4 dimensional space. The remaining component was given the interpretation of a scalar field termed a scalaron or dilatonic field. Subsequently many other ideas emerged that considered higher dimensions. In fact, in the quantum world, it is now understood that the most promising ideas, superstring theory and its generalisation $M$-theory, both require dimensions of the order of 10 or 11 at least. Additionally, brane-world cosmology \cite{maartens} requires spacetimes of dimension five. But what would be the explanation for our inability to access extra spatial dimensions? These are believed to be topologically curled into microscopic circles of very small size, yet they exert an influence on the gravitational field \cite{kaluza,klein}. Probing gravitational waves for information on extra dimensions was considered in a recent paper \cite{yu} that reported the non-detection of large extra dimensions. Earlier theories predicting large extra dimensions were tested through the Large Hadron Collider (LHC) experiment but found to be inconclusive \cite{lhc1,lhc2,lhc3,lhc4}. This however does not eliminate the possibility of small extra dimensions \cite{mack}, presumed to be of the Planck length. It must be remarked that the extra dimensions are actually angular dimensions, so they do not necessarily manifest overtly. In light of these considerations, investigations into higher dimensional spaces are justifiable.
Why is the EGB gravity proposal of immense importance? Firstly, it offers a natural generalization of Einstein's theory to higher dimensions with the inclusion of higher curvature terms, but without violating well established ingredients of gravitational field theory, namely the Bianchi identities, diffeomorphism invariance and second order (ghost-free) equations of motion. The energy conservation conditions also hold in the usual way, adjusted for extra dimensions. In fact EGB is superseded by its generalisation, Lovelock \cite{lov1,lov2} theory, which has the same aforementioned properties. The $N$th order Lovelock Lagrangian is constructed from invariants polynomial in the Riemann tensor, Ricci tensor and the Ricci scalar, and to second order ($N = 2$) the EGB special case consists of quadratic invariants. The critical spacetime dimensions $d$ in Lovelock theory are $d = 2N + 1$ and $d = 2N + 2$. In particular, it is sufficient to consider only dimensions 5 and 6 in studying EGB spacetimes \cite{dad-ghosh-jhingan}. It is also well known that the Gauss-Bonnet term features in heterotic string theory, where the coupling constant plays the role of the string tension. This is a further reason to probe the behaviour of higher curvature invariants in gravitational field theory, in view of the long-standing project to merge gravitational physics with quantum field theory, of which string theory is a leading candidate.
The exterior solution for EGB gravity was found by Boulware and Deser \cite{boul} in 1985, and a year later Wiltshire promoted the solution to include the effects of the electrostatic field \cite{wilt}. The constant density solution of Dadhich {\it{et al}} \cite{dad-mol-khug} was the first interior solution reported and happened to be the Schwarzschild solution independent of the dimension. As in Einstein theory, the solution suffers physical pathologies such as an infinite speed of sound. The square of the speed of sound is calculated with the formula $\frac{dp}{d\rho}$, where $p$ and $\rho$ are the pressure and density respectively. A thorough discussion of the relativistic sound speed was considered by Ellis {\it {et al}} \cite{ellis1}. Clearly a constant density $\rho$ renders the sound speed meaningless. Kang {\it{et al}} \cite{kang} proposed a solution; however, the metric demanded a further integration to fully reveal the nature of the spacetime manifold. The first reported explicit exact solutions that satisfied elementary conditions for physical plausibility were generated in \cite{hans-maha,maha-chil,chilambwe-hansraj}. It is notoriously difficult to locate exact solutions for perfect fluid matter in EGB because the extra curvature terms make the governing differential equations intractable. An additional solution for constant potentials in six dimensional EGB spacetimes was found in \cite{hans-maha-chil}, and recently Hansraj and Mkhize generated a physically viable six dimensional model with variable potentials and density \cite{hans-mkhize}. A greater number of the extra curvature terms survive in 6D as opposed to 5D, making the differential equations even more difficult to work with. However, the effects of extra curvature may be studied more efficiently compared to the 5D case where such terms are suppressed.
Employing an equation of state (EoS) to describe static compact objects in classical general relativity and modified gravity theories has not proved fruitful. The imposition of a barotropic equation of state of the form $p = p(\rho)$ reduces the problem of finding exact solutions to the Einstein or the EGB field equations to a single generating function when pressure isotropy is dropped. Requiring the vanishing of the Weyl stresses at each interior point of the fluid sphere completes the gravitational description of the model. These models have been shown to describe compact objects such as pulsars, neutron stars and colour-flavoured locked-in quark stars \cite{weyl1,weyl2,weyl3,Banerjee:2020stc,Singh:2020bdv}. In fact, some models predicting the existence of self-gravitating anisotropic fluid spheres have been investigated in the literature. For example, see \cite{Singh:2020cnu,Maurya:2020gjw,Rahaman:2020dgv,Singh:2020ebx,Tello-Ortiz:2019gcl} and the references therein.
The role of the Weyl tensor in dissipative collapse has been thoroughly investigated. Herrera and co-workers demonstrated that departure from hydrostatic equilibrium (or quasi-static equilibrium) is sensitive to changes in the conditions surrounding the Weyl tensor. In particular, in the case of conformally flat self-gravitating bodies, departure from equilibrium depends on the deviation from conformal flatness. It is well known that density inhomogeneities and pressure anisotropy influence the outcome of dissipative collapse. It has been shown that the thermal behaviour of radiating stars can be drastically altered in the presence of anisotropic stresses within the collapsing core. Furthermore, specific combinations of the Weyl tensor components, pressure anisotropy and dissipative fluxes (for example radial heat flux) give rise to density inhomogeneities within the collapsing fluid. The stability of the shear-free condition has come under scrutiny as it was shown that an initially shear-free configuration would evolve into a shearing-like regime due to a combination of density inhomogeneity, pressure anisotropy and dissipation \cite{prisco1}. Several exact models describing conformally flat radiating stars have been obtained. These solutions have been shown to obey the physical requirements necessary for dissipative collapse. Gravitational collapse from an initial static configuration described by the interior Schwarzschild solution has rewarded researchers richly in terms of insights into horizon formation, stability in the Newtonian and Post-Newtonian regimes and thermal evolution within the framework of extended irreversible thermodynamics \cite{stab1,stab2,stab3}.
This paper is arranged as follows: after a brief sketch of the main ingredients of EGB theory in section \ref{sec2}, we derive the five dimensional EGB field equations and consider the exterior metric in section \ref{sec3}. The incompressible hypersphere is examined in section \ref{sec4}, and the new exact solution for conformal flatness together with its physical properties is presented in section \ref{sec5}. In section \ref{sec6} we find the conformally flat metric in six dimensions and study its main features. Finally, in section \ref{sec7} we conclude by reiterating our main results.
\section{Einstein--Gauss--Bonnet Gravity}\label{sec2}
The Gauss--Bonnet action is written as
\begin{equation}
S = \int \sqrt{-g} \left[ \frac{1}{2} \left(R - 2\Lambda + \alpha L_{GB}\right)\right] d^N x + S_{\text{matter}}, \label{1}
\end{equation}
where $ \alpha $ is the Gauss--Bonnet coupling constant and $N$ is the spacetime dimension. The advantage of including $ L_{G B} $ in the action is that, despite the Lagrangian being quadratic in the Ricci tensor, Ricci scalar and the Riemann tensor, the gravitational field equations are second order quasilinear, which is expected of a viable theory of gravity. The Gauss--Bonnet term exerts no influence for $ N \leq 4 $ but becomes dynamic for $ N > 4 $.
The EGB field equations may be written as
\begin{equation}
G_{a b} + \alpha H_{a b} = T_{a b}, \label{2}
\end{equation}
with metric signature $ (- + + + +) $ where $ G_{ab} $ is the Einstein tensor. The Lanczos tensor is given by
\begin{equation}
H_{a b} = 2 \left(R R_{a b} - 2 R_{a c}R^{c}_{\,\,b} - 2 R^{c d} R_{a c b d} + R^{c d e}_{\,\,\,\,\,\,\,\,a} R_{b c d e} \right) - \frac{1}{2} g_{a b} L_{G B}, \label{3}
\end{equation}
where the Lovelock term has the form
\begin{equation}
L_{G B} = R^2 - 4R_{a b} R^{a b} + R_{a b c d} R^{a b c d} . \label{4}
\end{equation}
In what follows it will be desirable to divide the analysis into the 5 and 6 dimensional cases separately as these embody different physical characteristics. The 5 dimensional case has the feature of eliminating some terms in the field equations which are dynamic only in 6-$d$ and consequently affect the physics of the distribution.
\section{Field equations in 5-$d$}\label{sec3}
The generic 5--dimensional line element for static spherically symmetric spacetimes is customarily written as
\begin{equation}
ds^{2} = -e^{2 \nu} dt^{2} + e^{2 \lambda} dr^{2} + r^{2} \left( d\theta^{2} + \sin^{2} \theta d \phi^2 + \sin^{2} \theta \sin^{2} \phi d\psi^2 \right), \label{5}
\end{equation}
where $ \nu(r) $ and $ \lambda(r) $ are the gravitational potentials. We utilise a comoving fluid velocity of the form $ u^a = e^{-\nu} \delta_{0}^{a} $ and the matter field is that of a perfect fluid with energy momentum tensor $ T_{a b} = (\rho + p) u_a u_b + p g_{a b} $. Accordingly the EGB field equations (\ref{2}) assume the form
\begin{eqnarray}
\rho &=& \frac{3}{e^{4 \lambda} r^{3}} \left( 4 \alpha \lambda ^{\prime} + r e^{2 \lambda} - r e^{4 \lambda} - r^{2} e^{2 \lambda} \lambda ^{\prime} - 4 \alpha e^{2 \lambda} \lambda ^{\prime} \right), \label{6a} \\ \nonumber \\
p_r &=& \frac{3}{e^{4 \lambda} r^{3}} \left(- r e^{4 \lambda} + \left( r^{2} \nu^{\prime} + r + 4 \alpha \nu^{\prime} \right) e^{2 \lambda} - 4 \alpha \nu^{\prime} \right), \label{6b} \\ \nonumber \\
p_T &=& \frac{1}{e^{4 \lambda} r^{2}} \left( -e^{4 \lambda} - 4 \alpha \nu^{\prime \prime} + 12 \alpha \nu^{\prime} \lambda^{\prime} - 4 \alpha \left( \nu^{\prime} \right)^{2} \right) \nonumber \\
& \quad & + \frac{1}{e^{2 \lambda} r^{2}} \left( 1 - r^{2} \nu^{\prime} \lambda^{\prime} + 2 r \nu^{\prime} - 2 r \lambda^{\prime} + r^{2} \left( \nu^{\prime} \right)^{2} \right) \nonumber \\
& \quad & + \frac{1}{e^{2 \lambda} r^{2}} \left( r^{2} \nu^{\prime \prime} - 4 \alpha \nu^{\prime} \lambda^{\prime} + 4 \alpha \left( \nu^{\prime} \right) ^{2} + 4 \alpha \nu^{\prime \prime} \right). \label{6c}
\end{eqnarray}
where $p_r$ and $p_T$ are the radial and tangential pressures respectively. Note that the system (\ref{6a})--(\ref{6c}) comprises three field equations in four unknowns for isotropic particle pressure $p_r = p_T$. This is similar to the standard Einstein case for spherically symmetric perfect fluids. In order to close the system of equations, it is necessary to stipulate one more condition. Traditionally, one of the metric potentials is prescribed in the hope of finding the remaining potential by integrating the equation of pressure isotropy. Alternatively, mathematical insight such as that shown by Tolman \cite{tolman}, in rearranging the pressure isotropy condition in a convenient way and setting individual terms to vanish, yielded eight classes of exact solutions, five of which were new. Attempts at implementing physically reasonable assumptions such as an equation of state have not worked in Einstein gravity, and the works of Nilsson and Uggla \cite{nilsson1,nilsson2} in this direction had to be completed with numerical methods. Therefore it is not expected that an {\it{a priori}} equation of state $p = p(\rho)$ will succeed in the more complicated situation of EGB at hand. Additionally, conditions such as the ability to embed a spacetime in a higher dimensional geometry, for instance through use of the Karmarkar condition on the Riemann tensor components, may also be an interesting route to follow \cite{Tello-Ortiz:2019gcl,Maurya:2017non,Ashraf:2020yyo,Mustafa:2020ikt}. Historically, Schlaefli \cite{schlaefli} first raised the question of embedding a Riemannian space into a higher dimensional Euclidean space, and Karmarkar \cite{kamarkar} established a condition on the components of the Riemann tensor. The Karmarkar condition was corrected in time to rule out the vanishing of one of the components \cite{pandey}. The existence of symmetries such as conformal Killing vectors also offers an avenue to pursue. In our case we have elected to impose the condition of conformal flatness. The vacuum metric describing the gravitational field exterior to the 5--dimensional static perfect fluid may be described by the Boulware--Deser \cite{boul} spacetime as
\begin{equation}
ds^2 = - F(r) dt^2 + \frac{dr^2}{F(r)} + r^{2} \left( d\theta^{2} + \sin^{2} \theta d \phi^{2} + \sin^{2} \theta \sin^{2} \phi d\psi^{2} \right), \label{7}
\end{equation}
where
\begin{equation}
F(r) = 1 + \frac{r^2}{4\alpha} \left( 1 - \sqrt{1 + \frac{8M\alpha}{r^4}} \right),
\end{equation}
and where we have taken the negative sign before the square root since expansion in powers of $\alpha$ gives the Schwarzschild solution as $\alpha$ approaches zero.
In the above $ M $ is associated with the gravitational mass of the hypersphere. The exterior solution is unique up to the choice of branch.
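Explicitly, expanding the square root for small $\alpha$ (a short check we record here) gives
\begin{equation}
F(r) = 1 - \frac{M}{r^2} + \frac{2M^2\alpha}{r^6} + O(\alpha^2),
\end{equation}
so that the five dimensional Schwarzschild behaviour $F \approx 1 - \frac{M}{r^2}$ is recovered in the limit $\alpha \rightarrow 0$.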
We invoke the transformation $ e^{2 \nu} = y^{2}(x) $, $ e^{-2 \lambda} = Z(x) $ and $ x = C r^{2} $ ($ C $ being an arbitrary constant), which was utilised successfully by Durgapal and Bannerji \cite{dur-ban}, Finch and Skea \cite{finch-skea} and Hansraj and Maharaj \cite{hansraj-maharaj} to generate new exact solutions for neutral and charged isotropic spheres. The motivation for using this transformation lies in the fact that the equation of pressure isotropy may be expressed as a second order linear differential equation, thus increasing our chances of locating exact solutions.
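For reference, under this transformation the derivatives appearing in (\ref{6a})--(\ref{6c}) map as
\begin{equation}
\nu^{\prime} = \frac{2Cr\dot{y}}{y}, \qquad
\nu^{\prime \prime} = \frac{2C\dot{y}}{y} + 4Cx\left(\frac{\ddot{y}}{y} - \frac{\dot{y}^{2}}{y^{2}}\right), \qquad
\lambda^{\prime} = -\frac{Cr\dot{Z}}{Z},
\end{equation}
where dots denote differentiation with respect to $x$. With these replacements the field equations (\ref{6a})--(\ref{6c}) may be recast as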
\begin{eqnarray}
3 \dot{Z} + \frac{3 (Z - 1) ( 1 - 4 \alpha C \dot{Z} )}{x} &=& \frac{\rho}{C}, \label{8a} \\ \nonumber \\
\frac{3 (Z - 1)}{x} + \frac{6 Z \dot{y}}{y} - \frac{24 \alpha C (Z - 1) Z \dot{y}}{x y} &=& \frac{p}{C}, \label{8b} \\ \nonumber \\
2 x Z \left( 4 \alpha C [Z - 1] - x \right) \ddot{y} - \left( x^{2} \dot{Z} + 4 \alpha C \left[ x \dot{Z} - 2 Z + 2 Z^{2} - 3 x Z \dot{Z} \right] \right) \dot{y} \nonumber \\ - \left( 1 + x \dot{Z} - Z \right) y &=& 0, \label{8c}
\end{eqnarray}
where the last equation is the equation of pressure isotropy. Equation (\ref{8c}) has been arranged as a second order differential equation in $ y $, which, for some analyses of the 4--dimensional Einstein models, proves to be a useful form. Functional forms for $ Z(x) $ may be selected {\it a priori} so as to allow for the complete integration of the field equations.
For the present work it should be noted that (\ref{8c}) may also be regarded as a first order ordinary differential equation in $ Z $, and may be expressed in the form
\begin{eqnarray}
\left( x^{2} \dot{y} + x y + 4 \alpha C x \dot{y} - 12 \alpha C x \dot{y} Z \right) \dot{Z} + 8 \alpha C \left( \dot{y} - x \ddot{y} \right) Z^{2} + \left( 2 x^{2} \ddot{y} + 8 \alpha C x \ddot{y} - 8 \alpha C \dot{y} - y \right) Z + y = 0. \label{9}
\end{eqnarray}
This is an Abel equation of the second kind, for which few exact solutions are known. However, note that choosing forms for $ y $ should in principle yield expressions for $ Z $ by integration. Therefore we seek choices for the metric potential $ y $ which will allow for a complete resolution of the geometrical and dynamical variables.
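To illustrate this route concretely, the following Python sketch (our own schematic experiment; the trial $y = 1 + x$ and the parameter values are arbitrary illustrations, not a model advanced in this paper) integrates (\ref{9}) numerically for $Z$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha, C = 0.01, 0.06          # sample parameter values (illustrative only)
aC = alpha * C

def y(x):   return 1.0 + x     # arbitrary trial potential y(x)
def yd(x):  return 1.0         # dy/dx
def ydd(x): return 0.0         # d^2y/dx^2

def Zprime(x, Z):
    # coefficient multiplying dZ/dx in equation (9)
    coeff = (x**2 * yd(x) + x * y(x) + 4 * aC * x * yd(x)
             - 12 * aC * x * yd(x) * Z)
    # remaining terms of (9), quadratic in Z
    rest = (8 * aC * (yd(x) - x * ydd(x)) * Z**2
            + (2 * x**2 * ydd(x) + 8 * aC * x * ydd(x)
               - 8 * aC * yd(x) - y(x)) * Z + y(x))
    return -rest / coeff

sol = solve_ivp(Zprime, (0.1, 1.0), [1.2], rtol=1e-8, atol=1e-10)
print(sol.y[0][-1])            # Z at the end of the integration range
\end{verbatim}
Any other smooth choice of $y$, together with its first two derivatives, may be inserted in the same way.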
\section{Incompressible 5-$d$ hypersphere}\label{sec4}
Since it is true in standard Einstein gravity that conformal flatness, the interior Schwarzschild metric and constant energy density are all equivalent \cite{hansrajjmp}, it is important to examine the incompressible sphere in the present context. The form $ Z = 1 + x $, which is the Schwarzschild potential, generates the constant density distribution in higher dimensional Einstein gravity and also in higher curvature EGB gravity. That is, the Schwarzschild potential is a sufficient condition for constant density. This corroborates the result of Dadhich {\it{et al}} \cite{dad-mol-khug} independent of spacetime dimension. However, the question of necessity arises: does the assumption of constant density give the Schwarzschild metric? We shall analyse this problem below. Moreover, it is known that the Schwarzschild interior metric is conformally flat. This is a purely geometric consequence, irrespective of the gravity theory under consideration. In Einstein theory it is already well known that invoking the conformal flatness condition generates the Schwarzschild metric. It is therefore natural to ask whether the prescription of conformal flatness to close the system of equations also generates the interior Schwarzschild potential in EGB theory. Additionally, does the general conformally flat EGB solution produce a constant density fluid? We examine these questions separately for 5 and 6 dimensions.
We note that equation (\ref{8a}) can be integrated when the density is assumed to be constant, $\rho = \rho_0$, and gives
\begin{eqnarray}
Z(x) &=& 1 + \frac{x}{4\alpha C} \left( 1 \pm \sqrt{\left(1 - \frac{4\alpha \rho_0}{3}\right) + \frac{c_1 \alpha^2 C^2}{x^2}} \right), \label{101}
\end{eqnarray}
where $c_1$ is an integration constant. Setting the constant density to vanish, $\rho_0 = 0$, regains the vacuum solution (\ref{7}) of Boulware and Deser \cite{boul}. In the event that $c_1 = 0$ the solution of Dadhich {\it{et al}} is regained, and it is then demanded that the coupling constant satisfies $\alpha \leq \frac{3}{4\rho_0}$. In this case the form $Z = 1 + x$ arises, but this is not true in general for EGB theory. Substituting (\ref{101}) in the conformal flatness condition, to be displayed later, regrettably leads to an intractable differential equation which we have not been able to solve. So the $y$ potential remains unknown in this general case. Note that Dadhich {\it{et al}} claim that the integration constant should vanish for regularity at the centre; however, this is not in fact necessary. If (\ref{101}) is written as
\begin{equation} Z = 1 + \frac{x}{4\alpha C} \pm \frac{1}{4} \sqrt{ c_1 + \frac{x^2(3 - 4\alpha \rho_0)}{3\alpha^2 C^2}}
\end{equation}
then it is clear that there is no singularity at the stellar centre, specifically for the five dimensional case. In fact $Z(0) = 1 + \frac{\sqrt{c_1}}{4}$. Generally it is not acceptable to set integration constants to vanish, as their values must be settled when matching across a common boundary interface of the interior and exterior spacetimes. Note that switching off the higher curvature effects by setting $\alpha = 0$ is not possible in this form. What is true from (\ref{101}) is that expansion in powers of $\alpha$ gives the interior Schwarzschild metric of standard Einstein theory as $\alpha \rightarrow 0$, for arbitrary $c_1$. It shall be seen later that the absence of a singularity in the metric at the centre of an incompressible sphere in EGB theory is peculiar to the 5-$d$ case and does not hold for the 6-$d$ constant density sphere. These results challenge the conclusion that the Schwarzschild metric is universal in EGB theory. Clearly the potential (\ref{101}) generalises the Schwarzschild potential.
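As a consistency check (our own, not part of the original development), the following sympy sketch verifies that (\ref{101}) satisfies the density equation (\ref{8a}) with $\rho = \rho_0$:
\begin{verbatim}
import sympy as sp

x, alpha, C, rho0, c1 = sp.symbols('x alpha C rho_0 c_1', positive=True)
Z = 1 + x / (4 * alpha * C) * (1 + sp.sqrt(1 - 4 * alpha * rho0 / 3
                                           + c1 * alpha**2 * C**2 / x**2))
Zdot = sp.diff(Z, x)
lhs = 3 * Zdot + 3 * (Z - 1) * (1 - 4 * alpha * C * Zdot) / x  # LHS of (8a)
print(sp.simplify(lhs - rho0 / C))    # expected output: 0
\end{verbatim}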
In what follows we shall analyse the impact of imposing conformal flatness on the field equations without any further restrictions. It will be interesting to see whether the interior Schwarzschild metric is preserved and whether a constant density is an unavoidable consequence of conformal flatness in EGB theory.
\section{Conformal flatness}\label{sec5}
Conformally flat spacetimes are distinguished by the vanishing of the Weyl tensor which in turn determines the constraint equation
\begin{equation}
r^2(\nu'' + \nu'^2 - \nu' \lambda') - r(\nu' - \lambda') + (1-e^{2\lambda}) = 0
\end{equation}
for the line element (\ref{5}). This may be converted to the form
\begin{equation}
4x^2 Z \ddot{y} + 2x^2 \dot{Z}\dot{y} - (\dot{Z}x -Z+1)y = 0, \label{110}
\end{equation}
in our coordinates, and must be satisfied by conformally flat spherically symmetric spacetimes of any dimension. Equation (\ref{110}) admits the separable solution
\begin{equation}
y = A\sqrt{x} \cosh \left(\frac{1}{2} \int \frac{dx}{x\sqrt{Z}} + B\right), \label{111}
\end{equation}
where $A$ and $B$ are integration constants.
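That (\ref{111}) indeed solves (\ref{110}) can be confirmed symbolically. A minimal sympy sketch of this check (ours, assuming only the defining property $\dot{f} = \frac{1}{2x\sqrt{Z}}$ of the integral in (\ref{111})) is
\begin{verbatim}
import sympy as sp

x, A = sp.symbols('x A', positive=True)
Z = sp.Function('Z', positive=True)(x)
f = sp.Function('f')(x)

y = A * sp.sqrt(x) * sp.cosh(f)
expr = (4 * x**2 * Z * sp.diff(y, x, 2)
        + 2 * x**2 * sp.diff(Z, x) * sp.diff(y, x)
        - (x * sp.diff(Z, x) - Z + 1) * y)

# impose f'(x) = 1/(2 x sqrt(Z)), the defining property of (111)
fp = 1 / (2 * x * sp.sqrt(Z))
expr = expr.subs({sp.Derivative(f, (x, 2)): sp.diff(fp, x),
                  sp.Derivative(f, x): fp})
print(sp.simplify(sp.expand(expr)))   # expected output: 0
\end{verbatim}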
Plugging the result (\ref{111}) into the pressure isotropy condition (\ref{8c}) gives the neat factorisation
\begin{equation}
\left(x \dot{Z}-Z+1\right) \left(-2 \beta \sqrt{Z} \sinh f(x) +(\beta +3 x) \cosh f(x) -3 \beta Z \cosh f(x) \right) =0, \label{112}
\end{equation}
where $f(x) = \left(B+\frac{1}{2} \int \frac{dx}{x \sqrt{Z(x)}} \right)$ and $\beta = 4 \alpha C$. The vanishing of the first bracket gives
\begin{equation}
\dot{Z}x -Z + 1 = 0,
\end{equation}
which is solved in general by $Z = 1 + cx$, for some constant $c$. It can easily be checked that this is the well known Schwarzschild interior solution, which yields a constant density or incompressible perfect fluid hypersphere. In standard Einstein gravity it is known that the Schwarzschild interior metric is conformally flat and, conversely, demanding that a spacetime be conformally flat yields the Schwarzschild solution. Note that on setting $\beta = 0$ in (\ref{112}) it can easily be checked that Schwarzschild remains the unique conformally flat solution for 5D Einstein gravity, and indeed for any higher dimension. However, the Schwarzschild solution is not the unique conformally flat solution in the context of Einstein--Gauss--Bonnet gravity. The second factor in (\ref{112}) gives the relationship
\begin{equation}
\tanh f(x) = \frac{3x + \beta (1-3Z)}{2\beta \sqrt{Z}} \label{113}
\end{equation}
on rearranging. To solve equation (\ref{113}) for $Z$ in its present form would be difficult, so we endeavour to cast it as a differential equation and profit from known results on exact solutions of differential equations.
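In preparation we record, from (\ref{111}), that the argument satisfies $\dot{f} = \frac{1}{2x\sqrt{Z}}$, so that
\begin{equation}
\frac{d}{dx}\tanh f = \left(1 - \tanh^2 f\right)\dot{f} = \frac{1 - \tanh^2 f}{2x\sqrt{Z}},
\end{equation}
with (\ref{113}) supplying $\tanh f$ on the right hand side.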
Differentiating (\ref{113}) permits us to recast the equation as
\begin{equation}
\beta (\beta +3 \beta Z+3 x) \dot{Z}+\sqrt{Z} \left(-(\beta +3 x)^2-9 \beta ^2 Z^2-6 \beta \sqrt{Z}+2 \beta (5 \beta +9 x) Z\right) = 0, \label{114}
\end{equation}
where we have used the identity for $\frac{d(\tanh f)}{dx}$ recorded above. The general solution of equation (\ref{114}) is given by
\begin{equation}
Z = \frac{(9x+5\beta)(e^{2x}-k)^2 + 8k\beta e^{2x} \pm 2(e^{2x} +k)\sqrt{\beta} \sqrt{(9x +4\beta)(e^{2x}-k)^2 +4k\beta e^{2x}}}{9\beta (e^{2x}-k)^2}, \label{115}
\end{equation}
where $k$ is a constant of integration. It is easily checked that the metric potential remains finite at the stellar centre $x = 0$; however, this is not the case for the dynamical quantities, as shall be seen below.
In order to establish the function $y$ we use the form (\ref{111}); however, in light of the complexity of the spatial potential $Z$, an explicit form of $y$ free of integrals could not be realised. Indeed, the problem is exacerbated by the presence of two square roots in the integrand. It may still be possible to study the physical characteristics of the model generated by the metric with the help of numerical methods.
Now it remains to find the physical dynamics that should be analysed for their contributions to the 5 dimensional model. To pursue this we plug (\ref{111}) and (\ref{115}) into the system (\ref{8a})--(\ref{8b}) and find explicit forms for the energy density and isotropic pressure as
\begin{eqnarray}
\frac{\rho}{C}&=&\frac{\sqrt{\beta } \left(e^{4 x}-k^2\right) K_8+\left(e^{2 x}-k\right) \left(9 k^2-2 k e^{2 x} (2 \beta +18 x+9)+e^{4 x} (20 \beta +36 x+9)\right)-8 \sqrt{\beta } e^{2 x} J_1}{3 \beta \left(e^{2 x}-k\right)^3} \nonumber \\
&&+\frac{\sqrt{\beta }\left(e^{2 x}+k\right) K_6 J_2-36 e^{2 x}\left(e^{2 x}-k\right)^2 \left(\left(k^2+e^{4 x}\right) (5 \beta +9 x)+2 k e^{2 x} (\beta +9 x)\right)J_1}{27 \sqrt{\beta } x \left(e^{2 x}-k\right)^5 J_1}, \label{116} \\
\frac{p}{C}&=& \frac{K_4 -6\beta\left(e^{2 x}-k\right) \left(K_1-2 \sqrt{\beta } \left(k^2-4 k e^{2 x}+e^{4 x}\right)\right) \tanh \left(B+\frac{3}{2} K_3\right) \sqrt{K_2+2\sqrt{\beta}K_1}}{27 \beta x^2 \left(e^{2 x}-k\right)^4},\label{117}
\end{eqnarray}
where for ease of reference we put \\
$
J_1=\sqrt{\left(k+e^{2 x}\right)^2 \left(k^2 (4 \beta +9 x)-2 k e^{2 x} (2 \beta +9 x)+e^{4 x} (4 \beta +9 x)\right)}
$,\\
$
J_2=9 k^4-16 k^2 e^{4 x} (\beta +9 x)+2 k e^{2 x} \left(k^2 (20 \beta +36 x-9)+8 \sqrt{\beta } J_1\right)+2 k e^{6 x} (20 \beta +36 x+9)-9 e^{8 x}$,\\
$
K_1=\sqrt{k^4 (4 \beta +9 x)+4 \beta k^3 e^{2 x}-18 k^2 e^{4 x} x+4 \beta k e^{6 x}+e^{8 x} (4 \beta +9 x)}$,\\
$K_2=e^{4 x} (5 \beta +9 x)-2 k e^{2 x} (\beta +9 x)+k^2 (9x+5\beta)$, \\
$K_4=k^4 (9 x-2 \beta )^2-2 \beta ^{3/2} k^2 K_1+2 e^{4 x} \left(9 k^2 \left(4 \beta ^2+27 x^2+4 \beta x\right)-\beta ^{3/2} K_1\right)-4 k e^{2 x} \left(k^2 \left(26 \beta ^2+81 x^2\right)+7 \beta ^{3/2} K_1\right)-4 k e^{6 x} \left(26 \beta ^2+81 x^2\right)+e^{8 x} (9 x-2 \beta )^2$,\\
$K_5K_1=9 k^4+8 \beta k^3 e^{2 x}-18 k^2 e^{4 x}-72 k^2 e^{4 x} x+24 \beta k e^{6 x}+8 e^{8 x} (4 \beta +9 x)+9 e^{8 x}$,\\
$K_6=2 \sqrt{\beta } J_1-4 \beta k^2+9 k^2 x-2 k e^{2 x} (9 x-8 \beta )+e^{4 x} (9 x-4 \beta )$,\\
$K_8J_1=9 k^3+(8 \beta -9) k^2 e^{2 x}-k e^{4 x} (8 \beta +72 x+9)+e^{6 x} (32 \beta +72 x+9)$,\\
$ K_3 =\int \left(x \sqrt{\frac{K_2 +2\sqrt{\beta}J_1}{\beta \left(e^{2 x}-k\right)^2}} \, \right)^{-1} \, dx$. \\
Invoking (\ref{116}) and (\ref{117}) renders the ratio
\begin{eqnarray}
S=\frac{dp}{d\rho}&=&\left[\frac{8 e^{4 x} \left(9 k^2 \left(27 x^2+4 \beta x+4 \beta ^2\right)-{K_1} \beta ^{3/2}\right)-24 e^{6 x} k \left(81 x^2+26 \beta ^2\right)-8 e^{2 x} k \left(7 K_1 \beta ^{3/2}+k^2 \left(81 x^2+26 \beta ^2\right)\right)}{27 \left(e^{2 x}-k\right)^4 x^2 \beta } \right. \nonumber \\
&&\left.+\frac{2 \left(9 k^2 (54 x+4 \beta )-k^2 K_5 \beta ^{3/2}-\frac{1}{2} \beta ^{3/2} K_5\right) e^{4 x}+8 e^{8 x} (9 x-2 \beta )^2-4 e^{2 x} k \left(\frac{7}{2} K_5 \beta ^{3/2}+162 k^2 x\right)-648 e^{6 x} k x}{27 \left(e^{2 x}-k\right)^4 x^2 \beta } \right. \nonumber \\
&&\left.+\frac{8 e^{2 x} K_9 \tanh \left(B+\frac{3 }{2}K_3\right)- \sqrt{2 \sqrt{\beta } K_1+K_2} \left(K_5-4 \left(4 e^{4 x}-8 e^{2 x} k\right) \sqrt{\beta }\right) \tanh \left(B+\frac{3 }{2}K_3\right)}{9 \left(e^{2 x}-k\right)^3 x^2}-\frac{8 e^{2 x} K_4}{27 \left(e^{2 x}-k\right)^5 x^2 \beta } \right.\nonumber \\
&&\left.-\frac{K_9 \beta^{3/2} \text{sech}^2\left(B+\frac{3}{2}K_3\right)}{3 \left(e^{2 x}-k\right)x^3 \sqrt{2 \sqrt{\beta } J_1+K_2}}-\frac{2 K_4-18 e^{8 x} (9 x-2 \beta )-18 k^4 (9 x-2 \beta )}{27 \left(e^{2 x}-k\right)^4 x^3 \beta } \right. \nonumber \\
&&\left.+\frac{4 K_9 \tanh \left(B+\frac{3 }{2}K_3\right)}{9 \left(e^{2 x}-k\right)^2 x^3}-\frac{K_9 \beta ^{3/2}K_{10} \tanh \left(B+\frac{3 }{2}K_3\right)}{9 x^2 \left(2 \sqrt{\beta } K_1+K_2 \right)} \right] \div \nonumber \\
&&\left[\frac{2J_2^3 \left(k+e^{2 x}\right) \left(K_{11}+K_7 \sqrt{\beta }\right)-J_2 \left(k+e^{2 x}\right)^2 4 e^{2 x}K_6 \left((9 x+4 \beta ) k^2-2 e^{2 x} (9 x+2 \beta ) k+e^{4 x} (9 x+4 \beta )\right) }{54 \left(e^{2 x}-k\right)^5 x \sqrt{\beta } J_1^{3}} \right. \nonumber \\
&&\left.+\frac{2 e^{2 x} \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)-8 e^{2 x} \left(\left(k^2+e^{4 x}\right) (9 x+5 \beta )-2 e^{2 x} k (9 x+\beta )\right)}{3 \left(e^{2 x}-k\right)^3 \beta } \right. \nonumber \\
&&\left.\frac{+2 e^{2 x} \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)+2 e^{2 x} \left(e^{2 x}-k\right) K_8 \sqrt{\beta }+2 e^{2 x} \left(k+e^{2 x}\right) K_8 \sqrt{\beta }}{3 \left(e^{2 x}-k\right)^3 \beta } \right.\nonumber \\
&&\left.\frac{+\left(e^{2 x}-k\right) \left(36 e^{4 x}-36 e^{2 x} k-4 e^{2 x} (18 x+2 \beta +9) k+4 e^{4 x} (36 x+20 \beta +9)\right)}{3 \left(e^{2 x}-k\right)^3 \beta }-\frac{J_2 \left(k+e^{2 x}\right)^3 K_6 K_{11}}{54 \left(e^{2 x}-k\right)^5 x \sqrt{\beta } J_1^{3}} \right. \nonumber \\
&&\left.+\frac{\left(k+e^{2 x}\right) K_6 \left(72 e^{6 x} k-144 e^{4 x} k^2-64 e^{4 x} (9 x+\beta ) k^2+2 e^{2 x} \left(36 k^2+4 K_7 \sqrt{\beta }\right) k \right)-J_2 \left(k+e^{2 x}\right) K_6}{27 J_1 \left(e^{2 x}-k\right)^5 x \sqrt{\beta }} \right.\nonumber \\
&&\left.+\frac{\left(k+e^{2 x}\right) K_6\left(12 e^{6 x} (36 x+20 \beta +9) k+4 e^{2 x} \left((36 x+20 \beta -9) k^2+8 J_1 \sqrt{\beta }\right) k-72 e^{8 x}\right)+2 e^{2 x} J_2 K_6}{27 J_1 \left(e^{2 x}-k\right)^5 x \sqrt{\beta }} \right.\nonumber \\
&&\left.+\frac{\left(e^{4 x}-k^2\right) \left(2 e^{2 x} (8 \beta -9) k^2-72 e^{4 x} k-4 e^{4 x} (72 x+8 \beta +9) k+72 e^{6 x}+6 e^{6 x} (72 x+32 \beta +9)\right)}{3 \left(e^{2 x}-k\right)^3 \sqrt{\beta } J_1} \right.\nonumber \\
&&\left.+\frac{2 e^{2 x} \left(8 e^{2 x} \sqrt{\beta } J_1+4 e^{2 x} \left(\left(k^2+e^{4 x}\right) (9 x+5 \beta )+2 e^{2 x} k (9 x+\beta )\right)\right)}{\left(e^{2 x}-k\right)^4 \beta }-\frac{10 e^{2 x} J_2\left(k+e^{2 x}\right) K_6}{27 J_1 \left(e^{2 x}-k\right)^6 x \sqrt{\beta }}\right. \nonumber \\
&&\left.+\frac{4 e^{4 x} (9 x+5 \beta )-16 e^{2 x} \sqrt{\beta } J_1-4 e^{2 x} \sqrt{\beta } K_7-4 e^{2 x} \left(9 \left(k^2+e^{4 x}\right)-18 e^{2 x} k-4 e^{2 x} (9 x+\beta ) k \right)}{3 \left(e^{2 x}-k\right)^3 \beta } \right. \nonumber \\
&&\left.-\frac{2 e^{2 x}\left(e^{2 x}-k\right) \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)+\left(e^{4 x}-k^2\right)K_8 \sqrt{\beta }}{\left(e^{2 x}-k\right)^4 \beta } \right. \nonumber \\
&&\left.-\frac{\left(e^{4 x}-k^2\right) \left(9 k^3+e^{2 x} (8 \beta -9) k^2-e^{4 x} (72 x+8 \beta +9) k+e^{6 x} (72 x+32 \beta +9)\right) K_7}{6 \left(e^{2 x}-k\right)^3 \sqrt{\beta } J_1^2}\right],
\end{eqnarray}
which represents the square of the speed of sound and where we make the additional substitutions
\begin{eqnarray}
K_7J_1&=&\sqrt{\beta } \left(\left(k+e^{2 x}\right)^2 \left(9 k^2-4 k e^{2 x} (2 \beta +9 x)-18 k e^{2 x}+4 e^{4 x} (4 \beta +9 x)+9 e^{4 x}\right) \right. \nonumber \\
&&\left.+4 e^{2 x} \left(k+e^{2 x}\right) \left(k^2 (4 \beta +9 x)-2 k e^{2 x} (2 \beta +9 x)+e^{4 x} (4 \beta +9 x)\right)\right), \nonumber \\
K_8J_1&=&9 k^3+(8 \beta -9) k^2 e^{2 x}-k e^{4 x} (8 \beta +72 x+9)+e^{6 x} (32 \beta +72 x+9), \nonumber \\
K_9&=&\left(K_1-2 \left(k^2-4 e^{2 x} k+e^{4 x}\right) \sqrt{\beta }\right) \sqrt{\frac{2 \sqrt{\beta } K_1+K_2}{\left(e^{2 x}-k\right)^2 \beta }}, \nonumber \\
K_{10}&=&\frac{9 k^2-18 e^{2 x} k-4 e^{2 x} (9 x+\beta ) k+9 e^{4 x}+4 e^{4 x} (9 x+5 \beta )+K_5 \sqrt{\beta }}{\left(e^{2 x}-k\right)^2 \beta }-\frac{4 e^{2 x} \left(2 \sqrt{\beta } K_1+K_2\right)}{\left(e^{2 x}-k\right)^3 \beta }, \nonumber \\
K_{11}&=& 9 k^2-18 e^{2 x} k-4 e^{2 x} (9 x+2 \beta ) k+9 e^{4 x}+4 e^{4 x} (9 x+4 \beta ), \nonumber
\end{eqnarray}
to alleviate the complexity of the expression.
Using our expressions (\ref{116}) and (\ref{117}) we are now able to find expressions for the weak, strong and dominant energy conditions in the forms
\begin{eqnarray}
\frac{\rho -p}{C}&=&\frac{2 \sqrt{\beta } K_{13}\left(K_1-2 \left(k^2-4 e^{2 x} k+e^{4 x}\right) \sqrt{\beta }\right) \tanh \left(B+\frac{3}{2}K_3\right)}{9 \left(e^{2 x}-k\right)^2 x^2}-\frac{K_{12}}{27 \left(e^{2 x}-k\right)^4 x^2 \beta }+\frac{ J_2K_6\left(e^{2 x}+k\right)}{27 \left(e^{2 x}-k\right)^5 x J_1\sqrt{\beta } } \nonumber \\
&&+\frac{\left(e^{4 x}-k^2\right) \sqrt{\beta } K_8+\left(e^{2 x}-k\right) \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)-8 e^{2 x} \sqrt{\beta }J_1-4K_2e^{2 x}}{3 \left(e^{2 x}-k\right)^3 \beta }, \nonumber \\\\ \label{118}
\frac{\rho +p}{C}&=&\frac{2 \sqrt{\beta }K_{13} \left(2 \left(k^2-4 e^{2 x} k+e^{4 x}\right) \sqrt{\beta }-K_1\right) \tanh \left(B+\frac{3}{2}K_3 \right)}{9 \left(e^{2 x}-k\right)^2 x^2}-\frac{4 e^{2 x} \left(\left(k^2+e^{4 x}\right) (9 x+5 \beta )+2 e^{2 x} k (9 x+\beta )\right)}{3 \left(e^{2 x}-k\right)^3 \beta } \nonumber \\
&&+\frac{\left(e^{4 x}-k^2\right) \sqrt{\beta } K_8+\left(e^{2 x}-k\right) \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)-8 e^{2 x} \sqrt{\beta }J_1}{3 \left(e^{2 x}-k\right)^3 \beta } \nonumber \\
&&+\frac{K_{12}}{27 \beta \left(e^{2 x}-k\right)^4 x^2 }+\frac{ J_2K_6\left(e^{2 x}+k\right)}{27 \left(e^{2 x}-k\right)^5 x J_1\sqrt{\beta } },
\label{119} \\
\frac{\rho +4p}{C}&=&\frac{K_{13} \left(2 \left(k^2-4 e^{2 x} k+e^{4 x}\right) \sqrt{\beta }-K_1\right) \tanh \left(B+\frac{3}{2} K_3\right)}{9 \left(e^{2 x}-k\right)^2 x^2}-\frac{4 e^{2 x} \left(\left(k^2+e^{4 x}\right) (9 x+5 \beta )+2 e^{2 x} k (9 x+\beta )\right)}{3 \left(e^{2 x}-k\right)^3 \beta }\nonumber \\
&&+\frac{\left(e^{4 x}-k^2\right) \sqrt{\beta } K_8+\left(e^{2 x}-k\right) \left(9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9)\right)-8 e^{2 x} \sqrt{\beta }J_1}{3 \left(e^{2 x}-k\right)^3 \beta } \nonumber \\
&& +\frac{4K_{12}}{27 \beta \left(e^{2 x}-k\right)^4 x^2 }+\frac{ J_2K_6\left(e^{2 x}+k\right)}{27 \left(e^{2 x}-k\right)^5 x J_1\sqrt{\beta } }, \label{120}
\end{eqnarray}
respectively. We have additionally set\\
$K_{12}=2 e^{4 x} \left(9 k^2 \left(27 x^2+4 \beta x+4 \beta ^2\right)-\beta ^{3/2} J_1\right)-4 e^{2 x} k \left(7 J_1 \beta ^{3/2}+k^2 \left(81 x^2+26 \beta ^2\right)\right)+e^{8 x} (9 x-2 \beta )^2-2 k^2 J_1 \beta ^{3/2}+k^4 (9 x-2 \beta )^2-4 e^{6 x} k \left(81 x^2+26 \beta ^2\right)$,
$K_{13}= \sqrt{\frac{K_2+2 \sqrt{\beta }J_1}{\left(e^{2 x}-k\right)^2 \beta }}$ and
$K_{14}=\frac{K_4}{27 \left(e^{2 x}-k\right)^4 x^2 \beta }$.\\
The Chandrasekhar stability index is given by
\begin{eqnarray}
\Gamma &=& \left(e^{2 x}-k\right) \left(\left(e^{2 x}+k\right)K_{15}\right)^{3/2} \left(\frac{\beta K_{16}}{(9 x+4 \beta ) k^3-9 e^{2 x} x k^2-9 e^{4 x} x k+e^{6 x} (9 x+4 \beta )}\right. \nonumber\\
&& \left. +\frac{3 \left(e^{2 x}-k\right)^2 \beta^{3/2} \text{sech}^2 \left(B+\frac{3 }{2}K_3\right) K_{17}}{2 K_{15}\left(2 \sqrt{\beta} K_1+K_2\right)}\right) \times \left(\frac{K_{12}}{27 \left(e^{2 x}-k\right)^4 x^2 \beta}+\frac{\left(k+e^{2 x}\right) K_6 K_{18}}{27 \left(e^{2 x}-k\right)^5 x \sqrt{\beta} J_1} \right. \nonumber\\
&& \left. +\frac{2 \left(2 \left(k^2-4 e^{2 x} k+e^{4 x}\right) \sqrt{\beta}-K_1\right) \sqrt{2 \sqrt{\beta} J_1+K_2 } \tanh \left(B+\frac{3 }{2}K_3\right)}{9 \left(e^{2 x}-k\right)^3 x^2} \right. \nonumber\\
&& \left. +\frac{\left(e^{2 x}-k\right) K_{19}-8 e^{2 x} \sqrt{\beta } J_1-4 e^{2 x} K_2+\left(e^{4 x}-k^2\right) K_8 \sqrt{\beta }}{3 \left(e^{2 x}-k\right)^3 \beta }\right) \div \nonumber\\
&&\left[\frac{1}{27 \left(e^{2 x}-k\right)^4 x^2 \beta }\left(2 \left(k+e^{2 x}\right)^2 x \beta \left(9 \left((9 x+4 \beta ) J_1-k^2 \sqrt{\beta } (27 x+8 \beta )\right) k^8-36\beta k^7 e^{2 x}\left(20 x^2+6 x+3\right)J_1 \right.\right.\right. \nonumber \\
&&\left.\left.\left.-4 e^{2 x} \left(\left(8\beta ^2 (2 x-1) +81 x \left(4 x^2+2 x+1\right)\right) J_1-2 k^2 \sqrt{\beta } \left(8 (2 x-1) \beta ^2+9 (4 (x-1) x+7) \beta +243 x\right)\right) k^7 \right.\right.\right.\nonumber \\
&&\left.\left.\left.-e^{4 x} \left(\sqrt{\beta } \left(64 (95 x-27) \beta ^2+72 (64 x (6 x-1)+15) \beta +243 x \left(128 x^2+21\right)\right) k^2+4 J_1 \left(648 (2 \beta -1) x^2\right)\right) k^6 \right.\right. \right.\nonumber \\
&&\left.\left. \left.+4 e^{6 x} \left(4 \sqrt{\beta } \left(-16 (19 x+11) \beta ^2+36 (x (28 x-17)+1) \beta +243 \left(16 x^3+x\right)\right) k^2+36 \beta \left(20 x^2-78 x+3\right)J_1 \right.\right.\right.\right.\nonumber \\
&&\left.\left.\left.\left.+J_1 \left(-8 (114 x+13) \beta ^2 +81 x (2 x (18 x-5)+1)\right)\right) k^5 +2\sqrt{\beta } e^{8 x} \left(32 (43-65 x) \beta ^2+576 (6 x (4 x+1)+1) \beta\right)k^2\right.\right. \right.\nonumber \\
&&\left.\left. \left.+2 e^{8 x} \left(\sqrt{\beta } \left( 243 x \left(64 x^2+11\right)\right) k^2+J_1 (x (-64 (9 x+\beta ) (18 x-11 \beta )-405)-108 \beta )\right) k^4 +32\beta ^2k^3 e^{10 x}(13-114 x) \right.\right.\right. \nonumber \\
&&\left.\left.\left.+4 e^{10 x} \left(\left(9 \left(20 x^2+78 x+3\right) \beta +81 x (2 x (18 x+5)+1)\right) J_1-4 k^2 \sqrt{\beta } \left(27 \left(76 x^2+5\right) \beta +243 x \left(32 x^2+3\right)\right)\right) k^3 \right.\right. \right.\nonumber \\
&&\left.\left.\left.-2 e^{12 x} \left(\sqrt{\beta } \left(32 (65 x+43) \beta ^2-576 (6 x (4 x-1)+1) \beta -243 x \left(64 x^2+11\right)\right) k^2+2k^2(48 \beta (11 \beta +9)-81) x J_1 \right.\right. \right. \right.\nonumber \\
&&\left.\left.\left.\left.+2 J_1 \left(648 (2 \beta +1) x^2+2 \beta (52 \beta -9)\right)\right) k^2+16\sqrt{\beta }k^2 e^{14 x}\left(16 (11-19 x) \beta ^2+36 (x (28 x+17)+1) \beta\right) \right. \right.\right.\nonumber \\
&&\left.\left.\left.+4 e^{14 x} \left(4 k^2 \sqrt{\beta } \left(243 \left(16 x^3+x\right)\right)-\left(8 (2 x+1) \beta ^2+9 \left(20 x^2-6 x+3\right) \beta +81 x \left(4 x^2-2 x+1\right)\right) J_1\right) k\right.\right. \right.\nonumber \\
&&\left.\left.\left.+8 e^{18 x} \sqrt{\beta } \left(8 (2 x+1) \beta ^2+9 (4 x (x+1)+7) \beta +243 x\right) k-e^{4x}\left((48 \beta (11 \beta -9)-81) x-2 \beta (52 \beta +9)\right)J_1 \right. \right.\right. \nonumber \\
&&\left.\left.\left.+e^{16 x} \left(9 (9 x+4 \beta ) J_1-k^2 \sqrt{\beta } \left(64 (95 x+27) \beta ^2+72 (64 x (6 x+1)+15) \beta +243 x \left(128 x^2+21\right)\right)\right)\right.\right. \right.\nonumber \\
&&\left.\left. \left. -11008k^5 \beta ^{5/2} x e^{10x} -9 e^{20 x} \sqrt{\beta } (27 x+8 \beta )\right) K_4 \right)-\frac{2 \sqrt{\beta } K_9 \tanh \left(B+\frac{3 }{2}K_3\right)}{9 \left(e^{2 x}-k\right)^2 x^2}\right],
\end{eqnarray}
where
\begin{eqnarray}
K_{15}&=&\left(k+e^{2 x}\right) \left((9 x+4 \beta ) k^2-2 e^{2 x} (9 x+2 \beta ) k+e^{4 x} (9 x+4 \beta )\right), \nonumber \\
K_{16}&=& 8k^7e^{2x}\left(81 (4 x+1) x^2+18 (19 x-13) \beta x+4 (22 x-27) \beta ^2\right)-\left(4 (9 x+4 \beta ) (9 x-2 \beta ) k^2+\sqrt{\beta } (27 x+16 \beta ) K_1\right) k^6 \nonumber \\
&&+4 e^{2 x} \left(2 \left(81 (4 x+1) x^2+18 (19 x-13) \beta x+4 (22 x-27) \beta ^2\right) k^2+K_1 \sqrt{\beta } (x (162 x+74 \beta -81)-52 \beta )\right) k^5 \nonumber \\
&&-e^{4 x} \left(8 k^2 \left(81 (8 x-1) x^2-18 (2 x+37) \beta x-8 (21 x+22) \beta ^2\right)-\sqrt{\beta } K_1 (208 \beta +x (864 x+800 \beta +729))\right) k^4 \nonumber \\
&&-8 e^{6 x} \left(\left(81 (4 x+3) x^2+18 (19 x+33) \beta x+4 (43-42 x) \beta ^2\right) k^2+18 K_1 x \sqrt{\beta } (21 x+\beta )\right) k^3 \nonumber \\
&&+e^{8 x} \left(64 x \left(162 x^2-9 \beta x+22 \beta ^2\right) k^2+K_1 \sqrt{\beta } (x (864 x+800 \beta -729)-208 \beta )\right) k^2+4 e^{16 x} (9 x+4 \beta ) (9 x-2 \beta ) \nonumber \\
&&+8 e^{14 x} \left(81 (4 x-1) x^2+18 (19 x+13) \beta x+4 (22 x+27) \beta ^2\right) k+4ke^{10 x}\sqrt{\beta } K_1 (52 \beta +x (162 x+74 \beta +81)) \nonumber \\
&&-4 e^{10 x} \left(2 k^2 \left(81 (4 x-3) x^2+18 (19 x-33) \beta x-4 (42 x+43) \beta ^2\right)\right) k+\sqrt{\beta }e^{12 x} (27 x+16 \beta ) K_1 \nonumber \\
&&+e^{12 x} \left(-8 k^2 \left(81 (8 x+1) x^2+18 (37-2 x) \beta x+8 (22-21 x) \beta ^2\right)\right), \nonumber \\
K_{17}&=&6 e^{4 x} \left(e^{-2 x} k^2-e^{2 x}\right) \left(-2 e^{8 x} \beta ^{3/2}+4 e^{6 x} k (27 x+13 \beta ) \sqrt{\beta }+k^2 \left((9 x+\beta ) K_1-2 k^2 \beta ^{3/2}\right)\right)-36 k^2 \sqrt{\beta }e^{4 x} (6 x+\beta ) \nonumber \\
&&+e^{4 x} \left((9 x+\beta ) K_1+2 e^{2 x} k \left(2 \sqrt{\beta } (27 x+13 \beta ) k^2+K_1 (7 \beta -9 x)\right)\right) \left(e^{-2 x} \left(k^2+e^{4 x}\right) (9 x+4 \beta )-2 k (9 x+2 \beta )\right) \nonumber \\
&&+\left(\left(k^2 \sqrt{\beta } (9 x+4 \beta ) (9 x+8 \beta )-\left(162 x^2+117 \beta x+16 \beta ^2\right) K_1\right) k^6+4 K_1 e^{2 x}\left(9 (28 x-3) \beta x+26 (3 x-2) \beta ^2\right)k^5 \right.\nonumber \\
&&\left.+2 e^{2 x} \left(3 \sqrt{\beta } \left(27 (20 x-19) x^2+6 (82 x-93) \beta x+16 (7 x-9) \beta ^2\right) k^2+2 K_1 \left(162 (x+1) x^2\right)\right) k^5 \right. \nonumber \\
&&\left.-e^{4 x} \left(\left(162 (16 x+5) x^2+27 (32 x-21) \beta x-16 (24 x+13) \beta ^2\right) K_1-2 k^2 \sqrt{\beta } \left(81 (71-40 x) x^2\right)\right) k^4 \right. \nonumber \\
&&\left.+2 e^{6 x} \left(\sqrt{\beta } \left(-81 (20 x+87) x^2-18 (82 x+225) \beta x+16 (27 x-43) \beta ^2\right) k^2+72 K_1 x \left(27 x^2-2 \beta x+\beta ^2\right)\right) k^3 \right. \nonumber \\
&&\left.+e^{8 x} \left(96 k^2 x \sqrt{\beta } \left(135 x^2+21 \beta x+14 \beta ^2\right)-\left(162 (16 x-5) x^2+27 (32 x+21) \beta x+16 (13-24 x) \beta ^2\right) K_1\right) k^2 \right. \nonumber \\
&&\left.+2 e^{10 x} \left(\sqrt{\beta } \left(81 (87-20 x) x^2+18 (225-82 x) \beta x+16 (27 x+43) \beta ^2\right) k^2+2 K_1 \left(162 (x-1) x^2+9 (28 x+3) \beta x\right)\right) k \right. \nonumber \\
&&\left.+6 e^{14 x} \sqrt{\beta } \left(27 (20 x+19) x^2+6 (82 x+93) \beta x+16 (7 x+9) \beta ^2\right) k +72k^6\beta^{3/2}e^{4x} (123-14 x) x\right. \nonumber \\
&&\left.+e^{12 x} \left(\left(162 x^2+117 \beta x+16 \beta ^2\right) K_1-2 k^2 \sqrt{\beta } \left(81 (40 x+71) x^2+36 (14 x+123) \beta x+16 (44-27 x) \beta ^2\right)\right) \right. \nonumber \\
&&\left.+104k\beta^2 (3 x+2)K_1-32 k^6 \beta ^{5/2}e^{4x} (27 x+44) -e^{16 x} \sqrt{\beta } (9 x+4 \beta ) (9 x+8 \beta )\right) \sinh (2 B+3 K_3) \sqrt{\frac{K_2+2 K_1 \sqrt{\beta }}{\left(e^{2 x}-k\right)^2 \beta }}, \nonumber \\
K_{18}&=&9 k^4-16 e^{4 x} (9 x+\beta ) k^2+2 e^{2 x} \left((36 x+20 \beta -9) k^2+8 J_1 \sqrt{\beta }\right) k+2 e^{6 x} (36 x+20 \beta +9) k-9 e^{8 x}, \nonumber \\
K_{19}&=&9 k^2-2 e^{2 x} (18 x+2 \beta +9) k+e^{4 x} (36 x+20 \beta +9). \nonumber
\end{eqnarray}
Inserting the energy density profile into the gravitational mass formula gives
\begin{eqnarray}
M&=&\frac{1}{3\sqrt{\beta }C^{3/2}}\int x^{3/2} \left(\frac{e^{4 x} (20 \beta +36 x+9)-8 \sqrt{\beta } J_1 e^{2 x}+\left(e^{2 x}-k\right) \left(9 k^2-2 k e^{2 x} (2 \beta +18 x+9)\right)}{\sqrt{\beta } \left(e^{2 x}-k\right)^3} \right. \nonumber \\
&&\left.+\frac{ K_8 \sqrt{\beta }\left(e^{2 x}+k\right)-4 K_2 e^{2 x}}{\sqrt{\beta } \left(e^{2 x}-k\right)^2}+\frac{\left(e^{2 x}+k\right) J_2 K_6 }{9 \left(e^{2 x}-k\right)^5xJ_1}\right) \, dx ,
\end{eqnarray}
for a 5-$d$ hypersphere of perfect fluid.
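Since this integral resists closed form evaluation, it may be handled by quadrature. The sketch below (our own; \texttt{bracket} is a placeholder profile standing in for the lengthy bracketed integrand, and the parameter values are arbitrary samples) illustrates the procedure:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

beta, C = 0.4, 0.06                      # sample parameter values
x = np.linspace(1e-4, 0.01, 400)         # illustrative radial range

def bracket(x):
    return 3.0 - 2.0 * x                 # hypothetical stand-in profile

integrand = x**1.5 * bracket(x) / (3.0 * np.sqrt(beta) * C**1.5)
M = cumulative_trapezoid(integrand, x, initial=0.0)  # M(x), with M(0) = 0
print(M[-1])                             # total mass at the outer limit
\end{verbatim}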
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig1.png}
\includegraphics[width=0.43\linewidth]{Fig7.png}
\caption{The pressure profile is plotted as a function of radius for $5D$ and $6D$ EGB gravity, respectively.
} \label{f1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig2.png}
\includegraphics[width=0.43\linewidth]{Fig8.png}
\caption{Variation of density as a function of the radial
coordinate for compact star. The labels of the curves are the same as given in Fig. \ref{f1}.} \label{f2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig3.png}
\includegraphics[width=0.43\linewidth]{Fig9.png}
\caption{Variation of speed of sound with radius. } \label{f3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig4.png}
\includegraphics[width=0.43\linewidth]{Fig10.png}
\caption{Variation of energy conditions with radius. } \label{f4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig5.png}
\includegraphics[width=0.45\linewidth]{Fig11.png}
\caption{Variation of adiabatic index with radius. } \label{f5}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\linewidth]{Fig6.png}
\includegraphics[width=0.43\linewidth]{Fig12.png}
\caption{Mass as a function of the radial
coordinate for compact star. In both
cases the parameters are given in Fig. \ref{f1}. } \label{f6}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.49\linewidth]{Fig19.png}
\includegraphics[width=0.45\linewidth]{Fig20.png}
\caption{Variation of hydrodynamic ($F_h$) and gravitational ($F_g$) forces versus the radial coordinate $x$. } \label{f9}
\end{figure}
\subsection{Physical admissibility of the five dimensional solution}
We now proceed to analyse, in a qualitative sense, the physical properties of our new conformally flat spacetime in EGB. The very complicated and lengthy expressions that emanate prohibit an analytical treatment of the physics; therefore we have resorted to a graphical analysis using suitably selected parameter spaces.
The physical characteristics of the 5-$d$ hypersphere are displayed in the left panels of each of the figures while the 6-d plots appear on the right. In the case of the 5-d plots we have selected the parameters as follows after extensive empirical fine-tuning: $A=4, ~B=1,~C=0.06, k=0.6,~ \beta =0.4$ and $\alpha =0.01$.
It is worth noting at this point that the specific models we are presenting are only valid for the choices of parameter space in both 5 and 6 dimensions. There are a number of parameters that are moving parts in the system so it is not possible to say that the solution is generic. We have exhibited the existence of at least one physically well behaved model for the general EGB spacetime we have found in 5 and 6D. Researchers working in simulations may utilise our general solution and incorporate data pertaining to known structures to make an identification with observed phenomena such as neutron stars, white dwarfs, cold planets and so on.
\begin{enumerate}
\item [(i)] From Fig. \ref{f1} it may be observed that the pressure decreases smoothly towards the surface layer of the star. The pressure vanishes for some finite radius corresponding approximately to $x = 0.01$ which defines the boundary of the star. This immediately suggests that the model may represent a compact star. \\
\item[(ii)] The density profile plotted as a function of the radial coordinate $x$ in Fig. \ref{f2} reveals that the density is a monotonically decreasing function as one moves from the stellar center towards the stellar boundary. This is considered to be physically reasonable. \\
\item[(iii)] In order to prevent superluminal sound speeds within the stellar fluid it is required that the condition $0 \leq \frac{dp}{d\rho} \leq 1$ be satisfied throughout the stellar interior. From Fig. \ref{f3} we observe that this criterion is met everywhere except at the centre of the star. In order for our model to obey causality requirements we must excise the region from $r = 0$ to some finite radius $r = r_0$, thus removing this portion of the solution. We can then replace this portion with a well--behaved solution from the centre ($r = 0$) to $r = r_0$ and match it to our present solution from $r = r_0$ to $r = R$, where $R$ is the boundary of the star. In this way we can effectively create core--envelope models of compact objects in 5D EGB gravity which, to our knowledge, do not appear in the literature. We do not pursue this aspect now; however, we have indicated a work-around for the presence of the undesirable central singularity. \\
\item[(iv)] The physical admissibility of our solutions requires that the following energy conditions, viz., (i) weak energy condition (WEC), (ii) strong energy
condition (SEC) and (iii) dominant energy condition (DEC) hold true at each interior point of the star.
These conditions are equivalent to
\begin{eqnarray}
\text{WEC}: \rho - p \geq 0,~~~\text{SEC}: \rho+p \geq 0,~~~
\text{DEC}: \rho+4p \geq 0.\label{eq24}
\end{eqnarray}
It is clear from Fig. \ref{f4}. that all three energy conditions are satisfied for the 5-$d$ model we have developed and for our choice of parametric space.\\
\item[(v)] In the absence of anisotropy or dissipation it is well known that there is no upper mass limit if the EoS has an adiabatic index $\Gamma > 4/3$, where
\begin{equation}
\Gamma=\frac{p+\rho}{p}\,\frac{dp}{d\rho}\, ,\label{eq31}
\end{equation}
characterising stable regions within the fluid sphere. This result has been extended to include pressure anisotropy, viscosity and heat flow by Chan and co-workers \cite{st1,st2}. They showed that dissipation in particular renders the system less unstable by diminishing the total mass entrapped inside the fluid sphere. Fig. \ref{f5} reveals an interesting peculiarity in the 5D stability index. The central portion of the star appears to be more stable than regions some finite radius away from the centre. This behaviour is unphysical in a realistic stellar model: the central density is highest there, and one would expect more heating and thus thermodynamically less stable regions. As one moves away to the cooler regions, heat generation would drop off, leading to stable regions. The adiabatic index thus points to the fact that our solutions would be suitable for describing a core--envelope model of a compact object in EGB gravity. \\
\item[(vi)] The mass profile for the 5D model is plotted in Fig. \ref{f6}. We observe that the mass function increases as one tends away from the centre of the stellar configuration with $m(0) = 0$ which is expected as more mass is contained within larger concentric shells. \\
\item[(vii)] A star remains in static equilibrium if the sum of the two active forces on the system, namely the gravitational force and the hydrostatic force, is zero. Along these lines, a wide class of astrophysical solutions
(including wormholes/compact stars) have been studied ( see, e.g.,
Refs. \cite{Rahaman:2013xoa,Rahaman:2014dpa,Sarkar:2019lay,Maurya:2018kxg}).
This condition is formulated mathematically via,
\begin{eqnarray}\label{mtov}
-\frac{dp}{dx}-{\dot{y}\over y}(\rho+p)=0, ~~~\text{or}~~~F_h+F_g =0,
\end{eqnarray}
where the first term represents the hydrodynamic force ($F_h$) and the second term the gravitational force ($F_g$). The variation of the forces is shown in Fig. \ref{f9}, and our system is stable in terms of the equilibrium of forces; a schematic numerical check of this balance is sketched immediately after this list.
\end{enumerate}
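To make items (v) and (vii) concrete, the following numpy sketch (our own illustration; the profiles $p$, $\rho$ and $y$ below are hypothetical stand-ins, not the closed-form expressions of this paper) evaluates the two forces of (\ref{mtov}) and the adiabatic index of (\ref{eq31}) on a grid:
\begin{verbatim}
import numpy as np

x = np.linspace(0.001, 0.01, 500)
p = 1.2 - (x / 0.01)**2            # hypothetical pressure profile
rho = 3.0 - 2.0 * (x / 0.01)**2    # hypothetical density profile
y = np.sqrt(x) * np.cosh(0.5 * np.log(x) + 1.0)   # placeholder y(x)

F_h = -np.gradient(p, x)                       # hydrodynamic force, -dp/dx
F_g = -(np.gradient(y, x) / y) * (rho + p)     # gravitational force
print(np.max(np.abs(F_h + F_g)))               # ~0 only for a true solution

# adiabatic index, as in the expression for Gamma above
Gamma = (p + rho) / p * np.gradient(p, x) / np.gradient(rho, x)
print(Gamma.min())                 # stability heuristic: Gamma > 4/3
\end{verbatim}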
The analysis above suggests that the conformally flat stellar model we have derived does exhibit pleasing physical behaviour consistent with the elementary requirements. In contrast with the other conformally flat solution, the interior Schwarzschild metric, this solution displays a variable energy density and consequently does not suffer from the pathology of an infinite sound speed.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Fig13.png}\\
\includegraphics[width=1\linewidth]{Fig14.png}\\
\includegraphics[width=1\linewidth]{Fig15.png}
\caption{Pressure, energy density and speed of sound versus radial pulsation $x + \epsilon x$ for $\epsilon$ of the order $0.1$, $ 0.01$ and $0.001$, respectively. The labels of the curves are the same as given in Fig. \ref{f1}.} \label{f7}
\end{figure}
\section{Six dimensional case}\label{sec6}
We now turn our attention to the remaining critical dimension in EGB, namely the six dimensional case. For the six dimensional metric
\begin{equation}
ds^{2} = -e^{2 \nu} dt^{2} + e^{2 \lambda} dr^{2} + r^{2}d\Omega^2, \label{5b}
\end{equation}
where $d\Omega^2= d\theta^{2} + \sin^{2} \theta d \phi^{2} + \sin^{2} \theta \sin^{2} \phi d\psi^{2} + \sin^{2} \theta \sin^{2} \phi \sin^{2}\psi d\eta^{2}$,
the EGB field equations (\ref{2}), when expanded, amount to the system
\begin{eqnarray}
\rho &=& \frac{1}{e^{4 \lambda} r^{4}} \left[ ( 4 r^{3} e^{2 \lambda} - 48 \alpha r ( 1 - e^{2 \lambda}) ) \lambda^{\prime} - 6 r^{2} e^{2 \lambda} (1 - e^{2 \lambda}) + 12 \alpha (e^{2 \lambda} - 1)^{2} \right], \label{7a} \\ \nonumber \\
p &=& \frac{1}{e^{4 \lambda} r^{4}} \left[ (1 - e^{2 \lambda}) (6 r^{2} e^{2 \lambda} - 48 \alpha r \nu^{\prime} + 12 \alpha e^{2 \lambda} - 12 \alpha) + 4 r^{3} e^{2 \lambda} \nu^{\prime} \right], \label{7b} \\ \nonumber \\
p &=& \frac{1}{e^{4 \lambda} r^{2}} \left( (12 \alpha (e^{2 \lambda} - 1) + r^{2} e^{2 \lambda}) (\nu^{\prime \prime} + (\nu^{\prime})^{2} - \nu^{\prime} \lambda^{\prime}) + 24 \alpha \nu^{\prime} \lambda^{\prime} \right) \nonumber \\
& \quad & + \frac{1}{e^{4 \lambda} r^{3}} \left( (3 r^{2} e^{2 \lambda} + 12 \alpha (e^{2 \lambda} - 1)) (\nu^{\prime} - \lambda^{\prime}) + 3 r e^{2 \lambda} (1 - e^{2 \lambda}) \right), \label{7c}
\end{eqnarray}
in the canonical spherical coordinates $ (x^{a}) $. The nonlinearity in the system (\ref{7a})--(\ref{7c}) has now greatly increased because of the presence of the EGB coupling parameter $ \alpha $ and further dynamical terms that were suppressed in 5-d.
The system (\ref{7a})--(\ref{7c}) may be converted to the equivalent form
\begin{eqnarray}
\frac{ 12\beta x (Z - 1) \dot{Z} - 4 x^{2} \dot{Z} - 6 x (Z - 1) + 3\beta (Z - 1)^{2} }{x^{2}} &=& \frac{\rho}{C}, \label{9a} \\ \nonumber \\
\frac{\left( 24\beta x (1 - Z) + 8 x^{2} \right) Z \dot{y} + (Z - 1) (6x + 3\beta (1 - Z)) y}{x^{2} y} &=& \frac{p}{C}, \label{9b} \\ \nonumber \\
4 x^{2} Z \left( x + 3\beta [1-Z] \right) \ddot{y} + 2 x \left( x^{2} \dot{Z} + 3 \beta \left[ (1 - 3 Z) \dot{Z}x - 2 Z ( 1 - Z) \right] \right) \dot{y} \nonumber \\ + 3 \left(\beta (1-Z)+x\right)\left(\dot{Z}x-Z+1\right) y &=& 0. \label{9d}
\end{eqnarray}
through the coordinate change $ x = C r^{2} $, $ e^{2 \nu} = y^{2} (x) $ and $ e^{- 2 \lambda} = Z(x) $. Note that (\ref{9d}) is the equation of pressure isotropy in six dimensional EGB theory, where, as before, $\beta = 4\alpha C$. It has been expressed as a linear second order differential equation in $ y $ (if $ Z $ is a known quantity), which is the great advantage of this coordinate choice. The condition of pressure isotropy may also be expressed as
\begin{eqnarray}
\left[ x^{2} (2 x \dot{y} + 3 y) + 3\beta x (2 x \dot{y} + y - (6 x \dot{y} + y) Z) \right] \dot{Z} \nonumber \\
- 3 \beta \left[ 4 x^{2} \ddot{y} - 4 x \dot{y} - y \right] Z^{2} + \left[ x (4 x^{2} \ddot{y} - 3 y) + 6\beta (2 x^{2} \ddot{y} - 2 x \dot{y} - y) \right] Z
+ 3 (x + \beta) y &=& 0 \label{9e}
\end{eqnarray}
a nonlinear first order differential equation in $ Z $ (if $y$ is a known quantity). We observe that equation (\ref{9a}) is easily integrated if we assume that the density is constant, i.e., $\rho = \rho_0$. Integration of (\ref{9a}) yields
\begin{eqnarray}
Z(x) &=& 1 + \frac{x}{3\beta} \left(1 \pm \sqrt{\frac{9c_1\beta^2 }{x^{\frac{5}{2}}} + \frac{3\beta \rho_0 + 5C}{5C} } \right) , \label{102}
\end{eqnarray}
where $c_1$ is a constant of integration. Note that (\ref{102}) when written in the form
\begin{equation}
Z = 1 + \frac{x}{3\beta} \pm \frac{1}{3\beta} \sqrt{\frac{9c_1 \beta^2}{x^{1/2}}+ \left(1 + \frac{3\beta \rho_0}{5C} \right)x^2}
\end{equation}
reveals a singularity at the centre $x = 0$, since near the centre $Z \sim 1 \pm \sqrt{c_1}\, x^{-1/4}$, unlike the 5--dimensional case which was singularity free. This point further illustrates the manifestly different physics inherent in 5 and 6 dimensional EGB gravity. Note that the six dimensional vacuum solution of Boulware and Deser \cite{boul} is regained from the result (\ref{102}) by setting the constant density $\rho_0 = 0$ and is given by
\begin{eqnarray}
Z(x) &=& 1 + \frac{x}{3\beta} \left(1 \pm \sqrt{1 + \frac{ 9c_1\beta^2 }{x^{\frac{5}{2}}} } \right). \label{102a}
\end{eqnarray}
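As with the five dimensional case, (\ref{102}) may be verified against (\ref{9a}) symbolically; a minimal sympy sketch of this check (again our own, not part of the original development) is
\begin{verbatim}
import sympy as sp

x, beta, C, rho0, c1 = sp.symbols('x beta C rho_0 c_1', positive=True)
Z = 1 + x / (3 * beta) * (1 + sp.sqrt(9 * c1 * beta**2 / x**sp.Rational(5, 2)
                                      + (3 * beta * rho0 + 5 * C) / (5 * C)))
Zd = sp.diff(Z, x)
lhs = (12 * beta * x * (Z - 1) * Zd - 4 * x**2 * Zd
       - 6 * x * (Z - 1) + 3 * beta * (Z - 1)**2) / x**2      # LHS of (9a)
print(sp.simplify(lhs - rho0 / C))    # expected output: 0
\end{verbatim}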
Bogdanos {\it{et al}} \cite{bogd} have analysed the 6--dimensional case in EGB and demonstrated the validity of Birkhoff's theorem for this order.
Equation (\ref{9e}) is an Abel differential equation of the second kind and admits exact solutions in only a few special cases.
Inserting (\ref{111}) into the pressure isotropy equation gives
\begin{equation}
\left(x \dot{Z}-Z+1\right) \left(-3 \beta \sqrt{Z} \sinh f(x) +(3 \beta +2 x) \cosh f(x) -6 \beta Z \cosh f(x) \right) = 0 \label{150}
\end{equation}
after factorization. As for the 5 dimensional case, the Schwarzschild interior solution emanates from the vanishing of the first factor on the left hand side of (\ref{150}). The other factor generates the condition
\begin{equation}
\tanh f(x) = \frac{2x + 3\beta (1 -2Z)}{3\beta \sqrt{Z}} \label{151}
\end{equation}
which has a form remarkably similar to that of the 5 dimensional case, despite the presence of extra terms in the pressure isotropy equation.
Following the same procedures as in the previous section the general solution of (\ref{151}) has the form
\begin{equation}
Z =\frac{(e^{4 x} + k^2) (15 \beta +8 x)-2k (9 \beta +8 x) e^{2 x }}{24 \beta \left(e^{2 x}-k\right)^2}
\pm \frac{ \sqrt{ \left(e^{2 x}+k \right)^2 \left((e^{4 x} +k^2) (27 \beta +16 x)-2 k(21 \beta +16 x) e^{2 x}\right)}}{8 \sqrt{ 3\beta} \left(e^{2 x}-k\right)^2} \label{152}
\end{equation}
where $k$ is a constant of integration. Once again the form of the temporal potential may be obtained via equation (\ref{111}), albeit in terms of an unresolved integral.
We now calculate the physical quantities relevant to our investigation. Not unexpectedly, these expressions are unwieldy on account of the lengthy expressions for the gravitational potentials. The energy density is given by
\begin{eqnarray}
\frac{\rho}{C} &=& \left[4 L_0 x \left(-9 k^3 \left(\sqrt{L_3 \left(k+e^{2 x}\right)^2}-\sqrt{\beta } k^2\right)+k^2 e^{2 x} \left(\sqrt{\beta } k^2 (40 \beta +72 x-9)+(27-16 \beta ) \sqrt{L_3 \left(k+e^{2 x}\right)^2}\right) \right. \right. \nonumber\\
&& \left. \left.
-k e^{4 x} \left(6 \sqrt{\beta } k^2 (-4 \beta +12 x+3)+(16 \beta +27) \sqrt{L_3 \left(k+e^{2 x}\right)^2}\right)+3 e^{6 x} \left(2 \sqrt{\beta } k^2 (4 \beta -12 x+3) \right. \right. \right. \nonumber\\
&& \left. \left. \left.
+3 \sqrt{L_3 \left(k+e^{2 x}\right)^2}\right)+\sqrt{\beta } k e^{8 x} (40 \beta +72 x+9)-9 \sqrt{\beta } e^{10 x}\right)\right] \times \frac{1}{x^2}+\frac{\left(-L_0\right){}^2}{27 \beta x^2 \left(e^{2 x}-k\right)^4} \nonumber\\
&&
+\frac{2 x \left(4 \beta \left(k^2-4 k e^{2 x}+e^{4 x}\right)-9 x \left(e^{2 x}-k\right)^2+2 \sqrt{\beta } L_1\right)}{3 \beta x^2 \left(e^{2 x}-k\right)^2}
\Big/ \left[\frac{27 \beta \left(e^{2 x}-k\right)^5 \sqrt{L_3 \left(k+e^{2 x}\right)^2}}{x^2} \right. \nonumber\\
&& \left.
-4 \left(\frac{\frac{9 \left(k^2-2 k e^{2 x} (2 x+1)+e^{4 x} (4 x+1)\right)}{\beta }+\frac{L_5}{\sqrt{\beta } \sqrt{L_3 \left(k+e^{2 x}\right)^2}}-4 k e^{2 x}+20 e^{4 x}}{9 \left(e^{2 x}-k\right)^2}-\frac{4 L_0 e^{2 x}}{9 \beta \left(e^{2 x}-k\right)^3}\right)\right],
\end{eqnarray}
while the isotropic particle pressure assumes the form
\begin{eqnarray}
\frac{p}{C} &=& \frac{L_0 \left(4 \beta k^2+9 k^2 x+2 \sqrt{\beta } \sqrt{L_3 \left(k+e^{2 x}\right)^2}-2 k e^{2 x} (8 \beta +9 x)+e^{4 x} (4 \beta +9 x)\right)}{27 \beta x^2 \left(e^{2 x}-k\right)^4} \nonumber\\
&&
+ \frac{1}{27 x^2 \left(e^{2 x}-k\right)^2}
\left[8 \sqrt{\frac{9 \beta \left(k^2-2 k e^{2 x}+e^{4 x}\right)+L_0}{\beta \left(e^{2 x}-k\right)^2}} \left(2 \beta k^2-3 k^2 x+\sqrt{\beta } \sqrt{L_3 \left(k+e^{2 x}\right)^2} \right. \right. \nonumber\\
&& \left. \left.
+2 k e^{2 x} (3 x-4 \beta )+e^{4 x} (2 \beta -3 x)\right) \left(\sqrt{\frac{9 \beta \left(k^2-2 k e^{2 x}+e^{4 x}\right)+L_0}{\beta \left(e^{2 x}-k\right)^2}}+L_4\right)\right].
\end{eqnarray}
The expressions below will assist us in analysing the energy conditions.
\begin{eqnarray}
\frac{\rho-p}{C} &=& \left[2 \left(12 \sqrt{\beta } k^6 L_1 \left(-4 \beta ^2+81 x^2+17 \beta x\right)-2 k^5 e^{2 x} \left(k^2 \left(270 \beta (8 x+5) x^2-3645 x^3+48 \beta ^2 (23 x+32) x \right.\right. \right.\right. \nonumber\\
&& \left.\left.\left.\left.
+16 \beta ^3 (4 x+9)\right)-8 \sqrt{\beta } L_1 \left(27 (2 x-9) x^2+\beta ^2 (33-4 x)+6 \beta (5 x+6) x\right)\right) \right.\right. \nonumber\\
&& \left.\left.
+2 k^4 e^{4 x} \left(3 k^2 \left(720 \beta (2 x+5) x^2-2835 x^3+8 \beta ^2 (199-44 x) x-32 \beta ^3 (10 x+1)\right) \right.\right. \right. \nonumber\\
&& \left.\left. \left.
+2 \sqrt{\beta } L_1 \left(27 (45-32 x) x^2+4 \beta ^2 (104 x-33)+9 \beta (64 x-49) x\right)\right)+6 k^3 e^{6 x} \left(k^2 \left(90 \beta (8 x-57) x^2+2835 x^3 \right.\right. \right. \right. \nonumber\\
&& \left.\left. \left. \left. +16 \beta ^2 (23 x-84) x+16 \beta ^3 (1-20 x)\right)+32 \sqrt{\beta } L_1 x \left(-6 \beta ^2+27 x^2-29 \beta x\right)\right)
\right.\right. \nonumber\\
&& \left.\left.
-4 \sqrt{\beta } k^2 e^{8 x} \left(32 \sqrt{\beta } k^2 x (9 x-\beta ) (15 x-2 \beta )+L_1 \left(27 (32 x+45) x^2-4 \beta ^2 (104 x+33)-9 \beta (64 x+49) x\right)\right) \right.\right. \nonumber\\
&& \left.\left.
+2 k e^{10 x} \left(3 k^2 \left(90 \beta (8 x+57) x^2-2835 x^3+16 \beta ^2 (23 x+84) x-16 \beta ^3 (20 x+1)\right) \right.\right. \right. \nonumber\\
&& \left.\left. \left.
+8 \sqrt{\beta } L_1 \left(27 (2 x+9) x^2-\beta ^2 (4 x+33)+6 \beta (5 x-6) x\right)\right)-6 e^{12 x} \left(k^2 \left(-32 \beta ^3-45 (32 \beta +63) x^3 \right.\right. \right. \right. \nonumber\\
&& \left.\left. \left. \left.
+16 \beta (22 \beta +225) x^2+8 \beta ^2 (40 \beta +199) x\right)+2 \sqrt{\beta } L_1 \left(-4 \beta ^2+81 x^2+17 \beta x\right)\right)
\right.\right. \nonumber\\
&& \left.\left.
-3 k^8 (4 \beta +9 x) \left(8 \beta ^2+45 x^2+20 \beta x\right)-2 k e^{14 x} \left(270 \beta (8 x-5) x^2+3645 x^3+48 \beta ^2 (23 x-32) x \right.\right. \right. \nonumber\\
&& \left.\left.\left.
+16 \beta ^3 (4 x-9)\right)+3 e^{16 x} (4 \beta +9 x) \left(8 \beta ^2+45 x^2+20 \beta x\right)\right)\right] \nonumber \\
&&
\Big/ \frac{9 x^2 \left(e^{2 x}-k\right)^2}{243 \beta L_3 x^4 \left(e^{2 x}-k\right)^7 \left(k+e^{2 x}\right) -8 c L_4 \sqrt{\frac{L_3-2 \sqrt{\beta } L_1}{\beta \left(e^{2 x}-k\right)^2}} \left(L_2 +\sqrt{\beta } L_1\right)},
\end{eqnarray}
\begin{eqnarray}
\frac{\rho+p}{C} &=& \left[8 \left(-k^6 L_1 \left(4 \beta ^2+27 x^2+51 \beta x\right)+4 k^5 e^{2 x} \left(L_1 \left(27 (2 x+1) x^2-\beta ^2 (4 x+13)+6 \beta (5 x+3) x\right) \right.\right. \right. \nonumber\\
&& \left.\left.\left.
-\sqrt{\beta } k^2 \left(27 (10 x+7) x^2+2 \beta ^2 (4 x-27)+6 \beta (23 x-11) x\right)\right)+k^4 e^{4 x} \left(L_1 \left(-27 (32 x+5) x^2 \right.\right. \right. \right. \nonumber\\
&& \left.\left. \left.\left.
+52 \beta ^2 (8 x+1)+9 \beta (64 x+1) x\right)-4 \sqrt{\beta } k^2 \left(-108 (5 x+2) x^2+8 \beta ^2 (15 x+11)+3 \beta (44 x+85) x\right)\right) \right. \right. \nonumber\\
&& \left.\left.
+4 k^3 e^{6 x} \left(\sqrt{\beta } k^2 \left(27 (10 x-3) x^2+2 \beta ^2 (43-60 x)+6 \beta (23 x+39) x\right)+12 L_1 x \left(-6 \beta ^2+27 x^2-29 \beta x\right)\right) \right.\right. \nonumber\\
&& \left.\left.
-k^2 e^{8 x} \left(32 \sqrt{\beta } k^2 x (9 x-\beta ) (15 x-2 \beta )+L_1 \left(27 (32 x-5) x^2+52 \beta ^2 (1-8 x)+9 \beta (1-64 x) x\right)\right) \right.\right. \nonumber\\
&& \left.\left.
+4 k e^{10 x} \left(\sqrt{\beta } k^2 \left(27 (10 x+3) x^2-2 \beta ^2 (60 x+43)+6 \beta (23 x-39) x\right)+L_1 \left(27 (2 x-1) x^2 \right.\right. \right.\right. \nonumber\\
&& \left.\left. \left.\left.
+\beta ^2 (13-4 x)+6 \beta (5 x-3) x\right)\right)+e^{12 x} \left(4 \sqrt{\beta } k^2 \left(108 (5 x-2) x^2+8 \beta ^2 (11-15 x)+3 \beta (85-44 x) x\right)
\right.\right. \right. \nonumber\\
&& \left.\left. \left.
+L_1 \left(4 \beta ^2+27 x^2+51 \beta x\right)\right)-2 \sqrt{\beta } k^8 (\beta -12 x) (4 \beta +9 x)-4 \sqrt{\beta } k e^{14 x} \left(27 (10 x-7) x^2
\right.\right. \right. \nonumber\\
&& \left.\left. \left.
+2 \beta ^2 (4 x+27)+6 \beta (23 x+11) x\right)+2 \sqrt{\beta } e^{16 x} (\beta -12 x) (4 \beta +9 x)\right)\right]
\nonumber \\ && \Big/
\frac{8 L_4 \left(\sqrt{\beta } L_1+L_2\right) \sqrt{\frac{L_3-2 \sqrt{\beta } L_1}{\beta \left(e^{2 x}-k\right)^2}}}{9 x^2 \left(e^{2 x}-k\right)^2}+27 \sqrt{\beta } x^2 \left(e^{2 x}-k\right)^5 L_5,
\end{eqnarray}
\begin{eqnarray}
\frac{\rho+5p}{C} &=&\frac{40 L_4 \left(\sqrt{\beta } L_1+L_2\right) \sqrt{\frac{L_3-2 \sqrt{\beta } L_1}{\beta \left(e^{2 x}-k\right)^2}}}{9 x^2 \left(e^{2 x}-k\right)^2}
-\left[4 \left(6 \sqrt{\beta } k^6 L_1 \left(-4 \beta ^2+189 x^2+85 \beta x\right) +2 k^5 e^{2 x} \left(k^2 \left(54 \beta (20 x+17) x^2
\right. \right. \right. \right. \nonumber\\
&& \left. \left. \left. \left.
+3645 x^3+24 \beta ^2 (23 x-97) x+8 \beta ^3 (4 x-99)\right)+4 \sqrt{\beta } L_1 \left(-27 (2 x+21) x^2+\beta ^2 (4 x+105)+6 \beta (3-5 x) x\right)\right)
\right. \right. \nonumber\\
&& \left. \left.
-2 k^4 e^{4 x} \left(3 k^2 \left(144 \beta (5 x-19) x^2+2835 x^3-4 \beta ^2 (44 x+653) x-160 \beta ^3 (x+2)\right)
\right. \right. \right. \nonumber\\
&& \left. \left. \left.
+\sqrt{\beta } L_1 \left(-27 (32 x+105) x^2+4 \beta ^2 (104 x+105)+9 \beta (64 x+101) x\right)\right)-6 k^3 e^{6 x} \left(k^2 \left(328 \beta ^3+45 (8 \beta -63) x^3
\right. \right. \right. \right. \nonumber\\
&& \left. \left. \left. \left.
+2 \beta (92 \beta +2403) x^2+40 \beta ^2 (57-4 \beta ) x\right)+16 \sqrt{\beta } L_1 x \left(-6 \beta ^2+27 x^2-29 \beta x\right)\right)
\right. \right. \nonumber\\
&& \left. \left.
+2 \sqrt{\beta } k^2 e^{8 x} \left(32 \sqrt{\beta } k^2 x (9 x-\beta ) (15 x-2 \beta )+L_1 \left(420 \beta ^2-9 (64 \beta +315) x^2+864 x^3+\beta (909-416 \beta ) x\right)\right)
\right. \right. \nonumber\\
&& \left. \left.
-2 k e^{10 x} \left(3 k^2 \left(-328 \beta ^3+45 (8 \beta +63) x^3+2 \beta (92 \beta -2403) x^2-40 \beta ^2 (4 \beta +57) x\right)
\right. \right. \right. \nonumber\\
&& \left. \left. \left.
+4 \sqrt{\beta } L_1 \left(27 (2 x-21) x^2+\beta ^2 (105-4 x)+6 \beta (5 x+3) x\right)\right)-6 e^{12 x} \left(k^2 \left(144 \beta (5 x+19) x^2-2835 x^3
\right. \right. \right. \right. \nonumber\\
&& \left. \left. \left. \left.
+4 \beta ^2 (653-44 x) x-160 \beta ^3 (x-2)\right)+\sqrt{\beta } L_1 \left(-4 \beta ^2+189 x^2+85 \beta x\right)\right)-3 k^8 (4 \beta +9 x)
\right. \right. \nonumber\\
&& \left. \left.
\times \left(4 \beta ^2+45 x^2+68 \beta x\right)+2 k e^{14 x} \left(792 \beta ^3+135 (8 \beta -27) x^3+6 \beta (92 \beta -153) x^2+8 \beta ^2 (4 \beta +291) x\right) \right. \right. \nonumber\\
&& \left. \left.
+3 e^{16 x} (4 \beta +9 x) \left(4 \beta ^2+45 x^2+68 \beta x\right)\right) \times 27 \beta L_3 x^2 \left(e^{2 x}-k\right)^5 \left(k+e^{2 x}\right)\right],
\end{eqnarray}
For simplicity of notation we have used \\
$L_1^2= k^4 (4 \beta +9 x)+4 \beta k^3 e^{2 x}-18 k^2 e^{4 x} x+4 \beta k e^{6 x}+e^{8 x} (4 \beta +9 x),$\\
$L_2= k^2 (2 \beta -3 x)+2 k e^{2 x} (3 x-4 \beta )+e^{4 x} (2 \beta -3 x),$\\
$L_3=k^2 (4 \beta +9 x)-2 k e^{2 x} (2 \beta +9 x)+e^{4 x} (4 \beta +9 x),$\\
$L_5= k^3 (4 \beta +9 x)-9 k^2 e^{2 x} x-9 k e^{4 x} x+e^{6 x} (4 \beta +9 x),$\\
$L_4=\tanh \left(B+\frac{1}{2} \int_0^x \frac{3\beta (e^{2x}-k)}{x \sqrt{L_3-2 (e^{2x}+k) \sqrt{\beta L_3 }} } \, dx\right).$
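These quantities are straightforward to evaluate numerically. As an illustration (not part of the derivation), the following Python sketch computes $L_3$ and $L_4$ using NumPy/SciPy with the 6-$d$ parameter values $B=1$, $k=-0.9$, $\beta=8$ quoted in the next subsection; since the integrand of $L_4$ behaves like a multiple of $1/x$ near the origin, the quadrature in the sketch is started at a small cutoff, which is a regularization of the sketch only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, k, B = 8.0, -0.9, 1.0          # 6-d parameter values quoted below

def L3(x):
    e2 = np.exp(2.0 * x)
    return k**2*(4*beta + 9*x) - 2*k*e2*(2*beta + 9*x) + e2**2*(4*beta + 9*x)

def integrand(x):
    # integrand of L_4; behaves like a multiple of 1/x near the origin
    e2 = np.exp(2.0 * x)
    return 3*beta*(e2 - k) / (x*np.sqrt(L3(x) - 2*(e2 + k)*np.sqrt(beta*L3(x))))

def L4(x, eps=1e-6):
    # quadrature started at a small cutoff eps (a regularization of this
    # sketch only; in the plots the constant B absorbs such offsets)
    val, _ = quad(integrand, eps, x, limit=200)
    return np.tanh(B + 0.5*val)

for x in (0.2, 0.5, 1.0):
    print(f"x = {x}:  L3 = {L3(x):.4f},  L4 = {L4(x):.6f}")
\end{verbatim}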
We have elected not to include the expressions for the adiabatic stability parameter $\Gamma$, the mass profile, and the sound speed squared, as these expressions are extremely long. Nevertheless, we have plotted the behaviour of these quantities and studied them carefully in what follows.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Fig16.png}\\
\includegraphics[width=1\linewidth]{Fig17.png}\\
\includegraphics[width=1\linewidth]{Fig18.png}
\caption{Pressure, energy density and speed of sound versus radial pulsation $x + \epsilon x$ for $\epsilon$ of the order $0.1$, $0.01$ and $0.001$, respectively. The labels of the curves are the same as given in Fig. \ref{f1}.} \label{f8}
\end{figure}
\subsection{Physical analysis of the 6-$d$ hypersphere}
For the purposes of the graphical plots in 6-$d$, the following parameter values were found to yield pleasing physical behaviour: $A=2$, $B=1$, $C=0.1$, $k=-0.9$ and $\beta=8$.
Observation of the right panels of each of the figures 1 to 6 reveals interesting properties of the 6-$d$ hypersphere.
\begin{enumerate}
\item[(i)] The boundedness of the solution is revealed in Fig. \ref{f1}, which illustrates that the pressure decreases monotonically from the centre of the star and vanishes at the boundary. This qualifies the model as a potential compact star model suitable for application to neutron stars or cold planets. \\
\item[(ii)] The density plotted in Fig. \ref{f2} is sharply peaked at the centre of the configuration and drops off smoothly towards the stellar surface. This behaviour is not inconsistent with stellar structure dynamics, as the interior processes are not known. Importantly, the density remains positive definite within the distribution. \\
\item[(iii)] The causality criterion $0 \leq v_s^2 = \frac{dp}{d\rho} \leq 1$ holds everywhere inside the fluid sphere as observed in Fig. \ref{f3}. We observe that the sound speed decreases from the centre outwards, attaining a minimum value at the surface of the star. This could be due to phase transitions leading to different equations of state in different regions of the star \cite{satya1}. \\
\item[(iv)] The (i) dominant energy condition (DEC), (ii) weak energy
condition (WEC) and (iii) strong energy condition (SEC), given by
\begin{eqnarray}
\text{DEC}: \rho - p \geq 0,~~~\text{WEC}: \rho+p \geq 0,~~~
\text{SEC}: \rho+5p \geq 0,\label{eq24}
\end{eqnarray}
should hold everywhere inside the star. We observe that all three energy conditions are satisfied throughout the interior of the star for the 6-$d$ hypersphere. \\
\item[(v)] The stability index plotted in Fig. \ref{f5} shows that the surface layers are more stable than the central regions. This is expected, as the density is highest at the centre, making energy-generating processes such as nuclear fusion more efficient and leading to greater heat generation there. The heated core is likely to be more unstable than the surface of the star. The requirement $\Gamma > \frac{4}{3}$ is always satisfied, as depicted in Fig. \ref{f5}. \\
\item[(vi)] From Fig. \ref{f6} it may be observed that the mass function increases as one moves away from the centre of the stellar configuration, with $m(0) = 0$; this is expected, as more mass is contained within larger concentric shells. \\
\item[(vii)] Using expression (\ref{mtov}), the modified form of the TOV equation has been plotted in Fig. \ref{f9}.
We observe that the system is in stable equilibrium under the balance of the different forces, i.e., $F_h+F_g =0$, with respect to the radial coordinate $x$.
\end{enumerate}
The plots above demonstrate the viability of the new 6-$d$ conformally flat solution in representing realistic stellar distributions. The persistent central singularity may be eliminated by the inclusion of a suitable nonsingular core and its attendant matching requirements between the different layers.
\subsection{Stability under radial pulsations}
We now comment on the question of whether the models developed are stable under radial pulsations. Historically, stability was analysed extensively by Chandrasekhar \cite{chandra}, who developed a test for stability under perturbations of the pressure and energy density for a linearised system of Einstein's equations. This scheme is only applicable to Einstein's equations and is not directly useful in EGB theory. It would be useful to carry out the computations of Chandrasekhar within higher curvature gravity. Bardeen \textit{et al}~\cite{bardeen} devised a catalogue of methods to study the normal modes of radial pulsations of stellar models. A normal mode of radial pulsation for an equilibrium solution has the form $\delta r = \xi (r) \exp (i \sigma t)$, where $\xi$ is a trial function and the eigenvalue $\sigma$ is the frequency. Additionally it is assumed, for this process, that the Lagrangian change of the pressure vanishes at the boundary and that the trial function is finite at the centre.
It is worth noting that stability can take on different meanings depending on the context. Ample warnings are contained in the work of Bardeen \textit{et al} \cite{bardeen}. Kokkotas and Ruoff \cite{kokkotas} make use of a numerical technique and give two approaches to studying stability: the shooting method and the method of finite differences. In the context of EGB gravity the stability of scalarized black holes has been considered by Silva \textit{et al} \cite{silva}. Since there is no equivalent criterion of the Chandrasekhar integral condition for EGB gravity, we shall analyse our exact solution under radial pulsations by perturbing the radial coordinate $x$ as $x + \epsilon x$, where $\epsilon$ is a very small quantity in relation to the stellar radius, in the energy density, pressure and sound speed. It will be immediately apparent that for small values of $\epsilon$ the dynamical quantities converge to the original shapes. We provide plots where $\epsilon$ is of the order $0.1$, $0.01$ and $0.001$ in order to confirm that our solution is indeed stable under radial pulsations. This follows the same spirit as the well-known Lyapunov stability of curves and mimics the approach of Herrera \cite{st2,herrera} in discussing the concept of cracking in anisotropic spheres.
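The scheme is elementary to implement. The following Python sketch (illustrative only, with a stand-in profile in place of the numerically computed density, pressure and sound speed of our models) demonstrates the convergence check used below:
\begin{verbatim}
import numpy as np

# stand-in radial profile; in our models this would be the numerically
# computed density, pressure or sound speed profile
f = lambda x: np.exp(-x) * (1.0 + x)

x = np.linspace(0.05, 2.0, 400)
equilibrium = f(x)

for order in (1e-1, 1e-2, 1e-3):
    # seven incremented curves per order of magnitude of epsilon
    dev = [np.max(np.abs(f(x + j*order*x) - equilibrium)) for j in range(1, 8)]
    print(f"eps of order {order:g}: largest deviation = {dev[-1]:.3e}")
# as eps decreases the perturbed curves collapse onto the equilibrium
# curve, which is the convergence criterion used in the text
\end{verbatim}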
Figs. \ref{f7} and \ref{f8} depict the energy density, pressure and sound speed profiles for the 5 and 6 dimensional conformally flat EGB models under pulsations of the type $x \rightarrow x + \epsilon x$ for three different orders of magnitude of $\epsilon$, namely $0.1$, $0.01$ and $0.001$, with each plot showing up to seven incremented curves. In other words, for each profile we have plotted seven increments to check the trend: for example, for the order of magnitude $0.001$ we show the curves for $\epsilon = 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007$. It is clear that in all cases as $\epsilon$ decreases the perturbed curves converge toward the equilibrium curve $\epsilon = 0$. This is the case also for the larger orders of magnitude $\epsilon = 0.01$ and $0.1$. This may be understood to mean that these dynamical quantities are stable under radial pulsations. It must be borne in mind that we have achieved an exact solution in closed form only for $Z$, whereas the form for $y$ is expressed as an integral. Consequently, we had to resort to numerical techniques, necessitating a check of whether small disturbances of the system generate severe deviations from the equilibrium position.
\section{Conclusion}\label{sec7}
We have thoroughly investigated conformally flat static isotropic spacetimes in Einstein--Gauss--Bonnet gravity. Our conclusions may be summarised as follows:
\begin{description}
\item{(i)} The interior Schwarzschild metric generates a constant density hypersphere in EGB; however, the hypothesis of a constant density results in a generalised Schwarzschild solution. In the 5 dimensional case the general solution is completely free of singularities at the stellar centre, but a coordinate central singularity in the 6 dimensional case is unavoidable.
\item{(ii)} The assumption of constant density leads to the generalised Schwarzschild solutions for 5 and 6 dimensions as mentioned in (i). It has been verified by direct substitution that the generalised Schwarzschild solutions for 5 and 6 dimensions result in the conformal flatness equation being identically satisfied. So the general constant density metric is indeed conformally flat. Conversely, assuming conformal flatness generates two branches of solutions one of which is Schwarzschild and the other is a new solution which does not yield constant density. The temporal gravitational potential is expressed as an integral, however, with the aid of numerical techniques it is possible to analyse the physical properties of the solution graphically. Suitable parameter spaces are exhibited demonstrating that the models in 5 and 6 dimensions satisfy basic physical requirements demanded of stellar distributions.
\item{(iii)} Finally, the Schwarzschild metric is known to be conformally flat. However, the assumption of conformal flatness leads to two branches of solutions only one of which is Schwarzschild. As mentioned in (ii) above, the non-Schwarzschild solution is shown to be reasonably behaved in both 5 and 6 dimensions.
\end{description}
When subjected to an elementary test of stability with respect to radial pulsations, both the five and six dimensional models converged to the equilibrium position, verifying their stability. The new conformally flat solution did not exhibit the defect of an infinite sound speed, as does its counterpart, the Schwarzschild metric, in standard Einstein gravity. Moreover, a variable density profile was admitted, as compared to the incompressible Schwarzschild sphere.
|
1,116,691,497,697 | arxiv | \section{Introduction}
In the interesting paper \cite{1}, the Ermakov-Lewis invariant is used to
study the time-dependent harmonic oscillator (TDHO) in the framework of
Koopman-von Neumann (KvN) mechanics \cite{2,3}. To find the invariant,
a system of coupled differential equations was obtained, which then
was reduced to a single equation related to the Ermakov equation.
Although this method is quite simple, our goal in this short note is to show
that there is an even simpler way to find the Ermakov-Lewis invariant for
TDHO in KvN mechanics.
Recent interest in KvN mechanics is motivated by experiments exploring
the quantum-classical border. To formulate a consistent framework for
a hybrid quantum-classical dynamics is a long-standing problem \cite{3A}.
Its solution, in addition to practical interest, for example, in quantum
chemistry, can clarify deep conceptual issues in quantum mechanics, such
as the problem of measurement. An interesting result in this direction was
obtained in \cite{3B}. It was found that the Wigner function in the nonrelativistic
limit turns into the Koopman-von Neumann wave function, which explains why
the Wigner function is not positive-definite.
Time-dependent harmonic oscillators arise in many quantum mechanical
systems \cite{3C,3D}. At the same time, the existence of Ermakov-Lewis
invariants in such systems has attracted much attention \cite{3D}.
In our opinion, extension of these results to the case of KvN mechanics
is of considerable interest.
\section{KvN evolution equation for TDHO in new variables}
The KvN evolution equation for TDHO wave-function has the form \cite{1}:
\begin{equation}
i \frac{\partial}{\partial t} \psi(x, p ; t)=\left[\hat{p} \hat{\lambda}_{x}-
k(t) \hat{x} \hat{\lambda}_{p}\right] \psi(x, p ; t),
\label{eq1}
\end{equation}
where $ \hat{\lambda}_{x}$ and $\hat{\lambda}_{p}$ operators satisfy the
following commutation rules
\begin{equation}
\left[\hat{x},\hat{\lambda}_{x}\right]=\left[\hat{p},\hat{\lambda}_{p}\right]=
i.
\label{eq1A}
\end{equation}
Note that $m=1$ and $\hbar = 1$ were assumed for simplicity. As Sudarshan
remarked \cite{4}, any KvN-mechanical system can be considered as a hidden
variable quantum system. Correspondingly, we will make a slight change in
notations as follows:
\begin{equation}
x=q , \;\; \lambda _{x}=P, \;\; \lambda _{p}=-Q,
\label{eq2}
\end{equation}
where $Q$ and $P$ are quantum variables that are hidden for classical observers.
Thus eq.(\ref{eq1}) takes the following form in new notations
\begin{equation}
i \frac{\partial}{\partial t} \psi(q, p ; t)=\left[\hat{p} \hat{P}+k(t)
\hat{q} \hat{Q}\right] \psi(q, p ; t).
\label{eq3}
\end{equation}
Correspondingly, the KvN Hamiltonian is given by
\begin{equation}
\mathcal{H}=\hat{p} \hat{P}+k(t) \hat{q} \hat{Q}.
\label{eq4}
\end{equation}
Let us make the following canonical transformation
\begin{eqnarray} &&
\hat{q}=\frac{1}{\sqrt{2}}\left(\hat{q}_{1}-\hat{q}_{2}\right), \;\; \hat{Q}=
\frac{1}{\sqrt{2}}\left(\hat{q}_{1}+\hat{q}_{2}\right),\nonumber \\ &&
\hat{p}=\frac{1}{\sqrt{2}}\left(\hat{p}_{1}+\hat{p}_{2}\right), \;\;
\hat{P}=\frac{1}{\sqrt{2}}\left(\hat{p}_{1}-\hat{p}_{2}\right).
\label{eq5}
\end{eqnarray}
The transformation is canonical in the sense that it does not change the
canonical form of the commutation relations:
\begin{equation}
\left[\hat{q}_{i}, \hat{q}_{j}\right]=0, \quad\left[\hat{q}_{i}, \hat{p}_{j}
\right]=i \delta_{ij}, \quad\left[\hat{p}_{i}, \hat{p}_{j}\right]=0.
\label{eq6}
\end{equation}
The KvN Hamiltonian when written in new variables
$\hat{q}_{1},\hat{q}_{2},\hat{p}_{1},\hat{p}_{2}$
splits into difference of two Schr\"{o}dinger type quantum Hamiltonians:
\begin{equation}
\mathcal{H} = \left(\frac{\hat{p}_{1}^{2}}{2}+\frac{1}{2} k(t)
\hat{q}_{1}^{2}\right)-\left(\frac{\hat{p}_{2}^{2}}{2}+\frac{1}{2} k(t)
\hat{q}_{2}^{2}\right) =\mathcal{H}_{1}-\mathcal{H}_{2}.
\label{eq7}
\end{equation}
In the next section we will use this splitting to find the Ermakov-Lewis
invariant.
\section{Ermakov-Lewis invariant for KvN TDHO}
Let $I$ be the Ermakov-Lewis invariant for KvN TDHO. It must satisfy the
following equation
\begin{equation}
\frac{d \hat{I}}{d t}=\frac{\partial \hat{I}}{\partial t}-i
[\hat{I}, \mathcal{H}]=0.
\label{eq8}
\end{equation}
Since $\mathcal{H}=\mathcal{H}_{1}(\hat{q}_{1},\hat{p}_{1})-
\mathcal{H}_{2} (\hat{q}_{2},\hat{p}_{2})$, $\hat{I}$ will have the form
$\hat{I}=\hat{I}_{1}+\hat{I}_{2}$, where $\hat{I}_{1}$ and $\hat{I}_{2}$ are
the usual quantum mechanical Ermakov-Lewis invariants associated respectively
with $\mathcal{H}_{1}(\hat{q}_{1},\hat{p}_{1})$ and $\mathcal{H}_{2} (\hat{q}_{2},
\hat{p}_{2})$. However, there is a subtlety associated with the minus sign in
$\mathcal{H}=\mathcal{H}_{1}(\hat{q}_{1},\hat{p}_{1})-
\mathcal{H}_{2} (\hat{q}_{2},\hat{p}_{2})$, which indicates that in the second
Ermakov-Lewis invariant we should assume time-reversal. More formally, we have
\begin{equation}
\frac{\partial \hat{I}}{\partial t}-i[\hat{I}, \mathcal{H}] =
\left(\frac{\partial I_{1}}{\partial t}-i\left[I_{1},
\mathcal{H}_{1}\right]\right)+\left(\frac{\partial I_{2}}{\partial t}+i
\left[I_{2}, \mathcal{H}_{2}\right]\right)=0.
\label{eq9}
\end{equation}
The individual terms in the brackets must vanish separately. For
\begin{equation}
\frac{\partial I_{1}}{\partial t}-i\left[I_{1}, \mathcal{H}_{1}\right]=0,
\label{eq10}
\end{equation}
the corresponding quantum-mechanical invariant is well known \cite{5}
\begin{equation}
I_{1}=\frac{1}{2}\left[\left(\frac{\hat{q}_{1}}{\rho}\right)^{2}+
\left(\hat{p}_{1} \rho-\dot{\rho} \hat{q}_{1}\right)^{2}\right],
\label{eq11}
\end{equation}
where $\rho$ obeys the Ermakov equation \cite{6}
\begin{equation}
\ddot\rho+k(t)\rho=\rho^{-3}.
\label{eq11A}
\end{equation}
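The invariance is easy to confirm numerically for the classical analogue of eq.(\ref{eq11}), in which $\hat{q}_1$ and $\hat{p}_1$ are replaced by a classical trajectory $q$ and $\dot{q}$ (recall $m=1$). The following Python sketch (SciPy assumed; the choice of $k(t)$ is illustrative) integrates the oscillator together with the Ermakov equation eq.(\ref{eq11A}) and monitors the invariant:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = lambda t: 1.0 + 0.5*np.sin(t)        # illustrative time dependence

def rhs(t, y):
    q, qdot, rho, rhodot = y
    return [qdot, -k(t)*q, rhodot, -k(t)*rho + rho**(-3)]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, max_step=0.05)
q, qdot, rho, rhodot = sol.y

# classical counterpart of I_1, with the momentum replaced by qdot
I = 0.5*((q/rho)**2 + (qdot*rho - rhodot*q)**2)
print("relative drift of the invariant:", np.ptp(I)/I[0])
\end{verbatim}
The printed drift is at the level of the integration tolerances, as expected for a conserved quantity.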
As for the equation containing $\hat{I}_{2}$, define $\tau=-t$ and the equation
takes the form:
\begin{equation}
\frac{\partial I_{2}}{\partial \tau}-i \left[I_{2},
\mathcal{H}_{2}\right]=0.
\label{eq12}
\end{equation}
It is clear that the corresponding invariant is
\begin{equation}
I_{2}=\frac{1}{2}\left[\left(\frac{\hat{q}_{2}}{\rho}\right)^{2}+
\left(\hat{p}_{2} \rho-\rho^{\prime} \hat{q}_{2}\right)^{2}\right],
\label{eq13}
\end{equation}
where $\rho'=\frac {\partial \rho }{\partial \tau }$. Restoring the derivatives
with respect to $t$, we get
\begin{equation}
I_{2}=\frac{1}{2}\left[\left(\frac{\hat{q}_{2}}{\rho}\right)^{2}+
\left(\hat{p}_{2} \rho+\dot{\rho} \hat{q}_{2}\right)^{2}\right].
\label{eq14}
\end{equation}
Thus the Ermakov-Lewis invariant $I$ in KvN mechanics takes the following
form, after using eq.(\ref{eq11}) and eq.(\ref{eq14}) and re-writing the
result in terms of $\hat{q},\hat{Q},\hat{p}$ and $\hat{P}$:
\begin{equation}
I=\frac{\hat{Q}^{2}}{2 \rho^{2}}+\frac{\hat{q}^{2}}{2 \rho^{2}}+\frac{1}{2}
\left\{(\dot{\rho} \hat{q}-\rho \hat{p})^{2}+(\dot{\rho} \hat{Q}- \rho
\hat{P})^{2}\right\}.
\label{eq15}
\end{equation}
Using the correspondence given by eq.(\ref{eq2}), and re-arranging the above
expression, we get the invariant in the form found in \cite{1}:
\begin{equation}
\hat{I}=\frac{1}{2}\left[\frac{\hat{x}^{2}}{\rho^{2}}+(\dot{\rho} \hat{x}-
\rho \hat{p})^{2}+\frac{\hat{\lambda}_{p}^{2}}{\rho^{2}}+\left(\dot{\rho}
\hat{\lambda}_{p}+\rho \hat{\lambda}_{x}\right)^{2}\right].
\label{eq16}
\end{equation}
\section{Conclusions}
We have shown that one can find the Ermakov-Lewis invariant
in the case of KvN mechanics using the well-known quantum-mechanical
expression for this invariant \cite{5} and some simple algebra.
In this method, there is no need to consider any coupled differential equations.
However, it is less general than the method considered in \cite{1}, which,
in principle, can be applied to any potential, even if it does not allow
$\mathcal{H}$ to split in a way described in this note.
|
1,116,691,497,698 | arxiv | \section{Introduction}
Product mixing for a group $G$ generally refers to any estimate of the following form: whenever subsets $X,Y,Z\subset G$ have densities $\alpha,\beta,\gamma$ above some threshold, the number of solutions to $xy=z$ with $x\in X,y\in Y,z\in Z$ is $(1+o(1))\alpha\beta\gamma|G|^2$. The following foundational theorem proved by Gowers~\cite{gowers} (and expanded by Babai, Nikolov, and Pyber~\cite{babainikolovpyber}) explains this idea further.
\begin{theorem}[Gowers]\label{gowers}
Let $G$ be a group and let $m$ be the minimal dimension of a nontrivial representation of $G$. Let $X,Y,Z\subset G$ have densities $\alpha,\beta,\gamma$, respectively. Then
\[
\left| \langle 1_X*1_Y,1_Z\rangle - \alpha\beta\gamma \right| < m^{-1/2} \alpha^{1/2} \beta^{1/2} \gamma^{1/2}.
\]
In particular if $\alpha\beta\gamma \gg m^{-1}$ then
\[
\langle 1_X*1_Y,1_Z\rangle = (1+o(1))\alpha\beta\gamma.
\]
\end{theorem}
Here and throughout this paper we write $X\lesssim Y$ to mean that $X\leq O(Y)$, and we write $X\ll Y$ to mean that $X\leq o(Y)$. We will write $X\sim Y$ to mean $X\lesssim Y$ and $X\gtrsim Y$. This differs from the standard convention in analytic number theory, but it will be convenient for us.
There are several immediate corollaries of Theorem~\ref{gowers}. For example, if $\alpha\beta\gamma\geq m^{-1}$, then the intersection $XY\cap Z$ is nonempty, and in fact $XYZ^{-1} = G$. In particular, if $X\subset G$ is product-free (meaning that there are no solutions to $xy=z$ with $x,y,z\in X$), then $X$ has density at most $m^{-1/3}$.
For the purpose of illustration let us assume $\alpha\sim\beta\sim\gamma$. Then Theorem~\ref{gowers} asserts that there is a product-mixing phenomenon for sets of density greater than $m^{-1/3}$. On the other hand Kedlaya~\cite{kedlaya1} proved that any group $G$ acting transitively on a set of size $n$ has a product-free subset of density $n^{-1/2}$. For a broad class of groups, including for example the alternating groups and special linear groups, we have $m\sim n$, so for these groups this leaves a gap between $m^{-1/3}$ and $m^{-1/2}$.
In Section~\ref{examples} we partly explain this gap by showing that any group $G$ acting transitively on a set of size $n$ has a subset $X$ of density $\sim n^{-1/3}$ for which there are significantly \emph{more} than the expected number of solutions to $xy=z$. In groups with $m\sim n$ this shows that the density threshold for product mixing is $m^{-1/3}$, as in Gowers's theorem.
Our main purpose, however, is to demonstrate that a one-sided product-mixing phenomenon persists in the alternating group $A_n$ for somewhat lower densities. Specifically we prove the following theorem.
\begin{theorem}\label{main}
If $X,Y,Z\subset A_n$ have densities $\alpha,\beta,\gamma$, respectively, and
\[
\min(\alpha\beta,\alpha\gamma,\beta\gamma) \gg \frac{(\log n)^7}{n},
\]
then
\[
\langle 1_X*1_Y,1_Z\rangle \geq (1+o(1)) \alpha\beta\gamma.
\]
\end{theorem}
As a corollary we deduce that if $X$ has density $\gg n^{-1/2}(\log n)^{7/2}$ then $X^2$ has density
\[
1 - O(n^{-1/2}(\log n)^{7/2}).
\]
In particular if $X$ is product-free then $X$ has density at most $O(n^{-1/2}(\log n)^{7/2})$. This is best possible up to the logarithmic factors.
As to the methods, we first use nonabelian Fourier analysis to reduce to a problem taking place only in the standard representation, an idea due to Ellis and Green. That problem is then interpreted in terms of random rearrangements of inner products, which we tackle using concentration of measure and entropy subadditivity. The backbone of our proof is a Brascamp--Lieb-type inequality for the symmetric group due to Carlen, Lieb, and Loss, which we explain in Section~\ref{sec:cll}.
\emph{Notation.} As already mentioned, in addition to the usual asymptotic notation $O(\cdot)$ and $o(\cdot)$, we write $X\lesssim Y$ to mean that $X\leq O(Y)$, and we write $X\ll Y$ to mean that $X\leq o(Y)$. We write $X\sim Y$ to mean $X\lesssim Y$ and $X\gtrsim Y$.
We write $\Omega$ throughout for the ground set $\{1,\dots,n\}$ on which $S_n$ and $A_n$ act. We attach the uniform measures to $S_n$, $A_n$, and $\Omega$, and we write an unadorned integral ${\textstyle\int} f$ to mean the integral with respect to the uniform measure on the domain of $f$. We also define inner products, $L^p$ norms, and convolutions accordingly.
\emph{Acknowledgments.} I thank David Ellis for pointing out several confusing typos and recommending a few improvements.
\section{Examples of sets with poor product mixing}\label{examples}
In this section we give two concrete examples of fairly dense sets with poor product-mixing properties. The first example, a relatively large product-free set, is due to Kedlaya~\cite{kedlaya1} and independently Edward Crane (Ben Green, personal communication), but we recall the construction here as it shows that Theorem~\ref{main} is best possible up to logarithms. The second construction is original, and shows that Theorem~\ref{gowers} is best possible for two-sided mixing.
\subsection{Sets with no solutions to $xy=z$}
First we give an example of a set $X$ of density $\sim n^{-1/2}$ with no solutions to $xy=z$. Fix a set $T\subset\Omega$ of size $t$ not containing the point $1$, and let $X$ be the set of all $\pi\in A_n$ such that $\pi(1)\in T$ and such that $\pi(T)\subset T^c$. Then clearly $X^2$ is disjoint from $X$, as every $\pi\in X^2$ satisfies $\pi(1)\in T^c$, and it is straightforward to see that $X$ has density
\[
\frac1{n!} t \binom{n-t}{t} t! (n-t-1)! = \frac{t (n-t)! (n-t-1)!}{n! (n-2t)!} = \frac{t}{n} e^{O(t^2/n)}.
\]
Thus if $t\sim n^{1/2}$ then $X$ has density $\sim n^{-1/2}$, which shows that Theorem~\ref{main} is best possible up to logarithmic factors.
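For small $n$ the construction is easy to verify by brute force. The following Python sketch (illustrative only; for simplicity it works in $S_n$ rather than $A_n$, where the same argument applies) checks that $X^2\cap X=\emptyset$:
\begin{verbatim}
from itertools import permutations
from math import factorial

n, T = 6, {2, 3}                       # ground set {1,...,n}; 1 is not in T
Tc = set(range(1, n + 1)) - T

def in_X(p):                           # p is a tuple with p[i-1] = image of i
    return p[0] in T and all(p[i - 1] in Tc for i in T)

X = [p for p in permutations(range(1, n + 1)) if in_X(p)]

def compose(p, q):                     # (p compose q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(n))

assert not any(in_X(compose(p, q)) for p in X for q in X)
print(f"|X| = {len(X)} out of {factorial(n)}; X^2 is disjoint from X")
\end{verbatim}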
As explained in Kedlaya~\cite{kedlaya1}, his construction adapts straightforwardly to any $2$-transitive subgroup $G\leq S_n$, and in fact it adapts to any transitive subgroup $G\leq S_n$ through an averaging argument.
\begin{proposition}[Kedlaya~\cite{kedlaya1}]\label{prop:ked}
Let $G$ be a transitive subgroup of $S_n$. Then there is a subset $X\subset G$ of density $\sim n^{-1/2}$ such that $X^2 \cap X = \emptyset$.
\end{proposition}
\subsection{Sets with too many solutions to $xy=z$}
Next we give an example of a set $X$ of density $\alpha\sim n^{-1/3}$ having many more than the expected number of solutions (namely, $\alpha^3 n!^2$) to $xy=z$. Fix a set $T$ of size $t$ and let $X$ be the set of all $\pi\in A_n$ such that $\pi(T) \cap T$ is nonempty. As long as $t = o(n^{1/2})$ then $X$ has density roughly $t^2/n$, and if you choose $\pi_1,\pi_2$ randomly from $X$ then $\pi_1\pi_2$ is again in $X$ with probability of order $t^2/n + 1/t$. To see this it may help to notice that $X$ is symmetric, and that $\pi_1^{-1} \pi_2 \in X$ if and only if $\pi_1(T) \cap \pi_2(T) \neq \emptyset$. Each of $\pi_1(T)$ and $\pi_2(T)$ is required to intersect $T$ nontrivially, so $\pi_1(T)$ and $\pi_2(T)$ intersect with probability at least $1/t$. Aside from that restriction $\pi_1(T)$ and $\pi_2(T)$ are just random sets of size $t$, so they intersect with probability at least $t^2/n$. (We can afford to be somewhat lax with this computation as we will shortly prove a more general proposition.) Note that the probability $t^2/n + 1/t$ is much larger than the expected probability $t^2/n$ whenever $t$ is small compared to $n^{1/3}$.
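The excess over the expected count is already visible in simulation. The following Python sketch (a Monte Carlo estimate with illustrative parameters, not a proof) estimates the probability that a product of two random elements of $X$ lies in $X$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 400, 5, 4000            # illustrative, with t well below n^(1/2)

def sample_X():                        # rejection sampling: pi(T) must meet T,
    while True:                        # where T = {0, ..., t-1}
        p = rng.permutation(n)
        if (p[:t] < t).any():
            return p

hits = 0
for _ in range(trials):
    p1, p2 = sample_X(), sample_X()
    if (p2[p1[:t]] < t).any():         # is the composition p2(p1(.)) in X?
        hits += 1

print(f"P(xy in X | x, y in X) ~ {hits/trials:.3f}")
print(f"t^2/n = {t*t/n:.3f},  t^2/n + 1/t = {t*t/n + 1/t:.3f}")
\end{verbatim}
The empirical probability is far closer to $t^2/n + 1/t$ than to the naive prediction $t^2/n$.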
As with the previous construction, this construction adapts straightforwardly to any $2$-transitive subgroup $G\leq S_n$, and to an arbitrary transitive subgroup $G\leq S_n$ through an averaging argument.
\begin{proposition}\label{prop:many}
Let $G$ be a transitive subgroup of $S_n$. Then there is a subset $X\subset G$ of density $\alpha \sim n^{-1/3}$ for which there are at least $100\alpha^3|G|^2$ solutions to $xy=z$ with $x,y,z\in X$.
\end{proposition}
\begin{proof}
For $T\subset \Omega$ of size $t$ let $X_T$ be the set of all $g\in G$ for which $g(T)\cap T\neq\emptyset$. Also fix an arbitrary total order $<$ on $\Omega$. Clearly $|X_T|/|G| \leq t^2/n$. We will bound $|X_T|$ below by the number of $g\in G$ for which there are $i,j\in T$ with $i<j$ such that $g(i)=j$. Thus by inclusion-exclusion we have
\begin{align*}
\frac{|X_T|}{|G|}
&\geq \sum_{\substack{i,j\in T\\i<j}} \frac{|\{g:g(i) = j\}|}{|G|} - \sum_{\substack{i,j,i',j'\in T \\ i<j,i'<j' \\ (i,j) \neq (i',j')}} \frac{|\{g:g(i)=j,g(i')=j'\}|}{|G|}.
\end{align*}
The first sum here is $\sim t^2/n$ by transitivity, for any $T$. The second sum can be rewritten as
\begin{equation}\label{secondsum}
\sum_{\substack{i,i'\in\Omega \\ i \neq i'}} \frac1{|G|} \sum_{\substack{g\in G\\g(i)>i,g(i')>i'}} 1_{i\in T} 1_{g(i)\in T} 1_{i'\in T} 1_{g(i')\in T}.
\end{equation}
Now note that for any fixed $i,i'\in\Omega$ such that $i\neq i'$ and for any $g$ satisfying $g(i)>i$ and $g(i')>i'$ we have $|\{i,g(i),i',g(i')\}|\geq 3$, and in fact $|\{i,g(i),i',g(i')\}|=4$ except for a proportion at most $O(1/n)$ of $g\in G$. It follows that the average of~\eqref{secondsum} over $T\subset\Omega$ is bounded by
\[
O(n^2(t/n)^4 + n(t/n)^3) = O(t^4/n^2).
\]
Thus, by Markov's inequality, \eqref{secondsum} is $O(t^4/n^2)$ with probability at least $9/10$.
Similarly let us count solutions to $xy=z$ in $X_T$. We will bound the number $N_T$ of solutions below by the number of pairs $(g_1,g_2)\in G^2$ for which there exists $i,j,k\in T$ with $i<j<k$ such that $g_1(i)=j$ and $g_2(j)=k$. Thus by inclusion-exclusion again we have
\begin{align*}
\frac{N_T}{|G|^2}
&\geq \sum_{\substack{i,j,k\in T\\ i<j<k}} \frac{|\{(g_1,g_2)\in G^2: g_1(i)=j,g_2(j)=k\}|}{|G|^2}\\
&\, - \sum_{\substack{i,j,k,i',j',k'\in T\\ i<j<k,i'<j'<k'\\ (i,j,k)\neq(i',j',k')}} \frac{|\{(g_1,g_2)\in G^2: g_1(i)=j, g_2(j)=k, g_1(i')=j', g_2(j')=k'\}|}{|G|^2}.
\end{align*}
The first sum is $\sim t^3/n^2$ by transitivity. The second sum can be rewritten
\begin{equation}\label{secondsum2}
\sum_{\substack{j,j'\in\Omega\\j\neq j'}} \frac1{|G|^2} \sum_{\substack{g_1,g_2\in G\\g_1^{-1}(j)<j<g_2(j)\\g_1^{-1}(j')<j'<g_2(j')}} 1_{g_1^{-1}(j)\in T} 1_{j\in T} 1_{g_2(j)\in T} 1_{g_1^{-1}(j')\in T} 1_{j'\in T} 1_{g_2(j')\in T}.
\end{equation}
To bound this we again average over $T\subset\Omega$. For $j\neq j'$ and $g_1,g_2$ under the stated restrictions the set
\[
S = \{g_1^{-1}(j),j,g_2(j),g_1^{-1}(j'),j',g_2(j')\}
\]
always has size at least $4$, has size $4$ for at most a proportion $O(1/n^2)$ of $(g_1,g_2)\in G^2$, has size $5$ for at most a proportion $O(1/n)$ of $(g_1,g_2)\in G^2$, and otherwise has size $6$. It follows that the average of~\eqref{secondsum2} over $T\subset\Omega$ is bounded by
\[
O(n^2(t/n)^6 + n(t/n)^5 + (t/n)^4) = O(t^6/n^4).
\]
Thus, by Markov's inequality, \eqref{secondsum2} is $O(t^6/n^4)$ with probability at least $9/10$.
We deduce that there is some $T$ for which \eqref{secondsum} is $O(t^4/n^2)$ and \eqref{secondsum2} is $O(t^6/n^4)$. For this $T$ it follows that
\[
\frac{|X_T|}{|G|} \sim t^2/n + O(t^4/n^2)
\]
and that
\[
\frac{N_T}{|G|^2} \gtrsim t^3/n^2 + O(t^6/n^4).
\]
Thus as long as $t=o(n^{1/2})$ we see that $X_T$ has density $\alpha \sim t^2/n$ while there are at least $(t^3/n^2)|G|^2 \sim (n/t^3) \alpha^3 |G|^2$ solutions to $xy=z$ in $X$. Now take $t=\lfloor c n^{1/3}\rfloor$ for a sufficiently small constant $c$.
\end{proof}
\section{Nonabelian Fourier analysis}
Here we briefly recall the fundamentals of nonabelian Fourier analysis, and then we give a short Fourier-analytic proof of Theorem~\ref{gowers}. This proof seems to be well known among experts: see for example Wigderson~\cite[Chapter 2.11]{wigderson}.
Let $G$ be a compact group endowed with the uniform measure. The Fourier transform of a function $f\in L^2(G)$ at an irreducible unitary representation $\xi:G\to U(d_\xi)$ is defined by
\[
\hat{f}(\xi) = \int_G f(x) \xi(x).
\]
We then have the inversion formula
\[
f(x) = \sum_\xi d_\xi \langle\hat{f}(\xi), \xi(x)\rangle_\textup{HS},
\]
and Parseval's identity
\begin{equation}\label{parseval}
\langle f,g\rangle = \sum_\xi d_\xi \langle\hat{f}(\xi),\hat{g}(\xi)\rangle_\textup{HS}.
\end{equation}
Here the sums are taken over a complete set of representatives of the irreducible representations of $G$ up to equivalency, and the Hilbert--Schmidt inner product $\langle\cdot,\cdot\rangle_\textup{HS}$ is defined by
\[
\langle R,S\rangle_\textup{HS} = \tr(RS^*).
\]
Like classical Fourier analysis, nonabelian Fourier analysis is a powerful tool for understanding the behaviour of convolutions. Here the convolution $f*g$ of two functions $f,g\in L^2(G)$ is defined by
\begin{equation}\label{convolution-definition}
f*g(x) = \int_G f(y) g(y^{-1}x),
\end{equation}
and by an application of Fubini's theorem we have the rule
\begin{equation}\label{convolutionrule}
\widehat{f*g}(\xi) = \hat{f}(\xi)\hat{g}(\xi).
\end{equation}
For all this and more the reader might refer to Tao~\cite[\textsection 2.8]{tao}.
We can now give a short proof of Theorem~\ref{gowers}.
\begin{proof}[Proof of Theorem~\ref{gowers}]
Suppose that $G$ is finite, that $d_\xi\geq m$ for $\xi\neq 1$, and that $X,Y,Z\subset G$ have densities $\alpha,\beta,\gamma$, respectively. Let $f=1_X, g=1_Y, h=1_Z$. Then by the convolution rule~\eqref{convolutionrule} and Parseval~\eqref{parseval} we have
\begin{align*}
\langle f*g,h\rangle
&= \sum_\xi d_\xi \langle \hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}\\
&= \alpha\beta\gamma + \sum_{\xi\neq 1} d_\xi \langle\hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}.
\end{align*}
Here we have written $1$ for the trivial representation of $G$. Now by Cauchy--Schwarz and the algebra property $\|RS\|_\textup{HS}\leq\|R\|_\textup{HS}\|S\|_\textup{HS}$ of the Hilbert--Schmidt norm we have
\[
|\langle\hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}| \leq \|\hat{f}(\xi)\hat{g}(\xi)\|_\textup{HS} \|\hat{h}(\xi)\|_\textup{HS} \leq \|\hat{f}(\xi)\|_\textup{HS} \|\hat{g}(\xi)\|_\textup{HS} \|\hat{h}(\xi)\|_\textup{HS},
\]
so by using Cauchy--Schwarz together with Parseval again we have
\begin{equation}\label{mixcomp}
\begin{aligned}
\sum_{\xi\neq 1} d_\xi |\langle\hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}|
&\leq \sum_{\xi\neq 1} d_\xi \|\hat{f}(\xi)\|_\textup{HS} \|\hat{g}(\xi)\|_\textup{HS} \|\hat{h}(\xi)\|_\textup{HS}\\
&\leq \max_{\xi\neq 1} \|\hat{f}(\xi)\|_\textup{HS} \sum_\xi d_\xi \|\hat{g}(\xi)\|_\textup{HS} \|\hat{h}(\xi)\|_\textup{HS}\\
&\leq m^{-1/2} \|f\|_2 \|g\|_2 \|h\|_2\\
&=m^{-1/2} \alpha^{1/2} \beta^{1/2} \gamma^{1/2}.
\end{aligned}
\end{equation}
This proves Theorem~\ref{gowers}.
\end{proof}
For the rest of the paper we specialize to the alternating group $G = A_n$. As explained in the introduction, Theorem~\ref{gowers} provides a satisfactory estimate for $\langle 1_X*1_Y,1_Z\rangle$ only if $\alpha\beta\gamma\gg1/n$. However, as observed by Ellis and Green (personal communication), by examination of the proof above it is clear that only the standard $(n-1)$-dimensional representation $\sigma$ is problematic: again taking $f=1_X, g=1_Y, h = 1_Z$, we have
\begin{align}
\langle f*g,h\rangle
&= \sum_{\xi} d_\xi \langle \hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}\nonumber\\
&= \alpha\beta\gamma + (n-1)\langle\hat{f}(\sigma)\hat{g}(\sigma),\hat{h}(\sigma)\rangle_\textup{HS} + \sum_{\xi\neq 1,\sigma} d_\xi\, \langle\hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}, \label{ellisgreen}
\end{align}
and since $d_\xi\gtrsim n^2$ for $\xi\neq 1,\sigma$ (this follows from the hook formula: see for example~\cite[Result~2]{rasala}) we have, by straightforward adaptation of~\eqref{mixcomp},
\[
\sum_{\xi\neq 1,\sigma} d_\xi |\langle\hat{f}(\xi)\hat{g}(\xi),\hat{h}(\xi)\rangle_\textup{HS}| \lesssim n^{-1} \alpha^{1/2}\beta^{1/2}\gamma^{1/2}.
\]
This is negligible compared to the main term $\alpha\beta\gamma$ whenever $\alpha\beta\gamma \gg n^{-2}$. Thus it remains only to control $(n-1)\langle \hat{f}(\sigma)\hat{g}(\sigma),\hat{h}(\sigma)\rangle_\textup{HS}$.
For each $i\in\Omega$ we have a map $S_n\to\Omega$ given by $\pi\mapsto \pi(i)$, which induces a map $L^2(\Omega)\to L^2(S_n)$ given by composition with $\pi\mapsto \pi(i)$. We denote by $p_i$ the adjoint of this map, and we call $p_i f$ the \emph{pushforward} of $f$ under $\pi\mapsto\pi(i)$. Explicitly $p_if$ is defined by
\[
p_i f(\omega) = n \int_{S_n} f(\pi) 1_{\pi(i)=\omega} = \frac1{(n-1)!} \sum_{\substack{\pi\in S_n\\\pi(i)=\omega}} f(\pi),
\]
and for any $g\in L^2(\Omega)$ we have
\[
\int_{S_n} f(\pi) g(\pi(i)) = \int_\Omega p_if(\omega) g(\omega).
\]
Now by direct computation whenever at least one of ${\textstyle\int} f, {\textstyle\int} g, {\textstyle\int} h$ is zero we have
\begin{align}
(n-1)\langle\hat{f}(\sigma)\hat{g}(\sigma),\hat{h}(\sigma)\rangle_\textup{HS}
&= (n-1) \int_{S_n^2} (f*g)(x) \overline{h(y)} \tr\sigma(xy^{-1})\nonumber\\
&= (n-1) \int_{S_n^2} (f*g)(x) \overline{h(y)} \(\sum_{i\in\Omega} 1_{x(i)=y(i)} - 1\)\nonumber\\
&= (n-1) \sum_{i\in\Omega} \int_{S_n^2} (f*g)(x) \overline{h(y)} 1_{x(i)=y(i)}\nonumber\\
&= \frac{n-1}n \sum_{i\in\Omega} \langle f*p_ig,p_ih\rangle\nonumber\\
&\sim \sum_{i\in\Omega} \langle f*p_ig,p_ih\rangle\label{secondterm}.
\end{align}
Here we define the convolution of functions $f\in L^2(S_n)$ and $u\in L^2(\Omega)$ by the same formula:
\[
f*u(\omega) = \int_{S_n} f(\pi) u(\pi^{-1}(\omega));
\]
$f*u$ is then a function defined on $\Omega$, and one may check the relation
\[
p_i(f*g) = f*p_ig.
\]
Note that the assumption that one of ${\textstyle\int} f,{\textstyle\int} g, {\textstyle\int} h$ is zero is innocuous, since changing $f$ by a constant does not change $\hat{f}(\sigma)$.
Similarly whenever ${\textstyle\int} f = 0$ we have the following remnant of Parseval's identity:
\begin{equation}
\|f\|_2^2 \geq (n-1)\|\hat{f}(\sigma)\|^2_\textup{HS} \sim \sum_{i\in\Omega} \|p_i f\|_2^2.\label{remnantparseval}
\end{equation}
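As a sanity check, the inequality implicit in~\eqref{remnantparseval} can be tested numerically in the form $\frac{n-1}{n}\sum_{i\in\Omega}\|p_if\|_2^2 \leq \|f\|_2^2$ for mean-zero $f$; the following Python sketch (illustrative only, working over $S_4$) does so for a random $f$:
\begin{verbatim}
import numpy as np
from itertools import permutations

n = 4
perms = list(permutations(range(n)))
rng = np.random.default_rng(1)
f = rng.standard_normal(len(perms))
f -= f.mean()                          # mean-zero f on S_4

def p(i):                              # pushforward p_i f as a vector on Omega
    out = np.zeros(n)
    for perm, v in zip(perms, f):
        out[perm[i]] += v
    return out * n / len(perms)

lhs = sum(np.mean(p(i)**2) for i in range(n))   # sum_i ||p_i f||_2^2
rhs = np.mean(f**2)                             # ||f||_2^2
assert (n - 1) / n * lhs <= rhs + 1e-12
print(f"((n-1)/n) sum_i ||p_i f||^2 = {(n-1)/n*lhs:.4f} <= ||f||^2 = {rhs:.4f}")
\end{verbatim}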
We can now summarize the rest of the proof. We will prove a concentration-of-measure result for the randomly rearranged inner product
\[
\langle \pi*p_ig,p_ih\rangle = \int_\Omega p_ig(\pi^{-1}(\omega)) p_ih(\omega).
\]
This result will ensure that
\[
\langle \pi*p_ig,p_ih\rangle \approx {\textstyle\int} p_ig {\textstyle\int} p_ih = {\textstyle\int} g {\textstyle\int} h
\]
with high probability, and with a tail depending on the variances $\|p_ig-{\textstyle\int} g\|_2^2$ and $\|p_ih - {\textstyle\int} h\|_2^2$ of $p_i g$ and $p_i h$ and on the entropies of $p_ig/{\textstyle\int} g$ and $p_ih/{\textstyle\int} h$. Crucially, when the variances are small there is rather strong concentration from below, unless one of the entropies is large. We will then apply the Parseval remnant~\eqref{remnantparseval} and a version of subadditivity of entropy to conclude.
\section{An inequality of Carlen, Lieb, and Loss}\label{sec:cll}
The following inequality was proved by Carlen, Lieb, and Loss~\cite{cll}.
\begin{theorem}\label{cll} Let $f_1,\dots,f_n:\Omega\to\C$ be functions. Then
\[
\int_{S_n} \prod_{i=1}^n |f_i(\pi(i))| \leq \prod_{i=1}^n \|f_i\|_2.
\]
\end{theorem}
This inequality can be viewed in at least two ways. First, as it resembles the classical Loomis--Whitney inequality, or more generally the Brascamp--Lieb inequality, it can be viewed as an inequality of Brascamp--Lieb-type for the symmetric group. In this light Theorem~\ref{cll} bears a striking resemblance to another Brascamp--Lieb-type inequality proved by Carlen, Lieb, and Loss for the sphere: see~\cite{cll-sphere}.
Theorem~\ref{cll} can also be viewed as a Hadamard-type inequality for permanents. The classical Hadamard inequality states that if $M$ is a matrix with columns $v_1,\dots,v_n\in\C^n$ then
\[
|\det(M)| \leq \prod_{i=1}^n |v_i|,
\]
where $|\cdot|$ is the usual Euclidean norm on $\C^n$. By comparison Theorem~\ref{cll} states that
\[
|\textup{perm}(M)| \leq \frac{n!}{n^{n/2}} \prod_{i=1}^n |v_i|.
\]
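Both formulations are easy to test numerically. The following Python sketch (illustrative only) checks the inequality for a random nonnegative matrix, viewing row $i$ as the function $f_i$ on $\Omega$:
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 6
F = rng.random((n, n))                 # row i is the function f_i on Omega

P = np.array(list(permutations(range(n))))          # all of S_6
lhs = np.mean(np.prod(F[np.arange(n), P], axis=1))  # int of prod_i f_i(pi(i))
rhs = np.prod(np.sqrt(np.mean(F**2, axis=1)))       # prod_i ||f_i||_2
assert lhs <= rhs + 1e-12
print(f"int prod_i f_i(pi(i)) = {lhs:.4e} <= prod_i ||f_i||_2 = {rhs:.4e}")
# equivalently perm(F) = n! * lhs <= (n!/n^(n/2)) * prod of row norms
\end{verbatim}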
In this section we deduce two consequences of Theorem~\ref{cll}, neither of them original: a version of entropy subadditivity for the symmetric group, and a concentration-of-measure result for a statistic of Hoeffding.
\subsection{Entropy subadditivity for the symmetric group}
Given $f:S_n\to[0,\infty)$ with $\alpha = {\textstyle\int} f$ we define the entropy of $f$ to be
\[
S(f) = \int_{S_n} (f/\alpha) \log (f/\alpha).
\]
To be more precise we might call $S(f)$ the Kullback--Leibler divergence of $(f/\alpha)\,d\pi$ from uniform, but we will use the shorter term for simplicity. Similarly, given $g:\Omega\to[0,\infty)$ with $\beta = {\textstyle\int} g$ we define
\[
S(g) = \int_\Omega (g/\beta) \log (g/\beta).
\]
All logarithms are of course taken to the natural base.
In the coup de gr\^ace of our argument we will apply the following entropy-subadditivity inequality.
\begin{theorem}[Subadditivity of entropy]\label{subadditivity}
Suppose $f:S_n\to [0,\infty)$. Then
\[
S(f) \geq \frac12 \sum_{i\in\Omega} S(p_i f).
\]
\end{theorem}
Note that this is much stronger than what one gets from just applying usual entropy subadditivity to $f$ as a function $[n]^n\to[0,\infty)$.
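The inequality can likewise be spot-checked numerically; the following Python sketch (illustrative only) verifies it for a random nonnegative $f$ on $S_4$:
\begin{verbatim}
import numpy as np
from itertools import permutations

n = 4
perms = list(permutations(range(n)))
rng = np.random.default_rng(2)
f = rng.random(len(perms))             # a random nonnegative f on S_4

def S(g):                              # entropy against the uniform measure
    r = np.asarray(g, float) / np.mean(g)
    return np.mean(r * np.log(r))      # fine here since g > 0 pointwise

def p(i):                              # pushforward p_i f on Omega
    out = np.zeros(n)
    for perm, v in zip(perms, f):
        out[perm[i]] += v
    return out * n / len(perms)

assert S(f) >= 0.5*sum(S(p(i)) for i in range(n)) - 1e-12
print("S(f) >= (1/2) sum_i S(p_i f) holds for this sample")
\end{verbatim}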
Theorems~\ref{cll} and~\ref{subadditivity} are more closely related than it may appear, as shown in some generality by Carlen and Cordero-Erausquin~\cite{cc}. We repeat the rather simple deduction of Theorem~\ref{subadditivity} from Theorem~\ref{cll} here for the convenience of the reader.
\begin{proof}[Proof of Theorem~\ref{subadditivity}]
Put $\alpha = {\textstyle\int} f$. Define $f':S_n\to[0,\infty)$ by
\[
f'(\pi) = \prod_{i=1}^n p_if(\pi(i))^{1/2},
\]
and put $\alpha' = {\textstyle\int} f'$. Then by Jensen's inequality we have
\begin{align}
0
&\leq \int_{S_n} (f/\alpha) \log\(\frac{f/\alpha}{f'/\alpha'}\)\nonumber\\
&= S(f) - \frac12 \sum_{i=1}^n \int_{S_n} (f/\alpha) \log p_if(\pi(i)) + \log \alpha'\nonumber\\
&= S(f) - \frac12 \sum_{i=1}^n \int_{\Omega} (p_if/\alpha) \log p_if + \log\alpha'\nonumber\\
&= S(f) - \frac12 \sum_{i=1}^n S(p_if) - \frac{n}{2} \log\alpha + \log \alpha'.\label{jensen}
\end{align}
On the other hand by Theorem~\ref{cll} we have
\[
\alpha' = \int_{S_n} \prod_{i=1}^n p_if(\pi(i))^{1/2} \leq \prod_{i=1}^n \(\int_\Omega p_i f\)^{1/2} = \alpha^{n/2},
\]
so $\log \alpha' \leq \frac{n}{2} \log \alpha$ and the theorem follows from~\eqref{jensen}.
\end{proof}
\subsection{Concentration for Hoeffding's statistic}
Given an $n\times n$ complex matrix $(a_{ij})$ we consider the sum
\[
X = \sum_{i=1}^n a_{i\pi(i)},
\]
where $\pi\in S_n$ is a random permutation. The study of such sums goes back at least to Hoeffding~\cite{hoeffding}, who proved a central limit theorem for $X$ under suitable hypotheses, and so we refer to $X$ as \emph{Hoeffding's statistic}. More recently work on Hoeffding's statistic has been more or less wedded to Stein's method of exchangeable pairs, starting with Bolthausen's~\cite{bolthausen} Berry--Esseen-type estimate for the error in Hoeffding's theorem, and following with the work of Chatterjee~\cite{chatterjee}, who proved the first nonasymptotic concentration-type result for such sums.
In the next section we will need the following Bernstein-type concentration inequality for Hoeffding's statistic, which was proved in the more general context of random matrix theory by Mackey, Jordan, Chen, Farrell, and Tropp~\cite[Corollary~10.3]{mjcft}, using an extension of Chatterjee's method.
\begin{theorem}\label{bernstein}
Let $(a_{ij})$ be an $n\times n$ matrix such that $\sum_{i,j=1}^n a_{ij} = 0$ and such that $|a_{ij}|\leq M$ for each $i,j$. Let $v = \frac1n \sum_{i,j=1}^n |a_{ij}|^2$. Let $\pi\in S_n$ be chosen uniformly at random, and let
\[
X = \sum_{i=1}^n a_{i\pi(i)}.
\]
Then for all $t>0$ we have
\[
\P(|X|>t) \leq 2 \exp\( \frac{-ct^2}{v + Mt}\),
\]
where $c$ is some positive constant.
\end{theorem}
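The concentration phenomenon is easy to see in simulation. The following Python sketch (illustrative only; the unspecified constant $c$ is set to $1$ in the displayed bound, so the comparison is indicative rather than rigorous) samples Hoeffding's statistic for a centred Gaussian matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, trials = 200, 20000
a = rng.standard_normal((n, n))
a -= a.mean()                          # enforce sum_{ij} a_ij = 0
M = np.abs(a).max()
v = (a**2).sum() / n

X = np.array([a[np.arange(n), rng.permutation(n)].sum()
              for _ in range(trials)])
for t in np.sqrt(v) * np.array([2.0, 4.0, 6.0]):
    emp = np.mean(np.abs(X) > t)
    bnd = 2*np.exp(-t*t/(v + M*t))     # the bound with c = 1 (illustrative)
    print(f"t = {t:7.2f}: empirical tail {emp:.5f}, bound {bnd:.5f}")
\end{verbatim}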
The purpose of this subsection is to give another proof of the above theorem, not relying on Stein's method, but instead relying on the Carlen--Lieb--Loss inequality Theorem~\ref{cll}. The main value of doing so is to reduce the reliance of the present paper on results proved elsewhere, but it may also be of independent interest.
\begin{proof}[Proof of Theorem~\ref{bernstein}]
By replacing $(a_{ij})$ with $(a_{ij} - \frac1n \sum_{j'} a_{ij'})$ if necessary and slightly reducing the constant $c$ we may assume that $\sum_j a_{ij} = 0$ for each $i$: note that this operation does not change $X$, it can at worst double $\max |a_{ij}|$, and it can only reduce $v$. We may also assume that $(a_{ij})$ is real, for otherwise we may just deal with the real and imaginary parts separately.
Now for $\lambda>0$ we have, by Theorem~\ref{cll},
\begin{align}
\E\exp(\lambda X)
&=\int_{S_n} \prod_{i=1}^n \exp(\lambda a_{i\pi(i)})\nonumber\\
&\leq \prod_{i=1}^n \(\frac1n \sum_{j=1}^n \exp(2\lambda a_{ij})\)^{1/2}.\label{expmom}
\end{align}
Define
\[
h(x) = \frac{e^x - 1 - x}{x^2} = \sum_{k=0}^\infty \frac{x^k}{(k+2)!}.
\]
Then
\begin{align*}
\frac1n \sum_{j=1}^n \exp(2\lambda a_{ij})
&= \frac1n \sum_{j=1}^n \( 1 + 2\lambda a_{ij} + 4\lambda^2 a_{ij}^2 h(2\lambda a_{ij})\)\\
&= 1 + \frac1n \sum_{j=1}^n 4\lambda^2 a_{ij}^2 h(2\lambda a_{ij})\\
&\leq 1 + 4\lambda^2 \(\frac1n \sum_{j=1}^n a_{ij}^2 \) h(2\lambda M)\\
&\leq \exp\(4\lambda^2 \(\frac1n \sum_{j=1}^n a_{ij}^2\) h(2\lambda M)\),
\end{align*}
so from~\eqref{expmom} and the simple bound
\[
h(x) \leq \sum_{k=0}^\infty x^k = \frac1{1-x} \qquad(0<x<1),
\]
we have
\[
\E\exp(\lambda X) \leq \exp\(\frac{2\lambda^2 v}{1-2\lambda M}\)
\]
for $2\lambda M<1$. The claimed result now follows by bounding
\[
\P(X>t) = \P(\exp(\lambda X) > e^{\lambda t}) \leq e^{-\lambda t} \E\exp(\lambda X) \leq \exp\(-\lambda t + \frac{2\lambda^2 v}{1 - 2\lambda M}\)
\]
and putting
\[
\lambda = \frac{t}{4v + 2Mt},
\]
and similarly bounding $\P({-X}>t)$.
\end{proof}
The reader familiar with the usual Bernstein inequality may recognize that from~\eqref{expmom} onwards all we have done is reproduce the usual proof. Indeed, if $Y$ is the sum of $n$ independent random variables, the $i$th of which takes values $a_{i1},\dots, a_{in}$ each with probability $1/n$, then~\eqref{expmom} states that
\[
\E \exp(\lambda X) \leq \(\E\exp(2\lambda Y)\)^{1/2},
\]
so it suffices to extract from the proof of the usual Bernstein inequality an upper bound for $\E\exp(2\lambda Y)$.
\section{Refined concentration for rearrangements}
In this section we prove a refined concentration estimate for Hoeffding's statistic
\[
X = \sum_{i=1}^n a_{i\pi(i)}
\]
under the hypothesis that $a_{ij} = u_i v_j$ for some $(u_i)$ and $(v_i)$ for which we have some sort of entropy control. Moreover we are particularly interested in the concentration from below, which in certain regimes we expect to be stronger than the concentration from above.
\begin{theorem}\label{thm:rearrangement-concentration}
Let $f:S_n\to[0,1]$ be a function with ${\textstyle\int} f = \alpha$. Let $g_1,g_2:\Omega\to[0,1]$ be functions with ${\textstyle\int} g_1 = \beta$ and ${\textstyle\int} g_2 = \gamma$. Then
\begin{align*}
- \(\langle f*g_1,g_2\rangle - \alpha \beta\gamma \)
&\lesssim \frac{\alpha \|g_1 - \beta \|_2 \|g_2 - \gamma\|_2 \log n}{n^{1/2}}\\
&\qquad+ \frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2}+\gamma^{1/2}) S(g_1)^{1/2} S(g_2)^{1/2} (\log n)^{5/2}}{n^{1/2}}\\
&\qquad+ O(n^{-99}).
\end{align*}
\end{theorem}
\begin{lemma}\label{lem:rearrangement-concentration}
Let $h_1,h_2:\Omega\to[0,1]$ be functions such that $h_i$ is supported on a set $H_i$ of density $\delta_i$, and such that $1/2\leq h_i\leq 1$ on $H_i$. Let $f:S_n\to[0,1]$ be a function with ${\textstyle\int} f = \alpha$. Then if $\delta_1\delta_2\gtrsim n^{-1}$ we have
\[
|\langle f*h_1,h_2\rangle - \alpha {\textstyle\int} h_1 {\textstyle\int} h_2| \lesssim \frac{\alpha \delta_1^{1/2} \delta_2^{1/2} \log n}{n^{1/2}} + O(n^{-100}),
\]
while if $\delta_1\delta_2 \lesssim n^{-1}$ we have
\[
- \alpha\delta_1\delta_2 \lesssim \(\langle f\ast h_1,h_2\rangle - \alpha {\textstyle\int} h_1 {\textstyle\int} h_2\) \lesssim \min\(\frac{\alpha\log n}{n} + O(n^{-100}),\delta_1\delta_2\).
\]
\end{lemma}
To explain the two cases appearing in Lemma~\ref{lem:rearrangement-concentration}, let us momentarily think of $h_i$ as the indicator of $H_i$. The inner product $\langle f*h_1,h_2\rangle/\alpha$ is then the density of a random intersection $\pi(H_1)\cap H_2$, where $\pi$ is chosen randomly according to $f/\alpha$. If $H_1$ and $H_2$ are not too small then we expect $|\pi(H_1)\cap H_2|$ to be highly concentrated around $\delta_1\delta_2 n$ with a Gaussian-type tail: this is the first case in the lemma. However if $H_1$ and $H_2$ are small then $|\pi(H_1)\cap H_2|$ has a Poisson-type distribution, so we expect $\pi(H_1)\cap H_2$ to be nonempty with probability about $\delta_1\delta_2$, and in any case almost surely bounded in size by about $\log n$: this is the second case in the lemma. The lower bound in the second case is trivial.
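The dichotomy is easy to observe in simulation. The following Python sketch (illustrative parameters only) samples $|\pi(H_1)\cap H_2|$ for a uniformly random $\pi$ in a dense and a sparse regime:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, trials = 2000, 5000

for d in (0.1, 0.01):                  # dense and sparse regimes
    k = int(d * n)                     # |H_1| = |H_2| = k, H_2 = {0,...,k-1}
    # pi(H_1) for uniform pi is a uniform k-subset of the ground set
    counts = np.array([(rng.choice(n, k, replace=False) < k).sum()
                       for _ in range(trials)])
    print(f"delta = {d}: mean |pi(H1) cap H2| = {counts.mean():.3f} "
          f"(delta^2 n = {d*d*n:.2f}), P(nonempty) = {(counts > 0).mean():.3f}")
\end{verbatim}
In the dense regime the intersection size concentrates around $\delta_1\delta_2 n$, while in the sparse regime it is Poisson-like and nonempty with probability of order $\delta_1\delta_2 n$.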
\begin{proof}
Apply Theorem~\ref{bernstein} to $a_{ij} = (h_1(i)-{\textstyle\int} h_1)(h_2(j) - {\textstyle\int} h_2)$, noting that $|a_{ij}|\leq 1$ and
\[
\frac1n \sum_{i,j=1}^n a_{ij}^2 = \frac1n \sum_{i=1}^n (h_1(i)-{\textstyle\int} h_1)^2 \sum_{j=1}^n (h_2(j)-{\textstyle\int} h_2)^2 \lesssim \delta_1\delta_2 n.
\]
The result is that
\[
\P(n|\langle \pi\ast(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| > t) \leq 2 \exp\( \frac{-ct^2}{\delta_1\delta_2n + t} \).
\]
Thus for every $t>0$ we have
\[
| \langle f*(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| \lesssim \frac{\alpha t}{n} + 2 \exp\( \frac{-ct^2}{\delta_1\delta_2n + t} \).
\]
For the first part of the lemma put $t = C\delta_1^{1/2}\delta_2^{1/2}n^{1/2}\log n$ for some constant $C$. Then we obtain
\[
| \langle f*(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| \lesssim_C \frac{\alpha \delta_1^{1/2} \delta_2^{1/2} \log n}{n^{1/2}} + 2 \exp\( \frac{-c C^2 (\log n)^2}{1 + (\delta_1\delta_2n)^{-1/2} C \log n} \).
\]
If $\delta_1\delta_2 \gtrsim n^{-1}$ and $C$ is sufficiently large it follows that
\[
| \langle f*(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| \lesssim \frac{\alpha\delta_1^{1/2}\delta_2^{1/2} \log n}{n^{1/2}} + O(n^{-100}),
\]
as claimed.
For the second part of the lemma put $t = C \log n$ for some constant $C$. Then we obtain
\[
| \langle f*(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| \lesssim_C \frac{\alpha \log n}{n} + 2 \exp\( \frac{-c C^2 (\log n)^2}{\delta_1\delta_2 n + C \log n} \).
\]
Now if $\delta_1\delta_2\lesssim n^{-1}$ and $C$ is sufficiently large it follows that
\[
| \langle f*(h_1-{\textstyle\int} h_1),(h_2-{\textstyle\int} h_2)\rangle| \lesssim \frac{\alpha \log n}{n} + O(n^{-100}).
\]
The remaining inequalities asserted by the lemma are trivial: just note that
\[
\langle f*h_1,h_2\rangle \leq \langle 1*h_1,h_2\rangle = {\textstyle\int} h_1{\textstyle\int} h_2 \lesssim \delta_1\delta_2,
\]
and
\[
\alpha{\textstyle\int} h_1{\textstyle\int} h_2 \lesssim \alpha\delta_1\delta_2.\qedhere
\]
\end{proof}
We will deduce Theorem~\ref{thm:rearrangement-concentration} from Lemma~\ref{lem:rearrangement-concentration} using a dyadic decomposition, but first we need two basic entropy computations.
\begin{lemma}\label{lem:entropy-low}
Let $g:\Omega\to[0,1]$ be a function such that ${\textstyle\int} g = \beta$ and such that $g\leq \beta - t$ on a set of density at least $\delta$, where $t,\delta>0$. Then
\[
S(g) \gtrsim \frac{\delta t^2}{\beta^2}.
\]
\end{lemma}
\begin{proof}
We must have $t\leq \beta$, so by replacing $t$ with $t/100$ if necessary we may assume that $t/\beta \leq 1/100$. Similarly, by reducing $\delta$ if necessary we may assume that $\delta\leq 1/2$ and that $\delta n$ is an integer. Now by convexity $S(g)$ is minimized under the stated conditions when $g = \beta-t$ on a set of density $\delta$ and otherwise equal to $\beta + \frac{\delta}{1-\delta} t$, and in this case
\[
S(g) = \delta \(1-\frac{t}{\beta}\) \log\(1 - \frac{t}{\beta}\) + (1-\delta) \(1 + \frac{\delta}{1-\delta} \frac{t}{\beta}\) \log\(1 + \frac{\delta}{1-\delta}\frac{t}{\beta}\).
\]
By inserting the Taylor expansion
\begin{equation}\label{taylor}
(1+x)\log(1+x) = x + x^2/2 + O(x^3)
\end{equation}
we thus have
\[
S(g) \geq \frac12 \frac{\delta}{1-\delta} \frac{t^2}{\beta^2} + O\(\delta \frac{t^3}{\beta^3}\) \gtrsim \frac{\delta t^2}{\beta^2}.
\]
The last inequality follows from our assumption $t/\beta\leq 1/100$.
\end{proof}
\begin{lemma}\label{lem:entropy-high}
Let $g:\Omega\to[0,1]$ be a function such that ${\textstyle\int} g = \beta$ and such that $g \geq \beta + t$ on a set of density at least $\delta$, where $t,\delta>0$. Then
\[
S(g) \gtrsim \min\(\frac{\delta t}{\beta}, \frac{\delta t^2}{\beta^2}\) \geq \frac{\delta t^2}{\beta}.
\]
\end{lemma}
\begin{proof}
We must have $(\beta+t)\delta \leq {\textstyle\int} g = \beta$, i.e.,
\[
\frac{\delta}{1-\delta} \frac{t}{\beta}\leq 1,
\]
so by replacing $t$ with $t/100$ if necessary we may assume that
\[
\frac{\delta}{1-\delta} \frac{t}{\beta}\leq \frac1{100}.
\]
As before we may also assume that $\delta\leq 1/2$ and that $\delta n$ is an integer. Now by convexity $S(g)$ is minimized under the stated conditions when $g = \beta+t$ on a set of density $\delta$ and otherwise equal to $\beta - \frac{\delta}{1-\delta} t$, and in this case
\[
S(g) = \delta \(1+\frac{t}{\beta}\) \log\(1 + \frac{t}{\beta}\) + (1-\delta) \(1 - \frac{\delta}{1-\delta} \frac{t}{\beta}\) \log\(1 - \frac{\delta}{1-\delta}\frac{t}{\beta}\).
\]
By inserting~\eqref{taylor} we thus have
\[
S(g) \geq \delta \(1+\frac{t}{\beta}\) \log\(1 + \frac{t}{\beta}\) - \delta \frac{t}{\beta} + \frac12 \frac{\delta^2}{(1-\delta)} \frac{t^2}{\beta^2} + O\(\frac{\delta^3t^3}{\beta^3}\).
\]
Now we separate into cases depending on the size of $t/\beta$. If $t/\beta \geq 1$ then we have
\[
S(g) \geq \delta \frac{t}{\beta} (2\log 2-1) + O\(\frac{\delta^2t^2}{\beta^2}\) \gtrsim \frac{\delta t}{\beta}.
\]
On the other hand if $t/\beta \leq 1$ then by reducing $t$ if necessary we may assume that $t/\beta \leq 1/100$, and then by inserting~\eqref{taylor} again we have
\[
S(g) \geq \frac12 \frac{\delta}{1-\delta} \frac{t^2}{\beta^2} + O\(\frac{\delta t^3}{\beta^3}\) \gtrsim \frac{\delta t^2}{\beta^2}.
\]
As before we used our assumption about the size of $\delta t/\beta$ or $t/\beta$ to justify the absorption of the error terms.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:rearrangement-concentration}]
Write
\[
g_i - {\textstyle\int} g_i = \sum_s g_i^s + O(n^{-100}) = \sum_s (g_i^s - {\textstyle\int} g_i^s) + O(n^{-100}),
\]
where $s$ ranges over all $s$ of the form $\pm 2^{-k}$ for which $n^{-100}\leq |s|\leq 1$, and where $g_i^s$ is defined to be equal to $g_i-{\textstyle\int} g_i$ where $g_i-{\textstyle\int} g_i$ has the same sign as $s$ and $|s|/2 < |g_i - {\textstyle\int} g_i| \leq |s|$ and zero elsewhere. Then
\[
\langle f*g_1, g_2\rangle - \alpha\beta\gamma = \sum_{s,t} \(\langle f*g_1^s,g_2^t\rangle - \alpha {\textstyle\int} g_1^s {\textstyle\int} g_2^t \) + O(n^{-100}).
\]
For each $s,t$ we apply Lemma~\ref{lem:rearrangement-concentration} with $h_1 = g_1^s/s$ and $h_2=g_2^t/t$. Let $\delta_1^s$ be the density of points where $g_1-{\textstyle\int} g_1$ has the same sign as $s$ and $|s|/2 < |g_1 - {\textstyle\int} g_1| \leq |s|$ and let $\delta_2^t$ be the density of points where $g_2-{\textstyle\int} g_2$ has the same sign as $t$ and $|t|/2 < |g_2-{\textstyle\int} g_2|\leq |t|$. If $\delta_1^s \delta_2^t \gtrsim 1/n$ then we get the bound
\[
\left|\langle f*g_1^s, g_2^t\rangle - \alpha {\textstyle\int} g_1^s {\textstyle\int} g_2^t\right| \lesssim \frac{\alpha |s| |t| (\delta_1^s)^{1/2} (\delta_2^t)^{1/2} \log n}{n^{1/2}} + O(n^{-100}),
\]
and the total contribution from all such cases is bounded by
\begin{align*}
&\sum_{s,t} \(\frac{ \alpha |s| |t| (\delta_1^s)^{1/2} (\delta_2^t)^{1/2} \log n}{n^{1/2}} + O(n^{-100})\)\\
&\qquad\lesssim \frac{\alpha \|g_1-{\textstyle\int} g_1\|_2 \|g_2-{\textstyle\int} g_2\|_2 \log n}{n^{1/2}} + O(n^{-99}).
\end{align*}
Now consider the cases in which $\delta_1^s\delta_2^t \lesssim 1/n$ and in which $s$ and $t$ have the same sign. By Lemma~\ref{lem:rearrangement-concentration} we have
\[
-\(\langle f\ast g_1^s,g_2^t\rangle - \alpha {\textstyle\int} g_1^s {\textstyle\int} g_2^t\) \lesssim \alpha |s| |t| \delta_1^s \delta_2^t \lesssim \frac{\alpha |s| |t| (\delta_1^s)^{1/2} (\delta_2^t)^{1/2}}{n^{1/2}},
\]
so the total contribution from these cases is again acceptable.
Finally consider the cases in which $\delta_1^s\delta_2^t\lesssim 1/n$ and in which $s$ and $t$ have opposite sign, say $s<0$ and $t>0$. By Lemmas~\ref{lem:entropy-low} and~\ref{lem:entropy-high} we have
\[
S(g_1) \gtrsim \frac{\delta_1^s s^2}{\beta^2}
\]
and
\[
S(g_2) \gtrsim \frac{\delta_2^t t^2}{\gamma},
\]
so
\[
S(g_1)^{1/2} S(g_2)^{1/2} \gtrsim \frac{|s| |t| (\delta_1^s \delta_2^t)^{1/2}}{\beta\gamma^{1/2}}.
\]
Thus by Lemma~\ref{lem:rearrangement-concentration} we can bound
\begin{align*}
|\langle f*g_1^s,g_2^t\rangle - \alpha {\textstyle\int} g_1^s {\textstyle\int} g_2^t |
&\lesssim |s| |t| \(\frac{\alpha \log n}{n}\)^{1/2} \(\delta_1^s \delta_2^t\)^{1/2} + O(n^{-100})\\
&\lesssim \frac{\alpha^{1/2} \beta \gamma^{1/2} S(g_1)^{1/2} S(g_2)^{1/2} (\log n)^{1/2}}{n^{1/2}} + O(n^{-100}).
\end{align*}
If $s>0$ and $t<0$ then we get the analogous bound
\[
|\langle f*g_1^s,g_2^t\rangle - \alpha {\textstyle\int} g_1^s {\textstyle\int} g_2^t |
\lesssim \frac{\alpha^{1/2} \beta^{1/2} \gamma S(g_1)^{1/2} S(g_2)^{1/2} (\log n)^{1/2}}{n^{1/2}} + O(n^{-100}).
\]
The number of choices of $s$ and $t$ is bounded by $(\log n)^2$, so the total contribution from all these cases is bounded by
\[
\frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2} + \gamma^{1/2})S(g_1)^{1/2} S(g_2)^{1/2} (\log n)^{5/2}}{n^{1/2}} + O(n^{-99}).\qedhere
\]
\end{proof}
\section{Bounding the second term in~\eqref{ellisgreen}}
\begin{proof}[Proof of Theorem~\ref{main}]
Let $f=1_X$, $g=1_Y$, and $h=1_Z$, where $X,Y,Z\subset A_n$ have densities $\alpha,\beta,\gamma \geq n^{-O(1)}$ respectively. Then the first term in~\eqref{ellisgreen} is
\[
\alpha\beta\gamma,
\]
the third term is bounded by
\[
c\alpha^{1/2}\beta^{1/2}\gamma^{1/2}/n,
\]
and the second term is, by~\eqref{secondterm} and Theorem~\ref{thm:rearrangement-concentration},
\begin{align*}
(n-1)&\langle\hat{f}(\sigma)\hat{g}(\sigma),\hat{h}(\sigma)\rangle_\textup{HS}\\
&\sim \sum_{i\in\Omega} \langle f*(p_ig-\beta),(p_ih-\gamma)\rangle\\
&\gtrsim - \frac{\alpha \log n}{n^{1/2}} \sum_{i\in\Omega} \|p_i g - \beta \|_2 \|p_i h - \gamma\|_2 \\
&\qquad - \frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2}+\gamma^{1/2})(\log n)^{5/2}}{n^{1/2}} \sum_{i\in\Omega} S(p_i g)^{1/2} S(p_i h)^{1/2}\\
&\qquad + O(n^{-98}).
\end{align*}
By Cauchy--Schwarz and the Parseval remnant~\eqref{remnantparseval}, the first term here is bounded in magnitude by
\[
\frac{\alpha\log n}{n^{1/2}} \(\sum_{i\in\Omega} \|p_ig - \beta\|_2^2\)^{1/2} \(\sum_{i\in\Omega} \|p_ih - \gamma\|_2^2\)^{1/2} \lesssim \frac{\alpha \beta^{1/2} \gamma^{1/2} \log n}{n^{1/2}}.
\]
Similarly, by Cauchy--Schwarz and subadditivity of entropy (Theorem~\ref{subadditivity}) the second term is bounded in magnitude by
\begin{align*}
&\frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2}+\gamma^{1/2})(\log n)^{5/2}}{n^{1/2}} \(\sum_{i\in\Omega} S(p_i g)\)^{1/2} \(\sum_{i\in\Omega} S(p_i h)\)^{1/2}\\
&\qquad \lesssim \frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2}+\gamma^{1/2})(\log n)^{5/2}}{n^{1/2}} (\log\beta^{-1})^{1/2} (\log\gamma^{-1})^{1/2}\\
&\qquad \lesssim \frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}(\beta^{1/2}+\gamma^{1/2})(\log n)^{7/2}}{n^{1/2}}.
\end{align*}
Thus we deduce that $\langle f*g,h\rangle \geq (1+o(1))\alpha\beta\gamma$ provided that
\begin{align*}
\frac{\alpha^{1/2}\beta^{1/2}\gamma^{1/2}}{n} &\ll \alpha\beta\gamma,\\
\frac{\alpha\beta^{1/2}\gamma^{1/2}(\log n)}{n^{1/2}} &\ll \alpha\beta\gamma,\\
\frac{\alpha^{1/2}\beta\gamma^{1/2}(\log n)^{7/2}}{n^{1/2}} &\ll \alpha\beta\gamma,\\
\frac{\alpha^{1/2}\beta^{1/2}\gamma(\log n)^{7/2}}{n^{1/2}} &\ll \alpha\beta\gamma,~\text{and}\\
n^{-98} &\ll \alpha\beta\gamma.
\end{align*}
In other words what we require is that
\[
\min(\alpha\beta,\alpha\gamma,\beta\gamma) \gg (\log n)^7/n.\qedhere
\]
\end{proof}
\section{Open questions}
The most obvious outstanding open question is whether the logarithms can be removed from Theorem~\ref{main}. Specifically, does the largest product-free subset of $A_n$ have density $O(n^{-1/2})$? Can anything be said about the extremal examples? It is possible that all near-extremizers look roughly like the first example in Section~\ref{examples}, or its inverse, but this may be difficult to quantify, and even more difficult to prove.
Another obvious outstanding open question is whether a one-sided product-mixing phenomenon persists in other groups for densities lower than that given by Theorem~\ref{gowers}. For example take $G=\SL_2(p)$. For this group $m\sim p$. By Theorem~\ref{gowers} there is two-sided product mixing for sets of density at least $p^{-1/3}$, by Proposition~\ref{prop:many} there is no two-sided product mixing for sets of density less than $p^{-1/3}$, and by Proposition~\ref{prop:ked} there is no product mixing at all below density $p^{-1/2}$. Do we have one-sided product mixing for sets of densities between $p^{-1/2}$ and $p^{-1/3}$?
Another great question, which has been asked before by both Kedlaya~\cite{kedlayaAMM} and Gowers~\cite{gowers}, is about the product-mixing properties of $\textup{SU}(n)$. To make the question concrete, what is the measure of the largest product-free subset of $\textup{SU}(n)$? By straightforward adaptation of Theorem~\ref{gowers} it is at most $O(n^{-1/3})$, but the only lower bounds we know have the form $c^n$ for some $c<1$. Apart from being an interesting and natural question in its own right, answering this question may be relevant for understanding the product-mixing behaviour of groups not having a permutation representation of dimension $\sim m$.
\section{Introduction}
Wholesale electricity markets in many jurisdictions use a two-settlement structure: a day-ahead market for bulk power
transactions and a real-time market for fine-grain supply-demand balancing. Forecast errors in the day-ahead market necessitate subsequent balancing in the real-time market. With deeper penetrations of wind and solar generation, markets must be able to contend with greater levels of uncertainty stemming from renewable intermittency. Forecast errors increase, and balancing supply and demand becomes more challenging. The traditional approach of balancing using conventional fossil fuel based reserves is untenable: it is expensive and defeats the emissions benefits of renewables. Balancing the variability of intermittent renewable generation through demand flexibility is a far better alternative to reserve generation, as it produces no emissions and consumes no resources. This is recognized and encouraged by the Federal Energy Regulatory Commission (FERC) through its Order 745, which mandates that demand response be compensated on par with the conventional generation that supplies grid power \cite{federal2012assessment}. Commercial buildings, light industry, and households are flexible in their electricity consumption. These agents can be induced to yield this flexibility in exchange for monetary compensation. This paper explores trading demand response assets within a traditional two-settlement market structure.
We consider the setting where a {\it Load Serving Entity} (LSE) supplies electricity to a collection of consumers at the delivery time $T$. An aggregator manages the aggregate load flexibility of these consumers. The LSE interacts directly with the aggregator and can request a certain aggregate load reduction which will be reliably produced at the delivery time. The LSE can purchase bulk power in the day-ahead market and can also buy balancing
power in the real-time market. It also has access to zero marginal cost renewable generation.
We consider the situation where excess renewable generation is spilled, and cannot be sold back into the real-time market.
Other generalizations of our results are possible, but we choose to explore the simplest situation.
{\em When should demand response assets be traded?}
Well in advance of the delivery time, the LSE has poor forecasts of its renewable generation and of clearing
prices in the real-time market. So the LSE prefers to delay its demand response request close to the delivery time.
Conversely, the aggregator prefers to receive any load curtailment requests well before the delivery time. This
affords its client consumers sufficient lead time to organize their electricity use and cede their demand reduction.
These considerations argue that demand response assets should be traded in an intermediate market as a
recourse between the day-ahead and real-time markets.
{\em What is an appropriate mechanism for the intermediate time trading of demand response assets?}
Economic orthodoxy argues in favor of an idealized spot market with contingent prices from the perspective of efficiency.
In this intermediate spot market, trading takes place after counter-parties digest all information that is revealed.
Therefore, the clearing prices are {\em contingent} on the realized information.
\blue{While the spot market is efficient, it has two main drawbacks: (a) pricing is typically very volatile and does not
offer guaranteed income to demand response assets to compensate for yielding their load flexibility and for the associated capital costs,
and (b) an intermediate spot market requires organized infrastructure and regulatory approval which can be very expensive.}
To overcome these difficulties, we propose to trade demand response assets using call options.
In our scheme, the LSE buys a number of call option contracts from the aggregator at time $t_0$,
coincident with the gate closure of the day-ahead market.
It pays an option price $\pi^o$ per contract. Each call option contract affords the LSE the right, but not the obligation, to
receive one unit of load reduction from the aggregator. These options expire at the intermediate time $t_1$,
by which time they must be exercised or forfeited. To exercise these options, the LSE must pay the aggregator
the strike price $\pi^{sp}$ per unit of load reduction. The strike price is not contingent; it is fixed and known at time $t_0$.
Payment from the sale of option contracts provides a guaranteed income to flexible loads
for their demand response {\em capability}. Subsequent payment from the exercise of option contracts compensates loads for the {\em provision} of demand response. Since option contracts can be viewed as private over-the-counter transactions
between the LSE and the aggregator, our scheme does not require regulatory blessing or organized market infrastructure.
\begin{figure*}[htb]
\centering
\begin{tikzpicture}[xscale=0.4, yscale=0.35]
\draw [black, fill=black, fill opacity = 0.1, thick] (-9,4) rectangle (-4,8);
\node [align=center] at (-6.5,6) {Flexible \\ Loads};
\draw [<->,very thick] (-4,6) -- (2,6);
\node [align=center] at (-1.5,6.7) {Incentives};
\draw [black, fill=black, fill opacity = 0.1, thick] (2,4) rectangle (7,8);
\node [align=center] at (4.5,6) {Aggregator};
\draw [<->,very thick] (7,6) -- (11,6);
\draw [black, fill=black, fill opacity = 0.1, thick] (11,4) rectangle (15,8);
\node [align=center] at (13,6) {LSE};
\draw [<->,very thick] (15,6) -- (20,6);
\draw [black, fill=black, fill opacity = 0.1, thick] (20,4) rectangle (27,8);
\node [align=center] at (23.5,6) {DAM \& RTM \\ Markets};
\draw [black, fill=red, fill opacity = 0.01, thick] (1,3) rectangle (16,10);
\node [align=center] at (9,9) {social planner};
\node [align=center] at (9,6.7) {Options};
\node [align=center] at (18,6.75) {$q, q^{rt}$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5]
{\small
\draw [->,very thick,black] (0,-0.25) -- (28,-0.25);
\node [align=center] at (27.5,-1) {(time $t$)};
\node [align=left] at (4.2,1.5) {buy DAM power $q$ at $\pi^{da}$ \\ buy $x$ options at $\pi^o$ \\ based on wind forecast $f_0$};
\node [align=center] at (3,-1) {$t_0$};
\node [align=center] at (13,1.5) {exercise \\ $y$ options at $\pi^{sp}$ \\ based on wind forecast $f_1$ };
\node [align=center] at (13,-1) {$t_1$};
\node [align=right] at (24,1.5) {renewables $w$ revealed \\ buy RTM power $q^{rt}$ at $\pi^{rt}$ \\ to balance power at delivery};
\node [align=center] at (24,-1) {$T$};
\node [align=center] at (3,-0.25) {$\bullet$};
\node [align=center] at (13,-0.25) {$\bullet$};
\node [align=center] at (24,-0.25) {$\bullet$};}
\end{tikzpicture}
\caption{Players, interactions, and decision time-line.} \label{fig:timeline}
\end{figure*}
\subsection{Our Contributions}
Our principal contributions are:
\begin{itemize}
\item First, we consider optimal energy scheduling from the perspective of a social planner.
We formulate this as a three stage optimization problem and characterize the optimal decisions at each stage: the optimal energy purchase in the day-ahead market, the optimal demand response (or load curtailment) decision at the intermediate stage,
and the balancing energy purchase in the real-time market. This serves as a benchmark for evaluating other market designs.
\item Second, we consider an intermediate spot market with contingent pricing. We study the interactions of the LSE and the aggregator in a spot-market. We show that there exists a competitive equilibrium, and the equilibrium is socially optimal, i.e., it realizes the same system cost as the benchmark.
\item Third, we study the options market for the LSE and the aggregator. We show that under some conditions, a competitive equilibrium always exists, and it is the optimal solution to a convex optimization problem. We compare the efficiency of the equilibrium for the options market and the spot market, and show that the options market is not necessarily socially optimal.
We then design the optimal strike price, which minimizes the welfare gap at the competitive equilibrium.
\end{itemize}
\subsection{Related Work}
There is extensive literature on demand response and on managing the uncertainty associated with renewable integration \cite{li2011optimal, roscoe2010supporting, samadi2014real, chen2012real, wu2012vehicle, mohsenian2010autonomous, yang2013game, gabriel2006optimal, haring2013decentralized, fahrioglu2000designing, gatsis2012residential, chavali2014distributed, shi2014optimal, lidemand, xudemand, huang2012optimal}. These works can be broadly classified as price-based or contract-based.
{\it Price-based Demand Response}: This is a type of demand response where the consumers alter their energy consumption based on time-varying prices determined a priori by the LSE. The objective here is to improve overall system benefits by influencing the consumers to shift their demand. The works in \cite{li2011optimal, roscoe2010supporting, samadi2014real} propose different approaches to determine the time-varying prices such that the overall system benefits, measured in terms of efficiency and load variability, are improved. The authors in \cite{mohsenian2010autonomous, yang2013game} study a game-theoretic formulation and propose a pricing strategy that improves system benefits in Nash equilibrium. Closely related works such as \cite{wu2012vehicle} propose a time-varying price policy to utilize the flexible storage of EVs in order to manage load variability. Other works such as \cite{chen2012real} propose a demand response management strategy using a stochastic optimization procedure that accounts for financial risks associated with time-varying prices.
{\it Distributed Price-based Demand Response}: Authors in \cite{gatsis2012residential}, \cite{chavali2014distributed} and \cite{shi2014optimal} propose iterative distributed load control schemes with the objective of meeting system requirements and minimizing consumer discomfort. They primarily address the coordination of multiple demand response users by iteratively discovering the most appropriate electricity price and its variation with time.
\blue{The setting we consider is different from the above works, which are primarily concerned with price-responsive demand response. We propose a market mechanism for directly calling a certain level of demand response instead of using price to influence demand.
Here, an LSE can buy DR contracts from aggregators of DR in the day-ahead market and can determine, at an intermediate stage when more information about real-time renewable generation is available, the amount of DR to call. }
{\it Multi-stage Stochastic Decision}: Varaiya \emph{et al.} \cite{Va-RLD11} propose a {\em risk-limiting dispatch}
approach for integrating renewable energy in the grid. They formulate a multi-stage stochastic control problem where at each stage the utility makes purchase decisions based on the available information. Rajagopal \emph{et al.} \cite{rajagopal2013risk} extend this approach and characterize optimal power procurement policies as threshold based decisions.
Our work parallels the approach of Varaiya et al. \cite{Va-RLD11}. In particular, we extend their approach to a contract setting as proposed in this paper, where the decision of two entities are coordinated in a multi-stage decision problem through an options contract mechanism.
{\it Contract-based Demand Response}: The works in \cite{muthirayan2019mechanism, haring2013decentralized, fahrioglu2000designing} address the problem of demand response aggregation from a mechanism design perspective. The objective of the mechanism design is to gather demand response contracts at minimal cost and with preferably maximal privacy, so that the aggregator or the LSE can meet the DR requirements of the system. Alternatively, demand response contracts that treat demand response as a differentiated good, based on its power level and duration, have also been proposed \cite{nayyar2016duration}, \cite{bitar2017deadline}. Our work is different from this set of works in the sense that we provide a multi-settlement market framework for trading the aggregated demand response in the electricity markets.
\blue{Authors in \cite{oren2001integrating} provide a forward electricity contract with a call option to hedge against price risks while exercising load flexibility. Authors in \cite{kamat2002exotic} provide a similar forward contract bundled with an option to hedge against price risk for interruptible services. In \cite{oren2005generation} the authors discuss long-term options contracts as a mechanism for ensuring generation adequacy in an energy-only market, where these contracts can be covered by curtailable load contracts. In contrast to these works, we propose an intermediate options market for trading aggregated DR that allows the LSE to call for load curtailment based on an improved forecast of the wind power that is available at this intermediate time. The options market is proposed as an alternative to the intermediate spot market, which can be cumbersome to organize and volatile in terms of the revenue it generates for the service providers. }
The paper is organized as follows. We introduce the basic notions and notation in Section \ref{sec:pelims}. In Section \ref{sec:opt-sch}, we consider the problem of energy scheduling with demand response from the perspective of a system planner. In Section \ref{sec:CE-contingent} we discuss the implementation of intermediate spot market for scheduling demand response. We present the options market mechanism in Section \ref{sec:CE-option}. Finally, we conclude the paper with a brief description of future research directions in Section \ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:pelims}
The setting we consider is shown in Figure \ref{fig:timeline}. A load serving entity (LSE)
supplies $l$ units of electricity to a collection of consumers for delivery at time $T$. The demand $l$ is considered inelastic and known at time $t_0$. Indeed, day-ahead load forecast errors are within $1\%-2\%$ \cite{CAISO2018briefing}.
The LSE buys $q$ units of energy in the day-ahead market at price $\pi^{da}$. At the intermediate time $t_1$, it extracts $y$ units of demand response, which incurs a disutility of $\phi(y)$. The LSE has access to zero marginal cost random renewable
generation $w$ which is realized at the delivery time $T$.
To meet its demand obligations, the LSE buys the remaining energy required $q^{rt}$ in the real-time market at price $\pi^{rt}$.
The total energy purchase must satisfy:
\begin{equation}
\label{eq:power-balance}
l \leq q + q^{rt} + w + y.
\end{equation}
Note that we consider the situation where excess renewable generation is spilled,
and cannot be sold back into the real-time market. \blue{This is necessary to ensure that the LSE does not sell all of the renewable generation back in the real-time market, and it can be imposed as a regulatory requirement}. The demand response purchase made at the intermediate time $t_1$ is based on a forecast $f_1$ of $w$ and a forecast of $\pi^{rt}$.
\subsection{Model Uncertainties}
Let $f_{0}$ denote the information available at $t_0$. Let $p(w|S)$ be the conditional probability of the wind given the intermediate forecast state $S$ at time $t_{1}$. The forecast state $S$ can be regarded as a sufficient statistic which parameterizes the information on wind at time $t_{1}$. We parameterize $S \in [0, 1]$. We call this an \emph{information state}. Define
\[p_{s}(w) = p(w | S=s),~~P_{s}(z) = \int^{z}_{w=0} p_{s}(w) dw, \]
where $P_{s}(z)$ is the probability that the wind at time $T$ is less than $z$ given the information state $s$. Let $\alpha(s)$ be the prior probability density function of the information state, i.e.,
\[\alpha(s) = \mathbb{P}\left(S = s | f_{0}\right)\]
We assume that real-time price $\pi^{rt}$ is a random variable and denote the expected real-time price conditioned on the information state by,
\[\overline{\pi}^{rt}_{s} = \mathbb{E}[\pi^{rt}|S = s] \]
The day-ahead price $\pi^{da}$ is known at time $t_{0}$. We use $\mathbb{E}_{S}\left[\cdot\right]$ and $\mathbb{E}_{w}\left[\cdot\right]$ to denote the expectation with respect to the information state and the randomness in wind, respectively. Let $\mathbb{E}\left[\cdot\right]$ denote the joint expectation.
We make the following assumptions.
\begin{assumption}
\label{as:w-pirt}
(i) $\mathbb{P}(w \geq z | S = s) < \mathbb{P}(w \geq z | S=s'),$ $ \forall z,~ \text{if}~ s' > s$,
(ii) $\overline{\pi}^{rt}_{s'} < \overline{\pi}^{rt}_{s}$, if $s' > s$,
(iii) $\pi^{rt}$ and $w$ are conditionally independent given the information state $s$.
\end{assumption}
Assumption (i) imposes a stochastic ordering on wind conditioned on the information state $s \in [0, 1]$. The intuitive interpretation is that larger values of $s$ indicate (stochastically) more wind. This assumption guarantees that $P_{s'}(z) < P_{s}(z), \forall z$, if $s' > s$ so that the cumulative distribution $P_{s}(\cdot)$ and $P_{s'}(\cdot)$ do not intersect. Assumption (ii) similarly imposes an ordering on the expected value of the real-time price conditioned on the information state. The ordering is such that higher values of $s$ correspond to a lower expected real-time price (because more wind power reduces demand in the real-time market). \blue{Assumption (iii) imposes that the information state $s$ contains all the causal factors that determine the real-time price and wind power $w$. This is a reasonable assumption because the information state $s$ represents the underlying state of nature.}
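As a concrete illustration, the model used later in Section \ref{simulation_sec} satisfies these assumptions: if $w$ given $S=s$ is uniform on $[0, s+2]$, then
\[
P_s(z) = \min\{z/(s+2),\, 1\},
\]
which is strictly decreasing in $s$ for every $z\in(0,2]$ (the range on which strictness in (i) matters), so larger $s$ indeed gives stochastically more wind; pairing this with an affine conditional price such as $\overline{\pi}^{rt}_{s} = 31.71 - 3.71\, s$ verifies (ii).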
\subsection{Decision Making of Players}
The players of the problem include an LSE, an aggregator, and a social planner. We model them as follows.
\noindent {\bf Load Serving Entity (LSE):} The LSE is responsible for satisfying the energy balance specified by equation \eqref{eq:power-balance}. At time $t_0$, it buys $q$ units of energy at a price $\pi^{da}$ from the day-ahead market. At time $t_1$, it receives a load reduction of $y_{s}$ units from the aggregator when the information state $s$ is revealed, and makes a payment $R_{s}(y_{s})$. At time $T$, the renewable $w$ is revealed, and the LSE purchases the remaining energy from the real-time market, i.e., $q^{rt} = (l-q-y_{s}-w)_{+}$. The ex-post cost for the LSE given the information state $s$ is,
\blue{\begin{equation}
\label{eq:J-LSE-expost}
{J}^{LSE}_{s} = \pi^{da} q + R_{s}(y_{s}) + \pi^{rt} (l-q-y_{s}-w)_{+}
\end{equation}}
\noindent {\bf Aggregator:} The aggregator suffers a disutility $\phi(y_s)$ for a load reduction of $y_s$ units, and receives a compensation payment $R_s(y_s)$ from the LSE. The ex-post cost for the aggregator, given the information state $s$, is as follows,
\begin{equation}
\label{eq:J-agg-expost}
{J}^{agg}_s = \phi(y_s) - R_{s}(y_s)
\end{equation}
We assume that the disutility function satisfies the assumption given below.
\blue{\begin{assumption}
\label{disutilityfunction}
$\phi(y_s)$ is twice differentiable and strictly convex in $y_s$, i.e., $\phi''(y_s)> 0$.
\end{assumption}}
\noindent {\bf Social Planner or Entity (e):} We consider a hypothetical agent, the \emph{social planner}, which combines the roles of the LSE and the aggregator. We denote decision variables and cost functions of the social planner with the superscript e for entity. This social planner buys $q$ units of energy from the day-ahead market, receives a load curtailment of $y_s$ units at an intermediate time $t_{1}$, acquires zero marginal-cost realized wind power $w$ at time $T$, and purchases the remaining energy $(l-q-y_s-w)_{+}$ from the real-time market for load balance. Given $s$, the ex-post cost for the social planner (also called the system cost) is:
\blue{\begin{equation}
J^{e}_s = \pi^{da} q + \phi(y_s) + \pi^{rt} (l-q-y_s-w)_{+}
\end{equation}}
Payment for demand response is an internal exchange between the LSE and the aggregator, and does not appear in the social planner's ledger. In the sequel, we first discuss the optimal scheduling problem for the social planner, and then we study the interaction between the LSE and the aggregator in the intermediate spot market and the options market. We characterize the competitive equilibrium in both markets, and compare the system costs.
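Indeed, summing (\ref{eq:J-LSE-expost}) and (\ref{eq:J-agg-expost}), the transfer $R_s(y_s)$ cancels and
\[
J^{LSE}_s + J^{agg}_s = \pi^{da} q + \phi(y_s) + \pi^{rt} (l-q-y_s-w)_{+} = J^{e}_s,
\]
so the system cost splits between the two agents in a way that is independent of the payment rule.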
\section{Optimal Scheduling for the Social Planner}
\label{sec:opt-sch}
This section studies the optimal scheduling of energy from the perspective of the social planner. We separately consider the scheduling problems with and without demand response. We use these solutions as benchmarks to compare the various market mechanisms we propose in subsequent sections.
\subsection{Optimal Scheduling without Demand Response}
\label{sec:opt-sch-ndr}
In the absence of demand response, the social planner is confined to purchase energy from the day-ahead and real-time markets. Let $J^{e}_{ndr}(q)$ be the expected cost for the social planner in the absence of demand response. This is a function of the day-ahead purchase $q$ and is
\blue{\begin{equation}
\label{eq:social planner-J-ndr-1}
J^{e}_{ndr}(q) = \pi^{da} q + \mathbb{E}[{\pi}^{rt} (l-q-w)_{+}]
\end{equation}}
This implicitly accounts for the balance inequality (\ref{eq:power-balance}) necessary to service the load $l$. The optimal decision of the social planner is
\begin{equation}
\label{eq:optimaldecision4SP}
q^{e}_{ndr} = \arg \min_{q\geq 0} J^{e}_{ndr}(q)
\end{equation}
We have the following:
\begin{proposition}
\label{thm:no-dr-social planner}
$J^{e}_{ndr}(\cdot)$ is convex. The minimizer $q^e_{ndr}$ solves
\begin{equation}
\label{optimalqwithoutDR}
\pi^{da} - \mathbb{E}_{s}[\overline{\pi}^{rt}_{s} P_{s}(l-q^{e}_{ndr})] = 0
\end{equation}
\end{proposition}
\blue{\begin{assumption}
To avoid trivial results, we assume that the day-ahead market price is discounted from the expected real-time market price, i.e., $\pi^{da} < \mathbb{E}_{s}[\overline{\pi}^{rt}_{s} P_{s}(l) ]$. This will ensure that $q^{e}_{ndr} > 0$.
\end{assumption} }
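With this in place, condition (\ref{optimalqwithoutDR}) pins down $q^{e}_{ndr}$ numerically. The following minimal sketch (not from the paper) bisects on $q$; the primitives are the illustrative ones later adopted in Section \ref{simulation_sec}, all numbers are illustrative, and the sketch is not guaranteed to reproduce the exact figures reported there.
\begin{verbatim}
import numpy as np

l, pi_da, sigma = 3.0, 26.76, 0.2
pibar_rt = lambda s: 31.71 - 3.71 * s                 # E[pi_rt | s]
P = lambda s, z: np.clip(z / (s + 2.0), 0.0, 1.0)     # wind uniform on [0, s+2]
s_grid = np.linspace(0.0, 1.0, 201)
dens = np.exp(-0.5 * ((s_grid - 0.5) / sigma) ** 2)   # truncated normal alpha(s)
dens /= np.trapz(dens, s_grid)

def foc(q):  # pi_da - E_s[ pibar_rt(s) * P_s(l - q) ], increasing in q
    return pi_da - np.trapz(pibar_rt(s_grid) * P(s_grid, l - q) * dens, s_grid)

lo, hi = 0.0, l
for _ in range(60):                                   # bisect for the root
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if foc(mid) < 0 else (lo, mid)
print("q_ndr^e ~", round(0.5 * (lo + hi), 3))
\end{verbatim}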
\subsection{Optimal Scheduling with Demand Response}
\label{sec:opt-sch-dr}
With demand response, the net expected cost for the social planner as a function of the first-stage purchase is given by,
\begin{equation}
\label{eq:social planner-J-dr}
J^{e}(q) = \pi^{da} q + \mathbb{E}_{s}\left[\min_{y_s\geq 0} J^{e}_s(y_s; q) \right]
\end{equation}
where $J^{e}_s(y_s; q) $ is the expected second-stage cost conditioned on $s$ and $q$ and is given by,
\blue{\begin{equation}
\label{eq:social planner-J2}
J^{e}_{s}(y_s;q) = \phi(y_s) + \mathbb{E}_{w} \left[ \overline{\pi}^{rt}_{s} (l-q-y_s-w)_{+} \vert s\right]
\end{equation}}
The optimal first-stage and second-stage decisions, $q^{e}$ and $y^{e}_s$ respectively, are
\begin{align}
\label{eq:social planner-q-y-1}
\begin{cases}
q^{e} = \arg \min_{q\geq 0} J^{e}(q), \\ y^{e}_s = \arg \min_{ 0\leq y_s\leq l} J^{e}_s(y_s;q^e)
\end{cases}
\end{align}
The optimal system cost is then $J^{*e} = J^{e}(q^{e})$. Using the fact that both (\ref{eq:social planner-J-dr}) and (\ref{eq:social planner-J2}) are convex, we can solve for $q^e$ and $y_s^e$ using the conditions given in the following proposition.
\begin{proposition}
\label{thm:dr-social planner}
${J}^{e}(\cdot)$ and ${J}^{e}_s(\cdot)$ are convex. For any given first stage decision $q$, the second-stage decision $y^{e}_s$ is given by,
\begin{align}
\label{eq:social planner-ys-1}
\begin{cases}
\phi'(y_s^e) = \overline{\pi}^{rt}_{s} P_{s}(l-q-y_s^e), & \text{if } \phi'(0) < \overline{\pi}^{rt}_{s} P_{s}(l-q) \\
y_s^e=0, & \text{if } \phi'(0) \geq \overline{\pi}^{rt}_{s} P_{s}(l-q)
\end{cases}
\end{align}
The first-stage decision $q^{e}$ is given by the solution of,
\begin{equation}
\label{eq:social planner-qe-1}
\pi^{da} - \mathbb{E}_s[ \overline{\pi}^{rt}_{s} P_{s}(l-q^{e}-y^{e}_s)] = 0
\end{equation}
\end{proposition}
The proof is offered in Appendix B.
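To illustrate how the second-stage rule above is evaluated in practice, here is a hypothetical sketch, continuing the running example (it reuses \texttt{pibar\_rt}, \texttt{P} and \texttt{l} from the previous sketch, and takes the quadratic disutility $\phi(y)=15y+15y^2$ of Section \ref{simulation_sec}); it bisects the interior condition, whose left side is increasing and right side decreasing in $y_s$.
\begin{verbatim}
def y_stage2(s, q, a=15.0, b=15.0):
    # phi(y) = a*y + b*y^2, so phi'(y) = a + 2*b*y
    if a >= pibar_rt(s) * P(s, l - q):     # phi'(0) too high: no curtailment
        return 0.0
    lo, hi = 0.0, l - q
    for _ in range(60):
        y = 0.5 * (lo + hi)
        if a + 2.0 * b * y < pibar_rt(s) * P(s, l - q - y):
            lo = y                          # marginal disutility still below benefit
        else:
            hi = y
    return 0.5 * (lo + hi)
\end{verbatim}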
\subsection{Socially Optimal Costs}
The optimal costs for the social planner with and without demand response are
\blue{\[ J^{*e}=J^e(q^e)
\text{ and } J_{ndr}^{*e}=J^e_{ndr}(q^e_{ndr}),\]}
respectively. Clearly, $J^{*e}\leq J_{ndr}^{*e}$. These social cost values serve as benchmarks for our market mechanism designs. In Section \ref{sec:CE-contingent}, we show that a spot market with contingent prices realizes the socially optimal cost $J^{*e}$. In Section \ref{sec:CE-option}, we show that trading demand response in an options market will, in general, result in a loss of social welfare. We further select options prices so that this welfare gap is modest. As a result, the over-the-counter options market can well approximate the idealized spot market.
\section{Spot Markets with Contingent Prices}
\label{sec:CE-contingent}
In Section \ref{sec:opt-sch}, we considered the optimal scheduling of energy from the perspective of a hypothetical social planner. We now show that the optimal scheduling decisions of the social planner can be realized through a spot market with contingent prices. In this market, the LSE is a buyer, and the aggregator is the seller.
At time $t_0$, the LSE buys $q$ units of energy from the day-ahead market at a price $\pi^{da}$. At time $t_1$, the information state $s$ is revealed. Depending on this revelation, the LSE purchases $y_s$ units of energy curtailment from the aggregator, paying a price $\pi_s^{in}$.
This is a {\em contingent price} as it depends on the realized information state $s$.
\blue{At time $T$}, the LSE receives wind energy $w$ and purchases the required balancing energy $(l-q-y_s-w)_{+}$ from the real-time market at a price $\pi^{rt}$.
The expected cost for the LSE as a function of the first-stage purchase $q$ is given by,
\begin{equation}
\label{eq:J-LSE-market-1}
J^{LSE}(q) = \pi^{da} q + \mathbb{E}_{s}[\min_{y_{s}\geq 0} J^{LSE}_{s}(y_{s};q)]
\end{equation}
where $J^{LSE}_{s}(y_{s};q)$ is the second stage cost and is given by,
\begin{equation}
J^{LSE}_{s}(y_{s};q) = \pi_{s}^{in} y_{s} + \mathbb{E}_{w}[\overline{\pi}^{rt}_{s} (l-q-y_{s}-w)_{+}]
\end{equation}
The optimal first and second-stage purchase decisions of the LSE are
$q^{LSE}$ and $y^{LSE}_{s}$ respectively. These are given by
\begin{align*}
\label{eq:J-LSE-q-ys}
\begin{cases}
q^{LSE} = \arg \min_{q\geq 0} J^{LSE}(q) \\
y^{LSE}_{s} = \arg \min_{0\leq y_{s} \leq l} J^{LSE}_{s}(y_{s};q^{LSE})
\end{cases}
\end{align*}
The expected cost for the aggregator under the information state $s$ is
\begin{equation}
\label{eq:J-agg-market}
J^{agg}_{s}(y_{s}) = \phi(y_s) - \pi_{s}^{in}y_{s}.
\end{equation}
The optimal selling decision of the aggregator is
\begin{equation*}
\label{eq:J-agg-market_loptselling}
y^{agg}_{s} = \arg \min_{0\leq y_{s} \leq l} J^{agg}_{s}(y_{s})
\end{equation*}
Note that the optimal buying/selling decisions of the agents (LSE/aggregator) depend on the contingent prices $\pi_s^{in}$. The market is said to be in {\em equilibrium} if the prices are such that the optimal buying and selling decisions of the agents are consistent under {\em all} realizations of $s$. We make this notion more precise below.
\begin{definition}[Competitive Equilibrium with Contingent Prices]
The contingent prices $\{\pi_{s}^{*in}\}$, optimal buying decisions of the LSE $q^{*LSE}, \{y^{*LSE}_{s}\}$, optimal selling decisions of the aggregator $\{y^{*agg}_{s}\}$ constitute a competitive equilibrium, if the following holds for \textbf{all} $s\in S$:
\begin{subnumcases}{\label{eq:defineCEforspot}}
J^{LSE}(q^{*LSE}) = \min_{q\geq 0} J^{LSE}(q) \label{eq:defineCEforspota}\\
J^{LSE}_{s}(y^{*LSE}_{s}) = \min_{0\leq y_{s}\leq l} J^{LSE}_{s}(y_{s};q^{*LSE}) \label{eq:defineCEforspotb} \\
J^{agg}_{s}(y^{*agg}_{s}) = \min_{0\leq y_{s} \leq l} J^{agg}_{s}(y_{s}) \label{eq:defineCEforspotc} \\
y^{*LSE}_{s}=y^{*agg}_{s} \label{eq:defineCEforspotd}
\end{subnumcases}
\end{definition}
Here (\ref{eq:defineCEforspota}) and (\ref{eq:defineCEforspotb}) require $(q^{*LSE},y^{*LSE}_{s})$ to be the optimal decision of the buyer, (\ref{eq:defineCEforspotc}) requires $y^{*agg}_{s}$ to be the optimal decision of the seller, and (\ref{eq:defineCEforspotd}) ensures that the traded demand response quantities are in balance. We require this balance at all realizations of $s$.
Let $J^{*LSE}$ be the expected cost for the LSE, and let $J^{*agg}$ be the expected cost for the aggregator at any competitive equilibrium. The system cost of the market at any competitive equilibrium is
\begin{equation}
\label{eq:k1}
J^{*cp} = J^{*LSE} + J^{*agg}.
\end{equation}
Define the minimum system cost for the social planner as $J^{*e}=J^e(q^e)$. This is a lower bound on the system cost for any market. Therefore, we can use $J^{*e}$ as a benchmark to evaluate the {\em efficiency} of the spot market. The market is called efficient (or socially optimal) if the system cost for the market attains the lower bound $J^{*e}$ at the competitive equilibrium. \blue{We make this precise in the following definition}.
\begin{definition}[Socially Optimal Equilibrium with Contingent Prices]
An equilibrium with contingent prices is said to be socially optimal, if $J^{*cp} = J^{*e}$.
\end{definition}
We now offer the main result of this section.
\blue{\begin{theorem}
\label{thm:ce-market}
(a) There exists at least one competitive equilibrium under contingent pricing.
(b) All competitive equilibria are socially optimal. Equivalently, define $y_{s}^*= y_{s}^{*LSE} = y_{s}^{*agg}$ at any competitive equilibrium, then
\begin{subnumcases}{\label{optimalconditionofspot}}
J^{e}(q^{*LSE}) = \min_{q\geq 0} J^{e}(q), \label{optimalconditionofspot1}\\
J_{s}^e(y_{s}^*) \quad = \min_{0\leq y_{s} \leq l}J_{s}^e(y_{s};q^{*LSE}). \label{optimalconditionofspot2}
\end{subnumcases}
\end{theorem}}
The proof is deferred to Appendix C. Condition (\ref{optimalconditionofspot}) requires that the competitive equilibrium is the optimal solution to the social planner's problem. Therefore, it can be computed by solving (\ref{eq:social planner-ys-1}) and (\ref{eq:social planner-qe-1}). This result implies that the optimal scheduling of the social planner can be realized through an intermediate spot market with contingent prices.
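For intuition, at an interior equilibrium ($0 < y^*_s < l$) the aggregator's first-order condition for (\ref{eq:J-agg-market}) and the LSE's second-stage condition together pin down the contingent price:
\[
\pi_s^{*in} = \phi'(y_s^*) = \overline{\pi}^{rt}_{s} P_{s}(l-q^{*LSE}-y_s^*),
\]
i.e., the price of curtailment equals its marginal disutility, which in turn equals the expected marginal saving on real-time purchases; this is exactly condition (\ref{eq:social planner-ys-1}) for the social planner.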
\section{Options Markets and Competitive Equilibrium}
\label{sec:CE-option}
In the previous section, we showed that the intermediate spot market is efficient. However, implementing intermediate spot markets requires organized infrastructure and regulatory approval, which can be prohibitive. We now present an intermediate market for demand response using call options. These are private over-the-counter transactions which do not need regulatory blessing or organized infrastructure.
\subsection{Options Market}
\label{options_market_original}
At time $t_0$, the LSE purchases energy $q$ in the day-ahead market. Concurrently, he buys $x$ units of options from the aggregator at the {\em option price} $\pi^o$. By purchasing these options, the LSE acquires the right, without the obligation, to receive $y$ units of load reduction from the aggregator where $0\leq y\leq x$. At time $t_1$, the LSE can {\em exercise} these options by paying a {\em strike price} $\pi^{sp}$ per contract. Clearly, the number of exercised options $y_{s}$ depends on the information state $s$ revealed at time $t_{1}$. The strike price $\pi^{sp}$ is {\em ex ante}, and does not depend on the information state. At time $T$, the aggregator delivers the contractually obligated load reduction $y_s$. The LSE observes the wind energy $w$ and purchases the remaining balancing energy $(l-q-y_{s}-w)_{+}$ in the real time market.
Since we are considering a competitive market, we assume the agents are rational and price takers. They make their buying/selling decisions based on the market prices $\pi^{o}$ and $\pi^{sp}$. The expected cost for the LSE is a function of the first stage decisions $q$ and $x$:
\blue{\begin{equation}
\label{eq:J-LSE-option-1}
\tilde{J}^{LSE}(q, x) = \pi^o x + \pi^{da} q + \mathbb{E}_{s}[\min_{0\leq y_{s} \leq x} \tilde{J}^{LSE}_{s}(y_{s}) ],
\end{equation}}
Here $\tilde{J}^{LSE}_{s}(\cdot)$ is the second stage cost for the LSE given by
\begin{equation}
\label{eq:J-LSE-option-2}
\tilde{J}^{LSE}_{s}(y_{s}) = \pi^{sp} y_{s} + \mathbb{E}_{w}[\overline{\pi}^{rt}_{s} (l-q-y_{s}-w)_{+}].
\end{equation}
Denote the optimal first and second-stage decisions of the LSE by $\tilde{q}^{LSE}, \tilde{x}^{LSE}$ and $\tilde{y}^{LSE}_{s}$. These decisions solve
\begin{align}
\begin{cases}
\label{eq:J-LSE-option-q-ys}
(\tilde{q}^{LSE}, \tilde{x}^{LSE}) = \arg \min_{(q,x)} \tilde{J}^{LSE}(q, x), \\ \tilde{y}^{LSE}_{s} = \arg \min_{0\leq y_s\leq x} \tilde{J}^{LSE}_{s}(y_s)
\end{cases}
\end{align}
In the options market, the expected cost for the aggregator is
\begin{equation}
\label{eq:J-agg-option}
\tilde{J}^{agg}(x) = \mathbb{E}_{s}[\phi(y_{s}) - \pi^{sp} y_{s}] -\pi^o x,
\end{equation}
The decision variable of the aggregator is the quantity of options $x$ offered for sale. The optimal selling decision is:
\begin{equation}
\label{eq:J-agg-option_solution}
\tilde{x}^{agg} = \arg \min_{x\geq 0} \tilde{J}^{agg}(x)
\end{equation}
We now define an equilibrium notion for our options market.
\begin{definition}[Competitive Equilibrium for Options Market]
The options price $\pi^{*o}$, the strike price $\pi^{*sp}$, the optimal day-ahead purchase $\tilde{q}^*$, the optimal buying decision of the LSE $\tilde{x}^{*LSE}$ and the optimal selling decision of the aggregator $\tilde{x}^{*agg}$ constitute a competitive equilibrium if
\begin{align*}
\begin{cases}
\tilde{J}^{LSE}(\tilde{q}^*,\tilde{x}^{*LSE}) = \min_{(q,x)} \tilde{J}^{LSE}(q,x) \\
\tilde{J}_s^{LSE}(\tilde{y}_s^{*LSE}) = \min_{0\leq y_s \leq \tilde{x}^{*LSE}} \tilde{J}_s^{LSE}(y_s) \\
\tilde{J}^{agg}(\tilde{x}^{*agg}) = \min_{0\leq x \leq l} \tilde{J}^{agg}(x) \\
\tilde{x}^{*LSE}=\tilde{x}^{*agg}
\end{cases}
\end{align*}
\end{definition}
At the competitive equilibrium, the volume of options that the LSE is willing to buy balances the volume of options that the aggregator is willing to sell. Therefore, we have $\tilde{x}^{*LSE} = \tilde{x}^{*agg}$.
We now offer the main results of this section.
\begin{theorem}
\label{thm:ce-option_oldversion}
There exists a competitive equilibrium for the options market. At any competitive equilibrium, define $\tilde{q}^{*} = \tilde{q}^{LSE}$, $\tilde{x}^{*} = \tilde{x}^{*LSE} = \tilde{x}^{*agg}$ and $\tilde{y}^{*}_{s} = \tilde{y}^{*LSE}_{s}$. Then the competitive equilibrium satisfies,
\begin{align*}
\pi^{da} &-\mathbb{E}_{s}[\overline{\pi}^{rt}_{s} P_{s}(l-\tilde{q}^{*}-\tilde{y}^{*}_{s})] = 0 \\
\pi^{*o} &+ \pi^{*sp} \mathbb{E}_{s}[\mathbb{I}\{\tilde{y}^{*}_{s} = \tilde{x}^{*}\}] \nonumber \\
& - \mathbb{E}_{s}[\overline{\pi}^{rt}_{s} P_{s}(l-\tilde{q}^{*}-\tilde{y}^{*}_{s}) \mathbb{I}\{\tilde{y}^{*}_{s} = \tilde{x}^{*}\}] = 0 \\
\pi^{*o} &+ \pi^{*sp} \mathbb{E}_{s}[\mathbb{I}\{\tilde{y}^{*}_{s} = \tilde{x}^{*}\}] - \phi'(\tilde{x}^{*}) \mathbb{E}_{s}[\mathbb{I}\{\tilde{y}^{*}_{s} = \tilde{x}^{*}\}] = 0,
\end{align*}
where $\tilde{y}_s^*$ satisfies,
\begin{align*}
\tilde{y}^*_{s} &= \begin{cases}
0, & \text{if}~ P_{s}(l-\tilde{q}^*) < \pi^{*sp}/\bar{\pi}^{rt}_{s} \\
\tilde{x}^{*}, & \text{if}~ P_{s}(l-\tilde{q}^*-\tilde{x}^*) > \pi^{*sp}/\bar{\pi}^{rt}_{s} \\
l - \tilde{q}^*- P^{-1}_{s}(\pi^{*sp}/\bar{\pi}^{rt}_{s}), & \text{otherwise}
\end{cases}
\end{align*}
\end{theorem}
The proof of this theorem is given in Appendix D. We comment that the competitive equilibrium consists of four variables $(\pi^{*o},\pi^{*sp},\tilde{q}^*,\tilde{x}^*)$ determined by three equations. Therefore, there is one degree of freedom which induces multiple competitive equilibria. We will illustrate these equilibria prices through a numerical simulation in Section \ref{simulation_sec}.
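To see the exercise rule in action, here is a brief hypothetical sketch, again continuing the running example: it hard-codes the uniform-wind inverse $P_s^{-1}(r) = (s+2)r$ from Section \ref{simulation_sec} and reuses \texttt{pibar\_rt}, \texttt{P} and \texttt{l} from the earlier sketches, so it is an illustration rather than a general implementation.
\begin{verbatim}
def exercise(s, q, x, pi_sp):
    r = pi_sp / pibar_rt(s)
    if P(s, l - q) < r:        # out of the money: exercise nothing
        return 0.0
    if P(s, l - q - x) > r:    # deep in the money: exercise all x options
        return float(x)
    # interior case: y = l - q - P_s^{-1}(r), clipped to [0, x] for safety
    return min(max(l - q - (s + 2.0) * r, 0.0), float(x))
\end{verbatim}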
\subsection{Redesign of Options Market}
\label{options_market_redesign}
The options market proposed in the previous section is asymmetric with respect to the decisions of the buyer and the seller: the LSE decides both $q$ and $x$, while the aggregator decides only $x$. This asymmetry can provide a market advantage to the buyer. To address this concern, we propose a redesign of the options market in which the decisions of the buyer and the seller are symmetric. We show the existence of a competitive equilibrium, and study its various properties.
\noindent B.1 {\em Symmetric Decision Making}
Consider the following modification to the options market: before time $t_0$, the aggregator proposes a demand response offer to the LSE. The aggregator chooses $l'>0$ and dictates that $x+q=l'$. \blue{This endows the aggregator with the power to negotiate on $q$: the aggregator offers $x$ units of options only if the LSE buys $l'-x$ units of energy in the day-ahead market. For the moment, we treat $l'$ as given}.
Upon receiving the demand response offer, the LSE decides whether or not to accept it. There is no trade of load reduction if the offer is not accepted. When the offer is accepted, the expected cost for the LSE is
\begin{equation}
\label{eq:J-LSE-option-1_redesign}
\tilde{J}^{LSE}(x) = \pi^o x + \pi^{da} (l'-x) + \mathbb{E}_{s}[\min_{0\leq y_{s} \leq x} \tilde{J}^{LSE}_{s}(y_{s};x) ],
\end{equation}
where $\tilde{J}_s^{LSE}(\cdot)$ is the second stage cost and is given by,
\begin{equation}
\label{eq:J-LSE-option-2_redesign}
\tilde{J}^{LSE}_{s}(y_{s};x) = \pi^{sp} y_{s} + \mathbb{E}_{w}[\overline{\pi}^{rt}_{s} (l-l'+x-y_{s}-w)_{+}].
\end{equation}
The optimal first and second-stage decisions of the LSE are
\begin{align}
\label{eq:J-LSE-option-q-ys_redesign}
\begin{cases}
\tilde{x}^{LSE} = \arg \min_{x\geq 0} \tilde{J}^{LSE}(x), \\ \tilde{y}^{LSE}_{s}(x) = \arg \min_{y_s\leq x} \tilde{J}^{LSE}_{s}(y_s; x)
\end{cases}
\end{align}
Note that the second-stage decision $\tilde{y}_s^{LSE}$ depends on $x$. From now on, we do not express this dependence explicitly as it is implied by context. The expected cost for the aggregator and its optimal decisions remain as in (\ref{eq:J-agg-option}) and (\ref{eq:J-agg-option_solution}).
We assume that the LSE and the aggregator are price takers. The options market attains a competitive equilibrium if the supply of options balances the demand of options.
\begin{definition}[Competitive Equilibrium for Options Market]
Given any $l'$ such that $0\leq l' \leq l$, the options price $\pi^{*o}$, the strike price $\pi^{*sp}$, the optimal buying decision of the LSE $\tilde{x}^{*LSE}$ and the optimal selling decision of the aggregator $\tilde{x}^{*agg}$ constitute a competitive equilibrium if:
\begin{subnumcases}{\label{competitive_opt}}
\tilde{J}^{LSE}(\tilde{x}^{*LSE}) = \min_{0\leq x\leq l'} \tilde{J}^{LSE}(x), \label{competitive_opta}\\
\tilde{J}_s^{LSE}(\tilde{y}_s^{*LSE}) = \min_{0\leq y_s \leq \tilde{x}^{*LSE}} \tilde{J}_s^{LSE}(y_s;\tilde{x}^{*LSE}) \label{competitive_optb} \\
\tilde{J}^{agg}(\tilde{x}^{*agg}) = \min_{0\leq x \leq l'} \tilde{J}^{agg}(x) \label{competitive_optc} \\
\tilde{x}^{*LSE} = \tilde{x}^{*agg}. \label{competitive_optd}
\end{subnumcases}
\end{definition}
The choice of $l'$ is determined by the willingness of the LSE to accept the demand response offer. If the LSE accepts the offer, its optimal cost is $\tilde{J}^{LSE}(\tilde{x}^{LSE})$. Otherwise, its cost equals the optimal cost without DR, i.e., $J_{ndr}^{*e}$. Thus, the LSE will accept the contract proposed by the aggregator if
\begin{equation}
\label{participationcons}
J_{ndr}^{*e} \geq \tilde{J}^{LSE}(\tilde{x}^{LSE}).
\end{equation}
However, $\tilde{J}^{LSE}(\tilde{x}^{LSE})$ depends on the options price $\pi^o$, which is not revealed when the LSE makes the decision. Ideally, $l'$ should be such that (\ref{participationcons}) holds for any $\pi^o$. We present a candidate for $l'$ that satisfies this condition:
\begin{proposition}
\label{howtochoosecontract}
If $l'=q_{ndr}^e$, the LSE always accepts the demand response offer, i.e., $J_{ndr}^{*e}\geq \tilde{J}^{LSE}(\tilde{x}^{LSE})$ for all $\pi^o\geq 0$.
\end{proposition}
The idea is as follows: $q=q_{ndr}^e$ is the optimal decision of the LSE if it declines the demand response offer. Therefore, when $l'=q_{ndr}^e$, the LSE loses nothing by accepting the demand response offer, because there exists an LSE decision, namely $x = 0$ and $q = q_{ndr}^e$, that satisfies the condition $x+q=q_{ndr}^e$ and attains the same cost.
\vspace{0.3cm}
\noindent B.2 {\em Properties of Competitive Equilibrium}
\vspace{0.1cm}
We now focus on the existence, efficiency and optimality of the competitive equilibrium for the options market.
\begin{theorem}
\label{thm:ce-option}
Given any $l'\in [0,l]$, there exists a competitive equilibrium $(\pi^{*o}, \pi^{*sp}, \tilde{x}^{*LSE}, \tilde{x}^{*agg})$ for the options market, and $\tilde{x}^{*LSE} = \tilde{x}^{*agg}$ is the optimal solution to:
\begin{equation}
\label{eq:ce-option-ys}
\min_{0\leq x \leq l'} \pi^{da} (l'-x)+ \mathbb{E}_{s} [\phi(\tilde{y}_s^{LSE})+\tilde{J}_s^{LSE}(\tilde{y}_s^{LSE})],
\end{equation}
where $\tilde{y}_s^{LSE}$ is the second stage optimal decision for the LSE and,
\begin{align}
\label{optimaLSEcondstagedec}
\tilde{y}^{LSE}_{s} &= \begin{cases}
0, & \text{if}~ P_{s}(l-l'+x) < \pi^{*sp}/\bar{\pi}^{rt}_{s} \\
x, & \text{if}~ P_{s}(l-l') > \pi^{*sp}/\bar{\pi}^{rt}_{s} \\
l - l'+x- P^{-1}_{s}(\pi^{*sp}/\bar{\pi}^{rt}_{s}), & \text{otherwise}
\end{cases}
\end{align}
\end{theorem}
The proof is given in Appendix E. The optimization problem (\ref{eq:ce-option-ys}) is convex, and the optimal value of (\ref{eq:ce-option-ys}) is the social cost at the competitive equilibrium of the options market.
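Since the problem is one-dimensional and convex, a direct grid search already gives a usable solver. The sketch below is illustrative only: it evaluates the displayed objective literally (including the $\pi^{sp}\tilde{y}_s$ term), reuses the primitives and the \texttt{exercise} rule from the earlier sketches with $q = l'-x$, and fixes an arbitrary strike price in the interior regime; none of the numerical values are from the paper.
\begin{verbatim}
def objective(x, lp, pi_da=26.76, pi_sp=25.0, a=15.0, b=15.0):
    vals = []
    for s in s_grid:
        y = exercise(s, lp - x, x, pi_sp)        # LSE's optimal exercise
        w_grid = np.linspace(0.0, s + 2.0, 400)  # wind uniform on [0, s+2]
        short = np.maximum(l - (lp - x) - y - w_grid, 0.0)
        rt = pibar_rt(s) * np.trapz(short, w_grid) / (s + 2.0)
        vals.append(a * y + b * y**2 + pi_sp * y + rt)
    return pi_da * (lp - x) + np.trapz(np.array(vals) * dens, s_grid)

lp = 1.23                  # l' = q_ndr^e, the choice suggested above
xs = np.linspace(0.0, lp, 61)
print("x* ~", xs[int(np.argmin([objective(x, lp) for x in xs]))])
\end{verbatim}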
Similar to the options market in Section \ref{options_market_original}, there are multiple competitive equilibria, because for any equilibrium price pair $\pi^{*o}$ and $\pi^{*sp}$, a higher options price with a lower strike price can be equally acceptable to both the LSE and the aggregator.
To compare the efficiency of different markets, let $\tilde{J}^{*LSE}(\pi^{sp})$ and $\tilde{J}^{*agg}(\pi^{sp})$ be the expected cost at competitive equilibrium for the LSE and the aggregator, respectively. Define the system cost at competitive equilibrium by,
\begin{equation}
\label{eq:k2}
\tilde{J}^{*cp}(\pi^{sp}) = \tilde{J}^{*LSE}(\pi^{sp})+ \tilde{J}^{*agg}(\pi^{sp}).
\end{equation}
In addition, let $J_{ndr}^{*e}$ be the optimal value of problem (\ref{eq:social planner-J-ndr-1}). Then the following proposition provides a comparison of the optimal costs of the different markets that we have discussed so far.
\begin{proposition}
\label{efficiencyofoptions}
Given any $l'$ and $\pi^{sp}$, the social cost of the options market at the competitive equilibrium is lower bounded by $J^{*cp}$ and upper bounded by $J_{ndr}^{*e}$, i.e., $J^{*cp}\leq \tilde{J}^{*cp}(\pi^{sp}) \leq J_{ndr}^{*e}$.
\end{proposition}
The proof of Proposition \ref{efficiencyofoptions} is given in the Appendix. It indicates that the efficiency of the options market outperforms that of the market without demand response, but is no better than that of the spot market with contingent prices.
The following theorem presents the optimal strike price that minimizes the social cost at the competitive equilibrium:
\begin{theorem}
\label{optimalstrikeprice}
There exists an optimal strike price $\tilde{\pi}^{*sp}$, such that $\tilde{J}^{*cp}(\tilde{\pi}^{*sp})\leq \tilde{J}^{*cp}(\pi^{sp})$ for all $\pi^{sp}$, and
$\tilde{\pi}^{*sp}$ satisfies:
\begin{equation}
\tilde{\pi}^{*sp}=\dfrac{\int_{s_1}^{s_2} \phi'(y_s) \beta(s)ds }{\int_{s_1}^{s_2}\beta(s)ds }
\end{equation}
\end{equation}
where $\beta(s)=\dfrac{\alpha(s)}{\bar{\pi}_s^{rt}p_s(l-q-y_s)}$.
\end{theorem}
The proof of Theorem \ref{optimalstrikeprice} is in the Appendix. It shows that the optimal strike price is a weighted average of the marginal disutility $\phi'(y_s)$, taken with respect to the reweighted density $\beta(s)$.
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure1}
\caption{The probability distribution function of information state $s$. }
\label{PDF_s}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure2}
\caption{The cumulative distribution function of information state $s$. }
\label{CDF_s}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure3}
\caption{Load reduction called in the spot market under different information states. }
\label{fig:loadreductionsopt}
\end{minipage}
\end{figure*}
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure4}
\caption{Contingent price of the spot market under different values of the information state.}
\label{fig:contingentprice}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure7}
\caption{Options price and strike price pairs at the competitive equilibria of the options market.}
\label{fig:contingentprice_options}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure6}
\caption{Traded options at the competitive equilibrium under different equilibrium strike prices in the options market. }
\label{fig:x_options}
\end{minipage}
\end{figure*}
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure5}
\caption{Energy procurement in the day-ahead market under different strike prices in the options market.}
\label{fig:q_options}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure8}
\caption{Load reduction under different information states when the strike price minimizes the total system cost.}
\label{fig:reduction_options}
\end{minipage}
\begin{minipage}[b]{0.01\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\input{figure9}
\caption{System cost of the optimal scheduling without demand response, spot market, and options market.}
\label{fig:systemcostcompare}
\end{minipage}
\end{figure*}
\blue{
\section{Case Studies}
\label{simulation_sec}
This section illustrates the proposed options market and validates the results through numerical simulation. We consider a particular time interval where the LSE needs to deliver electricity to a total load of $l=3 \text{MW}$. We emphasize that this is without any loss of generality because larger or smaller load can be captured by scaling the size of the options and load reduction.
Define the information state $s$ as a real number between $0$ and $1$, i.e., $s \in [0,1]$. Here we use $s$ to represent the average wind energy level on the next day. Since the realization of wind is a random variable, $s$ is monotonically associated with the expectation of $w$. Given that this average can be estimated with reasonable accuracy, it is natural to assume that $s$ follows a single-peaked probability distribution function (PDF) whose peak represents the most probable wind level. A natural candidate for the PDF is the truncated normal distribution. In this study, we set $\alpha(s)$ as:
\begin{equation}
\label{truncatedNormal}
\alpha(s)=\dfrac{1}{\sqrt{2\pi}\sigma \Phi(\sigma)} \text{exp}\left( -\dfrac{1}{2} \left( \dfrac{s-0.5}{\sigma} \right)^2 \right)
\end{equation}
where $\Phi(\sigma)=\dfrac{1}{2}\left(\text{erf}\left(\dfrac{0.5}{\sqrt{2}\sigma}\right)-\text{erf}\left(-\dfrac{0.5}{\sqrt{2}\sigma}\right)\right)$ and $\text{erf}(\cdot)$ is the error function. We set $\sigma=0.2$, then the PDF and the cumulative distribution function of $s$ are shown in Figure \ref{PDF_s} and Figure \ref{CDF_s}, respectively.
Although the distribution of the average wind level is specified in (\ref{truncatedNormal}), the realization of wind energy in real time is difficult to predict even when its average value is given. We therefore model the conditional distribution of $w$ given the information state $s$ as uniform, with different wind levels evenly distributed. We set $p(w|s)$ as
\begin{equation}
\label{pdfofconditionalwind}
p(w|s)=\begin{cases}
1/(s+2), \quad & \text{if } 0\leq w \leq s+2 \\
0, \quad & \text{if } w>s+2.
\end{cases}
\end{equation}
From (\ref{pdfofconditionalwind}), it is clear that the cumulative distribution of wind satisfies $P_{s_1}(w)\geq P_{s_2}(w)$ for all $s_1<s_2$. Therefore, larger values of $s$ indicate (stochastically) more wind power.
We randomly select several day-ahead and real-time prices in Berkeley, California, in January 2020 \cite{berkeleyLMP}, and find that the average day-ahead and real-time LMPs are $\$26.76/\text{MWh}$ and $\$29.86/\text{MWh}$, respectively. Suppose the conditional expectation of the real-time price is a linear function of $s$ of the form $\bar{\pi}_s^{rt}=31.71- 3.71s$\footnote{We set the parameters of this linear model so that (i) the average real-time price equals the real data, i.e., $\$29.86/\text{MWh}$, and (ii) wind energy affects the real-time price by at most $10\%$.}. For simplicity, consider a quadratic disutility function $\phi(y)=15y+15y^2$. In this example, both Assumptions \ref{as:w-pirt} and \ref{disutilityfunction} are satisfied.
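Before walking through the cases, note that the whole model fits in a few lines of code. The sketch below (illustrative, reusing the primitives and grids from the earlier sketches) evaluates the no-DR objective (\ref{eq:social planner-J-ndr-1}) directly on a grid; it is a sanity check of the pipeline rather than a replication, and discretization or modelling details may make its output differ from the figures reported below.
\begin{verbatim}
def J_ndr(q):  # pi_da*q + E_s[ pibar_rt(s) * E_w[(l - q - w)_+ | s] ]
    vals = []
    for s in s_grid:
        w_grid = np.linspace(0.0, s + 2.0, 400)
        short = np.maximum(l - q - w_grid, 0.0)
        vals.append(pibar_rt(s) * np.trapz(short, w_grid) / (s + 2.0))
    return pi_da * q + np.trapz(np.array(vals) * dens, s_grid)

qs = np.linspace(0.0, l, 121)
costs = [J_ndr(q) for q in qs]
print("argmin q ~", qs[int(np.argmin(costs))], "min cost ~", round(min(costs), 2))
\end{verbatim}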
We consider four cases: (a) optimal scheduling without demand response, (b) spot market with contingent pricing, (c) options market and (d) redesign of options market. We compute the optimal decisions in each case and derive the competitive equilibrium for these market setups. The system costs in these cases are compared.
\vspace{0.1cm}
\noindent 1. \emph{Optimal Scheduling without Demand Response:} The LSE buys energy from the day-ahead and real-time markets. Using Proposition \ref{thm:no-dr-social planner}, the optimal day-ahead purchase is $q_{ndr}^e= 1.23 \ \text{MW}$. The optimal system cost is $J_{ndr}^e(q_{ndr}^e) = \$ 56.77$.
\vspace{0.1cm}
\noindent 2. \emph{Optimal Scheduling with Demand Response:} With demand response, the optimal day-ahead purchase is $q^{e}= 0.84 \ \text{MW}$ and the optimal system cost is $J^{e}(q^{e}) = \$ 53.95$.
\vspace{0.1cm}
\noindent 3. \emph{Spot Market with Contingent Price:}
Solving equation (\ref{optimalconditionofspot}), the optimal day-ahead purchase is $q^*=0.84\ \text{MW}$. The optimal load reduction and contingent price given the information state $s$ are shown in Figures \ref{fig:loadreductionsopt} and \ref{fig:contingentprice} respectively. Figure \ref{fig:loadreductionsopt} reveals that when $s$ is larger, the LSE expects more wind at the delivery time $T$ and therefore calls on less load reduction at the intermediate time $t_1$. As a result, the competitive equilibrium price is lower when $s$ is larger, as shown in Figure \ref{fig:contingentprice}.
\vspace{0.1cm}
\noindent 4. \emph{Options Market:}
The competitive equilibria of the options markets can be derived based on Theorem \ref{thm:ce-option_oldversion} and Theorem~\ref{thm:ce-option}, respectively. The setups of these two options markets are distinct. Here we solve the competitive equilibria under both market schemes and compare their performances in terms of the system cost at the competitive equilibria.
We first compute the competitive equilibrium of the options market presented in Section \ref{options_market_original}. Based on Theorem \ref{thm:ce-option_oldversion}, the competitive equilibrium comprises four decision variables $(\pi^{*o}, \pi^{*sp}, \tilde{x}^*, \tilde{q}^*)$ that are determined by three equations. This indicates that there is an extra degree of freedom, which induces multiple competitive equilibria. Figure \ref{fig:q_options} shows competitive equilibrium prices of the options market. Clearly, there is a continuum of competitive equilibrium prices, and the options price is a decreasing function of the strike price. This is intuitive, since the objective functions of the LSE and the aggregator are jointly determined by $\pi^o$ and $\pi^{sp}$. Therefore, for any pair $(\pi^o, \pi^{sp})$ at the competitive equilibrium, the combination of a higher $\pi^o$ and a lower $\pi^{sp}$ is equally acceptable as another competitive equilibrium price. Figure \ref{fig:x_options} shows that as the strike price increases, the traded option $x$ at the competitive equilibrium first increases $(\pi^{*sp}\leq \$28.7/\text{MWh})$ and then decreases $(\pi^{*sp}> \$28.7/\text{MWh})$. This is because the options price first decreases $(\pi^{*sp}\leq \$28.7/\text{MWh})$ and then remains constant $(\pi^{*sp}> \$28.7/\text{MWh})$. Figure \ref{fig:q_options} shows the day-ahead procurement as a function of the strike price. It is non-monotonic when the strike price is between $\$20/\text{MWh}$ and $\$30/\text{MWh}$ due to the joint effect of $\pi^{*o}$ and $\pi^{*sp}$. Figure \ref{fig:reduction_options} shows the exercised load reduction in real time under different information states when the strike price is $\$24.1/\text{MWh}$. Clearly, $y_s$ is a decreasing function of $s$, which indicates that less load reduction is called upon when more wind energy is available for free.
Figure \ref{fig:systemcostcompare} compares the system costs at the competitive equilibria of the original (Section \ref{options_market_original}) and the redesigned options market (Section \ref{options_market_redesign}) under different equilibrium strike prices. The equilibrium of the redesigned market is derived based on (\ref{eq:ce-option-ys}). Clearly, Figure~\ref{fig:systemcostcompare} has three regimes:
(i) $\pi^{*sp}\leq \$20.3/\text{MWh}$, (ii) $\$20.3/\text{MWh}< \pi^{*sp}<\$31.8/\text{MWh}$, and (iii) $\pi^{*sp}\geq\$31.8/\text{MWh}$. In the first regime, the strike price is so low that the LSE calls upon the maximum load reduction regardless of the information state, i.e., $y_s=x, \forall s$.
In the third regime, the strike price is so high that no load reduction is called, i.e., $y_s=0, \forall s$. The minimum system cost is attained in the second regime. It is clear that the system cost at the competitive equilibrium of the redesigned market is consistently lower than that of the original one, and the difference between the two markets is most significant in the second regime.
}
\section{Conclusion}
\label{sec:conclusion}
We have studied a novel market model for trading demand response using options. We have shown that demand response can be used as an intermediate recourse between the day-ahead market and the real-time market. Under some conditions, this options market admits a competitive equilibrium. We have studied the efficiency of this equilibrium and obtained the optimal strike price that yields the minimum system cost at the competitive equilibrium. In future work we plan to address options markets with multiple intermediate stages, as well as the case where the LSE can exercise market power.
\bibliographystyle{IEEEtran}
\section{Introduction}
The main goal of Speaker Verification~(SV) is to verify whether a query utterance belongs to a claimed speaker by comparing it to the existing speaker models. Speaker verification is usually split into two categories: text-dependent and text-independent. Text-dependent refers to the scenario in which all speakers utter the same phrase, while in text-independent no prior information is assumed about what the speakers are saying. The latter setting is much more challenging, as the utterances can contain numerous variations of non-speaker information that can be misleading, while extracting solely speaker-related information is desired.
Speaker verification, in general, consists of three stages: training, enrollment, and evaluation. In training, the universal background model is trained using a gallery of speakers. In enrollment, new speakers are enrolled by creating speaker models based on the trained background model; technically, the speaker models are generated using the universal background model. In the evaluation phase, the test utterances are compared to the speaker models for identification or verification.
Recently, following the success of deep learning in applications such as biomedical analysis~\cite{shen2015multi, mobiny2017lung}, automatic speech recognition, image recognition, and network sparsity~\cite{simonyan2014very,krizhevsky2012imagenet,hinton2012deep,torfi2018attention}, DNN-based approaches have also been proposed for Speaker Recognition~(SR)~\cite{lei2014novel,variani2014deep}.
The traditional speaker verification models, such as the Gaussian Mixture Model-Universal Background Model (GMM-UBM) \cite{reynolds2000speaker} and the i-vector \cite{dehak2011front}, have long been the state of the art. The drawback of these approaches is their unsupervised fashion, which does not optimize them for the verification setup. Recently, supervised methods have been proposed for model adaptation to speaker verification, such as the one presented in \cite{campbell2006support} and the PLDA-based i-vector model~\cite{garcia2011analysis}. Convolutional Neural Networks~(CNNs) have also been used for speech recognition and speaker verification~\cite{variani2014deep,abdel2014convolutional}, inspired by their superior power for action recognition~\cite{ji20133d} and scene understanding~\cite{tran2015learning}. Capsule networks, introduced by Hinton et al.~\cite{sabour2017dynamic}, have shown quite remarkable performance in different tasks~\cite{mobiny2018fast, jaiswal2018capsulegan} and demonstrated the potential to be used for similar purposes.
In the present work, we propose the use of LSTMs by using MFCCs\footnote{Mel Frequency Cepstral Coefficients} speech features for directly capturing the temporal information of the speaker-related information rather than dealing with non-speaker information which plays no role for speaker verification.
\section{Related works}
There is a huge literature on speaker verification. However, we only focus on the research efforts that are based on deep learning. One of the successful earlier works in speaker verification is the use of Locally Connected Networks (LCNs)~\cite{chen2015locally} for the text-dependent scenario. Deep networks have also been used as feature extractors for representing speaker models~\cite{heigold2016end,zhang2017end}. We investigate LSTMs in an end-to-end fashion for speaker verification. As Convolutional Neural Networks \cite{lecun1998gradient} have successfully been used for speech recognition~\cite{sainath2013deep}, some works use their architecture for speaker verification as well~\cite{lei2014novel,richardson2015deep}. The most similar work to ours is \cite{heigold2016end}, in which the authors use LSTMs for the text-dependent setting. On the contrary, we use LSTMs for the text-independent scenario, which is a more challenging one.
\section{Speaker Verification Using Deep Neural Networks}
Here, we explain the speaker verification phases using deep learning. In different works, these steps have been adapted to the specific procedure proposed by the respective research effort, such as the i-vector~\cite{dehak2011front,kenny2007joint} and the d-vector system~\cite{variani2014deep}.\\
\subsection{Development}
In the development stage, which is also called training, the speaker utterances are used for background model generation, which ideally should be a universal model for speaker representation. DNNs are employed due to their power for feature extraction. By using deep models, feature learning is performed to create an output space which represents the speaker in a universal model.\\
\subsection{Enrollment}
In this phase, a model must be created for each speaker. For each speaker, by collecting the spoken utterances and feeding them to the trained network, different output features will be generated for the speaker utterances. From this point, different approaches have been proposed on how to integrate these enrollment features for creating the speaker model. The traditional one is to aggregate the representations by averaging the outputs of the DNN, which is called the d-vector system~\cite{chen2015locally,variani2014deep}.
\subsection{Evaluation}
For evaluation, the test utterance is the input of the network and the output is the utterance representative. The output representative is compared to the different speaker models, and the verification criterion is some similarity function. For evaluation purposes, the traditional Equal Error Rate~(EER) is often used, which is the operating point at which the false reject rate and the false accept rate are equal.
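As an illustration, the EER can be estimated from a set of trial scores with a simple threshold sweep, as in the following Python sketch (an illustrative helper, not part of the original implementation):
\begin{verbatim}
import numpy as np

def equal_error_rate(scores, labels):
    # scores: similarity scores (higher = more likely same speaker)
    # labels: 1 for genuine trials, 0 for impostor trials
    scores, labels = np.asarray(scores), np.asarray(labels)
    thresholds = np.sort(np.unique(scores))
    far = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    frr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # point where FAR ~ FRR
    return 0.5 * (far[i] + frr[i])
\end{verbatim}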
\section{Model}
The main goal is to implement LSTMs on top of speech extracted features. The input to the model as well as the architecture itself is explained in the following subsections.
\subsection{Input}
The raw signal is extracted, and 25ms windows with 60\% overlap are used for the generation of the spectrogram, as depicted in Fig.~\ref{fig:speechinput}. By selecting 1 second of the sound stream, computing 40 log-energies of filter banks per window, and performing mean and variance normalization, a feature window of $40 \times 100$ is generated for each 1-second utterance. Before feature extraction, voice activity detection has been performed on the raw input to eliminate silence. Derivative features have not been used, as they did not yield any improvement in our empirical evaluations. For feature extraction, we used the SpeechPy library~\cite{torfi2018speechpy}.
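A minimal Python sketch of this pipeline is given below; the \texttt{lmfe} and \texttt{cmvn} calls follow SpeechPy's documented interface, but the concrete argument values here are our assumptions:
\begin{verbatim}
import numpy as np
import speechpy  # feature extraction library used in this work

def utterance_features(signal, fs=16000):
    # 25 ms frames with 10 ms stride (60% overlap) -> 100 frames/second
    feats = speechpy.feature.lmfe(signal, sampling_frequency=fs,
                                  frame_length=0.025, frame_stride=0.010,
                                  num_filters=40)
    # mean and variance normalization of the 100 x 40 feature window
    return speechpy.processing.cmvn(feats, variance_normalization=True)
\end{verbatim}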
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.45]{_imgs/input-pipeline.png}
\end{center}
\caption{The feature extraction from the raw signal.}
\label{fig:speechinput}
\end{figure}
\subsection{Architecture}\label{sec:architecture}
The architecture that we use is a long short-term memory recurrent neural network (LSTM)~\cite{hochreiter1997long,sak2014long} with a single output for decision making. We input fixed-length sequences, although LSTMs are not limited by this constraint. Only the last hidden state of the LSTM is used for decision making through the loss function. The LSTM that we use has two layers with 300 nodes each~(Fig.~\ref{fig:lstm}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{_imgs/lstm.png}
\end{center}
\caption{The siamese architecture built based on two LSTM layers with weight sharing.}
\label{fig:lstm}
\end{figure}
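For illustration, one branch of the siamese pair can be sketched in PyTorch as follows (our implementation was in TensorFlow; the dimensions follow Section~\ref{sec:architecture}, and weight sharing is obtained simply by reusing the same module for both branches):
\begin{verbatim}
import torch
import torch.nn as nn

class SpeakerLSTM(nn.Module):
    # two stacked LSTM layers with 300 units; the final hidden state
    # of the top layer serves as the utterance embedding
    def __init__(self, n_features=40, hidden=300):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            num_layers=2, batch_first=True)

    def forward(self, x):            # x: (batch, 100 frames, 40 features)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden)
        return h[-1]                 # last layer's final hidden state

net = SpeakerLSTM()                  # one instance => shared weights
d = torch.norm(net(torch.randn(8, 100, 40))
               - net(torch.randn(8, 100, 40)), dim=1)
\end{verbatim}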
\subsection{Verification Setup}\label{sec:Verification Setup}
A usual method, which has been used in many other works~\cite{chen2015locally}, is to train the network using the Softmax loss function for an auxiliary classification task and then use the extracted features for the main verification purpose. A reasonable argument against this approach is that the Softmax criterion is not aligned with the verification protocol, since it optimizes for the identification of individuals and not for the one-vs-one comparison. Technically, the Softmax optimization criterion is as below:
\begin{equation}
\textrm{softmax}({\bf x})_{Speaker} = \frac{e^{x_{Speaker}}}{\sum_{Dev_{Spk}}{e^{x_{Dev_{Spk}}}}}
\end{equation}
\begin{equation}\label{eq4}
\begin{cases}
x_{Speaker} = W_{Speaker} \times y + b\\
x_{Dev_{Spk}} = W_{Dev_{Spk}} \times y + b
\end{cases}
\end{equation}
\noindent in which \textit{Speaker} and \textit{$Dev_{Spk}$} denote the sample speaker and an identity from the speaker development set, respectively. As is clear from this criterion, there is no notion of a one-to-one speaker comparison, so it is not consistent with the speaker verification protocol.
To consider this condition, we use the Siamese architecture to satisfy the verification purpose which has been proposed in \cite{chopra2005learning} and employed in different applications~\cite{sun2018deep,varior2016gated,koch2015siamese}. As we mentioned before, the Softmax optimization will be used for initialization and the obtained weights will be used for fine-tuning.
The Siamese architecture consists of two identical networks with weight sharing. The goal is to create a shared feature subspace aimed at discriminating between genuine and impostor pairs. The main idea is that when the two elements of an input pair are from the same identity, their outputs should be close in distance, and far apart otherwise. For this objective, the training loss is the contrastive cost function. The aim of the contrastive loss $C_W(X,Y)$ is the minimization of the loss in both scenarios of having genuine and impostor pairs, with the following definition:
\begin{align}\label{eq2}
C_W(X,Y) = {{1}\over{N}} \sum_{j=1}^{N} C_W(Y_j,(X_{1},X_{2})_j),
\end{align}
\noindent where $N$ is the number of training samples, $j$ is the sample index, and $C_W(Y_j,(X_{1},X_{2})_j)$ is defined as follows:
\begin{equation} \label{eq3}
\begin{split}
C_W&(Y_j,(X_{1},X_{2})_j) = Y_j\,C_{gen}(D_W(X_{1},X_{2})_j)\\ &+ (1-Y_j)\,C_{imp}(D_W(X_{1},X_{2})_j)+\lambda{\|W\|}_{2}
\end{split}
\end{equation}
\noindent in which the last term is the regularization. $C_{gen}$ and $C_{imp}$ will be defined as the functions of $D_W(X_{1},X_{2})$ by the following equations:
\begin{equation}\label{eq5}
\begin{cases}
C_{gen}(D_W(X_{1},X_{2}))={{1}\over{2}}{D_W(X_{1},X_{2})}^2\\
C_{imp}(D_W(X_{1},X_{2}))={{1}\over{2}}\max\{0,{(M-D_W(X_{1},X_{2}))}\}^2
\end{cases}
\end{equation}
\noindent in which $M$ is the margin.
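A minimal PyTorch sketch of this contrastive loss is shown below; the margin value is an assumption, and the regularization term $\lambda{\|W\|}_{2}$ is typically realized as weight decay in the optimizer rather than inside the loss:
\begin{verbatim}
import torch

def contrastive_loss(d, y, margin=1.0):  # margin value assumed
    # d: distances D_W between paired embeddings
    # y: 1 for genuine pairs, 0 for impostor pairs
    genuine  = 0.5 * d.pow(2)
    impostor = 0.5 * torch.clamp(margin - d, min=0.0).pow(2)
    return (y * genuine + (1.0 - y) * impostor).mean()
\end{verbatim}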
\section{Experiments}
TensorFlow has been used as the deep learning library~\cite{tensorflow2015-whitepaper}. For the development phase, we used data augmentation by randomly sampling a 1-second audio segment for each person at a time. Batch normalization has also been used for avoiding possible gradient explosion~\cite{ioffe2015batch}. It has been shown that effective pair selection can drastically improve the verification accuracy~\cite{lin2008facetnet}. Speaker verification is performed using the protocol consistent with \cite{Nagrani17}, for which the identities whose names start with `E' are used for evaluation.
\begin{algorithm}\label{algorithm:pair selection}
\textbf{Update}: Freeze the network weights\;
\textbf{Evaluate}: Input data and get output distance vector\;
\textbf{Search}: Return max and min distances for match pairs : $max\_gen$ \& $min\_gen$\;
\textbf{Thresholding}: Calculate $th=th_{0}\times \left | \frac{max\_gen}{min\_gen}\right|$\;
\While{impostor pair}{
\eIf{$imp > max\_gen + th$}{
discard\;
}{
feed the pair\;
}
}
\caption{The utilized pair selection algorithm for selecting the main contributing impostor pairs}
\end{algorithm}
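In Python, the thresholding logic of this algorithm may be sketched as follows (variable names and the base threshold $th_0$ are illustrative):
\begin{verbatim}
import numpy as np

def keep_impostor_pairs(dist_imp, dist_gen, th0=1.0):
    # distances are computed with frozen weights (see Algorithm 1)
    max_gen, min_gen = dist_gen.max(), dist_gen.min()
    th = th0 * np.abs(max_gen / min_gen)
    # discard impostor pairs the network already separates easily,
    # i.e. those with distance above max_gen + th
    return dist_imp <= max_gen + th
\end{verbatim}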
\subsection{Baselines}
We compare our method with different baseline methods. The \textit{GMM-UBM} method \cite{reynolds2000speaker} is the first candidate. MFCC features with 40 coefficients are extracted and used. The \textit{Universal Background Model}~(UBM) is trained using 1024 mixture components. The \textit{I-Vector} model~\cite{dehak2011front}, with and without \textit{Probabilistic Linear Discriminant Analysis}~(PLDA)~\cite{kenny2010bayesian}, has also been implemented as a baseline.
The other baseline is the use of DNNs with locally-connected layers as proposed in \cite{chen2015locally}. In the d-vector system, after development phase, the d-vectors extracted from the enrollment utterances will be aggregated to each other for generating the final representation. Finally, in the evaluation stage, the similarity function determines the closest d-vector of the test utterances to the speaker models.
\subsection{Comparison to Different Methods}
Here we compare the baseline approaches with the proposed model as provided in Table ~\ref{table:compasison}. We utilized the architecture and the setup as discussed in Section~\ref{sec:architecture} and Section~\ref{sec:Verification Setup}, respectively. As can be seen in Table ~\ref{table:compasison}, our proposed architecture outperforms the other methods.
\begin{table}[h]
\caption[Table caption text]{Comparison of the EER (\%) for the proposed architecture and the baseline methods.}
\label{table:compasison}
\begin{center}
\addtolength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\toprule
Model & EER (\%)\\
\midrule
\rowcolor{black!0} GMM-UBM~\cite{reynolds2000speaker} & 27.1 \\
\rowcolor{black!0} I-vectors~\cite{dehak2011front} & 24.7 \\
\rowcolor{black!0} I-vectors~\cite{dehak2011front} + PLDA~\cite{kenny2010bayesian} & 23.5 \\
\rowcolor{black!0} LSTM~[ours] & 22.9 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Effect of Utterance Duration}
One of the main advantages of baseline methods such as \cite{dehak2011front} is their ability to capture robust speaker characteristics through long utterances. As demonstrated in Fig.~\ref{fig:utteranceeffect}, our proposed method outperforms the others for short utterances, considering that we used 1-second utterances. However, it is worthwhile to have a fair comparison for longer utterances as well. In order to have a one-to-one comparison, we modified our architecture to feed and train the system on longer utterances. In all experiments, the durations of the utterances utilized for development, enrollment, and evaluation are the same.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{_imgs/durationeffect.png}
\end{center}
\caption{The effect of the utterance duration~(EER).}
\label{fig:utteranceeffect}
\end{figure}
As can be observed in Fig.~\ref{fig:utteranceeffect}, the superiority of our method holds only for short utterances; for longer utterances, traditional baseline methods such as \cite{dehak2011front} are still the winners, and LSTMs fail to effectively capture inter- and intra-speaker variations.
\section{Conclusion}
In this work, an end-to-end model based on LSTMs has been proposed for text-independent speaker verification. It was shown that the model provides promising results for capturing the temporal information in addition to the within-speaker information. The proposed LSTM architecture has been applied directly to the speech features extracted from speaker utterances for modeling the spatiotemporal information. One of the observed trends is the superiority of traditional methods on longer utterances for more robust speaker modeling. More rigorous studies are needed to investigate the reasons behind the failure of LSTMs to capture long-term dependencies for speaker-related characteristics. Additionally, it is expected that the combination of traditional models with long short-term memory architectures may improve the accuracy by capturing the long-term dependencies in a more effective way. The main advantage of the proposed approach is its ability to capture informative features in short utterances.
\bibliographystyle{ieeetr}
\section{Introduction}
The search for extrasolar planets has
led to more than 700 confirmed discoveries\footnote{The Extrasolar Planets Encyclopedia;
http://www.exoplanet.eu; 2011-Nov-15}, considering all detection techniques. Up to now, most of them have been detected by means of the radial velocity (RV) technique using high-resolution spectrographs ($R=\lambda / \Delta \lambda \ge 40,000$)
at optical wavelengths. Most discoveries are giant gaseous planets (typically hot Neptunes and Jupiters) of short periods (of a few days) around stars of spectral
types F, G and K.
As potential hosts to rocky planetary companions, M-dwarfs have become increasingly popular as
targets for RV searches (e.g. Endl et al. 2006, Charbonneau et al. 2009, Mayor et al. 2009, Zechmeister et al.~2009). Very cool stars such as M-dwarfs are the most abundant type ($\sim70 \%$) of stars in the solar neighborhood and the Milky Way in
general (Henry et al.~1997).
The effective temperatures and masses of M-dwarfs are in the ranges 3700 to 2200~K and 0.5 to 0.07 solar masses, respectively, for the M0 to M9.5 spectral types. They exhibit prominent absorption features corresponding to strong neutral atomic lines and to H$_2$O, FeH, VO, CO, and TiO bands.
Owing to the low masses of these
objects, the reflex motion of the host star due to the gravitational pull of the
extrasolar planet is larger and more easily detectable than for more massive
host stars. Since M-dwarfs are very cool stars in comparison with solar-type stars, short period planets would more
likely be situated in the habitable zone.
M-dwarfs emit most of their energy around $1.1-1.3~\mu {\rm m}$, in the near-infrared
(NIR), while they appear very faint at optical wavelengths.
First attempts to measure RV variations among very cool M-dwarfs at NIR wavelengths were made by Mart\'in et al.~(2006). They achieved an RV precision of around 300 m~s$^{-1}$ for the M9.5-dwarf LP944-20 by using the spectrograph NIRSPEC, which is mounted on the Keck II telescope in Hawaii
(McLean et al.~1998). Recently, several research groups have reported high-precision RV measurements taken in the NIR with CRIRES
(K\"aufl et al.~2004), mounted at the UT1/VLT in the Paranal
Observatory of ESO in Chile. Bean et al.~(2010a) obtained high-resolution spectroscopic data of over 60 M-dwarfs (spectral types M4-M9), used an NH$_3$ gas cell spectrum as a stable reference, and report an RV precision of better than $5~{\rm m~s^{-1}}$. Figueira et al.~(2010) took observations of the planet-host candidate TW~Hya and achieved an RV precision better than $10~{\rm m~s^{-1}}$ by adopting telluric lines as a stable reference. Blake et al.~(2010) report RV measurements of 59 M- and L-dwarfs using the Keck/NIRSPEC spectrograph, with the aim of detecting low-mass companions. They made use of strong CO absorption features around 2.3~$\mu$m in M- and L-dwarfs and achieved RV precisions between 50 and 200~m~s$^{-1}$. Tanner et al.~(2010) report preliminary results of a late M-dwarf survey using Keck/NIRSPEC, with RV precisions between 150 and 300~m~s$^{-1}$.
Pravdo \& Shaklan~(2009) announced a massive planet around the M-dwarf vB10, discovered by means of astrometric data.
Zapatero Osorio et al.~(2009; hereafter ZO09) made use of our NIRSPEC data set and found evidence for RV variations which supported the planet hypothesis. They achieved an RV precision of about 300~m~s$^{-1}$. However, this planet was later refuted by different groups: by Bean et al. (2010b), who took high-resolution spectra ($R=\lambda/\Delta\lambda\sim100,000$) with CRIRES and achieved an RV precision of $\sim 10$~m~s$^{-1}$, and by Anglada-Escud\'e et al. (2010). Additionally, Lazorenko et al.~(2011) carried out an astrometric survey using the FORS2 camera of the ESO/VLT on Cerro Paranal, Chile, but found no evidence for the existence of a massive planet orbiting vB10. As part of this work, we aimed at finding out what had caused the spurious RV variations in the data analysis of ZO09.
Here, we report relative RV measurements of 8 late M-dwarfs with NIRSPEC, and we demonstrate the capability of this instrument to detect giant planetary companions with short orbital periods. In Section 2 we describe our M-dwarf sample, our observations, and the data reduction. In
Section 3 we outline the details of the data analysis, followed by the results and discussion (Section~4).
\section{Observations and Data Reduction}
As part of our M-dwarf survey (Deshpande et al., in prep.), we observed 8 M-dwarfs (2MJ2331-2749, GJ406, GJ905, GJ1156, LHS1363, LP412-31, RXJ2208.2, and vB10) at two or more epochs (Table~\ref{xoxo:T1}) using the NIRSPEC instrument, mounted on the Keck II telescope on the summit of Mauna Kea in Hawaii (McLean et al. 1998). We aimed at conducting RV precision tests and searching for RV drifts that could be interpreted as massive planets orbiting these M-dwarfs. Our sample comprised dwarfs with spectral types M5.0-M8.0 and masses between 0.14 and 0.075~M$_\odot$. Table~\ref{xoxo:T4} lists the spectral types, $J$-band magnitudes, and projected stellar rotational broadenings $v \sin i$ of the targets.
NIRSPEC is a cross-dispersed, cryogenic echelle spectrometer employing a $1024\times
1024$ ALADDIN InSb array detector. In the echelle mode, we selected
the NIRSPEC-3 ($J$-band) filter and an entrance slit width of $0.432\arcsec$ (i.e. 3 pixels along the dispersion direction of the detector), except for the 2001 June observations of vB10, where we used an entrance slit width of $0.576\arcsec$.
The corresponding spectral resolving powers were $R= \lambda / \Delta \lambda \approx 22,700$ and $R \approx 17,800$, respectively for the $0.432\arcsec$ slit and the $0.576\arcsec$ slit. The length of
both slits was $12\arcsec$. All observations were carried out at an echelle angle of
$\sim 63^\circ$. This instrumental setup provided a wavelength coverage from 1.148 to $1.346~\mu$m split into 10 different echelle orders, with a nominal dispersion ranging from 0.164 (blue) to
0.191$~{\rm \AA~pix^{-1}}$ (red wavelengths). Weather conditions (seeing and atmospheric transparency) were fine during the observations, except for the 2008 epoch, which was hampered by cirrus and strong wind. Table~\ref{xoxo:T1} lists the individual exposure times and the signal-to-noise ratios (SNRs) on average per spectral pixel in the stellar continua for each observing epoch.
For each target, the spectra were collected at two different positions along the entrance slit. This nodding allowed later the removal of the OH sky emission lines. For the identification of atmospheric telluric absorption, near-infrared featureless stars of spectral types A0-A2 were observed close in time (on average 3 min before or after the target observations) and position to our targets.
Raw data were reduced using the echelle package within
{\tt IRAF}\footnote{{\tt IRAF} is distributed by the National Optical Astronomy Observatory (NOAO), which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation in the USA.}. Nodded images were
subtracted to remove sky background and dark current. White
light spectra obtained with the same instrumental configuration
and for each target observation were used to flat-field the data.
By adopting the {\tt apall} task, we first identified and optimally centered the echelle orders in the two individual nodding frames for each target and traced these orders by adopting a second-order Legendre polynomial along the dispersion axis. In the next step, we extracted the one-dimensional spectra for each echelle order adopting the same aperture/trace parameters for both the target and an arc-lamp exposure of Ar, Kr, and Xe, which was always acquired after observing
the target and before pointing the telescope at the next star. The
air wavelengths of the arc lines were identified using the NIST\footnote{ http://physics.nist.gov/PhysRefData/ASD/lines\_form.html}
database, and we produced preliminary wavelength calibration fits using a third-order Legendre
polynomial along the dispersion axis and a second-order one
perpendicular to it. The mean rms of the fits was $0.03~{\rm \AA}$, or
0.7 km~s$^{-1}$.
\begin{table}
\begin{center}
\caption{Journal of target observations. The average SNR values in the stellar continua are given.} \label{xoxo:T1}
\begin{tabular}{lcccc}
\hline\hline
Obs. date & UT & Slit & Exp. (s) & SNR$^{a}$\\
\hline
\multicolumn{2}{l}{{\bf 2MJ2331-2749}:} \\
2007-Jun-24 & 14:33 & $0.432 \times 12$ & $2\times200$ & $\sim50$ \\
2007-Jun-25 & 14:12 & $0.432 \times 12$& $2\times200$ & $\sim60$ \\
\hline
\multicolumn{2}{l}{{\bf GJ 406}:} \\
2007-Apr-30 & ~7:03 & $0.432 \times 12$ & $2\times120$ & $\sim50$ \\
2007-Dec-23 & 15:32 & $0.432 \times 12$ & $2\times~30$ & $\sim50$ \\
\hline
\multicolumn{2}{l}{{\bf GJ 905}:} \\
2007-Jun-25 & 14:42 & $0.432 \times 12$ & $2\times~20$ & $\sim110$ \\
2007-Oct-27 & 10:51 & $0.432 \times 12$ & $2\times120$ & $\sim280$ \\
\hline
\multicolumn{2}{l}{{\bf GJ 1156}:} \\
2007-Jun-24 & ~7:24 & $0.432 \times 12$ & $2\times120$ & $\sim140$ \\
2007-Jun-25 & 15:55& $0.432 \times 12$ & $4\times300$ & $\sim240$ \\
\hline
\multicolumn{2}{l}{{\bf LHS 1363}:} \\
2007-Oct-26 & 12:09 & $0.432 \times 12$ & $2\times300$ & $\sim110$ \\
2007-Oct-27 & 12:13 & $0.432 \times 12$ & $2\times300$ & $\sim110$ \\
\hline
\multicolumn{2}{l}{{\bf LP 412-31}:} \\
2007-Oct-26 & 12:44 & $0.432 \times 12$ & $2\times300$ & $\sim60$ \\
2007-Oct-27 & 12:56 & $0.432 \times 12$ & $2\times300$ & $\sim50$ \\
\hline
\multicolumn{2}{l}{{\bf RXJ2208.2}:} \\
2007-Jun-24 & 13:49 & $0.432 \times 12$ & $2\times120$ & $\sim70$ \\
2007-Jun-25 & 13:40 & $0.432 \times 12$ & $2\times120$ & $\sim70$ \\
\hline
\multicolumn{2}{l}{{\bf vB10}:} \\
2001-Jun-15 & 14:06 & $0.576 \times 12$ & $2\times100$ & $\sim60$ \\
2001-Nov-~2 & ~4:43 & $0.432 \times 12$ & $2\times120$ & $\sim60$ \\
2001-Nov-~2 & ~5:39 & $0.432 \times 12$ & $2\times120$ & $\sim60$ \\
2007-Jun-25 & 13:22 & $0.432 \times 12$ & $2\times120$ & $\sim70$ \\
2008-Jul-28 & ~6:07 & $0.432 \times 12$ & $2\times120$ & $\sim20$ \\
\hline
\hline
\end{tabular}
~~~~~~~~~~~~~~~~~$^{a}$ SNR on average in the pseudo stellar continuum per spectral pixel around 1.265~$\mu$m.\\
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Properties of the M-dwarfs} \label{xoxo:T4}
\begin{tabular}{lcccccl}
\hline
\hline
Name & Sp. type & $J$ & $v \sin i$ & Ref. \\
& & & (km s$^{-1}$) \\
\hline
2MJ2331-2749 & M7.0 & 11.65 & $<12$ & Des11 \\
GJ406 & M5.5 & 7.09 & $\sim3$ &Rei10 \\
GJ905 & M5.0 & 6.88 & $<3$ & Rei10 \\
GJ1156 & M5.0 & 8.52 & $17.2\pm2.9$ & Des11 \\
LHS1363 & M6.5 & 10.48 & $<12$ & Des11 \\
LP412-31 & M6.5 & 10.48 & $17.6\pm3.2$ & Des11 \\
RXJ2208.2 & M5.0 & 10.60 & $18.6\pm2.3$ & Des11 \\
vB10 & M8.0 & 9.91 & 6.5 & Moh03 \\
\hline
\hline
\multicolumn{5}{l}{Abbreviations:}\\
\multicolumn{5}{l}{Des11 ...... Deshpande et al., in prep.}\\
\multicolumn{5}{l}{Moh03 ...... Mohanty \& Basri (2003).}\\
\multicolumn{5}{l}{Rei10 ...... Reiners et al. (2010).}\\
\end{tabular}
\end{center}
\end{table}
\section{Relative Radial Velocity Method}
We measured the RV of the stars relative to the telluric lines present in the
spectra, as well as relative to a selected epoch of the star, employing a self-calibration approach.
The radial velocity of
the telluric lines is stable at all wavelengths down to a level
of 10~m~s$^{-1}$ (e.g., Figueira et al. 2010, Seifahrt \& K\"aufl 2008), which is about an order of magnitude
smaller than the velocity precision we can achieve with NIRSPEC.
The basics of the self-calibration method have been extensively described
(e.g. Valenti et al.~1995, Endl et al. 2000, Bean et al.~2010a), so that we just
give a concise description of the method here and point out the important
aspects of its implementation.
Briefly, the main idea is to model the observations
and thereby determine the relative RV shift, and perform a fine tuning
of the wavelength solution at the same time. Basically, the model spectrum
is the product of a high-resolution telluric spectrum with a Doppler-shifted
version of a high-resolution reference spectrum of the star. This product of
those two spectra is then subjected to a convolution with the instrumental
profile (IP) of the spectrograph, and finally binned to the sampling of the
observed data. By variation of the free parameters of the model (Table~\ref{xoxo:T2}),
the best fit model is evaluated by $\chi^2$ statistics.
The input Doppler-shift which yields the best fit represents the measured RV.
Since our method requires the presence of telluric lines in the spectra, we restricted the analysis to the echelle orders 66, 60, 58, and 57, which were heavily contaminated mainly by absorption lines of water vapor. These four orders correspond to the wavelength ranges of $\lambda \sim 1.147$ to 1.163~$\mu$m, 1.261 to 1.279~$\mu$m, 1.304 to 1.323~$\mu$m, and 1.327 to 1.346~$\mu$m, respectively. For the echelle order numbering we refer to McLean et al.~(2007). Subsequently, we subdivided each spectral order into 5 equidistant pixel chunks of 200 pixels each (i.e. for all four orders together we have 20 chunks). This step was done to simplify the process of improving the model, to speed up the calculations, and to account for variations of the IP throughout each spectral order. Each of the following steps was carried out on each chunk individually, and the SNR was determined by
\begin{equation} \label{xoxoequ:50}
{\rm SNR} = S_{\star} / \sqrt{S_{\star} + S_{\rm BG} + S_{\rm BG2} + {\rm RON}^2\times 2n} ,
\end{equation}
where $S_{\star}$ denotes the signal level from the star in electrons, integrated over an aperture of $n$ pixels,
$S_{\rm BG}$ is the signal level of the sky background, $S_{\rm BG2}$ is the signal level of the sky background in the frame taken at the other nodding position, and RON denotes the read-out noise level per pixel in rms electrons (for NIRSPEC, ${\rm RON}=65~{\rm e}^-$). The noise errors were propagated through the following data analysis steps.
\subsection{Step 1: Telluric template spectrum and determination of the instrumental profile}
For the calculation of the atmospheric transmission spectrum, we used the Line-By-Line Radiative Transfer Model (LBLRTM) code, which is based on the FASCODE algorithm (Clough et al.~1992). LBLRTM is available as Fortran source code\footnote{Source code and manuals are available under {\tt http://rtweb.aer.com/lblrtm\_description.html}} and runs on various platforms. As the molecular database we adopted HITRAN (Rothman et al.~2005), which contains the 42 most prominent molecules and isotopes present in the atmosphere of the Earth. Following the approach presented by Seifahrt et al.~(2010), we created a high-resolution theoretical telluric spectrum for each observed spectrum by accounting for the air mass of the star as well as the weather conditions (water vapour column density, temperature and pressure profiles) during the observations. We retrieved the weather information from the Global Data Assimilation System (GDAS). GDAS models are available in 3-hour intervals for any location around the globe\footnote{GDAS webpage: {\tt http://ready.arl.noaa.gov/READYamet.php}}.
\begin{figure}
\includegraphics[angle=0,scale=.45]{xoxo-fg123.ps}
\caption{Comparison between the observed telluric spectrum (points), and the theoretical model (line). The observed telluric spectrum was taken by using a featureless A-star (HD181414), which was observed with NIRSPEC in the $J$-band on 2007-06-25. The rms of the telluric model fit to the observed data is about 1\%. \label{xoxo:F1A}}
\end{figure}
To calculate a first version of the instrumental profile (IP) of the spectrograph, we made use of the A-star observations taken next to our targets. First of all, we normalized the spectrum in such a way that the flux in the telluric continuum was at unity. Next, we refined the wavelength solution of the observed telluric spectrum with the appropriate high-resolution theoretical telluric spectrum by adopting a second-order polynomial. We then determined a preliminary version of the IP as the sum of 7 Gaussian profiles, in a similar way as described in Valenti et al.~(1995): around a central Gaussian we grouped 3 Gaussians on each side of it, which allows us to account for asymmetries in the IP. Free parameters were the height and width of the central Gaussian, plus the heights of the six satellite Gaussians (c.f. Table~\ref{xoxo:T2}). To reduce the number of free parameters per chunk, and to ensure that the method works robustly, the positions and the widths of these satellites were fixed and set {\it a priori} in such a way that their half-widths overlapped.
Next, we convolved the high-resolution theoretical spectrum with the determined preliminary IP and compared the resulting spectrum with the observed A-star spectrum. For a few telluric lines, we realized that the theoretical spectrum under- or overestimated the line-depths. To produce a better match between theory and observation, we iteratively carried out a fine tuning of the line-depths in the high-resolution theoretical telluric spectrum, then again convolved the modified telluric spectrum with the IP and evaluated the result with the observation by means of $\chi^2$-statistics. The iterations were carried out until the reduced $\chi^2$ reached 1. Fig.~\ref{xoxo:F1A} shows a comparison between an A-star spectrum (HD~181414) and the fit of the refined theoretical telluric spectrum as well as the residuals of the model fit to the A-star spectrum.
The rms of the telluric model fit to the observed data is about 1\% on average. The refined high-resolution telluric model spectrum from then on served as the telluric template spectrum. In the final step, we refined the wavelength solution of the observed spectrum and then re-calculated the IP by adopting this new telluric template spectrum.
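Schematically, such a multi-Gaussian IP model can be written as follows (a sketch; the satellite positions and widths are fixed a priori, and the concrete parameter values are placeholders):
\begin{verbatim}
import numpy as np

def instrumental_profile(x, amp0, width0, sat_amps, sat_pos, sat_width):
    # central Gaussian plus 3 fixed-position satellites on each side;
    # only the amplitudes (and the central width) are free parameters
    ip = amp0 * np.exp(-0.5 * (x / width0) ** 2)
    for a, mu in zip(sat_amps, sat_pos):
        ip += a * np.exp(-0.5 * ((x - mu) / sat_width) ** 2)
    return ip / np.trapz(ip, x)  # normalize to unit area
\end{verbatim}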
\subsection{Step 2: Stellar template spectrum}
Due to the lack of appropriate theoretical model spectra which fit the stellar absorption features in the $J$-band, we created the stellar template spectrum for one selected reference epoch by calculating an IP-free and telluric-free version of the target spectrum. As the reference epoch, we selected the epoch in which the stellar spectrum showed the highest SNR. To produce the stellar template, we first applied the refined wavelength solution of the A-star spectrum (which was taken, on average, 3 minutes before or after the target observations) to the observed target spectrum of the same epoch. Since the telluric lines were present in the target spectrum, we needed to remove them from the spectrum. In preparation for this, we convolved the appropriate theoretical telluric spectrum with the IP. Then, we divided the target spectrum by the convolved theoretical telluric spectrum (Fig.~\ref{xoxo:F1A}). Similar to Bean et al.~(2010a) and Blake et al.~(2010), we found that this approach led to smaller uncertainties than the usual method of telluric line removal, where the target spectrum is simply divided by the appropriate A-star spectrum.
To create the final stellar template spectrum, we deconvolved the telluric-free target observation by the IP, employing the maximum-entropy method (MEM) with 5-fold oversampling of the output spectrum. In the final step, we applied the refined wavelength solution that we had obtained for the A-star spectrum to the 5-times oversampled, IP-free stellar spectrum, which from then on served as the stellar template.
\begin{figure}
\includegraphics[angle=0,scale=.5]{xoxo-fg3.ps}
\caption{Example model components and fit for the radial velocity measurements. The components are given in the two top panels: the spectrum of the high-resolution theoretical telluric spectrum (top), and the deconvolved and RV-shifted version of the telluric free stellar spectrum (bottom). We note that the scale of the flux is different in each panel for better visibility. In the lower panel, we show the observed spectrum (points) and the best-fit model (line).\label{xoxo:F3A}}
\end{figure}
\begin{table}
\begin{center}
\caption{Free parameters in the model per chunk.} \label{xoxo:T2}
\begin{tabular}{lc}
\hline\hline
Parameter & degree of \\
~ & freedom\\
\hline
Stellar absorption line depth & 1 \\
Linear stellar continuum trend & 2 \\
Doppler shift of stellar template & 1 \\
Telluric absorption line depth & 1 \\
Amplitude and width of main Gaussian & 2 \\
Amplitudes of satellite Gaussians & 6 \\
${\rm2^{nd}}$ order wavelength solution vector & 3\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Step 3: Fitting the observed data}
For each target, we first determined the barycentric velocity differences $\Delta v_{{\rm bc},t}$ for all observation epochs $t$ with respect to that one of the stellar template epoch. This correction was calculated by use of the JPL ephemeris DE200 (Standish 1990).
We constructed the model of the observation by multiplying the telluric template with the stellar template, where the Doppler shift of the latter is one of the free parameters.
The resulting combination spectrum was convolved with the IP determined in Step~1, and a new wavelength solution was calculated. Subsequently, all the free parameters (i.e. line depths in the models, IP, etc.; see Table~\ref{xoxo:T2}) were refined by employing Brent's optimization algorithm, and the fit to the observed data was evaluated by using $\chi^2$ statistics. The search range for the Doppler shift was $\Delta v_{{\rm bc},t}\pm15$~km~s$^{-1}$ with a step width of 10~m~s$^{-1}$. We note that such a large interval would also allow us to detect large
relative RV variations due to unseen massive companions such as low-mass
stars and brown dwarfs. We calculated the $\chi^2$ values for each Doppler shift and then determined the exact $\chi^2$-minimum by using a Gaussian fit. That Doppler shift which led to the overall best fit model ($\chi^2$-minimum) constituted the measured RV of the star in the chunk, relatively to the stellar template.
To determine the global (i.e., all chunks together) RV measurement, we combined the RV measurements of all chunks into one as follows: each chunk was given a specific weight determined from the average SNR in the stellar continuum, the number of telluric and stellar absorption lines present in that chunk, and the depths of the stellar lines. Furthermore, we rejected chunks in which the RV measurement was clearly an outlier ($3\sigma$ above or below the average of all RV measurements) by adopting sigma-clipping. No chunks were rejected for the stars GJ905, GJ1156, LHS1363, RXJ2208.2, and vB10. For 2MJ2331-2749 and LP412-31, one chunk each was rejected, while for GJ406 two chunks were rejected. All rejected chunks were located in noisy areas with SNR levels lower than 40 on average. We attribute these spurious RV shifts to improper stellar templates, which contained artifacts from the deconvolution of low-SNR data. The global RV measurement was then determined as the weighted arithmetic mean of the un-rejected chunks, and its error as the weighted standard deviation of the un-rejected RV measurements in the chunks.
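A minimal sketch of this combination step, assuming the per-chunk weights have already been assembled from the SNR, the line counts, and the line depths, reads:

\begin{verbatim}
# Sketch: 3-sigma clipping of chunk RVs, then weighted mean and
# weighted standard deviation of the surviving chunks.
import numpy as np

def combine_chunks(rv, w, nsig=3.0):
    rv, w = np.asarray(rv, float), np.asarray(w, float)
    keep = np.abs(rv - rv.mean()) <= nsig * rv.std()
    rv, w = rv[keep], w[keep]
    mean = np.average(rv, weights=w)
    err = np.sqrt(np.average((rv - mean) ** 2, weights=w))
    return mean, err
\end{verbatim}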
\section{Results and Discussion}
We analyzed the data sets with our relative radial velocity measurement approach and determined the relative RV measurements with respect to the selected reference epoch. For none of the eight M-dwarfs in our sample did we find significant evidence of relative RV variations at the 3$\sigma$ level (Table~\ref{xoxo:T3}), where $\sigma$ denotes the measurement uncertainty. The RV precisions are of the order of 180-300~m~s$^{-1}$, except for the observations in July 2008, which were taken at low SNR.
We investigated the period and mass range of companions that could be detected with such RV precisions. We determined the minimum detectable planet mass by employing a Monte-Carlo analysis, thereby probing planetary orbits with different parameters and investigating how many of these orbits could be recovered from the five measurements of vB10. We considered only the case of a circular orbit and adopted the mass of vB10, $m_\star=0.078~{\rm M}_\odot$.
Fig.~\ref{xoxo:F1} shows the 3$\sigma$ detection limit. We find that for companions with periods of only a few days, even planets with minimum masses of $m_{\rm{p}}\sin i \ge 0.3~M_{\rm Jup}$ can be detected with an RV precision of $\sim220$~m~s$^{-1}$.
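The logic of this estimate can be sketched as follows. The detection criterion below (injected peak-to-peak signal exceeding $3\sigma$ at the actual observing epochs) is a simplified stand-in for the recovery statistic behind Fig.~\ref{xoxo:F1}, and the semi-amplitude formula assumes a circular orbit with $m_{\rm p} \ll m_\star$.

\begin{verbatim}
# Sketch of the Monte-Carlo detection-limit estimate for vB10.
import numpy as np

K1 = 28.4329   # m/s for 1 M_Jup, 1 yr period, 1 M_sun host

def semi_amplitude(mp_jup, period_yr, mstar_sun):
    return K1 * mp_jup * mstar_sun ** (-2 / 3) * period_yr ** (-1 / 3)

def frac_detected(mp_jup, period_d, t_obs_d, sigma_rv, mstar=0.078,
                  n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.asarray(t_obs_d, float)    # the actual observation epochs
    K = semi_amplitude(mp_jup, period_d / 365.25, mstar)
    hits = 0
    for _ in range(n_trials):
        phase = rng.uniform(0.0, 2.0 * np.pi)
        model = K * np.sin(2.0 * np.pi * t / period_d + phase)
        hits += (model.max() - model.min()) > 3.0 * sigma_rv
    return hits / n_trials
\end{verbatim}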
\begin{figure}
\includegraphics[angle=270,scale=.4]{out.dat.ps}
\caption{Monte-Carlo analysis for the five vB10 measurements. For the mass of vB10, we adopted 0.078~M$_\odot$ from Pravdo \& Shaklan (2009). With an RV precision of $\sim220$~m~s$^{-1}$, even hot Jupiters with minimum masses $m_{\rm p} \sin i> 0.3~$M$_{\rm Jup}$ could be detected around late M-dwarfs with 3$\sigma$ confidence. We note that for a larger number of measurements the number of aliasing peaks can be decreased significantly.\label{xoxo:F1}}
\end{figure}
In Fig.~\ref{xoxo:F2A} we show our relative RVs of vB10 together with the measurements by ZO09.
We note that for a proper comparison, we adopted the same reference epoch
as ZO09. The agreement between ZO09 and our measurements is within
1$\sigma$ of the quoted uncertainties for all epochs except the 2001 epoch
(BJD = 2452076). We next provide an explanation for the discrepancy of this
one measurement. Similar to our data analysis, ZO09 used the telluric lines present in the target spectra as a stable reference; contrary to our analysis, however, they did not account for any IP variations but calculated the RVs by cross-correlation. In our analysis, we do not see any RV shift exceeding the RV precision for any measurement. We find evidence that the different instrumental setting used on 2001-Jun-15 (0.576" slit instead of the standard setting of 0.432") produced an asymmetric instrumental profile (Fig.~\ref{xoxo:F2}), which leads to a significant RV shift when a simple cross-correlation is adopted for the RV determination. Our results clearly demonstrate the importance of modeling the IP, especially when observations are carried out with different instrumental settings.
\begin{figure}
\includegraphics[angle=270,scale=.355]{xoxo-fg2.ps}
\caption{Our RV measurements of vB10 (crosses) vs. the measurements of ZO09 (open circles). The RV precision of our analysis is $\sim220$~m~s$^{-1}$, except for the last epoch in 2008, which was hampered by bad weather. We show that the RV shift in the work of ZO09 at the first 2001 epoch (BJD$=245~2076$) originates from
unaccounted-for asymmetries in the IP rather than from a planetary companion.
\label{xoxo:F2A}}
\end{figure}
\begin{figure}
\includegraphics[angle=270,scale=.36]{new-ip-nogrid6.ps}
\caption{Instrumental profiles (IPs) of NIRSPEC for two different observing epochs of vB10. On 2001-06-15, a broader slit was used ($0.576"$; solid line) than at the reference epoch ($0.432"$; 2007-06-25; dashed line). To visualize the asymmetries between the two IPs, we calculated their ratio and scaled the resulting function for better visibility (dotted line). ZO09 did not account for these asymmetries between the two IPs, which led to a spurious RV shift of about 1~km~s$^{-1}$ for the 2001-06-15 measurement in their data analysis.\label{xoxo:F2}}
\end{figure}
\begin{table}
\begin{center}
\caption{Relative radial velocity measurements.} \label{xoxo:T3}
\begin{tabular}{lcc}
\hline\hline
Obs. Date & BJD & rel. RV \\
& 245~0000+ & (m~s$^{-1}$) \\
\hline
\multicolumn{2}{l}{{\bf 2MJ2331-2749}:} \\
2007-Jun-24 & 4276.10874 & $194\pm224$\\
2007-Jun-25 & 4277.09424 & {\it reference epoch}\\
\hline
\multicolumn{2}{l}{{\bf GJ 406}:} \\
2007-Apr-30 & 4220.79751 & {\it reference epoch} \\
2007-Dec-23 & 4458.14962 & $-77\pm 238$ \\
\hline
\multicolumn{2}{l}{{\bf GJ 905}:} \\
2007-Jun-25 & 4277.11249 & $-233\pm 201$\\
2007-Oct-27 & 4400.95657 & {\it reference epoch} \\
\hline
\multicolumn{2}{l}{{\bf GJ 1156}:} \\
2007-Apr-30 & 4220.81513 & $141\pm185$\\
2007-Dec-22 & 4457.16844 & {\it reference epoch} \\
\hline
\multicolumn{2}{l}{{\bf LHS 1363}:} \\
2007-Oct-26 & 4400.01234 & $-27\pm196$\\
2007-Oct-27 & 4401.01511 & {\it reference epoch}\\
\hline
\multicolumn{2}{l}{{\bf LP 412-31}:} \\
2007-Oct-26 & 4400.03654 & {\it reference epoch} \\
2007-Oct-27 & 4401.04491 & $298\pm260$\\
\hline
\multicolumn{2}{l}{{\bf RXJ2208.2}:} \\
2007-Jun-24 & 4276.07879 & $-189\pm 307$\\
2007-Jun-25 & 4277.07266 & {\it reference epoch}\\
\hline
\multicolumn{2}{l}{{\bf vB10}:} \\
2001-Jun-15 & 2076.08951 & $19\pm230$ \\
2001-Nov-02 & 2215.70560 & $69\pm223$ \\
2001-Nov-02 & 2215.74225 & $-3\pm217$ \\
2007-Jun-25 & 4277.05865 & {\it reference epoch} \\
2008-Jul-28 & 4675.75669 & $131\pm497$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We compare our results to the work of Blake et al.~(2010), who searched for companions to M- and L-dwarfs by using NIRSPEC at a spectral resolving power of $\sim25,000$ in the $K$-band. They adopted one spectral order covering the wavelength range from 2.285 to 2.318~$\mu$m to measure the dense and strong CO absorption line pattern present in those dwarfs. As a stable wavelength reference, they made use of the CH$_4$ telluric absorption lines present in the observations. Similar to our work, they employed a self-calibrating approach, with the difference that they adopted theoretical models of M- and L-dwarfs, which described the observations well.
Blake et al. obtained measurements with SNRs in the range of 50 to 100 in the pseudo-stellar continua, and they report RV precisions of 100-300~m~s$^{-1}$ for slowly rotating late-M and L dwarfs. The uncertainties of Blake et al. in the $K$-band are in agreement
with our values of 180-300~m~s$^{-1}$ in view of our SNRs. However, our wavelength coverage is about twice that of Blake et al. According to the relative RV precision formulae, the larger wavelength coverage should have resulted in a better velocity precision, which is not the case. We conclude that both the larger number of deep lines in the CO-band region (more than 30 lines with a line depth larger than 50\%) as compared to the $J$-band (only a few lines with a depth larger than 25\%) and the use of theoretical template spectra instead of deconvolved stellar spectra appear to compensate for the shorter wavelength coverage by a similar factor (cf. Equation~6 in Butler et al. 1996).
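To make this compensation argument concrete, one can compare the two setups with a crude scaling in which the RV uncertainty goes as the inverse of the SNR, the line depth, and the square roots of the number of lines and of the wavelength coverage. This is a rough proxy, not the exact Equation~6 of Butler et al.~(1996), and the numbers below are illustrative only.

\begin{verbatim}
# Crude scaling proxy (NOT the exact Butler et al. 1996 formula):
# sigma_RV ~ 1 / (SNR * depth * sqrt(n_lines) * sqrt(coverage)).
def quality(snr, n_lines, depth, coverage):
    return snr * depth * n_lines ** 0.5 * coverage ** 0.5

# illustrative comparison: K-band CO region vs. our J-band setup
q_K = quality(snr=75, n_lines=30, depth=0.50, coverage=0.033)
q_J = quality(snr=75, n_lines=5,  depth=0.25, coverage=0.066)
print(q_K / q_J)   # > 1: the K-band setup wins despite its coverage
\end{verbatim}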
We note that Reiners et al.~(2010) and Rodler et al.~(2011) carried out theoretical RV precision studies of M- and L-dwarfs, adopting theoretical models of M-dwarfs (e.g. del Burgo et al.~2009). As a result, they find that the highest RV precision for M-dwarfs is attained in the $Y$-band around $1~\mu$m, rather than in the $J$-, $H$-, or $K$-band. For L-dwarfs, however, Rodler et al.~(2011) reported that the highest RV precision is attained in the $J$-band.
We conclude that for accurate relative RV determinations with NIRSPEC, a
self-calibrating approach that accounts for changes in the instrumental
setting produces the best measurements in terms of RV precision. Although our RV precision would allow us to detect massive hot Neptunes around late M-dwarfs, we have not found any brown dwarf or massive planetary companion in our survey. Additionally, the re-analysis of the data of the M8-dwarf vB10 presented in ZO09 now clearly confirms the non-existence of a massive planet orbiting that dwarf and agrees with the results of other research groups (e.g. Anglada-Escud\'e et al.~2010; Bean et al.~2010b; Lazorenko et al.~2011).
\begin{acknowledgements}
We thank those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests.
We are grateful to H. Bouy, N. Dello-Russo, P.-B. Ngoc, R. Tata, and R. Vervack for helping to obtain the 2007 and 2008 NIRSPEC spectra.
FR thanks A. Seifahrt for his help with LBLRTM, and M. Zechmeister and M. Endl for discussions on the self-calibrating approach. This work has been supported by the Spanish Ministerio de Educaci\'on y Ciencia through grant AYA2007-67458. Partial support for this research was provided by RoPACS,
a Marie Curie Initial Training Network funded by the European
Commission's Seventh Framework Programme. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science and the Pennsylvania Space Grant Consortium. This work was partly funded by the Funda\c{c}\~{a}o para
a Ci\^{e}ncia e a Tecnologia (FCT)-Portugal through the
project PEst-OE/EEI/UI0066/201.
We would furthermore like to thank the anonymous referee for valuable comments that helped to improve the article.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Initiated by \cite{Knuth73} almost forty years ago, the study of permutation
classes has since received a lot of attention, mostly with respect to enumerative questions (see
\cite{Bou03,Eli04,KiMa03} and their references among many others).
Most articles are focused on a given class $\mathcal{C} = Av(B)$ where
the basis $B$ of excluded patterns characterizing $\mathcal{C}$ is
finite, explicit, and in most cases contains only patterns of size $3$
or $4$. Recently, some results of a rather different nature have been
obtained, and have in common that they describe general properties of
permutation classes -- see
\cite{AA05,ALR05,BBPR09,BBPR11,BHV06a,BRV06,Vat05} for example.
Our work falls into this new line of research.
Our goal in this article is to provide a general algorithmic method for
obtaining a combinatorial specification of any permutation class
$\mathcal{C}$ from its basis $B$ and the set $\mathcal{S}_{\mathcal{C}}$
of simple permutations in $\mathcal{C}$, assuming these two sets are
finite.
Notice that, by previous works detailed in Section
\ref{sec:previous_algos}, it is enough to know the finite basis $B$ of the
class to decide whether the set $\mathcal{S}_{\mathcal{C}}$ is finite
and (in the affirmative) to compute $\mathcal{S}_{\mathcal{C}}$.
By \emph{combinatorial specification} of a class (see \cite{FlSe09}), we
mean an unambiguous system of combinatorial equations that describe
recursively the permutations of $\mathcal{C}$ using only combinatorial constructors
(disjoint union, cartesian product, sequence, \ldots) and permutations
of size $1$.
Notice the major difference with the results of \cite{AA05}: our
specifications are unambiguous, whereas \cite{AA05} obtain combinatorial
systems of equations characterizing permutation classes that are ambiguous
in general.
We believe that our purpose of obtaining combinatorial
specifications of permutation classes algorithmically is of interest \emph{per se}, but
also because it then allows us to obtain, by routine algorithms, a system
of equations satisfied by the generating function of $\mathcal{C}$ and
a Boltzmann uniform random sampler of permutations in $\mathcal{C}$,
using the methods of \cite{FlSe09} and \cite{DuFlLoSc04} respectively.
The paper is organized as follows.
Section~\ref{sec:PermClasses} proceeds with some background on
permutation classes, simple permutations and substitution decomposition, and
Section~\ref{sec:previous_algos} sets the algorithmic context of our study.
Section~\ref{sec:ambiguous} then explains how to obtain a system of
combinatorial equations describing $\mathcal{C}$ from the set
of simple permutations in $\mathcal{C}$, that we assume to be finite.
The system so obtained may
be ambiguous and Section~\ref{sec:disambiguation} describes a
disambiguation algorithm to obtain a combinatorial specification for
$\mathcal{C}$.
The most important idea of this disambiguation procedure is to
transform ambiguous unions into disjoint unions of terms that involve both
pattern avoidance and pattern containment constraints. This somehow
allows to interpret on the combinatorial objects themselves the result of
applying the inclusion-exclusion on their generating functions.
Finally, Section~\ref{sec:ccl} concludes the whole
algorithmic process by explaining how this specification can be
plugged into the general methodologies of \cite{FlSe09} and
\cite{DuFlLoSc04} to obtain a system of equations satisfied by the
generating function of $\mathcal{C}$ and a Boltzmann uniform random
sampler of permutations in $\mathcal{C}$.
We also give a number of perspectives opened by our algorithm.
\section{Permutation classes and simple permutations}
\label{sec:PermClasses}
\subsection{Permutation patterns and permutation classes}
A permutation $\sigma= \sigma_1 \sigma_2 \ldots \sigma_n$ of size
$|\sigma| = n$ is a bijective map from $\{1,\ldots ,n\}$ to itself,
each $\sigma_i$ denoting the image of $i$ under $\sigma$. A
permutation $\pi = \pi_1 \pi_2 \ldots \pi_k$ is a \emph{pattern} of a
permutation ${\sigma = \sigma_1 \sigma_2 \ldots \sigma_n}$ (denoted $\pi
\preceq \sigma$) if and only if $k\leq n$ and there exist integers $1
\leq i_1 < i_2 < \ldots < i_k \leq n$ such that $\sigma_{i_1}\ldots
\sigma_{i_k}$ is order-isomorphic to $\pi$, \emph{i.e.} such that
$\sigma_{i_{\ell}} < \sigma_{i_m}$ whenever $\pi_{\ell} < \pi_{m}$. A
permutation~$\sigma$ that does not contain $\pi$ as a pattern is said
to {\em avoid} $\pi$. For example the permutation $\sigma=316452$
contains~$\pi = 2431$ as a pattern, whose occurrences are $3642$ and
$3652$. But $\sigma$ avoids the pattern $2413$ as none of its
subsequences of length $4$ is order-isomorphic to $2413$.
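For concreteness, the containment test can be implemented verbatim from this definition. The following brute-force sketch is exponential in $|\pi|$ but perfectly adequate for the small patterns considered here; it recovers the example above.

\begin{verbatim}
# Sketch: brute-force pattern containment test.
from itertools import combinations

def contains(sigma, pi):
    """Does sigma contain pi as a pattern?"""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        sub = [sigma[i] for i in idx]
        if all((sub[a] < sub[b]) == (pi[a] < pi[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

assert contains([3,1,6,4,5,2], [2,4,3,1])      # occurrence 3642
assert not contains([3,1,6,4,5,2], [2,4,1,3])  # 316452 avoids 2413
\end{verbatim}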
The pattern containment relation $\preceq$ is a partial order on
permutations, and a {\em permutation class} $\mathcal{C}$ is a downset
under this order: for any $\sigma \in \mathcal{C}$, if $\pi \preceq
\sigma$, then we also have $\pi \in \mathcal{C}$. For every set $B$,
the set $Av(B)$ of permutations avoiding any pattern of $B$ is a
class. Furthermore every class $\mathcal{C}$ can be rewritten as
$\mathcal{C} = Av(B)$ for a unique antichain $B$ ({\em i.e.,} a unique
set of pairwise incomparable elements) called the {\it basis} of
$\mathcal{C}$.
The basis of a class $\mathcal{C}$ may be finite or infinite; it is
described as the set of permutations that do not belong to $\mathcal{C}$ and
that are minimal in the sense of $\preceq$ for this criterion.
In the following, we only consider classes whose basis $B$ is given
explicitly, and is finite. This does not cover the whole range of
permutation classes, but it is a reasonable assumption when dealing
with \emph{algorithms} on permutation classes, that take a finite
description of a permutation class as input.
Moreover, as proved by \cite{AA05}, $B$ is necessarily finite as soon
as the set $\mathcal{S}_{\mathcal{C}}$ of simple permutations in $\mathcal{C}
= Av(B)$ is finite. Consequently, the assumption that $B$ is
finite is not a restriction when working on permutation classes
such that $\mathcal{S}_{\mathcal{C}}$ is finite, which is the context of
our study.
\subsection{Simple permutations and substitution decomposition of permutations}
\label{sec:substitution}
An \emph{interval} (or {\em block}) of a permutation $\sigma$ of size
$n$ is a subset $\{i,\ldots ,(i+\ell-1)\}$ of consecutive integers of
$\{1,\ldots ,n\}$ whose images by $\sigma$ also form an interval of
$\{1,\ldots ,n\}$. The integer $\ell$ is called the \emph{size} of the
interval. A permutation $\sigma$ is \emph{simple} when it is of size
at least $4$ and it contains no interval, except the trivial ones:
those of size $1$ (the singletons) or of size $n$ ($\sigma$
itself). The permutations $1$, $12$ and $21$ also have only trivial
intervals, nevertheless they are \emph{not} considered to be simple
here. Moreover no permutation of size $3$ has only trivial intervals.
For a detailed study of simple permutations, in particular from an
enumerative point of view, we refer the reader to
\cite{AA05,AAK03,Bri08}.
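This definition translates directly into a test for simplicity: slide a window of every length strictly between $1$ and $n$ over the permutation and check whether its values form an interval. A sketch (with permutations encoded as lists of values):

\begin{verbatim}
# Sketch: is a permutation simple?
def is_simple(sigma):
    n = len(sigma)
    if n < 4:
        return False          # 1, 12, 21, and size-3 cases excluded
    for length in range(2, n):
        for i in range(n - length + 1):
            window = sigma[i:i + length]
            if max(window) - min(window) == length - 1:
                return False  # non-trivial interval found
    return True

assert is_simple([3, 1, 4, 2])
assert not is_simple([4, 5, 1, 7, 3, 2, 6])   # contains the block 45
\end{verbatim}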
Let $\sigma$ be a permutation of size $n$ and $\pi^{1},\ldots,
\pi^{n}$ be $n$ permutations of size $p_1, \ldots, p_n$
respectively. Define the \emph{substitution} $\sigma[\pi^{1}, \pi^{2}
,\ldots, \pi^{n}]$ of $\pi^{1},\pi^{2} , \ldots, \pi^{n}$ in
$\sigma$
to be the permutation of size $p_1 + \ldots + p_n$ obtained by
concatenation of $n$ sequences of integers $S^1, \ldots , S^n$ from
left to right, such that for every $i,j$, the integers of $S^i$ form
an interval, are ordered in a sequence order-isomorphic to $\pi^{i}$,
and $S^i$ consists of integers smaller than $S^j$ if and only if
$\sigma_i < \sigma_j$. For instance, the substitution $1\,3\,2[2\,1, 1\,3\,2, 1]$ gives the permutation $2\,1\,4\,6\,5\,3$. We say that a permutation $\pi$ is \emph{$12$-indecomposable}
(resp. \emph{$21$-indecomposable}) if it cannot be written as
$12[\pi^{1},\pi^{2}]$ (resp. $21[\pi^{1},\pi^{2}]$), for any
permutations $\pi^{1}$ and $\pi^{2}$.
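Both the substitution and the indecomposability tests are easy to implement. The following sketch reproduces the example above; note that $\pi$ is $12$-decomposable exactly when some proper prefix of $\pi$ uses the values $1,\dots,i$.

\begin{verbatim}
# Sketch: substitution sigma[pi^1,...,pi^n] and 12-indecomposability.
def inflate(sigma, blocks):
    offset, base = {}, 0
    for value in sorted(sigma):          # assign value ranges bottom-up
        offset[value] = base
        base += len(blocks[sigma.index(value)])
    out = []
    for s, block in zip(sigma, blocks):
        out.extend(offset[s] + x for x in block)
    return out

def is_12_indecomposable(pi):
    return not any(max(pi[:i]) == i for i in range(1, len(pi)))

assert inflate([1, 3, 2], [[2, 1], [1, 3, 2], [1]]) == [2, 1, 4, 6, 5, 3]
assert is_12_indecomposable([2, 1]) and not is_12_indecomposable([1, 2])
\end{verbatim}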
Simple permutations allow one to describe all permutations through their \emph{substitution decomposition}.
\begin{theo}[\cite{AA05}]
Every permutation $\pi$ of size $n$ with
$n \geq 2$ can be uniquely decomposed as follows, $12$ (resp. $21$, $\sigma$) being called the \emph{root} of $\pi$:
\vspace{-5.5pt}
\begin{itemize}\setlength{\itemsep}{-0.8pt}
\item $12[\pi^{1},\pi^{2}]$, with $\pi^{1}$ $12$-indecomposable,
\item $21[\pi^{1},\pi^{2}]$, with $\pi^{1}$ $21$-indecomposable,
\item $\sigma[\pi^{1},\pi^{2},\ldots,\pi^{k}]$, with $\sigma$ a simple permutation of size $k$.
\end{itemize}
\label{thm:decomp_perm_AA05}
\end{theo}
\vspace{-9pt}
To account for the first two items of
Theorem~\ref{thm:decomp_perm_AA05} in later discussions, we
furthermore introduce the following notations: For any set $\ensuremath{\mathcal{C}}\xspace$ of
permutations, $\ensuremath{\mathcal{C}}\xspace^+$ (resp. $\ensuremath{\mathcal{C}}\xspace^-$) denotes the set of permutations of
$\ensuremath{\mathcal{C}}\xspace$ that are $12$-indecomposable (resp. $21$-indecomposable). Notice
that even when $\ensuremath{\mathcal{C}}\xspace$ is a permutation class, this is not the case for
$\ensuremath{\mathcal{C}}\xspace^+$ and $\ensuremath{\mathcal{C}}\xspace^-$ in general.
Theorem~\ref{thm:decomp_perm_AA05} provides the first step in the
decomposition of a permutation $\pi$. To obtain its full
decomposition, we can recursively decompose the permutations $\pi^{i}$
in the same fashion, until we reach permutations of size $1$. This
recursive decomposition can naturally be represented by a tree, that
is called the substitution decomposition tree (or {\em decomposition
tree} for short) of $\pi$. Each internal node of the tree is
labeled by $12,21$ or by a simple permutation and the leaves represent
permutation $1$.
Notice that in decomposition trees, the left child of a
node labeled $12$ (resp. $21$) is never labeled by $12$ (resp. $21$),
since $\pi^{1}$ is $12$-indecomposable (resp. $21$-indecomposable) in
the first (resp. second) item of Theorem~\ref{thm:decomp_perm_AA05}.
\begin{example}\label{ex:decomp}
The permutation $\pi = 8\ 9\ 5\ 11\ 7\ 6\ 10\ 17\ 2\ 1\ 3\ 4\ 14\
16\ 13\ 15\ 12$ is recursively decomposed as $\pi =
2413[4517326,1,2134,35241] =
2413[31524[12[1,1],1,1,21[1,1],1],1,12[21[1,1],12[1,1]],\\
21[2413[1,1,1,1],1]] $ and its decomposition tree is given in
Figure~\ref{fig:tree}.
\end{example}
\begin{wrapfigure}[8]{r}{60mm}
\vspace{-3.5mm}
\begin{tikzpicture}[
scale=.6,
level/.style={sibling distance=20mm/#1},
edge from parent/.style={very thick,draw=black!70},
simple/.style={rectangle, draw=none, rounded corners=1mm, fill=white, text centered, text=black,anchor=north,inner sep=2pt},
linear/.style={rectangle, draw=none, fill=white, text centered, anchor=north, text=black,inner sep=2pt},
every node/.style={circle, draw=none, fill=black, text centered, anchor=north, text=white,inner sep=0},
level distance=7mm
]
\node[simple] {$2\,4\,1\,3$}
child { node[simple] {$3\,1\,5\,2\,4$}
child {node[linear] {$12$}
child { node { ~ }}
child { node { ~ }}
}
child[sibling distance=7mm] { node { ~ }}
child { node { ~ }}
child{node[linear] {$21$}
child { node { ~ }}
child { node { ~ }}
}
child { node { ~ }}
}
child {node { ~ }}
child[sibling distance=6mm] {node[linear] {$12$}
child {node[linear] {$21$}
child { node { ~ }}
child { node { ~ }}
}
child {node[linear] {$12$}
child { node { ~ }}
child { node { ~ }}
}
}
child {node[linear] {$21$}
child {node[simple] {$2\,4\,1\,3$}
child { node { ~ }}
child { node { ~ }}
child { node { ~ }}
child { node { ~ }}
}
child {node { ~ }}
};
\end{tikzpicture}
\caption{{\small Decomposition tree of $\pi$ (from Ex.~\ref{ex:decomp}).}}\label{fig:tree}
\end{wrapfigure}
The \emph{substitution closure} $\ensuremath{{\hat{\mathcal{C}}}}$ of a permutation
class\footnote{that contains permutations $12$ and $21$. We will
assume so in the rest of this article to avoid trivial cases.}
$\mathcal{C}$ is defined as the set of permutations whose
decomposition trees have internal nodes labeled by either $12, 21$ or
a simple permutation of $\mathcal{C}$. Notice that $\mathcal{C}$ and
$\ensuremath{{\hat{\mathcal{C}}}}$ therefore contain the same simple permutations. Obviously, for
any class $\mathcal{C}$, we have $\mathcal{C} \subseteq \ensuremath{{\hat{\mathcal{C}}}}$. When the
equality holds, the class $\mathcal{C}$ is said to be {\em
substitution-closed} (or sometimes {\em wreath-closed}). But this is
not always the case, and the simplest example is given by $\mathcal{C}
= Av(213)$. This class contains no simple permutation hence its
substitution closure is the class of separable permutations of
\cite{BBL98}, \emph{i.e.} of permutations whose decomposition trees
have internal nodes labeled by $12$ and $21$. It is immediate to
notice that $213 \in \ensuremath{{\hat{\mathcal{C}}}}$ whereas of course $213 \notin \mathcal{C}$.
A characterization of substitution-closed classes useful for our
purpose is given in \cite{AA05}: A class is substitution-closed if and
only if its basis contains only simple permutations.
\section{Algorithmic context of our work}
\label{sec:previous_algos}
Putting together the work reported in this article and recent algorithms
from the literature provides a full algorithmic chain starting with the
finite basis $B$ of a permutation class $\mathcal{C}$ and computing a
specification for $\mathcal{C}$.
The hope for such a very general algorithm is of course very tenuous,
and the algorithm we describe below computes its output only when
certain hypotheses are satisfied, which are also tested algorithmically.
Figure~\ref{fig:schema2} summarizes the main steps of the algorithm.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{schema-fpsac3.pdf}
\vspace{-3mm}
\caption{Automatic process from the basis of a
permutation class to generating function and
Boltzmann sampler.}
\label{fig:schema2}
\end{figure}
The algorithms
performing the first two steps of the algorithmic process of Figure~\ref{fig:schema2}
are as follows.
\textbf{First step: Finite number of simple permutations} \qquad
First, we check whether $\mathcal{C} = Av(B)$ contains only a finite number of
simple permutations.
This is achieved using algorithms of
\cite{BBPR09} when the class is substitution-closed and of
\cite{BBPR11} otherwise. The complexities of these algorithms are
respectively $\mathcal{O}(n \log n)$ and $\mathcal{O}(n^{4k})$, where
$n = \sum_{\beta \in B} |\beta|$ and $k = |B|$.
\textbf{Second step: Computing simple permutations} \qquad
The second step of the algorithm is the computation of the set of
simple permutations $\mathcal{S}_{\mathcal{C}}$ contained in
$\mathcal{C} = Av(B)$, when we know it is finite. Again, when
$\mathcal{C}$ is substitution-closed, $\mathcal{S}_{\mathcal{C}}$ can
be computed by an algorithm that is more efficient than in the
general case. The two algorithms are described in \cite{PR11}, and
their complexity depends on the output: $\mathcal{O}(N \cdot
\ell^{p+2}\cdot |B|)$ in general and $\mathcal{O}(N \cdot \ell^{4})$ for
substitution-closed classes, with $N = |\mathcal{S}_{\mathcal{C}}|$,
$p = \max \{|\beta| : \beta \in B\}$ and $\ell = \max \{|\pi| : \pi
\in \mathcal{S}_{\mathcal{C}}\}$.
Sections~\ref{sec:ambiguous}
and~\ref{sec:disambiguation} will then explain how to derive a
specification for $\mathcal{C}$ from $\mathcal{S}_{\mathcal{C}}$.
\section{Ambiguous combinatorial system describing $\mathcal{C}$\label{sec:ambigu}}
\label{sec:ambiguous}
We describe here an algorithm that
takes as input the set $\mathcal{S}_{\mathcal{C}}$ of
simple permutations in a class $\mathcal{C}$ and the basis $B$ of $\mathcal{C}$,
and that produces in output a
(possibly ambiguous) system of combinatorial equations describing the
permutations of $\mathcal{C}$ through their decomposition trees.
The main ideas are those of Theorem~10 of \cite{AA05}, but unlike this work,
we make the whole process fully algorithmic.
\subsection{The simple case of substitution-closed classes}
Recall that $\mathcal{C}$ is a substitution-closed permutation class
when {$\mathcal{C}=\ensuremath{{\hat{\mathcal{C}}}}$}, or equivalently when the permutations in
$\mathcal{C}$ are exactly the ones whose decomposition trees have
internal nodes labeled by $12, 21$ or any simple permutation of
{$\ensuremath{\mathcal{C}}\xspace$}. Then Theorem~\ref{thm:decomp_perm_AA05} directly yields the
following system $\ensuremath{\mathcal{E}}\xspace_{\ensuremath{{\hat{\mathcal{C}}}}}$:
\begin{eqnarray}
\ensuremath{{\hat{\mathcal{C}}}} &=& 1\ \uplus \ 12[\ensuremath{{\hat{\mathcal{C}}}}^+, \ensuremath{{\hat{\mathcal{C}}}}]\ \uplus \ 21[\ensuremath{{\hat{\mathcal{C}}}}^-, \ensuremath{{\hat{\mathcal{C}}}}] \ \textstyle\uplus \biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}}} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\label{eqn:Wc1} \\
\ensuremath{{\hat{\mathcal{C}}}}^+ &=& 1\ \uplus \ 21[\ensuremath{{\hat{\mathcal{C}}}}^-, \ensuremath{{\hat{\mathcal{C}}}}]\ \uplus\ \textstyle\biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}}} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}] \label{eqn:Wc2}\\
\ensuremath{{\hat{\mathcal{C}}}}^- &=& 1\ \uplus\ 12[\ensuremath{{\hat{\mathcal{C}}}}^+, \ensuremath{{\hat{\mathcal{C}}}}]\ \uplus\ \textstyle\biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}}} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]. \label{eqn:Wc3}
\end{eqnarray}
By uniqueness of substitution decomposition, unions are disjoint
and so Equations~\eqref{eqn:Wc1} to \eqref{eqn:Wc3} describe
unambiguously the substitution closure $\ensuremath{{\hat{\mathcal{C}}}}$ of a permutation
class $\mathcal{C}$. For a substitution-closed class (and the
substitution closure of any class), this description gives a
combinatorial specification. Hence, it provides an efficient
way to compute the generating function of the class, and to
generate uniformly at random a permutation of a given size in
the class.
\subsection{Adding constraints for classes that are not substitution-closed\label{sec:addConstraint}}
When ${\mathcal C}$ is not substitution-closed, we
compute a new system by adding constraints to the system
obtained for \ensuremath{{\hat{\mathcal{C}}}}, as in \cite{AA05}. Denoting by $X\langle Y\rangle$ the
set of permutations of $X$ that avoid the patterns in $Y$, we
have $\ensuremath{\mathcal{C}}\xspace = \ensuremath{{\hat{\mathcal{C}}}}\langle\ensuremath{{B^\star}}\rangle$ where $\ensuremath{{B^\star}}$ is the
subset of non-simple permutations of~$B$. Noticing that $\ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}} = \ensuremath{{\mathcal S}}_\ensuremath{\mathcal{C}}\xspace$ (by definition of $\ensuremath{{\hat{\mathcal{C}}}}$), and since
$\ensuremath{\mathcal{C}}\xspace^{\varepsilon} = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon} \langle \ensuremath{{B^\star}} \rangle$
for $\varepsilon \in \{~~, +, -\}$, Equations~\eqref{eqn:Wc1} to~\eqref{eqn:Wc3} give
\begin{eqnarray}
\ensuremath{{\hat{\mathcal{C}}}} \langle \ensuremath{{B^\star}} \rangle &=& 1\ \uplus \ 12[\ensuremath{{\hat{\mathcal{C}}}}^+, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle\ \uplus \ 21[\ensuremath{{\hat{\mathcal{C}}}}^-, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle \ \uplus \textstyle\biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{\mathcal{C}}\xspace} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle \label{eqn:const1}\\
\ensuremath{{\hat{\mathcal{C}}}}^+ \langle \ensuremath{{B^\star}} \rangle &=& 1\ \uplus \ 21[\ensuremath{{\hat{\mathcal{C}}}}^-, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle\ \uplus\ \textstyle\biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{\mathcal{C}}\xspace} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle \label{eqn:const2}\\
\ensuremath{{\hat{\mathcal{C}}}}^- \langle \ensuremath{{B^\star}} \rangle &=& 1\ \uplus\ 12[\ensuremath{{\hat{\mathcal{C}}}}^+, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle\ \uplus\ \textstyle\biguplus_{\pi \in \ensuremath{{\mathcal S}}_\ensuremath{\mathcal{C}}\xspace} \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle, \label{eqn:const3}
\end{eqnarray}
all these unions being disjoint.
This specification is not complete, since sets of the form $\pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle$ are not immediately described from $\ensuremath{{\hat{\mathcal{C}}}} \langle \ensuremath{{B^\star}}\rangle$.
Theorem 10 of \cite{AA05} explains how sets such as $ \pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle$ can be expressed as union of smaller sets:
$$\pi[\ensuremath{{\hat{\mathcal{C}}}}, \dots, \ensuremath{{\hat{\mathcal{C}}}}]\langle \ensuremath{{B^\star}} \rangle = \textstyle\bigcup_{i=1}^{k} \pi[\ensuremath{{\hat{\mathcal{C}}}}\langle E_{i,1} \rangle,\ensuremath{{\hat{\mathcal{C}}}}\langle E_{i,2} \rangle,\ldots,\ensuremath{{\hat{\mathcal{C}}}}\langle E_{i,k} \rangle]$$
where $E_{i,j}$ are sets of permutations which are patterns of some
permutations of $\ensuremath{{B^\star}}$.
This introduces sets of the form $\ensuremath{{\hat{\mathcal{C}}}}\langle E_{i,j} \rangle$ on the right-hand side of an equation of the system that do not appear on the left-hand side of any equation. We will call such sets \emph{right-only} sets.
Taking $E_{i,j}$ instead of $\ensuremath{{B^\star}}$ in Equations~\eqref{eqn:const1} to~\eqref{eqn:const3}, we can recursively compute these right-only sets by introducing new equations in the system.
This process terminates since there exists only a finite number of sets of patterns of elements of $\ensuremath{{B^\star}}$ (as $B$ is finite). Let us introduce some definitions to describe these sets $E_{i,j}$.
A \emph{generalized substitution} $\sigma\{\pi^{1}, \pi^{2} ,\ldots,
\pi^{n}\}$ is defined as a substitution (see
p.\pageref{thm:decomp_perm_AA05}) with the particularity that any
$\pi^i$ may be the empty permutation (denoted by~$0$). Specifically
$\sigma[\pi^{1}, \pi^{2} ,\ldots, \pi^{n}]$ necessarily contains
$\sigma$ whereas $\sigma\{\pi^{1}, \pi^{2} ,\ldots, \pi^{n}\}$ may
avoid $\sigma$. For instance, $ 1\, 3\, 2 \{2\, 1, 0, 1\}= 2\, 1\,
3\in Av(132)$.
An {\em embedding of $\gamma$ in~${\pi=\pi_1\dots\pi_n}$} is a
map $\alpha$ from $\{1, \ldots, n\}$ to the set of (possibly empty)
blocks\footnote{Recall that here blocks of a permutation are sets of \emph{indices}.}
of $\gamma$ such that:
\vspace{-6.5pt}
\begin{itemize}\setlength{\itemsep}{-3pt}
\item if blocks $\alpha(i)$ and $\alpha(j)$ are not empty, and $i<j$,
then $\alpha(i)$ consists of smaller indices than $\alpha(j)$;
\item as a word, $\alpha(1) \ldots \alpha(n)$ is a factorization of the word
$1\ldots |\gamma|$ (which may include empty factors).
\item denoting $\gamma_I$ the pattern corresponding to
$\gamma_{i_1} \ldots \gamma_{i_{\ell}}$ for any
block $I$ of indices from $i_1$ to $i_{\ell}$ in increasing order, we
have $\pi\{\gamma_{\alpha(1)},\dots,\gamma_{\alpha(n)}\}=\gamma$.
\end{itemize}
\vspace{-5.5pt}
There are $11$ embeddings of $\gamma = 5\,4\,6\,3\,1\,2$ into
$\pi = 3\,1\,4\,2$, which correspond for instance to the generalized substitutions $\pi\{3241,12,0,0\}$,
$\pi\{3241,0,0,12\}$ and $\pi\{0,0,3241,12\}$ for the
same expression of $\gamma$ as the substitution ${21[3241,12]}$, or $
\pi\{3241,1,0,1\}$ which is the only one corresponding to
$312[3241,1,1]$. Notice that this definition of embeddings conveys the
same notion as in \cite{AA05}, but it is formally different and will turn out to be better
adapted to the definition of the sets $E_{i,j}$.
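Under this formalization, embeddings can be enumerated by choosing the cut points of the factorization of $1\ldots|\gamma|$ and checking the block and order conditions. The following brute-force sketch illustrates the definition on the running example:

\begin{verbatim}
# Sketch: enumerate the embeddings of gamma in pi.
from itertools import combinations_with_replacement

def rank_pattern(vals):
    srt = sorted(vals)
    return [srt.index(v) + 1 for v in vals]

def embeddings(gamma, pi):
    n, g, found = len(pi), len(gamma), []
    for cuts in combinations_with_replacement(range(g + 1), n - 1):
        cuts = (0,) + cuts + (g,)
        factors = [gamma[cuts[k]:cuts[k + 1]] for k in range(n)]
        # every non-empty factor must be a block (interval of values)
        if any(f and max(f) - min(f) != len(f) - 1 for f in factors):
            continue
        used = [k for k, f in enumerate(factors) if f]
        # non-empty factors, as units, must be ordered like pi
        if rank_pattern([min(factors[k]) for k in used]) == \
                rank_pattern([pi[k] for k in used]):
            found.append(factors)
    return found

embs = embeddings([5, 4, 6, 3, 1, 2], [3, 1, 4, 2])  # example above
\end{verbatim}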
Equations~\eqref{eqn:const1} to~\eqref{eqn:const3} can be viewed as
Equations~\eqref{eqn:Wc1} to~\eqref{eqn:Wc3} ``decorated'' with
pattern avoidance constraints. These constraints apply to every set
$\pi[\ensuremath{{\hat{\mathcal{C}}}}_1, \dots, \ensuremath{{\hat{\mathcal{C}}}}_n]$ that appears in a disjoint union on the
right-hand side of an equation. For each such set, the pattern
avoidance constraints can be expressed by pushing constraints into the
subtrees, using embeddings of excluded patterns in the root $\pi$. For
instance, assume that $\gamma= 5\,4\,6\,3\,1\,2 \in B^\star$ and $\mathcal
S_{\mathcal C}=\{3142\}$, and consider $3142[\ensuremath{{\hat{\mathcal{C}}}}, \ensuremath{{\hat{\mathcal{C}}}},\ensuremath{{\hat{\mathcal{C}}}},
\ensuremath{{\hat{\mathcal{C}}}}]\langle \gamma \rangle$.
The embeddings of~$\gamma$ in~$3142$ indicate how the pattern $\gamma$ can be
found in the subtrees of $3142[\ensuremath{{\hat{\mathcal{C}}}}, \ensuremath{{\hat{\mathcal{C}}}},\ensuremath{{\hat{\mathcal{C}}}},
\ensuremath{{\hat{\mathcal{C}}}}]$. For example, the last embedding above
tells us that $\gamma$ can spread over all the subtrees of~$3142$ except
the third. In order to avoid this particular embedding of~$\gamma$, it
is enough to avoid one of the induced patterns $\gamma_I$ in one of the
subtrees. However, in order to ensure that~$\gamma$ is avoided, the
constraints resulting from all the embeddings must be considered and
merged. More precisely, consider a set~$\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots ,\ensuremath{\mathcal{C}}\xspace_n]
\langle \gamma \rangle$, $\pi$ being a simple
permutation. Let~$\{\alpha_1, \dots, \alpha_\ell\}$ be the set of
embeddings of $\gamma$ in $\pi$, each $\alpha_i$ being associated to a
generalized substitution $\gamma = \pi\{\gamma_{\alpha_i(1)},\dots,
\gamma_{\alpha_i(n)}\}$ where $\gamma_{\alpha_i(k)}$ is embedded in
$\pi_k$. Then the constraints are propagated according to the
following equation:
\begin{equation} \label{eq:propagate}
\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots ,\ensuremath{\mathcal{C}}\xspace_n] \langle \gamma \rangle =
\textstyle\bigcup_{(k_1, \dots, k_\ell) \in K^\pi_\gamma} \pi[\ensuremath{\mathcal{C}}\xspace_1 \langle E_{1,k_1 \dots k_\ell} \rangle, \dots ,\ensuremath{\mathcal{C}}\xspace_n \langle E_{n,k_1 \dots k_\ell} \rangle]
\end{equation}
where $K^\pi_\gamma =\{(k_1, \dots, k_\ell) \in [1..n]^\ell\ |\ \forall i,\ \gamma_{\alpha_i(k_i)} \neq 0\}$ and
$E_{m,k_1 \dots k_\ell }= \{ \gamma_{\alpha_i(k_i)}\ |\ i \in [1..\ell] \text{ and } k_i=m \}$ is a set containing at least $\gamma$ for $(k_1, \dots, k_\ell) \in K^\pi_\gamma$.
In a tuple $(k_1, \ldots, k_{\ell})$ of $K^\pi_\gamma$, $k_i$
indicates a subtree of $\pi$ where the pattern avoidance constraint
($\gamma_{\alpha_i(k_i)}$ excluded) forbids any occurrence of
$\gamma$ that could result from the embedding $\alpha_i$. The set
$E_{m,k_1 \dots k_\ell }$ represents the pattern avoidance constraints
that have been pushed into the $m$-th subtree of $\pi$ by embeddings
$\alpha_i$ of $\gamma$ in $\pi$ where the block $\alpha_i(k_i)$ of
$\gamma$ is embedded into $\pi_m$.
Starting from a finite basis of patterns $B$, Algorithm~\ref{alg:sys-ambigu} describes the whole process to compute an ambiguous system defining the class~$\ensuremath{\mathcal{C}}\xspace = Av(B)$ knowing its set of simple permutations $\ensuremath{{\mathcal S}}_\ensuremath{\mathcal{C}}\xspace$.
The propagation of the constraints expressed by Equation~\eqref{eq:propagate} is performed by the
procedure~\textsc{AddConstraints}. It is applied to every set of
the form $\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots , \ensuremath{\mathcal{C}}\xspace_n] \langle B' \rangle$ that appears in the equation defining some
$\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle B' \rangle$ by the procedure~\textsc{ComputeEqn}. Finally,
Algorithm~\ref{alg:sys-ambigu} computes an ambiguous system for a
permutation class $Av(B)$ containing a finite number of simple
permutations: it starts from Equations~\eqref{eqn:const1} to~\eqref{eqn:const3},
and adds new equations to this system calling
procedure~\textsc{ComputeEqn}, until every $\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots ,
\ensuremath{\mathcal{C}}\xspace_n] \langle B' \rangle$ is replaced by some $\pi[\ensuremath{\mathcal{C}}\xspace'_1, \dots ,
\ensuremath{\mathcal{C}}\xspace'_n]$ and until every $\ensuremath{\mathcal{C}}\xspace'_i = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle B'_i \rangle$ is defined by an equation of the system. All the
sets $B'$ are sets of
patterns of some permutations in $B$. Since there is only a
finite number of patterns of elements of $B$, there is a
finite number of possible $B'$, and Algorithm~\ref{alg:sys-ambigu} terminates.
\SetKwBlock{PGfunc}{\textsc{AddConstraints}}{end}
\SetKwBlock{Efunc}{\textsc{ComputeEqn}}{end}
\begin{algorithm}[h!]
\KwData{ $B$ is a finite basis of patterns defining
${\mathcal C}=Av(B)$ such that
$\mathcal{S}_\ensuremath{\mathcal{C}}\xspace$ is known and finite.} \KwResult{A
system of equations of the form $\ensuremath{{\mathcal D}} = \bigcup \pi[\ensuremath{{\mathcal D}}_1, \dots,
\ensuremath{{\mathcal D}}_n]$ defining $\ensuremath{\mathcal{C}}\xspace$.}
\Begin{
$\mathcal{E}\leftarrow$ \textsc{ComputeEqn}($(\ensuremath{{\hat{\mathcal{C}}}},\ensuremath{{B^\star}})$) $\cup$ \textsc{ComputeEqn}($(\ensuremath{{\hat{\mathcal{C}}}}^+,\ensuremath{{B^\star}})$) $\cup$ \textsc{ComputeEqn}($(\ensuremath{{\hat{\mathcal{C}}}}^-,\ensuremath{{B^\star}})$)\\
\While{ there is a right-only $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle B'\rangle$ in some equation of $\ensuremath{\mathcal{E}}\xspace$} { $\mathcal{E}
\leftarrow \mathcal{E}~\cup$ \textsc{ComputeEqn}($\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$, $B'$)
} }
\bigskip
/* Returns an equation defining $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle B'\rangle$ as a union of $\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots , \ensuremath{\mathcal{C}}\xspace_n]$ */\\
/* $B'$ is a set of permutations, $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$ is given by $\ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}}$ and $\varepsilon \in \{~~ , +, -\}$ */\\
\Efunc(\params{$\ensuremath{{\hat{\mathcal{C}}}}^\varepsilon,B'$}){
$\mathcal{E} \leftarrow$ Equation \eqref{eqn:const1} or \eqref{eqn:const2} or \eqref{eqn:const3} (depending on $\varepsilon$) written with $B'$ instead of $\ensuremath{{B^\star}}$\\
\ForEach{ $t=\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots , \ensuremath{\mathcal{C}}\xspace_n] \langle B' \rangle$ that appears in $\mathcal{E}$}
{$t\leftarrow$ \textsc{AddConstraints}$(\pi[\ensuremath{\mathcal{C}}\xspace_1, \dots , \ensuremath{\mathcal{C}}\xspace_n], B' )$}
\Return $\mathcal{E}$\\
}
\bigskip
/* Returns a rewriting of $\pi[\ensuremath{\mathcal{C}}\xspace_1 \dots \ensuremath{\mathcal{C}}\xspace_n] \langle E \rangle$ as a union $\bigcup \pi[\ensuremath{{\mathcal D}}_1, \dots \ensuremath{{\mathcal D}}_n]$ */\\
\PGfunc(\params{$(\pi[\ensuremath{\mathcal{C}}\xspace_1 \ldots \ensuremath{\mathcal{C}}\xspace_n] , E)$}){
\lIf{$E = \emptyset$}{return $\pi[\ensuremath{\mathcal{C}}\xspace_1 \dots \ensuremath{\mathcal{C}}\xspace_n]$}\;
\Else{ choose $\gamma \in E$ and compute all the embeddings of $\gamma$ in $\pi$\\ compute $K^\pi_\gamma$ and sets $E_{m,k_1 \dots k_\ell }$ defined in Equation~\eqref{eq:propagate} \\ return $\bigcup_{(k_1, \dots, k_\ell) \in K^\pi_\gamma} \textsc{AddConstraints}(\pi[\ensuremath{\mathcal{C}}\xspace_1 \langle E_{1,k_1 \dots k_\ell} \rangle, \dots ,\ensuremath{\mathcal{C}}\xspace_n \langle E_{n,k_1 \dots k_\ell} \rangle], E \setminus \gamma)$.}
}
\caption{\textsc{AmbiguousSystem}($B$)\label{alg:sys-ambigu}}
\end{algorithm}
Consider for instance the class $\ensuremath{\mathcal{C}}\xspace =Av(B)$ for $B=\{1243,2413,531642,41352\}$: $\ensuremath{\mathcal{C}}\xspace$ contains only one simple permutation (namely $3142$), and $\ensuremath{{B^\star}} = \{1243\}$. Applying Algorithm~\ref{alg:sys-ambigu} to this class $\ensuremath{\mathcal{C}}\xspace$ gives the following system of equations:
\begin{small}
\begin{eqnarray}
\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle &=& 1 \ \cup \ 12[\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \ \cup \ 12[\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle] \ \cup \ 21[\ensuremath{{\hat{\mathcal{C}}}}^{-}\langle 1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle] \nonumber\\
&\ \cup \ & 3 1 4 2[\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \ \cup \ 3 1 4 2[\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \label{eqn:ambigu1}\\
\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle &=& 1 \ \cup \ 21[\ensuremath{{\hat{\mathcal{C}}}}^{-}\langle 1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle]\label{eqn:ambigu2}\\
\ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle &=& 1 \ \cup \ 12[\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 3 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle] \ \cup \ 21[\ensuremath{{\hat{\mathcal{C}}}}^{-}\langle 1 3 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle]\label{eqn:ambigu3}\\
\ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle &=& 1 \ \cup \ 1 2 [\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 2 1 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle].\label{eqn:ambigu4}
\end{eqnarray}
\end{small}
\vspace*{-2em}
\section{Disambiguation of the system}
\label{sec:disambiguation}
In the above, Equation~\eqref{eqn:ambigu1} gives an ambiguous
description of the class $\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle$. As noticed in
\cite{AA05}, we can derive an unambiguous equation using the
inclusion-exclusion principle:
{\small $
\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle = 1 \ \cup \ 1 2 [\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \ \cup \ 1 2[\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 2 4 3 \rangle ,
\ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle] \ \setminus \ 1 2 [\ensuremath{{\hat{\mathcal{C}}}}^{+}\langle 1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1 \rangle] \ \cup \ 2 1 [\ensuremath{{\hat{\mathcal{C}}}}^{-}\langle 1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle] \ \cup \ $
$
3 1 4 2 [\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle ,
\ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \ \cup \ 3 1 4
2 [\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2
1 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle] \ \setminus \ 3 1 4
2[\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2 1
\rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle]$}.
The system so obtained contains negative terms in
general. This still gives a system of equations allowing us to compute
the generating function of the class. However, it cannot easily be
used for random generation, as the subtraction of combinatorial
objects is not handled by random samplers.
In this section we disambiguate this system to obtain a new positive one:
the key idea is to replace the negative terms by \emph{complement sets}, hereby
transforming pattern avoidance constraints into pattern \emph{containment} constraints.
\subsection{General framework}
The starting point of the disambiguation is to rewrite ambiguous
terms like $A \cup B \cup C$ as a disjoint union $(A \cap B
\cap C) \uplus (\bar{A} \cap B \cap C) \uplus (\bar{A} \cap \bar{B}
\cap C) \uplus (\bar{A} \cap B \cap \bar{C}) \uplus (A \cap \bar{B}
\cap C) \uplus (A \cap \bar{B} \cap \bar{C}) \uplus (A \cap B \cap
\bar{C})\textrm{.}$
By disambiguating the union $A \cup B \cup C$ using complement sets
instead of negative terms, we obtain an unambiguous description of
the union with only positive terms.
But when taking the complement of a set defined by pattern avoidance
constraints, these are transformed into pattern \emph{containment} constraints.
Therefore,
for any set $\mathcal{P}$ of permutations, we define the {\em restriction\xspace}
$\mathcal{P}\langle E \rangle(A)$ of $\mathcal{P}$ as the set of
permutations that belong to $\mathcal{P}$ and that avoid every pattern
of $E$ and contain every pattern of $A$. This notation will be used
when $\mathcal{P} = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$, for $\varepsilon \in \{~~ , +,
-\}$ and $\ensuremath{\mathcal{C}}\xspace$ a permutation class. With this notation, notice also
that for $A=\emptyset$, $\ensuremath{\mathcal{C}}\xspace\langle E \rangle = \ensuremath{\mathcal{C}}\xspace\langle E
\rangle(\emptyset)$ is a standard permutation class. Restrictions
have the nice feature of being stable by intersection as
$\mathcal{P}\langle E \rangle(A) \cap \mathcal{P}\langle E'
\rangle(A') = \mathcal{P}\langle E \cup E' \rangle(A \cup
A')$. We also define a {\em restriction term\xspace} to be a set of permutations
described as $\pi[{\mathcal S}_{1},{\mathcal S}_{2},\ldots,{\mathcal S}_{n}]$ where $\pi$ is a simple
permutation or $12$ or $21$ and the ${\mathcal S}_{i}$ are restrictions\xspace. By uniqueness of the substitution decomposition of a permutation, restriction terms are stable by intersection as well and the intersection is performed componentwise for terms sharing the same root: $\pi[{\mathcal S}_{1},{\mathcal S}_{2},\ldots,{\mathcal S}_{n}] \cap \pi[{\mathcal T}_{1},{\mathcal T}_{2},\ldots,{\mathcal T}_{n}] = \pi[{\mathcal S}_{1}\cap {\mathcal T}_{1},{\mathcal S}_{2}\cap {\mathcal T}_{2},\ldots,{\mathcal S}_{n}\cap {\mathcal T}_{n}]$.
\subsection{Disambiguate\label{sec:disambiguate}}
The disambiguation of the system obtained by
Algorithm~\ref{alg:sys-ambigu} is performed by
Algorithm~\ref{alg:disambiguise}. It consists of two main operations. One is
the disambiguation of an equation according to the root of the terms that induce ambiguity, which may introduce right-only restrictions. This leads to the second
procedure, which computes new equations (added to the system) describing these new restrictions (Algorithm~\ref{alg:ComputeEquationForRestriction}).
As stated in Section~\ref{sec:ambiguous}, every equation $F$ of our
system can be written as $t =1 \cup t_{1} \cup t_{2} \cup t_{3} \ldots
\cup t_{k}$ where the $t_{i}$ are restriction terms\xspace and $t$ is a
restriction\xspace.
By uniqueness of the substitution decomposition of a permutation, terms
of this union which have different roots $\pi$ are disjoint. Thus for
an equation we only need to disambiguate unions of terms with same
root.
\begin{algorithm}[t]
\KwData{An ambiguous system $\ensuremath{\mathcal{E}}\xspace$ of combinatorial equations \hfill /* obtained by Algo.~\ref{alg:sys-ambigu} */}
\KwResult{An unambiguous system of combinatorial equations equivalent to $\ensuremath{\mathcal{E}}\xspace$ }
\Begin{
\While{ there is an ambiguous equation $F$ in $\ensuremath{\mathcal{E}}\xspace$ }{
Take $\pi$ a root that appears several times in $F$ in an ambiguous way\\
Replace the restriction terms of F whose root is $\pi$ by a disjoint union using Eq.~\eqref{eq:DisambiguateRoot} -- \eqref{eq:14}\\
\While{ there exists a right-only restriction\xspace $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E\rangle(A)$ in some equation of $\ensuremath{\mathcal{E}}\xspace$}{
$\ensuremath{\mathcal{E}}\xspace \longleftarrow \ensuremath{\mathcal{E}}\xspace \bigcup$ \textsc{ComputeEqnForRestriction}($\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$,$E$,$A$). \hfill /* See Algo.~\ref{alg:ComputeEquationForRestriction} */
}
}
\Return $\ensuremath{\mathcal{E}}\xspace$
}
\caption{\textsc{DisambiguateSystem}($\ensuremath{\mathcal{E}}\xspace$) \label{alg:disambiguise}}
\end{algorithm}
For example, in Equation~\eqref{eqn:ambigu1} there are two pairs
of ambiguous terms: the terms with root $3 1 4 2$ and those with
root $12$. Every ambiguous union can be
written in the following unambiguous way:
\begin{equation}\label{eq:DisambiguateRoot}
\textstyle\bigcup_{i=1}^{k} t_{i}=\textstyle\biguplus_{X
\subseteq [1\ldots k], X \not= \emptyset} \bigcap_{i \in X} t_{i}
\cap \bigcap_{i \in \overline{X}} \overline{t_{i}},
\end{equation}
where the \emph{complement} $\overline{t_{i}}$ of a restriction term\xspace $t_{i}$ is
defined as the set of permutations of $\ensuremath{{\hat{\mathcal{C}}}}$ whose decomposition tree has the
same root as $t_{i}$ but that do not belong to $t_{i}$.
Equation~\ref{eq:ComplementTerm} below shows that $\overline{t_{i}}$ is not a term in general but can be expressed as a disjoint
union of terms. By distributivity of $\cap$ over $\uplus$, the above expression can therefore be
rewritten as a disjoint union of intersection of terms. Because terms
are stable by intersection, the right-hand side of Equation~\ref{eq:DisambiguateRoot}
is hereby written as a disjoint union of terms.
For instance, consider terms with root $3142$ in Equation~\eqref{eqn:ambigu1}:
$t_{1} = 3 1 4 2 [\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle ,
\ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2 \rangle]$ and $t_{2} = 3 1
4 2 [\ensuremath{{\hat{\mathcal{C}}}}\langle1 2 4 3 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 2 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle2
1 \rangle , \ensuremath{{\hat{\mathcal{C}}}}\langle1 3 2
\rangle]$. Equation~\eqref{eq:DisambiguateRoot} applied to
$t_1$ and $t_2$ gives an expression of the form
$\ensuremath{{\hat{\mathcal{C}}}}\langle1243\rangle = 1 \cup 12[\ldots] \cup 12[\ldots] \cup 21[\ldots] \cup (t_{1} \cap t_{2}) \uplus (t_{1} \cap \overline{t_{2}}) \uplus (\overline{t_{1}} \cap t_{2})\textrm{.}$
To compute the complement of a term $t$, it is enough to write that
\begin{equation}\label{eq:ComplementTerm}
\overline{t}=\biguplus_{X \subseteq \{1,\ldots,n\}, X \not= \emptyset} \pi[{\mathcal S}'_{1},\ldots,{\mathcal S}'_{n}] \mbox{ where }{\mathcal S}'_{i} = \overline{{\mathcal S}_{i}}\mbox{ if }i \in X\mbox{ and }{\mathcal S}'_{i} = {\mathcal S}_{i}\mbox{ otherwise},
\end{equation}
with the convention that $\overline{{\mathcal S}_{i}} = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon} \setminus {\mathcal S}_{i}$ for ${\mathcal S}_{i} = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E\rangle(A)$.
Indeed, by uniqueness of substitution
decomposition, the set of permutations of $\ensuremath{{\hat{\mathcal{C}}}}$ that do not belong to $t$ but
whose decomposition tree has root~$\pi$ can be written as the union of
terms $u = \pi[{\mathcal S}'_{1},{\mathcal S}'_{2},\ldots,{\mathcal S}'_{n}]$ where ${\mathcal S}'_{i} = {\mathcal S}_{i}$ or
${\mathcal S}'_{i}=\overline{{\mathcal S}_{i}}$ and at least one restriction\xspace ${\mathcal S}_{i}$ must be
complemented. For example $\overline{21[{\mathcal S}_{1},{\mathcal S}_{2}]} =
21[{\mathcal S}_{1},\overline{{\mathcal S}_{2}}] \uplus 21[\overline{{\mathcal S}_{1}},{\mathcal S}_{2}]
\uplus 21[\overline{{\mathcal S}_{1}},\overline{{\mathcal S}_{2}}]$.
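Enumerating the terms of the disjoint union of Equation~\eqref{eq:ComplementTerm} is a simple traversal of the non-empty subsets of slots, as the following symbolic sketch (with placeholder slot labels) shows:

\begin{verbatim}
# Sketch of the complement expansion above: for each non-empty
# subset X of slots, complement exactly the restrictions indexed by X.
from itertools import combinations

def complement_terms(slots):
    n = len(slots)
    for r in range(1, n + 1):
        for X in combinations(range(n), r):
            yield [('co-' + s) if i in X else s
                   for i, s in enumerate(slots)]

assert len(list(complement_terms(['S1', 'S2']))) == 3  # as for 21[S1,S2]
\end{verbatim}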
The complement operation being pushed from restriction terms\xspace down to
restrictions\xspace, we now compute~$\overline{{\mathcal S}}$, for a given restriction\xspace ${\mathcal S} = \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle
E\rangle(A)$, $\overline{{\mathcal S}}$ denoting the
set of permutations of $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$ that are not in~${\mathcal S}$. Notice
that
given a permutation $\sigma$ of $A$, then any permutation $\tau$ of
$\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle \sigma \rangle$ is in $\overline{{\mathcal
S}}$ because $\tau$ avoids $\sigma$ whereas permutations of
${\mathcal S}$ must contain $\sigma$. Symmetrically, if a
permutation $\sigma$ is in $E$ then permutations of
$\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle\rangle(\sigma)$ are in $\overline{{\mathcal
S}}$. It is straightforward to check that $\textstyle
\overline{\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E \rangle (A)} = \big[
\bigcup_{\sigma \in E} \ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle\rangle(\sigma)\big]
\bigcup \big[ \bigcup_{\sigma \in A}
\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle\sigma\rangle()\big]$. Unfortunately this
expression is ambiguous. As before, we can rewrite it as an
unambiguous union
\begin{equation} \label{eq:14}
\overline{\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E \rangle (A)}
= \biguplus_{\underset{X \times Y \not=
\emptyset\times\emptyset}{{X\subseteq A, Y \subseteq E}}}
\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle X\cup\overline{Y} \rangle (Y \cup \overline{X}) \textrm{, where } \overline{X} = A \setminus X \textrm{ and }\overline{Y} = E \setminus Y \textrm{.}
\end{equation}
In our example (Equations~\eqref{eqn:ambigu1} to~\eqref{eqn:ambigu4}),
only trivial complements appear as every restriction\xspace is of the form
$\ensuremath{{\hat{\mathcal{C}}}}\langle \sigma \rangle()$ or $\ensuremath{{\hat{\mathcal{C}}}}\langle \rangle(\sigma )$ for
which complements are respectively $\ensuremath{{\hat{\mathcal{C}}}}\langle \rangle(\sigma )$ and
$\ensuremath{{\hat{\mathcal{C}}}}\langle \sigma \rangle()$.
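Equation~\eqref{eq:14} is likewise a direct enumeration of pairs of subsets; below is a minimal Python sketch (pattern sets are kept symbolic and the function names are ours):
\begin{verbatim}
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def complement_restriction(E, A):
    # Expand the complement of C^eps<E>(A) as the disjoint union of
    # Equation (eq:14): one restriction C^eps<X + (E - Y)>(Y + (A - X))
    # per pair (X, Y), X a subset of A and Y a subset of E, with
    # (X, Y) != (emptyset, emptyset).
    terms = []
    for X in map(set, subsets(A)):
        for Y in map(set, subsets(E)):
            if not X and not Y:
                continue
            avoid = X | (set(E) - Y)    # patterns that must be avoided
            contain = Y | (set(A) - X)  # patterns that must occur
            terms.append((frozenset(avoid), frozenset(contain)))
    return terms

# Trivial case from the text: the complement of C<sigma>() is C<>(sigma).
print(complement_restriction(E={"sigma"}, A=set()))
\end{verbatim}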
Altogether, any equation of our system can be rewritten
unambiguously as a disjoint union of restriction terms\xspace. As noticed before,
some new right-only restrictions\xspace may appear during this process, for example as
the result of the intersection of several restrictions\xspace or when complementing
restrictions\xspace. To obtain a complete system we must compute iteratively
equations defining these new restrictions\xspace using Algorithm~\ref{alg:ComputeEquationForRestriction} described below.
Finally, the termination of Algorithm~\ref{alg:disambiguise} is easily proved.
Indeed, for all the restrictions\xspace $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E\rangle(A)$ that are
considered in the inner loop of Algorithm~\ref{alg:disambiguise}, every
permutation in the sets $E$ and $A$ is a pattern of some element of
the basis $B$ of $\mathcal{C}$. And since $B$ is finite, there is a finite
number of such restrictions.
\subsection{Computing an equation for a restriction\xspace}
Let $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E \rangle(A)$ be a restriction\xspace. Our goal here is to find
a combinatorial specification of this restriction\xspace in terms of smaller restriction terms\xspace
(smaller w.r.t. inclusion).
If $A = \emptyset$, this is exactly the problem addressed in Section~\ref{sec:addConstraint} and solved by pushing down the pattern avoidance constraints in the procedure \textsc{AddConstraints} of Algorithm~\ref{alg:sys-ambigu}.
Algorithm~\ref{alg:ComputeEquationForRestriction} below shows how to also propagate the pattern \emph{containment} constraints induced by $A \neq \emptyset$.
\SetKwBlock{AMfunc}{\textsc{AddMandatory}}{end}
\begin{algorithm}[H]
\KwData{$\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}, E,A$ with $E,A$ sets of permutations, $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}$ given by $\ensuremath{{\mathcal S}}_\ensuremath{{\hat{\mathcal{C}}}}$ and $\varepsilon \in \{~~ , +, -\}$.}
\KwResult{An equation defining $\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon}\langle E \rangle(A)$ as a union of restriction terms\xspace.}
\Begin{
$F \leftarrow$ Equation \eqref{eqn:Wc1} or \eqref{eqn:Wc2} or \eqref{eqn:Wc3} (depending on $\varepsilon$)\\
\ForEach{$\sigma \in E$}{
/* This step modifies $F$! */\\
Replace any restriction term $t$ in $F$ by \textsc{AddConstraints}$(t, \{\sigma\})$\hfill /* See Algo.~\ref{alg:sys-ambigu} */\\
}
\ForEach{$\sigma \in A$}{
/* This step modifies $F$! */\\
Replace any restriction term $t$ in $F$ by \textsc{AddMandatory}$(t, \sigma)$ \\
}
\Return $F$ \\
}
\bigskip
\AMfunc(\params{$\pi[{\mathcal S}_1, \dots, {\mathcal S}_n],\gamma$}){
\Return a rewriting of $\pi[{\mathcal S}_1, \dots, {\mathcal S}_n] (\gamma)$ as a union of restriction terms\xspace using Equation~\eqref{eq:AddMandatory}.
}
\caption{\textsc{ComputeEqnForRestriction}$(\ensuremath{{\hat{\mathcal{C}}}}^{\varepsilon},E,A)$ \label{alg:ComputeEquationForRestriction}}
\end{algorithm}
The pattern \emph{containment} constraints are propagated by \textsc{AddMandatory}, in a very similar fashion to the pattern \emph{avoidance} constraints propagated by \textsc{AddConstraints}.
To compute $t(\gamma)$ for $\gamma$ a permutation and $t = \pi[{\mathcal S}_1, \dots, {\mathcal S}_n]$ a restriction term, we first compute all embeddings of $\gamma$ into $\pi$.
A permutation then belongs to $t(\gamma)$ if and only if at
least one of these embeddings is satisfied.
Hence, any restriction term $t = \pi[{\mathcal S}_1, \dots, {\mathcal S}_n](\gamma)$ rewrites as a (possibly ambiguous) union as follows:
\begin{equation}\label{eq:AddMandatory}
\textstyle\bigcup_{i=1}^{\ell} \pi[{\mathcal S}_{1}(\gamma_{\alpha_{i}(1)}),{\mathcal S}_{2}(\gamma_{\alpha_{i}(2)}),\ldots,{\mathcal S}_{n}(\gamma_{\alpha_{i}(n)})],
\end{equation}
where the $(\alpha_{i})_{i \in \{1, \ldots, \ell \}}$ are all the embeddings of $\gamma$ in $\pi$ and if $\gamma_{\alpha_{i}(j)}=0$, then ${\mathcal S}_{j}(\gamma_{\alpha_{i}(j)}) = {\mathcal S}_j$.
For instance, for $t = 2413[{\mathcal S}_{1},{\mathcal S}_{2},{\mathcal S}_{3},{\mathcal S}_{4}]$ and $\gamma = 3214$, there are $9$ embeddings of $\gamma$ into $2413$, and the embedding $2413\{321,1,0,0\}$ contributes to the above union with the term $2413[{\mathcal S}_{1}(321),{\mathcal S}_{2}(1),{\mathcal S}_{3},{\mathcal S}_{4}]$.
Notice that although the unions of Equation~\eqref{eq:AddMandatory} may be ambiguous, they will be transformed into disjoint unions by the outer loop of Algorithm~\ref{alg:disambiguise}.
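Once the embeddings are known, the rewriting of Equation~\eqref{eq:AddMandatory} is mechanical. The sketch below (in Python, with a data representation of our own choosing) takes the embeddings as input; computing them, which depends on the substitution decomposition of $\pi$, is deliberately left out:
\begin{verbatim}
def add_mandatory(pi, slots, embeddings):
    # Rewrite pi[S_1,...,S_n](gamma) as the possibly ambiguous union of
    # Equation (eq:AddMandatory). Each embedding is a tuple whose j-th
    # entry is the block of gamma sent to slot j, or 0 for an empty
    # block, in which case the slot keeps its restriction unchanged.
    union = []
    for alpha in embeddings:
        term = [S if block == 0 else (S, block)  # (S, b) stands for S(b)
                for S, block in zip(slots, alpha)]
        union.append((pi, term))
    return union

# The embedding 2413{321,1,0,0} of gamma = 3214 in pi = 2413 from the
# text contributes the term 2413[S1(321), S2(1), S3, S4]:
print(add_mandatory("2413", ["S1", "S2", "S3", "S4"], [("321", "1", 0, 0)]))
\end{verbatim}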
Finally, the algorithm produces an unambiguous system which is the result of a finite number of iterations of computing equations followed by their disambiguation.
\section{Conclusion}
\label{sec:ccl}
We provide an algorithm to compute a combinatorial specification
for a permutation class $\ensuremath{\mathcal{C}}\xspace=Av(B)$, when its basis $B$ and the set of its simple permutations are finite and given as input. The complexity of this algorithm, however, remains to be analyzed. In particular, we observe a combinatorial explosion of the number of equations in the resulting system, which needs to be quantified.
Combined with existing algorithms, our procedure provides a full algorithmic chain from the basis (when finite) of a permutation class \ensuremath{\mathcal{C}}\xspace to a specification for \ensuremath{\mathcal{C}}\xspace. This procedure fails to produce a result when \ensuremath{\mathcal{C}}\xspace contains infinitely many simple permutations, a condition that is itself tested algorithmically.
This procedure has two natural algorithmic continuations.
First, with the \emph{dictionary} of \cite{FlSe09}, the
constructors in the specification of $\mathcal{C}$ can be
directly translated into operators on the generating function $C(z)$
of $\mathcal{C}$, turning the specification into a system of (possibly
implicit) equations defining $C(z)$.
Notice that, using the inclusion-exclusion principle as
in \cite{AA05}, a system defining $C(z)$ could also be obtained
from an \emph{ambiguous} system describing
$\mathcal{C}$.
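As a toy illustration of this translation (and not the output of our algorithm on any particular class), the coefficients of a generating function defined by an implicit algebraic equation can be extracted by fixed-point iteration on truncated power series; here we use the stand-in equation $C(z) = z + C(z)^2$:
\begin{verbatim}
def series_mul(a, b, n):
    # Truncated product of power series given as coefficient lists.
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def solve_implicit(n=10):
    # Fixed-point iteration for C(z) = z + C(z)^2, a hypothetical
    # stand-in for the (typically much larger) algebraic system that
    # the dictionary would produce from a specification.
    c = [0] * n
    for _ in range(n):  # each pass fixes at least one more coefficient
        sq = series_mul(c, c, n)
        c = [0, 1] + [sq[k] for k in range(2, n)]
    return c

print(solve_implicit())  # [0, 1, 1, 2, 5, 14, 42, 132, 429, 1430]
\end{verbatim}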
Second, the specification can be translated directly
into a Boltzmann uniform random sampler of permutations in
$\mathcal{C}$, in the same fashion as the above dictionary (see
\cite{DuFlLoSc04}). This second translation is possible
only from an unambiguous system: indeed, while the
inclusion-exclusion principle is well suited to enumeration
sequences, it does not apply when working on the
combinatorial objects themselves.
When generating permutations with a Boltzmann sampler, complexity
is measured w.r.t. the size
of the permutation produced (and is linear if we allow a small variation on the size of the output
permutation; quadratic otherwise) and not at all w.r.t. the number of equations in
the specification. In our context, the dependency on the number of equations is of course
relevant, and opens a new direction in the study of Boltzmann random
samplers.
With a complete implementation of the algorithmic chain from $B$ to the specification
and the Boltzmann sampler, one should be able to test conjectures on and study
permutation classes. One direction would be to somehow
measure the randomness of permutations in a given class, by comparing
random permutations with random permutations in a class, or
random permutations in two different classes, w.r.t. well-known
statistics on permutations. Another perspective would be to use the
specifications obtained to compute or estimate the growth rates of
permutation classes, to provide improvements on the known bounds on
these growth rates. We could also explore the possible use of
the computed specifications to provide more
efficient algorithms for testing membership of a permutation in a
class.
However, a weakness of our procedure that we must acknowledge is that it fails to be completely general. Although the method is generic and algorithmic, the classes that are fully handled by the algorithmic process are those containing a finite number of simple permutations. By \cite{AA05}, such classes have a finite basis (which is a restriction we imposed already), but they also have an \emph{algebraic} generating function. Of course, this is not the case for every permutation class.
We may wonder how restrictive this framework is, depending on which problems are studied.
First, does it often happen that a permutation class contains
finitely many simple permutations? To properly express what \emph{often} means, a probability distribution on permutation classes should be defined, which is a
direction of research yet to be explored.
Second, we may want to describe some problems (maybe like the distribution of some statistics) for which algebraic permutation classes are representative of all permutation classes.
To enlarge the framework of application of our algorithm, we could explore
the possibility of extending it to permutation
classes that contain an infinite number of simple permutations, but
that are finitely described (like the family of oscillations of
\cite{BRV06} for instance). With such an improvement, more classes would enter our framework, but it would be hard to leave the algebraic case.
This is however a promising
direction for the construction of Boltzmann random samplers for such
permutation classes.
\bibliographystyle{abbrvnat}
One of the most challenging questions in fluid dynamics is whether the
incompressible 3D Euler equations can develop a finite time singularity
from smooth and bounded initial data. From a theoretical point of view, the
main difficulty is due to the presence of the vortex stretching term in
the vorticity equation, which is formally quadratic in vorticity. If
such quadratic nonlinearity persists in time long enough, we would expect
a finite time singularity of the form $O(T-t)^{-1}$ in vorticity.
Such blow-up rate is consistent with the well-known result of Beale-Kato-Majda
\cite{BKM84} (see also \cite{EFM70}). There have been many computational
efforts in searching for
finite time singularities of the 3D Euler and Navier-Stokes equations, see e.g.
\cite{Chorin82,PS90,KH89,GS91,SMO93,Kerr93,Caf93,BP94,Pelz98,GMG98,Kerr04,Chorin06}.
One example that has been studied extensively is the interaction of two
perturbed antiparallel vortex tubes. This example is interesting because of
the vortex reconnection which has been observed for the corresponding
Navier-Stokes equations. It is natural to ask whether the 3D Euler
equations would develop a finite time singularity in the limit of
vanishing viscosity.
In \cite{Kerr93}, Kerr presented numerical evidence which suggests
a finite time singularity of the 3D Euler equations for two perturbed
antiparallel vortex tubes. In Kerr's computations, he used a
pseudo-spectral discretization in the $x$ and $y$ directions, and
a Chebyshev discretization in the $z$ direction with resolution of order
$512\times 256 \times 192$. His computations showed that the growth of
the peak vorticity, the peak axial strain, and the enstrophy production
obey $(T-t)^{-1}$ with $T = 18.9$. Self-similar development and
equal rates of collapse in all three directions were shown
(see the abstract of \cite{Kerr93}). While
velocity blowup was not documented in \cite{Kerr93}, Kerr showed
in his subsequent papers \cite{Kerr97,Kerr99,Kerr04} that velocity
field blows up like $O(T-t)^{-1/2}$ with $T$ being revised to $T=18.7$.
Kerr's computations have generated a lot of interests and his proposed
initial conditions have been
considered as ``the most attractive candidates for potential singular
behavior'' of the 3D Euler equations (see page 187 of \cite{MB02}).
Vortex reconnection of two perturbed antiparallel vortex tubes has been
studied extensively in the literature. Substantial core deformation has been
observed \cite{PS90,AG89,BPZ92,KH89,MH89,SMO93}. Most studies indicated
only exponential growth in the maximum vorticity.
However, the work of Kerr and Hussain in \cite{KH89} suggested a finite
time blow-up in the infinite Reynolds number limit, which motivated
Kerr's Euler computations mentioned above.
There has been some interesting development in the theoretical understanding
of the 3D incompressible Euler equations. It has been shown that the local
geometric regularity of vortex lines can play an important role in depleting
nonlinear vortex stretching \cite{Const94,CFM96,DHY05a,DHY05b}. In particular,
the recent results obtained by Deng, Hou, and Yu \cite{DHY05a,DHY05b}
show that geometric regularity of vortex lines, even in an extremely
localized region containing the maximum vorticity, can lead to depletion of
nonlinear vortex stretching, thus avoiding finite time singularity formation
of the 3D Euler equations. To obtain these results, Deng-Hou-Yu
\cite{DHY05a,DHY05b} explored the connection between the stretching
of local vortex lines and the growth of vorticity. In particular,
they showed that if the vortex lines near the region of maximum vorticity
satisfy some local geometric regularity conditions and the maximum velocity
field is integrable in time, then no finite time blow-up is possible. See
Section 4.2 for the detailed description of these results. Kerr's
computations fall in the critical case of the non-blowup theory in
\cite{DHY05a,DHY05b}. To get a definite answer in this critical case,
we need to check whether certain scaling constants, which describe
the local geometric properties of the vortex lines, satisfy an
algebraic inequality. However, such scaling constants are not
available in \cite{Kerr93}. This was our original motivation
to repeat Kerr's computations.
It is worth mentioning that the predicted singularity time in Kerr's
computations is $T=18.7$, while his computations from $t=17$ to
$t=18$, as mentioned in \cite{Kerr93}, seem to be under-resolved and
were ``not part of the primary evidence for a singularity''.
Clearly, the computations for $t \le 17$, which Kerr used as the
primary evidence for a singularity, is still far from the predicted
singularity time, $T=18.7$. In order to justify the predicted asymptotic
behavior of vorticity and velocity blowup, one needs to perform
well-resolved computations much closer to the predicted singularity
time. Such well-resolved computations can also provide more accurate
geometric properties of vortex lines, which can be used to check whether
the non-blowup conditions in \cite{DHY05a,DHY05b} are satisfied.
In this paper, we perform well-resolved computations of the 3D incompressible
Euler equations using the same initial condition as the one used by Kerr in
\cite{Kerr93}. A pseudo-spectral method with a very high order
Fourier smoothing is used to discretize the 3D incompressible Euler
equations in all three directions. The time integration is performed using
the classical fourth order Runge-Kutta method with adaptive time stepping to
satisfy the CFL stability condition. We perform a careful numerical study to
show that the pseudo-spectral method we use provides more accurate
approximations to the 3D Euler equations than the pseudo-spectral method
that uses the standard 2/3 dealiasing rule. We use up to
$1536\times 1024\times 3072$ space resolution in order to resolve
the nearly singular behavior of the 3D Euler equations.
Our numerical results demonstrate that the maximum vorticity does
not grow faster than double exponential in time, up to $t=19$, beyond the
singularity time $t=18.7$ predicted by Kerr's computations \cite{Kerr93,Kerr04}.
There are three distinguished stages of vorticity growth in time. In the early
stage for $0\le t \le 12$, the maximum vorticity grows only exponentially in time.
During the intermediate stage for $12 \le t\le 17$, the two vortex tubes
experience tremendous core deformation and become severely flattened. Each
vortex tube effectively turns into a vortex sheet with rapidly decreasing
thickness. During this stage, the maximum vorticity grows slightly slower
than double exponential in time. It is also interesting to examine the
degree of nonlinearity in the vortex stretching term during this stage.
An $O(T-t)^{-1}$ blowup rate in the maximum vorticity would imply that the
nonlinearity in the vortex stretching term is quadratic. However, our numerical
results show that the vortex stretching term, when projected to
the unit vorticity vector, is bounded by
$ \|\vec{\omega} \|_\infty \log (\|\vec{\omega} \|_\infty )$, where
$\vec{\omega} $ is
vorticity. It is easy to see that such upper bound on the vortex stretching
term implies that the maximum vorticity is bounded by double exponential in time.
During the final stage for $17 \le t \le 19$, we observe that the growth of
the maximum vorticity slows down considerably and deviates from double exponential
growth, indicating that there is stronger cancellation taking place in the
vortex stretching term.
We also find that the vortex lines near the region of the maximum vorticity are
relatively straight and the vorticity vectors seem to be quite regular. This
was also observed in \cite{Kerr93}. On the other hand, the inner region
containing the maximum vorticity does not seem to shrink to zero at a
rate of $(T-t)^{1/2}$, as predicted by Kerr's computations. Moreover, we
find that the velocity field, the enstrophy, and enstrophy production rate
remain bounded throughout
the computations. The fact that the velocity field remains bounded is
significant. With the velocity field bounded, the result of Deng-Hou-Yu
\cite{DHY05a} can be applied, which implies the non-blowup of the Euler
equations up to $T=19$. The geometric regularity of the vortex lines near
the inner region seems to play an important role in the dynamic depletion
of vortex stretching \cite{DHY05a,DHY05b}.
We would like to stress the importance of sufficient resolution in
determining the nature of the nearly singular behavior of the 3D Euler
equations. As demonstrated by our numerical computations, the 3D Euler
equations have different growth rates in the maximum vorticity
in different stages. A resolution without the proper level of refinement
would not be able to capture the transition from one growth phase to another.
In \cite{Kerr93}, the inverse of the maximum vorticity was shown to approach
zero almost linearly in time up to $t=17$. If this trend were to continue
to hold, it would lead to the blowup of the maximum vorticity in the form
of $O(T-t)^{-1}$. However, with increasing resolutions, we find that
the curve corresponding to the inverse of maximum vorticity starts to turn
away from zero starting at $t=17$. We also observe that the velocity field
becomes saturated around this time. Incidentally, this is precisely the
time when Kerr's computations began to lose resolution. At $t=17$, the
thin vortex sheets have already started to roll up. After $t=17.5$, the
vorticity in the rolled up region has developed large gradients in all
three directions, with $z$ being the most singular direction and $y$ being
the least singular direction. To resolve the large gradients in all
three directions, we allocate $3072$ grid points along the $z$ direction,
$1536$ along the $x$ direction, and $1024$ along the $y$ direction. This
level of resolution ensures that we have about 8 grid points across the
most singular region in each direction toward the end of the computations.
Kerr interpreted the roll-up of the vortex sheet as ``two vortex sheets
meeting at an angle'' and argued that the formation of this angle may be
responsible for the finite time blowup of the Euler equations. Our
computations indicate that the rollup region of the vortex sheet
is still relatively smooth even during the final stage of the
computations. Moreover, according to the results in
\cite{DHY05a,DHY05b}, it is the curvature of the vortex lines and
the divergence of the unit vorticity vector that contribute to the blow-up,
not the curvature of the vortex sheet itself. Further, we observe that the
vortex lines near the region of the maximum vorticity are relatively smooth.
This geometric regularity leads to strong dynamic depletion of the nonlinear
vortex stretching.
There is reason to believe that if the current scenario persists,
there is no blowup of the 3D Euler equations for these data beyond $T=19$.
In fact, during the final stage of the computations for $17.5 \le t \le 19$,
the vortex lines near the region of the maximum vorticity remain smooth.
Further, as the vortex sheet rolls up, we observe that the location of
the maximum vorticity moves away from the dividing plane separating the
two vortex tubes toward the rolled up portion of the vortex sheet, leading
to a slower growth rate of maximum vorticity.
The rest of this paper is organized as follows. We describe the set-up of
the initial condition in Section 2 and describe our numerical method in
Section 3. In Section 4, we describe our numerical results in detail and
perform comparisons with the previous results obtained in
\cite{Kerr93,Kerr04}. Some concluding remarks are made in Section 5.
\section{The Initial Condition}
The 3D incompressible Euler equations in the vorticity stream function
formulation are given as follows:
\begin{eqnarray}\label{3deuler}
\vec{\omega}_t+(\vec{u}\cdot\nabla) \vec{\omega} & = & \nabla
\vec{u} \cdot \vec{\omega}, \\
- \bigtriangleup \vec{ \psi} &= & \vec{\omega}, \quad
\vec{u} = \nabla \times \vec{\psi},
\end{eqnarray}
with initial condition $\vec{\omega}\mid_{t=0} = \vec{\omega}_{0}$,
where $\vec{u}$ is velocity, $\vec{\omega}$ is vorticity, and
$\vec{\psi}$ is stream function. Vorticity is related to velocity
by $\vec{\omega} = \nabla \times \vec{u}$. The incompressibility
implies that
\[
\nabla \cdot \vec{u} = \nabla \cdot \vec{\omega} = \nabla \cdot \vec{\psi} = 0.
\]
We consider periodic boundary
conditions with period $2 \pi$ in all three directions.
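In a pseudo-spectral setting, the elliptic part of this system is diagonal in Fourier space: $- \bigtriangleup \vec{\psi} = \vec{\omega}$ becomes $|k|^2 \hat{\psi}_k = \hat{\omega}_k$ and $\vec{u} = \nabla \times \vec{\psi}$ becomes $\hat{u}_k = i k \times \hat{\psi}_k$. The following is a minimal serial NumPy sketch of this velocity recovery on an isotropic periodic grid; it is illustrative only, as our actual computations are parallel and exploit the symmetries of the solution through sine and cosine transforms:
\begin{verbatim}
import numpy as np

def velocity_from_vorticity(omega):
    # omega has shape (3, N, N, N) on a 2*pi-periodic grid. Solve
    # -Laplacian(psi) = omega in Fourier space, then u = curl(psi).
    N = omega.shape[-1]
    k1 = np.fft.fftfreq(N, d=1.0 / N)  # integer wave numbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                  # avoid division by zero
    w_hat = np.fft.fftn(omega, axes=(1, 2, 3))
    psi_hat = w_hat / k2
    psi_hat[:, 0, 0, 0] = 0.0          # fix the mean mode
    u_hat = 1j * np.stack([ky * psi_hat[2] - kz * psi_hat[1],
                           kz * psi_hat[0] - kx * psi_hat[2],
                           kx * psi_hat[1] - ky * psi_hat[0]])
    return np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))
\end{verbatim}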
We study the interaction of two perturbed antiparallel vortex tubes using
the same initial condition as that of Kerr (see Section III of \cite{Kerr93}).
There are a few misprints in the analytic expression of the initial
condition given in \cite{Kerr93}. In our computations, we use the corrected
version of Kerr's initial condition by comparing with Kerr's Fortran
subroutine which was kindly provided to us by him. A list of corrections
to these misprints is given in the Appendix.
The initial condition is given by a pair of perturbed anti-parallel vortex
tubes, which is expressed in terms of vorticity. The vorticity that
describes the vortex tube above the $x$-$y$ plane is of the form:
\begin{equation}
\vec{\omega} = \omega ( r ) (\omega_x, \; \omega_y,\; \omega_z ) .
\end{equation}
The first step in setting up the initial condition is to define the
profile $\omega (r)$.
If $r \ge 1$, we set $\omega ( r ) = 0$. For $r < 1$, we define
\begin{equation}
\omega ( r ) = \exp [ f ( r ) ],
\end{equation}
with $f(r)$ given by
\begin{equation}
f( r ) = \frac{- r^2}{1 - r^2} + r^4 \left( 1 + r^2 + r^4 \right) .\label{f}
\end{equation}
The radius $r$ is centered around an initial vortex core trajectory $(X,Y,Z)$,
and is defined by
\begin{equation}
r = \left| \left( x, y, z \right) - \left( X, Y, Z \right) \right| / R,
\hspace{2em} \mathrm{for} \;\; r \leqslant 1 .
\end{equation}
The initial vortex core trajectory $(X,Y,Z)$ is characterized by
\begin{equation}
\left( X, Y, Z \right) = ( x ( s ), y, z ( s ) ),
\label{trajectory}
\end{equation}
where $s$ is a function of $y$ and
\begin{equation}
x ( s ) = x_0 + \delta_x \cos ( \pi s / L_x ), \label{x-s}
\end{equation}
\begin{equation}
z ( s ) = z_0 + \delta_z \cos ( \pi s / L_z ). \label{z-s}
\end{equation}
To complete the definition of $\omega (r)$, we need to define
$s$ as a function of $y$, which is given below:
\begin{equation}
s ( y ) = y_2 + L_y \delta_{y 1} \sin ( \pi y_2 / L_y ),
\end{equation}
and
\begin{equation}
y_2 = y + L_y \delta_{y 2} \sin \left( \pi y / L_y \right) .
\end{equation}
The second step is to define the vorticity vector
$(\omega_x,\omega_y,\omega_z)$, which is given as follows:
\begin{eqnarray}
\omega_x & =& - \frac{\pi \delta_x}{L_x} \left[ 1 + \pi \delta_{y 2} \cos
\left( \frac{\pi y}{L_y} \right) \right] \times \left[ 1 + \pi \delta_{y 1}
\cos \left( \frac{\pi y_2}{L_y} \right) \right] \sin \left( \frac{\pi s ( y
)}{L_x} \right) ,\label{w-x} \\
\omega_y & = & 1, \label{w-y} \\
\omega_z & = & - \frac{\pi \delta_z}{L_z} \left[ 1 + \pi \delta_{y 2} \cos
\left( \frac{\pi y}{L_y} \right) \right] \times \left[ 1 + \pi \delta_{y 1}
\cos \left( \frac{\pi y_2}{L_y} \right) \right] \sin \left( \frac{\pi s ( y
)}{L_z} \right). \label{w-z}
\end{eqnarray}
We choose exactly the same parameters as in \cite{Kerr93}.
Specifically, we set $\delta_{y 1} = 0.5$,
$\delta_{y 2} = 0.4$, $\delta_x = -1.6$, $\delta_z = 0$, $z_0 = 1.57$
and $R = 0.75$. The constant $x_0$ fixes the center of perturbation
for the vortex tube along the $x$ direction. In our computations, we
set $x_0 = 0$. Moreover, we choose $L_x = L_y = 4 \pi$, and $L_z = 2 \pi$.
The third step is to rescale the initial profile defined above. According to
\cite{Kerr93} (see the last paragraph on page 1728 of \cite{Kerr93}),
we need to rescale the above initial vorticity profile by a constant factor
so that the maximum vorticity in the $y$ direction is increased to 8.
With the choice of the above parameters, the maximum vorticity in the $y$
direction before rescaling is equal to 0.999766. Thus, the constant rescaling
factor is equal to 8.001873.
The final step in defining the initial condition is to filter the initial
vorticity profile. After rescaling the initial vorticity, we apply
exactly the same Fourier filter as the one used in \cite{Kerr93},
i.e. $\exp \left[ - 0.05 \left( k_x^4 + k_y^4 + k_z^4 \right) \right]$,
to the initial vorticity to smooth the rough edges.
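A minimal NumPy sketch of the first two steps of this construction (the profile and the vorticity vector of the upper tube) is given below; the rescaling and the Fourier filtering of the last two steps are omitted, and the function names are ours:
\begin{verbatim}
import numpy as np

# Parameters from the text (tube above the x-y plane).
d_y1, d_y2, d_x, d_z = 0.5, 0.4, -1.6, 0.0
x0, z0, R = 0.0, 1.57, 0.75
Lx = Ly = 4 * np.pi
Lz = 2 * np.pi

def profile(r):
    # omega(r) = exp(f(r)) for r < 1, and 0 otherwise.
    out = np.zeros_like(r)
    m = r < 1
    rm = r[m]
    out[m] = np.exp(-rm**2 / (1 - rm**2) + rm**4 * (1 + rm**2 + rm**4))
    return out

def tube_vorticity(x, y, z):
    # Unscaled, unfiltered vorticity omega(r) * (w_x, w_y, w_z) of the
    # upper tube, for broadcastable coordinate arrays x, y, z.
    y2 = y + Ly * d_y2 * np.sin(np.pi * y / Ly)
    s = y2 + Ly * d_y1 * np.sin(np.pi * y2 / Ly)
    X = x0 + d_x * np.cos(np.pi * s / Lx)
    Z = z0 + d_z * np.cos(np.pi * s / Lz)
    r = np.hypot(x - X, z - Z) / R  # y-component of the offset is zero
    c = (1 + np.pi * d_y2 * np.cos(np.pi * y / Ly)) * \
        (1 + np.pi * d_y1 * np.cos(np.pi * y2 / Ly))
    w = profile(r)
    wx = -(np.pi * d_x / Lx) * c * np.sin(np.pi * s / Lx)
    wz = -(np.pi * d_z / Lz) * c * np.sin(np.pi * s / Lz)
    return w * wx, w * np.ones_like(s), w * wz
\end{verbatim}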
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/init_vort.epsf}
\end{center}
\caption{The axial vorticity ($\omega_y$) contours of the initial value on
the symmetry plane. \label{fig.vort-init}}
\end{figure}
We should point out that, due to the difference between our discretization
strategy and Kerr's in solving the 3D Euler equations, the discrete
initial condition generated by Kerr's discretization and the one generated
by our pseudo-spectral
discretization are not exactly the same. In \cite{Kerr93}, Kerr used the
Chebyshev polynomials to approximate the solution along
the $z$ direction. In order to prepare initial data suitable for
the Chebyshev discretization, he
performed some interpolation and used extra filtering. This interpolation
and extra filtering seem to introduce some asymmetry to Kerr's discrete
initial data. According to \cite{Kerr93} (see the top paragraph of
page 1729), ``An effect of the initial filter upon the vorticity contours
at $t=0$ is a long tail in Fig. 2(a)''.
Since we perform pseudo-spectral approximations in all three directions,
there is no need to perform interpolation or use extra filtering as was
done in \cite{Kerr93}. To demonstrate this slight difference between
Kerr's initial data and ours, we plot the initial vorticity contours
along the symmetry plane in Figure \ref{fig.vort-init}. As we can see,
the initial vorticity contours in Figure \ref{fig.vort-init} are
essentially symmetric. This is in contrast to the apparent asymmetry
in Kerr's initial vorticity contours (see Fig. 2(a) of \cite{Kerr93}).
The 3D plot of the initial vortex tubes is given in Figure \ref{fig.vorttube}.
Again, we can see that the initial vortex tube is essentially symmetric.
As time increases, the two antiparallel perturbed vortex tubes approach
each other. By time $t=6$, we already observe a significant flattening
near the center of the tubes, see Figure \ref{fig.vort-t=6} and
Figure \ref{fig.vorttube}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/vortt=6.epsf}
\end{center}
\caption{The axial vorticity ($\omega_y$) contours when $t=6$ on
the symmetry plane. \label{fig.vort-t=6}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/vorttubet=0.eps}
\includegraphics[width=8cm]{pics/vorttubet=6.eps}
\end{center}
\caption{The 3D view of the vortex tube for $t=0$ and $t=6$.
The tube is the isosurface at $60\%$ of the maximum vorticity.
The ribbons on the symmetry plane are the contours at other different
values. \label{fig.vorttube}}
\end{figure}
\section{The Numerical Method}
We use the pseudo-spectral method with a very high order
Fourier smoothing to discretize the 3D Euler equations.
The Fourier smoothing that we use along the $x_j$ direction is of the form:
$\rho(2k_j/N_j) \equiv \exp(-\alpha (2k_j/N_j)^m)$ with $\alpha=36$ and
$m=36$, where $k_j$ is the wave number ($|k_j| \leqslant N_j/2$)
along the $x_j$ direction and $N_j$ is the total number of grid
points along the $x_j$ direction. Specifically, if $\widehat{v}_k$
is the discrete Fourier transform of $v$, then we approximate
$v_{x_j}$ by taking the discrete inverse Fourier transform of
$i k_j \rho(2k_j/N_j) \widehat{v}_k$, where $k =(k_1,k_2,k_3)$.
The time integration is performed
using the classical fourth order Runge-Kutta method. Adaptive
time stepping is used to satisfy the CFL stability condition with
CFL number equal to $\pi/4$. We use up to
$1536 \times 1024 \times 3072$ space resolution in order to resolve
the nearly singular behavior of the 3D Euler equations.
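For concreteness, here is a minimal one-dimensional NumPy sketch of this smoothed spectral differentiation (serial, for a single periodic direction; the routine name is ours):
\begin{verbatim}
import numpy as np

def smoothed_derivative(v, axis=0):
    # Pseudo-spectral derivative with the high order Fourier smoothing
    # rho(2k/N) = exp(-36 (2k/N)^36): transform, multiply by
    # i * k * rho(2k/N), transform back. Period 2*pi is assumed.
    N = v.shape[axis]
    k = np.fft.fftfreq(N, d=1.0 / N)       # wave numbers |k| <= N/2
    rho = np.exp(-36.0 * (2.0 * k / N) ** 36)
    shape = [1] * v.ndim
    shape[axis] = N
    mult = (1j * k * rho).reshape(shape)
    return np.real(np.fft.ifft(np.fft.fft(v, axis=axis) * mult, axis=axis))

# Example: d/dx sin(x) on a 256-point grid.
x = 2 * np.pi * np.arange(256) / 256
err = np.abs(smoothed_derivative(np.sin(x)) - np.cos(x)).max()
\end{verbatim}
At the highest mode, $|2k_j/N_j| = 1$, the smoothing factor equals $e^{-36} \approx 2.3\times 10^{-16}$, the round-off level mentioned below.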
There is a good reason why we choose to use the pseudo-spectral
method with the above high order Fourier smoothing instead of using
the classical 2/3 dealiasing rule. The Fourier smoothing we use is
designed to keep the majority of the Fourier modes unchanged
and remove the very high modes to avoid the aliasing errors, see
Fig. \ref{fig.fourier_smoother} for the profile of $\rho (x)$. We
choose $\alpha$ to be $36$ to guarantee that $\rho (k)$ reaches
the level of the round-off errors ($O(10^{-16})$) at the highest modes.
We choose the order of smoothing, $m$, to be 36 in order to optimize
the accuracy of the spectral approximation, while still keeping the
aliasing errors under control. As we can see from Figure
\ref{fig.fourier_smoother}, the effective modes in our computation
are about $12 \sim 15\%$ more than those using the standard $2/3$
dealiasing rule. To demonstrate that the pseudo-spectral method
with the above high order Fourier smoothing is indeed more accurate
than the pseudo-spectral method with the $2/3$ dealiasing rule, we
perform resolution study of the two approaches. In Figure
\ref{fig.enstrophy-spec-comp}, we compare the Fourier spectra
of the enstrophy obtained by using the pseudo-spectral method
with the $2/3$ dealiasing rule with those obtained by the
pseudo-spectral method with the high order smoothing. For
a fixed resolution $768\times 512\times 1536$, the Fourier
spectra obtained by the pseudo-spectral
method with the high order smoothing are more accurate than
those obtained by the spectral method using the $2/3$
dealiasing rule, as can be seen by comparing the results
with the corresponding computations using a higher resolution
$1024 \times 768 \times 2048$. Moreover, the pseudo-spectral
method using the high order Fourier smoothing does not
give the spurious oscillations in the Fourier spectra which
are present in the computations using the $2/3$ dealiasing
rule near the $2/3$ cut-off point.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/fourier_smoother.eps}
\end{center}
\caption{The profile of the Fourier smoothing: $\exp(-36\, x^{36})$.
The vertical line corresponds to the cut-off mode using the $2/3$
dealiasing rule. We can see that using this Fourier smoothing we keep
about $12 \sim 15\%$ more modes than those using the 2/3 dealiasing rule.
\label{fig.fourier_smoother}}
\end{figure}
We have used a sequence of resolutions in our computations in
order to perform a resolution study. The resolutions presented in the
paper are $768\times 512\times 1536$, $1024\times 768\times 2048$,
and $1536\times 1024\times 3072$ respectively. Except for the computation
on the largest resolution $1536\times 1024\times 3072$, all computations
are carried out from $t=0$ to $t=19$. The computation on the final
resolution $1536\times 1024\times 3072$ is started from $t=10$ with
the initial condition given by the computation with the resolution
$1024\times 768\times 2048$. From our resolution study for the velocity
and vorticity in both physical and spectral spaces, we find that
the solution at $t=10$ is fully resolved even on the resolution
$768\times 512\times 1536$. Thus it is justified to start the computation
with the largest resolution at $t=10$ using the computation obtained
by the resolution $1024\times 768\times 2048$ as the initial condition.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,height=6cm]{pics/enstrophy_spec_comp.eps}
\end{center}
\caption{The comparison of the enstrophy spectra obtained using the high order
Fourier smoothing method with those using the 2/3 dealiasing rule. The dashed
lines and dashed-dotted lines are the enstrophy spectra with the
resolution $768\times 512\times 1536$ using the 2/3 dealiasing rule and the
Fourier smoothing, respectively. The solid lines are the enstrophy spectra
with resolution $1024\times 768\times 2048$. The times for the spectra lines
are at $t= 15, 16, 17, 18, 19$ respectively.
\label{fig.enstrophy-spec-comp}}
\end{figure}
We also exploit the symmetry properties of the solution in our computations,
and perform our computations on only a quarter of the whole domain.
We use the well-known parallel FFT package FFTW 3.1 to perform
the sine and cosine transformations. The whole program is coded in C
and we use MPI as the parallel interface. Since the
solution appears to be most singular in the $z$ direction, we allocate twice
as many grid points along the $z$ direction than along the $x$ direction.
The solution is least singular in the $y$ direction. We allocate the
smallest resolution in the $y$ direction to reduce the computational cost.
In our computations, two typical ratios in the resolution along the
$x$, $y$ and $z$ directions are $3:2:6$ and $4:3:8$.
In the early stage of the computations for $0 \le t \le 8$, the solution is
still very regular. The solution in this stage can be fully resolved
using the resolution $768\times 512\times 1536$. In the intermediate stage
for $8 \le t \le 16$, the solution has a fast growth in the maximum vorticity.
After $t=16$, the two vortex tubes become severely flattened, and evolve into
two thin vortex sheets, which roll up subsequently. The maximum vorticity
also experiences a transition from the double exponential growth to
a slower growth rate due to the dynamic depletion of vortex stretching.
In order to resolve the later stage of the solution, higher resolutions
become necessary.
Our computations were carried out on the PC cluster LSSC-II in the
Institute of Computational Mathematics and Scientific/Engineering Computing
of Chinese Academy of Sciences and the Shenteng 6800 cluster in the Super
Computing Center of Chinese Academy of Sciences. The maximal memory consumption
in our computations is about 120 GBytes.
\section{Numerical Results}
\subsection{Review of Kerr's results}
In \cite{Kerr93}, Kerr presented numerical evidence which suggested
a finite time singularity of the 3D Euler equations for two perturbed
antiparallel vortex tubes. He used a
pseudo-spectral discretization in the $x$ and $y$ directions, and
a Chebyshev method in the $z$ direction with resolution of order
$512\times 256 \times 192$. His computations showed that the growth
of the peak vorticity, the peak axial strain, and the enstrophy
production obey $(T-t)^{-1}$ with $T = 18.9$. As $t$ approaches
the alleged blow-up time $T$, the region bounded by the contour
of $0.6\|\vec{\omega}\|_\infty$, also known as the inner region,
looks like two vortex sheets with thickness $\sim (T-t)$
meeting at an angle \cite{Kerr93}. This region has the length
scale $(T-t)^{1/2}$ in the vorticity direction. The maximum
vorticity resides in the small tube-like region, with scaling
$(T-t)^{1/2}\times (T-t)\times (T-t)$, which is the intersection
of the two sheets. Inside the inner region, the vortex lines
are ``relatively straight'' \cite{Kerr93}. Kerr stated in his
paper \cite{Kerr93} (see page 1727) that his numerical results shown
after $t=17$ and up to $t=18$ were ``not part of the primary evidence
for a singularity'' due to the lack of sufficient numerical
resolution and the presence of noise in the numerical solutions.
In his recent paper \cite{Kerr04} (see also \cite{Kerr97,Kerr99}),
Kerr applied a high wave number filter to the data obtained in his
original computations to ``remove the noise that masked the structures
in earlier graphics'' presented in \cite{Kerr93}. With this filtered
solution, he presented some scaling analysis of the numerical solutions
up to $t=17.5$. Two new properties were presented in this recent
paper \cite{Kerr04}. First, the velocity field was shown to blow up
like $O(T-t)^{-1/2}$ with $T$ being revised to $T=18.7$. The maximum
velocity is located on the boundary of the inner region with
distance $(T-t)^{1/2}$ away from the position where the maximum
vorticity is achieved. Secondly, he showed that the blowup is
characterized by two anisotropic length scales,
$\rho \approx (T-t) $ and $R \approx (T-t)^{1/2}$.
Further, Kerr argued that the curvature, $\kappa$, of the vortex
lines and $\nabla\cdot \vec{\xi}$ (here
$\vec{\xi} =\vec{\omega}/|\vec{\omega}|$) in
the inner region are likely bounded
by $(T-t)^{-1/2}$ \cite{Kerr04}.
\subsection{Recent theoretical results on the 3D Euler equations}
There has been some interesting development in the theoretical
understanding of the 3D incompressible Euler equations. It has been
shown by several authors that the local geometric regularity of vortex
lines can play an important role in depleting nonlinear vortex stretching.
In particular, Constantin, Fefferman and Majda \cite{CFM96} proved that if
(i) there is up to time $T$ an $O(1)$ region $\Omega$ in which the vorticity
vector is smoothly directed, i.e.,
the maximum norm of $\nabla \vec{\xi}$ in this $O(1)$ region
is $L^2$ integrable in time, and
(ii) the maximum norm of velocity is uniformly bounded in time, together with
a technical condition on the distribution of the vorticity within this
region, then no blow-up can occur in this region up to time $T$. While
this result is very interesting, it does not apply to Kerr's computations
since the two assumptions are violated by Kerr's computations.
Motivated by the result of \cite{CFM96}, Deng, Hou, and Yu \cite{DHY05a}
obtained sharper non-blowup conditions which use only very localized information
of the vortex lines. Assume that at each time $t$ there exists some vortex line
segment $L_t$ on which the local maximum vorticity is comparable to the global maximum
vorticity. Further, we denote $L(t)$ as the arclength of $L_t$.
Roughly speaking, if (i) the velocity field along $L_t$ is bounded
by $C_U (T-t)^{-\alpha}$ for some $\alpha < 1$, and (ii)
$C_L (T-t)^\beta \le L(t) \le C_0 /\max_{L_t}(|\kappa|,\;|\nabla \cdot \vec{\xi}|)$,
for some $\beta < 1 - \alpha$, then the
solution of the 3D Euler equations remains regular up to $T$. In Kerr's
computations, the first condition is satisfied with $\alpha = 1/2$. Moreover,
based on the bound on $\kappa$ and $\nabla \cdot \vec{\xi}$ in the inner
region \cite{Kerr04}, we can choose a vortex line segment of length
$(T-t)^{1/2}$ in the inner region so that the upper bound in the
second condition is satisfied. However, the lower bound is violated
since $\beta < 1/2 = 1 - \alpha $. In a subsequent paper \cite{DHY05b},
the non-blowup conditions have been improved to include the critical case,
$\beta = 1 - \alpha$ by requiring the scaling constants, $C_U, \;C_0,$ and
$C_L$ in conditions (i)-(ii), to satisfy an algebraic inequality.
This algebraic inequality can be checked numerically if we obtain a good
estimate of these scaling constants. For example, if $C_0 = 0.1$, which
seems reasonable since the vortex lines are relatively straight in the
inner region, the result of \cite{DHY05b} implies no blowup up to
$T$ if $2C_U < 0.43 C_L$. However, such scaling constants are not
available in \cite{Kerr93}. One of our original motivations was
to repeat Kerr's computations with higher resolution to obtain a good
estimate of these scaling constants.
\subsection{Maximum vorticity growth}
We first present the result on the growth of the maximum vorticity in time,
see Figure \ref{fig.omega}. The maximum vorticity increases rapidly from
the initial value of $0.669$ to $23.46$ at the final time $t=19$, a factor
of 35 increase from its initial value. Kerr's computations predicted a
finite time singularity at $T=18.7$. Our computations show no sign of finite
time blowup of the 3D Euler equations up to $T=19$, beyond the singularity
time predicted by Kerr. In Figure \ref{fig.omega},
we plot the maximum vorticity in time using three different
resolutions, i.e. $768\times 512\times 1536$, $1024 \times 768 \times 2048$,
and $1536 \times 1024 \times 3072$ respectively. As we can see, the
agreement between the two successive resolutions is very good with
only mild disagreement toward the end of the computations. This indicates
that a very high space resolution is indeed needed to capture the rapid
growth of maximum vorticity at the later stage of the computations.
We observe that the growth of the maximum vorticity has three distinguished
phases. The first stage is for $0\le t \le 12$. In this early stage,
the maximum vorticity
grows only exponentially in time. This is consistent with Kerr's results.
The second stage is for $12 \le t\le 17$. During this intermediate stage,
the two vortex tubes experience tremendous core deformation and become
severely flattened. Each vortex tube effectively turns into a vortex sheet
with rapidly decreasing thickness. We observe that the growth of
maximum vorticity is slightly slower than double exponential in time
during the second stage, see Figure \ref{fig.omega_loglog}. This growth
behavior can be also confirmed by examining the degree of nonlinearity
in the vortex stretching term. If the maximum vorticity indeed blew up
like $O(T-t)^{-1}$, as alleged in \cite{Kerr93}, the vortex stretching term
at the position of the maximum vorticity should have been quadratic as a
function of maximum vorticity. However, as Figure \ref{fig.growth_rate}
shows, the vortex stretching term, when projected to the unit vorticity
vector, grows much slower than the quadratic nonlinearity. In fact, it
is even slower than
$C\|\vec{\omega} \|_\infty \log (\|\vec{\omega} \|_\infty )$, i.e.
\begin{equation}
\label{eqn:stret}
\| \vec{\xi} \cdot \nabla \vec{u} \cdot \vec{\omega} \|_\infty
\le C \|\vec{\omega} \|_\infty \log (\|\vec{\omega} \|_\infty ), \quad \quad
15 \le t \le 19 .
\end{equation}
Using the equation that governs the magnitude of vorticity \cite{Const94},
\begin{equation}
\frac{\partial}{\partial t} |\vec{\omega}| + (\vec{u}\cdot\nabla) |\vec{\omega}|
= \vec{\xi} \cdot \nabla \vec{u} \cdot \vec{\omega} ,
\end{equation}
one can easily show that inequality (\ref{eqn:stret}) implies that
the maximum vorticity is bounded by double exponential in time.
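For completeness, here is the short argument, under the simplifying assumption that $M(t) \equiv \|\vec{\omega}(t)\|_\infty$ is differentiable in time and large enough that $\log M \geq 1$: combining the above equation with (\ref{eqn:stret}) gives
\begin{equation*}
\frac{d M}{d t} \le C M \log M
\;\; \Longrightarrow \;\;
\frac{d}{d t} \log \left( \log M \right) \le C
\;\; \Longrightarrow \;\;
M(t) \le M(t_0)^{\exp ( C (t - t_0) )},
\end{equation*}
which is a double exponential bound in time.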
During the final stage for $17 \le t \le 19$, we observe that the growth of
the maximum vorticity slows down and deviates from double exponential growth,
see Figure \ref{fig.omega_loglog}. This indicates that there is stronger
cancellation taking place in the vortex stretching term.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/max_vort_comp.eps}
\end{center}
\caption{The maximum vorticity $\|\omega\|_\infty$ in time using
different resolutions.
\label{fig.omega}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/loglog_omega.eps}
\end{center}
\caption{The plot of $ \log \log \|\omega\|_\infty$ vs time, resolution $1536\times 1024
\times 3072$.
\label{fig.omega_loglog}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/growth_rate.eps}
\end{center}
\caption{Study of the vortex stretching term in time, resolution $1536\times
1024\times 3072$. We take $c_1 = 1/8.128$, $c_2 = 1/23.24$ to match the
same starting value for all three plots.
\label{fig.growth_rate}}
\end{figure}
We remark that for vorticity that grows as rapidly as double exponential in time,
one may be tempted to fit the maximum vorticity growth by $(T-t)^{-1}$ for some
$T$. Indeed, if we choose $T=18.7$ as suggested by Kerr in \cite{Kerr04}, we
find a reasonably good fit for the maximum vorticity as a function of
$c/(T-t)$ for the period $15 \le t \le 17$. We plot the scaling constant
$c$ in Figure \ref{fig.scaling_const}. As we can see, $c$ is close to a
constant for $15 \le t \le 17$. To conclude that the 3D Euler equations
indeed develop a finite time singularity, one must demonstrate that such
scaling persists as $t$ approaches $T$. As we can see from Figure
\ref{fig.scaling_const}, the scaling constant $c$ decreases rapidly to
zero as $t$ approaches the alleged singularity time $T$. Therefore,
the fitting of $\| \vec{\omega}\|_\infty \approx O(T-t)^{-1}$ is not correct
asymptotically.
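The diagnostic behind Figure \ref{fig.scaling_const} is elementary; here is a minimal sketch, assuming sampled arrays of times and maximum vorticity values from the computation (array names ours):
\begin{verbatim}
import numpy as np

def scaling_constant(t, omega_max, T=18.7):
    # c(t) = (T - t) * ||omega||_inf for the tentative fit
    # ||omega||_inf ~ c / (T - t). A genuine O((T-t)^{-1}) blowup
    # requires c(t) to approach a nonzero constant as t -> T; here
    # c decays rapidly to zero instead.
    return (T - np.asarray(t)) * np.asarray(omega_max)
\end{verbatim}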
A similar test can be performed for the inverse of the maximum vorticity.
In Figure \ref{fig.omega1}, we plot the inverse of the maximum vorticity
using different resolutions. As we can see from this picture, the
inverse of the maximum vorticity approaches zero almost linearly
in time for $8 \le t \le 17$. This was one of the strong evidences
presented in \cite{Kerr93} that suggests a finite time blowup of the
3D Euler equations. If this trend were to continue to hold up to $T$,
it would have led to the blowup of the maximum vorticity in the form of
$O(T-t)^{-1}$. However, as we increase our resolutions, we find that the
curve corresponding to the inverse of the maximum vorticity starts to turn
away from zero around $t=17$. This is precisely the time when Kerr's
computations began to lose resolution. By $t=17.5$, the gradients of the
solution become very large in all three directions. In order to resolve the
nearly singular solution structure, we use $1536\times 1024\times 3072$ grid
points from $t=10$ to $19$. This level of resolution gives about 8 grid points
across the most singular region in each direction toward the end of the
computations.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/scaling_const.eps}
\end{center}
\caption{Scaling constant in time for the fitting $\|\omega\|_\infty \approx c/(T-t)$,
$T=18.7$.\label{fig.scaling_const}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/inv_vort_comp.eps}
\end{center}
\caption{The inverse of maximum vorticity $\|\omega\|_\infty$ in time
using different resolutions.
\label{fig.omega1}}
\end{figure}
\subsection{Velocity profile}
One of the important findings of our computations is that the velocity
field is actually bounded by 1/2 up to $T=19$. This is in contrast to Kerr's
computations in which the maximum velocity was shown to blow up like
$O(T-t)^{-1/2}$. We plot
the maximum velocity as a function of time using different
resolutions in Figure \ref{fig.velocity}. The computation
with the resolution $768\times 512\times 1536$ shows some
mild discrepancy toward the end of the computation. On the
other hand, the computation obtained by resolution
$1024\times 768 \times 2048$ and the one obtained
by resolution $1536\times 1024\times 3072$ are
almost indistinguishable. As we can see, the maximum velocity
grows slowly in time and is relatively
small in magnitude. There is a relatively fast growth of maximum velocity
between $t=14$ and $t=17$. But this growth becomes saturated by $t=17.5$.
After $t=18.4$, the velocity experiences a mild growth, but it is still
bounded by $0.46$ at the final time $T=19$.
We also plot the contours
of $|\vec{u}|$ near the region of maximum vorticity
at $t=18$ and $19$ in Figure \ref{velo-cont}. As we can see,
the velocity seems to be well resolved.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/max_velo_comp.eps}
\end{center}
\caption{Maximum velocity $\|\vec{u}\|_\infty$ in time using
different resolutions.\label{fig.velocity}}
\end{figure}
The fact that the velocity field is bounded is significant.
By re-examining the non-blowup conditions of the theory of
Deng-Hou-Yu \cite{DHY05a}, we find that the first condition is
now satisfied with $\alpha = 0$ since the velocity field is bounded.
According to \cite{Kerr04}, we have
$\max_{L_t}(|\kappa|,\;|\nabla \cdot \vec{\xi}|) \le C_0 (T-t)^{-1/2}$.
In fact, our computations indicate that the curvature and the
divergence of the unit vorticity vector are actually bounded. With
$\alpha = 0$, we can now choose a vortex line segment of
length $L(t) = (T-t)^\beta $ with $\beta =1/2 < 1-\alpha$, so that
the second condition is now satisfied. Thus, the theory of
Deng-Hou-Yu applies, which implies non-blowup of the 3D Euler
equations up to $T$.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/velo_cont-t=18.epsf}
\includegraphics[width=8cm]{pics/velo_cont-t=19.epsf}
\end{center}
\caption{The contour of $|\vec{u}|$ in the region of maximum vorticity
on the symmetry plane. The pictures are plotted at $t=18$ and $t=19$
respectively using resolution $1536\times 1024\times 3072$.
\label{velo-cont}}
\end{figure}
\subsection{Local vorticity structure}
In this subsection, we would like to examine the local vorticity structure
near the region of the maximum vorticity. To illustrate the development in
the symmetry plane, we show a series of vorticity contours near the region
of the maximum vorticity at late times in a manner similar to the results
presented in
\cite{Kerr93}. In his computations, Kerr observed that the head and tail
in the symmetry plane develop a corner separating them. We
adopt Kerr's definition of the ``head'' to be the region extending above
the vorticity peak $\omega_p$ just behind the leading edge of the vortex
sheet.
The ``tail'' is the vortex sheet extending behind the peak vorticity.
One interesting question is to determine whether one direction becomes
progressively more flattened or stretched as the flow evolves and whether
the rates of collapse are the same in different directions. Our
computational results are in qualitative agreement with Kerr's in
the early and intermediate stages. In particular, we observe that as the
flow evolves the region of peak vorticity concentrates into the region
where the vortex sheets of the head and tail meet. To compare with Kerr's
figures, we scale the vorticity contours in the $x-z$ plane by a
factor of 5 in the $z$ direction. The results at $t=15$ and $t=17$ are
plotted in Figure \ref{fig.scaled_local_struc}. We can see that the
location of maximum axial vorticity moves toward the corner where
the vortex sheets of the head and tail meet as time increases,
see also Figure \ref{fig.local_struc}.
This is in qualitative agreement with Kerr's results.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/scaled_contour_t=15.epsf}
\includegraphics[width=8cm]{pics/scaled_contour_t=17.epsf}
\end{center}
\caption{The contour of axial vorticity around the maximum vorticity on the symmetry
plane at $t=15, 17$. The figure is scaled in $z$ direction by a factor of $5$ to
compare with the Figure 4 in \cite{Kerr93}. A contour at value very close to the maximum
value is plotted to show the location of the maximum vorticity. \label{fig.scaled_local_struc}}
\end{figure}
In order to see better the dynamic development of the local vortex structure,
we plot a sequence of vorticity contours on the symmetry plane at
$t=16, 17, 18,$ and $19$ respectively in Figure \ref{fig.local_struc}.
The pictures are plotted using the
original length scales, without the scaling by a factor of 5 in the $z$
direction as in Figure \ref{fig.scaled_local_struc}. From these results, we
can see that the vortex sheet is compressed in the $z$ direction.
It is clear that a thin layer (or a vortex sheet) is formed dynamically.
The head of the vortex sheet is a bit thicker
than the tail at the beginning. The head of the vortex
sheet begins to roll up around $t=16$. By the time $t=19$, the
head of the vortex sheet has traveled backward for quite a distance,
and the vortex sheet has been compressed quite strongly along
the $z$ direction.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/fig4.noscale.t=17.5.epsf}
\includegraphics[width=8cm]{pics/fig4.noscale.t=18.epsf}
\includegraphics[width=8cm]{pics/fig4.noscale.t=18.5.epsf}
\includegraphics[width=8cm]{pics/fig4.noscale.t=19.epsf}
\end{center}
\caption{The contour of axial vorticity around the maximum vorticity on
the symmetry plane at $t=17.5, 18, 18.5, 19$. \label{fig.local_struc}}
\end{figure}
We also plot the isosurface of vorticity near the region of the maximum
vorticity in Figures \ref{fig.local_struc_3d_17} and \ref{fig.local_struc_3d}
to illustrate the dynamic roll-up of the vortex sheet near the region of the
maximum vorticity. Figure \ref{fig.local_struc_3d_17} gives
the local vorticity structure at $t=17$. If we scale the local
roll-up region on the left hand side next to the box by a factor of 4
along the $z$ direction, as was done in \cite{Kerr04}, we would obtain a
local roll-up structure which is qualitatively similar to
Figure 1 in \cite{Kerr04}.
In Figure \ref{fig.local_struc_3d}, we show the local vorticity structure
for $t=18$ and $19$. In both figures, the isosurface
is set at $0.5 \times \|\vec{\omega}\|_\infty$. Here we make a
few observations. First, the vortex lines near the region of
maximum vorticity are relatively straight and the vorticity vectors
seem to be quite regular. This was also observed in \cite{Kerr93}.
On the other hand, the inner region containing the maximum vorticity
does not seem to
shrink to zero at a rate of $(T-t)^{1/2}$, as predicted in \cite{Kerr93}.
The length and the width of the vortex sheet are still
$O(1)$, although the thickness of the vortex sheet becomes quite small.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{pics/l17.epsf}
\end{center}
\caption{The local 3D vortex structure and vortex lines around the maximum
vorticity at $t=17$. The size of the box on the left is
$0.075^3$ to demonstrate the scale of the picture.
\label{fig.local_struc_3d_17}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/l-18.epsf}
\hspace{2mm}
\includegraphics[width=8cm]{pics/l-19.epsf}
\end{center}
\caption{
The local 3D vortex structures and vortex lines around the maximum
vorticity at $t=18$ (on the left) and $t=19$ (on the right).
\label{fig.local_struc_3d}}
\end{figure}
We also plot the energy spectrum in Figure \ref{fig.energy_spec} at
$t=16, 17, 18, 19$. A finite
time blow-up of enstrophy would imply that the energy spectrum decays no
faster than $|k|^{-3}$. In \cite{Kerr93}, Kerr observed that the
energy spectrum decays exactly like $|k|^{-3}$, suggesting a finite time
blow-up of the enstrophy (recall that enstrophy is defined as
the square of the $L^2$ norm of vorticity, i.e.
$\|\vec{\omega}\|_2^2$).
Our computations show that the energy spectrum
approaches $|k|^{-3}$ for $|k| \le 100$ as time increases to $t=19$.
This is in qualitative agreement with Kerr's results. Note that
there are fewer than 100 modes available along the $|k_x|$ or
$|k_y|$ direction
in Kerr's computations, see Figure 18 (a)-(b) of \cite{Kerr93}.
On the other hand, our computations show that
the high frequency Fourier spectrum for $100 \le |k| \le 1300$
decays much faster than
$|k|^{-3}$, as one can see from Figures \ref{fig.energy_spec}
and \ref{fig.energy_spec_1}.
This indicates that there is no blow-up in enstrophy. This is also
supported by the enstrophy spectrum, given in Figure
\ref{fig.enstrophy_spec}, and the plot of enstrophy
as a function of time in Figure \ref{fig.omegal2}.
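The shell-averaged spectra in these figures are computed as described in their captions; here is a minimal NumPy sketch (an isotropic $N^3$ grid is assumed for simplicity, and half-integer ties are resolved by rounding):
\begin{verbatim}
import numpy as np

def shell_spectrum(u_hat):
    # E(k) = sum of |u_hat|^2 over wave vectors with |k_vec| in
    # (k - 1/2, k + 1/2]. u_hat has shape (3, N, N, N) and holds the
    # Fourier coefficients of the (velocity or vorticity) field.
    N = u_hat.shape[-1]
    k1 = np.fft.fftfreq(N, d=1.0 / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    energy = np.sum(np.abs(u_hat) ** 2, axis=0)  # sum over components
    return np.bincount(shells.ravel(), weights=energy.ravel())
\end{verbatim}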
In Figure \ref{fig.enstrophy_production}, we plot the enstrophy
production rate, which is defined as the time derivative of enstrophy.
Although it grows relatively fast, it actually grows
slower than double exponential in time (see the picture on
the right in Figure \ref{fig.enstrophy_production}). In the double
logarithm plot of the enstrophy production rate, we multiply the
enstrophy production rate by a constant factor 8 to make the second
logarithm well-defined. It is interesting to note that
the double logarithm of enstrophy production rate in Figure
\ref{fig.enstrophy_production} is qualitatively similar to the double
logarithm of $||\vec{\omega}||_\infty$ in Figure \ref{fig.omega_loglog}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/energy_spec_loglog.eps}
\end{center}
\caption{The energy spectra for velocity at different times in log-log scale.
The energy spectrum is calculated using $E(k) = \sum_{|\vec{k}|\in (k-1/2, k+1/2]}
|\widehat{u}_{\vec{k}}|^2$. The times for the spectral lines from bottom to top are $t=
15, 16, 17, 18, 19$. The dashed line corresponds to $k^{-3}$.
\label{fig.energy_spec}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/enstrophy_spec_loglog.eps}
\end{center}
\caption{The enstrophy spectra for vorticity at different times in log-log scale.
The enstrophy spectrum is calculated using $Ens(k) = \sum_{|\vec{k}|\in (k-1/2, k+1/2]}
|\widehat{\omega}_{\vec{k}}|^2$. The dashed line corresponds to $k^{-1}$.
\label{fig.enstrophy_spec}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/enstrophy.eps}
\end{center}
\caption{The enstrophy as a function of time, resolution $1536\times 1024\times 3072$.
\label{fig.omegal2}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{pics/enstrophy_production.eps}
\includegraphics[width=8cm]{pics/loglog_enstrophy_production.eps}
\end{center}
\caption{The enstrophy production rate (left) and the double logarithm of 8
times the enstrophy production rate (right) as a function of time, resolution
$1536\times 1024\times 3072$.
\label{fig.enstrophy_production}}
\end{figure}
\subsection{Resolution Study}
In this subsection, we perform a resolution study to make sure
that the nearly singular behavior of the 3D Euler equations is
resolved by our computational grid. In Figure \ref{fig.omega},
we have performed a resolution study for the maximum vorticity
using three different resolutions and found very good agreement
for the time interval $[0,18]$.
There is only a mild disagreement toward the end of the
computations from $t=18$ to 19.
In our computations with different resolutions, we find that the
maximum vorticity always grows slower than double exponential in time.
A similar resolution study has been performed for the inverse of
the maximum vorticity in Figure \ref{fig.omega1} and for the
maximum velocity in Figure \ref{fig.velocity}. We observe excellent
agreement between the solutions obtained by the two largest resolutions.
We have also performed a similar resolution study in Fourier
space by examining the convergence of the energy and enstrophy
spectra using different resolutions. We observe that the
Fourier spectrum corresponding to the effective modes of one
resolution is in excellent agreement with that of a
higher-resolution computation,
see Figures \ref{fig.enstrophy-spec-comp},
\ref{fig.enstrophy_spec_1} and \ref{fig.energy_spec_1}.
To see how many grid points we have across the most singular
region, we plot the underlying mesh for the vorticity contours
in the $x-z$ plane in Figure \ref{fig.local_mesh}. One can see
from this picture that we have about $16$ grid points in the $z$
direction at $t=18$ and $8$ grid points at $t=19$. It is also
interesting to note that at $t=18.5$, the location of the maximum
vorticity has moved away from the bottom of the vortex sheet
structure.
If the current trend continues, it is likely that the location of the
maximum vorticity will continue to move away from the bottom of the
vortex sheet. One of the possible blow-up scenarios is that the
interaction of the two perturbed antiparallel vortex tubes would induce
a strong compression between the two vortex tubes, leading to a finite time
collapse of the two vortex tubes. The fact that the location of the
maximum vorticity moves away from the dividing plane of the two vortex
tubes seems to destroy the desired mechanism to produce a blow-up.
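The statement that the maximum vorticity grows more slowly than double
exponential can also be tested directly on a time series: if
$\|\vec{\omega}\|_\infty \sim \exp(\exp(ct))$, then $\log \log
\|\vec{\omega}\|_\infty$ is asymptotically linear in $t$. The Python
sketch below fits this quantity; the time series is a synthetic
placeholder, not our computed data.
\begin{verbatim}
import numpy as np

def double_log_slope(t, omega_max):
    """Fit log(log(omega_max)) against t; a constant slope indicates
    double-exponential growth, while a decreasing local slope indicates
    slower-than-double-exponential growth.  Requires omega_max > 1."""
    y = np.log(np.log(omega_max))
    slope = np.polyfit(t, y, 1)[0]
    local = np.gradient(y, t)
    return slope, local

t = np.linspace(0.0, 19.0, 200)
omega_max = 25.0 * np.exp(0.3 * t)          # placeholder: single exponential
slope, local = double_log_slope(t, omega_max)
print(slope, local[-1] < local[0])          # True: local slope decreases
\end{verbatim}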
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=7cm]{pics/vort_cont_mesh-t=18-1.epsf}
\hspace{2mm}
\includegraphics[width=8cm,height=7cm]{pics/vort_cont_mesh-t=19-1.epsf}
\end{center}
\caption{This picture is to illustrate the mesh around the maximum vorticity.
The times for this plot are $t=18$ and $19$. At $t=19$, we still have about $8$
points along the $z$ direction to resolve the nearly singular layer.
\label{fig.local_mesh}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=12cm,height=6cm]{pics/enstro_spec_1.eps}
\end{center}
\caption{Convergence study for enstrophy spectra using different resolutions.
The dashed lines and the solid lines are the enstrophy
spectra on resolution $1536\times 1024\times 3072$ and $1024\times 768\times
2048$, respectively. The times for the lines from bottom to top are $t=16, 17,
18, 19$.
\label{fig.enstrophy_spec_1}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=12cm,height=6cm]{pics/energy_spec_1.eps}
\end{center}
\caption{Convergence study for energy spectra using different resolutions.
The dashed lines and the solid lines are the energy spectra on resolution
$1536\times 1024\times 3072$ and $1024\times 768\times 2048$,
respectively. The times for the lines from bottom to top are $t=16, 17,
18, 19$.
\label{fig.energy_spec_1}}
\end{figure}
\section{Concluding Remarks}
We investigate the interaction of two perturbed vortex tubes for the 3D
Euler equations using Kerr's initial data. Our numerical computations
demonstrate a very subtle dynamic depletion of vortex stretching. The
maximum vorticity is shown to grow no faster than double exponential
in time up to $T=19$, beyond the singularity time predicted by Kerr in
\cite{Kerr93}. The local geometric regularity of vortex lines seems
to be responsible for this dynamic depletion of vortex stretching.
Sufficient numerical resolution is essential in capturing the double
exponential growth in vorticity and the dynamic depletion
of vortex stretching. The velocity field and the enstrophy are
shown to be bounded throughout the computations. We provide
evidence that the vortex stretching term is only weakly nonlinear and
is bounded
by $ \|\vec{\omega} \|_\infty \log (\|\vec{\omega} \|_\infty )$. Such an
upper
bound on the vortex stretching term implies that the maximum vorticity
is bounded by the double exponential in time. Our computational results
also satisfy the non-blowup conditions of Deng-Hou-Yu, which
provides theoretical support for our computational results.
The current computations, even with this level of resolution,
cannot rule out the possibility of blow-up of the 3D Euler
equations for large times for Kerr's initial data. The theoretical
results of \cite{CFM96,DHY05a,DHY05b} and the computations presented
here suggest that a finite time singularity, if it exists, would have rather
complicated geometric structures. There are other types of potential
Euler singularities that are not considered in this paper. Among them,
the Kida-Pelz initial condition \cite{BP94,Pelz98} is worth further
investigation.
The extra symmetry constraints in this type of initial data are believed
to be important in producing a finite time singularity for the 3D
Euler equations. Indeed, the computations by Boratav and Pelz
\cite{BP94} and Pelz \cite{Pelz98} indicate a more singular
self-similar type of blow-up. Pelz's computations also fall in the
critical case of the non-blowup theory of Deng-Hou-Yu \cite{DHY05a,DHY05b}.
We are currently investigating this problem numerically using even higher
resolutions. We will report the results elsewhere.
\vspace{0.2in}
\centerline{\bf \large Appendix. Corrections to Some Misprints in \cite{Kerr93}}
\vspace{0.1in}
In this appendix, we explain the corrections that we make
regarding the misprints in the description of the initial condition in
\cite{Kerr93}. There are two constraints on the
initial vorticity. The first one is that it must be divergence free.
The second one is that it must satisfy the periodic boundary condition.
It is obvious that we have
\begin{equation} \nabla \cdot ( \omega_x , \; \omega_y, \; \omega_z ) = 0 .
\end{equation}
Thus the divergence free constraint on the initial vorticity implies
that $\omega ( r )$ must satisfy
\[
\nabla \omega ( r ) \cdot (\omega_x , \;\omega_y ,\; \omega_z ) = 0.
\]
The analytic expression of the initial vorticity profile in \cite{Kerr93}
does not satisfy the above constraint due to a few typos in various formulas.
We correct these typos by comparing the analytic formulas with the formulas
that were actually used by Kerr in his Fortran subroutine that generates
the initial data. Below we list these typos and point out the
corrections.
\begin{enumerate}
\item In equation (\ref{trajectory}), the original expression in
\cite{Kerr93} was written as $[ x_0 + x ( s ), y, z_0 + z ( s ) ]$.
We remove $x_0$ and $z_0$ from (\ref{trajectory}) since the definition
of $x(s)$ and $z(s)$ has already taken $x_0$ and $z_0$ into account.
\item In (\ref{x-s}), the original expression in \cite{Kerr93} was given
as $x_0 + \delta_x \cos ( s )$. This would violate the divergence free
condition. We correct this by replacing $\cos ( s )$ with
$\cos \left (\pi s/L_x \right )$.
\item In (\ref{z-s}), the original expression was $z_0 + \delta_z \cos ( s )$.
This again violates the divergence free condition. We correct it by replacing
$\cos ( s )$ with $\cos \left (\pi s/L_z \right )$.
\item In (\ref{w-x}), the last factor in the original expression was
$\sin ( \pi s ( y ))$. We correct it by replacing $\sin ( \pi s ( y ))$
with $\sin \left ( \pi s ( y )/L_x \right )$.
\item In (\ref{w-z}), the last factor in the original expression was
$\sin ( \pi s ( y ))$. We correct it by replacing $\sin ( \pi s ( y ))$
with $\sin \left ( \pi s ( y )/L_z \right )$.
\end{enumerate}
With the above corrections, it can be verified that both the divergence free
condition and the periodic boundary conditions are satisfied.
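Both constraints can also be verified numerically: sample the vorticity
field on a periodic grid and measure the maximum of its spectral
divergence. The sketch below implements such a check; the
Taylor--Green-type test field is only a placeholder for the corrected
initial condition.
\begin{verbatim}
import numpy as np

def max_spectral_divergence(w, L):
    """Return max |div w| for a periodic field w of shape (3, n, n, n)
    on a cube of side L, computed spectrally."""
    n = w.shape[1]
    k = 2.0 * np.pi / L * np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    w_hat = np.fft.fftn(w, axes=(1, 2, 3))
    div_hat = 1j * (kx * w_hat[0] + ky * w_hat[1] + kz * w_hat[2])
    return np.abs(np.fft.ifftn(div_hat).real).max()

n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
w = np.array([np.cos(X) * np.sin(Y) * np.sin(Z),      # divergence-free
              np.sin(X) * np.cos(Y) * np.sin(Z),      # placeholder field
              -2.0 * np.sin(X) * np.sin(Y) * np.cos(Z)])
print(max_spectral_divergence(w, L))   # round-off level for a valid field
\end{verbatim}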
Based on Kerr's subroutine that generates the initial data, we also make
one minor modification in the definition of $f(r)$. The original equation
in \cite{Kerr93} for (\ref{f}) was given by
\begin{equation}
f ( r ) = \frac{- r^2}{1 - r^2} + r^2 \left( 1 + r^2 + r^4 \right) .
\end{equation}
After studying Kerr's code, we found that the function $f(r)$ in his
code was actually defined using (\ref{f}) instead of the above formula.
The difference lies in the first factor of the
second term: Kerr's code uses $r^4$ instead of the $r^2$ that
appears in the above equation. This minor modification
has little effect on the behavior of the solution in our computational
experience. We make this minor modification to $f(r)$ in order to match
exactly the initial condition that was actually used in Kerr's computations.
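For reference, the two versions of $f(r)$ are easy to compare
numerically; in the sketch below, \texttt{f\_code} stands for (\ref{f}) as
inferred from Kerr's subroutine, and the range $0 \leq r < 1$ is only
illustrative.
\begin{verbatim}
import numpy as np

# f_paper: the formula printed in Kerr (1993); f_code: the version with
# r^4 in place of r^2 in the second term, used in Kerr's subroutine.
f_paper = lambda r: -r**2 / (1.0 - r**2) + r**2 * (1.0 + r**2 + r**4)
f_code  = lambda r: -r**2 / (1.0 - r**2) + r**4 * (1.0 + r**2 + r**4)

r = np.linspace(0.0, 0.99, 1000)
diff = np.abs(f_code(r) - f_paper(r))   # = r^2 (1 - r^2)(1 + r^2 + r^4)
print(diff.max())                       # modest, and vanishing at r = 0, 1
\end{verbatim}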
\vspace{0.2in}
\noindent
{\bf Acknowledgments.}
We would like to thank Prof. Lin-Bo Zhang from the Institute of
Computational Mathematics in Chinese Academy of Sciences (CAS) for
providing us with the computing resource to perform this large
scale computational project. Additional computing resource was
provided by the Center of High Performance Computing in CAS. We
also thank Prof. Robert Kerr for providing us with his Fortran
subroutine that generates his initial data. This work was in part
supported by NSF under the NSF FRG grant DMS-0353838 and ITR
Grant ACI-0204932. Part of this work was done while Hou visited
the Academy of Systems and Mathematical Sciences of CAS in the
summer of 2005 as a member of the Oversea Outstanding Research
Team for Complex Systems. Finally, we would like to thank Profs.
Hector Ceniceros and Robert Kerr for their valuable
comments on the original manuscript.
\bibliographystyle{amsplain}
\section{Introduction}
Asymptotic stability of solitary waves in the context of
continuous nonlinear Schr\"{o}dinger equations in one, two, and
three spatial dimensions was considered in a number of recent
works (see Cuccagna \cite{cuccagna} for a review of literature).
Little is known, however, about asymptotic
stability of solitary waves in the context of discrete nonlinear
Schr\"{o}dinger (DNLS) equations.
Orbital stability of a global energy minimizer under a fixed
mass constraint was proved by Weinstein \cite{weinstein} for the
DNLS equation with power nonlinearity
$$
i \dot{u}_n + \Delta_d u_n + |u_n|^{2 p} u_n = 0, \quad n \in
\mathbb{Z}^d,
$$
where $\Delta_d$ is a discrete Laplacian in $d$ dimensions and $p
> 0$. For $p < \frac{2}{d}$ (subcritical case), it is
proved that the ground state of an arbitrary energy exists,
whereas for $p \geq \frac{2}{d}$ (critical and supercritical
cases), there is an energy threshold, below which the ground state
does not exist.
Ground states of the DNLS equation with power-law nonlinearity
correspond to single-humped solitons, which are excited in
numerical and physical experiments by single-site initial data
with sufficiently large amplitude \cite{KEDS}. Such experiments
have been physically realized in optical settings with both
focusing \cite{mora} and defocusing \cite{rosberg} nonlinearities.
We would like to consider long-time dynamics of the ground states
and prove their asymptotic stability under some assumptions on the
spectrum of the linearized DNLS equation. From
the beginning, we would like to work in the space of one spatial
dimension $(d = 1)$ and to add an external potential $V$ to the
DNLS equation. These specifications are motivated by physical
applications (see, e.g., the recent work of \cite{kroli} and
references therein for a relevant discussion). We hence write the
main model in the form
\begin{equation}
\label{dNLS} i \dot{u}_n = (-\Delta + V_n) u_n + \gamma |u_n|^{2p} u_n,
\quad n \in \mathbb{Z},
\end{equation}
where $\Delta u_n := u_{n+1} - 2 u_n + u_{n-1}$ and $\gamma = 1$
($\gamma = -1$) for defocusing (focusing) nonlinearity. Besides
physical applications, the role of potential $V$ in our work can
be explained by looking at the differences between the recent
works of Mizumachi \cite{Miz} and Cuccagna \cite{Cuc} for a
continuous nonlinear Schr\"{o}dinger equation in one dimension.
Using an external potential, Mizumachi proved asymptotic stability
of small solitons bifurcating from the ground state of the
Schr\"{o}dinger operator $H_0 = -\partial_x^2 + V$ under some
assumptions on the spectrum of $H_0$. He needed only spectral
theory of the self-adjoint operator $H_0$ in $L^2$ since spectral
projections and small nonlinear terms were controlled in the
corresponding norm. Pioneering works along the same lines are
attributed to Soffer--Weinstein \cite{SW1,SW2,SW3}, Pillet \&
Wayne \cite{PW}, and Yao \& Tsai \cite{YT1,YT2,YT3}. Compared to
this approach, Cuccagna proved asymptotic stability of nonlinear
space-symmetric ground states in energy space of the continuous
nonlinear Schr\"{o}dinger equation with $V \equiv 0$. He had to
invoke the spectral theory of non-self-adjoint operators arising
in the linearization of the nonlinear Schr\"{o}dinger equation at
the ground state, following earlier works of Buslaev \& Perelman
\cite{BP1,BP2}, Buslaev \& Sulem \cite{BS}, and Gang \& Sigal
\cite{GS1,GS2}.
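Before turning to the analysis, we remark that the model (\ref{dNLS}) is
straightforward to integrate on a truncated lattice; the Python sketch
below advances it with a classical fourth-order Runge--Kutta step. The
Gaussian potential well, time step, and single-site initial data are
illustrative placeholders, not the parameters used in Section 5.
\begin{verbatim}
import numpy as np

def dnls_rhs(u, V, gamma, p):
    """du/dt = -i ((-Delta + V) u + gamma |u|^{2p} u), with periodic
    truncation of the lattice used only as a numerical boundary."""
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
    return -1j * (-lap + V * u + gamma * np.abs(u) ** (2 * p) * u)

def rk4_step(u, dt, V, gamma, p):
    k1 = dnls_rhs(u, V, gamma, p)
    k2 = dnls_rhs(u + 0.5 * dt * k1, V, gamma, p)
    k3 = dnls_rhs(u + 0.5 * dt * k2, V, gamma, p)
    k4 = dnls_rhs(u + dt * k3, V, gamma, p)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

N, gamma, p, dt = 201, 1.0, 1, 0.01
n = np.arange(N) - N // 2
V = -2.0 * np.exp(-n.astype(float) ** 2)     # placeholder potential well
u = 0.5 * (n == 0).astype(complex)           # single-site initial data
for _ in range(1000):
    u = rk4_step(u, dt, V, gamma, p)
print(np.abs(u).max(), np.sum(np.abs(u) ** 2))  # l^2 norm nearly conserved
\end{verbatim}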
Since our work is novel in the context of the DNLS equation, we
would like to simplify the spectral formalism and to focus on
nonlinear analysis of asymptotic stability. This is the main
reason why we work with small solitons bifurcating from the ground
state of the discrete Schr\"{o}dinger operator $H = -\Delta + V$. We
will make use of the dispersive decay estimates obtained recently
for operator $H$ by Stefanov \& Kevrekidis \cite{SK} (for $V
\equiv 0$), Komech, Kopylova \& Kunze \cite{KKK} (for compact
$V$), and Pelinovsky \& Stefanov \cite{PS} (for decaying $V$).
With more efforts and more elaborate analysis, our results can be
generalized to large solitons with or without potential $V$ under
some restrictions on spectrum of the non-self-adjoint operator
associated with linearization at the nonlinear ground state.
From a technical point of view, many previous works on asymptotic
stability of solitary waves in continuous nonlinear Schr\"{o}dinger equations
address critical and supercritical cases, which in $d = 1$
corresponds to $p \geq 2$. Because the dispersive decay in
$l^1-l^{\infty}$ norm is slower for the DNLS equation, the
critical power appears at $p = 3$ and the proof of
asymptotic stability of discrete solitons can be developed for $p \geq 3$. The most
interesting case of the cubic DNLS equation for $p = 1$ is
excluded from our consideration. To prove asymptotic stability of
discrete solitons for $p \geq 3$, we extend the pointwise
dispersive decay estimates from \cite{PS} to Strichartz estimates,
which allow us for a better control of the dispersive parts of the
solution. The nonlinear analysis follows the steps in the proof of
asymptotic stability of continuous solitons by Mizumachi
\cite{Miz}.
In addition to analytical results, we also approximate the time
evolution of small solitons numerically in the DNLS equation
(\ref{dNLS}) with $p = 1,2,3$. Not only do we confirm the asymptotic
stability of discrete solitons in all the cases, but we also find
that the actual decay rate of perturbations near the small soliton
is faster than the one used in our analytical arguments.
The article is organized as follows. The main result for $p \geq 3$ is
formulated in Section 2. Linear estimates are derived in Section
3. The proof of the main theorem is developed in Section 4.
Numerical illustrations for $p = 1, 2, 3$ are discussed in Section
5. Appendix A gives proofs of technical formulas used in Section
3.
{\bf Acknowledgement.} When the paper was essentially complete, we
became aware of a similar work of Cuccagna \& Tarulli \cite{CT},
where asymptotic stability of small discrete solitons of the DNLS
equation (\ref{dNLS}) was proved for $p \geq 3$.
Stefanov's research is supported in part by NSF-DMS 0701802.
Kevrekidis' research is supported in part by NSF-DMS-0806762,
NSF-CAREER and the Alexander von Humboldt Foundation.
\section{Preliminaries and the main result}
In what follows, we use bold-faced notations for vectors in
discrete spaces $l_s^1$ and $l_s^2$ on $\mathbb{Z}$ defined by
their norms
$$
\| {\bf u} \|_{l^1_s} := \sum_{n \in \mathbb{Z}} (1+n^2)^{s/2}
|u_n|, \quad \| {\bf u} \|_{l^2_s} := \left( \sum_{n \in
\mathbb{Z}} (1+n^2)^{s} |u_n|^2 \right)^{1/2}.
$$
Components of ${\bf u}$ are denoted by regular font, e.g. $u_n$
for $n \in \mathbb{Z}$.
We shall make the following assumptions on the external potential
${\bf V}$ defined on the lattice $\mathbb{Z}$ and on the spectrum
of the self-adjoint operator $H = -\Delta + {\bf V}$ in $l^2$.
\begin{itemize}
\item[(V1)] ${\bf V} \in l^1_{2\sigma}$ for a fixed $\sigma >
\frac{5}{2}$.
\item[(V2)] ${\bf V}$ is generic in the sense that no solution
$\mbox{\boldmath $\psi$}_0$ of equation $H \mbox{\boldmath
$\psi$}_0 = 0$ exists in $l^2_{-\sigma}$ for $\frac{1}{2} < \sigma
\leq \frac{3}{2}$.
\item[(V3)] ${\bf V}$ supports exactly one negative eigenvalue
$\omega_0 < 0$ of $H$ with an eigenvector $\mbox{\boldmath
$\psi$}_0 \in l^2$ and no eigenvalues above $4$.
\end{itemize}
The first two assumptions (V1) and (V2) are needed for
the dispersive decay estimates developed in \cite{PS}. The last
assumption (V3) is needed for existence of a family $\mbox{\boldmath
$\phi$}(\omega)$ of real-valued decaying solutions of
the stationary DNLS equation
\begin{equation}
\label{stationaryDNLS} (-\Delta + V_n) \phi_n(\omega) + \gamma
\phi_n^{2p+1}(\omega) = \omega \phi_n(\omega), \quad n \in
\mathbb{Z},
\end{equation}
near $\omega = \omega_0 < 0$. This is a standard local bifurcation
of decaying solutions in a system of infinitely many algebraic equations
(see \cite{Nirenberg} for details).
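On a finite lattice, the bifurcating branch described in the lemma below
can be traced numerically by Newton's method applied to
(\ref{stationaryDNLS}), seeded with the rescaled linear eigenvector; the
following is a minimal sketch, with a placeholder potential and Dirichlet
truncation.
\begin{verbatim}
import numpy as np

def stationary_dnls(V, omega, gamma, p, phi0, tol=1e-12, max_iter=50):
    """Newton iteration for (-Delta + V) phi + gamma phi^{2p+1} = omega phi
    on a finite lattice with Dirichlet truncation, starting from phi0."""
    N = len(V)
    H = (np.diag(2.0 + V) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1))
    phi = phi0.copy()
    for _ in range(max_iter):
        F = H @ phi + gamma * phi ** (2 * p + 1) - omega * phi
        if np.linalg.norm(F) < tol:
            break
        J = (H + np.diag((2 * p + 1) * gamma * phi ** (2 * p))
             - omega * np.eye(N))
        phi -= np.linalg.solve(J, F)
    return phi

N, gamma, p = 201, 1.0, 1
n = np.arange(N) - N // 2
V = -2.0 * np.exp(-n.astype(float) ** 2)        # placeholder potential
H = (np.diag(2.0 + V) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1))
w, v = np.linalg.eigh(H)
omega0, psi0 = w[0], v[:, 0]                    # linear eigenpair
eps = 0.05
seed = np.sqrt(eps) * psi0 / np.linalg.norm(psi0 ** 2)  # p = 1 scaling
phi = stationary_dnls(V, omega0 + eps, gamma, p, seed)
print(omega0, np.linalg.norm(phi))
\end{verbatim}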
\begin{lemma}[Local bifurcation of stationary solutions]
\label{lemma-bifurcation} Assume that ${\bf V} \in l^{\infty}$ and
that $H$ has an eigenvalue $\omega_0$ with a normalized
eigenvector $\mbox{\boldmath $\psi$}_0 \in l^2$ such that $\|
\mbox{\boldmath $\psi$}_0 \|_{l^2} = 1$. Let $\epsilon := \omega -
\omega_0$, $\gamma = +1$, and $\epsilon_0 > 0$ be sufficiently
small. For any $\epsilon \in (0,\epsilon_0)$, there exists an
$\epsilon$-independent constant $C > 0$ such that the stationary
DNLS equation (\ref{stationaryDNLS}) admits a solution
$\mbox{\boldmath $\phi$}(\omega) \in
C^2([\omega_0,\omega_0+\epsilon_0],l^2)$ satisfying
$$
\left\| \mbox{\boldmath $\phi$}(\omega) -
\frac{\epsilon^{\frac{1}{2p}} \mbox{\boldmath $\psi$}_0}{\| \mbox{\boldmath $\psi$}_0
\|^{1+\frac{1}{p}}_{l^{2p+2}}}
\right\|_{l^2} \leq C \epsilon^{1 + \frac{1}{2p}}.
$$
Moreover, the solution $\mbox{\boldmath $\phi$}(\omega)$ decays
exponentially to zero as $|n| \to \infty$.
\end{lemma}
\begin{remark}
\label{remark-bifurcation} Because of the exponential decay of
$\mbox{\boldmath $\phi$}(\omega)$ as $|n| \to \infty$, the
solution $\mbox{\boldmath $\phi$}(\omega)$ exists in $l^2_{\sigma}$
for all $\sigma \geq 0$. In addition, since $ \| \mbox{\boldmath
$\phi$}\|_{l^1} \leq C_{\sigma} \| \mbox{\boldmath $\phi$}
\|_{l^2_{\sigma}}, $ for any $\sigma > \frac{1}{2}$, the solution
$\mbox{\boldmath $\phi$}(\omega)$ also exists in $l^1$.
\end{remark}
\begin{remark}
The case $\gamma = -1$ with the local bifurcation to the domain
$\omega < \omega_0$ is completely analogous. For simplicity,
we shall develop the analysis for $\gamma = +1$ only.
\end{remark}
To work with solutions of the DNLS equation (\ref{dNLS}) for all
$t \in {\mathbb R}_+$ starting with some initial data at $t = 0$,
we need global well-posedness of the Cauchy problem for
(\ref{dNLS}). Because $H$ is a bounded operator from $l^2$ to
$l^2$, global well-posedness for (\ref{dNLS}) follows from simple
arguments based on the flux conservation equation
\begin{equation}
\label{balance} i \frac{d}{dt} |u_n|^2 = u_n (\bar{u}_{n+1} +
\bar{u}_{n-1}) - \bar{u}_n (u_{n+1}+u_{n-1})
\end{equation}
and the contraction mapping arguments (see \cite{PP} for details).
\begin{lemma}[Global well-posedness]
\label{lemma-wellposedness} Fix $\sigma \geq 0$. For any ${\bf u}_0
\in l^2_{\sigma}$, there exists a unique solution ${\bf
u}(t) \in C^1(\mathbb{R}_+,l^2_{\sigma})$ such that ${\bf
u}(0) = {\bf u}_0$ and ${\bf u}(t)$ depends continuously on
${\bf u}_0$.
\end{lemma}
\begin{remark}
Global well-posedness holds also on $\mathbb{R}_-$ (and thus on
$\mathbb{R}$) since the DNLS equation (\ref{dNLS}) is a reversible
dynamical system. We shall work in the positive time intervals
only.
\end{remark}
Equipped with the results above, we decompose a solution to the DNLS equation
(\ref{dNLS}) into a family of stationary solutions with time-varying
parameters and a radiation part using the substitution
\begin{equation}
\label{decomposition}
{\bf u}(t) = e^{-i \theta(t)} \left( \mbox{\boldmath
$\phi$}(\omega(t)) + {\bf z}(t) \right),
\end{equation}
where $(\omega,\theta) \in \mathbb{R}^2$ represents a two-dimensional
orbit of stationary solutions ${\bf u}(t) = e^{-i\theta -i \omega t} \mbox{\boldmath
$\phi$}(\omega)$ (their time
evolution will be specified later) and ${\bf z}(t) \in
C^1(\mathbb{R}_+,l^2_{\sigma})$ solves the
time-evolution equation in the form
\begin{eqnarray}
\label{time-evolution-z} i \dot{{\bf z}} = (H-\omega) {\bf z} -
(\dot{\theta} - \omega) (\mbox{\boldmath $\phi$}(\omega) + {\bf z})
- i \dot{\omega}
\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) +
{\bf N}(\mbox{\boldmath $\phi$}(\omega)+{\bf z}) - {\bf N}(\mbox{\boldmath $\phi$}(\omega)),
\end{eqnarray}
where $H = -\Delta + {\bf V}$, $[{\bf N}(\mbox{\boldmath
$\psi$})]_n = \gamma |\psi_n|^{2p} \psi_n$, and $\partial_{\omega}
\mbox{\boldmath $\phi$}(\omega)$ exists thanks to Lemma
\ref{lemma-bifurcation}. The linearized time evolution at the
stationary solution $ \mbox{\boldmath $\phi$}(\omega)$ involves
operators
$$
L_- = H - \omega + {\bf W}, \quad L_+ = H - \omega + (2p+1) {\bf
W},
$$
where $W_n = \gamma \phi_n^{2p}(\omega)$ and ${\bf W}$ decays
exponentially as $|n| \to \infty$ thanks to Lemma
\ref{lemma-bifurcation}. The linearized time evolution in
variables ${\bf v} = {\rm Re}({\bf z})$ and ${\bf w} = {\rm
Im}({\bf z})$ involves a symplectic structure which can be
characterized by the non-self-adjoint eigenvalue problem
\begin{equation}
\label{linearizedNLS} L_+ {\bf v} = - \lambda {\bf w}, \quad L_-
{\bf w} = \lambda {\bf v}.
\end{equation}
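On a truncated lattice, this structure can be inspected directly, since
(\ref{linearizedNLS}) is equivalent to the eigenvalue problem for the
block matrix $\left[ \begin{array}{cc} 0 & L_- \\ -L_+ & 0 \end{array}
\right]$. The sketch below reuses \texttt{V}, \texttt{phi},
\texttt{omega0}, \texttt{eps}, $\gamma$, and $p$ from the Newton sketch
above and is illustrative only.
\begin{verbatim}
import numpy as np

def linearized_spectrum(V, omega, gamma, p, phi):
    """Eigenvalues lambda of (v, w) -> (L_- w, -L_+ v), with
    L_- = H - omega + W, L_+ = H - omega + (2p+1) W, W = gamma phi^{2p}."""
    N = len(V)
    H = (np.diag(2.0 + V) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1))
    W = np.diag(gamma * phi ** (2 * p))
    Lm = H - omega * np.eye(N) + W
    Lp = H - omega * np.eye(N) + (2 * p + 1) * W
    M = np.block([[np.zeros((N, N)), Lm], [-Lp, np.zeros((N, N))]])
    return np.linalg.eigvals(M)

lam = linearized_spectrum(V, omega0 + eps, gamma, p, phi)
print(np.sort(np.abs(lam))[:4])   # a double eigenvalue at the origin
\end{verbatim}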
Using Lemma \ref{lemma-bifurcation}, we derive the following
result.
\begin{lemma}[Double null subspace]
For any $\epsilon \in (0,\epsilon_0)$, the linearized eigenvalue
problem (\ref{linearizedNLS}) admits a double zero eigenvalue with
a one-dimensional kernel, isolated from the rest of the spectrum.
The generalized kernel is spanned by vectors $({\bf
0},\mbox{\boldmath $\phi$}(\omega)), (- \partial_{\omega}
\mbox{\boldmath $\phi$}(\omega),{\bf 0}) \in l^2$ satisfying
$$
L_- \mbox{\boldmath $\phi$}(\omega) = {\bf 0}, \qquad L_+
\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) = \mbox{\boldmath
$\phi$}(\omega).
$$
If $({\bf v},{\bf w}) \in l^2$ is symplectically orthogonal to the
double subspace of the generalized kernel, then
$$
\langle {\bf v},\mbox{\boldmath $\phi$}(\omega) \rangle = 0, \quad
\langle {\bf w},\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)
\rangle = 0,
$$
where $\langle {\bf u},{\bf v} \rangle := \sum_{n \in \mathbb{Z}}
u_n \bar{v}_n$.
\end{lemma}
\begin{proof}
By Lemma 1 in \cite{PS}, operator $H$ has the essential spectrum
on $[0,4]$. Because of the exponential decay of ${\bf W}$ as $|n|
\to \infty$, the essential spectrum of $L_+$ and $L_-$ is shifted
by $-\omega \approx -\omega_0 > 0$, so that the zero point in the
spectrum of the linearized eigenvalue problem
(\ref{linearizedNLS}) is isolated from the continuous spectrum and
other isolated eigenvalues. The geometric kernel of the linearized
operator $L = {\rm diag}(L_+,L_-)$ is one-dimensional for
$\epsilon \in (0,\epsilon_0)$ since $L_- \mbox{\boldmath
$\phi$}(\omega) = {\bf 0}$ is nothing but the stationary DNLS
equation (\ref{stationaryDNLS}) whereas $L_+$ has an empty kernel
thanks to the perturbation theory and Lemma
\ref{lemma-bifurcation}. Indeed, for a small $\epsilon \in
(0,\epsilon_0)$, we have
$$
\langle \mbox{\boldmath $\psi$}_0, L_+ \mbox{\boldmath $\psi$}_0 \rangle =
2p \gamma \epsilon + {\cal O}(\epsilon^2) \neq 0.
$$
By the perturbation theory, a simple zero eigenvalue of $L_+$ for
$\epsilon = 0$ becomes a positive eigenvalue for $\epsilon > 0$
(if $\gamma = +1$). The second (generalized) eigenvector $(-
\partial_{\omega} \mbox{\boldmath $\phi$}(\omega),{\bf 0})$ is
found by direct computation thanks to Lemma
\ref{lemma-bifurcation}. It remains to show that the third
(generalized) eigenvector does not exist. If it did, it would
satisfy the equation
$$
L_- {\bf w}_0 = -\partial_{\omega} \mbox{\boldmath $\phi$}(\omega).
$$
However,
$$
\langle \mbox{\boldmath $\phi$}(\omega),\partial_{\omega}
\mbox{\boldmath $\phi$}(\omega) \rangle = \frac{1}{2} \frac{d}{d
\omega} \| \mbox{\boldmath $\phi$}(\omega) \|^2_{l^2}
= \frac{\epsilon^{\frac{1}{p}-1}}{2p \| \mbox{\boldmath $\psi$}_0\|^{2 + \frac{2}{p}}_{l^{2p+2}}}
\left( 1 + {\cal O}(\epsilon) \right) \neq 0
$$
for $\epsilon \in (0,\epsilon_0)$ by Lemma
\ref{lemma-bifurcation}. Therefore, no ${\bf w}_0 \in l^2$ exists.
\end{proof}
To determine the time evolution of varying parameters $(\omega,\theta)$
in the evolution equation (\ref{time-evolution-z}), we shall
add the condition that ${\bf z}(t)$ is symplectically orthogonal
to the two-dimensional null subspace of the linearized problem
(\ref{linearizedNLS}). To normalize the eigenvectors uniquely, we set
\begin{equation}
\label{eigenvectors-normalized} \mbox{\boldmath $\psi$}_1 =
\frac{\mbox{\boldmath $\phi$}(\omega)}{\|\mbox{\boldmath
$\phi$}(\omega)\|_{l^2}}, \quad \mbox{\boldmath $\psi$}_2 =
\frac{\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)}{
\|\partial_{\omega} \mbox{\boldmath $\phi$}(\omega)\|_{l^2}}
\end{equation}
and require that
\begin{equation}
\label{constraints} \langle {\rm Re}{\bf z}(t),\mbox{\boldmath $\psi$}_1
\rangle = \langle {\rm Im}{\bf z}(t),\mbox{\boldmath $\psi$}_2 \rangle =
0.
\end{equation}
By Lemma \ref{lemma-bifurcation}, both eigenvectors
$\mbox{\boldmath $\psi$}_1$ and $\mbox{\boldmath $\psi$}_2$ are
locally close to $\mbox{\boldmath $\psi$}_0$, the eigenvector of
$H$ for eigenvalue $\omega_0$, in any norm, e.g.
\begin{equation}
\| \mbox{\boldmath $\psi$}_1 - \mbox{\boldmath $\psi$}_0 \|_{l^2}
+ \| \mbox{\boldmath $\psi$}_2 - \mbox{\boldmath $\psi$}_0
\|_{l^2} \leq C \epsilon,
\end{equation}
for some $C > 0$. Although the vector field of the time evolution
problem (\ref{time-evolution-z}) does not lie in the orthogonal
complement of $\mbox{\boldmath $\psi$}_0$, that is in the
absolutely continuous spectrum of $H$, the difference is small for
small $\epsilon > 0$. We shall prove that the conditions
(\ref{constraints}) define a unique decomposition
(\ref{decomposition}).
\begin{lemma}[Decomposition]
\label{lemma-decomposition} Let $\epsilon > 0$ and $\delta
> 0$ be sufficiently small. Assume that there exist $T = T(\epsilon,\delta)$ and $C_0 >
0$, such that ${\bf u}(t) \in C^1([0,T],l^2)$ satisfies
\begin{equation}
\label{u-bound}
\| {\bf u}(t) - \mbox{\boldmath $\phi$}(\omega_0 +
\epsilon)\|_{l^2} \leq C_0 \delta \epsilon^{\frac{1}{2p}},
\end{equation}
uniformly on $[0,T]$. There exists a unique choice of
$(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$ and ${\bf z}(t) \in
C^1([0,T],l^2)$ in the decomposition
(\ref{decomposition}) provided the constraints (\ref{constraints})
are met. Moreover, there exists $C
> 0$ such that
\begin{equation}
\label{theta-omega-bounds}
|\omega(t) - \omega_0 - \epsilon | \leq C \delta \epsilon, \quad | \theta(t)| \leq C
\delta, \quad \| {\bf z}(t) \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}},
\end{equation}
uniformly on $[0,T]$.
\end{lemma}
\begin{proof}
We write the decomposition (\ref{decomposition}) in the form
\begin{equation}
\label{z-representation}
{\bf z} = e^{i \theta} \left({\bf u} - \mbox{\boldmath $\phi$}(\omega_0+\epsilon)\right) +
\left( e^{i \theta} \mbox{\boldmath $\phi$}(\omega_0+\epsilon) -
\mbox{\boldmath $\phi$}(\omega) \right).
\end{equation}
First, we show that the constraints (\ref{constraints}) give
unique values of $(\omega,\theta)$ satisfying bounds
(\ref{theta-omega-bounds}) uniformly in $[0,T]$ provided the bound
(\ref{u-bound}) holds. To do so, we rewrite (\ref{constraints})
and (\ref{z-representation}) as a fixed-point problem ${\bf
F}(\omega,\theta) = {\bf 0}$, where ${\bf F} : \mathbb{R}^2
\mapsto \mathbb{R}^2$ is given by
$$
{\bf F}(\omega,\theta) = \left[ \begin{array}{c} \langle {\rm Re}
({\bf u} - \mbox{\boldmath $\phi$}^{(0)}) e^{i \theta},\mbox{\boldmath
$\psi$}_1 \rangle + \langle \mbox{\boldmath $\phi$}^{(0)} \cos \theta
- \mbox{\boldmath $\phi$}(\omega),\mbox{\boldmath $\psi$}_1 \rangle \\
\langle {\rm Im} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}) e^{i
\theta},\mbox{\boldmath $\psi$}_2 \rangle + \langle
\mbox{\boldmath $\phi$}^{(0)} \sin \theta, \mbox{\boldmath $\psi$}_2
\rangle \end{array} \right],
$$
where $\mbox{\boldmath $\phi$}^{(0)} := \mbox{\boldmath
$\phi$}(\omega_0 + \epsilon)$. We note that ${\bf F}$ is $C^1$ in
$(\theta,\omega)$ thanks to Lemma \ref{lemma-bifurcation}. Direct
computations give the vector field
$$
{\bf F}(\omega_0+\epsilon,0) = \left[ \begin{array}{c} \langle
{\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_1 \rangle \\
\langle {\rm Im} ({\bf u} - \mbox{\boldmath
$\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_2 \rangle
\end{array} \right]
$$
and the Jacobian $D {\bf F}(\omega_0+\epsilon,0) = {\bf D}_1 +
{\bf D}_2$ with
\begin{eqnarray*}
{\bf D}_1 & = & \left[ \begin{array}{cc} - \langle
\partial_{\omega} \mbox{\boldmath $\phi$}^{(0)},\mbox{\boldmath
$\psi$}_1^{(0)} \rangle & 0
\\ 0 & \langle \mbox{\boldmath $\phi$}^{(0)},
\mbox{\boldmath $\psi$}^{(0)}_2 \rangle \end{array} \right], \\
{\bf D}_2 & = & \left[ \begin{array}{cc} \langle {\rm Re} ({\bf u}
- \mbox{\boldmath $\phi$}^{(0)}), \partial_{\omega}
\mbox{\boldmath $\psi$}_1^{(0)} \rangle & - \langle {\rm Im} ({\bf
u} - \mbox{\boldmath
$\phi$}^{(0)}), \mbox{\boldmath $\psi$}^{(0)}_1 \rangle \\
\langle {\rm Im} ({\bf u} - \mbox{\boldmath
$\phi$}^{(0)}), \partial_{\omega} \mbox{\boldmath $\psi$}^{(0)}_2 \rangle &
\langle {\rm Re} ({\bf u} - \mbox{\boldmath $\phi$}^{(0)}),\mbox{\boldmath $\psi$}^{(0)}_2 \rangle
\end{array} \right],
\end{eqnarray*}
where $\mbox{\boldmath $\psi$}^{(0)}_{1,2} = \mbox{\boldmath
$\psi$}_{1,2} |_{\omega = \omega_0 + \epsilon}$ and
$\partial_{\omega} \mbox{\boldmath $\psi$}^{(0)}_{1,2} =
\partial_{\omega} \mbox{\boldmath $\psi$}_{1,2} |_{\omega =
\omega_0 + \epsilon}$. Thanks to the bound (\ref{u-bound}) and the
normalization of $\mbox{\boldmath $\psi$}_{1,2}$, there exists an
$(\epsilon,\delta)$-independent constant $C_0 > 0$ such that
$$
\| {\bf F}(\omega_0+\epsilon,0) \| \leq C_0 \delta
\epsilon^{\frac{1}{2p}}.
$$
On the other hand, $D {\bf F}(\omega_0+\epsilon,0)$ is invertible
for small $\epsilon > 0$ since
$$
|({\bf D}_1)_{11}| \geq C_1 \epsilon^{\frac{1}{2p}-1}, \quad |({\bf
D}_1)_{22}| \geq C_2 \epsilon^{\frac{1}{2p}}
$$
and
$$
|({\bf D}_2)_{11}| + |({\bf D}_2)_{21}| \leq C_3 \delta \epsilon^{\frac{1}{2p} - 1}, \quad
|({\bf D}_2)_{12}| + |({\bf D}_2)_{22}| \leq C_4 \delta \epsilon^{\frac{1}{2p}},
$$
for some ($\epsilon$,$\delta$)-independent constants $C_1,C_2,C_3,C_4 > 0$. By the Implicit
Function Theorem, there exists a unique root of ${\bf
F}(\omega,\theta) = {\bf 0}$ near $(\omega_0+\epsilon,0)$ for any
${\bf u}(t)$ satisfying (\ref{u-bound}) such that
$$
|\omega(t) - \omega_0 - \epsilon | \leq C \delta \epsilon, \quad | \theta(t)| \leq C
\delta,
$$
for some $C > 0$. Moreover, if ${\bf u}(t) \in C^1([0,T],l^2)$, then
$(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$. Finally, existence of a unique
${\bf z}(t)$ and the bound $\| {\bf z}(t) \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}}$
follow from the representation (\ref{z-representation}) and the triangle inequality.
\end{proof}
Assuming $(\omega,\theta) \in C^1([0,T],\mathbb{R}^2)$ at least
locally in time and using Lemma \ref{lemma-decomposition}, we
define the time evolution of $(\omega,\theta)$ from the
projections of the time evolution equation
(\ref{time-evolution-z}) with the symplectic orthogonality conditions (\ref{constraints}).
The resulting system is written in the matrix--vector form
\begin{equation} \label{3}
{\bf A}(\omega,{\bf z}) \left[ \begin{array}{cc} \dot{\omega} \\
\dot{\theta} - \omega \end{array} \right] = {\bf f}(\omega,{\bf
z}),
\end{equation}
where
$$
{\bf A}(\omega,{\bf z}) =
\left[ \begin{array}{ccc} \langle \partial_{\omega}
\mbox{\boldmath $\phi$}(\omega),\mbox{\boldmath
$\psi$}_1 \rangle - \langle {\rm Re} {\bf z},\partial_{\omega} \mbox{\boldmath
$\psi$}_1 \rangle & \langle {\rm Im} {\bf z},\mbox{\boldmath
$\psi$}_1 \rangle \\
\langle {\rm Im} {\bf z}, \partial_{\omega} \mbox{\boldmath
$\psi$}_2 \rangle & \langle \mbox{\boldmath $\phi$}(\omega) + {\rm Re} {\bf z},
\mbox{\boldmath $\psi$}_2 \rangle \end{array} \right]
$$
and
$$
{\bf f}(\omega,{\bf z}) = \left[ \begin{array}{l} \langle {\rm
Im} {\bf N}(\mbox{\boldmath $\phi$}+{\bf z})- {\bf W} {\bf z},
\mbox{\boldmath $\psi$}_1 \rangle \\
\langle {\rm Re} {\bf N}(\mbox{\boldmath $\phi$}+{\bf z}) - {\bf
N}(\mbox{\boldmath $\phi$})-(2p+1) {\bf W} {\bf z},
\mbox{\boldmath $\psi$}_2 \rangle
\end{array} \right].
$$
Using an elementary property for power functions
$$
||a+b|^{2p}(a+b)-|a|^{2p}a|\leq C_p (|a|^{2p}|b|+|b|^{2p+1}),
$$
for some $C_p > 0$, where $a,b \in \mathbb{C}$ are arbitrary, we
bound the vector fields of (\ref{time-evolution-z}) and (\ref{3})
by
\begin{eqnarray}
\label{estimate-N}
\| {\bf N}(\mbox{\boldmath $\phi$}(\omega)+{\bf z}) - {\bf
N}(\mbox{\boldmath $\phi$}(\omega)) \|_{l^2} & \leq & C \left( \|
|\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf z}| \|_{l^2} +
\| {\bf z} \|_{l^2}^{2p+1} \right), \\
\label{estimate-f} \| {\bf f}(\omega,{\bf z}) \| & \leq & C
\sum_{j=1}^2 \left( \| |\mbox{\boldmath $\phi$}(\omega)|^{2p-1}
|\mbox{\boldmath $\psi$}_j| |{\bf z}|^2 \|_{l^1} + \|
|\mbox{\boldmath $\psi$}_j| |{\bf z}|^{2p+1} \|_{l^1} \right),
\end{eqnarray}
for some $C > 0$, where the pointwise multiplication of vectors on
$\mathbb{Z}$ is understood in the sense
$$
(|\mbox{\boldmath $\phi$}| |\mbox{\boldmath $\psi$}|)_n = |\phi_n|
|\psi_n|.
$$
By Lemmas \ref{lemma-bifurcation} and \ref{lemma-decomposition},
${\bf A}(\omega,{\bf z})$ is invertible for a small ${\bf z} \in
l^2$ and a small $\epsilon \in (0,\epsilon_0)$ so that solutions
of system (\ref{3}) satisfy the estimates
\begin{eqnarray}
\label{33} |\dot{\omega}| & \leq & C \epsilon^{2-\frac{1}{p}}
\left( \| |\mbox{\boldmath $\psi$}_1| |{\bf z}|^2 \|_{l^1} + \|
|\mbox{\boldmath $\psi$}_2| |{\bf z}|^2 \|_{l^1} \right), \\
\label{33a} |\dot{\theta}-\omega| & \leq & C
\epsilon^{1-\frac{1}{p}} \left( \| |\mbox{\boldmath $\psi$}_1|
|{\bf z}|^2 \|_{l^1} + \| |\mbox{\boldmath $\psi$}_2| |{\bf z}|^2
\|_{l^1} \right),
\end{eqnarray}
for some $C > 0$ uniformly in $\| {\bf z} \|_{l^2} \leq C_0
\epsilon^{\frac{1}{2p}}$ for some $C_0 > 0$.
\begin{remark}
{\rm The estimates (\ref{33}) and (\ref{33a}) show that if $\|
{\bf z} \|_{l^2} \leq C \delta \epsilon^{\frac{1}{2p}}$ for some
$C > 0$, then
$$
|\omega(t) - \omega(0)| \leq C \delta^2 \epsilon^2, \quad \left|
\theta(t) - \int_0^t \omega(t') dt' \right| \leq C \delta^2 \epsilon,
$$
uniformly on $[0,T]$ for any fixed $T > 0$. These bounds are smaller than
bounds (\ref{theta-omega-bounds}) of Lemma \ref{lemma-decomposition}. They
become comparable with bounds (\ref{theta-omega-bounds}) for larger
time intervals $[0,T]$, where $T \leq \frac{C_0}{\delta \epsilon}$ for some $C_0 > 0$. Our
main task is to extend these bounds globally to $T =
\infty$.}
\end{remark}
By the theorem on orbital stability in \cite{weinstein}, the
trajectory of the DNLS equation (\ref{dNLS}) originating from a
point in a local neighborhood of the stationary solution
$\mbox{\boldmath $\phi$}(\omega(0))$ remains in a local
neighborhood of the stationary solution $\mbox{\boldmath
$\phi$}(\omega(t))$ for all $t \in \mathbb{R}_+$. By a definition
of orbital stability, for any $\mu_0 > 0$ there exists a $\nu_0 >
0$ such that if $|\omega(0) - \omega_0| \leq \nu_0$ then
$|\omega(t) - \omega_0| \leq \mu_0$ uniformly on $t \in
\mathbb{R}_+$. Therefore, there exists a $\delta(\epsilon)$ for
each $\epsilon \in (0,\epsilon_0)$ such that $T(\epsilon,\delta) =
\infty$ for any $\delta \in (0,\delta(\epsilon))$ in Lemma
\ref{lemma-decomposition}. To prove the main result on asymptotic
stability, we need to show that the trajectory approaches the
stationary solution $\mbox{\boldmath $\phi$}(\omega_{\infty})$ for
some $\omega_{\infty} \in (\omega_0,\omega_0 + \epsilon_0)$. Our
main result is formulated as follows.
\begin{theorem}[Asymptotic stability in the energy space]
\label{theorem-main} Assume (V1)--(V3), fix $\gamma = +1$ and $p
\geq 3$. Let $\epsilon
> 0$ and $\delta > 0$ be sufficiently small, and assume that
$\theta(0) = 0$, $\omega(0) = \omega_0 + \epsilon$,
and
$$
\| {\bf u}(0) - \mbox{\boldmath $\phi$}(\omega_0 + \epsilon) \|_{l^2} \leq
C_0 \delta \epsilon^{\frac{1}{2p}}
$$
for some $C_0 > 0$. Then,
there exist $\omega_{\infty} \in (\omega_0,\omega_0 +
\epsilon_0)$, $(\omega,\theta) \in
C^1(\mathbb{R}_+,\mathbb{R}^2)$, and a solution ${\bf u}(t) \in X:=
C^1(\mathbb{R}_+,l^2)\cap L^6(\mathbb{R}_+,l^\infty)$ to the DNLS equation (\ref{dNLS})
such that
$$
\lim_{t \to \infty} \omega(t) = \omega_{\infty}, \quad
\| {\bf u}(t) - e^{-i\theta(t)} \mbox{\boldmath
$\phi$}(\omega(t)) \|_{X} \leq C\delta\varepsilon^{1/(2p)}.
$$
\end{theorem}
Theorem \ref{theorem-main} is proved in Section 4. To bound
solutions of the time-evolution problem (\ref{time-evolution-z})
in the space $X$ (intersected with some other spaces of technical
nature), we need some linear estimates, which are described in
Section 3.
\section{Linear estimates}
We need several types of linear estimates, each is designed to
control different nonlinear terms of the vector field of the
evolution equation (\ref{time-evolution-z}). For notational
convenience, we shall use $L^p_t$ and $l^q_n$ to denote $L^p$
space on $t \in [0,T]$ and $l^q$ space on $n \in \mathbb{Z}$,
where $T > 0$ is an arbitrary time including $T = \infty$. The notation
$<n> = (1 + n^2)^{1/2}$ is used for the weights in $l^q_n$ norms.
The constant $C > 0$ is a generic constant, which may change from
one line to another line.
\subsection{Decay and Strichartz estimates}
Under assumptions (V1)--(V2) on the potential, the following result was proved in
\cite{PS}.
\begin{lemma}[Dispersive decay estimates]
\label{lemma-dispersive} Fix $\sigma > \frac{5}{2}$ and assume
(V1)--(V2). There exists a constant $C > 0$ depending on ${\bf V}$
such that
\begin{eqnarray}
\label{eq:15} \left\| \langle n \rangle^{-\sigma} e^{-i t
H}P_{a.c.}(H) {\bf f} \right\|_{l^2_n}
& \leq & C (1+t)^{-3/2} \| \langle n \rangle^{\sigma} {\bf f} \|_{l^2_n}, \\
\label{eq:16} \left\| e^{-i t H}P_{a.c.}(H) {\bf f}
\right\|_{l^\infty_n} & \leq & C (1+t)^{-1/3} \| {\bf f}
\|_{l^1_n},
\end{eqnarray}
for all $t \in \mathbb{R}_+$, where $P_{a.c.}(H)$ is the
projection to the absolutely continuous spectrum of
$H$.
\end{lemma}
\begin{remark}
Unlike the continuous case, the upper bound (\ref{eq:16}) is
non-singular as $t \to 0$ because the discrete case always enjoys
an estimate $\left\| {\bf f} \right\|_{l^\infty_n} \leq \| {\bf
f} \|_{l^2_n} \leq \| {\bf f} \|_{l^1_n}$.
\end{remark}
Using Lemma \ref{lemma-dispersive} and Theorem 1.2 of Keel-Tao \cite{KT},
the following corollary transfers pointwise decay estimates into Strichartz estimates.
\begin{corollary}[Discrete Strichartz estimates]
\label{corollary-Strichartz} There exists a constant $C > 0$ such that
\begin{eqnarray}
\label{eq:Strichartz1} \left\| e^{-i t H} P_{a.c.}(H) {\bf f}
\right\|_{L^6_t l^{\infty}_n \cap L^{\infty}_t l^2_n} & \leq & C \| {\bf f} \|_{l^2_n}, \\
\label{eq:Strichartz2} \left\| \int_0^t e^{-i (t-s) H} P_{a.c.}(H)
{\bf g}(s) ds \right\|_{L^6_t l^{\infty}_n \cap L^{\infty}_t
l^2_n} & \leq & C \| {\bf g} \|_{L^1_t l^2_n},
\end{eqnarray}
where the norm in $L^p_t l^q_n$ is defined by
$$
\| {\bf f} \|_{L^p_t l^q_n} = \left( \int_{\mathbb{R}_+} \left(
\| {\bf f}(t) \|_{l^q_n} \right)^p dt \right)^{1/p}.
$$
\end{corollary}
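These mixed norms are easy to evaluate for a truncated propagator:
diagonalize $H$ on a finite lattice, discard the bound state, and
integrate $\| e^{-i t H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n}^6$ over a
long time window. The sketch below does this; the potential, the data,
the cut-off $T$, and the crude sign-based spectral projection are all
placeholder choices.
\begin{verbatim}
import numpy as np

def strichartz_norm(V, f, T=200.0, nt=4000):
    """Approximate || e^{-itH} P_ac(H) f ||_{L^6_t l^inf_n} on [0, T] by
    eigenfunction expansion of the truncated H = -Delta + V (Dirichlet)."""
    N = len(V)
    H = (np.diag(2.0 + V) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1))
    w, v = np.linalg.eigh(H)
    ac = w >= 0.0                     # crude proxy: drop the bound state
    c = (v.T @ f)[ac]
    t = np.linspace(0.0, T, nt)
    u = np.exp(-1j * np.outer(t, w[ac])) @ (c[:, None] * v[:, ac].T)
    sup_n = np.abs(u).max(axis=1)     # l^inf_n norm at each time
    return (np.sum(sup_n ** 6) * (t[1] - t[0])) ** (1.0 / 6.0)

N = 401
n = np.arange(N) - N // 2
V = -2.0 * np.exp(-n.astype(float) ** 2)         # placeholder potential
f = np.exp(-((n / 10.0) ** 2)).astype(complex)
print(strichartz_norm(V, f), np.linalg.norm(f))  # compare with C ||f||_{l^2}
\end{verbatim}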
\subsection{Time averaged estimates}
To control the evolution of the varying parameters $(\omega,\theta)$, we derive additional
time averaged estimates. Similar to the continuous case, these estimates are only needed
in one dimension, because the time decay provided by the Strichartz estimates is
insufficient to guarantee time integrability of $\dot{\omega}(t)$ and
$\dot{\theta}(t)-\omega(t)$ bounded from above by the estimates (\ref{33}) and (\ref{33a}).
Without the time
integrability of these quantities, the arguments on the decay of various
norms of ${\bf z}(t)$
satisfying the time evolution problem (\ref{time-evolution-z}) cannot be closed.
\begin{lemma}
\label{le:01} Fix $\sigma > \frac{5}{2}$ and assume (V1) and (V2).
There exists a constant $C > 0$ depending on ${\bf V}$ such that
\begin{eqnarray}
\label{eq:01}
\|<n>^{-3/2} e^{-i t H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} & \leq & C\| {\bf f} \|_{l^2_n} \\
\label{eq:02} \left\|\int_{\mathbb{R}_+} e^{-i t H} P_{a.c.}(H)
{\bf F}(s)dt \right\|_{l^2_n}
& \leq & C\|<n>^{3/2} {\bf F} \|_{l^1_nL^2_t}, \\
\label{eq:033} \left\|<n>^{-\sigma} \int_0^t e^{-i(t-s)H} P_{a.c.}(H)
{\bf F}(s) ds \right\|_{l^\infty_n L^2_t} & \leq &
C \|<n>^{\sigma} {\bf F} \|_{l^1_n L^2_t} \\
\label{eq:0333} \left\|<n>^{-\sigma} \int_0^t e^{-i(t-s)H}
P_{a.c.}(H) {\bf F}(s) ds \right\|_{l^\infty_n L^2_t} & \leq &
C \| {\bf F} \|_{L^1_t l^2_n} \\
\label{eq:03} \left\|\int_0^t e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s)
ds \right\|_{L^6_tl^\infty_n \cap L^{\infty}_t l^2_n} & \leq & C
\|<n>^3 {\bf F} \|_{L^2_t l^2_n}.
\end{eqnarray}
\end{lemma}
To proceed with the proof, let us set up some notation. First,
introduce the perturbed resolvent $R_V(\lambda):=(H-\lambda)^{-1}$ for
$\lambda \in \mathbb{C} \backslash [0,4]$. We proved in \cite[Theorem
1]{PS} that for any fixed $\omega \in (0,4)$, there exists
$R_V^{\pm}(\omega) = \lim_{\epsilon \downarrow 0} R_V(\omega \pm i
\epsilon)$ in the norm of $B(\sigma,-\sigma)$ for any $\sigma >
\frac{1}{2}$, where $B(\sigma,-\sigma)$ denotes the space of bounded
operators from $l^2_{\sigma}$ to $l^2_{-\sigma}$.
Next, we recall the Cauchy formula for $e^{-i t H}$
\begin{equation}
\label{eq:010}
e^{-i t H} P_{a.c.}(H) = \f{1}{\pi} \int_0^4 e^{-i t \omega} {\rm Im} R_V (\omega) d\omega =
\frac{1}{2\pi i} \int_0^4 e^{-i t \omega} \left[ R^+(\omega) - R^-(\omega) \right] d\omega,
\end{equation}
where the integral is understood in norm $B(\sigma,-\sigma)$. We shall parameterize the interval $[0,4]$
by $\omega = 2 - 2 \cos(\theta)$ for $\theta \in [-\pi,\pi]$.
Let $\chi_0, \chi \in C^{\infty}_0$ satisfy $\chi_0 +\chi = 1$ for
all $\theta\in [-\pi, \pi]$, so that
$$
{\rm supp} \chi_0 \subset [-\theta_0,\theta_0] \cup (-\pi, -\pi+\theta_0) \cup (\pi-\theta_0, \pi)
$$
and
$$
{\rm supp} \chi \subset [\theta_0/2,\pi-\theta_0/2]
\cup [-\pi+\theta_0/2,-\theta_0/2],
$$
where
$0< \theta_0 \leq \frac{\pi}{4}$. Note that the support of $\chi$
stays away from both $0$ and $\pi$. Following Mizumachi \cite{Miz}, the proof of
Lemma \ref{le:01} relies on the following technical lemma.
\begin{lemma}
\label{le:08}
Assume (V1) and (V2). There exists a constant $C > 0$ such that
\begin{eqnarray}
\label{eq:05} & & \sup_{n \in \mathbb{Z}} \|\chi R^{\pm}_V(\omega)
{\bf f} \|_{L^2_{\omega}(0,4)}
\leq C\| {\bf f} \|_{l^2_n}, \\
\label{eq:06} & & \sup_{n \in \mathbb{Z}} \| <n>^{-3/2} \chi_0
R^{\pm}_V(\omega) {\bf f}\|_{L^2_\omega(0,4)}\leq C\| {\bf f} \|_{l^2_n}.
\end{eqnarray}
\end{lemma}
The proof of Lemma \ref{le:08} is developed in Appendix A. Using Lemma \ref{le:08},
we can now prove Lemma \ref{le:01}.
\begin{proof1}{\em of Lemma \ref{le:01}.} Let us first show \eqref{eq:033},
since it can be deduced from \eqref{eq:15}, although it can also be viewed
(and proved) as a dual of \eqref{eq:01}. Indeed, \eqref{eq:033} is equivalent to
$$
\|<n>^{-\sigma} \int_0^t e^{-i (t-s)H} P_{a.c.}(H) <n>^{-\sigma} {\bf
G}(s) ds \|_{l^\infty_n L^2_t}\leq \| {\bf G} \|_{l^1_n L^2_t}.
$$
By Krein's theorem, for every Banach space $X$, the elements
of the space $l^1_n(X)$ are weak limits of linear combinations
of functions in the form $\delta_{n,n_0} x$, where $x\in X$,
$n_0\in \mathbb{Z}$, and $\delta_{n,n_0}$ is Kronecker's symbol.
Thus, to prove the last estimate, it suffices to check that it holds for
$G_n(s) = \delta_{n,n_0} g(s)$, where $g\in L^2_t$. By Minkowski's
inequality, the obvious embedding $l^2\hookrightarrow l^\infty$
and the dispersive decay estimate \eqref{eq:15} for any $\sigma >
\frac{5}{2}$, we have
\begin{eqnarray*}
& & \left\| <n>^{-\sigma} \int_0^t e^{- i (t-s)H} P_{a.c.}(H)
<n>^{-\sigma} \delta_{n,n_0} g(s) ds \right\|_{l^\infty_n L^2_t} \\
& & \leq C \left\| <n>^{-\sigma} \int_0^t \left\| e^{- i (t-s)H}
P_{a.c.}(H) <n>^{-\sigma} \delta_{n,n_0}
\right\|_{l^2_n} |g(s)| ds \right\|_{L^2_t} \\
& & \leq C \left\| \int_0^t \frac{|g(s)| ds}{(1 + t-s)^{3/2}} \right\|_{L^2_t}\leq C \|g\|_{L^2_t},
\end{eqnarray*}
where in the last step, we have used Young's inequality for convolutions
$L^1*L^2 \hookrightarrow L^2$.
We show next that \eqref{eq:02}, \eqref{eq:0333}, \eqref{eq:03}
follow from \eqref{eq:01}. Indeed, \eqref{eq:02} is simply a dual
of \eqref{eq:01} and is hence equivalent to
\eqref{eq:01}. For \eqref{eq:0333}, we apply the so-called
averaging principle, which tells us that to prove \eqref{eq:0333},
it is sufficient to show it for ${\bf F}(t)= \delta(t-t_0) {\bf
f}$, where ${\bf f} \in l^2_n$ and $\delta(t-t_0)$ is Dirac's
delta-function. Therefore, we obtain
\begin{eqnarray*}
\left\|<n>^{- \sigma} \int_0^t e^{- i (t-s)H} \delta(s - t_0)
P_{a.c.}(H) {\bf f} ds \right\|_{l^\infty_n L^2_t} & = &
\|<n>^{-\sigma} e^{- i (t-t_0)H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} \\
& \leq &
\|<n>^{-3/2} e^{- i (t-t_0)H} P_{a.c.}(H) {\bf f} \|_{l^\infty_n L^2_t} \\
& \leq & C\| {\bf f} \|_{l^2_n},
\end{eqnarray*}
where in the last step, we have used \eqref{eq:01}.
For \eqref{eq:03}, we argue as follows. Define
\begin{eqnarray*}
T {\bf F}(t) & = & \int_{\mathbb{R}} e^{-i(t-s)H} P_{a.c.}(H) {\bf
F}(s)ds \\ & = & e^{-i t H} P_{a.c.}(H) \left( \int_{\mathbb{R}} e^{- i s
H} P_{a.c.}(H) {\bf F}(s)ds \right) \\ & = & e^{-i t H} P_{a.c.}(H) {\bf
f},
\end{eqnarray*}
where ${\bf f} = \int_{\mathbb{R}} e^{-i s H} P_{a.c.}(H) {\bf
F}(s) ds$. By an application of the Strichartz estimate
(\ref{eq:Strichartz1}) and subsequently \eqref{eq:02}, we obtain
\begin{eqnarray*}
\| T {\bf F} \|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} \leq C
\|{\bf f}\|_{l^2_n} \leq \|<n>^{3/2} {\bf F}\|_{l^1_n L^2_t} \leq
C \|<n>^{3} {\bf F}\|_{l^2_n L^2_t} =
C \|<n>^{3} {\bf F}\|_{L^2_t l^2_n},
\end{eqnarray*}
where in the last two steps, we have used H\"older's
inequality and the fact that $l^2_n$ and $L^2_t$ commute.
Now, by the Christ-Kiselev lemma (e.g. Theorem 1.2 in \cite{KT}), we conclude
that the estimate (\ref{eq:03}) applies to $\int_{0}^t
e^{-i(t-s)H} P_{a.c.}(H) {\bf F}(s) ds$, similar to $T {\bf
F}(t)$. To complete the proof of Lemma \ref{le:01}, it only
remains to prove \eqref{eq:01}. Let us write
\begin{eqnarray*}
e^{-i t H} P_{a.c.}(H) = \chi e^{-i t H} P_{a.c.}(H) + \chi_0 e^{-i t H} P_{a.c.}(H)
\end{eqnarray*}
Take a test function ${\bf g}(t)$ such that $\| {\bf g} \|_{l^1_n
L^2_t}=1$ and obtain
\begin{eqnarray*}
\left| \dpr{\chi e^{-i t H} P_{a.c.}(H) {\bf f}}{{\bf g}(t)}_{n,t}
\right| & = & \f{1}{\pi} \left|\int_0^4 \dpr{ \chi {\rm Im}
R_V(\omega) {\bf f}}{\int_{\mathbb{R}}
e^{-i t \omega} {\bf g}(t)dt}_n d\omega \right| \\
& \leq & C
\int_0^4 \dpr{ |\chi R_V(\omega) {\bf f}|}{|\hat{{\bf g}}(\omega)|}_n d\omega \\
& \leq & C \|\chi R^{\pm}_V(\omega) {\bf f}\|_{l^{\infty}_n
L^2_{\omega}(0,4)} \|\hat{{\bf g}}\|_{l^1_n L^2_\omega(0,4)}.
\end{eqnarray*}
By Plancherel's theorem, $\|\hat{{\bf g}}\|_{l^1_n L^2_\omega(0,4)}
\leq \|\hat{{\bf g}}\|_{l^1_n L^2_\omega(\mathbb{R})} \leq \| {\bf g}
\|_{l^1_n L^2_t}=1$. Using \eqref{eq:05}, we obtain
$$
\left\| \chi e^{-i t H} P_{a.c.}(H) {\bf f} \right\|_{l^\infty_n
L^2_t} = \sup_{ \|{\bf g}\|_{l^1_n L^2_t}=1} \left| \dpr{\chi
e^{-i t H} P_{a.c.}(H) {\bf f}}{{\bf g}(t)}_{n,t} \right| \leq C
\|{\bf f} \|_{l^2_n}.
$$
Similarly, using \eqref{eq:06} instead of \eqref{eq:05}, one concludes
$$
\left\|<n>^{-3/2} \chi_0 e^{-i t H} P_{a.c.}(H) {\bf f}
\right\|_{l^\infty_n L^2_t} = \sup_{ \|<n>^{3/2} {\bf g}\|_{l^1_n
L^2_t}=1} \left| \dpr{\chi_0 e^{-i t H} P_{a.c.}(H) {\bf f}}{{\bf
g}(t)}_{n,t} \right| \leq C \|{\bf f}\|_{l^2_n}.
$$
Combining the two estimates, we obtain (\ref{eq:01}).
\end{proof1}
\section{Proof of Theorem \ref{theorem-main}}
Let ${\bf y}(t) = e^{-i \theta(t)} {\bf z}(t)$ and write the time-evolution problem
for ${\bf y}(t)$ in the form
$$
i \dot{\bf y} = H {\bf y} + {\bf g}_1 + {\bf g}_2 + {\bf g}_3,
$$
where
\begin{eqnarray*}
{\bf g}_1 = \left( {\bf N}(\mbox{\boldmath $\phi$} + {\bf y} e^{-i \theta}) -
{\bf N}(\mbox{\boldmath $\phi$}) \right) e^{- i \theta}, \;\;
{\bf g}_2 = -(\dot{\theta} - \omega) \mbox{\boldmath $\phi$} e^{-i \theta}, \;\;
{\bf g}_3 = - i \dot{\omega} \partial_{\omega} \mbox{\boldmath $\phi$}(\omega) e^{-i \theta}.
\end{eqnarray*}
Let $P_0 = \langle \cdot,\mbox{\boldmath $\psi$}_0 \rangle \mbox{\boldmath $\psi$}_0$,
$Q = (I - P_0) \equiv P_{a.c.}(H)$, and decompose the solution ${\bf y}(t)$
into two orthogonal parts
$$
{\bf y}(t) = a(t) \mbox{\boldmath $\psi$}_0 + \mbox{\boldmath $\eta$}(t),
$$
where $\langle \mbox{\boldmath $\psi$}_0, \mbox{\boldmath $\eta$}
\rangle=0$ and $a(t) = \langle {\bf y}(t), \mbox{\boldmath
$\psi$}_0\rangle$. The new coordinates $a(t)$ and $\mbox{\boldmath
$\eta$}(t)$ satisfy the time evolution problem
$$
\left\{ \begin{array}{ccl} i \dot{a} & = & \omega_0 a + \langle {\bf g}, \mbox{\boldmath $\psi$}_0 \rangle, \\
i \dot{\mbox{\boldmath $\eta$}} & = & H \mbox{\boldmath $\eta$} + Q {\bf g} \end{array} \right.
$$
where ${\bf g} = \sum_{j=1}^3 {\bf g}_j$. The time-evolution
problem for $\mbox{\boldmath $\eta$} \equiv
P_{a.c.}(H)\mbox{\boldmath $\eta$}$ can be rewritten in the
integral form as
\begin{equation}
\label{integral} \mbox{\boldmath $\eta$}(t) = e^{-i t H} Q \mbox{\boldmath $\eta$}(0)
- i \int_0^t e^{-i (t-s) H} Q {\bf g}(s) ds.
\end{equation}
Fix $\sigma > \frac{5}{2}$ and introduce the norms
\begin{eqnarray*}
&& M_1 = \| \mbox{\boldmath $\eta$} \|_{L^6_t l^\infty_n}, \quad
M_2 = \| \mbox{\boldmath $\eta$} \|_{L^\infty _t l^2_n}, \quad
M_3 = \| <n>^{-\sigma} \mbox{\boldmath $\eta$} \|_{l^\infty_n L^{2}_t}, \\
&& M_4 = \| a \|_{L^2_t}, \quad M_5 = \| a \|_{L^{\infty}_t}, \quad
M_6 = \| \omega -\omega(0) \|_{L^{\infty}_t},
\end{eqnarray*}
where the integration in $L^p_t$ is performed on an interval
$[0,T]$ for any $T \in (0,\infty)$. Our goal is to show that
$\dot{\omega}$ and $\dot{\theta} -\omega$ are in $L^1_t$, while
the norms above satisfy an estimate of the form
\begin{equation}
\label{eq:055} \sum\limits_{j=1}^5 M_j \leq C \|{\bf y}(0)\|_{l^2_n} + C
\left( \sum\limits_{j=1}^6 M_j \right)^2
\end{equation}
and
\begin{equation}
\label{eq:055a} M_6 \leq C \epsilon^{2 - \frac{1}{p}} (M_3 +
M_4)^2,
\end{equation}
for some $T$-independent constant $C > 0$ uniformly in
$\sum\limits_{j=1}^6 M_j \leq C \delta \epsilon^{\frac{1}{2 p}}$, where
small positive values of $(\epsilon,\delta)$ are fixed by the
initial conditions $\omega(0) = \omega_0 + \epsilon$ and $\|{\bf
y}(0)\|_{l^2_n} \leq C_0 \delta \epsilon^{\frac{1}{2 p}}$ for some
$C_0 > 0$. The estimate (\ref{eq:055}) and (\ref{eq:055a}) allow
us to conclude, by elementary continuation arguments, that
$$
\sum\limits_{j=1}^5 M_j \leq C \|{\bf y}(0)\|_{l^2_n} \leq C \delta
\epsilon^{\frac{1}{2 p}}
$$
and $|\omega(t) - \omega_0 - \epsilon| \leq C \delta^2 \epsilon^2$
uniformly on $[0,T]$ for any $T \in (0,\infty)$. By interpolation, $a \in L^6_t$
so that ${\bf z}(t) \in L^6([0,T],l^{\infty}_n)$. Theorem
\ref{theorem-main} then holds for $T = \infty$. In particular,
since $\dot{\omega}(t) \in L^1_t(\mathbb{R}_+)$
and $|\omega(t) - \omega_0 - \epsilon| \leq C \delta^2 \epsilon^2$,
there exists $\omega_{\infty} := \lim_{t \to \infty} \omega(t)$ so that
$\omega_{\infty} \in (\omega_0,\omega_0 + \epsilon_0)$. In addition,
since ${\bf z}(t) \in L^6(\mathbb{R}_+,l^{\infty}_n)$, then
$$
\lim_{t \to \infty} \| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega(t)) \|_{l^{\infty}_n} =
\lim_{t \to \infty} \| {\bf z}(t) \|_{l^{\infty}_n} = 0.
$$
{\bf Estimates for $M_6$:} By the estimate \eqref{33}, we have
\begin{eqnarray*}
\int_0^T |\dot{\omega}| dt & \leq & C \epsilon^{2-\frac{1}{p}}
\|<n>^{-2 \sigma} |{\bf y}|^2\|_{l^\infty_n L^1_t} \left( \| <n>^{2 \sigma}
\mbox{\boldmath $\psi$}_1 \|_{l^1} +
\| <n>^{2 \sigma} \mbox{\boldmath $\psi$}_2 \|_{l^1} \right) \\
& \leq & C \epsilon^{2-\frac{1}{p}} \|<n>^{-\sigma} {\bf y}
\|_{l^\infty_n L^2_t}^2 \\ & \leq & C \epsilon^{2-\frac{1}{p}}
(M_3+M_4)^2,
\end{eqnarray*}
where we have used the fact that $\mbox{\boldmath $\psi$}_1$ and
$\mbox{\boldmath $\psi$}_2$
decay exponentially as $|n| \to \infty$. As a result, we obtain
$$
M_6 \leq \| \dot{\omega} \|_{L^1_t} \leq C
\epsilon^{2-\frac{1}{p}} (M_3 + M_4)^2.
$$
Similarly, we also obtain that
\begin{eqnarray*}
\int_0^T |\dot{\theta} - \omega| dt \leq C \epsilon^{1-\frac{1}{p}}
(M_3+M_4)^2.
\end{eqnarray*}
{\bf Estimates for $M_4$ and $M_5$:} We use the projection formula
$a = \langle {\bf y}, \mbox{\boldmath $\psi$}_0\rangle$ and recall
the orthogonality relation \eqref{constraints}, so that
$$
\langle {\bf z}, \mbox{\boldmath $\psi$}_0\rangle = \langle {\rm
Re}{\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath
$\psi$}_1\rangle + i \langle {\rm Im} {\bf z}, \mbox{\boldmath
$\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle.
$$
By Lemma \ref{lemma-bifurcation} and definitions of
$\mbox{\boldmath $\psi$}_{1,2}$ in
(\ref{eigenvectors-normalized}), we have
$$
\| <n>^{2 \sigma} (\mbox{\boldmath
$\psi$}_0- \mbox{\boldmath $\psi$}_{1,2})\|_{l^2_n} \leq C |\omega-\omega_0|
$$
for some $C > 0$. Provided $\sigma > \frac{1}{2}$, we obtain
\begin{eqnarray*}
M_4 & = & \|\langle {\bf y}, \mbox{\boldmath
$\psi$}_0\rangle\|_{L^2_t} \leq \| \langle {\rm Re} {\bf z},
\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_1\rangle
\|_{L^2_t} +
\| \langle {\rm Im} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle \|_{L^2_t}\\
& \leq & \|<n>^{-2 \sigma} {\bf z}\|_{L^2_t l^2_n} \left( \| <n>^{2 \sigma}
(\mbox{\boldmath $\psi$}_0-\mbox{\boldmath
$\psi$}_1)\|_{L^{\infty}_t l^2_n} +
\| <n>^{2 \sigma} (\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2)
\|_{L^{\infty}_t l^2_n} \right) \\
& \leq & C \|<n>^{-\sigma} {\bf y} \|_{l^\infty_n L^2_t} \|
\omega -\omega_0 \|_{L^{\infty}_t} \leq C (M_3+M_4) M_6
\end{eqnarray*}
and, similarly,
\begin{eqnarray*}
M_5 & = & \|\langle {\bf y}, \mbox{\boldmath
$\psi$}_0\rangle\|_{L^{\infty}_t} \leq \| \langle {\rm Re} {\bf z}, \mbox{\boldmath
$\psi$}_0-\mbox{\boldmath $\psi$}_1\rangle \|_{L^{\infty}_t} +
\| \langle {\rm Im} {\bf z}, \mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2\rangle \|_{L^{\infty}_t} \\
&\leq & \|{\bf y}\|_{L^\infty_t l^2_n} \left( \|(\mbox{\boldmath $\psi$}_0-\mbox{\boldmath
$\psi$}_1)\|_{L^{\infty}_t l^2_n} +
\| (\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_2)
\|_{L^{\infty}_t l^2_n} \right) \leq
C(M_2 + M_5) M_6.
\end{eqnarray*}
{\bf Estimates for $M_3$:} The free solution in the integral equation
(\ref{integral}) is estimated by \eqref{eq:01} as
\begin{eqnarray*}
\|<n>^{-\sigma} e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{l^\infty_n L^2_t} \leq
\|<n>^{-3/2} e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{l^\infty_n L^2_t} \leq
C \|\mbox{\boldmath $\eta$}(0) \|_{l^2_n}.
\end{eqnarray*}
Since $\dot{\omega}$ and $\dot{\theta} - \omega$ are $L^1_t$ thanks to the
estimates above, we treat the terms of the integral equation
(\ref{integral}) with ${\bf g}_2$ and ${\bf g}_3$ similarly. By \eqref{eq:0333},
we obtain
\begin{eqnarray*}
\|<n>^{-\sigma} \int_0^t e^{-i (t-s) H} Q {\bf g}_{2,3}(s) ds\|_{l^\infty_n L^2_t}
& \leq & C \|{\bf g}_{2,3}\|_{L^1_t l^2_n} \\
& \leq & C \left(\|\dot{\theta}- \omega\|_{L^1_t}
\|\mbox{\boldmath $\phi$}(\omega)\|_{L^{\infty}_t l^2_n}
+ \|\dot{\omega}\|_{L^1_t} \|\partial_{\omega} \mbox{\boldmath $\phi$}(\omega) \|_{L^{\infty}_t l^2_n} \right) \\
& \leq & C \epsilon^{1-\frac{1}{2p}} (M_3 + M_4)^2.
\end{eqnarray*}
On the other hand, using the bound (\ref{estimate-N}) on the vector field ${\bf g}_1$,
we estimate by \eqref{eq:033} and \eqref{eq:0333}
\begin{eqnarray*}
&& \|<n>^{- \sigma} \int_0^t e^{-i (t-s) H} Q {\bf g}_1(s) ds\|_{l^\infty_n L^2_t} \leq
C(\|<n>^{\sigma} |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf z}|\|_{l^1_n L^2_t}+
\||{\bf z}|^{2p+1}\|_{L^1_t l^2_n})\\
&& \leq C \left(\|<n>^{-\sigma} {\bf y}\|_{l^\infty_n L^2_t}
\|<n>^{\sigma}|\mbox{\boldmath $\phi$}(\omega)|^{2p}\|_{L^{\infty}_t l^1_n} +
\|a \|_{L^{2p+1}_t}^{2p+1}\|\mbox{\boldmath $\psi$}_0\|_{l^{2(2p+1)}_n}^{2p+1} +
\|\mbox{\boldmath $\eta$}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \right) \\
&& \leq C \left( (M_3+M_4) M_6 + M_4^2 M_5^{2p-1} +
\|\mbox{\boldmath $\eta$}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \right),
\end{eqnarray*}
where we have used
$$
\|a \|_{L^{2p+1}_t}^{2p+1} \leq \| a \|_{L^{\infty}_t}^{2p-1} \| a \|_{L^2_t}^2
$$
and
$$
\|<n>^{\sigma}|\mbox{\boldmath $\phi$}(\omega)|^{2p}\|_{l^1_n}
\leq C \| \omega - \omega_0\|_{L^{\infty}_t},
$$
where the latter estimate follows from Lemma \ref{lemma-bifurcation}.
To deal with the last term in the estimate, we use the Gagliardo-Nirenberg inequality,
that is, for all $2\leq r,w\leq \infty$ such that $\frac{6}{r} + \frac{2}{w} \leq 1$,
there is a $C > 0$ such that
$$
\|\mbox{\boldmath $\eta$}\|_{L^r_t l^w_n} \leq C \left( \| \mbox{\boldmath $\eta$} \|_{L^6_t l^{\infty}_n}
+ \| \mbox{\boldmath $\eta$} \|_{L^{\infty}_t l^2_n} \right) = C (M_1+M_2).
$$
If $p\geq 3$, then $(2p+1, 2(2p+1))$ is a Strichartz
pair satisfying $\frac{6}{2p + 1} + \frac{2}{2(2p+1)} = \frac{7}{2p+1} \leq 1$ and hence,
combining all previous inequalities, we have
\begin{eqnarray*}
M_3 \leq C\left( \|\mbox{\boldmath $\eta$}(0)\|_{l^2_n}+
\epsilon^{1-\frac{1}{p}} (M_3+M_4)^2 + (M_3+M_4) M_6 + M_4^2 M_5^{2p-1} +
(M_1+M_2)^{2p+1}\right),
\end{eqnarray*}
which agrees with the estimate (\ref{eq:055}) for any $p \geq 3$. \\
{\bf Estimates for $M_1$ and $M_2$:} With the help of \eqref{eq:Strichartz1}, the free solution is estimated by
$$
\|e^{- i t H} Q \mbox{\boldmath $\eta$}(0)\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n}
\leq C \|\mbox{\boldmath $\eta$}(0)\|_{l^2_n}.
$$
With the help of \eqref{eq:Strichartz2}, the nonlinear terms involving ${\bf g}_{2,3}$ are estimated by
\begin{eqnarray*}
\left\| \int_0^t e^{-i (t-s) H} Q {\bf g}_{2,3}(s) ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n}
& \leq &
C \|{ \bf g}_{2,3}\|_{L^1_t l^2_n} \\ & \leq & C \epsilon^{1-\frac{1}{2p}} (M_3 + M_4)^2.
\end{eqnarray*}
The nonlinear term involving ${\bf g}_1$ is estimated by the sum of two
computations thanks to the bound (\ref{estimate-N}). The first computation is
completed with the help of \eqref{eq:03},
\begin{eqnarray*}
\left\| \int_0^t e^{-i (t-s) H}Q |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf y}|
ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n} & \leq &
C \| <n>^3 |\mbox{\boldmath $\phi$}(\omega)|^{2p} |{\bf y}|\|_{L^2_t l^2_n} \\
& \leq &
\| <n>^{3 + \sigma} |\mbox{\boldmath $\phi$}(\omega)|^{2p} \|_{L^{\infty}_t l^2_n}
\|<n>^{-\sigma} {\bf y}\|_{l^\infty_n L^2_t} \\
& \leq & C (M_3+M_4) M_6,
\end{eqnarray*}
whereas the second computation is completed with the help of \eqref{eq:Strichartz2},
\begin{eqnarray*}
\left\| \int_0^t e^{-i (t-s) H}Q |{\bf y}|^{2p+1} ds \right\|_{L^6_t l^\infty_n \cap L^\infty_t l^2_n}
& \leq & C \||{\bf y}|^{2p+1}\|_{L^1_t l^2_n} \leq C \|{\bf y}\|_{L^{2p+1}_t l^{2(2p+1)}_n}^{2p+1} \\
& \leq & C \left( M_4^2 M_5^{2p-1} + (M_1 + M_2)^{2p+1} \right),
\end{eqnarray*}
provided $p\geq 3$ holds. We conclude that the estimates for $M_1$ and $M_2$ are the same as that for
$M_3$.
\section{Numerical results}
We now present some numerical computations which illustrate the asymptotic
stability result of Theorem \ref{theorem-main}. In particular, we shall
obtain numerically the rate at which the localized perturbations approach
the asymptotic state of the small discrete soliton. One advantage of numerical
computations is that they are not limited to the case of $p \geq 3$
(which is the realm of our theoretical analysis above), but can
be extended to arbitrary $p \geq 1$. In what follows, we illustrate the
results for $p=1$ (the cubic DNLS), $p = 2$ (the quintic DNLS), and $p = 3$
(the septic DNLS).
Let us consider the single-node external potential with $V_n = - \delta_{n,0}$
for any $n \in \mathbb{Z}$. This potential is known (see Appendix A in \cite{KKK}) to have
only one negative eigenvalue at $\omega_0 < 0$, the continuous spectrum at
$[0,4]$, and no resonances at $0$ and $4$, so it satisfies assumptions (V1)--(V3).
Explicit computations show that the eigenvalue exists at $\omega_0 = 2 - \sqrt{5}$
with the corresponding eigenvector $\psi_{0,n} = e^{-\kappa |n|}$ for any $n \in \mathbb{Z}$, where
$\kappa = {\rm arcsinh}(2^{-1})$. The stationary solutions of
the nonlinear difference equation (\ref{stationaryDNLS})
exist in a local neighborhood of the ground state of $H = -\Delta + {\bf V}$,
according to Lemma \ref{lemma-bifurcation}. We shall consider numerically the case
$\gamma = -1$, for which the stationary solution bifurcates to the domain $\omega < \omega_0$.
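As a consistency check of the quoted values, assume the standard discrete Laplacian $(\Delta {\bf u})_n = u_{n+1} - 2 u_n + u_{n-1}$, for which $H = -\Delta + {\bf V}$ indeed has continuous spectrum $[0,4]$. For $n \neq 0$, the eigenvalue equation $(H \mbox{\boldmath $\psi$}_0)_n = \omega_0 \psi_{0,n}$ with $\psi_{0,n} = e^{-\kappa |n|}$ gives
$$
\omega_0 \,=\, 2 - e^{\kappa} - e^{-\kappa} \,=\, 2 - 2 \cosh \kappa,
$$
while at $n = 0$ the potential term $V_0 = -1$ yields $2 - 2 e^{-\kappa} - 1 = \omega_0$. Combining the two relations gives $e^{\kappa} - e^{-\kappa} = 1$, that is, $\sinh \kappa = 2^{-1}$, so that $\kappa = {\rm arcsinh}(2^{-1})$ and $\omega_0 = 2 - 2\sqrt{1 + 4^{-1}} = 2 - \sqrt{5}$.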
Figure \ref{afig2} illustrates the stationary solutions for $p = 1$ and two different values
of $\omega$, showcasing their increased localization (decreasing
width and increasing amplitude) as $\omega$ deviates from $\omega_0$ towards the negative domain.
\begin{figure}
\begin{center}
\includegraphics[height=7cm]{newat1.eps}
\end{center}
\caption{Two profiles of the stationary solution of the nonlinear difference equation
(\ref{stationaryDNLS}) for $V_n = -\delta_{n,0}$, $p = 1$, and for $\omega=-2$ (solid line with circles)
and $\omega=-5$ (dashed line with stars).}
\label{afig2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=6.8cm]{atan_sig3_linf.eps}
\includegraphics[height=6.8cm]{atan_sig2_linf.eps}
\includegraphics[height=6.8cm]{atan_sig1_linf.eps}
\end{center}
\caption{Evolution for $p=3$ (top), $2$ (middle), $1$ (bottom)
of $\| {\bf u}(t) - e^{-i \theta(t)} \mbox{\boldmath $\phi$}(\omega_{\infty})\|$
as a function of time in a log-log scale (solid)
and comparison with a $t^{-3/2}$ power law decay (dashed) as a guide to
the eye.}
\label{afig4}
\end{figure}
In order to examine the dynamics of the DNLS equation (\ref{dNLS}),
we consider single-node initial data $u_n=A \delta_{n,0}$ for any
$n \in \mathbb{Z}$, with $A=0.75$,
and observe the temporal evolution of the solution ${\bf u}(t)$. The resulting
dynamics involves the asymptotic relaxation of the localized perturbation into a
discrete soliton after shedding some ``radiation''. This behavior was found to be
typical for
all values of $p = 1,2,3$. In Figure \ref{afig4}, upon suitable subtraction of the phase
dynamics, we illustrate the approach of the wave profile to its asymptotic
form in the $l^{\infty}$ norm. The asymptotic form is obtained by running
the numerical simulation for sufficiently long times, so that the profile
has relaxed to the stationary state. Using a fixed-point algorithm, we identify
the stationary state whose $l^2$ norm matches that of the central portion
of the lattice, and confirm that the result of further temporal dynamics is essentially
identical to this stationary state. Subsequently,
the displayed $l^{\infty}$ norm of the deviation from the asymptotic
profile is computed, with the phase appropriately eliminated by using the gauge invariance of the
DNLS equation (\ref{dNLS}).
We have found from Figure \ref{afig4}, for the cases $p=3$ (top panel), $p=2$ (middle panel) and
$p=1$ (bottom panel), that the approach to the stationary state follows a power law which is
well approximated by $t^{-3/2}$. The dashed line in each panel
represents such a decay. We note that the decay rate observed
in numerical simulations of the DNLS equation (\ref{dNLS}) is faster than the
decay rate $\propto t^{-1/6-p}$ for any $p > 0$ in Theorem \ref{theorem-main}.
\section{Introduction}
Current experimental and theoretical information establishes that Quantum Chromodynamics (QCD) correctly describes hadronic physics. However, because of the running of $\alpha_s$, perturbation theory cannot be applied directly at low energies, where confinement prevents the use of perturbative QCD; non-perturbative methods are then needed.
An alternative approach consists of an effective theory description in terms of the suitable degrees of freedom, which has been very successful in different branches of Particle Physics \cite{Pich,Manohar}. The main feature of this approach is the use of an effective Lagrangian with local operators involving only light particles, the heavy particles having been integrated out. The high-energy dynamics is kept only in the low-energy couplings and through the symmetries of the full theory.
At very low energies Chiral Perturbation Theory ($\chi$PT) is the effective theory of QCD, as dictated by the spontaneous breaking of the chiral symmetry, which allows the description of the hadronic regime in terms of the pseudo-Goldstone bosons, the pion multiplet, with a perturbative expansion in powers of the soft external momenta and masses of the pseudoscalar mesons \cite{ChPT1,ChPT2}.
We are interested in QCD at energies between the $\rho$ mass and $2$ GeV, where the absence of a mass gap and the abundance of resonances make the effective theory approach more involved. Moreover, there is no natural expansion parameter, since the chiral counting is no longer valid at these energies.
Large-$N_C$ QCD provides an adequate framework to fulfill this aim. In this limit the Green functions are described by the tree diagrams of an effective Lagrangian with local operators and an infinite number of meson fields, with higher corrections described by loops \cite{largeNC1,largeNC2}.
Our approach involves using the Resonance Chiral Theory (R$\chi$T) \cite{RChTa,RChTb}, which describes QCD at intermediate energies ($M_\rho \stackrel{<}{_\sim} E \stackrel{<}{_\sim} 2 \,\mathrm{GeV}$) in terms of scalar, pseudoscalar, vector and axial-vector resonances besides the pseudo-Goldstone bosons. Although R$\chi$T follows the $1/N_C$ expansion, a model dependence appears when we consider only a finite number of resonances \cite{juanjo}. Actually this approximation is supported by the phenomenology and by the assumption that heavier resonances are suppressed by their masses.
Another remark is needed to understand the effective approach and particularly R$\chi$T: one is not working with an effective theory of QCD until the matching is considered \cite{Pich}. In this case, the matching at very low energies supports the use of the chiral symmetry in order to construct the Lagrangian and gives the leading contributions to the low energy couplings (LEC's) of $\chi$PT \cite{RChTa,portoles,jorge}; the matching at very high energies, via the operator product expansion and the Brodsky-Lepage conditions for the form factors, constrains the R$\chi$T couplings \cite{portoles,BrodskyLepage}.
Quantum loops in R$\chi$T are necessary to improve the predictions and to get a better knowledge of non-perturbative QCD. At the energy scales experimentally accessible nowadays, the importance of non-perturbative QCD for distinguishing New Physics effects is obvious.
The aim of this work is to make a first step in the renormalization of the R$\chi$T \cite{treball}: the divergent part of the one-loop generating functional is evaluated when one multiplet of scalar and pseudoscalar resonances are included, and we allow for operators which couple up to two resonances. We obtain the renormalization of the couplings and the complete list of operators that make this theory finite at this order.
\section{The Resonance Effective Theory with scalar and pseudoscalar resonances}
Due to the large-$N_C$ limit \cite{largeNC1,largeNC2}, $U(3)$ multiplets for the spectrum are considered, while we prefer $SU(3)$ external currents as we are not interested in anomaly related issues. We allow for operators that contain pseudo-Goldstone bosons and states from the first multiplet of scalar and pseudoscalar resonances.
Although in the initial R$\chi$T Lagrangian \cite{RChTa} only interaction terms with one resonance were included, we think that it is more convenient to consider here operators which couple up to two resonances, like in the kinetic pieces. Furthermore in a previous work \cite{VFF} it was conjectured that these new terms with more resonances are needed to keep the good short-distance behaviour at one loop; though this statement was not proved, it was observed that these terms eased the bad high-energy behaviour at tree level of some form factors with resonances in the final legs, two facts that seem to be related. In any case the requirement of the smooth behaviour of these form factors is an open question \cite{heretic}.
As mentioned in the introduction, the short-distance properties of the underlying QCD must be implemented in the effective Lagrangian. This procedure establishes constraints among its couplings. Moreover, by considering the results for the Green functions of QCD obtained through the operator product expansion (OPE), it turns out that resonance interactions with a large number of derivatives are excluded. Therefore it seems natural to consider only operators with the minimum number of derivatives in the leading Lagrangian, an approximation which is corroborated by the phenomenology.
With all these ingredients our Lagrangian reads:
\begin{eqnarray}\label{Lagrangian}
\mathcal{L}_{\mathrm{R}\chi\mathrm{T}}(\mathrm{S},\mathrm{P})&=&\mathcal{L}^{(2)}_{\chi}\,+\,\mathcal{L}_{\mathrm{kin}}(\mathrm{S},\mathrm{P}) \,+\, \mathcal{L}_{2}(\mathrm{S})\nonumber\\
&&+\, \mathcal{L}_{2}(\mathrm{P})\,+\, \mathcal{L}_{2}(\mathrm{S},\mathrm{P}) \, ,
\end{eqnarray}
where the first piece is the ${\cal O}(p^2)$ $\chi$PT Lagrangian,
\begin{equation}
\mathcal{L}_{\chi}^{(2)}\,=\,\frac{F^2}{4} \langle \, u_\mu u^\mu \, + \, \chi_+ \,\rangle \,,
\end{equation}
where, as usual, $\langle \cdots \rangle$ is short for the trace in flavour space, and the other terms in Eq.~(\ref{Lagrangian}) introduce the terms with scalars and pseudoscalars, which have been split into the kinetic part,
\begin{equation}\label{kinetic}
\mathcal{L}_{\mathrm{kin}} \,=\, \frac{1}{2} \sum_{R = S,P} \langle \, \nabla^\mu R \,\nabla_\mu R \,-\, M_R^2\, R^2 \,\rangle \,,
\end{equation}
and the ${\cal O}(p^2)$ interactions linear and bilinear in the scalar and pseudoscalar fields \cite{jorge,treball,gerhard},
\begin{eqnarray}
\mathcal{L}_{2}(\mathrm{S})\!\!\! &= & \!\!\! c_d \langle \, S \, u_\mu u^\mu \,\rangle + c_m \langle \, S \,\chi_+ \,\rangle \nonumber \\
\!\!\!& & \!\!\!+ \lambda_1^{\mathrm{SS}} \langle \, S^2 \,u^\mu u_\mu \,\rangle
+ \lambda_2^{\mathrm{SS}} \langle \, S u_\mu S u^\mu \,\rangle \nonumber \\
\!\!\!&& \!\!\! + \lambda_3^{\mathrm{SS}} \langle \, S^2\, \chi_+ \,\rangle \,,\\ \nonumber \\
\mathcal{L}_{2}(\mathrm{P})\!\!\!&=&\!\!\!i\,d_m \langle \, P \,\chi_- \,\rangle
+ \lambda_1^{\mathrm{PP}} \langle \, P^2 \,u^\mu u_\mu \,\rangle \nonumber \\
\!\!\!& & \!\!\!+ \lambda_2^{\mathrm{PP}} \langle \, P u_\mu P u^\mu \,\rangle + \lambda_3^{\mathrm{PP}} \langle \, P^2\, \chi_+ \,\rangle \,, \\ \nonumber \\
\mathcal{L}_{2}(\mathrm{S},\mathrm{P})\!\!\!& = & \!\!\!\lambda_1^{\mathrm{SP}}\langle \, \{ \nabla_\mu S, P \} u^\mu \,\rangle \nonumber \\
\!\!\!&& \!\!\!+\,i \lambda_2^{\mathrm{SP}} \langle \, \{ S, P \} \chi_- \,\rangle \, .
\end{eqnarray}
The notation of Ref.\cite{RChTa,RChTb} is followed.
Note that as our Lagrangian satisfies the $N_C$ counting rules for an effective theory with $U(3)$ multiplets, only operators that have one trace in the flavour space are considered.
\section{Divergent part of the one-loop generating functional}
In order to evaluate the divergent part of the one-loop generating functional, an expansion around the classical solutions is made, in the spirit of the background field method \cite{background}. In our case, as we have pseudoscalar Goldstones, scalar and pseudoscalar resonances, one defines the quantum fluctuations as:
\begin{eqnarray}
u_R\,=\,u_{cl}\,e^{i \Delta / 2} \, , \qquad\,\,\,\,\,
S\,=\,S_{cl}\,+\,\frac{1}{\sqrt{2}}{\varepsilon^{\phantom{a}}_{\mathrm{S}}} \,, \nonumber \\
u_L\,=\,u_{cl}^\dagger \, e^{-i\Delta / 2} \,, \qquad
P\,=\,P_{cl}\,+\,\frac{1}{\sqrt{2}}{\varepsilon^{\phantom{a}}_{\mathrm{P}}}\,,
\end{eqnarray}
with
\begin{equation} \label{eq:fluc2}
\Delta \,=\,\Delta_i \lambda_i / F \,, \quad
\varepsilon^{\phantom{a}}_{\mathrm{S}}\,=\,{\varepsilon^{\phantom{a}}_{\mathrm{S}}}_{i}\,\lambda_i\,, \quad
\varepsilon^{\phantom{a}}_{\mathrm{P}}\,=\,{\varepsilon^{\phantom{a}}_{\mathrm{P}}}_{i}\,\lambda_i\,.
\end{equation}
In the following we will drop the subscript `$cl$' and all the fields will be understood to be classical.
Inserting this expansion into the Lagrangian of Eq.~(\ref{Lagrangian}), and retaining terms quadratic in the quantum fields, we obtain the second-order fluctuation Lagrangian, which, after considering the equations of motion, can be written as:
\begin{equation}
\Delta {\cal L}_{\mathrm{R} \chi \mathrm{T}} \, = \,
- \, \frac{1}{2} \, \eta \, \left( \, \Sigma_{\mu} \, \Sigma^{\mu} \, + \,
\Lambda \, \right) \, \eta^{\top} \; ,
\end{equation}
where $\eta$ collects the fluctuations, $\eta=\left(\Delta_i,{\varepsilon^{\phantom{a}}_{\mathrm{S}}}_j,{\varepsilon^{\phantom{a}}_{\mathrm{P}}}_k\right)$ with $i,j,k = 0,...,8$, $\eta^{\top}$ is its transpose, and $\Sigma_\mu$ and $\Lambda$ are $27 \times 27$ matrices \cite{treball}.
With the second-order fluctuation Lagrangian, and using the heat kernel techniques \cite{treball,background}, one is able to identify the divergences of the one-loop generating functional, specified by the action
\begin{equation}
S_{1} \, = \, \frac{i}{2} \, \ln \, \mbox{det} \,
\left( \, \Sigma_{\mu} \, \Sigma^{\mu} \, + \, \Lambda \, \right) \; .
\end{equation}
Dimensional regularization is used to renormalize this determinant. Employing then the Schwinger-DeWitt proper-time representation, the divergent part of the action $S_1$ is found to be
\begin{equation}\label{eq:oneloop}
S_{1}^{\,\mathrm{div}}\!\!=\!\!\frac{-1}{(4\pi)^2(D-4)} \! \int \!\!\mathrm{d}^4x \, \langle \, \!\! \frac{1}{12} Y_{\mu\nu} Y^{\mu\nu} + \frac{1}{2} \Lambda^2 \,\rangle \, ,
\end{equation}
where $Y_{\mu \nu}$ denotes the field strength tensor of $Y_{\mu}$,
\begin{equation}
Y_{\mu \nu} \,=\, \partial_{ \mu} Y_{\nu} - \partial_{\nu} Y_{\mu} + [Y_{\mu}, Y_{\nu}] \, ,
\end{equation}
where $Y_\mu$ is defined through the splitting
\begin{equation}
\left( \Sigma_\mu \right)_{ij} \,=\,\delta_{ij} \, \partial_\mu\, + \, \left( Y_{\mu} \right)_{ij} \,.
\end{equation}
\section{Results and conclusions}
In the renormalization of effective field theories by means of dimensional regularization, $S_1^{\, \mathrm{div}}$ can be absorbed by a redefinition of the couplings of the next-to-leading Lagrangian, thus yielding a finite quantum field theory at this order. In our case we get the following subleading Lagrangian,
\begin{equation} \label{NLO}
{\cal L}_{1} \, = \, \sum_ {i=1}^{18} \alpha_i \,
{\cal O}_i \, + \sum_{i=1}^{68} \beta_i^R \, {\cal O}_i^R \, +
\sum_{i=1}^{383} \beta_i^{RR} \, {\cal O}_i^{RR} .
\end{equation}
The ${\cal O}_i$, ${\cal O}_i^{R}$ and ${\cal O}_i^{RR}$ operators involve zero, one and two resonance fields respectively. The couplings in the bare Lagrangian $ {\cal L}_{1}$ read:
\begin{eqnarray} \label{running}
\alpha_i & \!\!= & \!\! \mu^{D-4} \left( \alpha_i^r(\mu) +
\frac{1}{(4\pi)^2} \frac{1}{D-4} \gamma_i \right) , \nonumber \\
\beta_i^R & \!\! = & \!\! \mu^{D-4} \left( \beta_i^{R,r}(\mu) +
\frac{1}{(4\pi)^2} \frac{1}{D-4} \gamma_i^R \right) , \nonumber \\
\beta_i^{RR} & \!\! = & \!\! \mu^{D-4} \left( \beta_i^{RR,r}(\mu) +
\frac{1}{(4\pi)^2} \frac{1}{D-4} \gamma_i^{RR} \right) , \nonumber\\
\end{eqnarray}
where $\gamma_i$, $\gamma_i^R$ and $\gamma_i^{RR}$ are the divergent coefficients given by $S_1^{\mathrm{div}}$ that constitute the $\beta$-function of our Lagrangian. The list of these operators, needed for the renormalization, and the values of $\gamma_i$, $\gamma_i^R$ and $\gamma_i^{RR}$, which fix the running of the couplings, appear partially in \cite{treball} and fully in \texttt{http://ific.uv.es/quiral/rt1loop.html}.
Some remarks are in order to understand these results:
\begin{enumerate}
\item To be consistent with the initial choice, we have considered only the operators $\mathcal{O}_i$ in equation (\ref{NLO}) which couple up to two resonances. Moreover, as explained in Ref.~\cite{treball}, a cut in the number of resonances is needed in the procedure used to perform the functional integration.
\item As the Lagrangian of Eq.~(\ref{Lagrangian}) only considers terms with the minimum number of derivatives, $\mathcal{O}(p^2)$, the operators of $\mathcal{L}_1$ are constructed by resonances and chiral tensors up to $\mathcal{O}(p^4)$.
\item Although the number of operators in (\ref{NLO}) is very large, one should keep in mind that we are studying only the divergent parts of the couplings. We expect that most of their finite parts must vanish in order to recover a good short-distance behaviour.
\end{enumerate}
Our result provides the running of the $\alpha_i$, $\beta_i^R$ and $\beta_i^{RR}$ couplings through the renormalization group equations (RGE). From Eq.~(\ref{running}) we get~:
\begin{equation} \label{eq:rgeq}
\mu \, \frac{d}{d \mu} \, \alpha_i^r(\mu) \, = \,
- \, \frac{\gamma_i}{16 \,\pi^2} \; ,
\end{equation}
and, analogously, for $\beta_i^R$ and $\beta_i^{RR}$. These results can be potentially useful if we are interested in the phenomenological evaluation of the resonance couplings at this order. In this context it is interesting to take a closer look at the running of the resonance couplings in the original R$\chi$T Lagrangian \cite{RChTa}, namely $c_d^r(\mu)$, $c_m^r(\mu)$ and $d_m^r(\mu)$, once the large-$N_C$ relations of the couplings are used \cite{treball}. Thus we predict no evolution for $c_m^r(\mu)$ and $d_m^r(\mu)$, while unfortunately we cannot conclude anything about $c_d^r(\mu)$, as there are no known constraints on $\lambda_1^{SS}$ and $\lambda_2^{SS}$.
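Since the coefficients $\gamma_i$ are constants, Eq.~(\ref{eq:rgeq}) can be integrated in closed form, giving the logarithmic running
\begin{equation}
\alpha_i^r(\mu) \, = \, \alpha_i^r(\mu_0) \, - \, \frac{\gamma_i}{16\, \pi^2}\, \ln \frac{\mu}{\mu_0} \; ,
\end{equation}
and analogously for $\beta_i^{R,r}(\mu)$ and $\beta_i^{RR,r}(\mu)$; this makes explicit that the one-loop divergences induce only a mild, logarithmic scale dependence of the renormalized couplings.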
\vspace{0.4cm}
\noindent {\bf Acknowledgments} \\
I wish to thank S.~Narison for the organization of the 12th International QCD Conference and J.~Portol\'es and P.D.~Ruiz-Femen\' \i a for their helpful comments. My work is supported by a FPU scholarship of the Spanish MEC. This work has been supported in part by the EU HPRN-CT2002-00311 (EURIDICE), by MEC (Spain) under grant FPA2004-00996 and by Generalitat Valenciana under grants GRUPOS03/013, GV04B-594 and GV05/015.
\vspace*{-0.1cm}
\section{Introduction}\label{sec:intro}
In the past few years, a multitude of different
formalisms combining probabilistic reasoning with logics, databases, or logic programming
has been developed.
Prominent examples include PHA and ICL~\cite{Poole:93,Poole00},
PRISM~\cite{SatoKameya:01}, SLPs~\cite{Muggleton96},
ProbView~\cite{Lakshmanan}, CLP($\cal BN$)~\cite{Costa03:uai},
CP-logic~\cite{Vennekens}, Trio~\cite{Trio}, probabilistic
Datalog~(pD)~\cite{Fuhr00}, and probabilistic databases~\cite{DalviS04}. Although these logics have been
traditionally studied in the knowledge representation and database
communities, the focus is now often on a machine learning
perspective, which imposes new requirements.
First, these logics must be
simple enough to be learnable and at the same time sufficiently expressive to
support interesting probabilistic inferences. Second, because
learning is computationally expensive and requires answering long
sequences of possibly complex queries, inference in such logics must
be fast, although inference in even the simplest probabilistic logics is computationally hard.
In this paper, we study these problems in the context of a simple
probabilistic logic, ProbLog~\cite{DeRaedt07}, which has been used for
learning in the context of large
biological networks where edges are labeled with probabilities. Large
and complex networks of biological concepts (genes, proteins,
phenotypes, etc.) can be extracted from public databases, and
probabilistic links between concepts can be obtained by various
techniques~\cite{Sevon06}.
ProbLog is essentially an extension of Prolog where a program defines a distribution over all its possible non-probabilistic subprograms. Facts are labeled with probabilities and treated as mutually independent random variables indicating whether or not the corresponding fact belongs to a randomly sampled program.
The success
probability of a query is defined as the probability that it succeeds
in such a random subprogram. The semantics of ProbLog is not new: it is an instance of the distribution semantics~\cite{Sato:95}. This is a
well-known semantics for probabilistic logics that has been
(re)defined multiple times in the literature, often in a more limited database setting; cf.~\cite{Dantsin,Poole:93,Fuhr00,Poole00,DalviS04}. Sato has, however, shown that the semantics is also well-defined in the case of a countably infinite set of random variables and formalized it in his well-known distribution semantics~\cite{Sato:95}.
However, even though relying on the same semantics, in order to allow efficient inference, systems such as PRISM~\cite{SatoKameya:01} and PHA~\cite{Poole:93} additionally require all proofs of a query to be mutually exclusive. Thus, they cannot easily represent the type of network analysis tasks that motivated ProbLog. ICL~\cite{Poole00} extends PHA to the case where proofs need not be mutually exclusive. In contrast to the ProbLog implementation presented here, Poole's AILog2, an implementation of ICL, uses a meta-interpreter and is not tightly integrated with Prolog.
We contribute exact and approximate inference algorithms for
ProbLog. We present algorithms for computing the success and
explanation probabilities of a query, and show how they can be
efficiently implemented combining Prolog
inference with Binary Decision Diagrams (BDDs)~\cite{Bryant86}. In addition to an iterative deepening algorithm that computes an approximation along the lines of~\cite{Poole93:jrnl}, we further
adapt the Monte Carlo approach
used by~\cite{Sevon06} in the context of biological network inference. These two approximation algorithms compute an upper and a lower bound on the success probability. We
also contribute an additional approximation algorithm that computes a lower bound using
only the $k$ most likely proofs.
The key contribution of this paper is the tight integration of these
algorithms in the state-of-the-art
YAP-Prolog system. This integration includes several improvements over the initial implementation used in~\cite{DeRaedt07}, which are needed to use ProbLog to effectively query Sevon's Biomine
network~\cite{Sevon06} containing about 1,000,000 nodes and
6,000,000 edges, as will be shown in the experiments.
This paper is organised as follows. After introducing ProbLog and its semantics in Section 2,
we present several algorithms for exact and approximate inference in Section 3. Section 4 then
discusses how these algorithms are implemented in YAP-Prolog, and Section 5 reports on experiments
that validate the approach. Finally, Section 6 concludes and touches upon related work.
\section{ProbLog}\label{sec:problog}
A ProbLog program consists of a set of labeled facts $p_i::c_i$ together with a set of definite clauses. Each ground instance (that is, each instance not containing variables) of such a fact $c_i$ is true with probability $p_i$, that is, these facts correspond to random variables. We assume that these variables are mutually independent.\footnote{If the program contains multiple instances of the same fact, they correspond to different random variables, i.e.~$\{p::c\}$ and $\{p::c, p::c\}$ are different ProbLog programs.}
The definite clauses allow one to add arbitrary \emph{background knowledge} (BK).
Figure~\ref{fig:Ex} shows
a small probabilistic graph that we shall use as running example in the text.
It can be encoded in ProbLog as follows:
\begin{equation}
\begin{array}{lllll}
0\ldotp8 :: \mathtt{edge(a,c)\ldotp} & ~~~~ & 0\ldotp7 :: \mathtt{edge(a,b)\ldotp} & ~~~~ & 0\ldotp8 :: \mathtt{edge(c,e)\ldotp} \\
0\ldotp6 :: \mathtt{edge(b,c)\ldotp} & ~~~~ & 0\ldotp9 :: \mathtt{edge(c,d)\ldotp} & ~~~~ & 0\ldotp5 :: \mathtt{edge(e,d)\ldotp}
\end{array}
\end{equation}
Such a probabilistic graph can be used to sample subgraphs by tossing
a coin for each edge.
Given a ProbLog program $T=\{p_1::c_1,\cdots,p_n::c_n\} \cup BK$ and a finite set of possible substitutions $\{\theta_{j1}, \ldots \theta_{ji_j}\}$ for each probabilistic fact $p_j::c_j$, let $L_T$ denote the maximal set of \emph{logical} facts that can be added to $BK$, that is, $L_T=\{c_1\theta_{11}, \ldots , c_1\theta_{1i_1}, \cdots, c_n\theta_{n1}, \ldots , c_n\theta_{ni_n}\}$. As the random variables corresponding to facts in $L_T$ are mutually independent, the ProbLog program defines a probability distribution over ground logic programs $L \subseteq L_T$:
\begin{equation}
P(L|T)=\prod\nolimits_{c_i\theta_j\in L}p_i\prod\nolimits_{c_i\theta_j\in L_T\backslash L}(1-p_i)\ldotp
\end{equation}
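For instance, in the example graph, the subprogram containing exactly the edges from $c$ to $d$ and from $c$ to $e$ is sampled with probability
\[
P(\{\mathtt{edge(c,d)},\mathtt{edge(c,e)}\}|T) \,=\, 0\ldotp9 \cdot 0\ldotp8 \cdot (1-0\ldotp8)(1-0\ldotp7)(1-0\ldotp6)(1-0\ldotp5) \,=\, 0\ldotp00864\ldotp
\]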
Since the background knowledge $BK$ is fixed and there is a one-to-one mapping between ground definite clause programs and Herbrand interpretations, a ProbLog program thus also defines a distribution over its Herbrand interpretations.
Sato has shown how this semantics can be generalized to the countably infinite case; we refer to~\cite{Sato:95} for details. For ease of readability, in the remainder of this paper we will restrict ourselves to the finite case and assume all probabilistic facts in a ProbLog program to be ground.
We extend our example with the following background knowledge:
\begin{equation}
\begin{array}{lll}
\mathtt{path(X,Y)} & \mathtt{:-} & \mathtt{edge(X,Y)\ldotp} \\
\mathtt{path(X,Y)} & \mathtt{:-} & \mathtt{edge(X,Z), path(Z,Y)\ldotp}
\end{array}
\end{equation}
We can then ask for the probability that there exists a path
between two nodes, say \emph{c} and \emph{d}, in our probabilistic graph, that is, we query for the probability that a randomly sampled subgraph contains the
edge from \emph{c} to \emph{d}, or the path from \emph{c} to \emph{d}
via \emph{e} (or both of these).
\begin{figure}[t]
\centering
\includegraphics[]{graph}
\caption{ Example of a probabilistic graph: edge labels
indicate the probability that the edge is part of the graph.}\label{fig:Ex}
\end{figure}
Formally, the \emph{success probability} $P_s(q|T)$ of a query $q$ in
a ProbLog~program~$T$ is the marginal of $P(L|T)$ with respect to $q$, i.e.
\begin{equation}
P_s(q|T) = \sum\nolimits_{L\subseteq L_T}P(q|L)\cdot P(L|T)\;, \label{eq:p_suc}
\end{equation}
where $P(q|L) = 1$ if there exists a $\theta$ such that $L\cup BK\models
q\theta$, and $P(q|L)=0$ otherwise. In other words, the success
probability of query $q$ is the probability that the query $q$ is
\emph{provable} in a randomly sampled logic program.
In our example, $40$ of the $64$ possible subprograms allow one to prove \emph{path$(c,d)$}, namely all those that contain at least the edge from \emph{c} to \emph{d} or both the edge from
\emph{c} to \emph{e} and from \emph{e} to \emph{d}, so the success probability of that query is the sum of the probabilities of these programs:
$P_s(path(c,d)|T)=P(\{ab,ac,bc,cd,ce,ed\}|T)+\ldots +P(\{cd\}|T)=0\ldotp94$, where $xy$ is used
as a shortcut for \emph{edge$(x,y)$} when listing elements of a subprogram. We will use this convention throughout the paper. Clearly, listing all subprograms is infeasible in practice; an alternative approach will be discussed in Section~\ref{sec:exact}.
A ProbLog program also defines the probability of a \emph{specific} proof $E$, also called \emph{explanation}, of some query $q$, which is again a marginal of $P(L|T)$. Here, an \emph{explanation} is a minimal subset of the probabilistic facts that together with the background knowledge entails $q\theta$ for some substitution $\theta$. Thus, the probability of such an explanation $E$ is that of sampling a logic program $L\cup E$ that contains at least all the probabilistic facts in $E$, that is, the marginal with respect to these facts:
\begin{equation}
P(E|T) = \sum\nolimits_{ L\subseteq (L_T\backslash E)} P(L\cup E |T) = \prod\nolimits_{c_i \in E}p_i\label{eq:deriv_px}
\end{equation}
The \emph{explanation probability} $P_x(q|T)$ is then defined
as the probability of the most likely explanation or proof of the
query~$q$
\begin{equation}
P_x(q|T) = \max\nolimits_{E\in E(q)}P(E|T)
= \max\nolimits_{E\in E(q)} \prod_{c_i \in E}p_i,\label{eq:p_exp}
\end{equation}
where $E(q)$ is the set of all explanations for query
$q$, i.e., all minimal sets $E\subseteq L_T$ of probabilistic facts such that $E \cup BK \models q$~\cite{Kimmig07}.
In our example, the set of all explanations for \emph{path$(c,d)$}
contains the edge from \emph{c} to \emph{d} (with
probability 0.9) as well as the path consisting of the edges from
\emph{c} to \emph{e} and from \emph{e} to \emph{d} (with probability
$0\ldotp8\cdot 0\ldotp5=0\ldotp4$). Thus, $P_x(path(c,d)|T)=0\ldotp9$.
The ProbLog semantics is essentially a distribution
semantics~\cite{Sato:95}. Sato has rigorously shown that this class
of programs defines a joint probability distribution over the set of
possible least Herbrand models of the program (allowing functors), that is, of the
background knowledge $BK$ together with a
subprogram $L \subseteq L_T$; for further details we refer to~\cite{Sato:95}. The distribution semantics has been used widely in the
literature, though often under other names or in a more restricted setting; see e.g.~\cite{Dantsin,Poole:93,Fuhr00,Poole00,DalviS04}.
\section{Inference in ProbLog}\label{sec:inference}
This section discusses algorithms for computing exactly or
approximately the success and explanation probabilities of ProbLog
queries. It additionally contributes a new algorithm for Monte Carlo approximation of success probabilities.
\subsection{Exact Inference}\label{sec:exact}
Calculating the \emph{success probability} of a query using
Equation~(\ref{eq:p_suc}) directly is infeasible for all but the tiniest
programs, as the number of subprograms to be checked is exponential in the number of probabilistic facts.
However, as we have seen in our example in Section~\ref{sec:problog}, we can describe all subprograms allowing for a specific proof by means of the facts that such a program has to contain, i.e., all the ground probabilistic facts used in that proof. As probabilistic facts correspond to random variables indicating the presence of facts in a sampled program, we alternatively denote proofs by conjunctions of such random variables.
In our example, query \emph{path(c,d)} has two proofs in the full program: \emph{\{edge(c,d)\}} and \emph{\{edge(c,e),edge(e,d)\}}, or, using logical notation, $cd$ and $ce \wedge ed$. The set of all subprograms containing \emph{some} proof thus can be described by a disjunction over all possible proofs, in our case, $cd \vee (ce \wedge ed)$.
This idea forms the basis for the inference method presented in~\cite{DeRaedt07}, which uses two steps:
\begin{enumerate}
\item Compute the proofs of the query $q$ in the logical
part of the theory $T$, that is, in $BK \cup L_T$. The result will be a DNF
formula.
\item Compute the probability of this formula.
\end{enumerate}
Similar approaches are used for PRISM~\cite{SatoKameya:01}, ICL~\cite{Poole00} and pD~\cite{Fuhr00}.
The probability of a single given proof, cf.~Equation~(\ref{eq:deriv_px}), is the marginal over all programs allowing for that proof, and thus equals the product of the probabilities of the facts used by that proof. However, we cannot directly sum the results for the different proofs to obtain the success probability, as a specific subprogram can allow several proofs and therefore contributes to the probability of each of these proofs. Indeed, in our example, all programs that are supersets of \emph{\{edge(c,e),edge(e,d),edge(c,d)\}} contribute to the marginals of both proofs and would therefore be counted twice if summing the probabilities of the proofs. However, for mutually exclusive conjunctions, that is, conjunctions describing disjoint sets of subprograms, the probability is the sum of the individual probabilities.
This situation can be achieved by adding \emph{negated} random variables to a conjunction, thereby explicitly excluding subprograms covered by another part of the formula from the corresponding part of the sum.
In the example, extending $ce \wedge ed$ to $ce \wedge ed \wedge \neg cd$ reduces the second part of the sum to those programs not covered by the first:
\[P_s(path(c,d)|T)=P(cd \vee (ce\wedge ed)|T)\]\[= P(cd|T)+P(ce\wedge ed\wedge\neg cd|T) \]\[= 0\ldotp9 + 0\ldotp8\cdot0\ldotp5\cdot(1-0\ldotp9)=0\ldotp94\]
\noindent However, as the number of proofs grows, disjoining them gets more involved. Consider for example the query \emph{path(a,d)} which has four different but highly interconnected proofs. In general, this problem is known as the \emph{disjoint-sum-problem} or the two-terminal network reliability problem, which is \#P-complete~\cite{Valiant1979}.
Before returning to possible approaches to tackle the disjoint-sum-problem at the end of this section, we will now discuss the two steps of ProbLog's exact inference in more detail.
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{sld}
\caption{SLD-tree for query \emph{path$(c,d)$.}}
\label{fig:SLD}
\end{figure}
Following Prolog, the first step employs SLD-resolution to obtain all different
proofs. As an example, the SLD-tree for the query \emph{?- path$(c,d)$.}
is depicted in Figure~\ref{fig:SLD}.
Each successful proof in the SLD-tree uses a set of ground probabilistic facts $\{p_1::c_1, \cdots, p_k::c_k\}
\subseteq T$. These facts
are necessary for the proof, and the proof is \emph{independent} of other
probabilistic facts in~$T$.
Let us now introduce a Boolean random variable $b_i$ for each ground probabilistic fact
$p_i::c_i \in T$, indicating whether $c_i$ is in a sampled logic program, that is,
$b_i$ has probability $p_i$ of being true.\footnote{For better readability, we do not write substitutions explicitly here.}
A particular proof of query $q$ involving ground facts $\{p_1::c_1, \cdots, p_k::c_k\}
\subseteq T$ is thus represented by the conjunctive formula $b_1
\wedge \cdots \wedge b_k$, which at the same time represents the set of all subprograms containing these facts.
Furthermore, using $E(q)$ to denote the set of proofs or explanations of the goal $q$, the set of all subprograms containing \emph{some} proof of $q$ can be denoted by $\bigvee_{e \in E(q) } \, \bigwedge_{c_i \in e} b_i $, as the following derivation shows:
\begin{eqnarray*}
\bigvee_{e \in E(q) } \, \bigwedge_{c_i \in e} b_i & = & \bigvee_{e \in E(q) } \left( \bigwedge_{c_i \in e} b_i \wedge \bigwedge_{c_i \in L_T \backslash e} (b_i \vee \neg b_i)\right)\\
& = & \bigvee_{e \in E(q) } \bigvee_{L \subseteq L_T\backslash e } \left( \bigwedge_{c_i \in e} b_i \wedge \left(\bigwedge_{c_i \in L} b_i \wedge \bigwedge_{c_i \in L_T \backslash (L\union e)} \neg b_i\right)\right)\\
& = & \bigvee_{e \in E(q) , L \subseteq L_T\backslash e } \left( \bigwedge_{c_i \in L \union e} b_i \wedge \bigwedge_{c_i \in L_T \backslash (L\union e)} \neg b_i\right)\\
& = & \bigvee_{ L \subseteq L_T, \exists\theta L\union BK \models q\theta } \left( \bigwedge_{c_i \in L } b_i \wedge \bigwedge_{c_i \in L_T \backslash L} \neg b_i\right)
\end{eqnarray*}
We first add all possible ways of extending a proof $e$ to a full sampled program by considering each fact not in $e$ in turn. We then note that the disjunction of these fact-wise extensions can be written on the basis of sets. Finally, we rewrite the condition of the disjunction in the terms of Equation~(\ref{eq:p_suc}). This is possible as each subprogram that is an extension of an explanation of $q$ entails some ground instance of $q$, and vice versa, each subprogram entailing $q$ is an extension of some explanation of $q$.
As the DNF now contains conjunctions representing fully specified programs, its probability is a sum of products, which directly corresponds to Equation~(\ref{eq:p_suc}):
\begin{eqnarray*}
\lefteqn{ P(\bigvee_{ L \subseteq L_T, \exists\theta L\union BK \models q\theta } \left( \bigwedge_{c_i \in L } b_i \wedge \bigwedge_{c_i \in L_T \backslash L} \neg b_i\right)) }\\
&=& \sum_{ L \subseteq L_T, \exists\theta L\union BK \models q\theta } \left( \prod_{c_i \in L } p_i \cdot \prod_{c_i \in L_T \backslash L} (1- p_i)\right)\\
&=& \sum_{ L \subseteq L_T, \exists\theta L\union BK \models q\theta } P(L|T)
\end{eqnarray*}
We thus obtain the following alternative characterisation of the success probability:
\begin{equation}
P_s(q|T) = P\left( \bigvee_{e \in E(q) } \, \bigwedge_{c_i \in e} b_i \right)
\label{eq:dnf}
\end{equation}
where $E(q)$ denotes the set of proofs or explanations of the goal $q$
and $b_i$ denotes the Boolean variable corresponding to ground probabilistic
fact $p_i::c_i$. Thus, the problem of computing the
success probability of a ProbLog query can be reduced to that of
computing the probability of a DNF formula.
However, as argued above, due to overlap between different conjunctions, the proof-based DNF of Equation~(\ref{eq:dnf}) cannot directly be transformed into a sum of products.
Computing the probability of DNF formulae thus involves solving the disjoint-sum-problem, and therefore is itself a \#P-hard problem. Various
algorithms have been developed to tackle this problem. The pD-engine HySpirit~\cite{Fuhr00} uses the inclusion-exclusion principle, which is reported to scale to about ten proofs. For ICL, which extends PHA by allowing non-disjoint proofs, \cite{Poole00} proposes a symbolic disjoining algorithm, but does not report scalability results.
Our implementation of ProbLog employs Binary
Decision Diagrams (BDDs)~\cite{Bryant86}, an efficient graphical
representation of a Boolean function over a set of variables, which scales to tens of thousands of proofs; see
Section~\ref{sec:BDD} for more details. PRISM~\cite{SatoKameya:01} and PHA~\cite{Poole:93} differ from the systems mentioned above in that they avoid the disjoint-sum-problem by requiring the user to write programs such that proofs are guaranteed to be disjoint.
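To make the underlying computation concrete, the following minimal Prolog sketch (purely illustrative; the actual implementation uses BDDs as discussed in Section~\ref{sec:BDD}) computes the probability of a monotone DNF by Shannon expansion, that is, by recursively conditioning on one variable at a time, $P(f) = p_x \cdot P(f|_{x=1}) + (1-p_x)\cdot P(f|_{x=0})$; this is essentially the computation a BDD evaluation performs with memoisation. The representation (a DNF as a list of proofs, each a list of fact identifiers) and \texttt{fact\_prob/2} are our own illustrative conventions:
\begin{verbatim}
% Illustrative only: probability of a monotone DNF over independent facts
% by Shannon expansion. A DNF is a list of proofs; each proof is a list of
% fact identifiers. fact_prob/2 holds the probabilities of the example.
:- use_module(library(lists)).

fact_prob(cd, 0.9).   fact_prob(ce, 0.8).   fact_prob(ed, 0.5).

dnf_prob([], 0.0) :- !.                    % no proof left: formula false
dnf_prob(DNF, 1.0) :- member([], DNF), !.  % empty conjunction: formula true
dnf_prob([[ID|Rest]|Proofs], P) :-
    fact_prob(ID, PT),
    condition([[ID|Rest]|Proofs], ID, true,  DNF1),
    condition([[ID|Rest]|Proofs], ID, false, DNF0),
    dnf_prob(DNF1, P1),
    dnf_prob(DNF0, P0),
    P is PT * P1 + (1 - PT) * P0.

% condition(+DNF, +ID, +Value, -DNF2): fix fact ID to Value, dropping
% satisfied literals (Value = true) and falsified proofs (Value = false).
condition([], _, _, []).
condition([Proof|Rest], ID, Value, Out) :-
    condition(Rest, ID, Value, Out1),
    (   select(ID, Proof, Proof1)          % proof mentions the fact
    ->  ( Value == true -> Out = [Proof1|Out1] ; Out = Out1 )
    ;   Out = [Proof|Out1]
    ).

% ?- dnf_prob([[cd], [ce, ed]], P).        % P = 0.94, as computed above.
\end{verbatim}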
On the other hand, as the \emph{explanation probability} $P_x$ exclusively depends on the probabilistic facts used in one proof, it can be calculated using a simple branch-and-bound approach based on the SLD-tree, where partial proofs are discarded if their probability drops below that of the best proof found so far.
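A minimal, purely illustrative sketch of this search is the following meta-interpreter (hypothetical; the actual implementation instruments the compiled program as described in Section~\ref{sec:implementation}). For simplicity, it multiplies a fact's probability again on repeated use within a proof, whereas ProbLog counts each fact only once; \texttt{prob\_fact/2} is an assumed accessor for the probabilistic facts, and background clauses are reached through \texttt{clause/2}:
\begin{verbatim}
% Illustrative branch-and-bound for the explanation probability: explore
% proofs depth-first, pruning any partial proof whose probability is
% already below the best complete proof found so far.
explanation_prob(Goal, Best) :-
    nb_setval(best_prob, 0.0),
    forall(prove(Goal, 1.0, P),
           ( nb_getval(best_prob, B),
             ( P > B -> nb_setval(best_prob, P) ; true ) )),
    nb_getval(best_prob, Best).

prove(true, P, P) :- !.
prove((A, B), P0, P) :- !,
    prove(A, P0, P1),
    prove(B, P1, P).
prove(G, P0, P) :-                 % probabilistic fact: multiply and prune
    prob_fact(G, PF),
    P is P0 * PF,
    nb_getval(best_prob, B),
    P > B.
prove(G, P0, P) :-                 % background-knowledge clause
    \+ prob_fact(G, _),
    clause(G, Body),
    prove(Body, P0, P).
\end{verbatim}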
\subsection{Approximative Inference}
As the size of the DNF formula grows with the number of proofs, its
evaluation can become quite expensive, and ultimately infeasible. For
instance, when searching for paths in graphs or networks, even in
small networks with a few dozen edges there are easily $O(10^6)$
possible paths between two nodes. ProbLog therefore includes several
approximation methods.
\subsubsection{Bounded Approximation}
The first approximation algorithm, a slight variant of the one proposed in~\cite{DeRaedt07}, uses DNF formulae to obtain both an upper and a lower bound on the probability of a query. It is closely related to work by~\cite{Poole93:jrnl} in the context of PHA, but adapted towards ProbLog. The method relies on two observations.
First, we remark that the DNF formula describing sets of proofs is \emph{monotone}, meaning that adding more proofs will never decrease the probability of the formula being true. Thus, formulae describing subsets of the full set of proofs of a query will always give a lower bound on the query's success probability. In our example, the lower bound obtained from the shorter proof would be $P(cd|T) = 0\ldotp9$, while that from the longer one would be $P(ce\wedge ed|T) = 0\ldotp4$.
Our second observation is that the probability of a proof $b_1 \wedge \ldots\wedge b_n$ will always be at most the probability of an arbitrary prefix $b_1 \wedge \ldots\wedge b_i, i\leq n$.
In our example, the probability of the second proof will be at most the probability of its first edge from $c$ to $e$, i.e., $P(ce|T) = 0\ldotp8 \geq 0\ldotp4$. As disjoining sets of proofs, i.e., including information on facts that are \emph{not} elements of the subprograms described by a certain proof, can only decrease the contribution of single proofs, this upper bound carries over to a set of proofs or partial proofs, as long as prefixes for all possible proofs are included. Such sets can be obtained from an incomplete SLD-tree, i.e., an SLD-tree where branches are only extended up to a certain point.
This motivates ProbLog's \emph{bounded approximation algorithm}. The algorithm relies on a probability threshold $\gamma$ to stop growing the SLD-tree and thus obtain DNF formulae for the two bounds\footnote{Using a probability threshold instead of the depth bound of~\cite{DeRaedt07} has been found to speed up convergence, as upper bounds have been found to be tighter on initial levels.}. The lower bound formula $d_1$ represents all proofs with a probability above the current threshold. The upper bound formula $d_2$ additionally includes all derivations that have been stopped due to reaching the threshold, as these still \emph{may} succeed. Our goal is therefore to grow $d_1$ and $d_2$ in order to decrease $P(d_2|T)-P(d_1|T)$.
Given an acceptance threshold $\delta_p$, an initial probability threshold $\gamma$, and a shrinking factor $\beta\in(0,1)$, the algorithm proceeds in an iterative-deepening manner as outlined in Algorithm~\ref{alg:delta}. Initially, both $d_1$ and $d_2$ are set to \textsc{False}, the neutral element with respect to disjunction, and the probability bounds are $0$ and $1$, as we have no full proofs yet, and the empty partial proof holds in any model.
\begin{algorithm}[t]
\caption{Bounded approximation using iterative deepening with probability thresholds.}
\label{alg:delta}
\begin{algorithmic}
\FUNCTION{\textsc{Bounds}(interval width $\delta_p$, initial threshold $\gamma$, constant $\beta\in(0,1)$)}
\STATE $d_1 = $ \textsc{False}; $d_2 = $ \textsc{False}; $P(d_1|T) =0$; $P(d_2|T) = 1$;
\WHILE{$P(d_2|T) - P(d_1|T)>\delta_p$}
\STATE $p = $\textsc{True};
\REPEAT
\STATE Expand current proof $p$
\UNTIL {either $p$:
\STATE $\quad \quad$ (a) Fails, in this case backtrack to the remaining choice points;
\STATE $\quad \quad$ (b) Succeeds, in this case set $d_1 := d_1 \vee p$ and $d_2 := d_2 \vee p$;
\STATE $\quad \quad$ (c) $P(p|T) < \gamma$, in this case set $d_2 := d_2 \vee p$}
\IF{$d_2 == $ \textsc{False}} \STATE set $d_2 = $ \textsc{True} \ENDIF
\STATE Compute $P(d_1|T)$ and $P(d_2|T)$
\STATE $\gamma := \gamma\cdot\beta$
\ENDWHILE
\STATE return $[P(d_1|T),P(d_2|T)]$
\ENDFUNCTION
\end{algorithmic}
\end{algorithm}
It should be clear that $P(d_1|T)$ monotonically increases, as the number of proofs never decreases. On the other hand, as explained above, if $d_2$ changes from one iteration to the next, this is always because a partial proof $p$ is either removed from $d_2$ and therefore no longer contributes to the probability, or it is replaced by proofs $p_1,\ldots , p_n$, such that $p_i = p \land s_i$, hence $P(p_1 \lor \ldots \lor p_n|T) = P(p \land s_1\lor\ldots\lor p\land s_n|T) = P(p \land ( s_1\lor\ldots\lor s_n)|T)$. As proofs are subsets of the probabilistic facts in the ProbLog program, each literal's random variable appears at most once in the conjunction representing a proof, even if the corresponding subgoal is called multiple times when constructing the proof.
We therefore know that the literals in the prefix $p$ cannot be in any suffix $s_i$, hence, given ProbLog's independence assumption, $P(p \land ( s_1\lor\ldots\lor s_n)|T) = P(p|T)P(s_1\lor\ldots\lor s_n|T) \leq P(p|T)$. Therefore, $P(d_2|T)$ monotonically decreases.
As an illustration, consider a probability threshold $\gamma =0\ldotp9$ for the
SLD-tree in Figure~\ref{fig:SLD}. In this case, $d_1$ encodes the left
success path while $d_2$ additionally encodes the path up to
\emph{path$(e,d)$}, i.e., $d_1 = cd$ and $d_2 = cd \vee ce$, whereas the
formula for the full SLD-tree is $d = cd \vee (ce \wedge ed)$. The lower bound thus is $0\ldotp9$, the upper bound (obtained by disjoining $d_2$ to $cd \vee (ce\wedge\neg cd)$) is $0\ldotp98$, whereas the true probability is $0\ldotp94$.
Notice that in order to implement this algorithm we need to compute the probability of a set of proofs. This task will be described in detail in Section~\ref{sec:implementation}.
\subsubsection{K-Best} Using a fixed number of proofs to approximate the
probability allows better control of the overall complexity, which is crucial if large numbers of queries have to be evaluated, e.g., in the context of parameter learning.
\cite{Gutmann08}~therefore introduces the $k$-probability $P_k(q|T)$, which approximates
the success probability by using the $k$-best (that is, the $k$ most likely)
explanations instead of all proofs when building the DNF formula used
in Equation~(\ref{eq:dnf}):
\begin{equation}
P_k(q|T) = P\left( \bigvee_{e \in E_k(q) } \, \bigwedge_{b_i \in var(e)} b_i \right)\label{eq:p_k}
\end{equation}
where $E_k(q)=\{e \in E(q)|P_x(e)\geq P_x(e_k)\}$ with $e_k$ the $k$th
element of $E(q)$ sorted by non-increasing probability. Setting
$k=\infty$ leads to the success probability, whereas $k=1$ corresponds to the explanation probability provided that there is a single best proof.
The branch-and-bound approach used to calculate the explanation probability can directly be generalized to finding the $k$-best proofs; cf. also~\cite{Poole:93}.
To illustrate $k$-probability, we consider again our example graph,
but this time with query \emph{path$(a,d)$}. This query has four proofs,
represented by the conjunctions $ac\wedge cd$, $ab\wedge bc \wedge
cd$, $ac\wedge ce \wedge ed$ and $ab\wedge bc \wedge ce \wedge ed$,
with probabilities $0\ldotp72$, $0\ldotp378$, $0\ldotp32$ and $0\ldotp168$
respectively. As $P_1$ corresponds to the explanation probability
$P_x$, we obtain $P_1(path(a,d))=0\ldotp72$. For $k=2$, the overlap between the
best two proofs has to be taken into account: the second proof only
adds information if the first one is absent. As they share edge
$cd$, this means that edge $ac$ has to be missing, leading to
$P_2(path(a,d))=P((ac\wedge cd) \vee (\neg ac \wedge ab\wedge bc
\wedge cd))=0\ldotp72+(1-0\ldotp8)\cdot 0\ldotp378=0\ldotp7956$. Similarly, we obtain
$P_3(path(a,d))=0\ldotp8276$ and $P_k(path(a,d))=0\ldotp83096$ for $k\geq 4$.
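For instance, the value for $k=3$ can be obtained by disjoining the three most likely proofs, where the second conjunction requires the absence of $ac$ and the third the absence of $cd$, making the three terms mutually exclusive:
\[P_3(path(a,d))=P((ac\wedge cd)\vee(\neg ac\wedge ab\wedge bc\wedge cd)\vee(ac\wedge\neg cd\wedge ce\wedge ed))\]\[=0\ldotp72+(1-0\ldotp8)\cdot 0\ldotp378+0\ldotp8\cdot(1-0\ldotp9)\cdot 0\ldotp8\cdot 0\ldotp5=0\ldotp8276\ldotp\]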
\subsubsection{Monte Carlo}\label{sec:mc_method}
As an alternative approximation technique, we propose a Monte Carlo method, where we proceed as follows.
\noindent Execute until convergence:
\begin{enumerate}
\item Sample a logic program from the ProbLog program
\item Check for the existence of some proof of the query of interest
\item Estimate the query probability $P$ as the fraction of samples where the query is provable
\end{enumerate}
We estimate convergence by computing the 95\% confidence interval at each $m$ samples. Given a large number $N$ of samples, we can use the standard normal approximation interval to the binomial distribution:
\[ \delta \approx 2\times\sqrt{\frac{P(1-P)}{N}} \]
\noindent Notice that confidence intervals do not directly correspond to the exact bounds used in our previous approximation algorithm. Still, we employ the same stopping criterion, that is, we run the Monte Carlo simulation until the width of the confidence interval is at most $\delta_p$.
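A minimal, purely illustrative sketch of this sampling loop for a fixed number of samples is shown below (the convergence test based on the confidence interval is omitted, \texttt{prob\_fact/2} is an assumed accessor for the probabilistic facts, and calls to probabilistic facts are assumed ground, as required by the implementation). The outcome of each coin toss is memoised so that a fact keeps a single truth value throughout one sample:
\begin{verbatim}
:- use_module(library(random)).
:- dynamic sampled/2.

% Illustrative Monte Carlo estimate of the success probability from N samples.
mc_prob(Goal, N, P) :-
    mc_loop(Goal, N, 0, S),
    P is S / N.

mc_loop(_, 0, S, S) :- !.
mc_loop(Goal, N, S0, S) :-
    retractall(sampled(_, _)),              % start a fresh sample
    ( \+ \+ sample_prove(Goal) -> S1 is S0 + 1 ; S1 = S0 ),
    N1 is N - 1,
    mc_loop(Goal, N1, S1, S).

sample_prove(true) :- !.
sample_prove((A, B)) :- !, sample_prove(A), sample_prove(B).
sample_prove(G) :-                          % probabilistic fact: sample once
    prob_fact(G, PF), !,
    (   sampled(G, V) -> V == true
    ;   random(X),                          % X uniform in (0,1)
        ( X < PF -> V = true ; V = false ),
        assert(sampled(G, V)),
        V == true
    ).
sample_prove(G) :-
    clause(G, Body),
    sample_prove(Body).
\end{verbatim}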
A similar algorithm (without the use of confidence intervals)
was also used in the context of biological networks (not represented as Prolog programs) by~\cite{Sevon06}.
The use of a Monte Carlo method for probabilistic logic programs was suggested already by~\cite{Dantsin}, although he neither provides details nor reports on an implementation.
Our approach differs from the MCMC method for Stochastic Logic Programs (SLPs) introduced by~\cite{Cussens00} in that we do not use a Markov chain, but restart from scratch for each sample. Furthermore, SLPs are different in that they directly define a distribution over all proofs of a query. Investigating similar probabilistic backtracking approaches for ProbLog is a promising future research direction.
\section{Implementation}\label{sec:implementation}
This section discusses the main building blocks used to implement ProbLog on top of the YAP-Prolog system. An overview is shown in Figure~\ref{fig:problog_imp}, with a typical ProbLog program, including ProbLog facts and background knowledge (BK), at the top.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{implementation}\label{fig:problog}
\caption{ProbLog Implementation: A ProbLog program (top) requires
the ProbLog library which in turn relies on functionality from the
tries and array libraries. ProbLog queries (bottom-left) are sent to
the YAP engine, and may require calling the BDD library CUDD via SimpleCUDD.}
\label{fig:problog_imp}
\end{figure}
The implementation requires ProbLog programs to use the
\texttt{problog} module. Each program consists of a set of labeled
facts and of unlabeled \emph{background knowledge}, a generic
Prolog program. Labeled facts are preprocessed as described below.
Notice that the implementation requires all calls to non-ground probabilistic facts to be ground.
In contrast to standard Prolog queries, where one is interested in
answer substitutions, in ProbLog one is primarily interested in a probability. As
discussed before, two common ProbLog queries ask for the most likely
explanation and its probability, and the probability of whether a
query would have an answer substitution. We have discussed two very
different approaches to the problem:
\begin{itemize}
\item In exact inference, $k$-best and bounded approximation, the engine explicitly reasons about
probabilities of proofs. The challenge is how to compute the
probability of each individual proof, store a large number of
proofs, and compute the probability of sets of proofs.
\item In Monte Carlo, the probabilities of facts are used to sample from ProbLog programs. The
challenge is how to compute a sample quickly, in a way that
inference can be as efficient as possible.
\end{itemize}
ProbLog programs execute from a top-level query and are driven through a ProbLog query. The inference algorithms discussed above can be abstracted as follows:
\begin{itemize}
\item Initialise the inference algorithm;
\item While probabilistic inference did not converge:
\begin{itemize}
\item initialise a new query;
\item execute the query, instrumenting every ProbLog call in the current proof. Instrumentation is required for recording the ProbLog facts required by a proof, but may also be used by the inference algorithm to stop proofs (e.g.,\ if the current probability is lower than a bound);
\item process success or exit substitution;
\end{itemize}
\item Proceed to the next step of the algorithm: this may be trivial or may require calling an external solver, such as a BDD tool, to compute a probability.
\end{itemize}
Notice that the current ProbLog implementation relies on the Prolog
engine to efficiently execute goals. On the other hand, and in
contrast to most other probabilistic language implementations, in ProbLog there
is no clear separation between logical and probabilistic inference:
in a fashion similar to constraint logic programming, probabilistic
inference can drive logical inference.
From a Prolog implementation perspective, ProbLog poses a number of interesting challenges. First, labeled facts have to be efficiently compiled to allow mutual calls between the Prolog program and the ProbLog engine. Second, for exact inference, $k$-best and bounded approximation, sets of proofs have to be manipulated and transformed into BDDs. Finally, Monte Carlo simulation requires representing and manipulating samples. We discuss these issues next.
\subsection{Source-to-source transformation}
We use the \texttt{term\_expansion} mechanism to allow Prolog
calls to labeled facts, and for labeled facts to call the ProbLog
engine. As an example, the program:
\begin{equation}
\begin{array}{l}
\mathtt{0\ldotp715::edge('PubMed\_2196878','MIM\_609065')\ldotp}\\
\mathtt{0\ldotp659::edge('PubMed\_8764571','HGNC\_5014')\ldotp}\\
\end{array}
\end{equation}
\noindent
would be compiled as:
\begin{equation}
\begin{array}{lll}
\mathtt{edge(A,B)} &\mathtt{:-} & \mathtt{problog\_edge(ID,A,B,LogProb),}\\
& & \mathtt{grounding\_id(edge(A,B),ID,GroundID),}\\
& & \mathtt{add\_to\_proof(GroundID,LogProb)\ldotp}\\
& & \\
\multicolumn{3}{l}{\mathtt{problog\_edge(0,'PubMed\_2196878','MIM\_609065',-0\ldotp3348)\ldotp}} \\
\multicolumn{3}{l}{\mathtt{problog\_edge(1,'PubMed\_8764571','HGNC\_5014',-0\ldotp4166)\ldotp}} \\
\end{array}
\end{equation}
\noindent
Thus, the internal representation of each fact contains an identifier, the original arguments, and the logarithm of the probability\footnote{We use the logarithm to avoid numerical problems when calculating the probability of a derivation, which is used to drive inference.}. The \texttt{grounding\_id} procedure will create and store a grounding specific identifier for each new grounding of a non-ground probabilistic fact encountered during proving, and retrieve it on repeated use. For ground probabilistic facts, it simply returns the identifier itself. The \texttt{add\_to\_proof} procedure updates the data structure representing the current path through the search space, i.e., a queue of identifiers ordered by first use, together with its probability. Compared to the original meta-interpreter based implementation of~\cite{DeRaedt07}, the main benefit of source-to-source transformation is better scalability, namely by having a compact representation of the facts for the YAP engine~\cite{DBLP:conf/padl/Costa07} and by allowing access to the YAP indexing mechanism~\cite{jit-index}.
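For illustration, the following is a minimal sketch of such a \texttt{term\_expansion} rule, written in plain Prolog; it is not the actual library code. In particular, the operator priority chosen for \texttt{::}, as well as the helper predicates \texttt{problog\_fresh\_id/1} (generating consecutive fact identifiers) and \texttt{wrapper\_defined/1} (tracking for which predicates a wrapper clause was already emitted), are hypothetical and introduced only for this sketch.
\begin{verbatim}
:- op(550, yfx, ::).            % assumed operator declaration
:- dynamic wrapper_defined/1.

% Expand  P::fact(...)  into an internal fact storing the identifier
% and log-probability, plus (once per predicate) a wrapper clause.
term_expansion((P :: Fact), Clauses) :-
    Fact =.. [F | Args],
    atom_concat(problog_, F, PF),
    problog_fresh_id(ID),       % hypothetical: yields 0, 1, 2, ...
    LogP is log(P),
    append(Args, [LogP], InternalArgs),
    Internal =.. [PF, ID | InternalArgs],
    (   wrapper_defined(F)
    ->  Clauses = [Internal]
    ;   assert(wrapper_defined(F)),
        length(Args, N),
        length(Vars, N),
        Head =.. [F | Vars],
        append(Vars, [LP], CallArgs),
        Call =.. [PF, I | CallArgs],
        Wrapper = (Head :- Call,
                           grounding_id(Head, I, GID),
                           add_to_proof(GID, LP)),
        Clauses = [Wrapper, Internal]
    ).
\end{verbatim}
A query such as \texttt{edge(A,B)} then transparently resolves against the generated wrapper clause, which records the fact in the current proof before succeeding.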
\subsection{Proof Manipulation}
Manipulating proofs is critical in ProbLog. We represent each proof
as a queue containing the identifier of each
different ground probabilistic fact used in the proof, ordered by
first use. The implementation requires calls to non-ground probabilistic facts to be ground, and during proving maintains a table of groundings used within the current query together with their identifiers. Grounding identifiers are based on the fact's identifier extended with a grounding number, i.e.~$5\_1$ and $5\_2$ would refer to different groundings of the non-ground fact with identifier $5$.
In our implementation, the queue is stored in a backtrackable global variable, which is updated by calling \texttt{add\_to\_proof} with an identifier for the current ProbLog fact. We thus exploit Prolog's backtracking mechanism to avoid recomputation of shared proof prefixes when exploring the space of proofs.
Storing a proof is simply a question of adding the value of the variable to a store.
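As a rough model of this mechanism (with global variable names of our own choosing, not necessarily those used by the library), the bookkeeping can be written with YAP's backtrackable global variables as follows:
\begin{verbatim}
% Initialise the current proof at the start of each derivation.
init_proof :-
    b_setval(problog_proof, []),
    b_setval(problog_logprob, 0.0).

% Record a ground fact identifier together with its log-probability;
% a fact used several times within one proof is counted only once.
% All updates are transparently undone on backtracking.
add_to_proof(GroundID, LogProb) :-
    b_getval(problog_proof, IDs),
    (   memberchk(GroundID, IDs)
    ->  true
    ;   b_getval(problog_logprob, L0),
        L1 is L0 + LogProb,
        b_setval(problog_logprob, L1),
        b_setval(problog_proof, [GroundID | IDs])
    ).
\end{verbatim}
Note that in this sketch the list is built in reverse order of first use, so it has to be reversed when the finished proof is stored.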
As we have discussed above, the actual number of proofs can grow very quickly. ProbLog compactly represents a proof as a list of numbers. We would further like to have a scalable implementation of \emph{sets} of proofs, such that we can compute the joint \emph{probability} of large sets of proofs efficiently. Our representation for sets of proofs and our algorithm for computing the probability of such a set are discussed next.
\subsection{Sets of Proofs}
When manipulating sets of proofs, the key operation is often \emph{insertion}:
we would like to add a proof to an existing set of proofs. Some
algorithms, such as exact inference or Monte Carlo, only manipulate
complete proofs. Others, such as bounded approximation, require adding
partial derivations too. The nature of the SLD-tree means that proofs
tend to share both a prefix and a suffix. Partial proofs tend to share
prefixes only. This suggests using \emph{tries} to maintain the set of
proofs. We use the YAP implementation of tries for this task, itself based on XSB Prolog's work on tries of terms~\cite{RamakrishnanIV-99}, which we briefly summarize here.
Tries~\cite{Fredkin-62} were originally invented to index
dictionaries, and have since been generalised to index recursive data
structures such as terms. Please refer
to~\cite{Bachmair-93,Graf-96,RamakrishnanIV-99} for the use of tries in
automated theorem proving, term rewriting and tabled logic
programs. An essential property of the trie data structure is that
common prefixes are stored only once. A trie is a tree structure in which
each path through its data units, the \emph{trie
nodes}, corresponds to a term described by the tokens labelling the
nodes traversed. For example, the tokenized form of the term
$f(g(a),1)$ is the sequence of 4 tokens: $f/2$, $g/1$, $a$ and
$1$. Two terms with common prefixes will branch off from each other at
the first distinguishing token.
A trie's internal nodes are four-field data structures, storing
the node's token, a pointer to the node's
first child, a pointer to the node's parent and a
pointer to the node's next sibling, respectively. Each
internal node's outgoing transitions may be determined by following
the child pointer to the first child node and, from there, continuing
sequentially through the list of sibling pointers. When a list of
sibling nodes becomes larger than a threshold value (8 in our
implementation), we dynamically index the nodes through a hash table
to provide direct node access and therefore optimise the
search. Further hash collisions are reduced by dynamically expanding
the hash tables. Inserting a term requires in the worst case
allocating as many nodes as necessary to represent its complete
path. On the other hand, inserting repeated terms requires traversing
the trie structure until reaching the corresponding leaf node, without
allocating any new node.
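To make the insertion behaviour concrete, here is a toy functional model of a trie over token lists, written as plain Prolog terms with association lists of children. It only mimics the behaviour of the C-level YAP tries described above; in particular, it ignores parent and sibling pointers, hashing, and destructive updates:
\begin{verbatim}
% A trie node is  t(Children)  with Children a list Token-Subtrie.
trie_empty(t([])).

% trie_insert(+Trie0, +Tokens, -Trie): add a token list to the trie.
trie_insert(T, [], T).                  % existing term: nothing to add
trie_insert(t(Cs0), [Tok|Toks], t(Cs)) :-
    (   select(Tok-Sub0, Cs0, Rest)
    ->  trie_insert(Sub0, Toks, Sub),   % shared prefix: reuse the node
        Cs = [Tok-Sub|Rest]
    ;   trie_empty(E),
        trie_insert(E, Toks, Sub),      % allocate a fresh path
        Cs = [Tok-Sub|Cs0]
    ).
\end{verbatim}
Inserting a repeated term then simply traverses existing nodes, while a new term allocates nodes only from its first distinguishing token onwards.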
In order to minimize the number of nodes when storing proofs in a
trie, we use Prolog lists to represent proofs. For example,
a ProbLog proof $[3, 5\_1, 7, 5\_2]$ uses ground
fact 3, a first grounding of fact 5, ground fact 7 and another
grounding of fact 5, that is, list elements in proofs are always
either integers or two integers with an underscore in between.
Figure~\ref{fig:trie_proofs} presents an example of a trie storing
three proofs. Initially, the trie contains the root node only. Next,
we store the proof $[3, 5\_1, 7, 5\_2]$ and six nodes (corresponding
to six tokens) are added to represent it
(Figure~\ref{fig:trie_proofs}(a)). The proof $[3, 5\_1, 9, 7, 5\_2]$
is then stored which requires seven nodes. As it shares a common
prefix with the previous proof, we save the three initial nodes common
to both representations (Figure~\ref{fig:trie_proofs}(b)). The proof
$[3, 4, 7]$ is stored next and we save again the two initial nodes
common to all proofs (Figure~\ref{fig:trie_proofs}(c)).
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{trie_proofs}
\caption{Using tries to store proofs. Initially, the trie contains the
root node only. Next, we store the proofs:
(a) $[3, 5\_1, 7, 5\_2]$;
(b) $[3, 5\_1, 9, 7, 5\_2]$; and
(c) $[3, 4, 7]$.}
\label{fig:trie_proofs}
\end{figure}
\subsection{Binary Decision Diagrams}
\label{sec:BDD}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{bdd}
\caption{Binary Decision Diagram encoding the DNF formula $cd \vee (ce \wedge
ed)$, corresponding to the two proofs of query \emph{path(c,d)} in
the example graph. An internal node labeled $xy$ represents the Boolean
variable for the edge between $x$ and~$y$, solid/dashed edges
correspond to values true/false and are labeled with the probability that the variable takes this value.}\label{fig:BDD}
\end{figure}
To efficiently compute the probability of a DNF formula representing a set of proofs,
our implementation represents this formula as a reduced ordered Binary Decision Diagram (BDD)~\cite{Bryant86}, which can be viewed as a compact encoding of a Boolean decision tree.
Given a fixed variable ordering, a Boolean function $f$ can be represented as
a full Boolean decision tree, where each node on the $i$th level is
labeled with the $i$th variable and has two children called low and
high. Leaves are labeled by the outcome of $f$ for the variable
assignment corresponding to the path to the leaf, where in each node
labeled $x$, the branch to the low (high) child is taken if variable
$x$ is assigned 0 (1). Starting from such a tree, one obtains a BDD
by merging isomorphic subgraphs and deleting redundant nodes until no
further reduction is possible. A node is redundant if the subgraphs
rooted at its children are isomorphic. Figure~\ref{fig:BDD} shows the
BDD for the existence of a path between \emph{c} and \emph{d} in our
earlier example.
We use SimpleCUDD\footnote{\url{http://www.cs.kuleuven.be/~theo/tools/simplecudd.html}} as a wrapper tool for the
BDD package
CUDD\footnote{\url{http://vlsi.colorado.edu/~fabio/CUDD}} to construct
and evaluate BDDs. More precisely, the trie representation of the DNF
is translated to a BDD generation script, which is processed by SimpleCUDD to build the
BDD using CUDD primitives. It is executed via Prolog's shell utility, and results
are reported via shared files.
\begin{algorithm}[t]
\caption{Translating a trie $T$ representing a DNF to a BDD generation script. \textsc{Replace}$(T,C,n_i)$ replaces each occurrence of $C$ in $T$ by $n_i$.}
\label{alg:trie2bdd}
\begin{algorithmic}
\FUNCTION{\textsc{Translate}(trie $T$)}
\STATE $i := 1$
\WHILE{$\neg leaf(T)$}
\STATE $S_{\wedge} := \{(C,P)|C $ leaf in $T$ and single child of its parent $P \}$
\FORALL{$(C,P)\in S_{\wedge}$}
\STATE write $n_i = P\wedge C$
\STATE $T := \textsc{Replace}(T,(C,P),n_i)$
\STATE $i := i + 1$
\ENDFOR
\STATE $S_{\vee} := \{[C_1,\ldots,C_n]|$ leaves $C_j $ are all the children of some parent $P$ in $T\}$
\FORALL{$[C_1,\ldots,C_n]\in S_{\vee}$}
\STATE write $n_i = C_1 \vee \ldots \vee C_n$
\STATE $T := \textsc{Replace}(T,[C_1,\ldots,C_n],n_i)$
\STATE $i := i + 1$
\ENDFOR
\ENDWHILE
\STATE write $top = n_{i-1}$
\ENDFUNCTION
\end{algorithmic}
\end{algorithm}
During the generation of the code, it is crucial to exploit the
structure sharing (prefixes and suffixes) already in the trie
representation of a DNF formula, otherwise CUDD computation time
becomes extremely long or memory overflows quickly.
Since CUDD builds BDDs by joining smaller BDDs using logical operations, the trie
is traversed bottom-up to successively generate code for all its
subtrees. Algorithm~\ref{alg:trie2bdd} gives the details of this procedure. Two types of operations are used to combine nodes.
The first creates conjunctions of leaf nodes and their parent if the leaf is a single child; the second creates disjunctions of all child nodes of a node if these child nodes are all leaves.
In both cases, a subtree that occurs multiple times in the trie is
translated only once, and the resulting BDD is used for all occurrences
of that subtree. Because of the optimizations in CUDD, the resulting
BDD can have a very different structure than the trie.
The translation for query \emph{path(a,d)} in our example graph is illustrated in Figure~\ref{fig:trie2bdd}; it results in the following script:
\begin{eqnarray*}
n1 & = & ce \wedge ed\\
n2 & = & cd \vee n1\\
n3 & = & ac \wedge n2\\
n4 & = & bc \wedge n2\\
n5 & = & ab \wedge n4\\
n6 & = & n3 \vee n5\\
top & = & n6
\end{eqnarray*}
\begin{figure}
\centering
\subfigure[]{\includegraphics[scale=0.25]{trie2bdd}}
\subfigure[]{\includegraphics[scale=0.25]{trie2bdd_step1}}
\subfigure[]{\includegraphics[scale=0.25]{trie2bdd_step2}}
\subfigure[]{\includegraphics[scale=0.25]{trie2bdd_step3}}
\subfigure[]{\includegraphics[scale=0.25]{trie2bdd_step4}}
\caption{Translating the DNF for \emph{path(a,d)}.}
\label{fig:trie2bdd}
\end{figure}
After CUDD has generated the BDD, the probability of a formula is
calculated by traversing the BDD, in each node summing
the probability of the high and low child, weighted by the probability
of the node's variable being assigned true and false
respectively, cf.~Algorithm~\ref{alg:calcprob}. Intermediate results are cached, and the algorithm has a
time and space complexity linear in the size of the BDD.
\begin{algorithm}[t]
\caption{Calculating the probability of a BDD.}
\label{alg:calcprob}
\begin{algorithmic}
\FUNCTION{\textsc{Probability}(BDD node $n$ )}
\STATE If $n$ is the 1-terminal then return 1
\STATE If $n$ is the 0-terminal then return 0
\STATE let $h$ and $l$ be the high and low children of $n$
\STATE $prob(h) :=$ call \textsc{Probability}($h$)
\STATE $prob(l) :=$ call \textsc{Probability}($l$)
\STATE return $p_n \cdot prob(h) + (1-p_n) \cdot prob(l)$
\ENDFUNCTION
\end{algorithmic}
\end{algorithm}
For illustration, consider again Figure~\ref{fig:BDD}. The algorithm starts by assigning probabilities $0$ and $1$ to the $0$-~and $1$-leaf respectively. The node labeled $ed$ has probability $0\ldotp5\cdot1+0\ldotp5\cdot0=0\ldotp5$, node $ce$ has probability $0\ldotp8\cdot0\ldotp5+0\ldotp2\cdot0=0\ldotp4$; finally, node $cd$, and thus the entire formula, has probability $0\ldotp9\cdot1+0\ldotp1\cdot0\ldotp4=0\ldotp94$.
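In the actual system this traversal happens inside the BDD tool, but Algorithm~\ref{alg:calcprob} is easily sketched in Prolog. In the sketch below, \texttt{bdd\_node/4} (node, probability of its variable being true, high child, low child) is an assumed representation introduced only for illustration, and intermediate results are cached by asserting them:
\begin{verbatim}
:- dynamic cached_prob/2.

probability(one,  1.0) :- !.            % 1-terminal
probability(zero, 0.0) :- !.            % 0-terminal
probability(N, P) :- cached_prob(N, P), !.
probability(N, P) :-
    bdd_node(N, PVar, High, Low),       % assumed node representation
    probability(High, PH),
    probability(Low,  PL),
    P is PVar * PH + (1 - PVar) * PL,
    assert(cached_prob(N, P)).
\end{verbatim}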
\subsection{Monte Carlo}
The Monte Carlo implementation is shown in Algorithm~\ref{alg:mc}. It receives a query $q$, an acceptance threshold $\delta_p$ and a constant $m$ determining the number of samples generated per iteration. At the end of each iteration, it estimates the probability $p$ as the fraction of all programs sampled so far that entail the query, and computes the confidence interval width used in the stopping criterion, as explained in Section~\ref{sec:mc_method}.
\begin{algorithm}[t]
\caption{Monte Carlo Inference.}
\label{alg:mc}
\begin{algorithmic}
\FUNCTION{\textsc{MonteCarlo}(query $q$, interval width $\delta_p$, constant $m$)}
\STATE $c = 0$; $i = 0$; $p = 0$; $\delta = 1$;
\WHILE{$\delta > \delta_p$}
\STATE Generate a sample $P'$;
\IF{$P'\models q$}
\STATE $c:=c+1;$
\ENDIF
\STATE $i:=i+1$;
\IF{$i$ mod $m ==0$}
\STATE $p := c/i$
\STATE $\delta := 2\times\sqrt{\frac{p\cdot(1-p)}{i}}$
\ENDIF
\ENDWHILE
\STATE return $p$
\ENDFUNCTION
\end{algorithmic}
\end{algorithm}
Monte Carlo execution is quite different from the approaches discussed before, as the two main steps are \textbf{(a)} generating a sample program and \textbf{(b)} performing standard refutation on the sample. Thus, instead of combining large numbers of proofs, we need to manipulate large numbers of different programs or samples.
Our first approach was to generate a complete sample and to check for a proof. In order to accelerate the process, proofs were cached in a trie, so that inference on a new sample can be skipped if one of the cached proofs holds in it. Only if no cached proof applies do we call the standard Prolog refutation procedure. Although this approach works rather well for small databases, it does not scale to larger databases, where just generating a new sample requires walking through millions of facts.
We observed that even in large programs proofs are often quite short, i.e., we only need to verify whether facts from a small fragment of the database are in the sample. This suggests that it may be a good idea to take advantage of the independence between facts and generate the sample \emph{lazily}: we verify whether a fact is in the sample only when we need it for a proof. YAP represents samples compactly as a three-valued array with one field for each fact, where $0$ means the fact has not been sampled yet, $1$ that it has been sampled and belongs to the sample, and $2$ that it has been sampled and does not belong to the sample.
In this implementation:
\begin{enumerate}
\item New samples are generated by resetting the sampling array.
\item At every call to \texttt{add\_to\_proof}, given the current ProbLog literal $f$:
\begin{enumerate}
\item if $s[f] == 0 $, $s[f] = sample(f)$;
\item if $s[f] == 1$, succeed;
\item if $s[f] == 2$, fail;
\end{enumerate}
\end{enumerate}
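The test performed in step 2 can be sketched as follows, where \texttt{sample\_get/2} and \texttt{sample\_set/2} stand for the (hypothetical, for the purpose of this sketch) accesses to the sampling array, and \texttt{fact\_prob/2} looks up a fact's probability:
\begin{verbatim}
% Lazily decide whether the fact FactID is in the current sample.
problog_sample_call(FactID) :-
    sample_get(FactID, State),
    (   State == 0 ->                 % not sampled yet
        fact_prob(FactID, P),
        random(R),                    % uniform random float in [0,1)
        (   R < P
        ->  sample_set(FactID, 1)     % in the sample: succeed
        ;   sample_set(FactID, 2),    % not in the sample: fail
            fail
        )
    ;   State == 1                    % 1 succeeds, 2 fails
    ).
\end{verbatim}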
Note that as fact identifiers are used to access the array, the approach cannot directly be used for non-ground facts.
The current implementation of Monte Carlo therefore uses the internal database to store the result of sampling different groundings of such facts.
\section{Experiments}\label{sec:experiments}
We performed experiments with our implementation of ProbLog in the context of the biological network obtained from the Biomine project~\cite{Sevon06}.
We used two subgraphs extracted around three genes known to be connected
to the Alzheimer disease (HGNC numbers 983, 620 and 582) as well as
the full network. The smaller graphs were obtained by querying Biomine for best paths of length 2 (resulting in graph \textsc{Small}) or all paths of length 3 (resulting in graph \textsc{Medium}) starting at one of the three genes. \textsc{Small} contains 79 nodes and 144 edges, \textsc{Medium} 5220 nodes and 11532 edges.
We used \textsc{Small} for a first comparison of our
algorithms on a small scale network where success probabilities can be calculated exactly.
Scalability was evaluated using both \textsc{Medium} and the entire
\textsc{Biomine} network with roughly 1,000,000 nodes and
6,000,000 edges. In all experiments, we queried for the probability that
two of the gene nodes mentioned above are connected, that is, we used queries such as \texttt{path('HGNC\_983','HGNC\_620',Path)}. We used the following definition of an acyclic path in our background knowledge:
\begin{equation}
\begin{array}{lll}
\mathtt{path(X,Y,A)} & \mathtt{:-} & \mathtt{path(X,Y,[X],A)},\\
\mathtt{path(X,X,A,A)\ldotp} & &\\
\mathtt{path(X,Y,A,R)} & \mathtt{:-} & \mathtt{X~\backslash ==~Y}, \\ & & \mathtt{edge(X,Z),} \\ & & \mathtt{absent(Z,A),} \\ & & \mathtt{path(Z,Y,[Z|A],R)\ldotp}\\
\end{array}
\end{equation}
As list operations to check for the absence of a node get expensive for long paths, we consider an alternative definition for use in Monte Carlo. It provides cheaper testing by using the internal database of YAP to store nodes on the current path under key \texttt{visited}:
\begin{equation}
\begin{array}{lll}
\mathtt{memopath(X,Y,A)} & \mathtt{:-} & \mathtt{eraseall(visited)}, \\ && \mathtt{memopath(X,Y,[X],A)\ldotp}\\
\mathtt{memopath(X,X,A,A)\ldotp} & &\\
\mathtt{memopath(X,Y,A,R)} & \mathtt{:-} & \mathtt{X~\backslash ==~Y}, \\ & & \mathtt{edge(X,Z),} \\ & & \mathtt{recordzifnot(visited,Z,\_),}\\
& & \mathtt{memopath(Z,Y,[Z|A],R)\ldotp}\\
\end{array}
\end{equation}
Finally, to assess performance on the full network for queries with smaller probabilities, we use the following definition of paths with limited length:
\begin{equation}
\begin{array}{lll}
\mathtt{lenpath(N,X,Y,Path)} & \mathtt{ :-} & \mathtt{lenpath(N,X,Y,[X],Path)\ldotp}\\
\mathtt{lenpath(N,X,X,A,A) } & \mathtt{ :-} & \mathtt{ N >= 0\ldotp}\\
\mathtt{lenpath(N,X,Y,A,P) } & \mathtt{ :-} & \mathtt{ X \backslash == Y},\\
&& \mathtt{ N > 0},\\
&& \mathtt{ edge(X,Z)},\\
&& \mathtt{ absent(Z,A)},\\
&& \mathtt{ NN\ is\ N-1},\\
&& \mathtt{ lenpath(NN,Z,Y,[Z|A],P)\ldotp}
\end{array}
\end{equation}
All experiments were performed on a Core 2 Duo 2.4 GHz 4 GB machine running Linux. All times reported are in \texttt{msec} and do not include the time to load the graph into Prolog. The latter takes 20, 200 and 78140 \texttt{msec} for \textsc{Small}, \textsc{Medium} and \textsc{Biomine} respectively. Furthermore, as YAP indexes the database at query time, we query for the explanation probability of \texttt{path('HGNC\_620','HGNC\_582',Path)} before starting runtime measurements. This takes 0, 50 and 25900 \texttt{msec} for \textsc{Small}, \textsc{Medium} and \textsc{Biomine} respectively.
We report $T_P$, the time spent by
ProbLog to search for proofs, as well as $T_B$, the time spent to execute BDD programs (whenever meaningful). We also report the
estimated probability $P$. For approximate inference using bounds, we
report exact intervals for $P$, and also include the number $n$ of
BDDs constructed. We set both the initial threshold and the shrinking
factor to $0\ldotp5$.
We computed $k$-probability for $k=1,2,\ldots,1024$. In the bounding algorithms, the error interval ranged
between 10\% and 1\%. Monte Carlo recalculates confidence intervals after $m=1000$ samples. We also report the number $S$ of samples used.
\paragraph{Small Sized Sample}
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
path &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
{\bf k}&
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} \\
\hline
1 & 0 & 13 & 0.07 & 0 & 7 & 0.05 & 0 & 26 & 0.66\\
2 & 0 & 12 & 0.08 & 0 & 6 & 0.05 & 0 & 6 & 0.66\\
4 & 0 & 12 & 0.10 & 10 & 6 & 0.06 & 0 & 6 & 0.86\\
8 & 10 & 12 & 0.11 & 0 & 6 & 0.06 & 0 & 6 & 0.92\\
16 & 0 & 12 & 0.11 & 10 & 6 & 0.06 & 0 & 6 & 0.92\\
32 & 20 & 34 & 0.11 & 10 & 17 & 0.07 & 0 & 7 & 0.96\\
64 & 20 & 74 & 0.11 & 10 & 46 & 0.09 & 10 & 38 & 0.99\\
128 & 50 & 121 & 0.11 & 40 & 161 & 0.10 & 20 & 257 & 1.00\\
256 & 140 & 104 & 0.11 & 80 & 215 & 0.10 & 90 & 246 & 1.00\\
512 & 450 & 118 & 0.11 & 370 & 455 & 0.11 & 230 & 345 & 1.00\\
1024 & 1310 & 537 & 0.11 & 950 & 494 & 0.11 & 920 & 237 & 1.00\\\hline
\textbf{exact} & 670 & 450 & 0.11 & 8060 & 659 & 0.11 & 630 & 721 & 1.00\\\hline
\end{tabular}
\caption{$k$-probability on \textsc{Small}. }
\label{tab:1}
\end{table}
We first compared our algorithms on \textsc{Small}.
Table~\ref{tab:1} shows the results for $k$-probability and exact inference. Note
that nodes 620 and 582 are close to each other, whereas node 983 is farther away. Therefore, connections involving the latter are less likely.
In this graph, we obtained good approximations using a small fraction of proofs (the queries have 13136, 155695 and 16048 proofs respectively).
Our results also show a significant
increase in running times as ProbLog explores more paths in the graph,
both within the Prolog code and within the BDD code. The BDD running
times can vary widely; we may actually have large running times for
smaller BDDs, depending on BDD structure. However, using SimpleCUDD instead of the C++ interface used in~\cite{Kimmig08} typically decreases BDD time by at least one or two orders of magnitude.
Table~\ref{tab:2} gives corresponding results for bounded approximation. The
algorithm converges quickly, as few proofs are needed and BDDs remain small. Note however that exact inference is competitive for this problem size. Moreover, we observe large speedups compared to the implementation with meta-interpreters used in~\cite{DeRaedt07}, where total runtimes to reach $\delta=0\ldotp01$ for these queries were 46234, 206400 and 307966 \texttt{msec} respectively.
Table~\ref{tab:3} shows the performance of the Monte Carlo estimator. On \textsc{Small}, Monte Carlo is the fastest approach. Already within the first 1000 samples a good approximation is obtained.
The experiments on \textsc{Small} thus confirm that the implementation on top of YAP-Prolog enables efficient probabilistic inference on small sized graphs.
\begin{table}[t]
\begin{tabular}{|c|rr|rr|rr|}
\hline
path &
\multicolumn{2}{c|}{$983-620$} &
\multicolumn{2}{c|}{$983-582$} &
\multicolumn{2}{c|}{$620-582$} \\
$\delta$ &
\multicolumn{1}{c}{$T_P$~$T_B$~n} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$~~~$T_B$~~~n} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$~~$T_B$~~n} & \multicolumn{1}{c|}{$P$} \\
\hline
0.10 & 0~~48~~4 & [0.07,0.12] & 10~~~~~74~~~~6 & [0.06,0.11] & 0~~~~~25~~2 & [0.91,1.00] \\
0.05 & 0~~71~~6 & [0.07,0.11] & 0~~~~~75~~~~6 & [0.06,0.11] & 0~~~486~~4 & [0.98,1.00] \\
0.01 & 0~~83~~7 & [0.11,0.11] & 140~~3364~~10 & [0.10,0.11] & 60~~1886~~6 & [1.00,1.00] \\\hline
\end{tabular}
\caption{Inference using bounds on \textsc{Small}. }
\label{tab:2}
\end{table}
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
path &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
$\delta$ &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} \\
\hline
0.10 & 1000 & 10 & 0.11 & 1000 & 10 & 0.11 & 1000 & 30 & 1.00\\
0.05 & 1000 & 10 & 0.11 & 1000 & 10 & 0.10 & 1000 & 20 & 1.00\\
0.01 & 16000 & 130 & 0.11 & 16000 & 170 & 0.11 & 1000 & 30 & 1.00\\\hline
\end{tabular}
\caption{Monte Carlo Inference on \textsc{Small}. }
\label{tab:3}
\end{table}
\paragraph{Medium Sized Sample}
For graph \textsc{Medium} with around 11000 edges, exact inference is no longer feasible. Table~\ref{tab:1a}
again shows results for the $k$-probability. Comparing these
results with the corresponding values from Table~\ref{tab:1}, we
observe that the estimated probability is higher now: this is natural,
as the graph both has more nodes and is more connected, leading to many more possible explanations. This also explains the increase in running times. Approximate
inference using bounds only reached loose bounds (with differences $>0\ldotp 2$) on queries involving node \texttt{'HGNC\_983'}, as upper bound formulae with more than 10 million conjunctions were encountered, which could not be processed.
The Monte Carlo estimator using the standard definition of \texttt{path/3} on \textsc{Medium} did not complete the first $1000$ samples within one hour. A detailed analysis shows
that this is caused by some queries backtracking too heavily. Table~\ref{tab:3a} therefore reports results using the memorising version \texttt{memopath/3}. With this improved definition, Monte Carlo performs well: it obtains a good approximation in a few seconds. Requiring tighter bounds however can increase runtimes significantly.
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
path &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
{\bf k} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} \\
\hline
1 & 180 & 6 & 0.33 & 1620 & 6 & 0.30 & 10 & 6 & 0.92 \\
2 & 180 & 6 & 0.33 & 1620 & 6 & 0.30 & 20 & 6 & 0.92 \\
4 & 180 & 6 & 0.33 & 1630 & 6 & 0.30 & 10 & 6 & 0.92 \\
8 & 220 & 6 & 0.33 & 1630 & 6 & 0.30 & 20 & 6 & 0.92 \\
16 & 260 & 6 & 0.33 & 1660 & 6 & 0.30 & 30 & 6 & 0.99 \\
32 & 710 & 6 & 0.40 & 1710 & 7 & 0.30 & 110 & 6 & 1.00 \\
64 & 1540 & 7 & 0.42 & 1910 & 6 & 0.30 & 200 & 6 & 1.00 \\
128 & 1680 & 6 & 0.42 & 2230 & 6 & 0.30 & 240 & 9 & 1.00 \\
256 & 2190 & 7 & 0.55 & 2720 & 6 & 0.49 & 290 & 196 & 1.00 \\
512 & 2650 & 7 & 0.64 & 3730 & 7 & 0.53 & 1310 & 327 & 1.00 \\
1024 & 8100 & 41 & 0.70 & 5080 & 8 & 0.56 & 3070 & 1357 & 1.00 \\
\hline
\end{tabular}
\caption{$k$-probability on \textsc{Medium}. }
\label{tab:1a}
\end{table}
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
memo &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
$\delta$ &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} \\
\hline
0.10 & 1000 & 1180 & 0.78 & 1000 & 2130 & 0.76 & 1000 & 1640 & 1.00\\
0.05 & 2000 & 2320 & 0.77 & 2000 & 4230 & 0.74 & 1000 & 1640 & 1.00\\
0.01 & 29000 & 33220 & 0.77 & 29000 & 61140 & 0.77 & 1000 & 1670 & 1.00\\\hline
\end{tabular}
\caption{Monte Carlo Inference using \texttt{memopath/3} on \textsc{Medium}. }
\label{tab:3a}
\end{table}
\paragraph{Biomine Database}
The Biomine Database covers hundreds of thousands of entities and
millions of links.
On \textsc{Biomine}, we therefore restricted our experiments to the approximations given by
$k$-probability and Monte Carlo. Given the results on \textsc{Medium}, we directly used \texttt{memopath/3} for Monte Carlo. Tables~\ref{tab:1c} and~\ref{tab:3b} show the results on the large network.
We observe that on this large graph, the number of possible paths is tremendous, which implies success probabilities practically equal to 1. Still, we observe that
ProbLog's branch-and-bound search for the best solutions performs reasonably well even on a network of this size. However, runtimes for obtaining tight confidence intervals with Monte Carlo explode quickly even with the improved path definition.
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
path &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
{\bf k} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c}{$T_B$} & \multicolumn{1}{c|}{$P$} \\
\hline
1 & 5,760 & 49 & 0.16 & 8,910 & 48 & 0.11 & 10 & 48 & 0.59 \\
2 & 5,800 & 48 & 0.16 & 10,340 & 48 & 0.17 & 180 & 48 & 0.63 \\
4 & 6,200 & 48 & 0.16 & 13,640 & 48 & 0.28 & 360 & 48 & 0.65 \\
8 & 7,480 & 48 & 0.16 & 15,550 & 49 & 0.38 & 500 & 48 & 0.66 \\
16 & 11,470 & 49 & 0.50 & 58,050 & 49 & 0.53 & 630 & 48 & 0.92 \\
32 & 15,100 & 49 & 0.57 & 106,300 & 49 & 0.56 & 2,220 & 167 & 0.95 \\
64 & 53,760 & 84 & 0.80 & 146,380 & 101 & 0.65 & 3,690 & 167 & 0.95 \\
128 & 71,560 & 126 & 0.88 & 230,290 & 354 & 0.76 & 7,360 & 369 & 0.98 \\
256 & 138,300 & 277 & 0.95 & 336,410 & 520 & 0.85 & 13,520 & 1,106 & 1.00 \\
512 & 242,210 & 730 & 0.98 & 501,870 & 2,744 & 0.88 & 23,910 & 3,444 & 1.00 \\
1024 & 364,490 & 10,597 & 0.99 & 1,809,680 & 100,468 & 0.93 & 146,890 & 10,675 & 1.00 \\
\hline
\end{tabular}
\caption{$k$-probability on \textsc{Biomine}. }
\label{tab:1c}
\end{table}
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
memo &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
$\delta$ &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T_P$} & \multicolumn{1}{c|}{$P$} \\
\hline
0.10 & 1000 & 100,700 & 1.00 & 1000 & 1,656,660 & 1.00 & 1000 & 1,696,420 & 1.00\\
0.05 & 1000 & 100,230 & 1.00 & 1000 & 1,671,880 & 1.00 & 1000 & 1,690,830 & 1.00\\
0.01 & 1000 & 93,120 & 1.00 & 1000 & 1,710,200 & 1.00 & 1000 & 1,637,320 & 1.00\\\hline
\end{tabular}
\caption{Monte Carlo Inference using \texttt{memopath/3} on \textsc{Biomine}. }
\label{tab:3b}
\end{table}
Given that sampling a program that does not entail the query is extremely unlikely for the setting considered so far, we performed an additional experiment on \textsc{Biomine}, where we restrict the number of edges on the path connecting two nodes to a maximum of 2 or 3. Results are reported in Table~\ref{tab:shortpath}. As none of the resulting queries have more than 50 proofs, exact inference is much faster than Monte Carlo, which needs a higher number of samples to reliably estimate probabilities that are not close to $1$.
\begin{table}[t]
\begin{tabular}{|c|rrr|rrr|rrr|}
\hline
len &
\multicolumn{3}{c|}{$983-620$} &
\multicolumn{3}{c|}{$983-582$} &
\multicolumn{3}{c|}{$620-582$} \\
$\delta$ &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T$} & \multicolumn{1}{c|}{$P$} &
\multicolumn{1}{c}{$S$} & \multicolumn{1}{c}{$T$} & \multicolumn{1}{c|}{$P$} \\
\hline
0.10 & 1000 & 21,400 & 0.04 & 1000 & 18,720 & 0.11 & 1000 & 19,150 & 0.58\\
0.05 & 1000 & 19,770 & 0.05 & 1000 & 20,980 & 0.10 & 2000 & 35,100 & 0.55\\
0.01 & 6000 & 112,740 & 0.04 & 16000 & 307,520 & 0.11 & 40000 & 764,700 & 0.55\\\hline
exact & - & 477 & 0.04 & - & 456 & 0.11 & - & 581 & 0.55 \\\hline \hline
0.10 & 1000 & 106,730 & 0.14 & 1000 & 105,350 & 0.33 & 1000 & 45,400 & 0.96\\
0.05 & 1000 & 107,920 & 0.14 & 2000 & 198,930 & 0.34 & 1000 & 49,950 & 0.96\\
0.01 & 19000 &2,065,030 & 0.14 & 37000 & 3,828,520 & 0.35 & 6000 & 282,400 & 0.96\\\hline
exact & - & 9,413 & 0.14 & - & 9,485 & 0.35 & - & 15,806 & 0.96\\\hline
\end{tabular}
\caption{Monte Carlo inference for different values of $\delta$ and exact inference using \texttt{lenpath/4} with length at most $2$ (top) or $3$ (bottom) on \textsc{Biomine}. For exact inference, runtimes include both Prolog and BDD time.}
\label{tab:shortpath}
\end{table}
Altogether, the experiments confirm that our implementation provides efficient inference algorithms for ProbLog that scale to large databases. Furthermore, compared to the original implementation of~\cite{DeRaedt07}, we obtain large speedups in both the Prolog and the BDD part, thereby opening new perspectives for applications of ProbLog.
\section{Conclusions}\label{sec:conclusion}
ProbLog is a simple but elegant probabilistic logic programming
language that allows one to explicitly represent uncertainty by means of probabilistic facts denoting independent random variables. The language
is a simple and natural extension of the logic programming
language Prolog.
We presented an efficient
implementation of the ProbLog language on top of the YAP-Prolog system
that is designed to scale to large sized problems. We showed that
ProbLog can be used to obtain both explanation and (approximations
of) success
probabilities for queries on a large database. To the best of our
knowledge, ProbLog is the first example of a probabilistic logic
programming system that can execute queries on such large databases.
Due to the use of BDDs for addressing the disjoint-sum-problem, the initial implementation of ProbLog used in~\cite{DeRaedt07} already
scaled up much better than alternative implementations such as Fuhr's
pD engine HySpirit~\cite{Fuhr00}.
The tight integration in YAP-Prolog presented here leads to further speedups
in runtime of several orders of magnitude.
Although we focused on connectivity queries and Biomine in this
work, similar problems are found across many domains; we
believe that the techniques presented apply to a wide
variety of queries and databases because
ProbLog provides a clean separation between background knowledge and
what is specific to the engine. As shown for Monte Carlo inference,
such an interface can be very useful to improve performance
as it allows incremental refinement of background knowledge,
e.g., graph procedures. Initial experiments with Dijkstra's algorithm
for finding the explanation probability are very promising.
ProbLog is closely related to some alternative formalisms such as
PHA and ICL~\cite{Poole:93,Poole00}, pD~\cite{Fuhr00}
and PRISM~\cite{SatoKameya:01} as their semantics are all based on
Sato's distribution semantics even though there exist also some
subtle differences.
However, ProbLog is -- to the best of the authors' knowledge --
the first implementation that tightly integrates Sato's original
distribution semantics~\cite{Sato:95} in a state-of-the-art Prolog
system without making additional restrictions (such as the exclusive
explanation assumption made in PHA and PRISM).
Like ProbLog, both PRISM and the ICL implementation AILog2 use a two-step approach to inference, where proofs are collected in the first phase, and probabilities are calculated once all proofs are known. AILog2 is a meta-interpreter implemented in SWI-Prolog for didactical purposes, where the disjoint-sum-problem is tackled using a symbolic disjoining technique~\cite{Poole00}. PRISM, built on top of B-Prolog, requires programs to be written such that alternative explanations for queries are mutually exclusive. PRISM uses a meta-interpreter to collect proofs in a hierarchical data structure called an explanation graph. As proofs are mutually exclusive, the explanation graph directly mirrors the sum-of-products structure of probability calculation~\cite{SatoKameya:01}.
ProbLog is the first probabilistic logic programming system using BDDs as a basic data structure for probability calculation, a principle that
is receiving increasing interest in the probabilistic logic learning community, cf.~for instance~\cite{Riguzzi,sato:ilp08}.
Furthermore,
as compared to SLPs~\cite{Muggleton96}, CLP($\cal BN$) \cite{Costa03:uai}, and BLPs~\cite{Kersting08}, ProbLog is a much
simpler and in a sense more primitive probabilistic programming
language. Therefore, the relationship between probabilistic logic
programming and ProbLog is, in a sense, analogous to that between
logic programming and Prolog. From this perspective, it is our hope
and goal to further develop ProbLog so that it can be used as a
general purpose programming language with an efficient implementation
for use in statistical relational learning~\cite{Getoor07} and
probabilistic programming~\cite{DeRaedt-PILPbook}.
One important use of such a probabilistic programming language is as a
target language in which other formalisms can be efficiently compiled.
For instance, it has already been shown that CP-logic~\cite{Vennekens}, a recent elegant probabilistic knowledge
representation language based on a probabilistic extension of clausal
logic, can be compiled into ProbLog~\cite{Riguzzi} and it is well-known that SLPs~\cite{Muggleton96} can be compiled into Sato's PRISM,
which is closely related to ProbLog. Further evidence is provided
in~\cite{DeRaedt-NIPSWS08}.
Another, related use of ProbLog is as a vehicle for developing
learning and mining algorithms and tools~\cite{Kimmig07,DeRaedt08MLJ,Gutmann08,Kimmig09,DeRaedt-IQTechReport}.
In the context of probabilistic representations~\cite{Getoor07,DeRaedt-PILPbook}, one typically distinguishes
two types of learning: parameter estimation
and structure learning. In parameter estimation in the context of
ProbLog and PRISM, one starts from a set of queries and the logical
part of the program and the problem is to find good estimates of the
parameter values, that is, the probabilities of the probabilistic
facts in the program. \cite{Gutmann08} introduces a gradient descent approach to parameter learning for ProbLog that extends the BDD-based methods discussed here.
In structure learning, one also starts from
queries but has to find the logical part of the program as well.
Structure learning is therefore closely related to inductive logic
programming.
The limiting factor in statistical relational learning and
probabilistic logic learning is often the efficiency of inference, as learning requires
repeated computation of the probabilities of many queries.
Therefore, improvements on inference in probabilistic programming
implementations have an immediate effect on learning.
The above compilation approach also raises the interesting and largely
open question whether
not only inference problems for alternative formalisms can be compiled
into ProbLog but whether it is also possible to compile learning
problems for these logics
into learning problems for ProbLog.
Finally, since ProbLog, unlike
PRISM and PHA, deals with the disjoint-sum-problem, it is interesting
to study how program transformation and analysis techniques could be
used to optimize ProbLog programs, by detecting and taking into
account situations where some conjunctions are disjoint. At the same
time, we currently investigate how
tabling, one of the keys to PRISM's efficiency, can be incorporated in ProbLog~\cite{Mantadelis09,Kimmig-SRL09}.
\subsubsection*{Acknowledgements}
We would like to thank Hannu Toivonen, Bernd Gutmann and Kristian Kersting for their many contributions to ProbLog, the Biomine team for the application, and Theofrastos Mantadelis for the development of SimpleCUDD.
This work is partially supported by the GOA project 2008/08 Probabilistic Logic Learning.
Angelika Kimmig is supported by the Research
Foundation-Flanders (FWO-Vlaanderen).
V\'{\i}tor Santos Costa and
Ricardo Rocha are partially supported by the research projects
STAMPA (PTDC/EIA/67738/2006) and JEDI (PTDC/ EIA/66924/2006) and by
Funda\c{c}\~ao para a Ci\^encia e Tecnologia.
\label{sec:introduction}
In our previous work \cite{OblomkovRozansky16},\cite{OblomkovRozansky17},
\cite{OblomkovRozansky18}, \cite{OblomkovRozansky18a} we developed and explored a geometric construction for a
triply-graded link homology which categorifies the HOMFLYPT polynomial. The initial construction in \cite{OblomkovRozansky16}
is based on the homomorphism from the braid group to a special category of matrix factorizations:
\begin{equation}\label{eq:MFst}
\mathfrak{Br}_n\to \textup{MF}^{ \mathrm{st}}:=\textup{MF}_{\GL_n}((\gl_n\times \mathbb{C}^n \times \mathrm{T}^*\mathrm{Fl}\times \mathrm{T}^*\mathrm{Fl})^{ \mathrm{st}},W)\footnote{In the original
paper we worked with the affine version of this category of matrix factorizations. },\quad W=\mu_1-\mu_2,
\end{equation}
where \(\mathrm{Fl}\) is the flag variety, \(\mu_1,\mu_2\) are the moment maps of the $\GL_n$ action on the two copies of $ \mathrm{T}^* \mathrm{Fl}$, and the index `st' stands for the stability
condition.
The category of matrix factorizations in \eqref{eq:MFst} is
motivated by the 3D TQFT\ from the papers \cite{KapustinRozansky10},\cite{KapustinSaulinaRozansky09},
called the KRS model or, equivalently, a 3d topological gauged B-model. Here we make the connection between the KRS model and the
HOMFLYPT homology more precise and also provide a TQFT-style explanation for the conjectures from \cite{OblomkovRozansky18a}
for the Drinfeld center of the category \(\textup{MF}^{ \mathrm{st}}\).
In this paper we describe two families of specific KRS models indexed by a non-negative integer $n$. Both of them have
\(S^2\times\mathbb{R}\) as space-time. The target spaces for these families are \( \gl_n/\GL_n\) and \( \gl_n\times \mathbb{C}^n/\GL_n\). We call the second family { \it framed } and the first { \it unframed}. The total space-time\ \(S^2\times\mathbb{R}\)
can have `defect surfaces' separating different models, that is, the connected components of the complement of the defect surfaces are labeled by integers $n$. The defects can intersect along curves, and the curves can intersect at points. In the framed case the defects are oriented,
and in both settings the curves of intersection of the defects carry signs.
The simplest case of the defect picture is when the defect is equal to \(\mathrm{Def}=C\times \mathbb{R}^1\subset S^2\times \mathbb{R}\) where \(C\) is an immersed
curve (possibly with many connected components). In this case we define the partition-function evaluation \( \mathsf{Z}^\bullet\) where
\(\bullet\) is \(\emptyset\) in the unframed case and
\(\bullet= \mathrm{f}\) in the framed case. In particular, we can recover the
above category of matrix factorizations:
\begin{theorem}\label{thm:MF}
Let \(S^1\times 0\subset S^2\times \mathbb{R} \) be an embedded circle such that it intersects the defect \(\mathrm{Def}=C\times \mathbb{R}^1\) transversally at \(2n\) points and the labels of the connected components of \(S^1\setminus S^1\cap \mathrm{Def}\) are \( 0,1,2,\dots,n,n-1,\dots,2,1\). Then
\[ \mathsf{Z}(S^1)=\textup{MF}_{\GL_n}(\gl_n \times \mathrm{T}^*\mathrm{Fl}\times \mathrm{T}^*\mathrm{Fl},W).\]
Moreover, if the first \(n\) intersection points of \(S^1\cap \mathrm{Def}\) are oriented up and the rest down then
\begin{equation}
\label{eq:catzso}
\sfZ^{\tfrm}(S^1)=\textup{MF}_{\GL_n}\bigl((\gl_n \times\mathbb{C}^n \times\mathrm{T}^*\mathrm{Fl}\times \mathrm{T}^*\mathrm{Fl})^{ \mathrm{st}},W\bigr).
\end{equation}
\end{theorem}
In our first paper \cite{OblomkovRozansky16} we defined the triply-graded homology of the
closure \(L(\beta)\) of the braid \(\beta\in\mathfrak{Br}_n\) as the hypercohomology:
\begin{equation}\label{eq:old}
\mathrm{HHH}(\beta)=\mathbb{H}\bigl(\mathcal{E}xt(\Phi(\beta),\Phi(1)\otimes \Lambda^\bullet \mathcal{B})\bigr),
\end{equation}
where \(\mathcal{B}\) is the tautological bundle which we explain in the main body of the paper.
In~\cite{OblomkovRozansky18a} we constructed the pair of adjoint functors:
\begin{equation}
\label{eq:mnchcchf}
\begin{tikzcd}
\textup{MF}^{\st}\arrow[rr,bend left,"\mathsf{CH}^{\st}_{\loc}"]&&\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2))\arrow[ll,bend left,"\mathrm{HC}^{\st}_{\loc}",pos=0.435]
\end{tikzcd},
\end{equation}
where \( \mathrm{Hilb}\) is the Hilbert scheme of \(n\) points on \(\mathbb{C}^2\), while \(\rmD^{\tper}( \mathrm{Hilb})\) is the derived category of
two-periodic \(\mathbb{T}_{q,t}\)-equivariant complexes on the Hilbert scheme. We also showed that
\begin{equation}\label{eq:Hilb}
\mathrm{HHH}(\beta)=\mathrm{RHom}(\mathsf{CH}^{\st}_{\loc}(\beta), \Lambda^\bullet \mathcal{B}).
\end{equation}
The TQFT\ picture gives a natural interpretation of the isomorphism~\eqref{eq:Hilb} as the result of gluing the same disc in two different ways. The appearance of \( \mathrm{Hilb}_n(\mathbb{C}^2)\) is due to the following:
\begin{theorem}\label{thm:DrZ}
Let \(S^1\times 0\subset S^2\times \mathbb{R} \) be an embedded circle which does not intersect the defect \(\mathrm{Def}=C\times \mathbb{R}^1\) and lies
in the connected component of the complement of the defect with the label \(n\). Then
\[\sfZ^{\tfrm}(S^1)=\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2)).\]
Moreover, if \(D\) is a disc in the complement of the defect, such that \(\partial D=S^1\), then
\[\sfZ^{\tfrm}(D)=\mathcal{O}\in \rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2)).\]
\end{theorem}
To explain the appearance of the exterior powers of \(\mathcal{B}\)
we introduce a special line of defect in our theory: \(0\times \mathbb{R}\subset S^2\times \mathbb{R}\), \(S^2=\mathbb{R}^2\cup \infty\).
We assume that this line of defect does not intersect the surface of defect.
For a small disc intersecting this line of defect we have:
\[\sfZ^{\tfrm}(D)=\Lambda^\bullet\mathcal{B}\in \rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2)). \]
The link homology emerges as the vector space associated to a disc which intersects defect surfaces and a defect line:
\begin{theorem}
Suppose that the curve \(C\subset S^2=\mathbb{R}^2\cup \infty\) in the defect \(\mathrm{Def}=C\times \mathbb{R}\) is the
picture of the natural closure of the braid \(\beta\in \mathfrak{Br}_n\), the closure goes around the
line of defect as in Figure 1. Let us also assume that the point at infinity has the label \(0\),
the connected component of \(S^2\times \mathbb{R}\setminus \mathrm{Def}\) has label \(n\) and the labels change by
\(1\) as we cross the surface of defect. Then
\[\sfZ^{\tfrm}(S^2)=\mathrm{HHH}(\beta).\]
\end{theorem}
Thus the formulas \eqref{eq:old} and \eqref{eq:Hilb} correspond to two ways to present \(S^2\) as a gluing of two
\(D^2\) along their common boundary $S^1$. The formula \eqref{eq:old} is given by cutting along \(S^1\) with \(S^1\) as in theorem~\ref{thm:MF} and
the formula \eqref{eq:Hilb} is given by cutting along the \(S^1\) that is the boundary of a tubular neighborhood of the line of defect.
Note that theorem~\ref{thm:DrZ} interprets \(\rmD^{\tper} ( \mathrm{Hilb}_n(\mathbb{C}^2))\) as the category of endomorphisms of the identity functor in
the two-category \(\sfZ^{\tfrm}(\mathsf{p}_n)\):
\[ \mathbb{H}om(\mathbb{I}d,\mathbb{I}d)=\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2))\]
where the point \(\mathsf{p}_n\) lies in the component of the complement of the defect with the label \(n\)
and \(\mathbb{I}d\) is the identity endofunctor of the two-category \(\sfZ^{\tfrm}(\mathsf{p}_n)\).
Thus it is reasonable to
call \(\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2))\) the Drinfeld center of two-category \(\sfZ^{\tfrm}(\mathsf{p}_n)\). Since it is not
common to work with the Drinfeld centers of two-categories, we spell out the expected property of such a
center in conjecture~\ref{conj:Z}.
In section~\ref{sec:algebraic-model-krs} we recall some basics of the constructions in~\cite{KapustinRozansky10}.
In section~\ref{sec:stacks-our-main} we explain how our particular example of TQFT\ fits into the setting of
\cite{KapustinRozansky10}. In section \ref{sec:defects-knot-invar} we
construct the partition-function \(\sfZ^{\tfrm}\) and prove the results that we mentioned in the introduction. We also discuss
the Drinfeld center subtleties in subsection~\ref{sec:value-closed-curves}. In the final section we discuss further directions and possible generalizations of the results in this paper. In particular, we outline the first step of a program that
would relate our theory to the results of \cite{GorskyNegutRasmussen16}, \cite{GorskyHogancamp17} where Soergel bimodule
HOMFLYPT homology is explored from the perspective of the geometry of the flag Hilbert schemes.
{\bf Acknowledgments}
We would like to thank Dmitry Arinkin, Tudor Dimofte, Eugene Gorsky, Sergey Gukov, Tina Kanstrup, Ivan Losev, Roman Bezrukavnikov and Andrei Negu{\c t} for useful discussions.
The work of A.O. was supported in part by the NSF CAREER grant DMS-1352398, NSF FRG grant DMS-1760373 and Simons Fellowship.
The work of L.R. was supported in part by the NSF grant DMS-1108727.
\section{Algebraic model for KRS model}
\label{sec:algebraic-model-krs}
This section provides a motivation for the definition of our main
working category. We do not provide details for the constructions in
this section. The main two-category that we work with appears in section~\ref{sec:stacks-our-main} and the details of the main
working category are spelled out in this section.
\subsection{Large three-category}
\label{sec:large-three-category}
We introduce a three-category $\thc_{\mathrm{sym}}$. Its objects $\mathrm{Obj}(\thc_{\mathrm{sym}})$ are holomorphic symplectic manifolds.
The morphisms between
two such manifolds $X,Y\in \mathrm{Obj}(\thc_{\mathrm{sym}})$ form a 2-category $\Hom(X,Y)$
whose objects are fibrations with lagrangian bases:
$$(F,L,f\colon F\to L),\quad L\subset X\times Y \mbox{ is a Lagrangian subvariety.}$$
The composition of $(F,L,f)\in\Hom(X,Y)$ and $(G,L',g)\in\Hom(Y,W)$ is defined to be $(H,L'',h)$ where
$$ H:=(F\times W)\times_{Y} (X\times G),$$
while $h\colon H\to X\times W$ is the natural projection and $L'':=h(H)$.
The composition is not always defined since \(L''\) is not always Lagrangian.
The morphisms between the objects $(F,L,f),(F',L',f')\in \Hom(X,Y)$ are defined as follows:
$$\Hom((F,L,f),(F',L',f')):= \DG^{\tcoh}(F\times_{X\times Y} F').$$
This is a DG category, hence the morphisms between the objects in this category are complexes of vector spaces.
The homomorphisms would then form vector spaces, although
it is unclear whether it is possible to work in this framework.
Let us also remark that the category $\thc_{\mathrm{sym}}$ contains a final object $pt$ which is just a point.
Thus for every $X\in \mathrm{Obj}(\thc_{\mathrm{sym}})$ there is a related two-category $\ddot{\textup{Cat}}(X):=\Hom(pt,X)$.
\subsection{Small three-category}
\label{sec:small-three-category}
The three-category from the previous subsection is very rich but working with this category requires
the machinery of the derived algebraic geometry. To make our life simpler for us we will work with
smaller category $\thc_{\mathrm{man}}$ that has fewer objects and fewer morphisms.
First we define a slightly bigger category \(\thc_{\mathrm{man}}\).
The objects of the category $\thc_{\mathrm{man}}$ are algebraic manifolds.
The 2-category of morphisms $\mathbb{H}\mathrm{om}(X,Y)$ between two objects $X,Y\in \mathrm{Obj}(\thc_{\mathrm{man}})$ has objects
\[(Z,w),\quad Z \mbox{ is algebraic manifold}, \quad w\in\mathbb{C}[ X\times Z\times Y].\]
For $X,Y,W\in \mathrm{Obj}(\thc_{\mathrm{man}})$ and $(Z,w)\in \mathbb{H}\mathrm{om}(X,Y)$, $(Z',w')\in \mathbb{H}\mathrm{om}(Y,W)$ the composition is defined by:
$$(Z,w)\circ (Z',w')=(Z\times Y\times Z',w'-w)\in \mathbb{H}\mathrm{om}(X,W).$$
The category of morphisms $\Hom((Z,w),(Z',w'))$ between the objects $(Z,w),(Z',w')\in \mathbb{H}\mathrm{om}(X,Y)$ is a triangulated one-category
of matrix factorizations
\[\Hom((Z,w),(Z',w'))=\textup{MF}(X\times Z\times Z'\times Y,w'-w).\]
The objects of the homotopy category \(\textup{MF}(Z,w)\), \(w\in\mathbb{C}[Z]\) are pairs
\[(M,D),\quad M=M_0\oplus M_1,\quad D\in \Hom_{\mathbb{C}[Z]}(M,M), \quad D^2=w,\]
where \(D\) is a \(\mathbb{Z}_2\)-graded morphism: \(D(M_i)\subseteq M_{i+1}\). If we think of the
matrix factorizations as two periodic curved complexes, then the homotopy is defined in exactly the same way as for usual complexes.
For two objects $\mathcal{F}=(M,D),\;\mathcal{G}=(M',D')\in \textup{MF}(Z,w)$ the space of morphisms
is defined by:
\[\mathcal{H}om(\mathcal{F},\mathcal{G})=\{\phi\in \Hom_{\mathbb{C}[Z]}(M,M')\;|\;D\circ\phi=\phi\circ D'\}.\]
We also define \(\rmD^{\tper}(Z)\) to be a derived category of the two-periodic complexes of
\(\mathbb{C}[Z]\)-modules. Given an element \(\mathcal{F}=(M,D)\in \textup{MF}(Z,w)\), by inverting the sign of the component of the differential
from \(M_0\) to \(M_1\) we obtain an element of \(\textup{MF}(Z,-w)\) which we denote \(\mathcal{F}^*\). Since the potentials of the matrix factorizations
add under tensor product, we have:
\[\mathcal{E}\mathrm{xt}(\mathcal{G},\mathcal{F})=\mathcal{G}\otimes \mathcal{F}^*\in \rmD^{\tper}(Z).\]
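For orientation we recall the most basic example, which is standard and not specific to our setting: the Koszul matrix factorization of the potential \(w=xy\) on \(Z=\mathbb{C}^2\),
\[M=M_0\oplus M_1=\mathbb{C}[x,y]\oplus\mathbb{C}[x,y],\qquad
D=\begin{pmatrix}0&x\\ y&0\end{pmatrix},\qquad D^2=xy\cdot\mathrm{Id}.\]
It may also be viewed as the image of \(\mathbb{C}\in\textup{MF}(\mathrm{pt},0)\) under the Knorrer functor \(\mathrm{KN}_V\) with \(V=\mathbb{C}\), discussed in the next paragraph.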
To define a smaller category \(\dddot{\mathrm{Cat}}'_{man}\) we need to define an equivalence relation on the set of morphisms, and we use Knorrer periodicity \cite{Knorrer}
in this construction. Namely, if \(V\to Z\) is a finite rank vector bundle over \(Z\) and \(V^*\) is its dual then there is a canonical bilinear function
\[\mathsf{Q}_V\in \mathbb{C}[V\times V^*]. \]
For any \(W\in \mathbb{C}[Z]\) there is an invertible functor \cite{Knorrer}:
\[\mathrm{KN}_{V}:\textup{MF}(Z,W)\rightarrow \textup{MF}(V\times V^*,W+\mathsf{Q}_V).\]
Since the Knorrer functor is canonical the equivalence relation on \(\mathbb{H}\mathrm{om}(X,Y)\):
\begin{equation} \label{eq:KN}
(Z,W)\sim (V\times V^*,W+\mathsf{Q}_V),\end{equation}
is compatible with the composition: if \(g\sim f\) then \(h\circ g\sim h\circ f\) and \(g\circ h\sim f\circ h\). Thus we have a well-defined three-category:
\[\dddot{\mathrm{Cat}}'_{\mathrm{man}}:=\dddot{\mathrm{Cat}}_{\mathrm{man}}/\sim.\]
The smaller category \(\dddot{\mathrm{Cat}}_{man}'\) is closer to the category proposed in \cite{KapustinRozansky10} but the categorical setting for
taking the quotient by the equivalence relation is rather subtle (see for example \cite{Drinfeld04}). We prefer to work with the large
category and indicate the Knorrer periodicity isomorphisms explicitly instead of taking quotient by all Knorrer isomorphisms.
\subsection{Relation between the three-categories}
\label{sec:embedd-three-categ}
There is a functor $j_3\colon\thc_{\mathrm{man}}\to \thc_{\mathrm{sym}}$ which acts on the objects as
$$ X\mapsto T^*X.$$
The embedding at the level of morphisms is based on describing Lagrangian submanifolds by generating functions \cite{Arnold74}. Let us recall the basic facts.
Given a (complex) manifold $Z$ and function $w: X\times Z\to \mathbb{C}$ we define a subvariety
$F_w\subset T^*X\times Z$ by the equations
$$ \partial_{z_i}w(x,z)=0,\quad \partial_{x_i}w(x,z)=p_i,$$
where $z_i$ are local coordinates along $Z$, $x_i$ are local coordinates along $X$ and $p_i$ are
the coordinates on the cotangent space that are dual to the coordinates $x_i$. As shown
in \cite{Arnold74}, the image $L_w$ of $F_w$ in $T^*X$ under the natural projection
$\pi$ is (generally) a Lagrangian subvariety.
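The simplest instance of this construction, included here only for illustration, is \(Z=\mathrm{pt}\): the equations \(\partial_{z_i}w=0\) are vacuous and
\[F_w=L_w=\{(x,p)\in T^*X\;|\;p=dw(x)\}\]
is the graph of the differential of \(w\colon X\to\mathbb{C}\), the classical Lagrangian submanifold with generating function \(w\).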
Thus the functor $j_3$ at the level of homomorphisms between the objects is defined as
$$ (Z,w)\mapsto (F_w,\pi,L_w).$$
The real problem arises at the level of morphisms between the morphisms of objects.
It is tempting to
say that we have a functor
$$ \textup{MF}(X\times Z\times Z'\times Y,w'-w)\longrightarrow \DG_{\tcoh}^{\tper}(F_w\times_{T^*(X\times Y)} F_{w'}).$$
It is not clear to the authors how one could construct such a functor in a canonical way. One option here is
to use the functor \(\mathcal{E}xt\): if we fix some element \(\mathcal{F}\in \textup{MF}(X\times Z\times Z'\times Y,w'-w)\), then
we obtain a functor
\[\mathcal{E}xt(\cdot,\mathcal{F})\colon \textup{MF}(X\times Z\times Z'\times Y,w'-w)\rightarrow \DG_{\tcoh}^{\tper} (X\times Z\times Z'\times Y).\]
However, it is not clear whether we can make a choice of \(\mathcal{F}\) canonical.
\section{Stacks and our main category}
\label{sec:stacks-our-main}
Even the small category \(\dddot{\mathrm{Cat}}_{\mathrm{man}}\) seems to be too big to be useful for some concrete mathematical problems.
Besides, the category of matrix factorizations on a general variety is overly complicated. So we can first restrict our
attention to the subcategory of \(\dddot{\mathrm{Cat}}_{\mathrm{man}}\) where the objects are affine varieties.
A slight issue
with our choice is that affine varieties have an overly simple category of coherent sheaves, and because of that
we would not get a very interesting structure of the composition product in our category. One way to mend this is
to enhance our initial definition and to introduce equivariant matrix factorizations. The equivariant matrix factorizations
are two-morphisms in the enlarged category \(\dddot{\mathrm{Cat}}_{\mathrm{man}}^{\mathrm{stack}}\) where the objects are Artin stacks with affine
stabilizers.
In this paper we will not attempt to rigorously define this large category \(\dddot{\mathrm{Cat}}_{\mathrm{man}}^{\mathrm{stack}}\), instead
we concentrate on the smaller subcategory \(\dddot{\mathrm{Cat}}_{ \mathrm{gl}}\) where the objects are the adjoint quotient stacks \(\gl_n/\GL_n.\)
Implicitly, this category was studied in \cite{OblomkovRozansky16}, \cite{OblomkovRozansky17},
\cite{OblomkovRozansky17a} where many interesting results in the theory of knot homology were derived. These
papers also provide a motivation for the construction of the category \(\dddot{\mathrm{Cat}}_{ \mathrm{gl}}\) in this section.
In what follows, we do not use the language of stacks. Rather, we work in a more elementary setting
to emphasize the computational aspects of our theory.
\subsection{Objects and morphisms}
\label{sec:objects-morphisms}
The objects of $\dddot{\mathrm{Cat}}_{ \mathrm{gl}}$ are labeled by $\mathbb{Z}_{\ge 0}$:
\[\mathrm{Obj}\,\dddot{\mathrm{Cat}}_{ \mathrm{gl}}=\{\mathbf{n}|\mathbf{n}\in \mathbb{Z}_{\ge 0}\}.\]
The objects in the 2-category of morphisms are pairs $(Z,w)$, where $Z$ is an algebraic variety with an action of $\GL_n\times\GL_m$:
\[\mathrm{Obj}\,\mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})= \{(Z,w),\quad w\in \mathbb{C}[\gl_n\times Z \times \gl_m]^{\GL_n\times\GL_m}\}.\]
The actual space of morphisms is the quotient of
\(\mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})\) by the equivalence relation \eqref{eq:KN}, with
the restriction that the vector bundle \(V\) is required to be \(\GL_n\times\GL_m\)-equivariant:
\[\mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})'=\mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})/\sim.\]
The composition of morphisms $(Z,w)\in\mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})$, $(Z',w')\in \mathbb{H}\mathrm{om}(\mathbf{m},\mathbf{k})$ is defined as
$$ (Z,w)\circ(Z',w') = (Z\times \gl_m\times Z'/_+\GL_m,w'+w)\in \mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{k}).$$
Here the quotient is defined via GIT theory as follows. Suppose that $X$ is a variety with a $\GL_m$-action.
A character $\chi$ of $\GL_m$ determines the trivial line bundle $L_\chi$
with $\GL_m$-equivariant structure defined by $\chi$.
Recall that a point $x\in X$ is semistable (with respect to $L_\chi$) if there are
$k>0$ and $s\in \Gamma(X,L_\chi^{\otimes k})^{\GL_m}$ such that $s(x)\ne 0$. Denote
$$ X/_\chi \GL_m:=X^{ss}/\GL_m.$$
Since the group of characters of $\GL_m$ is generated by $\det$, we introduce short-hand notations:
$$ X/_{\pm} \GL_m:=X/_{\det^{\pm 1}} \GL_m,\quad X/_0\GL_m:=X/_{\det^0}\GL_m.$$
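To illustrate these conventions on a standard example (not specific to our setting): for \(X=\Hom(V_k,V_n)\) with \(k<n\) and \(\GL_k\) acting by precomposition, a point \(\phi\) is semistable with respect to the character \(\det\) exactly when \(\phi\) is injective, and
\[\Hom(V_k,V_n)/_{+}\GL_k=\mathrm{Gr}(k,n),\]
the Grassmannian of \(k\)-dimensional subspaces of \(V_n\); the same stability condition appears in the proof of Proposition~\ref{prop:comp12} below.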
Two-morphisms are the objects of the corresponding category of equivariant matrix factorizations.
Given \((Z,w),(Z',w')\in \mathbb{H}\mathrm{om}(\mathbf{n},\mathbf{m})\) we define:
\[\Hom\bigl((Z,w),(Z',w')\bigr)=\textup{MF}_{\GL_n\times\GL_m}(\gl_n\times Z\times Z'\times \gl_m,w-w').\]
The group \(\GL_n\times \GL_m\) is reductive, hence the equivariance of the matrix factorization \((M,D)\) is
equivalent to the condition that the group action on \(M\) commutes with the differential \(D\).
The space of morphisms \(\mathcal{H}om(\cdot,\cdot)\) between equivariant matrix factorizations is defined
to be the space of morphisms between the underlying matrix factorizations that commute with the group action.
\subsection{Framed version of the three-category}
\label{sec:framed-version-three}
We enlarge our category slightly to include the framing. The objects of the new three-category
\( \dddot{\tCat}^{\raisebox{-2pt}{$\scriptstyle\tfrm$}}_{\tgl} \) are again labeled by the non-negative integers:
\[\mathrm{Obj}( \dddot{\tCat}^{\raisebox{-2pt}{$\scriptstyle\tfrm$}}_{\tgl} )=\{\mathbf{n}^{ \mathrm{f}}\,|\,n\in\mathbb{Z}_{\ge 0}\}.\]
For the space of morphisms we have \[\mathrm{Obj}\,\mathbb{H}\mathrm{om}(\mathbf{n}^{ \mathrm{f}},\mathbf{m}^{ \mathrm{f}})=\{(Z,w),\quad w\in \mathbb{C}[V_n\times\gl_n\times Z\times\gl_m\times V_m]^{\GL_n\times\GL_m}\},\] where $Z$ is an algebraic variety with an
action of $\GL_n\times \GL_m$ and \(V_n=\mathbb{C}^n,V_m=\mathbb{C}^m\) carry the standard \(\GL_n\) and \(\GL_m\) actions.
The rest of the definitions are identical to the constructions from the previous subsection, after we replace
\(\gl_n\) with \(V_n\times \gl_n\). For brevity we introduce the following shorthand notation:
\[\gl_n^{ \mathrm{f}}:=\gl_n\times V_n.\]
Many definitions in our paper are parallel in the framed and unframed cases; when the
definitions are parallel we use the \(\bullet\) notation to indicate that \(\bullet\) could be ``$\mathrm{f}$'' or empty.
\subsection{Two-Categories}
\label{sec:two-categories}
The three-category axioms are quite involved and we do not want to dive into the long adventure of checking that
our construction satisfies them. Instead, we discuss the two-categorical level of our construction.
First of all, note that the category \(\mathbb{H}\mathrm{om}(\mathbf{n}^\bullet,\mathbf{m}^\bullet)\) has a natural monoidal structure.
Given \(\mathcal{F}\in \Hom((Z,w),(Z',w'))\) and \(\mathcal{G}\in \Hom((Z',w'),(Z'',w''))\) we define
\(\mathcal{F}\star\mathcal{G}\in \Hom((Z,w),(Z'',w''))\) as
\[\mathcal{F}\star\mathcal{G}:=\pi_{*}(\mathcal{F}\otimes\mathcal{G}),\quad \pi\colon \gl_n\times Z\times Z'\times Z''\times\gl_m\to \gl_n\times Z\times Z''\times\gl_m. \]
Suppose \(S\subset \mathbb{Z}_{\ge 0}^2\); then we introduce a two-category \(\ddot{\textup{Cat}}_S^\bullet=\bigcup_{ij\in S}\mathbb{H}\mathrm{om}(\mathbf{i}^\bullet,\mathbf{j}^\bullet)\) with objects
defined to be the collections of morphisms \(\{(Z_{ij},u_{ij})\in \mathbb{H}\mathrm{om}(\mathbf{i}^\bullet,\mathbf{j}^\bullet)|ij\in S\}\). The space of morphisms
between \(\{(Z_{ij},u_{ij})\}_{ij\in S}\) and \(\{(Z'_{ij},u'_{ij})\}_{ij\in S}\) consists of the collections \(\{\mathcal{C}_{ij}|ij\in S\}\) with
\(\mathcal{C}_{ij}\in \textup{MF}_{\GL_i\times\GL_j}(\gl^\bullet_i\times Z_{ij}\times Z'_{ij}\times \gl^\bullet_j, u_{ij}-u'_{ij})\).
Let us notice that \(\ddot{\textup{Cat}}^\bullet_\Delta\), where \(\Delta:=\{ii| i\in \mathbb{Z}_{\ge 0}\}\), has a monoidal structure defined by \(\circ\), and
for any \(S\) the category \(\ddot{\textup{Cat}}^\bullet_S\) is a module over the category \(\ddot{\textup{Cat}}^\bullet_\Delta\).
It is immediate to see:
\begin{proposition}
The categories \(\ddot{\textup{Cat}}^\bullet_S\) have the structure of a two-category with the composition of two-morphisms defined by \(\star\).
Moreover, \(\ddot{\textup{Cat}}^\bullet_\Delta\) is a monoidal two-category and \(\ddot{\textup{Cat}}^\bullet_S\) is a module two-category over \(\ddot{\textup{Cat}}^\bullet_\Delta\).
\end{proposition}
\section{Defects and knot invariants}
\label{sec:defects-knot-invar}
In this section we explain how we interpret the results of \cite{OblomkovRozansky16} in terms of the three-categories $\dddot{\mathrm{Cat}}_{ \mathrm{gl}}$ and $ \dddot{\tCat}^{\raisebox{-2pt}{$\scriptstyle\tfrm$}}_{\tgl} $. In particular, we
make a connection with the theory of foams and provide an explanation for the Chern character construction.
Recall the standard setup of a 3D topological field theory with defects. A 3D TQFT\ is characterized by its partition-function evaluation \( \mathsf{Z}\). The
partition function is an assignment:
\[\mbox{ closed connected three-manifold } X\mapsto \mathsf{Z}(X)\in\mathbb{C},\]
\[\mbox{ closed connected surface } S\mapsto \mathsf{Z}(S)\in\mathrm{Vect},\]\[\mbox{ three-manifold $X$ with boundary $\partial X = \bigcup_i S_i$} \mapsto \mathsf{Z}(X)\in \mathsf{Z}(\partial X) = \bigotimes_{i=1}^m \mathsf{Z}(S_i),\]
\[\mbox{ closed connected curve } C\mapsto \mathsf{Z}(C)\in \mathrm{Cat},\]\[ \mbox{ surface with boundary } S\mapsto \mathsf{Z}(S)\in \mathsf{Z}(\partial S ) = \otimes_{i=1}^k \mathsf{Z}(C_i),\]
\[\mbox{ point } p\mapsto \mathsf{Z}(p)\in \dddot{\tCat}^{\raisebox{-2pt}{$\scriptstyle\bullet$}}_{\tgl} ,\]
\[\mbox{ interval } I\mapsto \mathsf{Z}(I)\in \Hom\bigl( \mathsf{Z}(b), \mathsf{Z}(e)\bigr),\]
where \(b\) and \(e\) are the endpoints of the interval \(I\).
This collection of data behaves naturally under the gluing operation.
For example, suppose that a three-manifold $X$ without a boundary is cut into two pieces over a surface $S$:
\[X=X_1\cup X_2, \quad\partial(X_1)=S,\quad\partial(X_2)=S^{\vee}.\]
Then $S$ and $S^\vee$ have opposite orientations, hence $ \mathsf{Z}(S)$ and $ \mathsf{Z}(S^\vee)$ are dual vector spaces and
the partition function $ \mathsf{Z}(X)$ is a pairing between their elements:
\[ \mathsf{Z}(X)= \mathsf{Z}(X_1)\cdot \mathsf{Z}(X_2).\]
More generally, the formalism of TQFT provides a method
for computing values of \( \mathsf{Z}(Y)\) by cutting \(Y\) into pieces, evaluating \( \mathsf{Z}\) on the pieces and pairing them
in a standard way.
More details can be found in \cite{Lurie08}. To define a 3D TQFT, we need to include in the domain
of \( \mathsf{Z}\) also manifolds with corners and to work with the more subtle setting of \((\infty,k)\)-categories.
We postpone the discussion of such extension to our future publication \cite{OblomkovRozansky18d}.
Often TQFTs may be defined not only on smooth manifolds but also on `smooth' CW-complexes. In particular, a TQFT\ may have defects (coming from lower-dimensional cells). Topologically, defects are unions of embedded surfaces and curves, and these surfaces and curves may intersect.
The cuts must be transverse to the defects. All other properties of a TQFT\ without defects are preserved.
The full categories \( \dddot{\tCat}^{\raisebox{-2pt}{$\scriptstyle\bullet$}}_{\tgl} \) will be used to construct a 3D TQFT, and we discuss the construction in the forthcoming publication
\cite{OblomkovRozansky18d}, where we construct the corresponding maps \( \mathsf{Z}\) and \(\sfZ^{\tfrm}\).
In this note we concentrate
on a two-dimensional slice of the TQFT\ for the three-manifold \(X=S^2\times \mathbb{R}\).
We think of \(S^2\) as \(\mathbb{R}^2\) compactified by a point at infinity.
The two-dimensional cut is \(\mathbb{R}^2\cup \infty\). Studying the two-dimensional
slice is equivalent to restricting ourselves to the defects of the form \(C\times \mathbb{R}\),
where \(C\) is a curve in \(\mathbb{R}^2\).
The surfaces of defect intersect our fixed \(\mathbb{R}^2\) along a union of oriented curves. Suppose that the curves on
the plane lie in an annulus and their union is a projection of the closure of a braid. By assigning a sign to each intersection, we obtain an interpretation
of the union of curves as a projection of a link in \(\mathbb{R}^3\) presented as the closure \(L(\beta)\) of a braid \(\beta\in \mathfrak{Br}_n\). We denote
the plane with this defect by \(\mathbb{R}^2_{\beta}\).
Our TQFT\ provides an isotopy invariant of \(L(\beta)\), namely, the vector space
\( \sfZ^{\tfrm}(\mathbb{R}^2_\beta)\) assigned to $\mathbb{R}^2_\beta$,
and we show that this invariant coincides with the invariant previously defined in \cite{OblomkovRozansky16}.
Now let us give details of the \(\mathbb{R}^2\)-sliced TQFT. A small neighborhood of the plane contains two-dimensional surfaces of defect that are
products of the defect curves in \(\mathbb{R}^2\) with an interval. The surfaces of defect divide \(\mathbb{R}^3\) into connected components and each connected
component has an integer marking.
We choose the markings in such a way that if we move along the (oriented) intersection of a defect surface with $\mathbb{R}^2$ and the marking on the left is $k$, then the marking on the right is $k+1$.
We assume that the curves of intersection are compact, thus their union is contained in a large disc.
The marking of the disc complement (`the infinite marking') determines all other markings.
In this paper we only consider the theories with the infinite marking equal to \(0\). Hence, if the intersection of the surface defect and $\mathbb{R}^2$ is
a braid closure then all markings are positive. Thus the picture of the braid \(\beta\)
determines the marking of our theory uniquely. Slightly abusing notation, we denote these data by \(\mathbb{R}^2_\beta\). Figure~\ref{pic:1} shows the slice \(\mathbb{R}^2_\beta\)
for \(\beta=\sigma_1^3\). The closure \(L(\beta)\) is a trefoil and we explain
below how we can compute the homology of this knot.
\begin{figure}\label{pic:1}
\includegraphics[width=2in]{Pics1.pdf}
\caption{\(\mathbb{R}^2_\beta\) for \(\beta=\sigma_1^3\).}
\end{figure}
\subsection{Values on points and intervals}
\label{sec:valu-points-interv}
There is a canonical way to upgrade the marking of \(\mathbb{R}^2_\beta\) to a categorical marking.
Denote by $ \mathsf{p}_n$ a point lying inside a region marked by $n$. To a pair of points $( \sfpnv{1}, \sfpnv{2})$ we assign a two-category\
\begin{equation}
\label{eq:mnass}
\sfZv{\bullet}( \sfpnv{1}, \sfpnv{2}) = \mathbb{H}\mathrm{om}( \bfnbv{1}, \bfnbv{2}).
\end{equation}
Recall that an object of this two-category\ is a pair $(Z,W)$, where $Z$ is a variety with a $( \GLnv{1}\times \GLnv{2})$-action and
$W \in\mathbb{C}[ \glnv{1}\times \glnv{2}\times Z]^{ \GLnv{1}\times \GLnv{2}}$.
In accordance with the assignment~\eqref{eq:mnass}, to an interval $I$ connecting $ \sfpnv{1}$ and $ \sfpnv{2}$ (and possibly crossing defect surfaces) we assign an object $ \sfZv{\bullet}(I)$ of $\mathbb{H}\mathrm{om}( \bfnbv{1}, \bfnbv{2})$, so that if $I$ is the result of gluing together the intervals $I_1$ and $I_2$ over the common middle point, then the object of $I$ is the composition:
\[ \sfZv{\bullet}(I) = \sfZv{\bullet}(I_2)\circ \sfZv{\bullet}(I_1).\]
This relation implies that if the interval $I$ lies within a single region marked by $n$, then the corresponding object $ \sfZv{\bullet}(I)$ is the identity with respect to the monoidal structure ({\it i.e.}, the composition) of the category $\mathbb{H}\mathrm{om}( \bfn^{\bullet}, \bfn^{\bullet})$.
Proposition~\ref{prop:unit} states that the following pair is the identity object of
the `unframed' two-category\ $\mathbb{H}\mathrm{om}( \mathbf{n}, \mathbf{n})$:
\[\mathrm{L}^n_{\mathrm{id}}=\Bigl(\mathrm{T}^*\GL_n,W_{\mathrm{id}}:=\textup{Tr}\,\phi\bigl(X-\mathrm{Ad}_g(X')\bigr)\Bigr)\]
in which $(g,\phi)\in \GLv{n}\times\gl_{n}\cong \rmTs\GLn $, while $(X, X')\in\gl_{n}\times\gl_{n}$, and the action of $ \GLv{n}\times \GLv{n}$ on the total variety $\gl_{n}\times\gl_{n}\times \rmTs\GLn $ is given by the following formula:
\[
(a,b)\cdot(X,X',g,\phi) = \bigl(\Adv{a}X,\Adv{b}X',agb^{-1},\Adv{a}\phi\bigr).
\]
The identity object in the framed two-category $\mathbb{H}\mathrm{om}( \bfn^{\tfrm}, \bfn^{\tfrm})$ is a similar pair
\[\mathrm{L}^n_{\mathrm{id}}=\bigl(\mathrm{T}^*\GL_n\times V^*_n,W_{\mathrm{id}}^{ \mathrm{f}}
:=W_{\mathrm{id}}(X,g,\phi,X')+w\cdot(v-gv')\bigr),\]
where $(X,v), (X',v')\in\gl_{n}\times V_n$, while $(g,\phi,w)\in \rmTs\GLn \times V^*_n$ and the action of $ \GLv{n}\times \GLv{n}$ on the framing-related variables is
\[
(a,b)\cdot(v,v',w) = (av,bv',aw).
\]
The proposition below shows that this is a unit in our two-category.
\begin{proposition}\label{prop:unit}
For any \(\mathrm{L}\in\mathbb{H}\mathrm{om}(\mathbf{n}^\bullet,\mathbf{m}^\bullet)\) we have
\[\mathrm{L}_{\mathrm{id}}^m\circ \mathrm{L}\sim\mathrm{L},\quad \mathrm{L}\circ\mathrm{L}_{\mathrm{id}}^n\sim\mathrm{L}.\]
\end{proposition}
\begin{proof}
We prove the second equality in the unframed case since the framed case is similar.
If \[\mathrm{L}=\bigl(Z,W(z,X',X'')\bigr)\] with \(X',X''\) being the coordinates along \(\gl_n\)
and \(\gl_m\), then the potential of the composition is the sum
\[\textup{Tr}\,\phi(X-\mathrm{Ad}_gX')+W(z,X',X'')\in\mathbb{C}[ \mathrm{T}^*\GL_n\times Z\times\gl_n\times\gl_n\times\gl_m/ \GLv{n}]^{ \GLv{m}\times \GLv{n}}.\]
The group \(\GL_n\) acts freely on itself, hence the last quotient is
equal to the product \(\gl_n\times Z\times\gl_n\times\gl_n\times\gl_m\) and the
potential on the quotient is obtained from the last potential by setting
\(g=1\). Finally, observe that the potential
\[\textup{Tr}\,\phi(X-X')\]
is quadratic and we use Knorrer periodicity \cite{Knorrer} to complete the proof.
\end{proof}
Now assume that \(I\) is a small interval intersecting a curve of defect at a smooth point.
Let us assume that the curve of defect separates regions with markings \(k\) and \(l\).
Denote by \((\phi_{kl},\phi_{lk})\) the coordinates on \(\mathrm{T}^* \Hom(V_k,V_l)\), where
\(\phi_{kl}\in \Hom(V_k,V_l)\) and \(\phi_{lk}\in \Hom(V_l,V_k)\). Also denote by $v_l$ and $v_k$ the coordinates on $V_l$ and $V_k$ and by $X_l$ and $X_k$ the coordinates on $\gl_l $ and $\gl_k$. Now the object assigned to the oriented interval $\vec{I}$ in the unframed category is the pair
\[ \mathsf{Z}(\vec{I})=\bigl(\mathrm{T}^* \Hom(V_k,V_l),\quad W_{k,l}:=\textup{Tr}(X_k\phi_{lk}\phi_{kl})-\textup{Tr}(X_l\phi_{kl}\phi_{lk})\bigr).\]
In the case of framed two-categories,
if the interval starts at \(k\) and ends at \(l\) and the shortest path from the head of the vector \(\vec{I}\) to the head of the direction vector
of the defect goes clockwise, then we choose
\[ \sfZ^{\tfrm}(\vec{I})=\bigl(\mathrm{T}^* \Hom(V_k,V_l)\times V^*_k,\quad W^f_{k,l}:=\textup{Tr}(X_k\phi_{lk}\phi_{kl})-\textup{Tr}(X_l\phi_{kl}\phi_{lk})+\textup{Tr}(\psi v_k)\bigr),\]
where \(\psi\) denotes the coordinate on the dual framing factor. In the case of the opposite orientation we set
\[ \sfZ^{\tfrm}(\vec{I})=\bigl(\mathrm{T}^* \Hom(V_k,V_l)\times V^*_l,\quad W^f_{k,l}:=\textup{Tr}(X_k\phi_{lk}\phi_{kl})-\textup{Tr}(X_l\phi_{kl}\phi_{lk})+\textup{Tr}(\psi v_l)\bigr).\]
The element \( \mathsf{Z}^\bullet(\vec{I})\) is an object of the two-category \(\mathbb{H}\mathrm{om}(\mathbf{k}^\bullet,\mathbf{l}^\bullet)\), thus the composition construction allows us
to interpret \( \mathsf{Z}^\bullet(\vec{I})\) as a morphism from \( \mathbb{H}\mathrm{om}(\mathbf{k}^\bullet,\mathbf{0}^\bullet)\) to
\(\mathbb{H}\mathrm{om}(\mathbf{l}^\bullet,\mathbf{0}^\bullet)\).
Let us denote the intervals as above by \(\vec{I}_{k\uparrow l}\) and
\(\vec{I}_{k\downarrow l}\). More generally we denote by
\[\vec{I}_{k_1\uparrow k_2\uparrow\dots \uparrow k_l},\quad \vec{I}_{k_1\downarrow k_2\downarrow\dots \downarrow k_l}\]
the interval that connects the connected components with the labels \(k_1\) and \(k_l\) and traverses the
domains with the labels \(k_2,\dots,k_{l-1}\) in between, with the orientations of the
intersections as indicated by the arrows. We also allow a mixture of down/up orientations of the intersections.
According to our definition of TQFT we have:
\[ \mathsf{Z}(\vec{I}_{k_1\uparrow k_2
\dots\uparrow k_l})= \mathsf{Z}(\vec{I}_{k_1\uparrow k_2})\circ \mathsf{Z}(\vec{I}_{k_2\uparrow k_3})\circ\dots\circ \mathsf{Z}(\vec{I}_{k_{l-1}\uparrow k_l}).\]
The GIT quotient in the definition of the composition can be made explicit in many important cases:
\begin{proposition} \label{prop:comp12}For any \(n\ge 0\) we have:
\[\sfZ^{\tfrm}(\vec{I}_{0\uparrow 1\uparrow \dots\uparrow n})=(\mathrm{T}^*\mathrm{Fl}_n \times V_n,w),\quad
\sfZ^{\tfrm}(\vec{I}_{0\downarrow 1\downarrow \dots\downarrow n})=(\mathrm{T}^* \mathrm{Fl}_n,w),\]
\[ \mathsf{Z}(\vec{I}_{0\vert 1\vert \dots\vert n})=(\mathrm{T}^*\mathrm{Fl}_n,w),\quad w=\mu\cdot X\in \mathbb{C}[\gl_n\times\mathrm{T}^*\mathrm{Fl}_n]^{\GL_n},\]
where \(\mu:\mathrm{T}^*\mathrm{Fl}_n\to \gl_n^*\) is the moment map and \(X\) are the coordinates on \(\gl_n\).
\end{proposition}
\begin{proof} Let us first prove the last equation; the other equations are analogous, and at the end of the proof
we indicate how one needs to modify the argument to get the first two formulas.
We proceed by induction on \(n\). Thus we need to compute the composition:
\[ \mathsf{Z}(\vec{I}_{0\vert 1\vert\dots\vert n-1})\circ \mathsf{Z}(\vec{I}_{n-1\vert n}).\]
It is convenient to think of \(\mathrm{T}^*\mathrm{Fl}_n\) as a \(B_n\)-quotient; since the trace map gives a natural
pairing on \(\gl_n\), we can think of \(\mu\) as a map \(\mathrm{T}^*\mathrm{Fl}_n\rightarrow\gl_n\):
\[\mathrm{T}^*\mathrm{Fl}_n=\GL_n\times\mathfrak{n}_n/B_n, \quad \mu(g,Y)=\mathrm{Ad}_gY,\]
where \(g\) and \(Y\) are the coordinates on \(\GL_n\) and \(\mathfrak{n}_n\).
In these notations
the composition in question is the pair of the GIT quotient space and a potential:
\[\mathrm{T}^*\mathrm{Fl}_{n-1}\times \mathrm{T}^*\Hom(V_{n-1},V_n)/_+\GL_{n-1},\quad w_{n-1,n}=\textup{Tr}(X'\mathrm{Ad}_g Y')+\textup{Tr}(X'\psi\phi)-\textup{Tr}(X''\phi\psi),\]
where \(X'\in\gl_{n-1}\), \(X''\in\gl_{n}\), \(g,Y'\) are the coordinates along \(\mathrm{T}^*\mathrm{Fl}_{n-1}\) and
\(\psi\in \Hom(V_n,V_{n-1})\), \(\phi\in \Hom(V_{n-1},V_n)\) are the coordinates along \(\mathrm{T}^*\Hom(V_{n-1},V_n)\).
The GIT quotient in the last formula can be made very explicit; we choose to describe the quotient by constructing
explicit charts in the quotient. Then we show that in each chart we can apply the Knorrer periodicity to
simplify the potential.
The GIT stable locus consists of points where \(\phi\) is injective. Thus we can assume that there is
\(k\) such that \[\det(\phi_{\widehat{k}})\ne 0,\] where \(\phi_{\widehat{k}}\) is \(\phi\) with the \(k\)-th row removed.
Let us denote the locus where the last inequality holds by \(U_k\). It is clear that the quotient is covered
by the charts \(U_k/\GL_{n-1}\) and we can analyze the potential in each chart.
To simplify notations let us consider the case \(k=n\). The natural slice to the \(\GL_{n-1}\)-action is the closed
subset of elements constrained by:
\[\phi_{ij}=\delta_{i,j},\quad 1\le i,j\le n-1.\]
Let us also denote the last row of \(\phi \) by \(v\), the matrix of the first \(n-1\) columns
of \(\psi\) by \(\tilde{\psi}\) and the last column of \(\psi \) by \(\psi'\). Then the potential \(w_{n-1,n}\) becomes:
\begin{equation}\label{eq:redw}\textup{Tr}(X'\mathrm{Ad}_gY')+\textup{Tr}(X'\tilde{\psi})+\textup{Tr}(vX'\psi')-\textup{Tr}(X''\phi\psi).\end{equation}
The first three terms combine into \(\textup{Tr}\bigl(X'(\mathrm{Ad}_gY'+\tilde{\psi}+\psi'v)\bigr)\), which is quadratic, so we can apply the Knorrer reduction. The reduction forces the following vanishing
of the coordinates:
\[X'=0,\quad \mathrm{Ad}_gY'+\tilde{\psi}+\psi'v=0.\]
Thus the new coordinates on the Knorrer-reduced space are \(X'',Y',v,\psi'\) (together with \(g\)), and in these coordinates we have:
\[\phi\psi=
\begin{bmatrix}
-\psi'v-\mathrm{Ad}_gY'&\psi'\\
-v\psi'v-v\mathrm{Ad}_gY'&v\psi'
\end{bmatrix}
\]
Thus a direct computation shows that the last term of \eqref{eq:redw} is equal to
\[\textup{Tr}(X''\mathrm{Ad}_hY),\mbox{ with}\quad Y=
\begin{bmatrix}
Y'&g^{-1}\psi'\\0&0
\end{bmatrix},\quad
h= \begin{bmatrix}
g&0\\vg&1
\end{bmatrix}.
\]
Hence we have proved the last formula in the chart \(U_n\); the computations in the other charts are analogous. The argument in the
framed case is essentially the same.
\end{proof}
\subsection{The categories of closed curves}
\label{sec:value-closed-curves}
The choice of defect-related objects \( \sfZv{\bullet}(\vec{I})\) determines the categories assigned to closed curves: if a curve $C$ is presented as a gluing of two intervals, then its category $ \sfZv{\bullet}(C)$ must be the category of morphisms between their objects. Two curves are of special importance for our braid-related constructions.
The first type is a curve that does not intersect any defects, so the curve is a circle that lies inside the connected component
with the marking \(n\). We denote such a circle by \(S^1_n\). To a point \(p\in S^1_n\) we assign the two-category \(\mathbb{H}\mathrm{om}(\mathbf{n}^\bullet,\mathbf{0}^\bullet)\).
For brevity, we start using the notation \[[\mathbf{n}^\bullet,\mathbf{m}^\bullet ]:=\mathbb{H}\mathrm{om}(\mathbf{n}^\bullet,\mathbf{m}^\bullet),
\]
for the corresponding two-category.
The interval \(I\) connecting
\(\mathsf{p}_n\) to itself gets assigned the identity \( \sfZv{\bullet}(I)=\mathrm{L}_{\mathrm{id}}^n\in [\mathbf{n}^\bullet,\mathbf{n}^\bullet]\). Since $S^1_n$ is a result of gluing two such intervals, its category is the category of endomorphisms of the interval object:
\[ \sfZv{\bullet}(S^1_n)=\Hom(\mathrm{L}_{\mathrm{id}}^n,\mathrm{L}_{\mathrm{id}}^n).\]
Proposition~\ref{prop:unit} implies that
the object \(\mathbb{I}\mathrm{d}:=\{\mathrm{L}_{\mathrm{id}}^i\}_{i\in \mathbb{Z}_{\ge 0}}\in \mathrm{Obj}(\ddot{\textup{Cat}}^\bullet_\Delta)\) is a unit in the monoidal two-category \(\ddot{\textup{Cat}}^\bullet_\Delta\).
The Drinfeld center \(\mathfrak{Z}\) of a monoidal two-category is sometimes defined as the one-category of endomorphisms of the unit of the
category. Thus our computation suggests
\[ \bigcup_n \mathsf{Z}(S^1_n)=\Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})=\mathfrak{Z}(\ddot{\textup{Cat}}_\Delta),\]
where the last equality is just a suggestion, not a rigorous statement, since we could not find a rigorous
discussion of the Drinfeld center of a monoidal two-category. We give an interpretation of the last equality
in the more familiar terms of Drinfeld centers of monoidal categories.
For an object
\(\mathbb{O}=\{(Z_{i},u_{i})|i \in\mathbb{Z}_+\}\in \mathrm{Obj}(\ddot{\textup{Cat}}^\bullet_{\mathbb{Z}_+\times 0})\) we define
a monoidal category \(\mathrm{Cat}^\bullet[\mathbb{O}]\) with \(\mathrm{Obj}(\mathrm{Cat}^\bullet[\mathbb{O}])=\bigcup_{i}\Hom((Z_{i},u_{i}),(Z_{i},u_{i}))\) and
with the monoidal structure defined by the convolution product \(\star\) discussed above. Thus
the category \(\mathrm{Cat}^\bullet[\mathbb{O}]\) has a well-defined Drinfeld center \(\mathfrak{Z}(\mathrm{Cat}^\bullet[\mathbb{O}])\) and we conjecture that
\(\Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})\) is the inverse limit of these Drinfeld centers.
\begin{conjecture}\label{conj:Z} For every \(\mathbb{O}\in \mathrm{Obj}(\ddot{\textup{Cat}}_{\mathbb{Z}_+\times 0}^\bullet)\) there is a monoidal functor
\[\mathrm{HC}^\bullet[\mathbb{O}]:\quad \Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})\to \mathfrak{Z}(\mathrm{Cat}^\bullet[\mathbb{O}]),\]
and for every \(\mathcal{C}\in \Hom(\mathbb{O},\mathbb{O}')\) there is a natural transformation
\(\mathrm{HC}^\bullet[\mathcal{C}]\) between the functors \(\mathrm{HC}^\bullet[\mathbb{O}]\) and \(\mathrm{HC}^\bullet[\mathbb{O}']\).
Moreover, the category \(\Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})\) is the universal category that
satisfies this collection of properties: if \(\mathrm{Cat}'\) is another category that
has a collection of functors \(\mathrm{HC}'[\dots]\) from \(\mathrm{Cat}'\) into the Drinfeld centers
\(\mathfrak{Z}(\mathrm{Cat}^\bullet[\mathbb{O}])\), then there is a functor \(\Psi: \mathrm{Cat}'\to \Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})\)
that intertwines the natural transformations \(\mathrm{HC}'[\dots ]\) and \(\mathrm{HC}^\bullet[\dots]\).
\end{conjecture}
Evidence in favor of the conjecture is the construction of the monoidal functor
\[\mathrm{HC}^\bullet\colon \Hom(\mathbb{I}\mathrm{d},\mathbb{I}\mathrm{d})\to \mathrm{Cat}^\bullet[\mathbb{O}]\] in \cite{OblomkovRozansky18a}
for the particular choice of elements \[\mathbb{O}_{\mathrm{br}}=\{\mathsf{Z}(\vec{I}_{0|1|\dots|n})|n\in \mathbb{Z}_+\},\quad \mathbb{O}_{\mathrm{br}}^{ \mathrm{f}}=\{\mathsf{Z}(\vec{I}_{0\uparrow 1\uparrow\dots\uparrow n})|n\in\mathbb{Z}_+\}.\]
As we discussed before, there are homomorphisms
\[\Phi\colon\bigcup_n\mathfrak{Br}_n\to \mathrm{Cat}[\mathbb{O}^{ \mathrm{f}}_{\mathrm{br}}],\qquad\Phi^{\mathrm{aff}}\colon\bigcup_n \mathfrak{Br}_n^{\mathrm{aff}}\to \mathrm{Cat}[\mathbb{O}_{\mathrm{br}}]\]
of the finite and affine braid groups to these monoidal categories. The generators of the braid groups are mapped to special
Koszul matrix factorizations. Conjecturally, the images of $\mathfrak{Br}_n$ and $\mathfrak{Br}_n^{\mathrm{aff}}$ span the whole category \(\mathrm{Cat}[\mathbb{O}]\).
The image of \(\mathrm{HC}^\bullet\) commutes with every element of \(\mathrm{Cat}[\mathbb{O}]\)
(that is, with the images of the braid groups).
In \cite{OblomkovRozansky18a} we defined the categories
\begin{equation}
\label{eq:drcat}
\textup{MF}_{ \mathrm{Dr}}^\bullet = \textup{MF}_{G_n}\bigl((\mathfrak{g}^\bullet\times\mathfrak{g}^\bullet\times G)^{\mathrm{st}}, W^\bullet_{ \mathrm{Dr}}\bigr),
\end{equation}
where
\[W_{ \mathrm{Dr}}(X,Y,g)=\textup{Tr} X(\mathrm{Ad}_g(Y)-Y), \quad W_{ \mathrm{Dr}}^{ \mathrm{f}}(X,v,Y,u,g)=W_{ \mathrm{Dr}}(X,Y,g)+u^*\cdot v- u^*\cdot gv.\]
\begin{proposition}
For \(\bullet=\emptyset, \mathrm{f}\)
the categories $\textup{MF}_{ \mathrm{Dr}}^\bullet$
are equivalent to the categories of
endomorphisms of the identity objects $\mathrm{L}^n_{\mathrm{id}}$:
\[\textup{MF}_{ \mathrm{Dr}}^\bullet=\Hom(\mathrm{L}^n_{\mathrm{id}},\mathrm{L}^n_{\mathrm{id}})\]
\end{proposition}
\begin{proof}
We consider only the case \(\bullet=\emptyset\); the other case is analogous.
The statement follows from the Knorrer periodicity \cite{Knorrer}. Indeed, by
our definition we have
\[\Hom(\mathrm{L}_{\mathrm{id}}^{n},\mathrm{L}_{\mathrm{id}}^n)=\textup{MF}_{\GL_n}(\mathrm{T}^*\GL_n\times\gl_n\times\gl_n\times\mathrm{T}^*\GL_n/\GL_n,W),\]
\[W=\textup{Tr}\, \phi\bigl(X-\mathrm{Ad}_g(X')\bigr)-\textup{Tr}\,\phi'\bigl(X'-\mathrm{Ad}_{g'}(X)\bigr),\]
where \((\phi,g)\) and \((\phi',g')\) are the coordinates on two copies of
\(\mathrm{T}^*\GL_n\) and \(X,X'\) are the coordinates on two copies of \(\gl_n\).
By setting \(g'=1\) we take a slice of the \(\GL_n\)-orbits. On the slice
the second term in the formula for \(W\) becomes
\(\textup{Tr}\,\phi'(X-X').\)
Thus the Knorrer periodicity implies that the restriction on the
locus \(X=X',\phi'=0\) is the equivalence of the corresponding categories
of matrix factorizations.
\end{proof}
We can linearize the potential \(W_{ \mathrm{Dr}}^\bullet\) by introducing a new variable \(U=Yg^{-1}\):
\[W_{ \mathrm{lin}}(X,U,g)=\textup{Tr}(X[g,U]),\quad W_{ \mathrm{lin}}^{ \mathrm{f}}(X,v,U,u,g)=W_{ \mathrm{lin}}(X,U,g)+u^*\cdot v -u^*\cdot gv.\]
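Indeed, substituting \(Y=Ug\) one checks directly that
\[\textup{Tr}\,X\bigl(\mathrm{Ad}_g(Y)-Y\bigr)=\textup{Tr}\,X\bigl(g(Ug)g^{-1}-Ug\bigr)=\textup{Tr}\,X(gU-Ug)=\textup{Tr}(X[g,U]),\]
while the framing terms are unaffected by this change of variables.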
The group \(G\) naturally embeds inside its Lie algebra \(\mathfrak{g}\), \(j_G\colon G\to \mathfrak{g}\). This embedding induces
the localization functor:
\[\mathrm{\mathrm{loc}}^\bullet\colon \textup{MF}^\bullet_{ \mathrm{Dr}}\to \underline{\textup{MF}}^\bullet_{ \mathrm{Dr}}=\textup{MF}_G\bigl((\mathfrak{g}^\bullet\times\mathfrak{g}^\bullet\times \mathfrak{g})^{\mathrm{st}},W_{ \mathrm{lin}}^\bullet\bigr).\]
It turns out that in the framed case the localization functor is an isomorphism:
\begin{proposition} \cite{OblomkovRozansky18a}
The localization functor \(\mathrm{\mathrm{loc}}^{ \mathrm{f}}\) is an isomorphism.
\end{proposition}
Since the potential \(W^\bullet_{ \mathrm{lin}}\) is linear along the last copy of \(\mathfrak{g}\),
the Koszul duality (see for example \cite{ArkhipovKanstrup15a} or \cite{OblomkovRozansky18a}) provides
an equivalence:
\begin{equation}
\label{eq:drcato}
\mathrm{KSZ}\colon \underline{\textup{MF}}^{ \mathrm{f}}_{ \mathrm{Dr}}\longrightarrow \rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2)).
\end{equation}
Thus we have completed the proof of Theorem~\ref{thm:DrZ}.
The second type of closed curve is a line that intersects our braid transversally. The line goes through the regions with the
marks \(0,1,\dots,n-1,n,n-1,\dots,0\). Figure~\ref{fg:crvt} gives an example.
\begin{figure}
\label{fg:crvt}
\includegraphics[width=2in]{Pics2.pdf}
\caption{Plane \(\mathbb{R}^2_{\sigma_1^3}\) cut by \(\mathbb{R}_{0,1,2,1,0}\)}
\end{figure}
We denote such a line, compactified by a point at infinity, by \(S^1_{0\uparrow 1\uparrow \dots\uparrow n \downarrow n-1\downarrow \dots\downarrow 0}.\) The value
of \( \mathsf{Z}^\bullet\) follows immediately from Proposition~\ref{prop:comp12}:
\begin{equation}
\label{eq:cats}
\mathsf{Z}^\bullet(S^1_{0\uparrow 1\uparrow\dots\uparrow n\downarrow\dots \downarrow 0})=\textup{MF}^{\mathrm{st}}_{G_n}(\gl_n^\bullet\times \mathrm{T}^* \mathrm{Fl}_n\times \mathrm{T}^* \mathrm{Fl}_n,w_1-w_2),
\end{equation}
where ``st'' indicates that we restrict to the GIT stable locus of the corresponding space.
The categories~\eqref{eq:cats} are closely related to the main categories of
\cite{OblomkovRozansky16}. Recall that in \cite{OblomkovRozansky16} we worked with \(G_n\times B\times B\)-equivariant categories of matrix factorizations on the space
\[\mathcal{X}^\bullet:=\gl_n^\bullet\times (G_n\times\mathfrak{n}_n)\times (G_n\times\mathfrak{n}_n)\]
with the equivariant structure preserving the potential
\[W(X,g_1,Y_1,g_2,Y_2)=\textup{Tr}\,X(\mathrm{Ad}_{g_1}Y_1-\mathrm{Ad}_{g_2}Y_2),\]
where \(X\) is the coordinate on \(\gl_n\) and \(g_i,Y_i\) are the coordinates on \(G\) and \(\mathfrak{n}\).
An object of the category \(\textup{MF}_{G\times B\times B}(\mathcal{X}^\bullet, W)\) is the collection of data
\[
\mathcal{F}=(M,D,\partial_l,\partial_r), \quad (M,D)\in \textup{MF}_G(\mathcal{X}^\bullet,W), \quad \partial_l,\partial_r\in
\Hom(\Lambda^*\mathfrak{n},\Lambda^{<*}\mathfrak{n})\otimes\Hom_{\mathbb{C}[\mathcal{X}^\bullet]}(M,M),\]
\[(D_{\mathrm{tot}})^2=W,\quad D_{\mathrm{tot}}=D+d_{ce}+\partial_l+\partial_r,\]
where \(D_{\mathrm{tot}}\in \End(\mathrm{CE}_{\mathfrak{n}^2}\stackon{$\otimes$}{$\scriptstyle\Delta$} M)\) and \(d_{ce}\) is the Chevalley-Eilenberg\ differential.
The matrix factorization \((\mathrm{CE}_{\mathfrak{n}^2}\stackon{$\otimes$}{$\scriptstyle\Delta$} M,D_{\mathrm{tot}})\) is a strictly \(B^2\)-equivariant matrix factorization, so we
define a natural averaging functor:
\[\mathrm{Av}\colon \textup{MF}_{G\times B\times B}(\mathcal{X}^\bullet,W)\to \textup{MF}_{G}(\gl_n^\bullet\times \mathrm{T}^*\mathrm{Fl}_n\times\mathrm{T}^*\mathrm{Fl}_n,W).\]
Given an affine \(G\)-equivariant chart \(U\subset \gl_n^\bullet\times \mathrm{T}^*\mathrm{Fl}_n\times\mathrm{T}^*\mathrm{Fl}_n\), the
\(B^2\)-orbit \(\tilde{U}\) of the chart is affine and we define
\[\mathrm{Av}(\mathcal{F})(U)=(\mathcal{F}_{\tilde{U}}^{B^2},D_{\mathrm{tot}}).\]
The functor \(\mathrm{Av}\) is not invertible, since we forget the piece of data corresponding to the non-equivariant
specialization. However, one can easily see that the functor preserves a lot of homological data:
\begin{proposition} The functor \(\mathrm{Av}\) is fully faithful and preserves extensions:
\[\mathcal{E} xt(\mathrm{Av}(\mathcal{F}),\mathrm{Av}(\mathcal{G}))=\mathcal{E} xt(\mathcal{F},\mathcal{G}),\]
where both sides are modules over \(\mathbb{C}[\gl_n]^{\GL_n}\).
\end{proposition}
In \cite{PolishchukVaintrob11} the authors define the category \(\mathrm{DMF}(X,s)\), where \(X\) is an algebraic stack and
\(s\in H^0(\mathcal{L})\) for a line bundle \(\mathcal{L}\) on \(X\). The category
\(\mathrm{DMF}(X,s)\) is the quotient of the category of matrix factorizations \(\textup{MF}(X,s)\) by the subcategory
\(\mathrm{LMF}(X,s)\) of locally contractible matrix factorizations. We can refine the proposition above into:
\begin{conjecture}
The functor \(\mathrm{Av}\), refined to the functor
\[\mathrm{Av}\colon \textup{MF}_{G\times B\times B}(\mathcal{X}^\bullet,W)\to \mathrm{DMF}_{G}(\gl_n^\bullet\times \mathrm{T}^*\mathrm{Fl}_n\times\mathrm{T}^*\mathrm{Fl}_n,W),\]
becomes an isomorphism.
\end{conjecture}
From now on we abbreviate the category \(\textup{MF}_{G_n\times B\times B}(\mathcal{X}^\bullet,W)\) of matrix factorizations as \(\textup{MF}^{\bullet}_n\). This category
is monoidal. Indeed, let \(D_{++}^-(n)\) be a disc with two holes (a pair of pants) with defects that are straight non-intersecting segments
connecting the outer boundary with the inner boundaries, such that all boundaries are \(S^1_{0\uparrow 1\uparrow \dots\uparrow n \downarrow n-1\downarrow \dots\downarrow 0}\). Figure~\ref{fg:comp} represents
\(D_{++}^-(3)\). By the axioms of the TQFT, \( \mathsf{Z}^\bullet(D_{++}^-(n))\) is a functor:
\[ \mathsf{Z}^\bullet(D_{++}^-(n)): \textup{MF}^\bullet_n\times \textup{MF}^\bullet_n\to\textup{MF}^\bullet_n,\]
which defines a monoidal structure \(\star\) on the category. This monoidal structure was studied in detail in \cite{OblomkovRozansky16}.
\begin{figure}
\label{fg:comp}
\includegraphics[width=2in]{Pics6.pdf}
\caption{\(D_{++}^-(3)\)}
\end{figure}
\subsection{Values on discs}
\label{sec:values-discs}
As a final step of our construction we need to discuss the values of the TQFT\ on discs. The first type of disc is the disc \(D_\emptyset\) that bounds \(S^1_n\) and
does not contain the tautological defect point. The category $ \sfZv{\bullet}(S^1_n) = \Hom(\mathrm{L}_{\mathrm{id}}^n,\mathrm{L}_{\mathrm{id}}^n)$ is monoidal and $ \sfZv{\bullet}(D_\emptyset)$ represents the identity object in it. Hence we set
\[\mathsf{Z}^{ \mathrm{f}}(D_\emptyset):=\mathcal{O}\in \rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2)).\]
If the disc contains the point of tautological defect then we set
\[\mathsf{Z}^{ \mathrm{f}}(D_{\mathrm{\mathrm{taut}}}):=\Lambda^\bullet \mathcal{B}\in \text{\it Coh}( \mathrm{Hilb}_n(\mathbb{C}^2)),\]
where \(\mathcal{B}\) is the tautological vector bundle.
The other important type of a disc is a half-plane $H$ bordered by the line \(S^1_{0\uparrow 1\uparrow \dots\uparrow n \downarrow n-1\downarrow \dots\downarrow 0}\). Its object $ \sfZv{\bullet}(H)$ lies in the monoidal category
\begin{equation}
\label{eq:moncatbr}
\sfZv{\bullet}(S^1_{0\uparrow 1\uparrow \dots\uparrow n \downarrow n-1\downarrow \dots\downarrow 0}) =
\End(\vec{I}_{0\uparrow 1\uparrow\cdots\uparrow n-1\uparrow n}),
\end{equation}
and it depends on the configuration of defects inside $H$.
Denote by $H_1$ the simplest configuration which is the collection of non-intersecting curves connecting the points of the same type as in the right half-plane in Figure~\ref{fg:manyred}.
In this situation $ \sfZv{\bullet}(H_1)$ is the identity object:
\[ \mathsf{Z}^\bullet(H_1)=\mathcal{C}_1\in \textup{MF}_n^{\bullet}.\]
More generally, denote by $H_\beta$ the half-plane containing a braid \(\beta\) as in the left of Figure~\ref{fg:manyred}.
The value of \( \mathsf{Z}^\bullet(H_\beta)\) for more complicated configurations of defects can be computed by
using the monoidal structure of the category~\eqref{eq:moncatbr} through
cutting \(H_\beta\) into the union:
\[ H_\beta=
\bigcup_k S_{\sigma^{\epsilon_k}_{j_k}},\]
where \(\beta=\sigma^{\epsilon_1}_{j_1}\dots \sigma^{\epsilon_l}_{j_l}\) and \(S_{\sigma^{\epsilon_k}_{j_k}}\) is the disc with the boundary \(S^1_{0\uparrow 1\uparrow \dots\uparrow n \downarrow n-1\downarrow \dots\downarrow 0}\)
and the defects inside the strip form an elementary braid on the $j_k$-th and $(j_k+1)$-st strands; see the figure below for the case \(\beta=\sigma_1^3\).
\begin{figure}
\label{fg:manyred}
\includegraphics[width=2in]{Pics5.pdf}
\caption{Decomposing \(\mathbb{R}^2_{\beta}\) into two half-planes}
\end{figure}
Since
\[ \mathsf{Z}^\bullet(H_\beta)= \mathsf{Z}^\bullet(S_{\sigma_{j_1}^{\epsilon_1}})\star\dots\star \mathsf{Z}^\bullet(S_{\sigma_{j_l}^{\epsilon_l}}),
\]
it is enough to define \( \mathsf{Z}^\bullet(S_{\sigma_k^{\pm 1}})\in \textup{MF}_n^\bullet\),
as in \cite{OblomkovRozansky16}:
\[ \mathsf{Z}^\bullet(S_{\sigma_k^{\pm 1}}):=\mathcal{C}_\pm^{(k)}\in \textup{MF}^\bullet_n.\]
It is shown in \cite{OblomkovRozansky16} that the element \(\mathsf{Z}^\bullet(H_\beta)\) depends only on the braid \(\beta\)
and not on the braid presentation; thus our disc assignment is indeed a well-defined partition function of the TQFT.
Finally, we define the value of \(\sfZ^{\tfrm}\) on the half-plane \(H_1^{\mathrm{taut}}\) containing the
unit braid and the tautological point defect as
\[\sfZ^{\tfrm}(H_1^{\mathrm{\mathrm{taut}}}):=\mathcal{C}_1\otimes \Lambda^\bullet\mathcal{B}.\]
We leave the following statement as a conjecture and will provide a proof in the forthcoming publication.
\begin{conjecture}
The above assignments of the values of \( \mathsf{Z}\) are part of the data of a well-defined 3D TQFT.
\end{conjecture}
\subsection{Value on \(\mathbb{R}^2_\beta\)}
\label{sec:value-rr2_beta}
There are two ways to cut the plane with a closed braid defect \(\mathbb{R}^2_\beta\) into two pieces. As a result, the TQFT\ formalism implies two presentations of the corresponding vector space $\sfZ^{\tfrm}(\mathbb{R}^2_\beta)$ as the space of morphisms between two objects in the category of the cutting line.
The first cut splits \(\mathbb{R}^2_\beta\)
in two half-planes \(H_1^{\mathrm{\mathrm{taut}}}\) and \(H_\beta\), and the corresponding presentation is
\[\sfZ^{\tfrm}(\mathbb{R}^2_\beta)=\Hom\bigl(\sfZ^{\tfrm}(H_1^{\mathrm{\mathrm{taut}}}),\sfZ^{\tfrm}(H_\beta)\bigr)= \bigl(\Hom(\mathcal{C}_\beta,\mathcal{C}_1)\otimes\Lambda^\bullet\mathcal{B}\bigr)^{B^2\times G}.\]
The vector space \(\sfZ^{\tfrm}(\mathbb{R}^2_\beta)\) is triply-graded and
the main result of \cite{OblomkovRozansky16} could be restated as
\begin{theorem}
The triply-graded vector space \(\sfZ^{\tfrm}(\mathbb{R}^2_\beta)\) is an isotopy invariant of the closure of the braid \(\beta\) after a special shift of the grading.
\end{theorem}
The second cut (see Figure~\ref{fg:circlcut}) splits $\mathbb{R}^2_\beta$ into the inner disc $D_{\mathrm{taut}}$ (containing the tautological bundle defect) and its complement $D^\infty_\beta$, which contains the closed braid defect. The cut goes over a circle that lies in the region marked by $n$ and does not intersect the defect lines, hence its category is
\begin{equation}
\label{eq:vspco}
\sfZ^{\tfrm}(S^1_n) = \textup{MF}_{ \mathrm{Dr}}^{ \mathrm{f}}\cong\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2))
\end{equation}
of~\eqref{eq:drcat} and~\eqref{eq:drcato}. The object of $D_{\mathrm{taut}}$ is just the defect bundle: $\sfZ^{\tfrm}(D_{\mathrm{taut}})= \Lambda^\bullet\mathcal{B}$. The object of $D^\infty_\beta$ is determined by the categorical Chern character functor
\[\mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}\colon \textup{MF}_n^{ \mathrm{f}}\to\sfZ^{\tfrm}(S^1_n),\]
that is, $\sfZ^{\tfrm}(D^\infty_\beta) = \mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}\bigl(\sfZ^{\tfrm}(H_\beta)\bigr)$. Thus we get the second presentation of the vector space $\sfZ^{\tfrm}(\mathbb{R}^2_\beta)$ as the $\textup{Ext}$ space between two complexes of sheaves within the derived category of 2-periodic sheaves on the Hilbert scheme $ \mathrm{Hilb}_n(\mathbb{C}^2)$:
\begin{equation}
\label{eq:vspct}
\sfZ^{\tfrm}(\mathbb{R}^2_\beta) = \textup{Ext}\bigl( \mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}\bigl(\sfZ^{\tfrm}(H_\beta)\bigr), \Lambda^\bullet\mathcal{B}\bigr).
\end{equation}
The isomorphism between these two presentations of the vector space $\sfZ^{\tfrm}(\mathbb{R}^2_\beta)$ is
one of the key properties of the functor \(\mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}\)
(one may call it a simple case of the categorical Riemann-Roch formula):
\begin{theorem}
For any \(\mathcal{C}\in\textup{MF}_n\) we have:
\[\mathcal{E} xt(\Lambda^\bullet\mathcal{B},\mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}(\mathcal{C}))^G=\mathcal{E} xt(\mathcal{C}_1\otimes\Lambda^\bullet\mathcal{B},\mathcal{C})^{B^2\times G}.\]
\end{theorem}
Thus we have constructed a complex of sheaves \(S_\beta:=\mathsf{CH}^{\mathrm{fs}}_{\mathrm{loc}}(\mathcal{C}_\beta)\) such that its global sections are the knot homology. In the language of TQFT\ the categorical Riemann-Roch formula and our main theorem are just a gluing property of the TQFT: Figures~\ref{fg:manyred} and~\ref{fg:circlcut} present two different ways of computing
the same partition sum \(\sfZ^{\tfrm}(\mathbb{R}^2_\beta)\).
\begin{figure}
\label{fg:circlcut}
\includegraphics[width=2in]{Pics4.pdf}
\caption{Decomposition of \(S^2_\beta\) into two discs.}
\end{figure}
\section{Further directions}
\label{sec:further-directions}
With our TQFT\ formalism we can construct a monoidal functor of triangulated categories:
\[\mathbb{B}\colon \textup{MF}_n^{ \mathrm{f}}\to \mathbb{B}\mathrm{im}_n,\]
where \(\mathbb{B}\mathrm{im}_n\) is the category of bimodules over \(\mathbb{C}[x_1,\dots,x_n]\).
To construct the functor we introduce a special object \(\mathbb{O}^{ \mathrm{f}}_{ \mathrm{pt}}\in\ddot{\textup{Cat}}^{ \mathrm{f}}_{\mathbb{Z}_+\times 0}\):
\[\mathbb{O}^{ \mathrm{f}}_{ \mathrm{pt}}=\{(\mathrm{pt},0)\,|\,n\in \mathbb{Z}_+\},\]
where \(0\) is interpreted as the zero function on \(\GL_n\). The category \(\mathbb{H}\mathrm{om}(\mathbb{O}^{ \mathrm{f}}_{\mathrm{br}},\mathbb{O}_{ \mathrm{pt}}^{ \mathrm{f}})\) is not monoidal but
it is a module category over the monoidal category \(\mathbb{H}\mathrm{om}(\mathbb{O}^{ \mathrm{f}}_{\mathrm{br}},\mathbb{O}^{ \mathrm{f}}_{\mathrm{br}})\). Moreover, the category \(\mathbb{H}\mathrm{om}(\mathbb{O}^{ \mathrm{f}}_{\mathrm{br}},\mathbb{O}_{ \mathrm{pt}}^{ \mathrm{f}})\)
has a more elementary description:
\begin{proposition}
For any \(n\) we have
\[\mathbb{H}\mathrm{om}((\mathrm{T}^*\mathrm{Fl}_n\times V,\mu)^{\mathrm{st}},(\mathsf{p},0))=\mathrm{Mod}(\mathbb{C}[x_1,\dots,x_n]).\]
\end{proposition}
The category \(\mathbb{B}\mathrm{im}_n\) contains a collection of objects \(B_i:=R_n\otimes_{R_n^{s_i}} R_n\); here and everywhere below we use \(R_n\) for
\(\mathbb{C}[x_1,\dots,x_n]\). The compositions of these bimodules are direct sums of Soergel bimodules, and we write \(\mathbb{S}\mathrm{bim}_n\) for the additive
monoidal subcategory that they generate, with \(n!\) indecomposable objects. Let us also recall that in the setting of \cite{OblomkovRozansky16} we defined the element
\(\mathcal{C}_\bullet\in \textup{MF}_2 \) as a Koszul matrix factorization, together with its higher rank version
\[\mathcal{C}_\bullet:=\mathrm{K}^W(Y_1-Y_2),\quad \mathcal{C}_\bullet^{(i)}:=\mathrm{ind}_{i,i+1}(\mathcal{C}_\bullet),\]
where we use the induction functor \(\mathrm{ind}_{i,i+1}:\textup{MF}_2\to\textup{MF}_n\) from \cite{OblomkovRozansky16}. Let us denote by
\(\textup{MF}_{n,\bullet}\subset \textup{MF}^{ \mathrm{f}}_n\) the monoidal subcategory generated by the elements \(\mathcal{C}_\bullet^{(i)}\).
\begin{theorem}\cite{OblomkovRozansky18d} The assignment
\[\mathrm{SB}(\mathcal{C}_\bullet^{(i)})=B_i,\]
extends to the monoidal functor \(\mathrm{SB}: \textup{MF}_{n,\bullet}\to \mathbb{S}\mathrm{bim}_n\) such that
\begin{equation}\label{eq:Hom}\Hom(\mathrm{SB}(\mathcal{C}),\mathrm{SB}(\mathcal{D}))=\Hom(\mathcal{C},\mathcal{D}).\end{equation}
\end{theorem}
We hope that this theorem will provide a guideline for a proof that the knot homology in this paper
coincides with the knot homology defined with the Soergel bimodule technique \cite{KhovanovRozansky08b}.
In most of the paper we imposed the constraint that the labels can only change by \(1\) as we cross a surface
of defect. This condition is motivated by our construction of the homomorphism \(\Phi\) from the introduction
and from \cite{OblomkovRozansky16}. On the other hand, it is expected that there is a non-trivial braid group
action on a category similar to \(\textup{MF}^{\mathrm{st}}\) in which the flag variety \(\mathrm{Fl}\) is replaced with a partial
flag variety. So we expect that the construction in this paper could be extended to a setting with a more general
distribution of the labels, and we hope to relate this theory to the colored HOMFLYPT homology.
Suppose that we are given a domain \(D^n=D\setminus \bigcup_{i=1}^n D_i\), where the \(D_i\) are disjoint discs inside the big disc \(D\).
Our TQFT\ predicts the existence of natural functors \( \mathsf{Z}^\bullet(\bigcup_i \partial D_i)\to \mathsf{Z}^\bullet(\partial D)\).
In the forthcoming paper \cite{OblomkovRozansky18c} we show that the corresponding functors for \(n=1\) are the induction functors
from \cite{OblomkovRozansky16} and the functors for \(n>1\) are related to the Hall algebra structure on \(\rmD^{\tper}( \mathrm{Hilb}_n(\mathbb{C}^2))\).
\input{Lines-3d-a.bbl}
\end{document}
\section{Introduction}
The idea that the sharp slowing down of super-cooled liquids is related to
the growth of a cooperative length scale dates back at least to
Adam and Gibbs \cite{AG}.
But it is only a few years back that this idea has started being substantiated
by convincing experiments \cite{Ediger,Richert,Weeks,Israeloff,Vandenbout},
numerical simulations \cite{Harrowell,Harrowell2,Onuki,parisi,heuer,Glotzer,Glotzer2,hiwatari}
and simple microscopic
models~\cite{SR,Jaeckle,Sollich,nef,FA,GC,WBG,WBG2,BG,KA1,TBF,pan}.
One of the basic problems has been to find an observable that allows one to
define and measure objectively such a cooperative length scale. An interesting
quantity, proposed a few years ago in the context of mean field $p$-spin
glasses \cite{FP} (see \cite{KT} for an important early insight)
and measured in simulations, is a four-point density correlator, defined as
\begin{equation}
G_4(\vec r,t) = \langle \rho(0,0) \rho(0,t)
\rho(\vec r,0)
\rho(\vec r,t) \rangle-\langle \rho(0,0) \rho(0,t) \rangle \langle
\rho(\vec r,0)
\rho(\vec r,t) \rangle,\label{G4}
\end{equation}
where $\rho (\vec r,t)$ represents the density fluctuations at
position $\vec r$ and time $t$. In
practice one has to introduce an overlap function $w$~\cite{FP}
to avoid singularities due to the evaluation of the density at the same
point, or consider slightly different correlation functions \cite{Berthier2}.
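Concretely, one standard choice (used e.g.\ in Refs.~\cite{FP,Glotzer}) is to work with the overlap
\begin{equation}
Q(t)=\frac{1}{N}\sum_{i,j} w\bigl(|\vec r_i(0)-\vec r_j(t)|\bigr),\qquad w(r)=\theta(a-r),
\end{equation}
where the $\vec r_i(t)$ are the particle positions, $N$ is the number of particles and the cutoff $a$ is a fraction of the inter-particle distance; $G_4$ and $\chi_4$ are then built from the fluctuations of this overlap.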
This quantity measures the correlation in space of local time correlation
functions. Intuitively if at point $0$ an event has occurred that
leads to a decorrelation of the local density over the time
scale $t$,
$G_4(\vec r,t)$ measures the probability that a similar event has
occurred a distance $\vec r$ away
within the same time interval $t$ (see e.g.~\cite{mayer}).
Therefore $G_4(\vec r,t)$ is a
candidate to measure heterogeneity and
cooperativity of the dynamics. The best theoretical justification for
studying this quantity is to
realize that the order parameter for the glass transition is already a
two-body object, namely
the density-density correlation
function $C(t)=\langle \rho(0,0) \rho(0,t) \rangle$, which decays to
zero in the liquid phase and
to a constant value in the frozen phase. The four-point correlation
$G_4(\vec r,t)$ therefore
plays the same role as the standard two-point correlation function
for a one-body order parameter
in usual phase transitions. Correspondingly, the associated susceptibility
$\chi_4(t)$ is defined as the volume integral of $G_4(\vec r,t)$,
and is equal to the
variance of the correlation function \cite{FP,Berthier1,BB1}. The
susceptibility $\chi_4(t)$
has been computed numerically for different model glass formers,
and indeed exhibits a maximum
for $t = t^* \sim \tau_\alpha$, the relaxation time of the system
\cite{heuer,Glotzer,Glotzer2,hiwatari}. The peak value
$\chi_4(t^*)$ is seen to increase as the temperature decreases,
indicating that the range of
$G_4(\vec r,t^*)$ increases as the system becomes more sluggish.
The dynamical correlation length $\xi_4 (t^*)$ extracted from $G_4(\vec
r,t^*)$ in molecular dynamics simulations grows and becomes
of the order of roughly $10$ inter-particle distances when the
time-scale is of the order of $10^{5}$ microscopic time-scales $\tau_0$
with $\tau_0 \sim 0.1$~ps for an atomic liquid.
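For completeness, we recall that in a simulation $\chi_4(t)$ is most easily obtained from the global fluctuations of the overlap, schematically
\begin{equation}
\chi_4(t)=\int d^d r \, G_4(\vec r,t)= N\left[\langle Q^2(t)\rangle - \langle Q(t)\rangle^2\right].
\end{equation}
The following minimal sketch (with illustrative variable names and a simple self-overlap definition; it is not taken from any specific study) shows how this estimator can be implemented:
\begin{verbatim}
# Minimal sketch of a chi_4(t) estimator based on the variance of the
# instantaneous (self-)overlap Q(t).  The arrays pos0 and post have
# shape (n_samples, N, d): positions of N particles in d dimensions for
# n_samples independent samples, at times 0 and t.  The cutoff `a` is a
# fraction of the inter-particle distance.  All names are illustrative.
import numpy as np

def chi4(pos0, post, a=0.3):
    displ = np.linalg.norm(post - pos0, axis=2)  # (n_samples, N)
    Q = (displ < a).mean(axis=1)                 # overlap of each sample
    N = pos0.shape[1]
    return N * Q.var()                           # N [<Q^2> - <Q>^2]
\end{verbatim}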
In experiments close to the glass transition the dynamical correlation length
has been found to be only slightly larger, between $10$ and $20$
inter-particle distances \cite{Ediger,Richert}. This is puzzling because
experiments are done on systems with relaxation times
that are several orders of magnitude larger than in simulations.
In fact, extrapolating simulation results in the experimental regime
would lead to much larger dynamical correlation lengths.
The origin of this puzzle is still unclear, see Ref.~\cite{nef} for a recent discussion.
Experiments on dynamical heterogeneity bridging the gap
between numerical and macroscopic time-scales would be extremely valuable to resolve this paradox.
Several scenarii have been proposed to understand the existence of
non-trivial dynamical correlations, and their relation with thermodynamical
singularities. Adam and Gibbs \cite{AG}, Kirkpatrick, Thirumalai, Wolynes and collaborators~\cite{KTW}
(for a different formulation, see Ref.~\cite{BB2}), and
Kivelson and Tarjus~\cite{Tarjus} have proposed, using somewhat different arguments, the idea of collectively rearranging
regions (CRR) of size $\xi$ that increases as the temperature is decreased. The evolution of the system is such
that these regions are either frozen or allowed to temporarily and collectively unjam for a short time
until a new jammed configuration is found.
In apparent contradiction with the existence of the growing length scale, the
Mode Coupling Theory (MCT) of glasses states that the self consistent
freezing of particles in their cages is a purely local
process with no diverging length scale at the transition
\cite{Gotze}. However, this point of view is in disagreement with the results found
for mean-field disordered systems \cite{KT,FP} that are conjectured to
provide a mean-field description of the glass transition and
display an MCT-like dynamical transition. Indeed it was recently
shown that within MCT
$G_4(\vec r,t)$ in fact develops long range correlations close to the
critical MCT temperature $T_c$ \cite{BB1}. Within a phase-space
interpretation of the MCT transition, the mechanism
for this cooperative behaviour for $T > T_c$ is the progressive
rarefaction of energy lowering directions~\cite{Grigera}.
Within a real-space interpretation, the MCT transition is due to the formation of a large number of
metastable states, each one characterized by a surface
tension that increases from zero at $T_c$. As one approaches $T_c$ from above,
the relevant eigenvectors of the dynamical Hessian become more and more extended, which
means that
the modes of motion that allow the system to decorrelate are made of very well defined,
collective rearrangements of larger and larger clusters of particles (see the recent work
of Montanari and Semerjian \cite{MS}).
For smaller temperatures, $T < T_c$, `activated events'
are expected to play a crucial role. They
are believed to be responsible for the destruction of the freezing
transition at $T_c$. This regime has been tentatively described by
adding `hopping terms' in the MCT equations \cite{Gotze} or
within a CRR scenario \cite{KTW,BB2}.
Exploiting yet a different set of ideas, models of dynamical facilitation,
such as the Fredrickson-Andersen~\cite{FA} or
Kob-Andersen models~\cite{KA1},
have recently been proposed as paradigms for glassy
dynamics~\cite{SR,GC,BG}.
In these models,
the motion of particles is triggered by `mobility defects' that
diffuse and possibly interact within the system. As the
temperature is lowered or the density is increased, the concentration of
defects goes down, and the relaxation
time of the system increases. Dynamics is obviously heterogeneous
since it is catalyzed by defects that cannot be everywhere simultaneously.
The characteristic length scale in this case is
related to the average distance between defects to some model and
dimension dependent exponent~\cite{SR,GC,BG,TBF,nef}.
Understanding the mechanism behind the growth of the dynamical
correlation length is certainly an important step---arguably the most
important one---to understand the cause of the slowing down
of the dynamics.
Furthermore, the different scenarii for the glass transition can be tested by
contrasting their quantitative predictions for the four-point
correlation function $G_4(\vec
r,t)$ with the numerical, and hopefully soon experimental, results.
Following these premises we investigate in this paper
the analytical shape of $G_4(\vec r,t)$ for several simple models.
We show that $G_4(\vec r,t)$ indeed contains some important
information concerning the basic relaxation mechanisms. However we show
that, perhaps disappointingly,
models where cooperativity is absent or trivial lead to four-point
correlation functions and dynamical
susceptibilities $\chi_4$ that exhibit non trivial features. Other,
more complex observables will have to be defined to
really grasp the nature of the collective motions involved in the
relaxation process of glasses~\cite{HeuerStrings,Harrowell2}.
Let us summarize the main results of our study, in terms of the
susceptibility $\chi_4(t)$ and time-sectors. In a super-cooled liquid
there are separate regimes of time-scales corresponding to different physical
behaviour (see Fig. 1). On microscopic time-scales
particles move ballistically if the dynamics is Newtonian, or
diffusively if the dynamics is Brownian. On a longer time-scale, interactions start playing
a role, which can be described approximately using elasticity theory, before a truly
collective phenomenon sets in. This non trivial glassy regime is the $\beta$-regime,
within which correlation
functions, as for example the dynamical structure factor, develop a
plateau. The $\beta$-regime is divided further
into an early and a late $\beta$-regime corresponding, respectively, to the
approach and the departure from the plateau of the correlation function.
Finally the structural relaxation
time-scale on which correlation functions decay to zero is
the $\alpha$-regime. All previous studies have focused on the behaviour of $\chi_4(t)$ at
times of the order of $\tau_{\alpha }$ which correspond to the peak of $\chi_4(t)$.
We show that $\chi_4(t)$ has in fact a rich structure in time and
different behaviour in different time-sectors. In many of these regimes, $\chi_4(t)$
behaves as a power-law of time, $t^\mu$, with different values of $\mu$.
During the ballistic time-scale one finds $\mu=4$
($\mu={2}$ for Brownian dynamics) whereas during the elastic regime (most relevant deep in the
glass phase), the exponent becomes $\mu=1$ for ballistic phonons and
$\mu=1/2$ for diffusive phonons.
The behaviour in the $\beta$ and $\alpha$ regimes is intimately
related to the physical mechanism for relaxation and indeed we find
quite different answers depending on which scenario we focus on.
MCT predicts exponents $\mu={a}$ and $\mu={b}$ on time-scales
corresponding respectively to the early and late $\beta$ regimes,
where $a$ and $b$ are the standard MCT exponents obtained from the study of the
dynamical structure factor. The power-law $t^{b}$ extends until the
peak in $\chi_4(t)$ is reached.
The other scenarii only make predictions in the $\alpha$ regime.
In the case of CRR one has $\chi_4 \sim t$ or $\chi_4 \sim (\ln t)^{(d+1)/\psi }$ before the peak,
depending on whether one assumes that the relaxation occurs via bulk nucleation events
or via domain wall fluctuations, see below. For diffusing
defects in dimension $d=3$, the exponent is $\mu={2}$. If defects have a non trivial
diffusion exponent $z$, such that their displacement
at time $t$ scales as $t^{1/z}$, then $\mu=2d/z$ for $d < z$ and
$\mu=2$ otherwise. The overall behaviour of $\chi_4(t)$ is summarized by Fig.~\ref{mctfig}, which specializes to
the MCT predictions for simplicity.
\begin{figure}
\psfig{file=chi4.eps,width=9.5cm}
\caption{\label{mctfig}
Sketch of the time behaviour of $\chi_4(t)$, with all the different time regimes,
within the MCT description, which we find to work well around $T_c$. As the
temperature is lowered, we expect the elastic regime to extend up to $\tau_\alpha$.}
\end{figure}
Another important feature of $\chi_4$ is the growth of the
peak compared to the growth of the time $t=t^* \sim \tau_\alpha$ at
which the peak takes place~\cite{WBG}. This is found to scale as
$\chi_4(t^*) \sim t^{*\lambda}$,
with $\lambda=0$ (logarithm) for CRR, $\lambda=1$ for freely diffusing
defects, $\lambda=d/z$ for anomalously diffusing
defects for $d < z$ and $\lambda=1$ again for $d > z$.
Note that if the defect diffusion coefficient itself scales with
$t^*$ as $1/t^{*f}$, as for example in the one-spin
facilitated FA model, there is an extra contribution that gives $\lambda=1-f$ for $d > z$.
Finally, one has $\lambda=1/\gamma$ in the context of MCT, where
$\gamma$ describes the power-law divergence of the relaxation time as
the critical MCT temperature is approached.
We have checked these predictions in two model systems of glass-forming
liquids:
a Lennard-Jones and a soft-sphere mixture. Concerning the behaviour of
$\chi_4(t)$ in the late $\beta$ and $\alpha$
regime, the most interesting time-sectors, we have found reasonable agreement with
the MCT predictions for four point correlators. This agreement is by no means trivial
and is actually quite unexpected unless MCT indeed captures some of the physics of the problem.
Models of diffusing defects, instead, do not describe
the numerical results well unless one assumes
anomalously diffusing defects with $z$ substantially smaller than $2$.
This is perhaps not very surprising since we are focusing on two fragile
liquids (at least in the numerical time window) at temperatures
well above the experimental glass transition. It might be
that the predictions of these models work only on larger time-scales.
In any case, we expect that for strong liquids displaying
an Arrhenius behaviour the predictions for $\chi_4(t)$ obtained by studying models of simple diffusing
defects should hold quantitatively, since
it is indeed quite well established that relaxation in strong liquids
is triggered by the diffusion of connectivity defects~\cite{Angell,OldMDpapersSIO2}.
Finally, the CRR picture does not agree quantitatively with our present numerical data.
However, this picture is supposed to describe the liquid dynamics precisely in the low temperature/
long time regime that is presently beyond numerical capabilities.
Again, experimental results probing the
behaviour of $\chi_4(t)$ in this regime would be highly valuable to put strong constraints on the
different theoretical scenarii of glass formation.
The organization of the paper is as follows. In
Section~\ref{Short-time behaviour} we discuss the behaviour
of $\chi_4(t)$ on microscopic time-scales. Then, we analyze the predictions of
elasticity theory in Section~\ref{Elastic}. In Sections~\ref{MCT} and \ref{CRR}
we focus on the behaviour of $\chi_4(t)$ in the $\beta$ and $\alpha$
regimes for MCT and CRR. In Section~\ref{defect} we discuss the
predictions of defect models analytically using an independent
defect approximation and by numerical simulations of
kinetically constrained models.
In Section \ref{numerics} we compare the different predictions to the
results of numerical simulations of models of glass-forming liquids.
We present our conclusions in Section \ref{conclusion}.
\section{Microscopic dynamics}
\label{Short-time behaviour}
On very short time-scales the behaviour of $\chi_4$ can be computed exactly.
For simplicity, we characterize the dynamics through the
self-intermediate scattering function,
\begin{equation}
F_s(k,t) = \frac{1}{N} \sum_i \left\langle
\cos { \vec{k} \cdot [\vec{r}_i(t) - \vec{r}_i(0) ]} \right\rangle ,
\end{equation}
and define the dynamic susceptibility as the variance
of the fluctuations of $F_{s} (k,t)$:
\begin{equation}
\chi_4(t)=N\left[ \left\langle \left( \frac{1}{N} \sum_i
\cos { \vec{k} \cdot [\vec{r}_i(t) - \vec{r}_i(0) ]}\right)^2
\right\rangle-\left\langle \frac{1}{N} \sum_i
\cos { \vec{k} \cdot [\vec{r}_i(t) - \vec{r}_i(0) ]}
\right\rangle^{2}\right].
\end{equation}
The full intermediate four point scattering function defined in Eq.~(\ref{G4})
in fact contains very similar information, even for interacting systems -- as shown
by numerical simulations \cite{Glotzer,Glotzer2}.
On a very short time-scale particles move ballistically if the dynamics
is Newtonian, $\vec{r}_i(t) - \vec{r}_i(0)=\vec{v}_{i}t+O(t^{2})$,
where $\vec{v}_{i}$ is
the velocity of particle $i$ at time zero. Since the system is in
equilibrium all the $\vec{v}_{i}$'s are independent Gaussian variables
with variance $\langle \vec{v}_{i}\cdot \vec{v}_{j} \rangle
=\delta_{ij}3k_{B}T/m$,
where $T$ is
the temperature, $m$ the mass of the particles,
and $k_{B}$ the Boltzmann
constant. Using this property it is straightforward to obtain
\begin{equation}
F_s(k,t) = \exp \left(-\vec{k}^{2}\frac{k_{B}T}{2m}t^{2} \right)
\end{equation}
and
\begin{equation} \label{e1}
\chi_4 (t)=
F_s(k,t)^{2}\left[\cosh \left(\vec{k}^{2}\frac{k_{B}T}{m}t^{2} \right)-
1\right].
\end{equation}
For an interacting particle system this is only valid on
short time scales, for example smaller than the
collision time for short ranged interactions.
This leads to an initial power-law increase that reads
\begin{equation}\label{e2}
\chi_4 (t)=
\frac{1}{2} (\vec{k}^{2})^{2}\left(\frac{k_{B}T}{m}\right)^{2}t^{4}+O (t^{6}).
\end{equation}
Note that if one had chosen Langevin dynamics (i.e.
$\partial_{t}\vec{r}_{i}=-\partial_{\vec{r}_i}H+\vec{\eta}_{i}$) instead
of Newtonian dynamics, Eqs. (\ref{e1},\ref{e2}) would have been
identical except for the replacement of
$k_{B} T t^{2}/m$ by $2k_{B}Tt$ (for a unit friction coefficient), again for small times. Thus changing from Newtonian
to Langevin dynamics, the initial power-law increase of $\chi_4 (t)$
changes from $t^{4}$ to $t^{2}$.
This is similar to the change in the mean square
displacement that increases as $t^{2}$ and $t$, respectively, for
Newtonian and Langevin dynamics.
In the above example, however, it is clear that the increase of $\chi_4$ with time has nothing to do with
the increase of a correlation length, since particles are assumed to be independent. In other words, the
four-point correlation $G_4(\vec r,t)$ has a trivial $\delta$-function spatial dependence, but the height of
the $\delta$ peak increases with time. As will be discussed later in the paper, it is important to normalize
$\chi_4(t)$ by the value of $G_4(\vec r=0,t)$ to conclude from the four-point susceptibility that a length scale is
indeed growing in the system.
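As an illustration, the free-particle result above can be checked directly
by sampling equilibrium velocities. The following minimal Python sketch (our
own numerical check; all parameter values are purely illustrative) compares
the Monte Carlo estimate of $\chi_4$ with Eq.~(\ref{e1}):
\begin{verbatim}
import numpy as np

# Ballistic, non-interacting particles: r_i(t) - r_i(0) = v_i t with
# Maxwell-distributed velocities. Illustrative parameter values only.
kBT_over_m = 1.0      # k_B T / m
k = 7.0               # wavevector modulus
N = 100               # particles per sample
samples = 20000       # independent equilibrium samples
rng = np.random.default_rng(0)

for t in [0.01, 0.02, 0.05, 0.1]:
    # one velocity component suffices: k.(v t) = k v_x t for k along x
    vx = rng.normal(0.0, np.sqrt(kBT_over_m), size=(samples, N))
    f = np.cos(k * vx * t).mean(axis=1)          # instantaneous F_s estimator
    chi4_mc = N * f.var()                        # N [ <f^2> - <f>^2 ]
    s2 = k**2 * kBT_over_m * t**2                # variance of k.(v t)
    chi4_th = np.exp(-s2) * (np.cosh(s2) - 1.0)  # Eq. (e1)
    print(f"t={t:5.2f}  MC={chi4_mc:.3e}  theory={chi4_th:.3e}")
\end{verbatim}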
\section{Elastic contribution}
\label{Elastic}
For longer time-scales the interaction between particles starts playing a r\^ole. Generically
one expects that in the time regime where the displacements of particles remain small, an elastic description
should be valid. In a solid, or in a glass deep below $T_g$, there are no further relaxation channels and the
elastic contribution to $\chi_4$ should be the only relevant one. In a super-cooled liquid around the Mode-Coupling temperature $T_c$, the elastic regime is interrupted by
the collective $\beta$ regime, where in some sense phonon-phonon
interactions completely change the physical picture. Although we expect such
a crossover, we have at present no detailed theoretical description of it.
In the following we analyze again the behaviour of the four point self-intermediate scattering function
assuming that the dynamical behaviour of the liquid can be described, within a restricted time-sector,
as an elastic network (we will discuss later how to include, in a phenomenological way, viscous flow).
Perhaps surprisingly, we find a non trivial structure for $G_4$ in this model, with an ever growing
`cooperative' length scale which comes from the dynamics of phonons, that represent the simplest form
of cooperativity.
We consider an isotropic solid immersed in a viscous thermal bath. The energy of the system is given by:
\begin{equation}
H=\int d^dr\, \left\{ \frac{1}{2} \kappa_1 \Big[\sum_i u_{ii}\Big]^2 + \kappa_2 \sum_{i,j}u_{ij}^2 \right\}
\end{equation}
where $\kappa_1,\kappa_2$ are the Lam{\'e} coefficients,
$u_{ij}=\frac{1}{2}\left[\frac{\partial \phi_i}{\partial x_j}+\frac{\partial \phi_j}{\partial x_i}\right]$ is the deformation tensor and
$\vec{\phi}$ the displacement field from an undeformed reference state. Note that $\vec{\phi}(x)$ is
simply the continuum limit of the displacement of each particle with respect to its equilibrium (bottom of the well)
position.
As is well known, the above
energy leads to three independent phonon modes (one longitudinal and two transverse modes). For simplicity, we
only consider one deformation mode and write the Hamiltonian in Fourier space as:
\begin{equation}
H= \frac12 \kappa \int \frac{d^dk}{(2\pi)^d} k^2 \phi_k \phi_{-k},
\end{equation}
where $\kappa$ is an effective elasticity modulus. The mode $k$ has an energy
$E_k= \kappa k^2 \, \phi_k\phi_{-k}/2$ and therefore we expect, in equilibrium, $\langle \phi_k\phi_{-k}\rangle
= T/(\kappa k^2)$, where the Boltzmann constant has been set to unity. Our goal is to calculate the dynamical
correlation functions of the system. We describe the dynamics by a Langevin equation with a local noise:
\begin{equation}
m \frac{\partial^2 \phi(\vec r,t)}{\partial t^2} + \nu \frac{\partial \phi(\vec r,t)}{\partial t}=
\kappa \Delta \phi(\vec r,t)+ \zeta(\vec r,t),
\end{equation}
where $\zeta(\vec r,t)$ is a Gaussian noise uncorrelated in space and time, of variance equal to $2\nu T$.
Taking the Fourier transform:
\begin{equation}
m \frac{\partial^2 \phi_k}{\partial t^2} + \nu \frac{\partial \phi_k}{\partial t}= -\kappa k^2 \phi_k + \zeta_k(t),
\end{equation}
where $\zeta_k(t)$ is again a Gaussian noise, uncorrelated for different $k$'s and times.
In this section, we only consider in detail the over-damped case $m=0$ and set $D=\kappa/\nu$, but also give at
the end the result for the purely propagative case $\nu=0$ (see also Appendix A). One easily deduces the non
equal time correlation in the over-damped case:
\begin{equation}
\langle \phi_k(t)\phi_{-k}(0) \rangle=\frac{T}{\kappa k^2} e^{-D k^2 t}.
\end{equation}
Let us now define the function:
\begin{equation}
F^{(q)}(r,t)=\sum_i \delta(r-r_i(0))\cos (q[r_i(t)-r_i(0)]),
\end{equation}
whose average equals the self-intermediate scattering function up to a
constant (the particle density).
Using the microscopic definition of $\vec{\phi}$ we obtain that
\begin{equation}
C(q,t)=\langle F^{(q)}(r,t)\rangle\simeq \langle e^{iq[\phi(\vec r,t)-\phi(\vec r,0)]}\rangle =
e^{-q^2\langle [\phi(\vec r,t)-\phi(\vec r,0)]^2\rangle/2},
\end{equation}
where the last equality comes from the Gaussian nature of the deformation field. Using the above results on the
correlation of the Fourier modes, we find:
\begin{equation}
\langle [\phi(\vec r,t)-\phi(\vec r,0)]^2 \rangle= \frac{2T}{\kappa}
\int\frac{1-e^{-D k^2 t}}{k^2}\, \frac{d^dk}{(2\pi)^d}
\end{equation}
As is well known, this integral behaves differently for $d \le 2$ and for $d > 2$, reflecting the fact that
phonons destroy translational order in low dimensions. As above, we will only consider here the physical
case $d=3$, relegating the discussion of the other cases to Appendix A. For $d=3$, we need to introduce an
ultraviolet cutoff $\Lambda$ on the wavevector $k$, which is the inverse of the underlying lattice spacing $a$.
Then, the above integral goes to a constant $\propto \Lambda$ at large times, reflecting the fact that
particles are localized in their `cage'. Therefore, the self-intermediate scattering function
$C(q,t)$ decays at small times $\Lambda^2 Dt \ll 1$ before saturating to a `plateau' value given by:
\begin{equation}
f_q \equiv C(q, t \to \infty) = \exp\left(-c \frac{T \Lambda q^2}{\kappa}\right),
\end{equation}
where $c$ is a numerical constant. [Note that $T \Lambda q^2/{\kappa}$ has no dimension, and is expected, from a
Lindemann criterion, to be of the order of $0.05$
at half the melting temperature and for $q = \Lambda$].
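The saturation of this integral can be made concrete numerically. The
following short Python sketch (illustrative parameter values of our own
choosing) evaluates the $d=3$ integral with an ultraviolet cutoff and shows
the approach to the plateau:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# d=3 phonon mean-square displacement with UV cutoff Lambda:
#   <[phi(t)-phi(0)]^2> = (T/(pi^2 kappa)) int_0^Lambda dk (1 - exp(-D k^2 t)),
# the angular integration having been done analytically. Illustrative values.
T, kappa, D, Lam = 1.0, 50.0, 1.0, np.pi

def msd(t):
    val, _ = quad(lambda q: 1.0 - np.exp(-D * q * q * t), 0.0, Lam)
    return T / (np.pi**2 * kappa) * val

plateau = T * Lam / (np.pi**2 * kappa)
for t in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"t={t:7.2f}  MSD={msd(t):.5f}  (plateau={plateau:.5f})")
\end{verbatim}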
In real glass-forming liquids, this plateau phase does not persist forever,
and $C(q,t)$ finally decays to zero beyond $t=\tau_\alpha$, in the so-called $\alpha$-relaxation regime.
A modification of the model to account for this decorrelation will be discussed later. Furthermore, the above
pseudo-$\beta$ regime predicted by elastic theory does not explain
quantitatively the $\beta$-regime in
super-cooled fragile liquids, except probably on relatively short time scales,
say up to a few picoseconds. On the other hand, at temperatures below $T_g$ or
for strong glasses, we expect that the elastic regime will extend up to
$\tau_\alpha$ and compete with other mechanisms, such as the defect mediated
correlation discussed in Section~\ref{defect} below.
The calculation of $G_4^{(q)}(\vec r,t)=\langle F^{(q)}(r',t)F^{(q)}(r'+r,t)\rangle _c $ is detailed in Appendix A.
One immediately sees that $G_4^{(q)}(\vec r,t)$ is governed
by a diffusive correlation length $\xi(t) \sim \sqrt{D t}$ with $D=\kappa/\nu$, as
expected from the structure of the Langevin equation that describes
relaxational dynamics. Clearly, in the case of propagative phonons, one finds $\xi(t) \sim Vt$ with $V^2=\kappa/m$.
The final result, see Appendix A, is:
\begin{equation} \label{eqel}
G_4^{(q)}(\vec r,t)=C^2(q,t)\left(\cosh(2q^2 R(\vec r,t))-1\right),
\end{equation}
where
\begin{equation}
R(\vec r,t)=\frac{T}{\kappa} (Dt)^{1-d/2} F\left(\frac{r}{\sqrt{D t}}\right)
\end{equation}
and we find (see Appendix A) $F(z) \simeq (4\pi z)^{-1}$ for $z \ll
1$ and $F(z) \simeq (2 \pi^{3/2})^{-1} \exp(-z^2/4)/z^2$ for $z \gg 1$.
Note the similarity between the expression in (\ref{eqel}) and the
corresponding one, (\ref{e1}), derived in the previous section. One can
check that the short time behaviour is indeed the one
derived before in the case of Langevin dynamics for the particles, as expected.
Let us now focus on long-times, but still within the elastic regime: $\Lambda^2 Dt \gg 1$,
and for $r \ll \xi(t)$:
\begin{equation} \label{eqg4el}
G_4^{(q)}(\vec r,t)=f_q^2 \left(\cosh(\frac{T q^2}{2\pi \kappa r})-1\right).
\end{equation}
Suppose for simplicity that we are in a regime where the argument of the $\cosh$ is always small, corresponding
to the limit $T q^2 \Lambda \ll \kappa$ (remember that by definition $r > a = 2\pi/\Lambda$, where $a$ is the
inter-atomic distance). Then, $G_4(\vec r,t) \sim r^{-2}$
for $\Lambda^{-1} \ll r \ll \xi(t)$. For larger scales, $r \gg \xi(t)$, $G_4$ decays as a Gaussian, i.e.
super-exponentially fast. Note that the small $r$ behaviour of $G_4(\vec r,t)$ is not of the Ornstein-Zernike form
($1/r$ in $d=3$). Integrating $G_4$ over $\vec r$ we find the dynamical susceptibility,
\begin{equation}\label{chi4elas}
\chi_4^{(q)}(t) \sim \frac{T^2 q^4 f_q^2}{\kappa^2} \xi(t).
\end{equation}
This result is actually valid both in the diffusive limit where $\xi(t) = \sqrt{Dt}$ and in the
propagative regime where $\xi(t)=Vt$. Therefore $\chi_4^{(q)}(t)$ increases either as $\sqrt{t}$
or as $t$ (note that in the limit of small times one recovers the $t^4$ or $t^2$ laws obtained in
the previous section). In the general case, one expects a crossover between a propagative regime at small times
$t < m/\nu = D/V^2$ (of the order of ps in glass formers, see \cite{pico}) and
a diffusive regime for longer time scales. Thus, looking at $\chi_4^{(q)}(t)$ as a function of time in a
log-log plot one should see first a straight
line corresponding to the ballistic or diffusive motion
leading respectively to slope $\mu=4$ or $\mu=2$, bending over towards a smaller
slope ($1$ or $1/2$, or both depending on the strength of the dissipation). The order of magnitude of
$\chi_4^{(q)}(t)$, as given by Eq.~(\ref{chi4elas}), can be estimated to be $\sim 10^{-3}-10^{-2}
a^2 \xi(t)$ for
$q= \Lambda$. In the propagative regime with $t = 1$ ps,
$V=3\times 10^3$ m/s, $a = 0.3$ nm, one finds
$\xi = 10 a$ and $\chi_4^{(q)} \sim 10^{-2}-10^{-1} a^3$, i.e. a small, but perhaps detectable
signal from the phonons. Only on much larger time scales will the
elastic contribution be significant, a regime that can be reached deep in the glass phase
\cite{foot}.
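For definiteness, the last estimate is a one-line computation (using the same
assumed values as in the text):
\begin{verbatim}
# Order-of-magnitude estimate in the propagative regime,
# using the values assumed in the text.
V, t, a = 3e3, 1e-12, 0.3e-9          # m/s, s, m
xi = V * t                            # phonon range after 1 ps
print(xi / a)                         # -> 10, i.e. xi = 10 a
print(1e-3 * xi / a, 1e-2 * xi / a)   # chi4 / a^3 between 1e-2 and 1e-1
\end{verbatim}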
As mentioned above,
other collective modes come into play in super-cooled fragile liquids,
in particular around the mode-coupling temperature,
and give rise to the $\beta$-regime where `cages' themselves
become more complex, extended objects \cite{BB1}.
The above calculation shows that in an elastic solid with diffusive or propagative phonon modes,
the dynamical susceptibility increases without bound, reflecting the presence of Goldstone soft-modes in the system.
Of course, in a real glass the correlation function $C(q,t)$
eventually decays to zero beyond the $\alpha$ relaxation time $\tau_\alpha$, as particles start
diffusing out of their cages, far away from their initial position. If phonons were the only
relevant excitations, this would cause the dynamical susceptibility to peak around $t=t^*=\tau_\alpha$.
A phenomenological model that describes the decay of $\chi_4^{(q)}(t)$ within the above
elastic framework is to assume a (Maxwell) viscoelastic local modulus:
\begin{equation}
\nu \frac{\partial \phi(\vec r,t)}{\partial t}=
\kappa \left[\int_{-\infty}^t dt'
e^{-\gamma(t-t')} \Delta \frac{\partial\phi(\vec r,t')}{\partial t'}\right]+
\zeta(\vec r,t),
\end{equation}
with $\gamma \sim \tau_\alpha^{-1}$, corresponding to a frequency dependent elastic modulus
$\kappa(\omega)=i\kappa \omega/(i \omega + \gamma)$.
In this model, the dynamics of $\phi$ becomes diffusive at times $> \gamma^{-1}$, and the dynamic structure
factor therefore decays exponentially beyond that time. Of course, the model itself becomes inconsistent
at large times, since the underlying lattice needed to define the deformation field $\phi$ has by then totally
melted.
The conclusion of this section, however, is that since super-cooled liquids behave at high frequencies
($\omega \gg \gamma, \tau_\alpha^{-1}$)
like solids, the four-point correlation and dynamical susceptibility are expected to reveal, in a certain
time domain, a non trivial behaviour unrelated to the structure of the `collective processes' discussed below
(MCT, diffusive defects, CRR's) that one usually envisions to explain glassy dynamics.
\section{Mode Coupling Theory}\label {MCT}
As mentioned in the introduction, the mode-coupling theory of supercooled
liquids predicts the growth of a cooperative length
as the temperature is decreased or the density
increased \cite{KT,FP,BB1}, and makes detailed predictions on the shape
of $\chi_4(t)$. The four-point correlation function becomes critical near the mode-coupling transition temperature
$T_c$. The behaviour of the susceptibility $\chi_4(t)$ is encoded in ladder diagrams \cite{KT,BB1}.
From the analytical and numerical results of \cite{BB1}, and analyzing
the ladder diagrams \cite{BB1,BBB}, we have found that in the $\beta$ regime:
\begin{equation}
\chi_4(t)\sim f_1(t\epsilon ^{1/(2a)})/\sqrt{\epsilon}\qquad t \sim
\tau_{\beta}
\end{equation}
and in the $\alpha$ regime
\begin{equation}
\chi_4(t)\sim f_2(t\epsilon ^{1/(2a)+1/(2b)})/\epsilon \qquad t \sim
\tau_{\alpha}
\end{equation}
where $\epsilon=T-T_c$,
$a$, $b$ and $\gamma=1/(2a)+1/(2b)$ are the MCT exponents for the dynamical
structure factor, and $f_1(x)$ and $f_2(x)$
are two scaling functions. Requiring that the dependence on $\epsilon$ drops out when
$t\epsilon ^{1/(2a)} \ll 1$, one finds that
$f_1(x)\sim x^a$ when $x \ll 1$. This leads to a power-law behaviour,
$\chi_4 \sim t^a$, in the early $\beta$ regime, i.e. when the intermediate scattering
function approaches a plateau. In the same way, matching the behaviour of $f_1$ when $x \gg 1$ to the
one of $f_2$ when $x \ll 1$ one finds another power-law behaviour, $\chi_4 \sim t^b$,
on timescales between the departure from the plateau and the peak of $\chi_4$. We give in
Fig.~\ref{mctfig} a schematic summary of the shape of $\chi_4(t)$ within the MCT description of
supercooled liquids.
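For clarity, the matching steps can be spelled out explicitly (this is just a
rewriting of the argument above). In the early $\beta$ regime,
\begin{equation}
\chi_4(t)\sim \frac{f_1\big(t\,\epsilon^{1/(2a)}\big)}{\sqrt{\epsilon}}
\sim \frac{\big(t\,\epsilon^{1/(2a)}\big)^{a}}{\sqrt{\epsilon}}=t^{a}
\qquad \big(t\,\epsilon^{1/(2a)}\ll 1\big),
\end{equation}
while matching $f_1(x\gg 1)\sim x^{b}$ to $f_2(y\ll 1)\sim y^{b}$ gives
\begin{equation}
\chi_4(t)\sim \frac{\big(t\,\epsilon^{1/(2a)}\big)^{b}}{\sqrt{\epsilon}}
= t^{b}\,\epsilon^{\,b/(2a)-1/2}.
\end{equation}
One can check that at $t^*\sim\epsilon^{-\gamma}$, with
$\gamma=1/(2a)+1/(2b)$, the last expression recovers the peak height
$\chi_4(t^*)\sim\epsilon^{-1}$ quoted below.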
Finally, as discussed in \cite{BB1}, at times $t =t^* \sim \tau_\alpha$, $\chi_4$
reaches a maximum of height $(T-T_c)^{-1}$. Using the relation $\tau_\alpha \sim (T-T_c)^{-\gamma}$,
valid within MCT, one finally finds $\chi_4(t^*) \sim t^{*1/\gamma}$.
\section{Collectively Rearranging Regions}
\label{CRR}
Under the term CRR, we gather many similar scenarii that differ in their
details, as discussed in the introduction \cite{AG,KTW,BB2,Tarjus}. Within the frustration-limited
domains scenario of Ref.~\cite{Tarjus} it seems natural
to envision the dynamics as the activated motion of
domains pinned by self-generated disorder. In the case of the random
first-order theory of Refs.~\cite{KTW,BB2}, the details of the decorrelation mechanism
are not entirely clear. There should be, on the one hand, activated fluctuations of domain walls
between different states, again pinned by self-generated disorder. However, the fluctuations leading
to a change of state may be the nucleation of a completely different state starting from the bulk.
The latter process can be modeled as a nearly instantaneous event with a certain (small) nucleation rate.
In the following we shall analyze separately these two types of fluctuations and their consequences on the
shape of $\chi_4(t)$.
\subsection{Instantaneous events}
Suppose that the dynamics is made of nearly instantaneous events that
decorrelate the system in a compact `blob' of radius
$\xi_0$. The probability per unit time and volume
for such an event to appear around site $\vec r$ is $\Gamma$.
We compute the four-body correlation of the persistence, $n_r(t)$,
defined to be equal to one if no event
happened at $\vec r$ between times 0 and $t$, and equal to zero
otherwise. The four-body correlation is then defined as
\begin{equation} \label{G4def}
G_4(\vec r,t)=\langle n_r(t) n_0(t) \rangle - \langle n_r(t) \rangle^2.
\end{equation}
Clearly, the
averaged correlation function, $C(t)=\langle n_r(t) \rangle$, is simply
given by $C(t)=\exp(-\Omega \Gamma
\xi_0^d t)$ where $\Omega$ is the volume of the unit sphere.
For $G_4(\vec r,t)$
to be non zero, the same event must
affect both $\vec r$ and $0$, leading to
\begin{equation}
G_4(\vec r,t)= C^2(t) \left[\exp\left(\Gamma t \xi_0^d f(r/\xi_0)\right) -
1\right],
\end{equation}
where $f(x)$ is the volume of the intersection between two spheres of unit
radius with centers at distance $x$ apart.
Clearly, $f(x>2)=0$. Therefore, $G_4(\vec r,t)$ is non zero only if $r <2
\xi_0$, and is in fact roughly
constant there. For a given $r$ satisfying this bound, $G_4$ first grows
linearly with time, reaches a maximum
for $t = t^* \approx \Gamma^{-1} \xi_0^{-d}$ and decays
exponentially beyond that time. The same behaviour is found for $\chi_4(t)$,
that grows initially as $t^\mu$ with
$\mu=1$, and reaches a maximum such that $\chi_4(t^*) \propto \xi_0^d$.
Assuming finally
that these events are activated \cite{KTW,BB2}, with a barrier
growing like $\Upsilon \xi_0^\psi$,
where $\psi$ is a certain exponent, one expects $t^* \sim \tau_0
\exp(\Upsilon \xi_0^\psi/T)$, and therefore
$\chi_4(t^*) \propto (\ln t^*)^{d/\psi} \propto \xi_0^d$.
The rearranging regions could have of course more complicated shapes than
the simple sphere assumed above. As
long as these objects are reasonably compact, the above results will still
hold qualitatively. On the other hand,
if these regions are fractal with a dimension $d_f< d/2$, the above results
on $G_4$ will hold with the argument in
the exponential given by $\Gamma t r^{2d_f-d}$; one also finds $t^*
\approx 1/\Gamma \xi_0^{d_f}$ and
$\chi_4(t^*) \propto \xi_0^{d_f}$.
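The behaviour of $\chi_4(t)$ in the spherical, instantaneous-event model can
also be obtained by direct numerical integration of $G_4(\vec r,t)$. The
following Python sketch (our own illustration; the values of $\Gamma$ and
$\xi_0$ are arbitrary) uses the exact lens volume of two overlapping unit
spheres for $f(x)$:
\begin{verbatim}
import numpy as np

# Instantaneous-event CRR model: chi_4(t) from the space integral of
# G_4(r,t) = C(t)^2 [exp(Gamma t xi0^d f(r/xi0)) - 1], with d = 3.
Gamma, xi0, d = 1.0e-3, 3.0, 3
Omega = 4.0 * np.pi / 3.0                      # volume of the unit sphere

def f_overlap(x):
    # lens volume of two unit spheres with centers a distance x apart
    return np.where(x < 2.0, np.pi / 12.0 * (2.0 - x)**2 * (4.0 + x), 0.0)

r = np.linspace(1e-3, 2.0 * xi0, 4000)
dr = r[1] - r[0]
tstar = 1.0 / (Gamma * xi0**d)                 # expected peak position
for t in [0.1 * tstar, 0.3 * tstar, tstar, 3.0 * tstar, 10.0 * tstar]:
    C = np.exp(-Omega * Gamma * xi0**d * t)    # persistence C(t)
    G4 = C**2 * (np.exp(Gamma * t * xi0**d * f_overlap(r / xi0)) - 1.0)
    chi4 = np.sum(4.0 * np.pi * r**2 * G4) * dr
    print(f"t/t* = {t/tstar:5.1f}   chi4 = {chi4:.4g}")
\end{verbatim}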
\subsection{Domain wall fluctuations}
In this case the picture that we have in mind is similar to the case
of a disordered ferromagnet with pinned domain walls, where the typical
time to flip a domain is comparable to
the inter-event time. In that case, an `event' is in fact the slow
fluctuation of domain walls that progressively
invade the bulk of the domain. The early time behaviour of $\chi_4(t)$ is given by the square of
the number of particles that relax per unit volume thanks to
the same domain wall (see \cite{mayer} for the same situation out of
equilibrium in pure systems). Let again $\xi_0$ be the typical size of a domain
and $\ell(t)$ the lengthscale over which
the domain walls
fluctuate during time
$t$. Considering that on the surface of each domain
there are order
$({\xi_0}/{\ell})^{d-1}$ subdomains of linear size
$\ell$ and that the number of particles in each of these subdomains
is proportional to $\ell^d$, we get $\chi_4(t) \propto \xi_0^{-d}
({\xi_0}/{\ell})^{d-1} \ell^{2d}\propto \ell^{d+1}/\xi_0$. We are discarding
for simplicity both the possibility of fractal domains and that transverse
fluctuations behave differently from longitudinal ones.
Assuming thermal activation over
pinning energy barriers that grow like $\Upsilon \ell^\psi$~\cite{FH}, we
finally get $\chi_4(t)\propto \xi_0^{-1} (\ln t)^{(d+1)/\psi}$.
Therefore, in this case, the
exponent $\mu$ is formally zero and the growth of $\chi_4(t)$ is only
logarithmic. The maximum of $\chi_4$ occurs
at time $t^*$ such that $\ell(t^*) \approx \xi_0$,
which implies that the maximum of the susceptibility also
scales logarithmically with $t^*$, $\chi_4(t^*) \propto \xi_0^{-1} (\ln t^*)^{(d+1)/\psi} \propto \xi_0^d$.
The same scaling of the maximum of the susceptibility with the typical domain size is obtained in non-disordered
coarsening systems~\cite{mayer}.
The conclusion of the above analysis is that if the CRR relaxation is due to both instantaneous events
and domain wall fluctuations, the latter will dominate the time behaviour of $\chi_4$ before the peak as can be
readily deduced by comparing their relative contributions to $\chi_4(t)$. If for some reason, domain walls
are particularly strongly pinned and bulk nucleation becomes dominant, then the exponent $\mu=1$ should be
observable. The height of the peak, on the other hand, behaves identically in both models.
Thus, as the temperature is reduced, one should see a power-law behaviour before the peak with an
exponent $0<b<1$ in the MCT regime followed by an effective exponent $\mu$ either decreasing towards
zero or increasing towards one depending on whether the domain wall contribution dominates or not. However, at
lower temperatures, the elastic contribution will also start playing a role, that might completely dominate
over the CRR contribution. This suggests that other observables, that quantify more specifically the collective
dynamics, should be devised to reveal a CRR dynamics.
\section{Defect mediated mobility}\label{defect}
\subsection{Independently diffusing defects}
As the simplest realisation of the defect mediated scenario for glassy
dynamics advocated in \cite{Jaeckle,FA,KA1,SR,GC},
we consider a lattice model in which mobility defects, or vacancies,
perform independent symmetric random walks. We assume for the moment that
these vacancies cannot be created or destroyed
spontaneously. We shall compute the same function $G_4(\vec r,t)$ as in
Eq.~(\ref{G4def}) above, arguing that when
such a vacancy crosses site $\vec r$, the local configuration is
reshuffled and the local correlation drops to zero. Therefore,
$n_r(t)$ is equal to one if no vacancy ever visited site $\vec r$
between $t=0$ and $t$,
and zero otherwise. Thus, $\langle n_r(t) \rangle $ represents a
density-density dynamical correlation function whereas
$\langle n_0(t)n_{\vec{r}} (t) \rangle -\langle n_0(t)\rangle^{2}$
corresponds to $G_4(\vec r,t)$.
From now on we will denote by $N_v$ the number of vacancies, by $V$ the
total volume, by $\rho_v=N_v/V=1-\rho$ the vacancy density and by
$P^z_{{\overline x}}(t)$ the
probability that a vacancy starts in $z$ at time zero
and never reaches $x$ till time $t$.
The probability that a vacancy starts in $z$ at time zero
and reaches $x$ for the first time at some time $u\leq t$
is therefore $P^z_{x}(t)=1-P^z_{{\overline x}}(t)$.
The computation of $\langle n_x(t) \rangle$ is identical to the target
annihilation problem considered in \cite{Redner}.
Since we assume
defects to be independent, the defect distribution is uniform and we have:
\begin{equation}
\label{1}
\langle n_x(t) \rangle = \left[ \frac{1}{V}
\sum_{z,z\neq x} P^z_{{\overline x}}(t) \right]^{N_v}=
\left[ \frac{1}{V} \sum_{z,z\neq x} (1-P^z_{x}(t)) \right]^{N_v}=
\exp \left[ -\rho_v-\rho_v \sum_{z,z\neq x} P^z_{x}(t) \right].
\end{equation}
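For completeness, the last equality follows from the standard large-volume
limit at fixed vacancy density:
\begin{equation}
\left[1-\frac{1+\sum_{z,z\neq x} P^z_{x}(t)}{V}\right]^{N_v}
=\exp\left\{\rho_v V \ln\left[1-\frac{1+\sum_{z,z\neq x} P^z_{x}(t)}{V}\right]\right\}
\longrightarrow
\exp\left[-\rho_v-\rho_v\sum_{z,z\neq x} P^z_{x}(t)\right]
\end{equation}
for $V\to\infty$ at fixed $\rho_v=N_v/V$ and fixed $t$, the sum over $z$ then
growing more slowly than $V$.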
The correlation function $\langle n_x(t)n_y(t) \rangle$ can be also
expressed in terms of probability distributions of a single random walk in a
similar way:
\begin{eqnarray}
\label{2}
\langle n_x(t)n_y(t) \rangle &=&
\left[ \frac{1}{V}\sum_{z,z\neq x,y} P^z_{{\overline x}, {\overline y}}(t)
\right]^{N_v}=
\left[ \frac{1}{V}
\sum_{z,z\neq x,y} \left(1-P^z_{x}(t)-P^z_{y, {\overline x}}(t)\right)
\right]^{N_v}=\nonumber\\
&&\left[1-\frac{2}{V}-\frac{1}{V}
\sum_{z,z\neq x,y} P^z_{x}(t)-\frac{1}{2V}
\sum_{z,z\neq x,y}(P^z_{y, {\overline x}}(t)+P^z_{x, {\overline y}}(t))
\right]^{N_v}=\nonumber\\
&&\exp \left(-2\rho_v-\rho_v\sum_{z,z\neq x} P^z_{x}(t)+\rho_v
P^y_x(t)-\frac{\rho_v}{2}\sum_{z,z\neq x,y} (P^z_{y,{\overline x}}(t)+
P^z_{x,{\overline y}}(t))\right),
\end{eqnarray}
where $P^z_{{\overline x}, {\overline y}}(t)$ is the
probability that a vacancy starts in $z$ at time
zero and reaches neither $x$ nor $y$ till time $t$, and
$P^z_{x, {\overline y}}(t)$ is the probability
that a vacancy starts in $z$ at time zero, reaches $x$ at some $u \leq t$,
but never reaches $y$ till time $t$.
In Eqs. (\ref{1},\ref{2}) we are left with
the calculation of probabilities of the form
$P^z_x(t)$, $P^z_{x, {\overline y}}(t)+P^z_{y {\overline x}}(t)$ for
a single random walk. This can be done using Laplace transforms
and, concerning $P^z_x(t)$, the computation was performed
long ago \cite{Montroll}. All the details can be found in Appendix B.
In the continuum limit, $(x-y)/\sqrt{Dt/2}\sim O(1)$,
i.e. for independent Brownian motion with diffusion
coefficient $D$, the final expression for $\langle n_{x} (t)\rangle$ on time
scales much larger than one is, in three dimensions,
\begin{equation}
\langle n_{x} (t)\rangle=\exp [-\rho_{v}-c_{1} D \rho_{v}t],
\end{equation}
where $c_{1}$ is a constant fixed by the short-lengthscale physics,
i.e. the underlying lattice structure (see Appendix B). It is clear
from this expression, which is valid in all dimensions larger than two,
that the relaxation time-scale
is governed by the vacancy density $\rho_{v}$ and reads
$\tau =1/(c_1 \rho_{v}D)$. Physically $\tau$
corresponds to the time such that each site has
typically been visited once by a defect.
The final expression for $G_{4}$ is, for time and length scales much
larger than one, and in the small vacancy density limit,
$\rho_{v}\rightarrow 0$,
\begin{equation}\label{g43db}
G_{4}(\vec{r},t)=\frac{c_{2}}{\rho_{v}}\exp \left(-\frac{2t}{\tau }\right)
\left(\frac{t}{\tau } \right)^{2}\int_{0}^{1} du \int_{0}^{u}dv
\frac{e^{-\frac{r^{2}}{2Dvt }}}{(2Dvt )^{3/2}},
\end{equation}
where $c_{2}$ is a constant of order unity.
Note that the correlation length at fixed $t$ is given by $\xi(t)=\sqrt{Dt}$.
For $r \ll \xi(t)$, $G_4(\vec r,t)\sim 1/r$ whereas for $r \gg \xi(t)$,
$G_4$ decays at leading order as a Gaussian, that is, much
faster than exponentially.
The $1/r$ behaviour is cut-off on short-length scales, where (\ref{g43db})
does not hold. For $r=0$ one finds, when $t \gg 1$,
\begin{equation}
G_{4}(r=0,t) = \langle n_x(t) \rangle - \langle n_x(t)
\rangle^2 = \exp(-t/\tau) \left[1 - \exp(-t/\tau) \right],
\end{equation}
which behaves as $t/\tau$ at small times.
By integrating (\ref{g43db}) over $\vec r$ we get the dynamical susceptibility,
\begin{equation}
\chi_{4} (t)=\frac{c_{2}}{2\rho_{v}}
\left(\frac{t}{\tau}\right)^{2}\exp\left(-\frac{2t}{\tau }\right).
\end{equation}
For short times, $t<\tau $, the dynamical susceptibility is
proportional to $t^{2}$, so that
$\mu=2$. This is due to the diffusing nature of the
defects. The main contribution to $\chi_{4}$ is given by the square
of the number of sites visited by the same defect, which behaves as
$\rho_{v} (Dt)^2=\frac{1}{\rho_{v}}\left(\frac{t}{\tau}\right)^{2}$,
since a random walk in
three dimensions typically visits $t$ different sites.
For $t > \tau$, on the other
hand, the correlation decreases because sites start being visited by different
vacancies.
The maximum of $\chi_4(t)$ is reached for $t=t^*=\tau$, for which one has
$\chi_4(t^*) \sim \rho_{v}^{-1} \sim Dt^*$.
Note that because random walks are fractals of dimension
$d_f=2$,
the above relation can also be written as $\chi_4(t^*) \sim a^{d-d_f} \xi^{d_f}(t^*)$,
where we have added the lattice spacing $a$ to give to $\chi_4$ the dimension of a volume.
If for some reason $D$ depends on $\rho_v$,
as it happens for example for the one-spin facilitated FA model where $D \propto \rho_v$,
then one finds $t^* \sim \rho_v^{-2}$ and $\chi_4(t^*) \sim t^{*1/2}$.
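These formulae are easily checked against a direct simulation. The following
Python sketch (our own toy implementation, in $d=1$ for speed, where the same
qualitative behaviour holds, see below and Appendix B.1; all parameters are
illustrative) measures $\chi_4(t)$ from sample-to-sample fluctuations of the
persistence:
\begin{verbatim}
import numpy as np

# Independent vacancies performing symmetric random walks on a periodic
# 1d lattice; n_x(t) = 1 if site x was never visited up to time t.
L, rho_v, tmax, runs = 1000, 0.01, 30000, 400
Nv = int(rho_v * L)
times = np.unique(np.logspace(0, np.log10(tmax), 20).astype(int))
rng = np.random.default_rng(1)

pos = rng.integers(0, L, size=(runs, Nv))        # vacancy positions
never = np.ones((runs, L), bool)                 # sites never visited
rows = np.arange(runs)[:, None]
never[rows, pos] = False
P, it = np.zeros((runs, len(times))), 0
for t in range(1, tmax + 1):
    pos = (pos + rng.choice([-1, 1], size=(runs, Nv))) % L
    never[rows, pos] = False
    if it < len(times) and t == times[it]:
        P[:, it] = never.mean(axis=1)            # mean persistence
        it += 1

chi4 = L * P.var(axis=0)                         # L [ <p^2> - <p>^2 ]
for t, c in zip(times, chi4):
    print(f"t={t:6d}  chi4={c:7.2f}")            # peak height ~ 1/rho_v
\end{verbatim}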
Taking the Fourier transform of $G_{4}(r,t)$ given by Eq. (\ref{g43db}),
we find the four point structure factor,
$S_4(k,t)$,
\begin{equation}\label{s4k3d}
S_4(k,t) = \chi_4(t) {\cal F}(Dk^2 t); \qquad {\cal F}(u)
\equiv \frac{2}{u^2} \left(u-1+e^{-u}\right).
\end{equation}
Note that $S_4(k=0,t)=\chi_4(t)$, as it should. Furthermore, for large
and small $k$, $S_4(k,t)$ behaves respectively as $S_4 \sim k^{-2}$ and
$S_4 \sim \chi_4 + O (k^{2})$, just as for the
Ornstein-Zernike form, though the detailed $k$ dependence is different.
One can also study this problem in dimensions $d=1$ or $d=2$.
Qualitatively, the same conclusions
hold (diffusive correlation length $\sqrt{Dt}$, correlation time
$t^*$ set by the density
of vacancies, etc.), although the quantitative results differ because
a random walk in $d \le 2$
visits a number of sites that grows sub-linearly with time, see
Appendices B.1 and B.3. One finds in particular that
$\chi_4(t^*) \sim (Dt^{*})^{d/2} \sim \xi^{d}(t^*)$, with logarithmic
corrections for $d=2$.
The above arguments can be generalized if for some reason the
vacancies have an
anomalous diffusion motion, in the sense that their typical excursion
between time $t=0$ and
time $t$ scales as $t^{1/z}$, where $z$ is the dynamical exponent. When
$z = 2$, usual diffusion
is observed, but many models like diffusion in random media or kinetically
constrained models may lead to sub-diffusion, where
$z > 2$ \cite{WBG,Bertin}. In this case, one expects the small time behaviour of
$\chi_{4} (t)$ to be given by
$\chi_{4} (t) \sim t^{2d/z}$ for $d < z$ and $t^2$ for $d > z$ with
logarithmic corrections for $d=z$.
Similarly, the behaviour of $\chi_4(t^*)$ is a power-law, $\chi_4(t^*)
\sim t^{*\lambda}$, with
$\lambda=d/z$ for $d < z$ and $\lambda=1$ for $d > z$.
In the above model, mobility defects were assumed to be conserved in time. However, it is certainly more
realistic to think that these defects can be spontaneously created and disappear with time. Suppose that
defects are created with a rate $\Gamma$ per unit time and unit volume, and disappear with a rate $\gamma$
per unit time. The equilibrium density of defects is then $\rho_v=\Gamma/\gamma$. The above results on
$\chi_4$ can easily be generalized. At small times, the number of pairs of visited sites will now behave
as $\rho_v (Dt)^2 -\frac 2 3 \Gamma (Dt)^3/D$. Because of the death of vacancies there is an extra decay of the
dynamical susceptibility. The dominant rate of decay depends on the dimensionless number $\gamma \tau$.
A very similar model for glassy dynamics was suggested in \cite{Houches}, where free volume is described as a diffusing
coarse grained density field $\rho(\vec r,t)$ with a random Langevin noise term. Mobility of particles is allowed whenever
the density $\rho$ exceeds a certain threshold $\rho_0$. The mobile regions are then delimited by the
contour lines of a random field, which already gives rise to a quite complex problem of statistical geometry
\cite{Isichenko}. The particle density correlation in this model is a
simple exponential with relaxation time $\tau \sim \exp(\rho_0/{\overline \rho})$, where $\overline{\rho}$ is the
average free volume density. One can also compute $\chi_4(t)$ in this model to find, in $d=3$,
\begin{equation}
\chi_4(t) \sim t \left[\exp \left(-\frac{t}{\tau} \right)
\left(1-\exp\left(-\frac{t}{\tau}\right) \right) \right]
\end{equation}
which behaves very much like the point like vacancy model studied above, with
in particular, $\chi_4(t) \sim t^2$ for $t \ll \tau$.
Let us finally note that, from the point of view of interacting particles on a lattice,
we have studied the persistence dynamical susceptibility, instead of the
density-density correlations discussed in the introduction. This is because,
for the lattice
gas problem at hand, the latter does not show any interesting
properties: except when a defect passes by, the local state is always
the same, i.e. occupied. For completeness, we give the corresponding
results in Appendix B.4. In a real system, however,
the local configuration is going to be
affected by the passage of a mobility defect, and one can expect that the
density-density correlations
will in fact behave more like the persistence dynamical susceptibility
computed before.
The correspondence between persistence and self-intermediate
scattering function is studied explicitly in
kinetically constrained models in Ref.~\cite{BCG}.
\subsection{Kinetically Constrained Models: numerical results}
Kinetically constrained models (KCM) postulate that glassy dynamics
can be modeled by forgetting about static interactions
between particles, putting all the emphasis on dynamical aspects.
Among those models are, for example, the Fredrickson-Andersen (FA) or
the Kob-Andersen (KA) models on hyper-cubic lattices \cite{SR,TBF}.
The dynamics of these models can be
understood in terms of diffusion of defects \cite{Sollich,WBG,TBF}
and the models can be classified into cooperative and non-cooperative models,
depending on the properties of such defects.
For cooperative models
the size, $\xi_0$, the density,
and the time-scale for motion of the defects depend on the particle
density (for conservative models) or temperature (for non-conservative models)
and change very rapidly with increasing density or decreasing temperature
~\cite{TBF}. KA and FA models with more than one
neighbouring vacancy
needed in order to allow the motion of other vacancies belong to this class.
On the other hand
for the one-spin isotropically facilitated
FA model, a single facilitating spin is a mobile defect at all
values of temperature and the model is non-cooperative. A recent analysis \cite{WBG} suggests
that for these models defects can be considered as non-interacting
in $d>4$, while for $d < 4$ the r\^ole of fluctuations becomes important.
Therefore we expect that the previous results for the independent diffusing
defects model should
apply exactly for FA one-spin facilitated in $d>4$.
Furthermore, since
the corrections to the Gaussian exponents are not very large \cite{WBG}
in three dimensions, we still expect a semi-quantitative agreement.
In particular the initial
increase of the dynamic susceptibility as
$\chi_4(t) \sim N(t)^{2}$, where $N(t)$ is the total number
of distinct visited sites, is expected to be quite a robust result.
Also, we expect a diffusive growth of the dynamical length
scale $\xi(t)$ governing the scaling of $G_4$, at least in the
limit $\xi(t)\gg \xi_0$. At smaller times, one expects a
crossover from a CRR regime when $Dt \ll \xi_0^2$ (where the
dynamics inside the defects becomes relevant in cooperative models)
to a mobility defect regime
for longer times.
Hence, in principle, looking at the detailed properties of $G_{4}(r,t)$
one should be able to extract the defect properties: density, size,
time-scale and decide which
theoretical scenario is most consistent with numerical results.
In the following, we discuss numerical results for
the one-spin facilitated FA model both in
$d=1$ and $d=3$, and for the $d=1$ East model where
facilitation is anisotropic \cite{SR}. The two models can be
described respectively in terms of diffusive and sub-diffusive
non-cooperative defects and indeed the numerical results are in
quantitative agreement with the predictions of the previous section, as will be
explained in detail. We do not address the case of cooperative KCMs,
for which a more complicated behaviour is
expected. Indeed a first slowing down of dynamics should occur
near a dynamical crossover displaying the properties of an
MCT-like avoided transition \cite{TBF}.
In this regime the model cannot be approximated
as a system of independent freely diffusing defects, and
deriving a quantitative prediction
for the behaviour of the four point correlation
and susceptibility would require further work.
Such avoided transition should then be
followed at lower temperature or higher density
by an asymptotic behaviour described in terms of cooperative diffusing defects.
\subsubsection{One dimension}
Let us start with the simplest model, the $d=1$ FA model.
For a given temperature, we consider
the time evolution of the following quantities. The analog of the
spatial four-point correlator for this model is
\begin{equation}
G_4(r,t) = \frac{1}{N} \sum_{i=1}^N \left[
\langle n_i(t) n_{i+r}(t) \rangle - n^2(t) \right],
\label{c4}
\end{equation}
where
$n(t) = N^{-1} \sum_{i=1}^N \langle n_i(t) \rangle$
is the mean persistence, $n_i(t)$ being the persistence
at site $i$.
We also measure the corresponding four-point structure factor
\begin{equation}
S_4(k,t) = \frac{1}{N} \sum_{\ell,m=1}^N \left[ \langle n_\ell(t) n_m(t)
\rangle -n^2(t) \right]
e^{i k \cdot (\ell-m)},
\label{s4}
\end{equation}
and as usual we get the four-point susceptibility as the
$k \to 0$ limit of the structure factor, $\chi_4(t) = S_4(k=0,t)$.
We generally find that the results are in good agreement with the free
defect model described above, at least at sufficiently
low temperatures.
\begin{figure}
\psfig{file=facr.ps,width=8.5cm}
\caption{\label{1dfa}
Four-point spatial correlator (\ref{c4}) in the $d=1$ FA model
at fixed temperature, $T=0.2$, and various
times, $t= 10^3$, $3\times10^3$,
$10^4$, $3\times10^4$, $10^5$, $10^6$, $3\times10^6$, $6\times10^6$
(from left to right).
The correlator is normalized by its $r=0$ value.
At this temperature, the relaxation time is
$\tau \sim 10^6$, so that time-scales cover both regimes where
$t/\tau$ is smaller and larger than 1.}
\end{figure}
In Fig.~\ref{1dfa}, we show the evolution of the
spatial correlator (\ref{c4}) at a given low temperature,
$T=0.2$, and various times. At this temperature,
the relaxation time is about $\tau \sim 10^6$, so that
time-scales presented in Fig.~\ref{1dfa} cover a range
of times both smaller and larger than $\tau$.
The dynamic susceptibility $\chi_4(t)$ has the usual shape with
a maximum at a time close to $\tau$ indicating that
dynamics is maximally heterogeneous there.
This non-monotonic behaviour of $\chi_4$ in fact does not show up
in the spatial correlators of Fig.~\ref{1dfa}, which display
instead a smooth monotonic evolution with time. The spatial decay
of $G_4(r,t)$ becomes slower when $t$ increases indicating the presence
of a monotonically growing dynamic lengthscale $\xi(t)$.
One can estimate the time dependence of $\xi(t)$ by
collapsing the data of Fig.~\ref{1dfa} using a form like:
\begin{equation}
G_4(r,t) \sim G_4(0,t) \, {\cal G} \left( \frac{r}{\xi} \right).
\label{scal1d}
\end{equation}
Doing so, we find
that $\xi \sim t^{0.45}$ is a reasonable representation of the data
at $T=0.2$. Correspondingly, we find that the increase of
$\chi_4(t)$ for $t < \tau$ is well-described by a power-law,
$\chi_4 \sim t^{0.85}$, so that the expected
scaling $\chi_4 \sim \xi^2$ is reasonably verified
given the unavoidable freedom in estimating the range of time-scales
where power-laws apply.
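The collapse procedure itself can be summarized by the following minimal
Python sketch, where synthetic data with a known exponent stand in for the
measured $G_4$ (all names and values are ours):
\begin{verbatim}
import numpy as np

# Scaling collapse: rescale r by t^alpha and pick the alpha that
# minimizes the dispersion between curves at different times.
alpha_true = 0.5
times = [1e3, 3e3, 1e4]
r = np.arange(1, 1000)

def g4(r, t):                       # synthetic normalized correlator
    return np.exp(-r / t**alpha_true)

def spread(alpha):
    x = np.linspace(0.1, 1.5, 50)   # common grid in x = r / t^alpha
    curves = [np.interp(x, r / t**alpha, g4(r, t)) for t in times]
    return np.var(curves, axis=0).mean()

alphas = np.linspace(0.35, 0.65, 31)
best = alphas[np.argmin([spread(a) for a in alphas])]
print(f"best collapse exponent: {best:.3f}  (true value {alpha_true})")
\end{verbatim}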
The values of these exponents
are not far from the ones expected from freely diffusing defects
in one dimension, although slightly smaller. Indeed, we recall that the
results in Appendix B.1. predict $\xi=\sqrt{Dt}$, $\chi_4(t)\propto
\rho \xi(t)^2$ and $\chi_4(t^{*})=1/\rho$, where $\rho$ is the density of
defects, $D$ their diffusion coefficient and $t^{*}$ the time at which
$\chi_4(t)$ reaches its maximum value. This last prediction is also in good
agreement with the numerical results (see e.g. \cite{GC}).
Repeating the simulation at lower temperature, $T=0.15$, we obtain
$\chi_4 \sim t^{0.93}$, showing that deviations from theoretically
expected values are partly due to preasymptotic effects
that presumably disappear at very low temperatures.
It is important to remark that the scaling form
(\ref{scal1d}) is only approximately supported by the data.
The scaling
in fact deteriorates when times become larger than $\tau$.
This can be seen in Fig.~\ref{1dfa} where
data for large times become more and more stretched, indicating an
increasing polydispersity of the dynamical clusters.
Note that a change in
the shape of the spatial correlator makes a quantitative determination
of $\xi$ problematic. Usually, one wants to collapse various
curves using a form like (\ref{scal1d}) to numerically
extract $\xi$. Strictly speaking, this is not possible
here if one works at fixed $T$ and varying $t$
over a large time window.
This difficulty provides a second possible explanation for the small
discrepancy between the measured values of exponents
and the theoretical expectations.
The observation of a monotonically growing length raises the
question: how can
the correlation length increase monotonically with time
while the volume integral of the spatial correlator, $\chi_4$,
is non-monotonic, as reported in the previous section? This is due to the fact
that we have presented in Fig.~\ref{1dfa} results for the normalized
correlator, $G_4(r,t)/G_4(r=0,t)$. By definition, $G_4(0,t) = n (1-n)$, hence
the normalization itself exhibits a non-monotonic behaviour.
If one considers the normalized susceptibility,
$\tilde{\chi}_4 = [G_4(0,t)N]^{-1} \sum_{\ell,m} \left[
\langle n_\ell(t) n_m(t) \rangle - n^2(t) \right]$,
one indeed finds that $\tilde{\chi}_4$ is monotonically growing as well.
In numerical works, the quantities that have been studied are in fact,
most of the time, normalized,
and the corresponding $\tilde{\chi_4}(t)$ observed for realistic systems
shows a peak, at variance
with what is observed in the $d=1$ FA model. As we shall show below,
this is due to the one-dimensional
nature of the model, and this difference is not observed
in three dimensions. This difference
in the behaviour of the normalized dynamical susceptibility
between one and three dimensions is indeed in full agreement
with the independent defect diffusion computation, see previous
section and Appendix B.
Results are qualitatively similar in the one-dimensional East model.
The dynamic susceptibility $\chi_4(t)$
develops a peak that grows and whose position
is slaved to the increasing relaxation time
when temperature decreases.
At fixed temperature, a monotonically
growing lengthscale is observed, while the scaling relation
$\chi_4 \sim \xi^2$ still holds within our numerical precision.
The novelty of this model lies in the fact that these exponents are now
temperature dependent,
as are all other dynamic exponents in this model.
For instance, we find that $\xi(t) \sim t^{0.28}$ at $T=0.4$,
$\xi(t) \sim t^{0.15}$ at $T=0.2$.
These results are in agreement with the above predictions
of the independent defects
model if the defect motion is
sub-diffusive, with a dynamic exponent $z=T_0/T$,
as expected from \cite{Sollich}.
Due to the quasi one-dimensional nature of the relaxation process
in the three-dimensional generalization of the East model~\cite{nef},
these results most probably carry over to larger dimensions
where they would differ by numerical factors only.
\subsubsection{Three dimensions}
In $d=3$, the situation is more subtle.
Results for the normalized susceptibility of the one-spin facilitated FA model
were presented in Ref.~\cite{WBG2}, where it was found
to have the standard non-monotonic shape already described several times above.
We find that the non-normalized $\chi_4(t)$ has the same qualitative behaviour.
Therefore, contrary to the $d=1$ case, normalization is not a crucial issue in three dimensions.
In the following we check the predictions for independent diffusing defects in
three dimensions for the susceptibility and correlation length obtained
above, i.e. $\xi(t)=\sqrt{Dt}$, $\chi_4(t)\propto
\rho~\xi(t)^4$ and $\chi_4(t^{*})=1/\rho$, where $\rho$ is the density of
defects, $D$ their diffusion coefficient and $t^{*}$ the time at which
$\chi_4(t)$ reaches its maximum value. We find a semi-quantitative agreement
with the above predictions, with small deviations in the exponents that should be due to the interaction among
defects. In particular the scaling of the peak with the
density of defects was already analyzed in \cite{WBG2}, where the result
$\chi_4(t^{*})\propto 1/\rho^{1-\epsilon}$ was obtained, with
$\epsilon\simeq 0.03$.
As for the correlation length, we find $\xi(t)\propto t^{0.42}$, which again
shows a small deviation from
the diffusive prediction. Regarding the increase of the susceptibility at
$t\ll\tau$, we find a power-law as predicted.
As in $d=1$, the exponent changes slightly when
decreasing temperature because the scaling regime where
the power-law applies becomes more and more extended.
We find $\chi_4 \sim t^{1.4}$ at $T=0.25$,
$\chi_4 \sim t^{1.55}$ at $T=0.17$ and $\chi_4 \sim t^{1.89}$ at
$T=0.095$.
This seems to indicate that the deviation from the
scaling $\chi_4(t)\propto t^2$ calculated for the independent diffusing
defect model is partly due to preasymptotic effects
that are less and less important at lower temperature.
Unfortunately,
we were not able to measure $\xi$ at much lower temperatures
with sufficient accuracy. We expect that even at very low temperature a small
deviation from the exponent of independent defects should survive due to the interaction among defects.
\begin{figure}
\psfig{file=3dfacr.ps,width=8.5cm}
\psfig{file=3dfask.ps,width=8.5cm}
\caption{\label{3dfa2}
Four-point correlations in the $d=3$ one-spin facilitated FA model
both in real space (left) and in Fourier space (right)
at fixed temperature, $T=0.17$,
and various times indicated in the figures.
In Fourier space, points represent numerical data,
while full lines are fits to the form (\ref{fit3d})
with fitting parameters described in the text.}
\end{figure}
In Fig.~\ref{3dfa2} we show the four-point
correlations both in real and Fourier space, Eqs.~(\ref{c4}) and (\ref{s4}).
In these curves
the temperature is fixed at a low value, $T=0.17$, and time is varied in a
wide range that includes the relaxation time, $\tau(T=0.17) \sim 5\times10^4$, where
the dynamic susceptibility also peaks.
For times $t \ll \tau$, the spatial decay of $G_4(r,t)$ is fast. When
$t$ increases, the spatial decay becomes slower,
once again indicative of an increasing dynamic correlation
length $\xi(t)$. When $t$ becomes larger than $\tau$, however,
spatial correlations seem to become weaker.
It is obvious from Fig.~\ref{3dfa2} that the volume integral
of $G_4(r,t)/G_4(0,t)$ decreases when $t$ grows larger than $\tau$.
This is very different from the one-dimensional case in Fig.~\ref{1dfa},
but consistent with all known numerical results.
However, a closer look at Fig.~\ref{3dfa2} reveals that
even though the initial spatial decay of $G_4(r,t)$ is stronger at
larger times, the contrary is true at large distances.
This indicates that the topology of the dynamic clusters
changes when $t$ grows larger than $\tau$, but that
$\xi(t)$ may keep increasing in a monotonic manner.
Since the spatial correlator is very small at large distances,
quantitative measurements of $\xi(t)$ are more easily
performed in Fourier space via $S_4(k,t)$.
At short times, a fit of $S_4(k,t)$
using the functional form given by Eq. (\ref{s4k3d})
works reasonably well, but the fit quickly deteriorates
at long times. We have therefore used the following generalization of
Eq.~(\ref{s4k3d}):
\begin{equation}
S_4(k,t) = \chi_4(t) {\cal F}_\beta[k^2 \xi^2(t)];
\qquad {\cal F}_\beta(u) \equiv \frac{2^{\beta/2}}{u^{\beta}}
\left(u-1+e^{-u}\right)^{\beta/2}.
\label{fit3d}
\end{equation}
Freely diffusing defects correspond
to $\beta=2$ and $\xi(t) \sim \sqrt{t}$. Using $\beta(t)$ as an
additional free parameter, we are able to fit $S_4(k,t)$ at all times, see
Fig.~\ref{3dfa2}.
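Concretely, the fitting procedure amounts to the following Python sketch
(our own illustration on synthetic data; real $S_4(k,t)$ curves would be used
in practice):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Fit of S_4(k,t) at fixed t with the generalized form of Eq. (fit3d),
# extracting chi_4, xi and the shape exponent beta.
def S4_model(k, chi4, xi, beta):
    u = (k * xi)**2
    return chi4 * (2.0 * (u - 1.0 + np.exp(-u)) / u**2)**(beta / 2.0)

k = np.linspace(0.05, 2.0, 40)
rng = np.random.default_rng(2)
data = S4_model(k, 50.0, 4.0, 1.5) * (1 + 0.03 * rng.normal(size=k.size))

popt, _ = curve_fit(S4_model, k, data, p0=[30.0, 3.0, 2.0])
print("chi4 = {:.1f}, xi = {:.2f}, beta = {:.2f}".format(*popt))
\end{verbatim}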
We find that $\beta$ decreases from $\beta \approx 2.5$
at small times to $\beta \approx 1$ for the longest
time-scales investigated, which correspond to
$t \approx 5 \tau$. At
such large times, the dynamic susceptibility
has already decreased by a factor $\approx 300$ from its maximum
value at $t = \tau$, and correlations become very weak indeed.
The values for $\beta$ found from the fits are consistent
with the value $\beta \approx 2.15$ reported in Ref.~\cite{WBG2}
where only the fixed time ratio $t/\tau(T)=1$
at different temperatures was studied.
From this fitting procedure,
we deduce a monotonically growing dynamic length $\xi(t)$, even beyond $t =\tau(T)$.
Fitting its time dependence with a power-law, we get
$\xi \sim t^{0.42}$ which appears to be slightly
sub-diffusive, but close to the value found above in the one-dimensional case.
In conclusion, we find that on small
enough time-scales one indeed has good
agreement with the above calculations based
on freely diffusing defects,
so that defect branching
and defect coagulation can be neglected.
However, for longer time-scales, significant deviations
appear, which correspond to the evolution of the exponent $\beta(t)$ and should
be responsible for the small deviations from the predicted exponent for
$\chi_4$.
Physically, the time evolution of the exponent $\beta(t)$ characterizing
the large $k$ behaviour of the dynamic structure factor is reasonable.
At very short-times, dynamic clusters consist of coils
created by random walkers, and an exponent close to
$\beta=2$ can be expected. For times $t \sim \tau$, clusters look
critical, as described in Refs.~\cite{WBG,WBG2}, and the exponent
$\beta=2-\eta$, $\eta < 0$ is expected.
At very large times, clusters are most probably extremely polydisperse:
the remaining spatial correlations
are due to the largest regions of space that were
devoid of defects at time 0 and that therefore take a long time
to relax, while some isolated sites
that have not been visited by defects during the relaxation
might also survive, so that
the distribution of dynamic clusters is very wide,
see Ref.~\cite{nef} for snapshots.
A small value of $\beta$ can therefore be expected.
\section{Numerical results on atomistic model systems}
\label{numerics}
In this section, we study numerical results
for the dynamic susceptibility and structure factor
of a super-cooled liquid simulated by molecular dynamics simulations.
The model we study is mainly the well-known binary Lennard-Jones mixture
as first defined and studied in Ref.~\cite{KA}, but we report also some
results for a soft-spheres mixture studied in \cite{Barrat,Grigera,Dave}.
We do not give details about our
numerical procedures since these
were given several times in the literature~\cite{KA,Berthier2,WBG}.
\subsection{Dynamical susceptibility}
\begin{figure}
\psfig{file=lj.ps,width=8.5cm}
\caption{\label{lj1} Time dependence of the dynamic susceptibility
in the binary LJ mixture at two different temperatures. The lines
are power-law fits with exponents indicated in the label.}
\end{figure}
In previous works on various realistic liquids, the
dynamic susceptibility was reported several
times~\cite{Onuki,Glotzer,FP,parisi}.
It is
known to exhibit a peak at a time-scale slaved to
the quantity chosen to quantify local dynamics. Typically,
particle displacements are chosen, and one computes
therefore the variance of some dynamical correlation,
\begin{equation}
\chi_4(t) = N \left[ \langle F^2(t) \rangle - \langle F(t) \rangle^2 \right],
\end{equation}
with
\begin{equation}
F(t) = \frac{1}{N} \sum_{i=1}^N F_i(t).
\end{equation}
The dynamic quantity $F_i(t)$ can be chosen as some
`persistence' function in which case $\langle F(t) \rangle$ resembles
the overlap function usually measured in spin
systems~\cite{Glotzer,FP,parisi}.
Other choices
are~\cite{WBG,Berthier2}
\begin{equation}
F_i(t) = \cos( \vec{q} \cdot \delta \vec{r}_i(t)),
\label{fk}
\end{equation}
where $\vec{q}$ is a wavevector chosen in the first Brillouin zone, and
$\delta \vec{r}_i(t)$ is the displacement of particle $i$ in a time
interval $t$. In the limit of small $|\vec{q}|$, it is better to study
$F_i(t) = |\delta \vec{r}_i(t) | / \sqrt{\Delta r^2(t)}$, where
$\Delta r^2(t)$ is the mean square displacement of the
particles~\cite{Onuki,heuer}.
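As a concrete illustration of this definition, a minimal Python sketch
(with random arrays standing in for actual MD trajectories, and a
hypothetical wavevector) reads:
\begin{verbatim}
import numpy as np

def chi4_from_runs(pos0, post, q):
    """pos0, post: (n_runs, N, 3) positions at times 0 and t; q: (3,)."""
    n_runs, N, _ = pos0.shape
    dr = post - pos0                 # displacements over the interval t
    Fi = np.cos(dr @ q)              # F_i(t) = cos(q . dr_i(t))
    F = Fi.mean(axis=1)              # F(t), one value per independent run
    return N * (np.mean(F**2) - np.mean(F)**2)

# hypothetical usage, with random arrays standing in for MD trajectories
rng = np.random.default_rng(0)
pos0 = rng.uniform(0.0, 10.0, size=(200, 1000, 3))
post = pos0 + rng.normal(0.0, 0.1, size=pos0.shape)
q = np.array([7.2, 0.0, 0.0])        # |q| near the structure-factor peak
print(chi4_from_runs(pos0, post, q))
\end{verbatim}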
Whereas the general shape of $\chi_4(t)$ is well-documented in
the literature, its precise time dependence was never discussed.
In Fig.~\ref{lj1}, we present the time dependence of $\chi_4(t)$
in the binary Lennard-Jones mixture at two different temperatures.
The data are presented in a log-log scale, in order to emphasize
the existence of several time regimes that are generally hidden
in the existing reports.
To build these curves, we choose (\ref{fk}) as the local
observable, for a wavevector that
corresponds roughly to the typical inter-particle distance.
In the ballistic regime at very short-times, we find that
$\chi_4(t) \sim t^4$, as described in Section~\ref{Short-time behaviour}.
The system then enters the time regime where
dynamic structure factors typically exhibit plateaus, as
a result of particle caging. As seen in Fig.~\ref{lj1},
this is also the case for $\chi_4(t)$.
Finally, $\chi_4(t)$ reaches a maximum located close to the
relaxation time extracted from the time dependence of $\langle F(t) \rangle$,
and then rapidly decays to its long-time limit, equal to $1/2$ in the present
case.
In Fig.~\ref{lj1}, we fitted the time dependence of
the increase of $\chi_4(t)$ towards its maximum with power-laws $\chi_4 \sim t^\mu$.
The fits are satisfactory, although they only hold on
restricted time windows. We find a slight temperature dependence
of the exponent $\mu$. For instance, we find $\mu \approx 0.9$ at
$T=0.47$, and $\mu \approx 0.73$ at $T=0.42$.
As already discussed in the case of kinetically constrained models
above, it is not clear how the restricted time
window used to determine the exponents might affect their values.
However, the data in the Lennard-Jones system
behave quantitatively very differently from both
theoretical results obtained from
freely diffusing defects and numerical results
in the one-spin facilitated $d=3$ FA model, where $\mu=2$.
The small temperature evolution in the LJ liquid
differs even qualitatively from the one-spin facilitated $d=3$ FA model
where the exponent was found to increase when decreasing temperature.
These observations tend to discard a description of this
super-cooled liquid via a scenario with simple independently diffusing
defects, even interacting ones. The above value of $\mu$ is in principle
compatible with the predictions of elasticity theory, which yields $\mu=1/2$ or
$\mu=1$ depending on the damping of phonons. However, the time scale in which the above
mentioned power-law behaviour holds in the Lennard-Jones mixture corresponds to the $\beta$
regime where the displacement of particles is no longer small and the elastic description
unjustified. Within MCT, on the other hand,
$\chi_4$ should increase in that regime with an exponent $\mu=b$
that is known from previous analysis of the dynamics
of the binary Lennard-Jones mixture, $b \approx 0.63$~\cite{KA}.
The values found above are somewhat larger, but it is hard to know
how preasymptotic effects influence the numerical data.
Moreover, the value closest to $b$, $\mu \approx 0.73$,
is obtained for $T=0.42$, a temperature already lower than
the mode-coupling singularity located at $T_c \approx 0.435$ in this
system. MCT also provides a prediction for the height of the peak,
$\chi_4^* \sim t^{*1/\gamma}$, where $\gamma$ was measured to be $\approx 2.3$, leading
to $\lambda=1/\gamma \approx 0.43$. This prediction is in good agreement with the
results of Ref.~\cite{WBG} where $\chi_4(t^*) \sim t^{*0.4}$ was reported.
If on the other hand one insists on using a non-cooperative
kinetically constrained model
to describe the Lennard-Jones liquid, the small value of the short
time exponent $\mu$
forces one to choose a `fragile' KCM model,
such as the East model described above, where
the exponent for the dynamic susceptibility is found to be
much smaller than the diffusive value $\mu=2$, and indeed to decrease
when temperature is decreased.
However, the explanation given in Ref.~\cite{nef} that the large dynamic
length-scales observed in the Lennard-Jones system are due to the fact
that the underlying KCM model is relatively strong becomes hard to
reconcile with the present
results. These findings can therefore be added to the list
of unusual features displayed by
supposedly `fragile' numerical
models for super-cooled liquids~\cite{gilles}.
On the other hand, note that our results do not discard the possibility that
cooperative KCM (in a proper density or temperature regime) display a four point correlation and susceptibility
quantitatively similar to the one of the Lennard-Jones liquid. Indeed, as stressed in e.g.~\cite{TBF},
for these models one expects a first regime of slowing down of dynamics due to an avoided mode-coupling transition. The
susceptibility and four point correlation could then well be quantitatively comparable to that of Lennard-Jones liquids.
\begin{figure}
\psfig{file=fig5.ps,width=8.5cm}
\caption{\label{softsphere} Dynamic susceptibility $\chi_4(t)$
at $T=0.3$ and $0.26$ (from left to right)
in a log-log plot as a function of time for the soft-sphere
binary mixture of Ref.~\cite{Grigera,Dave}. The data were kindly provided to us by
D. Reichman and R. A. Denny. The straight line represents the MCT prediction for the power-law behaviour before the peak.}
\end{figure}
Finally, it is of course a natural question to ask whether the above agreement
between MCT predictions and numerical results is only restricted to the Lennard-Jones
system. Using the unpublished data of Ref.~\cite{Dave} for a soft-sphere
binary mixture where
$T_{c}\simeq 0.22-0.24$~\cite{Barrat,Grigera}
we actually found very similar results.
Close to $T_{c}$ a power-law behaviour of $\chi_4$ as a function
of time can again be observed. For instance,
$\chi_4 \sim t^{0.63}$ for $T=0.26$.
In Fig.~\ref{softsphere} we plot $\chi_4$, defined as
in Ref.~\cite{FP}, as a function of time. We also display
the power-law behaviour predicted by MCT before the peak with the exponent $b \simeq 0.59$ taken
from Ref.~\cite{Barrat}. There is a similar agreement between the exponent $\lambda$ measured
from the height of the peak and the value of $1/\gamma$ extracted from an MCT analysis of the data.
The fact that the predictions of MCT for the four-point susceptibility are in reasonable agreement
with numerical simulations in both systems is significant, since the exponents $b$ and $1/\gamma$
are measured on (local) two point functions and $\mu$ and $\lambda$ on four-point functions. The relation between
these exponents test a rather deep structural prediction of MCT that relates time
scales to length scales \cite{BB1}.
More numerical work, on other model systems with different values of $b$, for example,
would be needed to establish
more firmly whether the coincidence observed in the present paper is or not accidental.
\subsection{A growing length scale?}
We focus now more directly on the dynamic lengthscale.
In previous works, the dynamic lengthscale $\xi$ extracted
from four-point correlations was measured either at fixed temperature
for various times $t$ where it was found to be
non-monotonic~\cite{Glotzer,Glotzer2,pan} but monotonic in \cite{heuer},
or at fixed time $t = \tau(T)$,
for different temperatures, where it is found to be
increasing when the temperature decreases~\cite{Onuki,Glotzer,WBG}.
In practice, to extract $\xi(t,T)$ from the four-point correlation function
either in real space or
in Fourier space, one needs to postulate a specific
functional form of $G_4$.
In this respect, the results of the previous
section on simple lattice KCM's with no underlying liquid
structure prove instructive. It is clear that
with data similar to Fig.~\ref{3dfa2},
but obtained with much smaller system sizes,
with much less statistics, and polluted by the underlying structure
of the liquid, the precise extraction of dynamical
length-scales from Molecular Dynamics simulations
is not an easy task. More fundamentally,
extracting $\xi$ from fitting either $G_4(r,t)$
or $S_4(k,t)$ to a time-independent scaling form necessarily
biases the data as discussed above.
This also shows that it is a much easier and safer procedure
to work, say, at $t=\tau(T)$ and different temperatures
to observe the growth of a cooperative length
$\xi(\tau,T)$ when decreasing $T$.
On the other hand, it is not {\it a priori} granted that
the growth law of $\xi$ with $t = \tau(T)$
when changing $T$ is identical to that of $\xi(t,T)$ with $t$ at
a given temperature $T$. We will not be able to answer this
question with our numerical data.
\begin{figure}
\psfig{file=monolj1.ps,width=8.5cm}
\psfig{file=monolj.ps,width=8.5cm}
\caption{\label{monolj} Left: dynamic susceptibility at $T=0.5$
and $q=4.21$. The vertical lines indicate the times
at which $S_4(k,t)$ is evaluated in the bottom
figure.
Right: the corresponding three $S_4(k,t)$ (the last two have been
multiplied by 2 for clarity). Lines are fits to
the form (\ref{fifit}), the $k \to 0$ limit being
fixed by the value of $\chi_4(t)$, with a monotonically
growing length scale $\xi(t)$.}
\end{figure}
With the above caveats in mind, we present in
Fig.~\ref{monolj} some numerical data
in the binary Lennard-Jones mixture
at a fixed temperature, $T=0.5$, and three different times
which fall before, at and after the peak in $\chi_4(t)$.
The difficulty of getting clear-cut quantitative determinations
of $\xi$ is obvious from Fig. \ref{monolj}. One would need
much larger system sizes to properly measure $S_4(k,t)$
at small wavevectors, large times and low temperatures. The system
simulated here contains $1372$ particles. One could possibly
increase the number of particles by a factor 10,
but the increase in linear size would be very modest,
a factor $10^{1/3} \approx 2.15$.
Nonetheless, we have fitted the data in Fig.~\ref{monolj}
with a simple empirical form,
\begin{equation}
S_4(k,t) = \frac{\chi_4(t)-C}{1+ (k \xi)^\beta} + C,
\label{fifit}
\end{equation}
for $0 \leq k < k_0$, $k_0 \approx 7.21$
being the position of the first peak in the static structure factor.
As for the $d=3$ FA model, the exponent $\beta(t)$ and
the dynamic length $\xi(t)$ are fitting parameters. There is
an additional free parameter, the additive constant $C$ in Eq.~(\ref{fifit}),
which accounts for the fact that the structure of the liquid
starts to be visible and creates some signal in $S_4(k,t)$ when
$k \to k_0$.
The results of the fitting procedure are presented in Fig.~\ref{monolj}
with lines going through the data. Note that the fits
in Fig.~\ref{monolj} are constrained at low $k$
by the value of the dynamic susceptibility $\chi_4(t)$.
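A minimal sketch of such a constrained fit (with synthetic data and
hypothetical parameter values) is the following; only $\xi$, $\beta$ and
$C$ are free, while the $k\to0$ limit is pinned to the measured
$\chi_4(t)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

chi4_t = 20.0                      # measured chi_4(t), held fixed

def S4_fifit(k, xi, beta, C):
    return (chi4_t - C) / (1.0 + (k * xi)**beta) + C

rng = np.random.default_rng(0)
k = np.linspace(0.2, 7.0, 50)      # fit window 0 <= k < k_0 ~ 7.21
s4 = S4_fifit(k, 2.5, 2.2, 0.8)    # synthetic 'data' ...
s4 *= 1.0 + 0.03 * rng.standard_normal(k.size)   # ... with 3% noise

(xi, beta, C), _ = curve_fit(S4_fifit, k, s4, p0=[1.0, 2.0, 0.5])
print("xi = %.2f, beta = %.2f, C = %.2f" % (xi, beta, C))
\end{verbatim}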
The most important result from Fig.~\ref{monolj} is that if the functional
form of $S_4(k,t)$ is given some freedom, here via the time
dependent exponent $\beta(t)$, the extracted dynamic
lengthscale $\xi(t)$ indeed continues to grow
monotonically after the peak of the dynamic susceptibility, contrary
to what was reported previously~\cite{Glotzer,Glotzer2,pan}, but in agreement with \cite{heuer}.
We emphasize once more that this result physically makes sense.
At times much larger than $t^*$, only very rare but very large
dynamical domains contribute to the dynamic structure factor, so that
spatial correlations are weak, but extremely long-ranged. The existence
of an ever growing length scale is supported by any model with a hydrodynamic
limit (such as the phonon or defect models studied here) and
is in a sense trivial. The really interesting
piece of information is the value of this length scale for $t=\tau_\alpha$,
i.e. when the relevant relaxation processes take place.
We conclude that our numerical data are not inconsistent with a
monotonically growing length scale even for $t > \tau$, although
addressing more quantitative issues such as
the functional form of the growth law and its temperature dependence would
require quite an important, but certainly
worthwhile, numerical effort.
\section{Conclusion and final comments}\label{conclusion}
Let us summarize the results and the various points made in this rather
dense paper.
First, we have computed numerically and analytically, exactly or
approximately, the four-point correlation function designed to characterize
non trivial cooperative dynamics in glassy systems
within several theoretical models: mode-coupling theory,
collectively rearranging regions,
diffusing defects, kinetically constrained models, elastic/plastic
deformations.
The conclusion is that the behaviour of $\chi_4(t)$ is rather rich, with
different regimes summarized in the Introduction and in Fig. 1. We have
computed the early time exponent $\mu$ and the peak exponent $\lambda$ for
quite a few different models of glass forming liquids, and shown that the
values of these exponents resulting from these models are quite different,
suggesting that the detailed study of $\chi_4(t,T)$ should allow one to
eliminate or confirm some of the theoretical models for glass formation.
In this spirit, we first simulated some non-cooperative KCMs, such as the
one-spin facilitated FA model
in $d=1$ and
$d=3$ and the East model. The assumption of
point-like defects that diffuse, possibly with an anomalous diffusion exponent,
gives a good account of the shape of the four-point correlation function
and of the four-point
susceptibility which are in quantitative agreement with the above results for
the independent defects model.
For strong glasses such as ${\rm S}{\rm i}{\rm O}_{2}$,
where the relaxation is
due to defect diffusion, our results should be quantitative. It would be very
interesting to reconsider numerical simulations of the dynamics of ${\rm S}{\rm i}{\rm O}_{2}$
in the light of the present paper to check in more detail that the defect picture
is indeed correct in this case (note that our results should
enable one to extract, in
principle, the properties, density and relaxation times of defects from the
four-point correlation function).
For the $d=3$ one-spin facilitated FA model, we see clear indications of the
interactions between defects as time increases.
This leads to small deviations of the
numerically obtained exponents with respect to those predicted by our analysis
of the independent defect model, which does not account for interactions
between defects.
As for the identification of a growing length scale $\xi(t)$ from numerical data,
we have seen that even within
these simplified lattice models, this can be a rather difficult
task. Our results point toward a dynamical correlation length that
grows forever and a behaviour of $S_{4} (k,t)$
different from the Ornstein-Zernike form but with similar asymptotic
behaviour. We leave the study of cooperative KCMs,
for which a more complicated behaviour should occur, for future work. In particular, the
detailed form of $S_{4} (k,t)$ should contain information about the inner structure of the
corresponding defects.
We have also analyzed the four-point susceptibility of both a Lennard-Jones system and
a soft-sphere system, and shown that the initial
exponent $\mu$ of the four-point susceptibility decreases with temperature
and is rather small, $\mu < 1$. We have found, perhaps unexpectedly, a reasonable agreement for
$\mu $ and $\lambda $ with the predictions of MCT but not with other theoretical scenarios,
such as simple diffusing defects, strong KCMs or CRR (although this might be a question of
temperature and time scales, since both CRR and cooperative KCMs are supposed to apply closer
to the glass transition temperature). Finally we confirm that the extraction of the growth law
of $\xi(t)$ at a given temperature is difficult, and we can only say at this stage that the data is not
incompatible with the idea that $\xi(t)$ grows monotonically, even beyond $t = \tau_\alpha$,
in the Lennard-Jones system.
As for further work and perspectives, we think
that the following points would be worth investigating.
First, it would be very interesting to develop a detailed theory of
the crossover
between the elastic regime described in Section~\ref{Elastic} and the Mode-Coupling
$\beta$ relaxation regime. Is it possible, in particular, to describe approximately
the `melting' of the glass as one approaches the Mode-Coupling transition
temperature from below? Second,
we only considered systems in equilibrium.
One in fact expects that the four-point susceptibility
also contains
very useful information in the aging regime (see \cite{Leticia, mayer}).
Detailed predictions
in this regime may enable one to probe the mechanisms for slow dynamics and the issue of the cooperative
length at low temperature in the aging regime \cite{Leticia}. In particular, the elastic contribution should not
age whereas the CRR contribution (characterized by the same exponent $\mu$) should exhibit some aging,
possibly allowing one to separate the two effects. Third, the quantitative study of four-point functions in
cooperative KCMs where defects have a complex inner structure would be very interesting, since it is clear from the
present paper that simpler KCMs seem to fail at describing quantitatively $\chi_4(t)$ in fragile systems. Fourth, it
would be interesting to define more complicated correlation functions, for example, a fully general four point function,
or higher order correlation functions, in order to test in a more stringent way the idea
of cooperativity in glassy
systems, and distinguish systems where the growth of $\chi_4(t)$ is trivial, such as elastic solids, from
those in which
a truly non trivial cooperativity governs the dynamics. Finally, it seems clear that this issue of
cooperativity and
its associated length scale can only be convincingly settled if long time scales, low temperature
regimes can be
probed quantitatively in experimental systems. We hope that the present paper will motivate ways to
directly
access four-point functions experimentally in glassy systems (see \cite{mayer}); natural candidates
for this are
colloids \cite{Weeks} and granular materials \cite{Dauchot,Alex}, although there might be ways to
investigate this
question in molecular glasses and spin-glasses as well \cite{ustocome}.
\begin{acknowledgments}
Fig.~\ref{softsphere} was obtained from the unpublished data
of D.R. Reichman and R.A. Denny. We are very grateful to them for providing these results.
We thank E. Bertin, O. Dauchot, J.P. Garrahan and D.R. Reichman for discussions.
G. B. is partially
supported by the European Community's Human Potential Programme
contracts HPRN-CT-2002-00307 (DYGLAGEMEM).
C.T. is supported by the European Community's Human Potential Programme
contracts HPRN-CT-2002-00319 (STIPCO).
\end{acknowledgments}
\section*{Appendix A: Dynamics of elastic networks}
\subsection*{A.1 The four-point correlation function - Over-damped case}
We will define $G_4(\vec r,t)$ for the elastic model defined in the text as:
\begin{equation}
G_4(\vec r,t)=\langle \cos q \left[\phi(\vec r,t)-\phi(\vec r,0)\right]\cos q
\left[\phi(\vec r=0,t)-\phi(\vec r=0,0)\right]
\rangle-C^2(q,t),
\end{equation}
which is equivalent to:
\begin{eqnarray}\nonumber
G_4(\vec r,t)&=&\frac12 \langle \cos q \left[\phi(\vec r,t)-\phi(\vec r,0)+\phi(\vec r=0,t)-\phi(\vec r=0,0)
\right]\rangle
\\\nonumber
&+& \frac12 \langle \cos q \left[\phi(\vec r,t)-\phi(\vec r,0)-\phi(\vec r=0,t)+\phi(\vec r=0,0)
\right]\rangle-C^2(q,t).
\end{eqnarray}
Using the fact that the field $\phi$ is Gaussian, we finally find:
\begin{equation}
G_4(\vec r,t)=C^2(q,t)\left(\cosh(2q^2 R(\vec r,t))-1\right),
\end{equation}
where:
\begin{eqnarray}\nonumber
R(\vec r,t)&=& \langle(\phi(\vec r,t)-\phi(\vec r,0))(\phi(\vec r=0,t)-\phi(\vec r=0,0))\rangle\\
&=&\frac{T}{\kappa} \int \frac{d^dk}{(2 \pi)^d k^2} e^{-i\vec k \cdot \vec r}(1-e^{-\kappa k^2 t})
\end{eqnarray}
Hence,
\begin{equation}
R(\vec r,t)=\frac{T}{\kappa} (\kappa t)^{1-d/2} F(\frac{r}{\sqrt{\kappa t}})
\end{equation}
with:
\begin{equation}
F(z)=z^{2-d} [I(\infty)-I(z)]; \qquad I(z)=\int \frac{d^dw}{(2\pi)^d w^2} e^{-iw_1-\frac{w^2}{z^2}}.
\end{equation}
We thus see immediately that $G_4(\vec r,t)$ will be governed by a 'diffusive' correlation length
$\xi(t) \sim \sqrt{\kappa t}$, as expected from the structure of the Langevin equation that describes
relaxational dynamics. Note that for under-damped dynamics, sound waves would change this scaling.
It is useful to consider the following quantity:
\begin{equation}
J(z)=\frac{\partial I(z)}{\partial(\frac{1}{z^2})}=\int \frac{d^dw}{(2\pi)^d} e^{-iw_1-\frac{w^2}{z^2}}.
\end{equation}
In $d=3$, after integrating over $dw_1$, one has
\begin{equation}
J(z)= \frac{1}{8 \pi^{3/2}} z^3 e^{-\frac{z^2}{4}}
\end{equation}
and:
\begin{equation}
I(z)=\frac{1}{4 \pi^{3/2}} \int_z^\infty e^{-\frac{u^2}{4}} du
\end{equation}
Therefore, for $z \ll 1$, one finds $F(z) \simeq (4\pi z)^{-1}$ and $R(\vec r,t) \simeq T/(4\pi \kappa r)$,
whereas for $z \gg 1$, $F(z) \simeq (2 \pi^{3/2})^{-1} \exp(-z^2/4)/z^2$.
Thus, for $r \ll \xi(t)$ and $\kappa \Lambda^2 t \gg 1$, the four-point correlation function behaves as:
\begin{equation}
G_4(\vec r,t)=f_q^2 \left(\cosh(\frac{T q^2}{2\pi \kappa r})-1\right).
\end{equation}
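These limits are easily checked numerically. The following Python sketch
evaluates $R(\vec r,t)$ from the scaling form
$F(z)=\mathrm{erfc}(z/2)/(4\pi z)$, which reproduces both asymptotic
behaviours quoted above, and compares $G_4$ with its small-$r$ expression;
the parameter values are arbitrary illustrations:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

T, kappa, q, fq = 1.0, 1.0, 6.0, 0.8   # hypothetical parameters

def R(r, t):
    z = r / np.sqrt(kappa * t)
    F = erfc(z / 2.0) / (4.0 * np.pi * z)
    return (T / kappa) * (kappa * t)**(-0.5) * F   # d = 3: (kappa t)^{1-d/2}

def G4(r, t):
    return fq**2 * (np.cosh(2.0 * q**2 * R(r, t)) - 1.0)

t = 10.0
for r in (0.1, 1.0, 10.0):
    small_r = fq**2 * (np.cosh(T * q**2 / (2 * np.pi * kappa * r)) - 1.0)
    print(r, G4(r, t), small_r)   # the two agree for r << xi(t)
\end{verbatim}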
\subsection*{A.2 The four-point correlation function - Under-damped case}
We have now:
\begin{equation}
m\frac{\partial^2 \phi(\vec r,t)}{\partial t^2}= \kappa \Delta \phi(\vec r,t)
\end{equation}
whose solutions in Fourier space are:
\begin{equation}
\phi_k(t)=\exp( ik Vt) \phi_k(0),
\end{equation}
with $V=(\kappa/m)^{1/2}$.
We now have:
\begin{equation}
\label{beta}
\langle [\phi(\vec r,t)-\phi(\vec r,0)]^2\rangle=
\frac{2T}{\kappa} \int \frac{| \exp(ik Vt)-1|^2}{k^2} \frac{d^dk}{(2\pi)^d}=
\frac{4T}{\kappa} \int (1-\cos[Vkt])dk.
\end{equation}
In $d=3$, we find obviously the same result for $f_q$ and $G_4$ as above, but $R(\vec r,t)$ is now equal to:
\begin{equation}
R(\vec r,t)=\frac{T}{\kappa} \int \frac{d^dk}{(2 \pi)^d k^2} e^{-i\vec k \cdot \vec r}(1-\cos(k Vt))
\end{equation}
that we write:
\begin{equation}
R(\vec r,t)=\frac{T}{\kappa} [I(\vec r,0)-I(\vec r,t)]
\end{equation}
where:
\begin{equation}
I(\vec r,t)=\int \frac{d^dk}{(2 \pi)^d k^2} e^{-i\vec k \cdot \vec r}\cos(k Vt)
\end{equation}
By introducing $z=Vt/r$ and changing variable $q\equiv r k$, and also $u=\cos\theta$ and
integrating over $u$ one finds:
\begin{equation}
I(\vec r,t)= \frac{2\pi}{r} \int dq q^{-1} [\sin(q(1+z))+\sin(q(1-z))]
\end{equation}
Consider the first term:
\begin{equation}
I(\vec r,t)= \frac{2\pi}{r} \int dq q^{-1} \sin(q(1+z))
\end{equation}
Changing variable $v=q(1+z)$ directly shows that this integral does not depend on $z$, as long as $(1+z)$ is positive.
This is true for the other integral, which does not depend on $z$ as long as $1-z$ is positive. If $1-z$ is negative
then the integral changes sign. Therefore we have that $I(\vec r,t)=I(\vec r,0)$ if $z<1$, and $I(\vec r,t)=0$ if $z>1$.
Therefore $R(\vec r,t)=0$ if $z<1$ and $R(\vec r,t)=\frac{T}{4\pi\kappa r}$ when $z>1$. The result is very intuitive:
when $z<1$ the information does not have time to travel the distance $r$ and there are no correlations. For $z>1$ the
two regions are ``connected'' and one finds the free field correlations. Brownian and Newtonian dynamics
furnish the same correlation for a given $r$ when the time diverges, as we expect. Finally, it is straightforward
to obtain the result quoted in the text for $\chi_4(t)$.
\subsection*{A.3 The low dimensional case}
We give here, without much detail, the results for elastic networks in $d=1$ and $d=2$. In
$d=1$, as is well known, each particle wanders arbitrarily far from its initial position but in
an anomalous, sub-diffusing way, as $t^{1/4}$. Correspondingly, the dynamical structure factor
decays as a stretched exponential:
\begin{equation}
\ln C(q,t) \sim -\frac{T}{\kappa} q^2 t^{1/2}
\end{equation}
Note that the $t^{1/4}$ comes from a collective displacement of the cages, and is similar to the
anomalous diffusion observed for hard spheres in one dimension, since the latter problem can be
mapped onto the Edwards-Wilkinson problem in one dimension \cite{Arrantia,Alex}.
We expect that the results obtained here for $G_4$ should also hold in this case. In fact,
this model was recently discussed in the context of a simple $d=1$ granular compaction model, see \cite{Alex}.
In $d=2$, the displacement grows logarithmically with time, leading to a power-law decay of the
dynamical structure factor with a $q$ dependent exponent:
\begin{equation}
C(q,t) \sim t^{-y} \qquad y = \frac{q^2 T}{8 \pi \kappa}.
\end{equation}
Turning now to $\chi_4(t)$, we find that after a short transient, $\chi_4(t)$ grows as $t^{1/2}$
in $d=1$ and behaves as $t^{1-2y}$ in $d=2$.
\section*{Appendix B: Calculations for the defect model}
In section \ref{defect} we have reduced the computation of $G_4(r,t)$ and $\chi_4(t)$
to probability distributions of a single random walk. In the following we shall show
how these quantities can be computed in any spatial dimension.
Let $F^z_x(u)$ be the probability that a random walk starting in $z$ reaches $x$
for the first time at time $u$. $P^z_x(t)$, the probability that a vacancy
starts in $z$ at time zero and reaches for the first time $x$ at a time less than $t$, reads:
\begin{equation}
P^z_x(t)=\int_0^t F^z_x(u) du
\end{equation}
Therefore, we need to calculate $F^z_x(u)$. The trick to do that is writing a linear equation relating
$F^z_x$, which we want to compute, to $P^z(x,t)$, the probability that a random walk with
self-diffusion coefficient $D$, starting in $z$, is in $x$ at time $t$, which is well known.
This linear equation is:
\begin{equation}
P^z(x,t)=\delta_{x,z}\delta_{t,0}+\int_0^t F^z_x(u)P^x(x,t-u) du\quad.
\end{equation}
By taking the Laplace transform (from now on $s$ is the variable
conjugated to $t$ and $\cal{L}$ indicates the Laplace transform) we obtain:
\begin{equation}
\label{10}
F^z_x(t)={\cal{L}}^{-1}\left(\frac{{\cal{L}} P^z(x,s)-\delta_{x,z}}{{\cal{L}} P^x(x,s)}\right)(t)
\end{equation}
and
\begin{eqnarray}
\label{10b}
P^z_x(t)&&=\int_0^t {\cal{L}}^{-1}\left(\frac{{\cal{L}} P^z(x,s)-\delta_{x,z}}{{\cal{L}} P^x(x,s)}\right)(t')
dt'\\
&&= {\cal{L}}^{-1}\frac{1}{s}\frac{{\cal{L}} P^z(x,s)-
\delta_{x,z}}{{\cal{L}} P^x(x,s)}
\end{eqnarray}
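The renewal identity behind Eq.~(\ref{10}) can be checked directly by a
short Monte Carlo computation; the following sketch does so for a
discrete-time simple random walk in $d=1$ (all parameters are
illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
z, x, T, n_runs = 0, 3, 60, 100_000

P_zx = np.zeros(T + 1)   # occupation probability P^z(x,t)
F_zx = np.zeros(T + 1)   # first-passage probability F^z_x(t)
P_xx = np.zeros(T + 1)   # return probability P^x(x,t)
for _ in range(n_runs):
    w = z + np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=T))))
    P_zx += (w == x)
    hits = np.flatnonzero(w == x)
    if hits.size:
        F_zx[hits[0]] += 1
    w2 = x + np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=T))))
    P_xx += (w2 == x)
P_zx /= n_runs; F_zx /= n_runs; P_xx /= n_runs

t = 21               # z != x, so the Kronecker-delta term drops out
lhs = P_zx[t]
rhs = sum(F_zx[u] * P_xx[t - u] for u in range(t + 1))
print(lhs, rhs)      # the two estimates agree within Monte Carlo error
\end{verbatim}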
A similar strategy can be used to calculate $P^z_{x, {\overline y}}(t)$.
Indeed the following equalities hold
\begin{eqnarray}
P^z_{x, {\overline y}}(t)&=&\int_0^t F^z_{{x,\overline y}}(t')P^x_{{\overline y}}(t-t') dt'\nonumber\\
P^z_{y, {\overline x}}(t)&=&\int_0^t F^z_{{\overline x},y}(t')P^y_{{\overline x}}(t-t') dt'\nonumber\\
\end{eqnarray}
where $F^z_{{\overline x},y}(t)$ is the probability that a random walk starting in $z$
at time zero reaches $y$ for the first time at t but never touches $x$ at
$s\leq t$.
Therefore, in order to calculate
$P^z_{x, {\overline y}}(t)+P^z_{y, {\overline x}}(t)$
we need to calculate $F^z_{x, {\overline y}}(t)+F^z_{y, {\overline x}}(t)$.
It is immediate to check that the following equations hold for any choice of $x,z,y$
\begin{equation}
\left\{
\begin{array}{ll}
F^z_x(t) & =\delta_{x,z}\delta_{t,0}+\int_{0}^t ds F^z_{{\overline x},y}(s)F^y_x(t-s)
+F^z_{x,{\overline y}}(t)\\
F^z_y(t) & =\delta_{y,z}\delta_{t,0}+\int_{0}^t ds F^z_{x,{\overline y}}(s)F^x_y(t-s)
+F^z_{{\overline x},y}(t)
\end{array}
\right.
\end{equation}
which implies, again by Laplace transform ($z$ is always different
from $x$ and $y$ in the following so we will skip the Kronecker deltas),
\begin{equation}
\label{12}
F^z_{{\overline x},y}(t)+F^z_{x,{\overline y}}(t)={\cal{L}} ^{-1}\frac{{\cal{L}}
F^z_x(s)+{\cal{L}}F^z_y(s)}{{\cal{L}}F^x_y(s)+{\cal{L}}F^{x}_x(s)}
\end{equation}
Using the expression (\ref{10}) for the $F^x_y(s)$ we get
\begin{equation}
F^z_{{\overline x},y}(t)+F^z_{x,{\overline y}}(t)={\cal{L}} ^{-1}\frac{{\cal{L}}
P^z(x,s)+{\cal{L}}P^z(y,s)}{{\cal{L}}P^x (y,s)+{\cal{L}}P^{x} (x,s)}
\end{equation}
Furthermore $ P^y_{{\overline x}}(t)=1-P^{y}_{x} (t)$. Hence we obtain
\begin{equation}\label{}
{\cal{L}} P^y_{{\overline x}}(s)=\frac{1}{s} -{\cal{L}}P^{y}_{x} (s) =
\frac{1}{s} (1-\frac{{\cal{L}}P^{y} (x,s)}{{\cal{L}}P^{x} (x,s)})
\end{equation}
Finally, we obtain the expression for
\begin{equation}
{\cal {L}} (P^z_{x, {\overline y}}(s)+P^z_{y,{\overline x}}(s))=\frac{{\cal{L}}
P^z(x,s)+{\cal{L}}P^z(y,s)}{{\cal{L}}P^x(y,s)+{\cal{L}}P^{x} (x,s)}
\frac{1}{s} \left(1-\frac{{\cal{L}}P^{y} (x,s)}{{\cal{L}}P^{x} (x,s)}\right)
\end{equation}
A useful way to rewrite this expression is obtained
by summing and subtracting the Laplace transform of $P^{z}_{x} (t)+P^z_y(t)$:
\begin{equation}
P^z_{x, {\overline y}}(t)+P^z_{y,{\overline x}}(t)=P^{z}_{x} (t)+P^{z}_{y} (t)-2{\cal{L}}^{-1}\frac{{\cal{L}}
P^z(x,s)+{\cal{L}}P^z(y,s)}{{\cal{L}}P^x(y,s)+{\cal{L}}P^{x} (x,s)}
\frac{1}{s} \frac{{\cal{L}}P^{y} (x,s)}{{\cal{L}}P^{x} (x,s)}
\end{equation}
Finally putting together all the different terms we have:
\begin{equation}\label{}
\langle n_x(t)n_y(t)\rangle =\exp \left(-2\rho_{v}-2\rho_{v}N (t)+2\rho_{v}P^{y}_{x}
(t)+\rho_{v}G(t,x-y)
\right)
\end{equation}
where $N (t)=\sum_{z\neq x} P^z_x (t)$ is the average number of distinct sites
(minus 1) visited by a random walk during the interval of time $t$ and
\begin{equation}
G (t,x-y)={\cal{L}}^{-1}
\left[\sum_{z\neq x,y}
\frac{{\cal{L}}
P^z(x,s)+{\cal{L}}P^z(y,s)}{{\cal{L}}P^x(y,s)+{\cal{L}}P^{x} (x,s)}
\frac{1}{s} \frac{{\cal{L}}P^{y} (x,s)}{{\cal{L}}P^{x} (x,s)}\right]
\label{G}
\end{equation}
Since
\begin{equation}\label{}
\langle n_x(t)\rangle^{2}=\exp \left(-2\rho_{v}-2\rho_{v}N (t)\right)
\end{equation}
the expression of
$G_4$ is
\begin{equation}\label{final}
G_4 (x-y,t)=\exp \left(-2\rho_{v}-2\rho_{v}N (t)
\right)\left[\exp \left(2\rho_{v}P^{y}_{x}
(t)+\rho_{v}G (t,x-y)
\right)-1 \right]
\end{equation}
In the following we shall analyze separately the one dimensional
case, the three or higher dimensional case and the two dimensional
case.
\subsection*{B.1 One dimension}
\label{1D}
Consider a symmetric random walk on a one dimensional lattice with
lattice spacing $a$, by Laplace
transforming the master equation
\begin{equation}
\frac{dP^z(x,t)}{dt}=\frac{P^z(x+a,t)+P^z(x-a,t)-2P^z(x,t)}{2}
\end{equation}
one immediately obtains
\begin{equation}
{\cal{L}}P^z(x,s)=\int_{-\pi/a}^{\pi/a}\frac{dk}{2\pi}\frac{e^{ik (x-z)}}{\zeta(k)+s}
\end{equation}
where $\zeta(k)=(1-\cos k)$.
In the continuum limit $a\to 0$, $(x-y)\propto a$ and $Dt/2\propto a^2$,
the above integral can be solved with the well known result
\begin{equation}
{\cal{L}} P^z(x,s)=\frac{1}{ \sqrt{4 D s}}e^{\frac{-\sqrt{s} |x-z|}{\sqrt D}}
\end{equation}
which corresponds to the solution of the diffusion equation for a one
dimensional Brownian
motion with diffusion coefficient $D$, i.e.
\begin{equation}
\frac{dP}{dt}=D\frac{d^2P}{dx^2}
\end{equation}
Let us now compute all the functions needed to get $G_{4}$.
First
\[
N (t)=\sum_{z\neq x} P^z_x(t)=\sum_{z\neq x}{\cal{L}}^{-1} \left(\frac{1}{s}
\frac{{\cal{L}}P^{z} (x,s)}{{\cal{L}}P^{x} (x,s)}\right) (t)
\]
where we used equation (\ref{10b}).
When $t\gg1$ we get
\[
N (t)=4\frac{\sqrt{Dt}}{\sqrt{\pi}}
\]
Second,
using the expression (\ref{G}) of $G$ in terms of ${\cal{L}} P^z(x,s)$ we get:
\[
{\cal{L}}G (s,x-y)=2\frac{\sqrt{D}}{s^{3/2}}\frac{e^{\frac{-\sqrt{s}
|x-y|}{\sqrt D}}}{e^{\frac{-\sqrt{s} |x-y|}{\sqrt D}}+1}
\]
Changing variable in the inverse Laplace transform we get:
\[
G (t)=4\sqrt{Dt}f \left( \frac{|x-y|}{\sqrt{2Dt}}\right)
\]
where $f \left(\frac{|x-y|}{\sqrt{2Dt}}\right)$ equals
\[
f \left(\frac{|x-y|}{\sqrt{2Dt}}\right)=\int_{-i\infty -\gamma}^{+i\infty -\gamma}
\frac{e^{-\sqrt{2s}\,|x-y|/\sqrt{2Dt}}}{e^{-\sqrt{2s}\,|x-y|/\sqrt{2Dt}}+1}
e^{-s}\frac{ds}{s^{3/2}}\]
Finally $P^{y}_{x} (t)$ can be computed easily but it is always much
smaller than the other terms in the exponential so we are going to
neglect it. The resulting expression for $G_{4}$ is:
\begin{equation}\label{g41d}
G_{4} (x-y,t)=\exp \left(-2\rho_{v}-\frac{8\rho_{v}}{\sqrt{\pi}}\sqrt{Dt}
\right)\left[\exp \left(\rho_{v}2\sqrt{Dt}f\left( \frac{|x-y|}{\sqrt{2Dt}}\right)
\right)-1 \right]
\end{equation}
Note that the typical time-scale is $\tau=\frac{1}{\rho_{v}^{2}D}$ and
since we focus on $\rho_{v}\rightarrow 0$ we can rewrite the above
expression as:
\begin{equation}\label{g41d2}
G_{4} (x-y,t)=\exp \left(-\frac{\sqrt{8}}{\sqrt{\pi}}\sqrt{t/\tau }
\right)\left[\exp \left(2\sqrt{t/\tau }f\left(
\rho_{v}\frac{|x-y|}{\sqrt{2t/\tau }}\right)
\right)-1 \right]
\end{equation}
Integrating over $x-y$ to get the $\chi_{4}$ we find:
\begin{equation}\label{chi41d}
\chi_{4} (t)=\frac{2}{\rho_{v}}\exp \left(-\frac{8}{\sqrt{\pi}}\sqrt{t/\tau }
\right)\sqrt{2t/\tau }\int_{0}^{+\infty}dx \left[\exp \left(2\sqrt{t/\tau }f(x)
\right)-1 \right]
\end{equation}
In particular when $t/\tau \ll 1$ we have
\begin{equation}\label{chi41da}
\chi_{4} (t)\propto \frac{1}{\rho_{v}} (t/\tau )
\end{equation}
The interpretation of this result is that at short times the defects
do not intersect and $\chi_{4}$ is just the square of the average number
of sites visited by a random walk up to time $t$. We will see
that this interpretation is indeed correct in any dimension.
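This interpretation can be checked by a direct Monte Carlo simulation of
the $d=1$ defect model; the following sketch measures $\chi_4(t)$ for
independent walkers on a ring (system sizes are illustrative only) and
exhibits the linear short-time growth of Eq.~(\ref{chi41da}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
L, rho_v, T, n_runs = 2000, 0.01, 200, 400
N_v = int(rho_v * L)                 # number of independent walkers

mean = np.zeros(T)
second = np.zeros(T)
for _ in range(n_runs):
    walkers = rng.integers(0, L, size=N_v)
    visited = np.zeros(L, dtype=bool)
    visited[walkers] = True          # n_x(t) = 1 iff x never visited
    for t in range(T):
        walkers = (walkers + rng.choice([-1, 1], size=N_v)) % L
        visited[walkers] = True
        frac = 1.0 - visited.mean()  # (1/L) sum_x n_x(t)
        mean[t] += frac
        second[t] += frac**2
mean /= n_runs
chi4 = L * (second / n_runs - mean**2)
print(chi4[:10] / rho_v)             # grows linearly in t at short times
\end{verbatim}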
Finally, after some algebra it is possible to obtain from
(\ref{chi41d}) that $\chi_{4} (t)\simeq\frac{c}{\rho_v} \exp \left(-\frac{4}{\sqrt{\pi}}\sqrt{t/\tau }
\right)$ at very large times ($c$ is a numerical constant). Thus,
as found in simulations, the normalized $\chi_{4}$ does not go to zero as it happens in three dimensions.
\subsection*{B.2 Three dimensions and higher}\label{3D}
Consider a symmetric random walk on a cubic lattice; the general expression for $P^z(x,s)$ is
\begin{equation}
P^z(x,s)=\int_{BZ}\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik (x-z)}}{\zeta(k)+s}
\end{equation}
where $BZ$ means Brillouin zone and
$\zeta(k)=\sum_{i=1}^{d}(1-\cos k_{i})$ for a hyper-cubic lattice ($k_i$ is the component of $\vec{k}$ in the direction $i$).
Also in this case we consider the continuum limit
$(x-y)/\sqrt{Dt/2}\propto O (1)$ and look for times $t$ much larger than one.
Let us again compute all the needed quantities.
First, $N (t)$. In this case for $t\gg1$ we find that
\[
N (t)= D\left(\int_{BZ}\frac{d^{d}k}{\pi \zeta(k)}\right)^{-1}{\cal{L}}^{-1} \frac{1}{s^{2}}
\]
Hence $N (t)= c_1tD$
where $c_1=\left(\int_{BZ}\frac{d^dk}{\pi \zeta(k)}\right)^{-1}$.
Again, we neglect the $P^{y}_{x} (t)$ term and we focus on $G$ in the
continuum limit; for $t\gg 1$ we get:
\[
{\cal{L}}G=\frac{1}{s^{2}}\frac{\int_{BZ}\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik (x-y)}}
{Dk^{2}+s}}{\left( \int_{BZ}\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{Dk^2}\right)^{2}}
\]
Changing variable in the inverse Laplace transform we get:
\[
G (t)=D^2\int_{-i\infty -\gamma}^{+i\infty -\gamma}\int_{BZ}
\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik (x-y)}}{Dk^{2}+s} \frac{\exp (ts)}{C^{2}s^{2}}ds,
\]
where $C\equiv\int_{BZ}\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{Dk^2}$.
Since we know the Inverse Laplace Transform of the function resulting
from the integral over $k$ (it is simply $P^{y} (x,t)$) and each $1/s$
adds an integral we finally get
\[
G (t)= c_2(Dt)^{2}\int_{0}^{1}du\int_{0}^{u}dv \frac{e^{-\frac{(x-y)^{2}}{2Dtv}}}{(2Dtv)^{d/2}}
\]
where $c_2$ is a numerical constant of order unity. From this expression, we finally obtain
\begin{equation}\label{g43d}
G_{4} (x-y,t)=\exp (-2\rho_{v}-2\rho_{v}c_1D t)
\left[ \exp \left(\rho_{v}(c_2Dt)^{2}\int_{0}^{1}du\int_{0}^{u}dv \frac{e^{-\frac{(x-y)^{2}}{2Dtv}}}
{(2Dtv)^{3/2}} \right)-1\right]
\end{equation}
and the results quoted in the main text.
\subsection*{B.3 Two dimensions}\label{2D}
In two dimensions things are a bit tricky because of logarithmic
corrections. Briefly, we obtain that
\begin{equation}\label{g42d}
G_{4} (x-y,t)=\exp \left(-2\frac{c_3 t}{\tau \ln t}\right)
\frac{1}{\rho_{v}}c_4^{2}(t/\tau)^{2}\frac{1}{(\ln t D)^{2}}\int_{0}^{1}du\int_{0}^{u}dv
\frac{e^{-\frac{(x-y)^{2}}{2Dvt}}}{(2Dvt)}
\end{equation}
with $c_3$ and $c_4$ constants of order unity.
Hence, integrating over $x-y$, we get:
\begin{equation}\label{chi42d}
\chi_{4}(t)=\exp \left(-2\frac{c_3 t}{\tau \ln t}\right)
\frac{1}{2\rho_{v}}c_4^{2}(t/\tau)^{2}\frac{1}{(\ln tD)^{2}}
\end{equation}
In particular when $t/\tau\ll1$ we have
\begin{equation}\label{chi41db}
\chi_{4} (t)\propto \frac{1}{\rho_{v}} \left(\frac{t}{\tau \ln t}\right)^2
\end{equation}
Again, since the number of sites visited on average by a random walk in 2D goes like
$t/\ln t$, at short times $\chi_4$ is the square of the average number
of sites visited up to time $t$.
\subsection*{B.4 Density--density correlations}
We now sketch the calculation for the density four point correlation, defined as:
\begin{equation}
G_4^d(x-y,t)\equiv
\langle (\eta_x(t)\eta_x(0)-\rho^{2}) (\eta_{y} (t)\eta_y(0)-\rho^{2})\rangle-\langle\eta_x(t)\eta_x(0)\rangle_{c}^2
\end{equation}
with $\eta_x(t)=0,1$ if
the site $x$ is empty or occupied at time $t$, respectively. We start from:
\begin{equation}
\langle\eta_x(t)\eta_x(0)\rangle_{c}^2=\left(\left[\frac{1}{V}\sum_{z, z\neq x}
(1-P^z(x,t))\right]^{N_v}-\rho^{2}\right)^{2}\nonumber
\end{equation}
Using that $\sum_{z} P^z(x,t)=1$ we get
\begin{equation}
\langle\eta_x(t)\eta_x(0)\rangle_{c}^2=\exp (-4\rho_{v}) \left(\exp (\rho_{v}P^{x} (x,t))-1 \right)^{2}
\end{equation}
In the limit $\rho_{v}\rightarrow 0$ we have:
\begin{equation}
\langle\eta_x(t)\eta_x(0)\rangle_{c}^2= \left( \rho_{v}P^{x} (x,t) \right)^{2}
\end{equation}
Similarly we find that
\begin{eqnarray}
\langle\eta_x(t)\eta_x(0)\eta_y(t)\eta_y(0)\rangle&=&\left(\frac{1}{V}\sum_{z, z\neq x,y}
(1-P^z(x,t)-P^z(y,t))\right)^{N_v}\nonumber\\
&=& \exp \left(-4\rho_{v}+2\rho_{v}P^{x} (x,t)+2\rho_{v}P^{y} (x,t) \right)\nonumber
\end{eqnarray}
Collecting all the pieces together we finally get at leading order in
$\rho_{v}$
\begin{equation}
G_4^d(x-y,t)=2\rho_{v}P^{y} (x,t)
\end{equation}
for $x\neq y$.
The interpretation of this equation is that the dynamical correlation
between $x$ and $y$ is due to the fact that the {\it same vacancy}
was in $x$ at time $0$ and in $y$ at time $t$, or vice-versa.
Integrating over $x-y$ one finds that at long times
$\chi_4(t)\propto 1/t^{d/2}$,
showing no interesting structure.
Reliability and band-width efficiency of relays have been crucial issues in {relay-based} wireless communications
including satellite communications and mobile communications.
Recently, the compute-and-forward (CAF) scheme~\cite{Nazer11} attracts great interests because of its high band-width efficiency.
In the CAF scheme, a relay tries to decode a linear combination of the transmitted signals from other relays and terminals.
In the next time slot, the decoded message is transmitted
to other relays and terminals.
It helps the wireless communication system to increase its band-width efficiency.
This concept is also crucial in the \emph{physical layer network coding}, which has been studied extensively~\cite{Katti08,Zhang09}.
In addition, the CAF scheme is effective for secure wireless communications~\cite{hwv}.
In order to investigate the performance limit of the CAF scheme,
we discuss the performance under the framework of the two-way relay channel.
In the two-way relay channel {(Fig.~\ref{zu_i0})},
two terminals named A and B try to exchange their messages $X_A$ and $X_B$, respectively,
through bi-directional connections to the relay R, instead of any direct connection to each other.
It has two phases.
First, the two terminals simultaneously send their messages to R, which is called
the multiple-access (MAC) phase.
Second, the relay R sends the received information to
both terminals A and B.
In the second phase, it is sufficient that the relay R
broadcasts the modulo sum $X_A\oplus X_B$ to both terminals A and B because
they have their own original messages.
Therefore, the relay R may decode only the modulo sum $X_A\oplus X_B$ in the MAC phase, which is called the CAF scheme.
In contrast, Yedla et al. \cite{Yedla09} proposed an efficient decoding strategy for decoding
both messages $X_A$ and $ X_B$ in the MAC phase, which is called the MAC separation decoding (SD) scheme
(see Fig.~\ref{zu_i0}).
The SD scheme has an advantage in directly decoding the messages without losing information as in the CAF scheme
while a decoder for the CAF scheme is much simpler than that for the SD scheme.
It is thus crucial to analyze their decoding performance in the MAC phase
to realize an efficient and practical relaying technique with high reliability and low {complexity}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.375\linewidth]{./CAF_SD_ISIT2019.pdf}
\caption{Schematic diagram of the CAF and SD schemes in the MAC phase.
}\label{zu_i0}
\end{figure}
{The key to realize a practical CAF scheme is a proper choice of error-correcting codes.
The low-density parity-check (LDPC) code} is a leading candidate due to its high reliability against
various channel models including an additive white Gaussian noise (AWGN) channel~\cite{MacKay99}.
Sula et al. proposed the LDPC coding technique for the CAF scheme with the binary phase shift keying (BPSK) modulation and evaluated its
decoding performance by numerical simulations~\cite{Sula17}.
The authors {theoretically} analyzed the LDPC codes and its spatial coupling coding
by the \emph{density evolution} (DE) method
and found that they {exhibit higher reliability} than the SD scheme~\cite{tiwh,twh2}.
{
For realizing a CAF scheme with non-binary modulations, lattice-based CAF methods based on PAM or QAM modulation
have been extensively studied~\cite{Nazer11,Feng}.
This is because the additive group property of lattices is well matched to the CAF scheme.
Although the lattice-based CAF methods provide near optimal performance when SNR is high enough,
there remain two issues regarding its practical implementation.
The first one is the decoding complexity.
In general, lattice decoding is computationally expensive and harder to implement compared with binary channel coding.
The second problem is that the lattice-based method cannot apply to a PSK or APSK modulation format,
which is often employed in satellite communications.
In order to find a possible solution for these problems,
we here discuss a bit-interleaved coded modulation (BICM) with $2^K$-PSK modulations
as the basis of a CAF scheme.
In the BICM scheme, a bit interleaver is set between an encoder and a mapper to separate
the coding and modulation~\cite{BICM}.
It remarkably reduces the computational cost of a decoder in spite of a small loss in decoding performance.
Although the LDPC-BICM scheme for the CAF scheme is studied in~\cite{Du},
the analysis is only based on the numerical simulations.
This means that a theoretical analysis unveiling the achievable rate with belief propagation (BP) decoding remains open.
}
In this paper, we will present the asymptotic performance analysis of the LDPC-BICM scheme
{in the MAC phase of the CAF scheme on the two-way relay channel.
First, we will propose a DE analysis of the LDPC-BICM scheme
without any conventional approximations.
Then, the achievable rate of the LDPC-BICM scheme for the CAF scheme
is obtained by the DE analysis, which offers a theoretical comparison with the alternative SD scheme.
}
\section{LDPC-BICM Scheme and Density Evolution}\label{sec_normal}
{Before describing the main results,
we first propose a novel DE analysis of the LDPC-BICM scheme.
Although the LDPC-BICM scheme has been extensively studied by a DE analysis~\cite{BICM,Lei},
the analysis is usually based on some \emph{approximations}.
A well-known approximation is the \emph{all-zero codeword assumption}~\cite{Hou} in which
a transmitter send an all-zero codeword as an instance of random codewords.
{Another approximation is the assumption on the distribution of extrinsic output values
in the \emph{extrinsic information transfer (EXIT) chart} analysis~\cite{Ash}.}
Unfortunately, these assumptions do not hold in general cases including the LDPC-BICM scheme
on higher order modulations.
Here, we will propose an alternative DE analysis based on that for asymmetric channels~\cite{Wang05}.
}
For simplicity, we consider the LDPC-BICM scheme for a single-access channel model such as a complex AWGN (CAWGN) channel.
Let us consider a {binary} LDPC code $C\subset \mathbb{F}_2^{Kn}$ ($\mathbb{F}_2\triangleq \{0,1\}$).
At a transmitter, a message is encoded to ${x} = (x_s^i)^{i=1,\dots , n}_{s=1,\dots, K}\in C$.
In the BICM scheme, the transmitter uses a bit interleaver $\pi$ to remove correlations in the interleaved signal $\pi({x})$.
The signal $\pi({x})$ is then modulated by a mapper and transmitted through a channel.
A receiver attempts to demap and decode $\pi({x})$ from the received signal $y$, and
detects the transmitted signal $x$ using the interleaver $\pi^{-1}$.
Assuming that the code length $n$ is sufficiently large and an interleaver $\pi$ is randomly generated,
each element of $\pi({x})$ is sufficiently uncorrelated and the receiver can decode $x_s^i$ \emph{one by one}.
It makes the structure of the decoder considerably simple because a standard {BP decoder for binary LDPC codes}
is available only by replacing its log-likelihood ratio (LLR) calculation unit.
In fact, if $p_{Y|\tilde{X}}(y|\tilde{x})$ is the conditional PDF
corresponding to the constellation map and channel model {for each $K$ bits $\tilde{x}\in \mathbb{F}_2^K$,
the LLR for the $s$th bit in $\tilde{x}$ is given by}
\begin{align}
\tilde\lambda_s({y})&\triangleq \ln \frac{\tilde{L}_s[y|1]}{\tilde{L}_s[y|0]} \quad (s=1,\dots,K)
, \label{eq_norm1}
\end{align}
where
$\tilde{L}_s[y|u]$ ($u\in\mathbb{F}_2$) is the likelihood function of $\tilde{x}$ whose $s$th bit is $u$, which is defined by
\begin{equation}
\tilde{L}_s[{y}|u]\triangleq \frac{1}{2^{K-1}}\sum_{\tilde{x}:\tilde{x}_s=u}p_{Y|\tilde{X}}({y}|\tilde{x}). \label{eq_norm2}
\end{equation}
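For concreteness, a minimal Python sketch of the per-bit LLR computation
(\ref{eq_norm1})--(\ref{eq_norm2}) for a Gray-mapped QPSK constellation
on a CAWGN channel is given below; the labeling and the noise level are
assumptions for illustration:
\begin{verbatim}
import numpy as np

K, sigma2 = 2, 0.5
# Gray-mapped QPSK: bits (x_1, x_2) -> point (an assumed labeling)
POINTS = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
          (1, 1): -1 - 1j, (1, 0): 1 - 1j}
POINTS = {b: p / np.sqrt(2) for b, p in POINTS.items()}

def pdf(y, p):      # complex Gaussian density F_c(y; p, sigma^2)
    return np.exp(-abs(y - p)**2 / sigma2) / (np.pi * sigma2)

def llr(y, s):      # lambda_s(y) = ln( L_s[y|1] / L_s[y|0] )
    L = {0: 0.0, 1: 0.0}
    for bits, p in POINTS.items():
        L[bits[s]] += pdf(y, p) / 2**(K - 1)
    return np.log(L[1] / L[0])

y = 0.9 + 0.2j      # a hypothetical received sample
print([llr(y, s) for s in range(K)])
\end{verbatim}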
\begin{figure}[!t]
\centering
\includegraphics[width=0.86\linewidth]{./zmutual_info_normal_qpsk_rot0.pdf}
\caption{BP thresholds of the LDPC-BICM scheme with the QPSK modulation as a function of the code rate $R$.
The solid line represents the SIR of the CAWGN channel.
}\label{zu_s1}
\end{figure}
The DE method is useful to analyze an asymptotic decoding threshold called BP threshold.
Now we introduce the DE method for the LDPC-BICM scheme {without conventional assumptions}.
Let us consider the $(d_v,d_c)$-regular LDPC-BICM scheme, where $d_v$ and $d_c$ represent
the variable and check node degrees, respectively.
The conditional PDF $P^{(l)}(m|u)$ (resp. $Q^{(l)}(\hat{m}|u)$) denotes
the PDF of a message $m$ from a variable node to a check node
(resp. $\hat{m}$ from a check node to a variable node) with a transmitted bit $u$ at the $l$th step.
Following the DE equations for binary asymmetric memoryless channels~\cite{Wang05}, we have
\begin{align}
P^{(l)}(m|u)&\!=\frac{1}{K}\sum_{s=1}^K
\int_{\mathbb{C}}\! d{y}\tilde{L}_s[{y}|u]\int\! \prod_{d=1}^{d_v-1}\! d\hat{m}^{d}Q^{(l-1)}\! (\hat{m}^{d}|u)\nonumber\\
&\times\delta\left(m-\tilde\lambda_s({y})-\sum_{d=1}^{d_v-1}\hat{m}^{d}\right), \label{eq_d3}\\
Q^{(l)}(\hat{m}|u)&\!=\!\frac{1}{2^{d_c-2}}\sum_{(u^{d})\in U_{d_c}^u}
\int \prod_{d=1}^{d_c-1}d{m}^{d}P^{(l)}({m}^{d}|u^{d})\nonumber\\
&\times\!\delta\left(\hat{m}\!-\!2\tanh^{-1}\left[\prod_{d=1}^{d_c-1}\tanh\left(\frac{{m}^{d}}{2}\right)\right]\right),\! \label{eq_d4}
\end{align}
where
$
U_D^u\triangleq\left\{(u^{d})\in \mathbb{F}_2^{D}:\bigoplus_{d=1}^{D}u^{d}=0, u^{D}=u\right\}
$.
To derive these equations, we use a condition that $\pi$ is uniformly chosen from all the possible permutations.
{We assume that the bit index $s$ in~(\ref{eq_norm1}) becomes an independent random variable
in the large-$n$ limit due to the random permutation.
This is a crucial assumption for the above DE analysis.}
\begin{figure*}[t]
\centerline{\subfigure[QPSK]{\includegraphics[width=2.8in]{zCAF_QPSK_constellation.pdf}
\label{zu_c1}}
\hfil
\subfigure[$8$PSK]{\includegraphics[width=2.8in]{zCAF_8PSK_constellation.pdf}
\label{zu_c2}}}
\caption{Constellation diagram of (a) QPSK modulation and (b) $8$PSK modulation.
Open and closed points are signal points at terminals and the relay, respectively.
Each label represents a bit sequence corresponding to a signal point.
In Fig.~\ref{zu_c1}, the arrows illustrate that the received signal point corresponding to label $01$ is generated by a pair of transmit signals whose labels are $00$ and $01$.
}
\label{zu_c0}
\end{figure*}
We then employ the population dynamics (PD) method to solve the DE equations efficiently.
The method is popular in statistical physics~\cite{Mezard} and have been applied to the DE analysis~\cite{tiwh,twh2}.
Note that a conventional numerical analysis based on the fast Fourier transformation is also available
but the PD method is more straightforward for {evaluating~(\ref{eq_d3}), (\ref{eq_d4})}.
In the PD method, the PDFs $P(\cdot|u)$ and $Q(\cdot|u)$ ($u\in\mathbb{F}_2$) are approximated
by histograms of $N$ samples.
The parameter $N$ is called the population size and the DE equations are exactly solved in the large-$N$ limit.
Each sample is recursively updated by an update rule written in a delta function $\delta(\cdot)$
in~(\ref{eq_d3}) or~(\ref{eq_d4}) up to the $T$th iteration step.
More detailed description is found in~\cite{tiwh}.
In the BICM scheme, it is necessary to add a sampling step of $s$ in~(\ref{eq_d3}) to the PD method.
The LLR function $\tilde L_s[y|u]$ is chosen by a sampled value of $s$.
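A schematic Python implementation of the PD update for a
$(d_v,d_c)$-regular ensemble is sketched below; it is a template rather
than the code used for the reported thresholds. The function
\texttt{sample\_llr} is a Gaussian stand-in for the channel-dependent LLR
sampler; for the actual analysis it must draw $y$ according to
$\tilde L_s[\cdot|u]$ with $s$ sampled uniformly and return
$\tilde\lambda_s(y)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, T, dv, dc = 10_000, 200, 3, 6   # population size, iterations, degrees

def sample_llr(u, n):
    # stub: symmetric Gaussian LLRs standing in for the BICM demapper
    m = 4.0 * (2 * u - 1)          # sign convention: positive favours u = 1
    return rng.normal(m, np.sqrt(2 * abs(m)), size=n)

P = {u: sample_llr(u, N) for u in (0, 1)}   # variable-to-check densities
for _ in range(T):
    Q = {}
    for u in (0, 1):
        # sample (u^1,...,u^{d_c-1}) uniformly under the parity constraint
        free = rng.integers(0, 2, size=(N, dc - 2))
        last = (free.sum(axis=1) + u) % 2
        bits = np.column_stack([free, last])
        msgs = np.empty((N, dc - 1))
        for d in range(dc - 1):
            for b in (0, 1):
                idx = bits[:, d] == b
                msgs[idx, d] = rng.choice(P[b], size=idx.sum())
        prod = np.prod(np.tanh(msgs / 2.0), axis=1)
        Q[u] = 2.0 * np.arctanh(prod.clip(-1 + 1e-12, 1 - 1e-12))
    for u in (0, 1):
        inc = np.column_stack([rng.choice(Q[u], size=N)
                               for _ in range(dv - 1)])
        P[u] = sample_llr(u, N) + inc.sum(axis=1)

print("error estimate:", 0.5 * (np.mean(P[0] > 0) + np.mean(P[1] < 0)))
\end{verbatim}
Tracking the residual error over a grid of channel parameters locates the
BP threshold in the same way as for the BPSK case.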
We now demonstrate the DE analysis of the LDPC-BICM scheme on a CAWGN channel.
We use the QPSK modulation ($K\!=\!2$) with the Gray mapping.
Figure~\ref{zu_s1} shows the BP thresholds as a function of the code rate $R=K(1-d_v/d_c)$.
In the PD method, we set $N\!=\!10^5$ and $T\!=\!2000$,
which is sufficiently large to evaluate a BP threshold for an AWGN channel
with the BPSK modulation.
For comparison, we show the symmetric information rate (SIR) defined as the mutual information between
the transmitted signal and received signal assuming that the transmitted word is chosen uniformly.
We find that the DE analysis reasonably obtains the BP thresholds of the LDPC-BICM scheme.
It is emphasized that the result is asymptotically exact as shown in~\cite{Wang05} because the analysis
directly treats the asymmetric channel.
{This is the first case of evaluating BP thresholds by DE equations without any conventional approximations as far as the authors
aware of.}
\section{{CAF and SD} Schemes on MAC Phase}
In this section, we define the CAF and SD schemes on the MAC phase of the two-way relay channel.
Let us define a $2^K$-PSK modulation
by a constellation mapper $\mathcal{M}:\mathbb{F}_2^K\rightarrow \mathbb{C}$.
We assume that two terminals A and B use the QPSK and $8$PSK modulations
in Fig.~\ref{zu_c0}.
For each time slot $t(=1,\dots,n)$, A and B respectively transmit their messages
$X_A^{(t)},X_B^{(t)}(\in\mathbb{F}_2^{K})$.
The two-way relay channel~\cite{Noori,Wu} is then defined by
\begin{equation} \label{channel_model}
Y^{(t)} = \mathcal{M}(X_A^{(t)}) + \mathcal{M}(X_B^{(t)})e^{i\theta} + W^{(t)},
\end{equation}
where $Y^{(t)}(\in \mathbb{C})$ is a received signal at the relay R at time slot $t$
and $\theta$ is the phase difference between two terminals.
Here, we assume that the perfect phase synchronization and perfect power control
are available at R and $\theta$ can be tuned to an arbitrary value.
Each element of $W^{(t)}$ is an i.i.d. complex Gaussian random variable with
zero mean and variance $\sigma^2$. Its PDF is given by $F_c(w; 0, \sigma^2)$,
where
\begin{equation}
F_c(w; \mu, \sigma^2) \triangleq \frac{1}{{\pi \sigma^2}}
\exp \left( -\frac{|w-\mu|^2}{\sigma^2} \right)\quad (w,\mu\in\mathbb{C})
\end{equation}
We use peak signal-to-noise ratio (PSNR) defined by
{$\mathrm{PSNR}\triangleq 10\log_{10}(\max_{r\in \mathcal{M}(X): X\in\mathbb{F}_2^K}|r|^2/\sigma^2)$ [dB]}.
In the SD scheme,
both terminals transmit their encoded messages
and the relay R decodes them simultaneously.
In the CAF scheme,
the relay R decodes only their modulo sum.
In particular, in the CAF scheme,
to detect the modulo sum of the transmitted signals from A and B,
the relay R adopts the decoding scheme for the degraded channel, which is defined as follows.
When the input signal is $Z \in \mathbb{F}_2^K$,
the output $Y$ of the degraded channel is defined by
\begin{equation} \label{deg-channel_model}
Y \triangleq \mathcal{M}(X_A) + \mathcal{M}(Z \oplus X_A )e^{i\theta} + W,
\end{equation}
where
$X_A$ is the random variable {uniformly chosen from $\mathbb{F}_2^K$},
and $W$ is the complex Gaussian random variable with
zero mean and variance $\sigma^2$.
The PDF $p_{Y|Z}$ is given by
\begin{equation}
p_{Y|Z}(y|z)
\triangleq
\frac{1}{2^K}\sum_{\substack{x_A,x_B\in \mathbb{F}_2^K:\\ x_A\oplus x_B=z}}
F_c(y;\mathcal{M}(x_A)+ \mathcal{M}(x_B)e^{i\theta},\sigma^2)
\label{eq_sd_caf4b}.
\end{equation}
{In other words, for each $z\in \mathbb{F}_2^K $, we have
the \emph{received constellation} at R~\cite{Dana,Noori}, which is defined by}
$M_{2,\theta}^z\triangleq \{\mathcal{M}(x_A)+ \mathcal{M}(x_B)e^{i\theta}: x_A\oplus x_B=z\}$.
The total received constellation
$M_{2,\theta}\triangleq\cup_{z\in\mathbb{F}_2^K}M_{2,\theta}^z$
is represented by closed points in Fig.~\ref{zu_c0}.
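The received constellation is straightforward to enumerate; the following
Python sketch does so for a Gray-mapped QPSK constellation (the labeling
is an assumed example) and groups the points of $M_{2,\theta}$ by their
label $z$:
\begin{verbatim}
import numpy as np
from itertools import product

theta = np.pi / 4
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
        (1, 1): -1 - 1j, (1, 0): 1 - 1j}
QPSK = {b: p / np.sqrt(2) for b, p in QPSK.items()}  # unit-energy points

M = {}                            # M[z] = received points M^z_{2,theta}
for xa, xb in product(QPSK, QPSK):
    z = (xa[0] ^ xb[0], xa[1] ^ xb[1])    # modulo-2 sum of the labels
    r = QPSK[xa] + QPSK[xb] * np.exp(1j * theta)
    M.setdefault(z, set()).add((round(r.real, 6), round(r.imag, 6)))

for z in sorted(M):
    print(z, sorted(M[z]))
# a CAF-friendly labeling: no received point carries two different z
\end{verbatim}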
\section{Asymptotic Analysis of LDPC-BICM Scheme for {CAF Scheme}}\label{sec_de}
Now we turn to the main results on the LDPC-BICM scheme for the CAF scheme with the degraded channel.
In the BICM scheme, we assume that {two terminals use the same {binary} LDPC code $C$ and
both terminals and the relay use the same random interleaver $\pi$.}
Then, the LDPC-BICM scheme is defined as described in Section~\ref{sec_normal} because of the linearity of LDPC codes.
By using the PDF $p_{Y|Z}$ in \eqref{eq_sd_caf4b},
the LLR and the likelihood function
read
\begin{align}
\lambda_s({y}) &\triangleq \ln \frac{L_s[y|1]}{L_s[y|0]}, \label{eq_bicm1}
\\
L_s[{y}|u]& \triangleq \frac{1}{2^{K-1}}\sum_{{z}:z_s=u}p_{Y|Z}({y}|{z}). \label{eq_bicm2}
\end{align}
The received signal is decoded as explained in Section~\ref{sec_normal}.
The only difference from Section~\ref{sec_normal}
is to replace the LLR~(\ref{eq_norm1}) and the likelihood function (\ref{eq_norm2})
by (\ref{eq_bicm1}) and \eqref{eq_bicm2},
respectively.
Consequently, the LDPC-BICM scheme for the CAF scheme has an advantage in a simple decoding structure.
Although the BICM scheme is also applicable to the SD scheme, its BP decoder is rather complicated
{because it decodes} a pair of transmitted codewords~\cite{Yedla}.
In addition,
the DE analysis in Section~\ref{sec_normal} is easily extended to the {present} LDPC-BICM scheme.
In fact, we obtain the same DE equations as~(\ref{eq_d3}) and (\ref{eq_d4})
with the above replacement.
We can apply the PD method to the DE equations to estimate BP thresholds.
To compare the CAF and SD schemes, we utilize their SIRs,
which express the asymptotic transmission rates of random linear codes.
By using the uniform distribution $P_Z$ on $\mathbb{F}_2^K$
and the PDF $p_{Y|Z}$ given in \eqref{eq_sd_caf4b},
the SIR of the CAF scheme with the degraded channel is given by
\begin{align}
I(Y;Z)
=& \sum_{z\in \mathbb{F}_2^K}P_Z(z)\int_{\mathbb{C}} dy p_{Y|Z}(y|z)\log_2 p_{Y|Z}(y|z)
\nonumber\\
& -\int_{\mathbb{C}} dy p_Y(y) \log_2 p_Y(y),
\label{eq_sir_caf}
\end{align}
while that of the SD scheme reads
\begin{align}
\frac{1}{2}I(Y;X_A, X_B)
=& \frac{1}{2}\int_{\mathbb{C}} dw F_c(w; 0, \sigma^2)\log_2 F_c(w; 0, \sigma^2)
\nonumber\\
&-\frac{1}{2}\int_{\mathbb{C}} dy p_Y(y) \log_2 p_Y(y),
\label{eq_sd_caf}
\end{align}
where
\begin{align*}
p_Y(y)
\triangleq& \frac{1}{2^{2K}}
\sum_{x_A,x_B\in \mathbb{F}_2^K}
F_c(y;\mathcal{M}(x_A)+ \mathcal{M}(x_B)e^{i\theta},\sigma^2).
\end{align*}
Note that the SIR of the CAF scheme depends on the labeling of the constellation map while that of the SD scheme does not.
This implies that an optimal labeling for the CAF scheme exists, as studied in~\cite{Dana}.
In this paper, we use the constellations in Fig.~\ref{zu_c0}, which show reasonably high SIRs
because no labels overlap at signal points in their received constellations.
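For reference, the SIR \eqref{eq_sir_caf} can be estimated by a simple Monte-Carlo average, as in the following sketch (again reusing the previous definitions); the sample size is an illustrative choice and not the setting used for the figures.
\begin{verbatim}
# Monte-Carlo estimate of I(Y;Z), reusing the previous sketches.
rng = np.random.default_rng(0)
zs = list(itertools.product((0, 1), repeat=K))

def sir_caf(n_samples=20000):
    acc = 0.0
    for _ in range(n_samples):
        z = zs[rng.integers(len(zs))]          # z drawn from the uniform P_Z
        pts = received_constellation(z)
        y = pts[rng.integers(len(pts))]        # uniform x_A given z
        y = y + np.sqrt(sigma2 / 2) * (rng.standard_normal()
                                       + 1j * rng.standard_normal())
        p_cond = p_y_given_z(y, z)
        p_marg = np.mean([p_y_given_z(y, zz) for zz in zs])
        acc += np.log2(p_cond / p_marg)        # information density sample
    return acc / n_samples                     # estimate of I(Y;Z) in bits
\end{verbatim}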
As shown in~\cite{Wu}, SIRs depend on the phase difference $\theta$.
Figures~\ref{zu_s2} and~\ref{zu_s3} respectively show the SIRs of both schemes for the QPSK and $8$PSK modulations in Fig.~\ref{zu_c0} as a function of $\theta$.
We find that the SIRs of the CAF scheme take maximum values at $\theta\!=\!0$.
In contrast, the SIR of the SD scheme is maximized at a nontrivial value of $\theta$
because received constellation points of different transmitted signal pairs, e.g., $(X_A,X_B)=(10,01), (01,10)$, overlap when $\theta=0$.
This indicates that the value of $\theta$ must be chosen properly when comparing the two schemes.
In the following experiments, we set $\theta=\pi/4$ for the QPSK modulation and $\theta=\pi/8$ for the $8$PSK modulation.
\begin{figure}[!t]
\centering
\includegraphics[width=0.86\linewidth]{./zmutual_rot_qpsk_psnr6_2.pdf}
\caption{The SIRs of the CAF and SD schemes for the QPSK modulation (PSNR$=6$[dB]) as a function of the phase difference $\theta$.
}\label{zu_s2}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.86\linewidth]{./zmutual_rot_8psk_psnr10_2.pdf}
\caption{The SIRs of the CAF and SD schemes for the 8PSK modulation (PSNR$=10$[dB]) as a function of the phase difference $\theta$.
}\label{zu_s3}
\end{figure}
Figure~\ref{zu_s4} shows the BP thresholds of some LDPC-BICM schemes with the QPSK modulation.
We used $N\!=\!10^5$ and $T\!=\!2000$ in the PD method.
The figure also shows the SIRs of the CAF and SD schemes with appropriate $\theta$ to maximize them.
We confirm that, in terms of the SIR, the CAF scheme is superior to the SD scheme in the high rate regime as with the BPSK modulation~\cite{twh2}.
On the other hand, when the rate $R$ is below $1.0$, the SD scheme becomes advantageous because the CAF scheme is based on the degraded channel.
In addition, the BP thresholds of the LDPC-BICM scheme for the CAF scheme are superior to the SIR of the SD scheme.
For example, the BICM scheme with $(3,18)$-regular LDPC codes has about a $2.0$ dB gain over the SIR of the SD scheme.
We also show the BP thresholds of LDPC-BICM schemes with the $8$PSK modulation in Fig.~\ref{zu_s5}.
The CAF scheme is superior to the SD scheme in terms of the SIR when the rate $R$ is larger than $1.8$.
Among practical LDPC-BICM schemes, the BICM scheme with $(3,18)$-regular LDPC codes has a $1.0$ dB gain compared with the SIR of the SD scheme.
It is emphasized that the gain becomes larger if a practical error-correcting code is applied to the SD scheme.
\begin{figure}[!t]
\centering
\includegraphics[width=0.86\linewidth]{./zmutual_info_qpsk_rot0_3.pdf}
\caption{The BP thresholds of the LDPC-BICM scheme (symbols) and SIRs of the CAF and SD schemes with the QPSK modulation (lines) as a function of the PSNR.
Each label represents $(d_v,d_c)$ of the regular LDPC code ensemble.
}\label{zu_s4}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.86\linewidth]{./zmutual_info_8psk_rot0_2.pdf}
\caption{The BP thresholds of the LDPC-BICM scheme (symbols) and SIRs of the CAF and SD schemes with the $8$PSK modulation (lines) as a function of the PSNR.
Each label represents $(d_v,d_c)$ of the regular LDPC code ensemble.
}\label{zu_s5}
\end{figure}
\section{Summary}
In this paper, we have studied the LDPC-BICM scheme for the CAF scheme with the degraded channel on the two-way relay channel.
We have investigated its asymptotic decoding performance by using a novel DE method.
The results show that the LDPC-BICM scheme exhibits higher reliability than the alternative SD scheme in the high rate regime.
It indicates that the LDPC-BICM scheme for the CAF scheme is
an effective and practical coded modulation scheme to realize efficient and reliable relaying
because of its simple decoding structure, low computational cost, and high achievable rate.
\section*{Acknowledgment}
This work is supported
by JSPS Grant-in-Aid for Scientific Research (A) Grant Number 17H01280.
\section{Introduction}
In our current theoretical framework for the building blocks of nature, symmetries play a central role, not least because of the visionary insights of the mathematician Emmy Noether. As we push our understanding of the fundamental interactions of nature to smaller distance scales (higher energy scales), the fate of symmetries across different scales is a key piece of information. In particular, proposals that Lorentz symmetry is an emergent symmetry at low energies, but broken at and beyond the Planck scale, have been made, e.g., in \cite{Gambini:1998it,Jacobson:2000xp,Carroll:2001ws,Magueijo:2001cr,Magueijo:2002am,GrootNibbelink:2004za,Horava:2009uw,Liberati:2009pf,Kharuk:2015wga}, concertedly with suggestions for experimental tests, e.g., \cite{AmelinoCamelia:1997gz}, see also the reviews \cite{Mattingly:2005re,AmelinoCamelia:2008qg,Hossenfelder:2012jw,Liberati:2013xla}. In a quantum gravitational context, this implies, e.g., the breakdown of diffeomorphism invariance to foliation-preserving diffeomorphism symmetry, due to the presence of a preferred frame. The remaining theory is therefore invariant under three-dimensional rotations on the spatial slices orthogonal to the time-like vector $n_{\mu}$. Scenarios, where diffeomorphism invariance is broken due to the existence of more general background structures and therefore without residual foliation preserving diffeomorphism invariance, are investigated, for example in \cite{Sudarsky:2005ab,Cohen:2006ky,Bailey:2006fd,Bernardini:2007ez,Ackerman:2007nb,Tasson:2014dfa}.
The observation of gravitational waves from a binary neutron star merger~\cite{TheLIGOScientific:2017qsa,GBM:2017lvd,Monitor:2017mdv}, as well as from binary black hole mergers \cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2018lct}, has opened up novel observational opportunities in this area, cf.~\cite{Gumrukcuoglu:2017ijh,Mewes:2019dhj}, and \cite{Yunes:2016jcc,Ramos:2018oku}, respectively. Yet, most experimental constraints in this field come from the non-observation of Lorentz-symmetry violation in the matter sector, see, e.g., \cite{Wolf:2004gg,Antonini:2005yb,AmelinoCamelia:2005qa,Wolf:2006uu,Kostelecky:2006ta,Muller:2007es,5ecad05bdf014116b23a59df9fda1c5d,Chung:2009rm,Michimura:2013kca,Nagel:2014aga,Tasson:2014dfa,Lo:2014yea,Chen:2016eli,Kostelecky:2016kkn,Bars:2016mew,Fu:2016fmf,Wiens:2016bqh,Lai:2017bbl,Lehnert:2018lce,Sanner:2018atx,Goryachev:2018nln,Megidish:2018sey,Kelly:2018zbx,Ding:2019aox} and references therein, and \cite{Kostelecky:2008ts} for a summary of experimental bounds. For searches of Lorentz-symmetry violations in the pure gravitational sector, see, e.g., \cite{Kostelecky:2015dpa,Hees:2016lyw,Mewes:2019dhj}.
In fact, we expect that Lorentz-invariance violation (LIV) cannot occur just in the matter or the gravitational sector without also percolating into the respective other sector. This is due to the simple fact that any form of matter gravitates, and therefore the interaction between the two sectors cannot be switched off. Typically, loop corrections in such coupled systems result in the impossibility to isolate violations of symmetry to just one sector. This argument has been made, e.g., in~\cite{Kostelecky:2003fs,Collins:2004bp,Kostelecky:2010ze,Pospelov:2010mp,Liberati:2012jf,Belenchia:2016lfc,Bluhm:2019hct}.
Here, we support the general argument by an explicit calculation that provides an example showing that indeed Lorentz-symmetry violation in quantum gravity necessarily percolates into the matter sector. In particular, we show that under certain assumptions that we will spell out in detail below, the
``amount'' of Lorentz-symmetry violation in the quantum-gravity sector (measured by the deviation from zero of the dimensionless couplings singling out a preferred frame) correlates with the amount of Lorentz-symmetry violation in the matter sector. Typically, gravitational couplings of $\mathcal{O}(10^{-n})$ induce LIV-matter couplings of about the same order.
Hence, strong constraints on LIV couplings in the matter sector typically imply similarly strong constraints on the gravitational LIV couplings. Moreover, we highlight that the induced LIV couplings in the matter sector include \emph{marginal} couplings, which are -- unlike their Planck-scale suppressed counterparts -- observationally easier to constrain.
We will work in a model of quantum gravity with foliation-preserving diffeomorphism symmetry, coupled to a single gauge field. This serves as a toy model of the Abelian gauge sector of the Standard Model (corresponding to electromagnetism below the scale of spontaneous electroweak symmetry breaking), in which the presence of LIV-couplings is observationally strongly constrained by astrophysical observations as well as laboratory experiments.
We will perform a Renormalization Group (RG) study of the system.
In summary, we will show that the following hold within our toy model and within the technical limitations of our study, to be discussed below: i) quantum gravitational dynamics which single out a preferred frame necessarily generate Lorentz-invariance violations for matter in the ultraviolet (UV) and ii) under appropriate conditions, this violation must necessarily persist in the infrared. In other words, the violation of Lorentz symmetry in the gravitational sector percolates into the matter sector in the UV. Under appropriate conditions -- amounting to the existence of an infrared (IR) attractive fixed point in the RG flow of the matter coupling -- this symmetry violation must persist into the infrared, where it is accessible to experimental tests. The qualitative picture is therefore that small violations of Lorentz invariance in the UV will in general grow towards the IR. The existence of an IR attractive fixed point prevents the violations of Lorentz invariance
from growing even larger. Experimental data provide strong constraints on LIV in realistic models which include all relevant degrees of freedom. Due to the points i) and ii), these experimental bounds can in turn be used to put constraints on UV violations of Lorentz symmetry in the gravity sector.
Further, we argue that the various terms in the Standard Model Extension (SME) \cite{Colladay:1998fq,Kostelecky:2002hh}, see \cite{Bluhm:2005uj,Tasson:2014dfa} for reviews, are typically not independent when derived from an underlying microscopic model. A given microscopic model (defined by a set of values for the gravitational couplings) most likely generates \emph{all} terms in the SME with a given set of symmetries (e.g., CPT symmetry might or might not hold in a given quantum-gravity setting). Typically, we expect that all these couplings are generated with dimensionless counterparts of order~$1$, if the gravity couplings are order~$1$.
This is of course a standard naturalness argument -- a given microscopic model might circumvent this and in fact provide an explanation why a set of couplings is ``unnaturally'' small. Here, we will provide one example to show that a given set of microscopic gravitational couplings typically generates the strongly constrained marginal LIV couplings together with the less strongly constrained higher-order couplings. Therefore, weaker direct experimental constraints on the higher-order couplings could, under certain conditions, actually be supplemented by \emph{indirect} strong consistency constraints.
Let us stress that we perform our study within a toy model that does not account for the existence of Standard Model degrees of freedom beyond the Abelian gauge field, and that does not account for the difference between the Abelian hypercharge field at high energies and the photon at low energies, which is due to electroweak symmetry breaking. Yet, we do not expect that these additional intricacies can impact the main outcomes of our study, at least at the qualitative level. The interplay of electroweak symmetry breaking with LIV has been explored in \cite{Colladay:1998fq}.
This paper is organized as follows. In Sec.~\ref{sec: LIVimp} we introduce the system of an Abelian gauge field coupled to foliation-preserving-diffeomorphism invariant gravity, including the foliation structure and a LIV term for the Abelian gauge field. In Sec.~\ref{sec: GravMatInterplay}, we investigate the impact of Lorentz invariance violations in the gravity sector onto the Abelian gauge field, and discuss the role of (pseudo) fixed points as attractors and repulsors in the RG flow. In Sec.~\ref{sec: constraints} we study the regions in the gravitational parameter space giving rise to a universal value for the matter LIV coupling $\zeta$ at the Planck scale using the flow equation obtained in our approximation.
Further, we aim at highlighting the constraining power that arises from the type of study we perform here. For this purpose, we use experimental constraints on Lorentz symmetry violations in the photon sector. By imposing these bounds on the LIV coupling for the Abelian gauge field in our toy model, we arrive at strong constraints on the gravity LIV couplings. We stress that these constraints are subject to the systematic uncertainties of our study, and the difference between our toy model and the full SME coupled to gravity. Therefore, these constraints cannot yet be viewed on the same footing as direct experimental constraints on the gravity LIV couplings. However, our study clearly highlights the potential constraining power of the gravity-matter interplay within a LIV setting. This strongly motivates upgraded studies which go beyond our toy model, in order to bring the power of this idea to bear on quantum gravity. We understand our present study as a blueprint that exemplifies this idea, see also the corresponding comments in \cite{Gumrukcuoglu:2017ijh}.
In Section~\ref{sec: HigherOrders} we provide an explicit example to highlight that any marginal LIV coupling is likely to be gravitationally induced concertedly with higher-dimensional LIV couplings. Their dimensionless couplings are typically of the same order also due to the direct interplay between them. This could give rise to indirect constraints on higher order LIV couplings, which are expected to be stronger than direct experimental constraints. Finally, in Sec.~\ref{sec: Concl} we summarize our results and provide a brief outlook on future perspectives.
\section{Impact of quantum gravity with a preferred frame on matter}
\label{sec: LIVimp}
To investigate how the breaking of Lorentz symmetry in the matter sector is influenced by symmetry-breaking terms in the gravitational sector, we will explore the Wilsonian scale dependence of a matter LIV coupling. For this study, we make use of the well suited tool of the functional Renormalization Group (RG) (see, e.g., \cite{Berges:2000ew,Pawlowski:2005xe,Gies:2006wv,Rosten:2010vm} for reviews). Due to a suitable infrared (IR) and UV regularization, it implements the Wilsonian idea of momentum-shell wise integration of quantum fluctuations and allows to investigate the scale dependence of quantum field theories within and beyond perturbation theory.
More specifically, the functional RG relies on a flow equation, the Wetterich equation \cite{Wetterich:1992yh, Ellwanger:1993mw,Morris:1993qb,Reuter:1993kw,Tetradis:1993ts}, that is a functional integro-differential equation for the scale-dependent effective action~$\Gamma_k$. The latter provides the RG scale $k$ dependent
equations of motion for the expectation values of the quantum fields. In the limit~$k \rightarrow \infty$, $\Gamma_k$ essentially provides the microscopic or classical action, whereas in the physical limit~$k \rightarrow 0$ all quantum fluctuations are included, and~$\Gamma_k$ reduces to the standard effective action. The Wetterich equation gives rise to flow equations for the couplings, which encode how the couplings in the effective dynamics change, as quantum fluctuations with momenta of the order $k$ are integrated over. The functional RG is applied in a broad range of contexts; selected examples in models with interacting fixed points include the $O(N)$ model, e.g., \cite{Canet:2003qd,Litim:2010tt,Juttner:2017cpr,Balog:2019rrg} and the Gross-Neveu(-Yukawa) model \cite{Braun:2010tt,Classen:2015mar,Knorr:2016sfs}. In all cases, quantitative agreement with other methods was achieved by extending the truncation according to the canonical power-counting of higher-dimensional operators.
For more details on the functional RG for the present setup, see Appendix~\ref{sec: FRG}. For other ideas to constrain physics beyond the Standard Model using the functional RG in the context of Lorentz invariant asymptotically safe gravity, see, e.g., \cite{deBrito:2019epw,Reichert:2019car}.
Let us stress that the functional RG relies on using the Euclidean four-momentum, and therefore provides access to the scale-dependence in Riemannian quantum-gravity settings. Performing a continuation of the results to a Lorentzian setting in the presence of dynamical gravity is an outstanding challenge. Thus, we work under the assumption that our results carry over to a Lorentzian setting.
To explore the consequences of the existence of a preferred frame, we adapt our setup to allow direct access to the foliation structure of the manifold \cite{Knorr:2018fdu}, $\mathcal{M} = \Sigma \times \mathbb{R}$, where $\Sigma$ is a Riemannian 3-manifold and $\mathbb{R}$ is the ``time'' direction (i.e., a preferred spatial direction in our Euclidean spacetimes).
In this setup, the full metric~$g_{\mu\nu}$ is expressed in terms of a tensor~$\sigma_{\mu\nu}$, encoding the three-metric in $\Sigma$ in a covariant way, and an orthogonal, normalized time-like vector $n_{\mu}$, i.e.,
\begin{eqnarray}
g_{\mu\nu}&=&\sigma_{\mu\nu}+n_{\mu}n_{\nu}\,\notag\\
g^{\mu\nu}n_{\mu}\sigma_{\nu\rho}&=&0,\notag\\
g^{\mu\nu}n_{\mu}n_{\nu}&=&1.
\label{eq: cond}
\end{eqnarray}
We refer the reader to Appendix~\ref{sec: Foliation} for details.
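As a quick numerical illustration, the conditions \eqref{eq: cond} can be verified for a randomly generated Riemannian metric; the following sketch is purely illustrative.
\begin{verbatim}
# Numerical check of the foliation conditions on a random positive-definite
# metric; a sketch, with indices raised by the inverse metric.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
g = A @ A.T + 4 * np.eye(4)        # random Riemannian metric g_{mu nu}
g_inv = np.linalg.inv(g)           # g^{mu nu}

v = rng.standard_normal(4)
n = v / np.sqrt(v @ g_inv @ v)     # normalize: g^{mu nu} n_mu n_nu = 1
sigma = g - np.outer(n, n)         # sigma_{mu nu} = g_{mu nu} - n_mu n_nu

assert np.isclose(n @ g_inv @ n, 1.0)          # normalization condition
assert np.allclose((g_inv @ n) @ sigma, 0.0)   # orthogonality condition
\end{verbatim}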
The time-like vector~$n_{\mu}$ can be used to single out a preferred frame, such that full diffeomorphism invariance is broken to foliation-preserving diffeomorphisms. As we use the functional RG, we choose the four-metric to be a Riemannian metric. The vector field $n_{\mu}$ singles out a distinguished direction in which an analytic continuation of the metric could be performed and in this sense it singles out a time direction. Using the decomposition~\eqref{eq: cond}, quantum fluctuations of the full metric~$g_{\mu\nu}$ can be expressed in terms of fluctuations in~$\sigma_{\mu\nu}$ and $n_\mu$. In the following, we will denote the expectation values of these two fields simply by~$\sigma_{\mu\nu}$ and~$n_{\mu}$.
In our approximation, we will parameterize the dynamics of the diffeomorphism invariant sector of gravity via the Einstein-Hilbert action with the scale-dependent Newton coupling $G_{\rm N} (k)$ and the cosmological constant $\Lambda(k)$,
\begin{equation}
\Gamma_{k}^{\rm EH}
=\frac{1}{16\pi G_{\rm N}(k)}\int\! \sqrt{\det(g_{\rho\sigma})}\,\,(-R+2\Lambda(k)).
\label{eq: EH}
\end{equation}
We use $\Gamma_k$ to indicate that this is an ansatz for the scale-dependent effective action entering the Wetterich equation.
Additionally, we include in our truncation of the dynamics the effect of all the canonically most relevant operators that break Lorentz invariance. The three independent \cite{Knorr:2018fdu} tensor structures containing up to two derivatives are given by\footnote{The couplings associated with the breaking of Lorentz symmetry in the gravitational sector in \cite{Gumrukcuoglu:2017ijh} and in our work are related via $\gamma=k_0$, $\beta=k_2$ and $\alpha=-a_1$.}
\begin{eqnarray}
\Gamma_{k}^{\rm Grav, \,LIV}
&=&\frac{1}{16\pi G_{\rm N}(k)}\int\! \sqrt{\det(g_{\rho\sigma})}\,\,\Bigl(k_2(k) K^{\mu\nu}K_{\mu\nu}\nonumber\\
&{}&\quad \quad+k_0 (k)K^2+a_1
(k)\mathcal{A}^{\mu}\mathcal{A}_{\mu}\Bigr),
\label{eq: breakingaction}
\end{eqnarray}
with symmetry-breaking and scale-dependent couplings $k_2(k),\,k_0(k)$ and $a_1(k)$. Here, the extrinsic curvature on spatial slices $K_{\mu\nu}$ is orthogonal to the normal vector
\begin{equation}
\label{eq: extrorth}
n^{\mu}K_{\mu\nu}=0.\,
\end{equation}
In terms of the fields~$n_{\mu}$ and~$\sigma_{\mu\nu}$, it reads
\begin{equation}
K_{\mu\nu}=\frac{1}{2}(n^{\alpha}D_{\alpha}\sigma_{\mu\nu}+D_{\mu}n_{\nu}+D_{\nu}n_{\mu}).
\end{equation}
In addition, $K$ is the trace of the extrinsic curvature and~$\mathcal{A}_{\mu}$ is the acceleration vector
\begin{equation}
\mathcal{A}_{\mu}=n^{\alpha}D_{\alpha}n_{\mu}.
\end{equation}
All the ``breaking terms'' are invariant under foliation-preserving diffeomorphisms but not under full diffeomorphisms, thus singling out a physical, preferred frame.
All other terms with these symmetries and at second order in derivatives are, up to a total derivative which is neglected, related via the Gauss-Codazzi equations. Therefore, the truncation we consider in this paper corresponds to the IR limit of Horava-Lifshitz gravity coupled to an Abelian gauge field, which is perturbatively renormalizable. Due to the Wilsonian treatment, additional operators, such as Lorentz-invariance violating matter-gravity operators, are induced at lower scales. Besides that, the Wilsonian treatment allows for a broader perspective, since it allows the study of theories where some other theory sets in beyond a cutoff scale in the far UV. The gravitational part of the action is complemented with the standard gauge-fixing and ghost term, and a constraint term which implements the foliation structure of the system. Following \cite{Knorr:2018fdu}, we implement the latter as a gauge condition in the path integral. For details on the implementation of the foliation constraint, see App.~\ref{sec: Foliation}. Since the conditions~\eqref{eq: cond} are second-class constraints, as opposed to gauge conditions which are first-class constraints, their implementation might require a modification of this procedure~\cite{Eichhorn2019}. This contributes to the systematic uncertainty of our results, which are however expected to be dominated by truncation errors. This ansatz for $\Gamma_k$ is based on canonical power counting, i.e., the truncation of the theory space is chosen by including operators by canonical relevance. Such a truncation is expected to reliably capture physics in the perturbative regime, where higher-order couplings typically remain small and irrelevant. In fact, even in a setting with interacting fixed points, such truncations could be reliable.
Indeed, in the context of diffeomorphism invariant gravity, there are indications that the asymptotically safe fixed point lies in a near-perturbative regime \cite{Denz:2016qks,Eichhorn:2018akn,Eichhorn:2018ydy,Eichhorn:2018nda}, where higher order operators follow their canonical scaling \cite{Falls:2013bv,Falls:2014tra,Falls:2017lst, Falls:2018ylp}. For a non-gravitational example showing the convergence of a truncation based on canonical power counting for an interacting fixed point, see, e.g., \cite{Balog:2019rrg}.
\begin{table*}[ht]
\begin{tabular}{|M{1.1cm}|c|c|l|}
\textbf{Bound} & \textbf{Year} & \textbf{Ref.} & \textbf{Method} \\\hline\hline
%
$10^{-37}$ & $2006$& \cite{Kostelecky:2006ta} & polarization measurement in gamma ray bursts\\\hline
%
$10^{-9}$ &$2007$& \cite{Muller:2007es} & {atomic gravimeter}\\\hline
%
$10^{-15}$ &$2004$& \cite{Wolf:2004gg} &comparison of a cryogenic sapphire microwave resonator and a hydrogen maser\\\hline
%
$10^{-18}$ & $2014$&\cite{Nagel:2014aga} &terrestrial Michelson-Morley experiment\\\hline
%
$10^{-21}$ &$2018$& \cite{Sanner:2018atx} & Michelson-Morley with trapped ions {(assuming no Lorentz-symmetry violation for electrons)}\\\hline
%
$10^{-20}$ &$2016$& \cite{Kostelecky:2016kkn} & light interferometry (LIGO data)\\\hline
\end{tabular}
\caption{Different experimental bounds on the analogue of our Lorentz-symmetry breaking coupling $\zeta$ for the photon sector of the Standard Model. We assume that the existence of a single vector field $n_{\mu}$ as source of a preferred frame is the only source of Lorentz-symmetry violations. In this case the coupling $\zeta$ is the unique non-zero coupling. For each experiment, the strongest bound on the coefficients of $k_{\rm F}^{\mu\nu\rho\sigma}$, cf.~\eqref{eq: gaugeLIV}, is translated into a bound on $\zeta$. Except for the second line, all bounds assume the absence of LIV couplings in the pure gravity sector, since some assumption on the gravitational background is necessary for the conversion of experimental data to bounds on LIV couplings. We stress the difference between the experimental bounds on the photon-LIV coupling and the coupling $\zeta$ in our toy model. The above experimental bounds on the photon LIV couplings are intended to give an impression of the sensitivity of experiments in the photon sector. They do not directly translate into constraints on the LIV coupling $\zeta$ in our toy model.}
\label{tab: LIVMatterConstraints}
\end{table*}
As for the matter part of the action, we focus on the Abelian gauge sector with
\begin{equation}
\Gamma_{k}^{\rm Abelian}
=\frac{Z_A(k)}{4}\int \sqrt{\det(g_{\kappa\epsilon})} \,\,g^{\mu\rho}g^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma},
\label{eq:gaugekinetic}
\end{equation}
where $F_{\mu\nu}$ is the field-strength tensor of the Abelian gauge field $A_{\mu}$ and $Z_A(k)$ is a wave-function renormalization of the gauge field. Even in the absence of charged matter, quantum fluctuations of gravity generate a non-trivial scale-dependence, giving rise to an anomalous dimension
\begin{equation}
\eta_A = - k\, \partial_k \ln Z_A(k).
\end{equation}
Finally, all possible extensions of the Abelian gauge sector that violate Lorentz invariance but preserve CPT and gauge symmetry can be written as \cite{Colladay:1998fq}
\begin{equation}
\Gamma_{k}^{\rm Abelian,\, LIV}
=\frac{Z_A(k)}{4}\int \sqrt{\det(g_{\kappa\epsilon})}\,\, k_{\rm F}^{\mu\nu\rho\sigma}(k)\,F_{\mu\nu}F_{\rho\sigma},
\label{eq: gaugeLIV}
\end{equation}
where $k_{\rm F}^{\mu\nu\rho\sigma}(k)$ is real and has the symmetries of the Riemann tensor, i.e., antisymmetry under $\mu \leftrightarrow \nu$ and $\rho \leftrightarrow \sigma$ and symmetry under an exchange of the pairs~$\{\mu,\nu\}$ and~$\{\rho, \sigma\}$. To see this, start with a general tensor $\hat{k}_{\mu\nu\rho\sigma}$ with no symmetries. Its completely antisymmetric part results in the CP-violating~$\tilde{F}F$-term, which is a total derivative for the Abelian gauge field. Further, symmetry under exchange of the index pairs~$(\mu, \nu) \leftrightarrow (\rho,\sigma)$ follows from the contraction with two field-strength tensors. Finally, gauge symmetry demands antisymmetry of the field-strength tensor, resulting in antisymmetry of~$k_{\mu\nu\rho\sigma}$ under exchanges of indices~$\mu \leftrightarrow \nu$.
The presence of the LIV operator in Eq.~\eqref{eq: gaugeLIV} leads to vacuum birefringence: The dispersion relation resulting from Eqs.~\eqref{eq:gaugekinetic} and \eqref{eq: gaugeLIV} is still linear in the spatial momentum, i.e., $p_0 =c(k_F)\, |\vec{p}|$. Therefore, there is no wavelength dependence in the speed of propagation. Yet, the two polarizations feature a different proportionality factor $c(k_F)$, which leads to a phase shift between the two polarizations that accumulates with propagation distance. For a detailed discussion, see
\cite{Kostelecky:2001mb, Kostelecky:2002hh}.
Under the impact of quantum fluctuations, $k_{\rm F}^{\mu\nu\rho\sigma}$ acquires a dependence on the RG scale $k$. For a general dynamical preferred frame \cite{Jacobson:2000xp}, which was explicitly applied to Horava-gravity \cite{Horava:2009uw,Contillo:2013fua,DOdorico:2014tyh,DOdorico:2015pil,Barvinsky:2015kil,Barvinsky:2017kob,Barvinsky:2019rwn} in \cite{Bluhm:2019ato}, the only possible tensor in the general expression Eq.~\eqref{eq: gaugeLIV} is\footnote{The couplings associated with the breaking of Lorentz symmetry in \cite{Bluhm:2019ato} and in our work are related via $\frac{1}{2}(1-\lambda_{\gamma})=\zeta$.}
\begin{equation}
k_{\rm F}^{\mu\nu\rho\sigma}=\frac{\zeta}{4} \Bigl(n^{\mu}n^{\rho}g^{\nu\sigma}+n^{\nu}n^{\sigma}g^{\mu\rho} - n^{\nu}n^{\rho}g^{\mu\sigma} - n^{\mu}n^{\sigma}g^{\nu \rho}\Bigr),
\label{eq: gaugeLIVV}
\end{equation}
with the coupling $\zeta= \zeta(k)$. We stress that in the presence of a single vector field $n_{\mu}$ as the source of a preferred frame, $\zeta$ is the unique coupling that can be nonzero. In particular, if we assumed that the different components of~$k_F$ were parameterized by different couplings, this would amount to the introduction of the corresponding nontrivial \emph{tensor} as the source of a preferred frame\footnote{Experimentally, the various components of $k_F$ are constrained individually, as typically no assumption on the precise way in which a preferred frame is selected, is made.}.
Therefore, all experiments that put constraints on individual components of the general tensor $k_{\rm F}^{\mu\nu\rho\sigma}$ in Eq.~\eqref{eq: gaugeLIV} automatically put constraints on the coupling $\zeta$.
We summarize experimental bounds in Table~\ref{tab: LIVMatterConstraints}, where the strongest bound on any of the components of $k_{\rm F}^{\mu\nu\rho\sigma}$ is translated into a bound on~$\zeta$. Note that although the measurements of LIV in the matter sector could be affected by the presence of higher-order operators, due to the Planck-scale suppression of the corresponding couplings their impact on the low-energy measurement of $k_F$ is negligible.
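For illustration, the tensor \eqref{eq: gaugeLIVV} and the Riemann-type symmetries discussed above can be checked numerically; the flat Euclidean metric and the purely time-like $n^{\mu}$ in the following sketch are illustrative choices.
\begin{verbatim}
# Sketch: build k_F^{mu nu rho sigma} from n^mu and g^{mu nu} and verify
# its Riemann-type symmetries (flat metric, time-like n: illustrative).
import numpy as np

zeta = 0.1
g_up = np.eye(4)                       # g^{mu nu} (flat, Euclidean)
n = np.array([1.0, 0.0, 0.0, 0.0])     # preferred-frame vector n^mu

kF = (zeta / 4) * (np.einsum('m,r,ns->mnrs', n, n, g_up)
                   + np.einsum('n,s,mr->mnrs', n, n, g_up)
                   - np.einsum('n,r,ms->mnrs', n, n, g_up)
                   - np.einsum('m,s,nr->mnrs', n, n, g_up))

assert np.allclose(kF, -kF.transpose(1, 0, 2, 3))  # antisymmetry mu <-> nu
assert np.allclose(kF, -kF.transpose(0, 1, 3, 2))  # antisymmetry rho <-> sigma
assert np.allclose(kF, kF.transpose(2, 3, 0, 1))   # pair-exchange symmetry
\end{verbatim}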
\section{The relation between Lorentz invariance violation in the gravity sector and in the matter sector}
\label{sec: GravMatInterplay}
It is a crucial question, whether Lorentz symmetry breaking necessarily percolates from the gravitational sector into the matter sector. This matters both for theoretical as well as phenomenological reasons: On the theoretical side, this is crucial to understand the form of a matter sector that is consistently coupled to a LIV-gravity sector. On the phenomenological side this is key, as it allows to translate strong observational bounds on LIV in the matter sector into constraints on LIV couplings in the quantum gravity sector.
More formally, the key question is whether the Lorentz invariant subspace of the matter sector is attractive or repulsive under the RG flow towards low energies. In other words, starting from small deviations from the Lorentz invariant hypersurface at large energies, is the system driven away from the symmetric subspace or towards it, when lowering the energy?
To answer this question as comprehensively as possible, we remain agnostic about the properties of a UV completion for the system. Thus, we view the description in terms of quantum field theory as an effective description with a high-energy (i.e., transplanckian) cutoff scale $k_{\rm i}\gg M_{\rm Pl}$. At that cutoff scale, a microscopic model sets the initial conditions for the RG flow towards the IR by determining the values of couplings at that scale\footnote{In case of an asymptotically safe/free fixed point, that initial condition corresponds to values of the relevant couplings at a high-energy scale at which relevant perturbations drive the system away from the fixed point.}. We explore whether gravitational fluctuations then drive the LIV matter coupling back to zero, or whether there is a nonzero preferred value.
In a nutshell, our results are the following: We will show that $\zeta(k)$ cannot consistently be set to zero in the presence of $k_0$, $k_2$ and $a_1$ at high energies. This is a consequence of the absence of a free fixed point in the beta function for $\zeta$. In other words, quantum fluctuations generate $\zeta$ and drive it away from zero. Moreover we find that, within our approximation, there is always an IR-attractive fixed point at a finite value of~$\zeta$. Consequently, quantum fluctuations drive $\zeta$ towards a preferred, nonzero value. Under the RG flow, a large range of initial conditions~$\zeta(k_{\rm i})$, set at the ultraviolet scale~$k_{\rm i}$,
is thus mapped into a unique Planck-scale value, corresponding to the IR-attractive fixed point of the RG flow.
Below the Planck scale, the effect of $k_0$, $k_2$ and $a_1$ switches off dynamically, simply because quantum fluctuations of gravity become negligible and quantum gravity decouples from particle physics.
In our toy model, the flow of $\zeta$ vanishes in this regime, as quantum fluctuations of gravity are the only ones that drive the system. Therefore, the universal fixed-point value attained by the RG flow at Planckian scales is also the low-energy value of $\zeta(k=0)$, cf.~Fig.~\ref{fig:schematic} for an illustration.
In a more complete treatment that accounts for the other Standard-Model degrees of freedom, additional fluctuations would drive the low-energy flow of~$\zeta$. Since $\zeta$ is a marginal coupling, below the Planck scale it is expected to depend logarithmically on the RG scale $k$. In our toy model, this low-energy running is absent.
The low-energy value of $\zeta$ (referring to the actual electromagnetic interaction of the Standard Model) is constrained observationally. In turn, this experimental bound can be mapped onto a constraint for its Planck-scale value. We expect that the latter is a function of the LIV-gravity couplings $k_0$, $k_2$ and $a_1$, just as it is in our toy model. Accordingly, observational constraints on $\zeta$ constrain the microscopic values of $k_0$, $k_2$ and $a_1$, and can therefore indirectly constrain the fundamental symmetries of the theory. The conditions under which such an indirect constraint arises will be discussed in detail below.
This idea constitutes an example of how studies of the interplay of quantum gravity with matter can be key to constrain quantum gravity observationally, by tapping into the wealth of experimental data on particle physics.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{01illustration_key_idea.pdf}
\caption{\label{fig:schematic}We show a schematic illustration of the key idea underlying our results.}
\end{figure}
Having explained the key idea underlying our work, we can now discuss in detail how the IR-attractive fixed point is generated and how the gravitational couplings impact the scale dependence of the LIV-matter coupling~$\zeta$. To this end we compute the scale dependence of~$\zeta$, treating all gravitational couplings as input parameters. (See \cite{Knorr:2018fdu} for the beta functions of these couplings.)
This allows us to remain agnostic about a UV completion for the gravity sector\footnote{In \cite{Kostelecky:2003fs} the spontaneous breaking of Lorentz invariance has been discussed, while recent studies of gravity-matter systems also allow for the possibility of explicit symmetry breaking, cf.~\cite{Bluhm:2019ato,Bluhm:2019hct}.}, and explore a large class of possible models, labeled by different values of these couplings, simultaneously.
To obtain an analytical expression for the scale dependence of~$\zeta$, we expand all expressions to first order in $k_0$, $k_2$ and $a_1$ and to second order in~$\zeta$, which is sufficient for the assumption of small deviations from the Lorentz invariant hypersurface at high energies.
In fact, it is the dimensionless ratios\footnote{Note that here $g(k)$ stands for the dimensionless Newton coupling, and should not be confused with the ``g'' of the metric tensor,~$g_{\mu\nu}$, which we always refer to with two indices.}
\begin{equation}
g(k)= G_N(k)\, k^2, \quad \lambda(k) = \Lambda(k)\, k^{-2},
\end{equation}
which enter the beta function
\begin{equation}
\beta_{\zeta} = k\,\partial_k\, \zeta(k).
\end{equation}
For technical details of our calculation, we refer the reader to the appendix. To evaluate the RG flow, we use the Mathematica package \emph{xAct} \cite{Brizuela:2008ra,Martin-Garcia:2007bqa,MartinGarcia:2008qz,2008CoPhC.179..597M} as well as the FORM-tracer~\cite{Cyrol:2016zqb}.
Driven by quantum fluctuations of gravity and the Abelian gauge field itself, the beta function for $\zeta$ is
\begin{eqnarray}
\beta_\zeta&=&g\,\bigg(-\frac{10 a_1+21 k_0+257 k_2}{384 \pi (1-2 \lambda )^2}+\frac{-6 a_1+53 k_0+329 k_2}{576 \pi (1-2 \lambda )^3}\bigg)\notag\\
&{}&+\zeta\, g\, \bigg(\frac{1}{6 \pi (1-2 \lambda )}\notag\\
&{}&\hspace{25pt}-\frac{183 a_1-390 k_0-1690 k_2+1840}{960 \pi (1-2 \lambda )^2}\notag\\
&{}&\hspace{25pt}+\frac{2313 a_1-5 (246 k_0+4039 k_2)}{1440 \pi (1-2 \lambda)^3}\bigg)\notag\\
&{}&+\zeta ^2\, g\,\bigg(
\frac{79}{60 \pi (1-2 \lambda )}\notag\\
&{}&\hspace{27pt}-\frac{21 a_1+495 k_0-920 k_2+5072}{960 \pi (1-2 \lambda )^2}\notag\\
&{}&\hspace{27pt}+\frac{6911 a_1-9515 k_0-60420 k_2}{1440 \pi (1-2 \lambda )^3}\bigg),
\label{eq: betazeta}
\end{eqnarray}
where we have dropped the dependence on~$k$ from all couplings for brevity. We briefly highlight the nontrivial denominators, which are in contrast to the typically purely polynomial beta functions obtained with perturbative techniques. Such denominators arise when there is a mass-like term for a field, and result in a dynamical decoupling of the corresponding degree of freedom once the RG scale~$k$ drops below the corresponding mass. For metric fluctuations, the cosmological constant acts akin to a mass-like term, suppressing metric fluctuations for large negative~$\lambda$. Notice that this refers to the microscopic (i.e., high-energy) value of the dimensionless cosmological constant, which is itself a scale-dependent coupling that can take a rather different value in the UV than in the IR. In particular, a negative $\lambda$ in the UV is not incompatible with a positive cosmological constant at observational scales~\cite{Dona:2013qba,Biemans:2017zca}.
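For later reference, the coefficients of \eqref{eq: betazeta} can be transcribed directly into a short numerical routine; as a consistency check, the sketch below reproduces the interacting fixed-point value $-15/34$ of the Lorentz-invariant limit at $\lambda=0$, cf.~Eq.~\eqref{eq: newFP}.
\begin{verbatim}
# Transcription of beta_zeta = b0 + b1*zeta + b2*zeta^2, with the
# gravitational couplings treated as input parameters (a sketch).
import numpy as np

def beta_coeffs(g, lam, a1, k0, k2):
    d = 1 - 2 * lam
    b0 = g * (-(10*a1 + 21*k0 + 257*k2) / (384*np.pi*d**2)
              + (-6*a1 + 53*k0 + 329*k2) / (576*np.pi*d**3))
    b1 = g * (1 / (6*np.pi*d)
              - (183*a1 - 390*k0 - 1690*k2 + 1840) / (960*np.pi*d**2)
              + (2313*a1 - 5*(246*k0 + 4039*k2)) / (1440*np.pi*d**3))
    b2 = g * (79 / (60*np.pi*d)
              - (21*a1 + 495*k0 - 920*k2 + 5072) / (960*np.pi*d**2)
              + (6911*a1 - 9515*k0 - 60420*k2) / (1440*np.pi*d**3))
    return b0, b1, b2

# Lorentz-invariant limit: the nontrivial zero is -b1/b2 = -15/34 at lam = 0.
b0, b1, b2 = beta_coeffs(g=1.0, lam=0.0, a1=0.0, k0=0.0, k2=0.0)
print(-b1 / b2)                         # -> -0.441... = -15/34

# Fixed points zeta_{*,1/2} at a sample LIV parameter point.
b0, b1, b2 = beta_coeffs(g=1.0, lam=0.0, a1=0.0, k0=1.0, k2=1.0)
print(np.roots([b2, b1, b0]))
\end{verbatim}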
In order to understand the implications of the expression~\eqref{eq: betazeta}, we focus on special cases first.
If we set $g=0$, then the entire beta function vanishes. This is a consequence of the fact that at $g=0$, the model consists of just a kinetic term for the Abelian gauge field, i.e., it is a noninteracting theory. The beta functions in such a theory vanish identically. Beyond our toy model, the existence of additional matter degrees of freedom would not change this conclusion, unless there was LIV already present in other couplings in the matter sector.
At $g\neq 0$, we focus on the limit $\zeta = 0$ first. In this case, only the first line in Eq.~\eqref{eq: betazeta} remains. Except for very special points in the parameter space spanned by~$\{a_1, k_0, k_2, \lambda\}$, this expression is nonvanishing. This has important implications: Even setting $\zeta(k_{\rm i})=0$ (where $k_{\rm i}\gg M_{\rm Pl}$ is an arbitrary initial scale), we have~$\beta_{\zeta}(\zeta=0)\neq 0$, and therefore $\zeta(k_{\rm i}-\delta k)\neq 0$. In other words, quantum fluctuations generate $\zeta$, even if it vanishes at $k_{\rm i}$.
On the other hand, if $k_0=0, k_2=0, a_1=0$, then the first line of Eq.~\eqref{eq: betazeta} vanishes identically. This choice corresponds to a gravitational theory that respects full diffeomorphism invariance. In this case, there is no Lorentz symmetry violation in the gravitational sector, which is reflected in the existence of a fixed point of $\beta_{\zeta}$ at $\zeta=0$. The hypersurface in the space of couplings that preserves Lorentz symmetry in the matter sector is only an invariant surface under the RG flow, if no LIV exists in the gravity sector. Hence, Lorentz-symmetry breaking will percolate from the gravity sector into the matter sector, if the couplings $k_0,k_2$ and $a_1$ are non-vanishing.
In the next step, we take the terms $\sim \zeta\, g$ and~$\sim \zeta^2\, g$ in the second and following lines of Eq.~\eqref{eq: betazeta} into account. The beta function $\beta_{\zeta}$ is generically nonzero, i.e., starting from an initial condition $\zeta(k_{\rm i})$, the LIV coupling $\zeta(k)$ will flow, and assume a different value at lower scales. In this context, the notion of attractors of the flow, i.e., fixed points, is crucial. Under the influence of an IR-attractor, a large interval of initial conditions in the UV is mapped to a small interval of values at lower scales: A \textit{universal} prediction of a nonzero value of $\zeta$ arises that is largely independent of the UV initial conditions. We now analyze the notion of such IR attractors in terms of fixed points and pseudo fixed points in more detail.
Let us schematically write
\begin{equation}
\label{eq: betaschem}
\beta_{\zeta} = b_0+ b_1\, \zeta + b_2\, \zeta^2,
\end{equation}
and let us treat the $b_i$ as (real) constants for now.
The zeros of $\beta_{\zeta}$, where the scale-dependence of $\zeta$ vanishes, are
\begin{equation}
\label{eq: FPsec3}
\zeta_{*, 1/2}= \frac{-b_1\pm\sqrt{b_1^2-4b_0 b_2}}{2b_2}.
\end{equation}
If $b_1^2-4b_0\, b_2>0$, these fixed points of the RG flow lie at real values which are generically nonzero. One of the two fixed points is IR-repulsive and the other is IR-attractive, as one can infer by calculating the critical exponents
\begin{equation} \label{eq: critexp}
\theta_{1/2}= -\frac{\partial \beta_{\zeta}}{\partial \zeta}\Big|_{\zeta_{\ast, 1/2}}=\mp \sqrt{b_1^2-4b_0\,b_2}.
\end{equation}
The critical exponent encodes whether a fixed point is IR attractive or IR repulsive.
A positive critical exponent signifies that the distance to the corresponding fixed point increases under the RG flow to the IR -- the coupling is a relevant perturbation of this fixed point. In contrast, a negative critical exponent implies that the fixed point is an attractor of the RG flow.
This is clearly visible in Fig.~\ref{fig:fancyplotbeta} where we show selected RG trajectories (blue lines) for~$\zeta(k)$. The fixed point coming with $\theta_i<0$ (red dashed line) acts as an attractor, whereas the fixed point with~$\theta_i>0$ (magenta dotted line) repulses RG trajectories. Therefore, a \textit{universal} prediction arises: The IR-attractive fixed point at~$\zeta_{\ast, 1}$ \emph{focuses} trajectories.
Except for initial conditions\footnote{This follows as an IR-repulsive fixed point shields a certain set of UV initial conditions from the IR-attractive fixed point at lower energies. Specifically, in the case of Fig.~\ref{fig:fancyplotbeta}, any value of~$\zeta(k)<\zeta_{\ast, 2}$, with~$\zeta_{\ast, 2}$ being the IR-repulsive fixed point, is inaccessible from initial conditions~$\zeta(k_{\rm i})>\zeta_{\ast, 2}$.} which lie above~$\zeta_{\ast,2}$, a large range of initial conditions at $k=k_{\rm i}$ is mapped to $\zeta (10^{-10} k_{\rm i})=\zeta_{\ast, 1}$.
Therefore, trajectories with initial conditions below the IR-repulsive fixed point (magenta dotted line) will be focused on the IR-attractive fixed point $\zeta_{\ast, 1}$.
Trajectories starting at $\zeta(k_{\rm i})>\zeta_{\ast, 2}$ are quickly driven towards rather large values of $\zeta$.
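This attractor structure is easily reproduced numerically; the following sketch (reusing \texttt{beta\_coeffs} from the previous sketch, with the constant couplings of Fig.~\ref{fig:fancyplotbeta}) integrates the flow over ten orders of magnitude for several initial conditions in the basin of attraction.
\begin{verbatim}
# Sketch: integrate k d(zeta)/dk = beta_zeta towards the IR, reusing
# beta_coeffs from the previous sketch.
from scipy.integrate import solve_ivp

b0, b1, b2 = beta_coeffs(g=1.0, lam=0.0, a1=0.0, k0=1.0, k2=1.0)
rhs = lambda t, z: b0 + b1*z + b2*z**2          # t = ln(k / k_i)

for zeta_i in (-0.6, -0.1, -0.02):              # inside the basin of attraction
    sol = solve_ivp(rhs, (0.0, -23.0), [zeta_i], rtol=1e-8)
    print(zeta_i, '->', sol.y[0, -1])           # all converge to zeta_{*,1}
\end{verbatim}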
\begin{figure}
\centering
\includegraphics[width=\linewidth]{02FancyPlotF.png}
\caption{\label{fig:fancyplotbeta} We show the beta function (right panel) and the associated flow of $\zeta(k)$ for $g=1$, $a_1=\lambda=0$ and $k_2=k_0=1$ (left panel). The magenta dotted line corresponds to the IR-repulsive fixed point. The blue lines are a sample of RG trajectories obtained from varying the initial condition~$\zeta(k_{\rm i})$. The arrows indicate the direction of the flow, towards the IR. RG trajectories with initial conditions above the magenta line are driven away from the Lorentz-invariant sub-theory space. Conversely, RG trajectories set by initial conditions below the magenta line flow towards the IR-attractive fixed point, i.e., the red dashed line, at low energies.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{03FancyPlotAltFlow.pdf}
\caption{\label{fig:flowplotrunning} We show the flow for the case $g=1$, $\lambda=0$, $a_1=0$, $k_0=1-t^2/20$ and $k_2=1$. The zeros of $\beta_{\zeta}$ acquire a scale dependence through the scale-dependence of the LIV coupling~$k_0$. These pseudo-fixed points (magenta dotted line and red dashed line) approximate the attractors/repulsors of the flow.
}
\end{figure}
Next, we have to understand the situation when the coefficients~$b_0$,~$b_1$ and~$b_2$ are scale dependent due to the scale dependence of the gravitational couplings. In this case, $\zeta_{\ast,1/2} = \zeta_{\ast,1/2}(k)$ become pseudo fixed-points: They are still the solutions to $\beta_{\zeta}=0$, but these solutions are no longer scale-independent. Accordingly, they lose the interpretation as a scale-invariant regime of the theory, but they do keep the interpretation as (scale-dependent) attractors and repulsors of the flow\footnote{Strictly speaking, the attractors/repulsors are not the points $\zeta_{\ast,1/2}(k)$ themselves, but a close-by set of points $\tilde{\zeta}_{\ast,1/2}(k)$, where the slope of $\tilde{\zeta}_{\ast,1/2}(k)$ balances a non-vanishing contribution from $\beta_{\zeta}$. The larger the slope of the beta function compared to the ``speed'' of the points $\tilde{\zeta}_{\ast,1/2}(k)$, the closer~$\tilde{\zeta}_{\ast,1/2}(k)$ lie to the pseudo fixed points $\zeta_{\ast,1/2}(k)$.}. Their effectiveness depends on the speed of the flow -- the derivative of the beta function -- compared to the speed with which the pseudo-fixed-point value changes as a function of the scale.
If the derivative of the beta function is large, then the flow easily follows the IR-attractive pseudo fixed-point, as in the example in~Fig.~\ref{fig:flowplotrunning}. From an appropriate set of initial conditions, the flow is quickly attracted to the vicinity of the IR-attractive pseudo fixed-point, and then converges to it at lower scales. In the other case, where the speed of the flow is slow compared to the rate of change of the pseudo-fixed-point value, the flow cannot follow the pseudo fixed-point and the latter becomes ineffective as an attractor of the flow.
In the following, we will make the assumption that the gravitational couplings change as a function of the RG scale in such a way that the rate of change of the pseudo fixed-point is smaller than the speed of the flow. This ensures that the pseudo fixed-points are effective as attractors/repulsors. For an IR-attractive pseudo fixed point, a large range of UV initial conditions for~$\zeta(k)$ is mapped to a small interval around the pseudo fixed-point (red dashed curve, Fig.~\ref{fig:flowplotrunning}). The latter is the \emph{instantaneous} fixed-point value, i.e., the solution to~$\beta_{\zeta}=0$ with the values of the gravitational couplings at the Planck scale. Therefore, if the initial condition $\zeta(k_{\rm i})$ lies in the basin of attraction of the IR-attractive pseudo fixed-point, the ``history'' of the trajectory, i.e., the scale-dependence of the gravitational couplings above the Planck scale, becomes unimportant. Otherwise, i.e., if $\zeta(k_{\rm i})$ is outside the basin of attraction of the IR-attractive pseudo fixed-point, the corresponding RG trajectory will flow away from the IR-repulsive fixed point (magenta dotted curve, Fig.~\ref{fig:flowplotrunning}), so that $\zeta(k)$ becomes large at low energies.
As the gravitational contributions to the flow turn off at the Planck scale, the flow towards the IR vanishes. Therefore, each value at the Planck scale can be translated into a unique value in the IR, such that the IR value of $\zeta$ is a prediction of the theory. For initial conditions in the basin of attraction of the IR-attractive pseudo fixed point, the IR value of $\zeta$ is a universal\footnote{Universality here does not refer to scheme-independence, as the gravitational contributions depend on the scheme due to the dimensionful nature of the Newton coupling. Universality in our context means the independence from microscopic physics, i.e., initial conditions for the RG flow.} prediction, as in this case the flow ``loses memory'' of the initial conditions: The IR-attractive pseudo fixed point depends on the gravity-LIV couplings of the system, but is independent of the initial value $\zeta(k_{\rm i})$. Thus, changes in the gravity LIV couplings result in a change of the Planck-scale value of $\zeta$, and thereby its low-scale value.\\
In summary, this setup provides us with a map between Planck-scale values of the gravitational couplings and the IR value of $\zeta$. With the help of such a map, strong observational constraints on $\zeta$ can in principle be translated into strong constraints on the gravitational couplings.
These hold in a setting where
\begin{enumerate}
\item a quantum-field theoretic description is applicable beyond the Planck scale,
\item the rate of change of the pseudo fixed point is smaller than the speed of the flow (this can be checked from the beta function, given a particular scale-dependence for the gravitational couplings),
\item the initial condition for $\zeta(k_{\rm i})$ lies in the basin of attraction of the IR fixed point,
\item the additional Standard Model degrees of freedom beyond our setting do not significantly alter the flow of $\zeta$.
\end{enumerate}
As already mentioned, for initial conditions outside the basin of attraction of the IR fixed point, the RG flow generically drives $\zeta$ towards large absolute values. In this case, the IR value of $\zeta$ is not a universal prediction, but depends on the initial condition $\zeta(k_{\rm i})$. Generically, the initial condition~$\zeta(k_{\rm i})=0$ results in
\begin{equation}
\zeta(k) = -\frac{b_0}{b_1} \left(1-\left(\frac{k}{k_{\rm i}}\right)^{b_1} \right),
\end{equation}
which holds for small enough $\zeta$, so that the quadratic term in Eq.~\eqref{eq: betaschem} can be neglected. Parametrically, the value of $\zeta(k)$ is set by the gravitational LIV couplings which enter~$b_0$ and~$b_1$.
To keep $\zeta$ small, it follows that either $b_0\ll1$ with $b_1 \sim \mathcal{O}(1)$, or $b_1\ll1$. The first option requires very small LIV gravity couplings, while the second requires at least one LIV coupling of order 1, which is incompatible with direct constraints \cite{Gumrukcuoglu:2017ijh}, derived under the assumption of photons propagating at the speed of light, see \cite{Jacobson:2001tu,Jacobson:2002ye,Bolmont:2006kk,Ackermann:2009aa,Vasileiou:2013vra,Kostelecky:2013rv,Ellis:2018lca,Abdalla:2019krx}.
Accordingly, we tentatively conclude that strong observational constraints on $\zeta$ are only compatible with Lorentz invariance violation~$\mathcal{O}(1)$ in the gravitational sector under the assumption of very special initial conditions for~$\zeta(k_{\rm i})$.
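The linearized solution above can be cross-checked by direct numerical integration; the coefficient values in the following sketch are illustrative.
\begin{verbatim}
# Cross-check of the linearized solution: integrate
# d(zeta)/d ln k = b0 + b1*zeta from zeta(k_i) = 0 down to k = 10^{-3} k_i.
import numpy as np
from scipy.integrate import solve_ivp

b0, b1 = 0.05, -0.8                                     # illustrative values
t_end = np.log(1e-3)                                    # t = ln(k / k_i)
sol = solve_ivp(lambda t, z: b0 + b1 * z, (0.0, t_end), [0.0], rtol=1e-10)
closed = -(b0 / b1) * (1 - np.exp(b1 * t_end))          # (k/k_i)^{b1} form
print(sol.y[0, -1], closed)                             # the two agree
\end{verbatim}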
\section{Constraints}
\label{sec: constraints}
We now discuss how the considerations in the previous section constrain the quantum gravitational LIV sector. To exemplify this, we work with the beta function in Eq.~\eqref{eq: betazeta}. This beta function was obtained within truncations of the dynamics and further approximations, cf.~App.~\ref{sec: AppFRG}, and is therefore subject to systematic errors.
Further, our results are obtained using a Euclidean setup and we make the assumption that the form of the beta function will be the same in the Lorentzian setup. Due to the absence of a well-defined Wick rotation in quantum gravity, this is an assumption which is not straightforward to test, although the presence of a foliation is a prerequisite for a Wick rotation. Finally, we work within a toy model for the Standard Model which only includes an Abelian gauge sector, but neglects the other Standard Model degrees of freedom as well as the difference between the Abelian hypercharge gauge field and the photon that is due to electroweak symmetry breaking.
Due to the above points, the quantitative limitations of our study should be obvious. Nevertheless, the result that the gravity LIV couplings enter the beta function for matter LIV couplings with numerical prefactors $\mathcal{O}(1)$ should be generic. Therefore, the key result, that constraints on the matter LIV couplings of order $\mathcal{O}(10^{\#})$ constrain gravitational LIV couplings to roughly the same precision, is expected to be generic and is indeed a key point we want to make in this paper.
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{04ExclPlotNGFPk20}
\caption{Exclusion for $\lambda>\lambda_{\rm crit}$ with initial conditions in the basin of attraction of the IR attractive interacting fixed point $\zeta_{*,2}$, for $\lambda=k_2=0$. Left panel: the red region shows the region excluded by demanding that $\zeta_{*,2}<10^{-4}$. The hatched area marks the region which is already excluded by cosmology and the observation of gravitational waves. Right panel: zoom into the only region which can accommodate $\zeta_{*,2}<10^{-4}$ (lighter red areas). The tiny white band corresponds to values of~$k_0$ and~$a_1$ that make~$\zeta_{*,2}$ exactly zero, according to Eq.~\eqref{eq: IntFP}. This region is already excluded by cosmological observations \cite{Gumrukcuoglu:2017ijh}.}
\label{fig: ExclPlotNGFPk20}
\end{figure*}
Since we are interested in small deviations from the Lorentz-invariant subspace, let us start with the Lorentz invariant case, i.e., $k_0=k_2=a_1=0$. In this case the coefficient $b_0$ in Eq.~\eqref{eq: betaschem} vanishes, and $\beta_{\zeta}$ features one Gaussian and one non-Gaussian fixed-point:
\begin{equation}
\label{eq: newFP}
(\zeta_{*,\,1},\zeta_{*,\,2})\big|_{k_0=k_2=a_1=0}=\left(0, -\frac{5 (4 \lambda +21)}{158 \lambda +238}\right)\,,
\end{equation}
with critical exponent
\begin{equation}
\label{eq: critexpsym}
(\theta_1,\theta_2)\big|_{k_0=k_2=a_1=0}=\left(\frac{g (4 \lambda +21)}{12 \pi (1-2 \lambda )^2},-\frac{g (4 \lambda +21)}{12 \pi (1-2 \lambda )^2}\right)\,.
\end{equation}
For non-vanishing, but small LIV couplings $k_0,\,k_2$ and $a_1$, the coefficient $b_0$ in Eq.~\eqref{eq: betaschem} is non-vanishing, shifting the Gaussian fixed point (GFP) $\zeta_{*,\,1}$ to an interacting shifted Gaussian fixed point (sGFP). (The notation $\zeta_{*,\, 1/2}$ in Eq.~\eqref{eq: newFP} was chosen such that $\zeta_{*,\,1}$ always corresponds to the GFP, in contrast to Eq.~\eqref{eq: FPsec3}, where the sGFP can be either of the fixed points, depending on the sign of $b_1$.) For small LIV gravity couplings, the sGFP is a continuous deformation of the GFP in the symmetry-restored case, while the fixed point $\zeta_{*,\,2}$ is always interacting. Therefore, for small LIV gravity couplings, the existence of the sGFP is robust and controlled, as it is a continuous deformation of the free fixed point. The interacting fixed point $\zeta_{*,\,2}$, however, cannot be traced back to a free fixed point. Therefore, its existence might not persist in extended truncations.\\
From Eq.~\eqref{eq: critexpsym} it is evident that the critical exponent of the (s)GFP changes sign at $\lambda_{\rm crit}=-\tfrac{21}{4}$. For $\lambda>\lambda_{\rm crit}$ the sGFP is IR repulsive and the interacting fixed point IR attractive. This situation is illustrated in Figs.~\ref{fig:fancyplotbeta} and \ref{fig:flowplotrunning}.
In the following we investigate both cases, i.e.,~$\lambda>\lambda_{\rm crit}$ and~$\lambda<\lambda_{\rm crit}$.\\
\paragraph{Constraints on LIV gravity couplings for $\lambda>\lambda_{\rm crit}$}\textcolor{white}{b}\\
In the case of $\lambda>\lambda_{\rm crit}$, the sGFP $\zeta_{*,1}$ is IR repulsive, while the interacting fixed point $\zeta_{*,2}$ is IR attractive, as illustrated in Fig.~\ref{fig:fancyplotbeta}. The basin of attraction of~ $\zeta_{*,2}$ contains all values~$\zeta(k)<\zeta_{*,1}$. For all these values it follows that $\zeta(M_{\rm Pl})\approx\zeta_{*,2}$.
To link to experimental constraints,~$\zeta(M_{\rm Pl})$ has to be used as the initial condition for the RG flow below the Planck scale. In our case, the dynamical switching off of gravitational fluctuations, encoded in a quadratic scaling of~$g(k)$ to zero, results in a swift freeze-out of~$\zeta(k<M_{\rm Pl})$.
Therefore, for the exclusion plots in Fig.~\ref{fig: ExclPlotNGFPk20} and Figs.~\ref{fig: ExclPlotPosNGFP}--\ref{fig: k2posexclreg}, we make the rather conservative assumption that the ratio~$\zeta(0)/\zeta(M_{\rm Pl})$ is not smaller than~$1/10$.
Through the map $\{k_0, k_2, a_1, \lambda\} \rightarrow \zeta_{\ast , 2} = \zeta(M_{\rm Pl}) \rightarrow \zeta(k=0)$, we translate the experimental constraints on the LIV coupling $\zeta$ to constraints on $\zeta$ at the Planck scale. In turn, this constrains the gravitational LIV couplings~$k_2$, $k_0$ and $a_1$. We emphasize that the limitations of our study should be kept in mind when interpreting the resulting constraints.
To highlight the typical strength of such constraints, let us investigate the special case of $\lambda=0$. To linear order in the LIV couplings, the IR-attractive fixed point is given by
\begin{equation}
\label{eq: IntFP}
\zeta_{*,2}=\frac{-179025 a_1+766741 k_0+1776274 k_2}{1165248}-\frac{15}{34}\,.
\end{equation}
If we assume experimental constraints $|\zeta| < 10^{-10}$, it is evident from Eq.~\eqref{eq: IntFP} that values for~$k_2,\, k_0$ and~$a_1$ of order~1 are already excluded.
Only specific combinations of $k_0,\,k_2$ and $a_1$ with at least one coupling of $\mathcal{O}(1)$ can satisfy this bound, cf.~Fig.~\ref{fig: ExclPlotNGFPk20}.
This is a direct consequence of the non-vanishing value of~$\zeta_{*,2}\big|_{k_0=k_2=a_1=0}$, which is of~$\mathcal{O}(10^{-1})$. However, as shown in~Fig.~\ref{fig: ExclPlotNGFPk20}, any~$\mathcal{O}(1)$ LIV gravity coupling compatible with~$|\zeta|<10^{-10}$ is already excluded by direct cosmological constraints and by the observational data on gravitational waves \cite{Gumrukcuoglu:2017ijh}. We emphasize that the existence of the IR attractive fixed point $\zeta_{*,2}$ saves the system from an uncontrolled behavior towards the IR, rather than generating the strong constraints. In other words, without the interacting fixed point, the system would be driven to even larger values of couplings in the IR, resulting in even stronger constraints.
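For orientation, Eq.~\eqref{eq: IntFP} can be evaluated directly. The following minimal Python sketch (ours, purely illustrative; the coefficients are those of Eq.~\eqref{eq: IntFP}) shows that $\mathcal{O}(1)$ couplings generically yield $|\zeta_{*,2}|=\mathcal{O}(10^{-1})$, and traces out the line in the $(k_0,a_1)$--plane on which $\zeta_{*,2}$ vanishes for $k_2=0$, i.e., the white band of Fig.~\ref{fig: ExclPlotNGFPk20}:
\begin{verbatim}
# Minimal numerical check of Eq. (IntFP); illustrative sketch only.
def zeta_star_2(k0, k2, a1):
    """IR-attractive fixed point at lambda = 0, linear in the LIV couplings."""
    return (-179025*a1 + 766741*k0 + 1776274*k2)/1165248 - 15/34

print(zeta_star_2(0.0, 0.0, 0.0))  # -15/34 ~ -0.441
print(zeta_star_2(1.0, 0.0, 0.0))  # ~ 0.217: O(1) couplings violate |zeta|<1e-10
# With k2 = 0, zeta_{*,2} = 0 defines a line in the (k0, a1)-plane:
for k0 in (0.0, 0.5, 1.0):
    a1 = (766741*k0 - 1165248*15/34)/179025
    print(k0, a1, zeta_star_2(k0, 0.0, a1))  # last entry vanishes
\end{verbatim}
Demanding $|\zeta_{*,2}|<\zeta_{\rm exp}$ then corresponds to a narrow band around this line.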
If, therefore, the interacting fixed point $\zeta_{*,2}$ turns out to be spurious in future studies, tiny violations of Lorentz invariance in the gravity sector will conflict with constraints in the matter sector, since $\zeta$ will increase very strongly towards the IR.
For initial conditions in the range~$\zeta(k_{\rm i})\geq \zeta_{*,1}$, the flow of~$\zeta(k)$ is governed by the sGFP $\zeta_{*,1}$, which is IR-repulsive. In contrast to the IR-attractive fixed point, the sGFP defocuses trajectories. Hence, $\zeta(k)$ is driven away from $\zeta_{*,1}$ towards lower scales, and no universal bound arises. In this case, the IR value of $\zeta$ is generically too large to stay within the experimental bounds. The set of initial conditions at the scale~$k_{\rm i}$ which allow for a small enough $\zeta (M_{\rm Pl})$ to satisfy strong constraints is very special: Generically, the flow towards larger $|\zeta|$ is rather fast, unless one starts very close to the sGFP. Specifically, for a non-vanishing value $c$ of any of the gravity-LIV couplings, a value $\zeta(10^{-5}k_{\rm i})\sim c$ is generated, starting from the initial condition $\zeta(k_{\rm i})=0$, cf.~Fig.~\ref{fig: RegenFLow}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{05RegenFLow}
\caption{For the case $\lambda=0$ (which is generic for the purposes of this plot), the RG flow over a few orders of magnitude generates~$\zeta(k)\sim c$ from $\zeta(k_{\rm i})=0$, for any non-vanishing gravity LIV coupling of~$\mathcal{O}(c)$. The couplings $k_0$, $k_2$ and $a_1$ not mentioned in the respective label are set to zero.}
\label{fig: RegenFLow}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{06ExclPlotk20Flow}
\caption{For $\lambda=0$, $g=1$ and $k_2=0$, we show the value of~$\zeta(10^{-4}k_{\rm i})$ generated by the RG flow, starting from the initial condition $\zeta(k_{\rm i})=0$ at the transplanckian UV scale $k_{\rm i}$. The colored regions indicate where~$\zeta(10^{-4}k_{\rm i})$ exceeds a certain value. }
\label{fig: k20exclFlow}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{07ExclPlotPosNGFP}
\caption{Exclusion plot for $\lambda<\lambda_{\rm crit}$ with initial conditions close to the IR repulsive interacting fixed point $\zeta_{*,2}$. We display the results for the specific choice $\lambda=-11/2$ (which is generic for the purposes of this argument). The red region marks the area where $|\zeta|>10^{-10}$. The white line is the only allowed region: it is a band with width of twice the bound,~$2\,\zeta_{\rm exp}$, centered on values of~$a_1$ and~$k_0$ that render $\zeta_{*,2}$ exactly zero, cf.~Eq.~\eqref{eq: NGFPIRREP}. The white band lies within the black hatched region, which marks the range of values in the $(a_1,k_0)$--plane already excluded by direct observations \cite{Gumrukcuoglu:2017ijh}.}
\label{fig: ExclPlotPosNGFP}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{09ExclPlotPos}
\caption{Exclusion for~$\lambda<\lambda_{\rm crit}$ and~$k_2=10^{-15}$ with initial conditions in the basin of attraction of the IR attractive sGFP, cf.~\eqref{eq: sGFPbelow}. The dark and light red regions indicate the areas where $\zeta_{*,1}>10^{-20}$ and $\zeta_{*,1}>10^{-10}$, respectively. The white line corresponds to $\zeta_{*,1}=0$, while the black hatched region indicates the area of exclusion by direct observations~\cite{Gumrukcuoglu:2017ijh}.}
\label{fig: k20exclreg}
\end{figure}
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{08ExclPlotNegZero}
\caption{
Exclusion for $\lambda<\lambda_{\rm crit}$ with initial conditions in the basin of attraction of the IR attractive sGFP, cf.~\eqref{eq: sGFPbelow}. The red region marks the area where the value of~$\zeta_{*,1}$ exceeds a certain bound, e.g., $|\zeta|\leq10^{-10}$. The different contours and shades exclude the corresponding area for different values of the assumed bound. The white line indicates $\zeta_{*,1}=0$, and the hatched region marks the area of exclusion by direct observations~\cite{Gumrukcuoglu:2017ijh}. The left panel shows the case $k_2=-10^{-15}$, while the right panel refers to the case $k_2=0$.}
\label{fig: k2posexclreg}
\end{figure*}
This behavior follows from the dependence of the sGFP~$\zeta_{*,1}$ on the gravitational LIV couplings.
For the case $\lambda=0$ (which is generic for the purposes of this argument), it reads
\begin{equation}
\zeta_{*,1}=\frac{-42 a_1+43 k_0-113 k_2}{2016}\,,
\end{equation}
such that very small or very special values for the gravity LIV couplings are necessary to accommodate a small value for the sGFP. From Fig.~\ref{fig: k20exclFlow}, we can estimate how non-generically the initial conditions have to be chosen above the Planck scale to accommodate the strong bounds at lower scales: Starting from $\zeta(k_{\rm i})=0$, a flow over
four orders of magnitude is sufficient to regenerate a non-vanishing value of $\zeta$. Imposing constraints on the IR value of $\zeta$, as they arise from the corresponding observations on $\zeta$ in the full Standard Model, would result in the conclusion that these initial conditions are excluded, except for very special points in the gravitational parameter space, i.e., exactly on the fixed point itself or tiny deviations around it.\\
\paragraph{Constraints on LIV gravity couplings for $\lambda<\lambda_{\rm crit}$}\textcolor{white}{b}\\
We now focus on the case of $\lambda<-\tfrac{21}{4}$. We emphasize that $\lambda$ pertains to the microscopic value of the dimensionless cosmological constant.
Quantum fluctuations of matter might drive the cosmological constant to positive values in agreement with observations in the IR, starting from initial conditions at negative $\lambda$ in the UV~\cite{Dona:2013qba}.
For a specific realization in the case of Lorentz-invariant gravity, see~Fig.~4 in~\cite{deAlwis:2019aud}, where one explicit trajectory in the approximation of~\cite{Dona:2013qba} was employed.
For $\lambda<\lambda_{\rm crit}$, the interacting fixed point~$\zeta_{*,2}$ is IR repulsive and therefore shields all initial conditions with~$\zeta(k_{\rm i})<\zeta_{*,2}$ from a phenomenologically viable regime. Specifically, small deviations from~$\zeta_{\ast,2}$ at~$k_{\rm i}$ will increase towards lower scales. In a similar manner to the case analyzed in the previous section, this results in strong constraints on the gravitational LIV couplings.
For the case of initial conditions exactly on the fixed point, a small value of~$\zeta(M_{\rm Pl})$ can still be achieved. The white line in Fig.~\ref{fig: ExclPlotPosNGFP} shows where~$\zeta(M_{\rm Pl})=0$ can be satisfied. The allowed region for~$|\zeta_{*,\,2}|<\zeta_{\rm exp}$ with some experimental bound~$\zeta_{\rm exp}$ corresponds to a band with the width of~$2\,\zeta_{\rm exp}$ around the white line. As can be understood from the linear expansion of this fixed point for the generic choice of $\lambda=-11/2$,
\begin{equation}
\label{eq: NGFPIRREP}
\zeta_{*,2}\approx-0.317 a_1-0.597 k_0-7.518 k_2-0.00792\,,
\end{equation}
this cancellation can only happen for~$-a_1 \sim \mathcal{O}(10^{-2})$ (e.g., $a_1\approx -0.025$ for $k_0=k_2=0$), which is excluded by observations~\cite{Gumrukcuoglu:2017ijh}.
For initial conditions with~$\zeta(k_{\rm i})>\zeta_{*,2}$, the RG trajectories are focused by the IR attractive sGFP $\zeta_{*,1}$ towards a universal value~$\zeta(M_{\rm Pl})\approx\zeta_{*,1}(M_{\rm Pl})$. This universal value depends on the LIV couplings~$k_0,\,k_2$ and~$a_1$.
In turn, this allows to translate the constraints on~$\zeta(M_{\rm Pl})$ into strong constraints on~$k_0,\,k_2$ and~$a_1$.
We use the observational constraints on~$\zeta(0)$ to constrain~$\zeta(M_{\rm Pl})$ in our toy model. With the corresponding caveats in mind, we extract bounds on the gravitational LIV couplings, as shown in the exclusion plots~Figs.~\ref{fig: k20exclreg} and~\ref{fig: k2posexclreg}. There, we
combine the constraints on the gravity LIV couplings coming from the strong constraints on~$\zeta_{*,1}$ with the existing constraints from cosmology and the observation of gravitational waves from a neutron-star merger with electromagnetic counterpart \cite{TheLIGOScientific:2017qsa,GBM:2017lvd,Monitor:2017mdv}. We focus on the $(k_0,a_1)$--plane for different values of $k_2$, since the observation of gravitational waves leads to the strong constraint~$|k_2|<10^{-15}$~\cite{Gumrukcuoglu:2017ijh}\footnote{Strictly speaking, this bound is obtained from the LIGO data \cite{Monitor:2017mdv} under the assumption that the speed of photons remains unchanged, i.e., $v_{\gamma}=1$. Neglecting the difference between the Abelian gauge field $A_{\mu}$ in our work and photons, in the present context the photons are expected to propagate with $v_{\gamma}=1+C\,\zeta$, with $C$ a constant, due to the presence of the LIV coupling $\zeta$. Therefore, the observation of gravitational waves from a neutron-star merger with electromagnetic counterpart leads to a constraint $|k_2-C\,\zeta|<10^{-15}$. While for $k_2=0$ this would actually constrain the value of $\zeta$, we do not use this constraint, to emphasize the difference between our toy model containing the Abelian hypercharge and the measurement involving the photon.}.
For the generic choice of $\lambda=-11/2$, the IR attractive sGFP to linear order in the gravity LIV couplings reads
\begin{equation}
\label{eq: sGFPbelow}
\zeta_{*,1}=\frac{186 a_1+325 k_0+4297 k_2}{576}\,.
\end{equation}
Hence, for~$k_2=\pm 10^{-15}$, the viable region with~$|\zeta_{*,1}|<\zeta_{\rm exp}$, where~$\zeta_{\rm exp}$ is the experimental bound, is a band with width of~$2\zeta_{\rm exp}$ in the $(a_1,k_0)$--plane,~cf.~Figs.~\ref{fig: k20exclreg} and~\ref{fig: k2posexclreg}. We emphasize that the fixed point $\zeta_{*,1}$, which leads to a universal value of $\zeta$ at the Planck scale, goes over into the GFP in the limit of vanishing LIV couplings. Therefore, its existence is expected also beyond the present truncation, such that the qualitative features of the above analysis are expected to carry over to extended truncations. Furthermore, if we assume the interacting fixed point $\zeta_{*,2}$ to be an artifact of the truncation, the region~$\zeta(k_{\rm i})<\zeta_{*,2}$ is not shielded from the phenomenologically viable region. Consequently, any value of $\zeta(k)$ would lie in the basin of attraction of the IR attractive FP $\zeta_{*,1}$, leading to constraints on the gravity LIV couplings.
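At fixed $k_0$ and $k_2$, the condition $|\zeta_{*,1}|<\zeta_{\rm exp}$ translates, via Eq.~\eqref{eq: sGFPbelow}, into an interval of allowed values of $a_1$ of width $2\cdot 576\,\zeta_{\rm exp}/186\approx 6.2\,\zeta_{\rm exp}$. A minimal Python sketch (ours, purely illustrative) makes this explicit:
\begin{verbatim}
# Band |zeta_{*,1}| < zeta_exp in the (a1, k0)-plane; sketch for Eq. (sGFPbelow).
def zeta_star_1(a1, k0, k2):
    return (186*a1 + 325*k0 + 4297*k2)/576

def allowed_a1(k0, k2, zeta_exp=1e-10):
    """Edges of the allowed a1-interval at fixed (k0, k2)."""
    center = -(325*k0 + 4297*k2)/186
    half_width = 576*zeta_exp/186        # ~3.1*zeta_exp
    return center - half_width, center + half_width

lo, hi = allowed_a1(k0=1e-3, k2=1e-15)
print(hi - lo)                                    # band width ~6.2e-10
print(zeta_star_1(0.5*(lo + hi), 1e-3, 1e-15))    # ~0 at the band center
\end{verbatim}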
\section{Modified dispersion relations}
\label{sec: HigherOrders}
The experimental study of Lorentz symmetry violation often proceeds by constraining each term in the SME separately. Within a given theoretical setting, these terms are typically not independent. This is important in light of the fact that leading order, marginal couplings are typically much simpler to constrain experimentally. In contrast, higher-order terms are generically Planck-scale suppressed, and therefore hard to strongly constrain by observations. Yet, within a given theoretical setting, consistency conditions link these couplings. These conditions can be derived from their beta functions. To exemplify this idea, let us explore terms which modify the dispersion relation for propagating Abelian gauge fields.
Due to their canonical mass dimension, the corresponding couplings are expected to feature an IR-attractive (shifted) Gaussian fixed point. Accordingly, their IR value is a universal prediction of the theory in the same way as exploited in the previous sections.
As an example, we consider the higher-order operator~$\bar{\kappa}\,\,n_{\alpha} n_{\beta}D^{\alpha}D^{\beta}\,F^{\mu\nu}F_{\mu\nu}$. Such a term gives rise to a higher-order dependence on the energy in the dispersion relation, i.e.,
\begin{equation}
\vec{p}^2 = E^2 + \frac{\kappa}{M_{\rm Pl}^2}E^4.
\end{equation}
For photons, such modifications have received significant observational interest, see, e.g., \cite{Jacobson:2001tu,Jacobson:2002ye,Bolmont:2006kk,Ackermann:2009aa,Vasileiou:2013vra,Ellis:2018lca,Abdalla:2019krx} and references therein.
Within our setup, the cubic term, explored in an EFT setting, e.g., in \cite{Myers:2003fd}, can consistently be set to zero. We focus on the dimensionless counterpart of the coupling $\bar{\kappa}$, i.e., $\kappa = \bar{\kappa}\,k^2$, cf.~the dimensional term in Eq.~\eqref{eq: betaHO} below.
Instead of considering the full beta function for $\kappa$, we limit our study to the \emph{inducing} term,
i.e., the analog of $b_0$ in Eq.~\eqref{eq: betaschem}. It reads
\begin{eqnarray}
b_{0,\,\kappa}&=& g\,\bigg(\frac{815 a_1-179 k_0-1847 k_2}{1080 \pi (1-2 \lambda )^2}\notag\\
&{}&\hspace{18pt}+\frac{230 a_1+262 k_0+1021 k_2}{540 \pi (1-2 \lambda )^3}\bigg)\notag\\
&{}&+\zeta\, g\, \bigg(-\frac{4}{3 \pi (1-2 \lambda )}+\frac{1685 a_1-1905 k_0+1204 k_2}{2160 \pi (1-2 \lambda )^3}\notag\\
&{}&\hspace{28pt}-\frac{3167 a_1-243 k_0+280 k_2+5760}{2160 \pi (1-2 \lambda )^2}\bigg)\notag\\
&{}&+\zeta ^2\, g\,\bigg(
\frac{9}{4 \pi (1-2 \lambda )}+\frac{-17277 a_1+6809 k_0+9807 k_2}{5400 \pi (1-2 \lambda )^3}\notag\\
&{}&\hspace{27pt}+\frac{24819 a_1+3797 k_0-5109 k_2+32580}{10800 \pi (1-2 \lambda )^2}\bigg).
\label{eq: b0HO}
\end{eqnarray}
The first line shows that $\kappa$ is induced by gravitational fluctuations in the presence of the LIV couplings $k_2, k_0, a_1$. The following lines highlight that $\kappa$ is also induced once a finite $\zeta$ is present. Lorentz symmetry breaking therefore percolates from the gravitational to the matter sector, but also spreads within the matter sector, once a ``seed" in the form of one nonvanishing LIV matter coupling is present.
Including the term linear in $\kappa$, which also accounts for the canonical mass dimension of $\kappa$, the beta function reads
\begin{equation}
\beta_{\kappa}=b_{0,\,\kappa}+2\,\kappa+b_{1,\,\kappa}\,\kappa\,,
\label{eq: betaHO}
\end{equation}
where the second term is the contribution due to the canonical mass dimension of $\bar{\kappa}$.
From Eq.~\eqref{eq: betaHO}, the fixed-point value of $\kappa$ is given by
\begin{equation} \label{eq: kappastar}
\kappa_{*} = -\frac{b_{0,\, \kappa}}{2+b_{1,\,\kappa}}.
\end{equation}
This relation holds under the self-consistency assumption that~$\kappa_{\ast}\ll1$, as then terms quadratic in $\kappa$ can be neglected in $\beta_{\kappa}$. The critical exponent of this fixed point~\eqref{eq: kappastar}~is $\theta=-2-b_{1,\,\kappa}$. This fixed point is IR attractive, as long as metric fluctuations remain near-perturbative, i.e., $|b_{1,\,\kappa}|<1$.
Due to the constant offset in the denominator, the fixed-point value $\kappa_*$ is parametrically set by the fixed-point value for $\zeta$ and the values of~$k_0, \,k_2$ and~$a_1$. Following the same logic as in the previous section, the value of $\kappa$ at the Planck scale corresponds to the fixed-point value~$\kappa_\ast$. Below the Planck scale, gravitational fluctuations turn off dynamically, resulting in a vanishing flow for $\kappa$ except for the dimensional term. This has the simple solution
\begin{equation}
\kappa(k<M_{\rm Pl}) = \kappa_{*} \left(\frac{k}{M_{\rm Pl}}\right)^2.
\end{equation}
For the dimensionful counterpart $\bar{\kappa}$, this implies
\begin{equation}
\bar{\kappa}(k<M_{\rm Pl}) = \frac{\kappa_{*}}{M_{\rm Pl}^2}.
\end{equation}
Let us briefly compare this with experimental constraints on the quartic term in the dispersion relation for photons, which constrains the dimensionless coupling to be $|\kappa_{\rm exp}|<10^6$ \cite{Kostelecky:2013rv}, see also \cite{Vasileiou:2013vra}.
In contrast, a significantly stronger indirect constraint can be obtained by choosing $\zeta,\, k_2,\, k_0$ and~$a_1$ such that they satisfy the corresponding constraints but maximize $b_{0,\, \kappa}$. With the exception of very special points in the parameter space, this generically constrains $b_{0,\, \kappa}$ to about the same order as $\zeta$ itself. Conversely, within a setting described by our toy model, we would not expect direct searches for $\kappa$ to result in a detection, unless a rather significant improvement were achieved in future observations. We emphasize that the above analysis, especially the restriction to the inducing contribution to $\beta_{\kappa}$, is only the first step in the analysis of effects of the higher-order coupling $\kappa$. However, the qualitative feature of the sGFP $\kappa_{*}$, i.e., the form of Eq.~\eqref{eq: kappastar}, will remain unchanged under the consideration of the full $\beta_{\kappa}$, since any direct contribution will contribute to $b_{1,\, \kappa}$ or $b_{2,\, \kappa}$. Therefore, the qualitative feature that $\kappa_*$ serves as an IR attractor, with its value parametrically set by $\zeta_*$, already follows from the analysis of the inducing term $b_{0,\, \kappa}$.
\section{Conclusions and outlook}
\label{sec: Concl}
Probing the quantum nature of gravity observationally is notoriously difficult. The interplay of quantum gravity with matter could become a key stepping stone for progress in this direction: At the microscopic level, this interplay could determine properties of elementary particles that are accessible to experiments at lower energies. Thereby, low-energy (sub-Planckian) measurements could constrain transplanckian physics.
This idea to use matter as a ``magnifying glass" for the quantum properties of spacetime underlies part of the swampland-program in string theory, as well as a similar program within the asymptotic-safety approach. Here, we highlight the potential power of such considerations for Lorentz-invariance violating gravity-matter models.
The key idea underlying this paper is the following: Quantum fluctuations of gravity that only respect foliation-preserving diffeomorphisms generate Lorentz-invariance violating interactions for matter, in our case parameterized by the scale dependent coupling $\zeta(k)$. Within the toy model we consider and within our approximation of the dynamics,
the corresponding beta function features an infrared-attractive fixed point. Its value is determined by the gravity-LIV couplings. Under appropriate conditions, spelled out in this paper, it governs the scale dependence of $\zeta$.
Due to its infrared-attractive nature, it results in a \emph{universal} value of~$\zeta$ at the Planck scale, which is independent of the initial conditions for~$\zeta(k_i)$ at the high-energy scale $k_i$, but depends on the values of the gravity-LIV couplings. The Planck-scale value of $\zeta$ can be mapped to its low-energy value by the standard RG flow without gravity. At low energies, experimental constraints on LIV-matter couplings exist. Such low-energy experimental bounds also indirectly constrain the values attained by the LIV-matter couplings at Planckian scales. As the latter Planck-scale values depend on the LIV gravitational couplings, experimental bounds on LIV-matter couplings can be translated into constraints on the LIV couplings of the gravitational sector. Moreover, as observational constraints on matter-LIV couplings are rather strong for marginal couplings, this mechanism can provide constraints on the gravity-LIV couplings which are significantly stronger than the direct observational constraints.
To support this general idea, we have performed a study of an Abelian gauge field coupled to gravity with foliation-preserving diffeomorphisms only. Our study has the following technical limitations: It is performed in a truncation of the full dynamics, as the RG flow generates additional terms. We do not account for their feedback. This results in a systematic uncertainty of our results. Further, we work in a Euclidean setup in order to apply FRG techniques. The presence of a foliation should ensure that a Wick-rotation exists. Finally, we work in a toy model for the photon-gravity system: We do not account for the additional matter degrees of freedom of the Standard Model, and neglect electroweak symmetry breaking which implies that the U(1) gauge field relevant at high energies is not the same as the photon of electromagnetism. Bearing these limitations in mind, our study supports the general idea explained above.
Specifically, we have shown that the breaking of Lorentz symmetry in the gravitational sector automatically percolates into the matter sector. This result is entailed in the $\zeta$-independent part of the beta function for $\zeta$, cf.~Eq.~\eqref{eq: betazeta}. This term measures the ``amount of LIV'' in the gravitational sector that impacts the matter sector, and it vanishes if gravity retains full diffeomorphism invariance. Due to this term, $\zeta=0$ is no longer a fixed point of the RG flow. Thus, a non-vanishing $\zeta$ is generated by the flow, even if it is set to zero at some initial scale. Consequently, Lorentz symmetry violation necessarily percolates from the gravitational to the matter sector.
Furthermore, as we have shown within our approximation, the beta function for~$\zeta$ always features an IR-attractive fixed point, that can be reached from a wide range of initial conditions in the far UV, i.e., at transplanckian scales. Therefore, we can remain agnostic about the ultimate UV completion of the theory: As long as it sets initial conditions for the couplings within the appropriate range, there will be a \emph{universal} Planck-scale value of~$\zeta$, corresponding to the IR-attractive fixed point of its RG flow.
In this case, the value of $\zeta$ at the Planck scale is fully determined by the values of the LIV couplings in the gravitational sector, i.e., $k_0, k_2$ and $a_1$ (cf.~Eq.~\eqref{eq: breakingaction}), as well as the dimensionless cosmological constant $\lambda$. The RG flow below the Planck scale is trivial in our setting, where gravitational fluctuations switch off dynamically, resulting in $\zeta(k=0)\approx \zeta(M_{\rm Pl}).$
To exemplify the constraining power of such a fixed point, we translate the stringent experimental bounds on the actual photon-LIV coupling, cf.~Tab.~\ref{tab: LIVMatterConstraints}, into bounds on the gravity LIV couplings $k_0,k_2$ and $a_1$ using the fixed-point relation. Note that for quantitatively robust constraints, this should be repeated in an extended study accounting for the presence of additional degrees of freedom and reducing systematic uncertainties by working within extended truncations. We highlight that if we nevertheless used the fixed-point relation for $\zeta$ that arises from our calculation, even the least stringent observational bound on $\zeta$ would exclude an additional area in the parameter space spanned by the gravity-LIV couplings that is not excluded by the observation of gravitational waves, the BBN and ppN constraints. This highlights the power of an IR-attractive fixed point which is related to the breaking of some symmetry: If this symmetry breaking is strongly constrained in one sector of the system, an IR-attractive fixed point in this sector can be used to translate observational bounds into constraints on the other sector. A future analysis of the RG flow of the combined gravity-matter system, including the scale dependence of the gravity LIV couplings, would allow us to identify intervals of initial conditions for the gravitational LIV couplings for which the scenario presented in this paper is valid.
Finally, we have also shown that a higher-order LIV coupling $\kappa$ -- related to a modification of the dispersion relation -- is induced in the same way as $\zeta$ is induced. Due to its canonical mass dimension, it is expected to feature an IR attractive fixed point, whose value is parametrically of the same size as the fixed-point value of $\zeta$. Therefore, constraints on $\zeta$ restrict the possible values of $\kappa_*$. Due to their Planck-scale suppression, such irrelevant couplings are experimentally less strongly constrained. We therefore estimate that the parametric dependence of $\kappa(M_{\rm Pl})$ on $\zeta(M_{\rm Pl})$, together with the strong constraints on $\zeta$, could result in strong, indirect constraints on $\kappa$. A more extended analysis, including the entire beta function of $\kappa$ and an analysis of the fixed-point structure, could confirm this expectation.\\
\emph{Acknowledgements:} We thank B.~Knorr and S.~Lippoldt for insightful discussions. This work is supported by the DFG under grant no.~Ei-1037/1, and A.~E.~is also partially supported by a visiting fellowship at the Perimeter Institute for Theoretical Physics. This
research is also supported by the Danish National Research
Foundation under grant DNRF:90. A.~P.~is supported by the Alexander von Humboldt Foundation and M.~S.~is supported by a scholarship of the German Academic Scholarship Foundation. A.~E.~would like to acknowledge the contribution of the COST Action CA18108: QG-MM (Quantum Gravity phenomenology in the Multi-Messenger approach). M.S.~gratefully acknowledges the hospitality at CP3-Origins during the final stages of this work.
The transition dynamics in a quantum two-level system with a time-dependent
Hamiltonian
varying such that the energy separation of the two diabatic states is a linear function
of time has been
a central problem since the early days of quantum mechanics. It is commonly
denoted as the Landau-Zener (LZ) problem, although it has been solved
independently
by Landau \cite{LZLa1932}, Zener \cite{LZZe1932}, St\"uckelberg \cite{LZSt1932}
and Majorana \cite{LZMa1932} in 1932 (for a more detailed discussion of the
differences between the
four approaches, we refer the interested reader to a paper by Di Giacomo and
Nikitin \cite{Giacomo}).
Nonadiabatic transitions at avoided level crossings are at the heart of
many dynamical processes throughout physics and chemistry. They have been
extensively studied both theoretically and experimentally in, e.g., spin flips
in nanomagnets \cite{Wernsdorfer}, solid state artificial atoms
\cite{LZSi2006,LZBe2008,LZKi2008}, nanocircuit QED \cite{QED1,QED2},
adiabatic quantum computation \cite{AQC}, the dynamics of chemical
reactions \cite{Nitzan}, and in Bose-Einstein condensates in optical
lattices \cite{Zenesini2009}. Also the nonequilibrium dynamics of glasses
at low temperatures is dominated by the transition dynamics of avoided level
crossings~\cite{TSGLNEQRo2003,TSGLNEQLu2003,TSGLNEQNa2004,TSGLNEQNa2005}.
In the pure Landau-Zener problem, two quantum states interact via a constant
tunneling
matrix element $\Delta_0$. A control parameter is swept through the avoided
level crossing at a constant velocity $v$, such that the
energy gap between the two diabatic states depends linearly on time.
The Landau-Zener problem addresses the
case when the system starts in the lower energy eigenstate in the infinite past
and asks for the probability of
finding the system in the lower energy eigenstate in the infinite future
(a Landau-Zener transition). Certainly, for infinitely slow variation of the
energy difference $v \to 0$, the adiabatic theorem states that no transition
between energy eigenstates will occur, since at any moment of time,
the system will always be in an instantaneous eigenstate of
the Hamiltonian.
For $v\ne 0$, the probability $P_0$ for no transition~\cite{foot1} is
described by the Landau-Zener formula \cite{LZLa1932,LZZe1932,LZSt1932,LZMa1932}
$P_0(v,\Delta_0) = 1 - \exp[ -\pi\Delta_0^2/(2v)]$.
Most experimental investigations, on the one hand, actually deal with
systems more complex than two-state systems and, on the other
hand, do not necessarily start in the ground state.
In close proximity to the avoided crossing, in most cases only
the two crossing states are needed to describe the dynamics.
However, in such an approximate two-state description we generally also have to deal
with the system being initially in the excited state.
In the pure Landau-Zener problem, the probabilities to end up in the
ground (excited) state when starting in the ground (excited) state
are identical due to symmetry.
However, in any physical realization, a quantum system is influenced by
its environment leading to relaxation and phase decoherence during time
evolution \cite{WeissBuch,SpiBoLe1987}.
At low temperatures, when the environmental fluctuations are not thermally
occupied, the Landau-Zener probability $P$ (to end up in the ground state
when starting in the ground state) should hardly be influenced, since no
phonons are available that could leave the system in the excited state at
asymptotically long times. However,
spontaneous emission is possible even at lowest temperatures and the
excitation survival probability $Q$ (to end up in the excited state
when starting in the excited state) is expected to decrease when the
system has the time to relax during the driving. Thus especially at
low driving speeds, the excitation survival probability will be reduced
since the system will decay, while the pure Landau-Zener mechanism predicts
full survival of excitations when driving through an avoided crossing
at low speeds.
The dissipative Landau-Zener problem received a lot of
attention~\cite{LZKaI1984,LZKaII1984,LZAo1991,LZWu2006,LZSa2007,LZKa1998,
LZPo2007,LZNa2009} in the past 25 years due to its relevance for controlled
quantum state preparation which became experimentally feasible in many physical
realizations. Although the full problem is analytically unsolved, many limiting
cases are analytically tractable and numerical approaches are available. Usually
a dissipative environment causes fluctuations of the energies of the diabatic
states (longitudinal or diagonal system-bath coupling) but occasionally the
environments can also cause transitions between the diabatic states (transversal
system-bath coupling). In this work, we focus on the more common case of longitudinal
coupling. The transversal coupling has been treated in Refs.\
\cite{LZAo1991,LZWu2006} for zero temperature.
In the limit of very fast sweeps (nonadiabatic driving), a thermal heat bath has
been
shown not to influence the Landau-Zener probability. This follows consistently
from all approaches (for the corresponding limiting conditions of the other free
parameters)~\cite{LZKaI1984,LZKaII1984,LZAo1991,LZWu2006,LZSa2007,LZKa1998,
LZPo2007,LZNa2009}.
At low temperatures, the Landau-Zener (or, equivalently, ground state survival)
probability $P$
(to end up in the ground state when starting in the ground state) is only
weakly influenced by a diagonally coupled bath. This was first shown by Ao and
Rammer~\cite{LZAo1991}.
Kayanuma~\cite{LZKaI1984,LZKaII1984} had shown earlier that in the limit of slow
fluctuations
of the diabatic energies (as might be caused by a diagonally coupled bath at
low temperatures),
the Landau-Zener probability is not modified. At zero temperature, a diagonally
coupled bath
has strictly no influence on the Landau-Zener probability, as has been shown by
Wubs {\em et al.\/}~\cite{LZWu2006}. This is true not only for bosonic but also for spin
environments~\cite{LZSa2007}.
In contrast, transversely coupled baths lower the Landau-Zener probability
even at zero temperatures~\cite{LZWu2006} and their effects depend on the
details of the bath characteristics~\cite{LZSa2007}.
The strong damping limit can be reached in two ways. For large coupling
between system and bath and finite temperatures, Ao and Rammer~\cite{LZAo1991}
have shown that again a diagonally coupled bath does not influence the Landau-Zener
probability. Conversely, in the high temperature limit under adiabatic conditions
(i.e., slow sweeps), the two states are driven to equal population, $P=P_{\rm
SD}=\halb(1-\exp(-\pi\Delta_0^2/v))$. This was first shown by
Kayanuma~\cite{LZKaI1984,LZKaII1984} for large and fast fluctuations and later
by Ao and Rammer~\cite{LZAo1991}. Pokrovsky and Sun~\cite{LZPo2007} have extended
Kayanuma's result to transversely coupled environments.
For the experimentally important parameter range of slow (adiabatic) sweeps and
intermediate temperatures but only weakly coupled environments, Pokrovsky and
Sun~\cite{LZPo2007}, Kayanuma and Nakayama~\cite{LZKa1998} and Ao and
Rammer~\cite{LZAo1991} each give approximate solutions for the influence of the
environmental fluctuations on the Landau-Zener probability. Using the numerically exact
quasiadiabatic propagator path-integral (QUAPI) technique,
we recently described the Landau-Zener probability in
the full parameter space~\cite{LZNa2009}. For small sweep velocities and medium
to high temperatures, we have discovered non-monotonic dependencies on the sweep
velocity, temperature, coupling strength and cut-off frequency which were not
captured by the previous approximate solutions. This behavior can be understood
in simple physical terms as a nontrivial competition between relaxation and
Landau-Zener driving.
The direct influence of environmental fluctuations on the dynamics of a driven
two-state system is much more pronounced when the system is initially prepared in the
excited state, since spontaneous emission is possible even at zero temperature.
Accordingly,
the excitation survival probability $Q$ (to end up in the excited state having started
in the excited state), which without an environment is strictly identical
to the Landau-Zener probability, is strongly modified for all temperatures and
bath coupling strengths~\cite{LZAo1991,LZKa1998}. Although experimentally
highly relevant, the excitation survival probability has received much less
attention. In the limit of fast (nonadiabatic) sweeps, no influence of a bath is
expected~\cite{LZAo1991,LZKa1998}. In the high temperature limit under adiabatic
conditions (slow sweeps), the two states are again driven to equal population,
$Q_{\rm SD}=P_{\rm SD}=\halb(1-\exp(-\pi\Delta_0^2/v))$~\cite{LZAo1991,LZKa1998}.
Thus, in both limits the excitation survival probability is identical to
the Landau-Zener probability, $Q=P$. However, this is quite different at
intermediate and low temperatures since spontaneous emission drastically changes
the excitation survival probability. Ao and Rammer~\cite{LZAo1991} were the
first to remark that in the limit of weak system-bath coupling, a
discontinuity occurs. In the limit of strong coupling and low
temperature, Kayanuma and Nakayama~\cite{LZKa1998} find another simple
analytical expression, $Q_{\rm sc} =P_0(1-P_0)$ with $P_0$ being the Landau-Zener
probability without environment. Both Kayanuma and Nakayama~\cite{LZKa1998}
and Ao and Rammer~\cite{LZAo1991} give approximate solutions to the experimentally
important parameter range of slow (adiabatic) sweeps and intermediate
temperatures, but only weakly coupled environments. Nevertheless,
this parameter range is still largely unexplored.
Beyond these direct approaches, the dissipative Landau-Zener problem was
discussed in many more facets. Not being able to give a full review of all
works, we just mention in passing that Moyer
discussed the Landau-Zener problem for decaying states involving complex energy {\it
eigenvalues\/}~\cite{LZMo2001}. Extensions to three-state systems~\cite{LZSa2002}
or circuit QED problems~\cite{LZZu2008} have been made recently.
Finally, we also mention that spin environments in the context of Landau-Zener transitions
have been discussed by Garanin et al.~\cite{LZGa2008}.
In this paper, we investigate the dissipative Landau-Zener problem in the full
parameter range of sweep velocities, temperatures, damping strengths and cut-off
frequency by means of the quasiadiabatic propagator path-integral (QUAPI)
\cite{QUAPI1,QUAPI2,QUAPI3,QUAPI4,QUAPI5}.
It allows us to include nonadiabatic as well as non-Markovian effects, yielding
numerically exact results.
In the next section we introduce the basic model. In the third section we
discuss the time-dependent occupation probability of the two states during a
Landau-Zener transition and in the following section we focus on the asymptotic populations
discussing the Landau-Zener (ground state survival) and the excitation survival
probabilities as functions of sweep velocity, temperature, damping strength
and cut-off frequency. We show that interesting features arise due to a competition
between the time scales associated with the Landau-Zener sweep and with dissipative transitions.
Finally, we conclude with a short summary.
\section{Model}
A quantum mechanical two-state system which shows an avoided energy
level crossing when driven is described
by the Landau-Zener Hamiltonian ($\hbar=1$)
\be
H_{LZ}(t) = \frac{\Delta_0}{2}\sigma_{x} +\frac{vt}{2} \sigma_z \, ,
\ee
with the tunneling matrix element $\Delta_0$ and the energy gap between the
diabatic states $vt$, changing linearly in time with sweep velocity $v$.
Here, $\sigma_{x,z}$ are Pauli matrices and the diabatic states are the
eigenstates
($|\downarrow\rangle$ and $|\uparrow\rangle$)
of $\sigma_z$. Asymptotically at times $|t|\gg\Delta_0/v$,
the diabatic states coincide with the momentary eigenstates of $H_{LZ}$.
Figure~\ref{figH1} plots the eigenenergies of $H_{LZ}$ (full lines) which show
an avoided level crossing with minimal splitting $\Delta_0$ and the energies of
the diabatic states (dashed lines) which form an exact crossing as a function of
time.
The Landau-Zener problem asks for the probability $P_0$ of the system to end up
in the ground state at $t=+\infty$, having started in the ground state at
$t=-\infty$ (the corresponding one to end in the excited state is given as
$1-P_0$). Its exact solution dates back to the year 1932
\cite{LZLa1932,LZZe1932,LZSt1932,LZMa1932} and is given by
\be
\hspace*{-1mm}P_0(v,
\Delta_0)=|\langle\uparrow(\infty)|\downarrow(-\infty)\rangle|^2 = 1 -
\exp\left( -\frac{\pi\Delta_0^2}{2v} \right)
\ee
The excitation survival probability $Q_0$ to end up in the excited state at
$t=+\infty$, having started in the excited state at $t=-\infty$ is strictly
identical, $Q_0=P_0$ for the coherent two-state problem.
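This identity, and the formula itself, are easily checked numerically. The following minimal Python sketch (ours, purely illustrative; $\hbar=1$, $\Delta_0=1$, and a large but finite time window replacing $t=\pm\infty$) integrates the Schr\"odinger equation for $H_{LZ}(t)$ starting in $|\downarrow\rangle$ and compares with the exact formula:
\begin{verbatim}
# Numerical check of the Landau-Zener formula (illustrative sketch).
import numpy as np
from scipy.integrate import solve_ivp

Delta0, v, T = 1.0, 0.5, 400.0       # window [-T/2, T/2] mimics t = -inf..+inf

def rhs(t, y):                       # i d psi/dt = H_LZ(t) psi, split re/im
    psi = y[:2] + 1j*y[2:]
    H = np.array([[ v*t/2,  Delta0/2],
                  [Delta0/2, -v*t/2]])
    dpsi = -1j*(H @ psi)
    return np.concatenate([dpsi.real, dpsi.imag])

psi0 = np.array([0.0, 1.0, 0.0, 0.0])            # |down> at t = -T/2
sol = solve_ivp(rhs, [-T/2, T/2], psi0, rtol=1e-9, atol=1e-11)
psiT = sol.y[:2, -1] + 1j*sol.y[2:, -1]
P_num = abs(psiT[0])**2                           # |<up|psi(+T/2)>|^2
P_exact = 1 - np.exp(-np.pi*Delta0**2/(2*v))
print(P_num, P_exact)    # agree up to finite-window corrections
\end{verbatim}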
To include environmental fluctuations on Landau-Zener transitions, we couple
$H_{LZ}$ diagonally to a harmonic bath \cite{WeissBuch,SpiBoLe1987}, yielding
\be H(t) = H_{LZ}(t) -\frac{\sigma_z}{2}\sum_k\lambda_k (b_k+b_k^\dagger)
+\sum_k \omega_k \left(b_k^\dagger b_k+\halb\right)
\ee
with the bosonic annihilation/creation operators $b_k/b_k^\dagger$.
The bath influence is captured by the spectral function, for which we choose
here for definiteness an Ohmic form,
$J(\omega)=2\alpha\omega \exp(-\omega/\omega_c)$,
with the cut-off frequency $\omega_c$ and the coupling strength $\alpha$
\cite{WeissBuch,SpiBoLe1987}.
The Landau-Zener probability for the dissipative problem
$P=\spur{|\uparrow\rangle\langle\uparrow|U_\infty|\downarrow\rangle\langle\downarrow|U^{-1}
_\infty}$
with the time evolution operator $U_\infty={\cal T}
\exp[-i\int_{-\infty}^\infty dt H(t)]$ as well as the excitation survival
probability
$Q=\spur{|\downarrow\rangle\langle\downarrow|U_\infty|\uparrow\rangle\langle\uparrow|U^{-1}
_\infty}$
are now functions not only of $\Delta_0$ and $v$, but also of $\alpha, \omega_c$
and the temperature $T$.
In the following, we use $\omega_c=10\Delta_0$ unless specified otherwise.
\section{Time dependent occupation probabilities}
In this section, we explicitly consider the time-dependence of the
population of the diabatic state $|\uparrow\rangle$ at any instant of time $t$, having
started in $|\downarrow \rangle$ with probability one. This
is given by
\be
P(t)=\spur{|\uparrow\rangle\langle\uparrow|U_t|\downarrow\rangle\langle\downarrow|U^{-1}_t}
\ee
with the time evolution operator $U_t={\cal T}
\exp[-i\int_{-\infty}^t dt' H(t')]$. Note that at asymptotic times, this quantity
coincides with the standard Landau-Zener probability $P=P(t\to \infty)$.
Note furthermore that $P(t)$ is the tunneling probability at any instant of time.
Here, the dynamics of the quantum two-level system is described
in terms of the time evolution of the reduced density matrix $\rho(t)={\rm Tr}_{B}
\{U_t \rho_0 U^{-1}_t\}$, starting
from a total initial density matrix $\rho_0=\rho_S \otimes e^{-H_B/T}/Z_B$, where
$\rho_S$ is the density operator of the quantum two-level system and $H_B$ denotes the
bath Hamiltonian; system and bath are assumed to be decoupled at $t_0=-\infty$ (in practice, $t_0$ is set to zero),
with the coupling switched on instantly afterwards. Moreover, $Z_B=\spur{e^{-H_B/T}}$ with $k_B=1$.
$\rho(t)$ is obtained after
tracing over the bath degrees of freedom. We calculate $\rho (t)$ using the numerically
exact quasiadiabatic propagator path-integral \cite{QUAPI1,QUAPI2,QUAPI3,QUAPI4,QUAPI5} scheme.
For details of the iterative technique, we refer to previous works \cite{QUAPI1,QUAPI2,QUAPI3,QUAPI4,QUAPI5}.
In brief, the algorithm is based on a symmetric Trotter splitting
of the short-time propagator ${\cal K}(t_{k + 1}, t_k)$ for the full Hamiltonian
into a part depending on the system Hamiltonian and a part involving the
bath and the coupling parts. The short time propagator describes time evolution
over a Trotter time slice $\Delta t$. This splitting is of course exact in the limit
$\Delta t \to 0$ but introduces a finite Trotter error to the splitting,
which has to be eliminated by choosing $\Delta t$ small enough such that convergence
is achieved. On the other hand, the bath degrees of freedom generate correlations
that are non-local in time. For any finite temperature, these correlations
decay exponentially fast at asymptotic times, thereby defining the associated
memory time scale. QUAPI now defines an object called the reduced
density tensor, which lives on this memory time window and establishes an iteration scheme
in order to extract the time evolution of this object. Within the memory time window,
all correlations are included exactly over the finite memory time
$\tau_{\rm mem} = K \Delta t$ and can safely be neglected for times beyond $\tau_{\rm mem}$.
Then, the memory parameter $K$ has to be increased, until convergence is found.
Typical values, for which convergence can be achieved for our problem,
are $K\le 12$ and a reasonable choice is $\Delta t \sim (0.1 - 0.2)/\Delta_0$.
The two strategies to achieve convergence, namely decreasing $\Delta t$ and at
the same time increasing the considered memory time $\tau_{\rm mem} = K \Delta t$,
both increase the required $K$. This results in severe demands, since the
needed computer power grows exponentially with $K$. Nevertheless, convergent results can
be obtained in a wide range of parameters.
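Schematically, this convergence protocol can be summarized as follows; the Python sketch is ours and purely illustrative, \texttt{quapi\_populations} is a placeholder for an actual QUAPI propagation (not part of our production code), and the scanned values mirror the typical parameters quoted above:
\begin{verbatim}
# Schematic convergence scan in the Trotter slice dt and memory length K.
def quapi_populations(dt, K, params):
    raise NotImplementedError("placeholder for an iterative QUAPI propagation")

def converged(params, dts=(0.2, 0.15, 0.1), Ks=(6, 8, 10, 12), tol=1e-3):
    """Refine dt downwards and K upwards until the asymptotic population
    changes by less than tol between successive refinements."""
    prev = None
    for dt in dts:            # reduce the Trotter error ...
        for K in Ks:          # ... while extending tau_mem = K*dt
            P = quapi_populations(dt, K, params)
            if prev is not None and abs(P - prev) < tol:
                return P, dt, K
            prev = P
    raise RuntimeError("no convergence in the scanned (dt, K) window")
\end{verbatim}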
Fig.\ \ref{figH2} shows the population $P(t)$ vs.\ time for different temperatures for a fixed sweep
velocity $v=0.02 \Delta_0^2$ in the weak coupling regime $\alpha=0.0016$. We start in the
infinite past with $P(t=-\infty)=0$ as the state $|\downarrow\rangle$ is fully populated, see
Fig.\ \ref{figH1}. The Landau-Zener sweep reaches the minimal gap at $t=2500/\Delta_0$. Approaching this
point, $P(t)$ starts to increase. For larger temperatures, this increase is less pronounced than
for lower temperatures. This is seen more explicitly in Fig.\ \ref{figH3}, which displays the corresponding
derivative $dP(t)/dt$. Note that this quantity may be viewed as a tunneling rate for the
dissipative Landau-Zener transition. It is naturally more pronounced at low temperatures.
Upon increasing the speed of the Landau-Zener sweep, the
population $P(t)$ of the $|\uparrow\rangle$ state shows some transient oscillatory
dynamics before a stationary value is reached (results not shown). We would like
to point out that the iterative QUAPI approach by construction allows access to
the full time-dependent Landau-Zener transition. In the following, we focus on the
stationary populations at asymptotic times without discussing how the stationary state
is reached in all studied parameter configurations.
\section{Landau-Zener and excitation survival probability}
In this section, we turn to the asymptotic populations and study the Landau-Zener and the excitation
survival probabilities.
\subsection{Landau-Zener probability at weak coupling}
Figure \ref{figH5} shows the Landau-Zener probability $P$ versus sweep velocity
$v$ for different temperatures and for weak coupling, $\alpha=0.0016$. Clearly
the regime with large velocities, $v\gg \Delta_0^{2}$ is distinguishable from a
regime with small velocities (adiabatic regime) $v\lesssim\Delta_0^{2}$.
At low temperatures one expects no sizable influence of the bath
\cite{LZKaI1984,LZKaII1984,LZAo1991,LZKa1998} which should vanish totally at
$T=0$ \cite{LZWu2006,LZSa2007}. This is confirmed by our numerical results.
For small $v$ and low temperatures, $T\lesssim\Delta_0$, we find $P\sim 1$, and
thus unmodified compared to the pure quantum mechanical Landau-Zener result
$P_0$ (solid line).
For increasing velocity, the Landau-Zener probability decreases rapidly and there is
hardly any temperature effect in the considered temperature range, see Fig.\
\ref{figH5}.
This observation agrees with results by Kayanuma and Nakayama who determined the
Landau-Zener probability in the limit of high temperatures,
$P_{SD}=\halb(1-\exp(-\pi\Delta_0^2/v))$ \cite{LZKa1998,LZKaI1984,LZKaII1984}
(dot-dashed line),
assuming dominance of phase decoherence over dissipation.
For large velocities, $P_{SD}$ decreases as the pure Landau-Zener probability,
$P_{SD}\sim P_{0}\sim \halb\pi\Delta_0^2/v$, and accordingly no sizable
temperature effect is expected.
In the experimentally most relevant parameter range of intermediate to high
temperatures, $T>\Delta_0$ and small sweep velocities, $v<\Delta_0^{2}$, we find
(as reported before~\cite{LZNa2009}) a nontrivial and unexpected behavior of
$P$.
Besides an overall decrease of $P$ with increasing temperature, we find (at
fixed $T$) for decreasing velocity first a maximum of $P$ at $v_{\rm
max}\lesssim \Delta_0^{2}$, then a minimum at $v_{\rm min}$ and finally again an
increase.
For decreasing temperatures, $v_{\rm min}$ decreases, and $P(v_{\rm min})$
increases. For high temperatures $T\ge 500\Delta_0$,
our data follow nicely the predictions by Kayanuma and
Nakayama~\cite{LZKa1998,LZKaI1984,LZKaII1984}.
This nonmonotonic behavior cannot be described in terms of perturbative
approaches.
Ao and Rammer derived temperature-dependent corrections to the Landau-Zener
probability for low temperatures \cite{LZAo1991}. They report an onset
temperature $T_o\propto 1/v$,
above which temperature affects $P$. Thus, at larger velocities, the decrease of
$P$ due to increasing temperature starts at higher temperatures. This is in line
with our findings of the maximum in $P(v)$, but it does not account for the
minimum and the subsequent increase of $P$ for smaller $v$. In the limit of high
temperatures, $P_{SD}\rightarrow 1/2$ for $v<\Delta_0^2$. Thus, $P_{SD}$
captures the decrease of $P$ in Fig.~\ref{figH5} with increasing temperature,
but it does not account for the nonmonotonic behavior for decreasing $v$.
In Fig.~\ref{figH7}, we compare our data (triangles for $P$ (top) and circles for $Q$ (bottom))
for $T=4 \Delta_0$ and $\alpha=0.0016$ with the result of Ao and Rammer
(dotted lines). In fact, the latter describes
qualitatively the maximum both for $P$ and $Q$, but fails quantitatively.
Similarly, we were not able to match either Eq.\ (54) of Ref.\ \cite{LZKa1998}
(dashed line in Fig.~\ref{figH7}) or Eq.\ (40) of Ref.\ \cite{LZPo2007}
(dash-dash-dotted line in Fig.~\ref{figH7}) with our exact data.
Both describe a
reduction of the Landau-Zener probability with increasing temperature but
neither the maximum nor the subsequent minimum
in the behavior versus $v$ is predicted correctly.
\subsection{Physical picture}
\label{physpic}
The observed behavior can be understood within a simple physical picture,
realizing that the main effect of the bath is to induce relaxation, whose time scale can compete
with the time scale set by the Landau-Zener sweep.
Since initially the system is in the ground state, only absorption can occur, provided
an excitation with energy $\Delta_t=\sqrt{\Delta_0^2+(vt)^2}$ exists in the
bath spectrum and is thermally populated. Since $\Delta_t$ is (slowly)
changing with time, relaxation can only occur during a time window $|t|\le\halb
t_r$ with the resonance time
\be \label{restime}
t_r \,=\, \frac{2}{v}\sqrt{\Delta_c^2-\Delta_0^2} \, ,
\ee
in which the energy splitting fulfills the condition
$\Delta_{t}\le\Delta_c=\mbox{min}\{T,\omega_c\}$ \cite{note1} as illustrated in
Fig.~\ref{figH1}.
In order for relaxation processes to contribute, the (so far unknown) relaxation
time $\tau_r$ must be shorter than $t_r$.
For large sweep velocities, $t_r\ll\tau_r$, relaxation is negligible and no
influence of the bath is found as expected. In the opposite limit,
$t_r\gg\tau_r$, relaxation will dominate and the two levels will at any time
adjust their occupation to the momentary $\Delta_t$ and $T$. Once
$\Delta_t\ge\Delta_c$, relaxation stops since no spectral weight of the bath
modes is available and the corresponding
``critical'' Landau-Zener probability can be estimated as
\be \label{pc}
P_c \,=\, \halb \left[ 1+\tanh\left(\frac{\Delta_c}{2 T} \right) \right] .
\ee
For small but finite $v$, equilibration is retarded. The two levels need
the finite relaxation time to adjust their occupation to the momentary $\Delta_t$
and $T$. In this time, however, $\Delta_t$ might
exceed $\Delta_c$ and then relaxation is not possible anymore. Thus equilibrium is
reached for an energy splitting in the past $\Delta_{t'}<\Delta_{t_c}$ with $t'<t_c$
and $t_c$ the time when $\Delta_{t_c}=\Delta_c$.
Accordingly, $P$ increases with decreasing $v$ since $\Delta_t$ changes more slowly and
$P(v\rightarrow0)\le
P_c$, as observed in Fig.~\ref{figH5}. In Fig.~\ref{figH15} (main), we plot the
Landau-Zener probability $P(v=0.005\Delta_0^2)$ (blue diamonds) for the smallest investigated sweep velocity
and compare it with $P_c$ of Eq.~(\ref{pc}) (blue full line).
Relaxation will maximally suppress the Landau-Zener transition when both time scales
coincide, leading to a minimum of $P$ at $v_{\rm min}$ given by the condition
\be\label{vmin} t_r(v_{\rm min})=\tau_r(T,\alpha,\omega_c).
\ee
Within the resonance time window, $|t|\le\halb t_r$, only a single phonon absorption is likely.
We thus can assume equilibration associated with a time-averaged energy splitting
$\overline{\Delta}_r=(2/t_r)\int_0^{t_r/2}dt\Delta_t\simeq\mbox{max}\{\Delta_c/2
,\Delta_0\}$
and a resulting Landau-Zener probability
\be\label{pvmin} P_{\rm min} \,=\, \halb
\left[1+\tanh\left(\frac{\overline{\Delta}_r}{2T}\right)\right] = P(v_{\rm min}).
\ee
Subsequently, there is a maximum for a sweep velocity between $v_{\rm
min}<v_{\rm max}<\Delta^2_0$.
Fig. \ref{figH15} (main) shows the QUAPI data $P(v_{\rm min})$ (black circles) vs. $T$
together with $P_{\rm min}$ given in Eq.~(\ref{pvmin}) (black dashed
line).
For a fixed time and for weak coupling, we can estimate the decay rate out of
the ground state using Golden Rule, $\tau^{-1}(t)=\pi\alpha(\Delta_0^2/\Delta_t)
\exp(-\Delta_t/\omega_c) n(\Delta_t)$ with the Bose factor
$n(\Delta_t)=[\exp(\Delta_t/T)-1]^{-1}$.
For the time-dependent Landau-Zener problem at slow sweep velocities, we may
assume that the bath sees a time-averaged two-level system and thus estimate
the relaxation rate $\tau_r^{-1}$ by using the time-averaged energy splitting
$\overline{\Delta}_r$, i.e.,
\be\label{rate} \tau_r^{-1} \,\simeq\, \pi\alpha
\frac{\Delta^2_0}{\overline{\Delta}_r}\, \exp(-\overline{\Delta}_r/\omega_c)
n(\overline{\Delta}_r)\, .
\ee
For increasing temperature, relaxation becomes faster, and, accordingly, the
condition for $v_{\rm min}$, Eq.~(\ref{vmin}), leading to
\be\label{vmin2} v_{\rm min} \,=\, 2\tau_r^{-1} \sqrt{\Delta_c^2-\Delta_0^2}
\ee
is fulfilled for larger velocities. Qualitatively this picture describes the
temperature dependence of $v_{\rm min}$ observed in the inset of
Fig.~\ref{figH15}. We plot $v_{\rm min}$ (red squares) taken from the data of
Fig.~\ref{figH5} and $v_{\rm min}$ according to Eq.~(\ref{vmin2}).
In detail, given the piecewise definition of the parameters $\Delta_c$ and
$\overline{\Delta}_r$, the agreement in Fig.~\ref{figH15} is rather satisfactory,
keeping in mind that there are no adjustable parameters involved.
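The estimates of this subsection are straightforward to evaluate numerically. The following minimal Python sketch is ours and purely illustrative; it uses $\Delta_0=1$ as the unit of energy, the parameters $\alpha=0.0016$ and $\omega_c=10\Delta_0$ employed in the figures, and implements Eqs.~(\ref{pc}), (\ref{pvmin}), (\ref{rate}) and (\ref{vmin2}):
\begin{verbatim}
# Sketch: evaluate P_c (Eq. (pc)), P_min (Eq. (pvmin)), the rate (Eq. (rate))
# and v_min (Eq. (vmin2)) for Delta_0 = 1, alpha = 0.0016, omega_c = 10.
import numpy as np

Delta0, alpha, wc = 1.0, 0.0016, 10.0

def n_bose(E, T):
    return 1.0/np.expm1(E/T)

def estimates(T):
    Dc = min(T, wc)                  # Delta_c = min{T, omega_c}; > Delta_0 here
    Dbar = max(Dc/2, Delta0)         # time-averaged splitting
    P_c = 0.5*(1 + np.tanh(Dc/(2*T)))
    P_min = 0.5*(1 + np.tanh(Dbar/(2*T)))
    rate = np.pi*alpha*Delta0**2/Dbar*np.exp(-Dbar/wc)*n_bose(Dbar, T)
    v_min = 2*rate*np.sqrt(Dc**2 - Delta0**2)
    return P_c, P_min, v_min

for T in (2.0, 4.0, 10.0, 20.0):     # temperatures above Delta_0
    print(T, estimates(T))           # quantities shown in the figure above
\end{verbatim}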
\subsection{Excitation survival probability at weak coupling}
Figure~\ref{figH6} shows the excitation survival probability $Q$
versus sweep velocity $v$ for different temperatures and
for weak coupling, $\alpha=0.0016$. As for
the Landau-Zener probability, the regime with large velocities, $v\gg
\Delta_0^{2}$ is distinguishable from a regime with small velocities (adiabatic
regime) $v\lesssim\Delta_0^{2}$.
For large sweep velocities, the excitation survival probability decreases
rapidly. There is no sizable difference to the undamped case
for all considered temperatures, as expected both in the
high-temperature limit, $Q_{SD}(v\gg \Delta_0^{2})$, and in the low-temperature
limit for weak coupling, $Q_0(v\gg \Delta_0^{2})$, and strong coupling,
$Q_{\rm sc}(v\gg \Delta_0^{2})$, since
\bee Q_{\rm sc}(v\gg \Delta_0^{2}) &\simeq& Q_0(v\gg \Delta_0^{2}) \,\simeq\,
Q_{SD}(v\gg \Delta_0^{2}) \nonumber\\
&\simeq& P_{0}(v\gg \Delta_0^{2})\,\simeq\, \halb\pi\frac{\Delta_0^2}{v}\,.
\eee
At low temperatures, $T\lesssim\Delta_0$, no sizable influence of the bath is
found as the data in Fig.~\ref{figH6} for $T=0.01\Delta_0$ almost coincide with
the data for $T=0.5\Delta_0$ (and even more so for data at $T=0.05\Delta_0$ and
$T=0.1\Delta_0$ not shown in the figure). However, for all temperatures we find
the excitation survival probability to peak for sweep velocities between $(0.1 -
1)\Delta_0^2$ and to decrease strongly for lower sweep velocities, as predicted by
\cite{LZAo1991,LZKa1998}. For small sweep velocities, relaxation of the excited
state takes place and thus reduces the excitation survival probability as soon
as the relaxation time becomes comparable to or shorter than the resonance time.
Due to spontaneous emission, relaxation is present at all temperatures.
Furthermore, within this picture we expect the relaxation time to shorten
appreciably only at temperatures $T\ge\Delta_0$, due to emission induced by
thermally excited phonons, and thus a temperature effect should be visible in
the excitation survival probability only for $T\ge\Delta_0$. This is confirmed
by the results in Fig.~\ref{figH6}. Consistent with the simple physical picture
introduced in the last section, the sweep
velocity of the sharp reduction of the excitation survival probability coincides
roughly with the minimum in Fig.~\ref{figH5} for $T>\Delta_0$. It likewise
shifts to larger $v$ for increasing temperatures.
Taking the sweep velocity $v_a$, defined by $Q(v_a)=\halb$ at fixed temperature, as
a measure for the {\it sweep velocity of the sharp reduction}, we expect $v_a$ to
be the sweep velocity for which the resonance time (Eq.~(\ref{restime}))
coincides with the relaxation time, up to a prefactor $f_a$ of order one which
is due to the arbitrary definition of $v_a$, i.e.,
\be \label{form2}
f_a t_r(v_a) \,=\, \tau_{d,r} \, .
\ee
The inverse decay time of the excited state is now given as
\be\label{downrate} \tau_{d,r}^{-1} \,\simeq\, \pi\alpha
\frac{\Delta^2_0}{\overline{\Delta}_r}\, \exp(-\overline{\Delta}_r/\omega_c)
\left[ 1 + n(\overline{\Delta}_r)\right] \, .
\ee
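To illustrate how Eqs.~(\ref{form2}) and (\ref{downrate}) fix $v_a$, the following sketch (Python) evaluates the decay rate and solves the matching condition. Since Eq.~(\ref{restime}) is not repeated here, we assume the simple form $t_r(v)=2\Delta_c/v$ for the resonance time as a placeholder to be replaced by the actual expression; all parameter values are illustrative:
\begin{verbatim}
import numpy as np

# Energies in units of Delta_0; parameter values are illustrative only.
Delta0, alpha, omega_c, f_a = 1.0, 0.0016, 10.0, 3.04
Delta_c = omega_c                 # no thermal bottleneck for the decay
Delta_bar = Delta_c / 2           # averaged splitting (see text)

def n_bose(w, T):
    """Thermal phonon occupation number."""
    return 1.0 / np.expm1(w / T)

def decay_rate(T):
    """Inverse decay time of the excited state, Eq. (downrate)."""
    return (np.pi * alpha * Delta0**2 / Delta_bar
            * np.exp(-Delta_bar / omega_c) * (1 + n_bose(Delta_bar, T)))

# With the assumed t_r(v) = 2 Delta_c / v, the condition
# f_a t_r(v_a) = tau_{d,r} yields v_a = 2 f_a Delta_c / tau_{d,r}.
for T in [1.0, 2.0, 4.0, 8.0]:
    v_a = 2 * f_a * Delta_c * decay_rate(T)
    print(f"T = {T:4.1f} Delta_0:  v_a ~ {v_a:.3g} Delta_0^2")
\end{verbatim}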
Since thermal occupation is not a limiting factor for the decay of the excited state,
we have $\Delta_c=\omega_c$ and thus $\overline{\Delta}_r=\omega_c/2$.
Fig.~\ref{figHB2}
plots $v_a$ (black squares) versus temperature, which nicely exhibits the temperature
dependence $\sim [1 + n(\overline{\Delta}_r)]$. The black line marks the result
of the best fit of Eq.~(\ref{form2}) with the
fitting parameter $f_a\simeq 3.04$, confirming again the good
agreement between our numerical data and our simple physical picture.
In the high temperature limit, $Q\rightarrow P_{SD}$~\cite{LZAo1991,LZKa1998}.
In our data, we observe that the peak in $Q$ decreases with increasing
temperature and $Q$ at the smallest investigated sweep velocity increases with
increasing temperature. Both features are in agreement with the expectation in
the high temperature limit. For the smallest investigated sweep velocity,
the resonance time should exceed the relaxation time by far and thus we expect within our simple
picture that the system fully relaxes towards the momentary equilibrium until
the energy splitting exceeds $\Delta_c=\omega_c$. This yields the condition
\be \label{qc}
Q_c \,=\, \halb \left[ 1-\tanh\left(\frac{\Delta_c}{2 T} \right) \right] .
\ee
The inset of
Fig.~\ref{figHB2} compares $Q_c$ with $Q(v=0.005\Delta_0^2)$ extracted from
Fig.~\ref{figH6} and again shows satisfactory agreement.
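For reference, evaluating the condition \eqref{qc} requires only a single line per temperature; a minimal sketch (Python, with $\Delta_c=\omega_c$ set to an illustrative value in units of $\Delta_0$):
\begin{verbatim}
import numpy as np

Delta_c = 10.0                         # = omega_c, in units of Delta_0
for T in [4.0, 8.0, 16.0, 32.0]:
    Q_c = 0.5 * (1 - np.tanh(Delta_c / (2 * T)))
    print(f"T = {T:5.1f} Delta_0:  Q_c = {Q_c:.3f}")
\end{verbatim}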
The excitation survival probability is numerically much harder to determine
than the
Landau-Zener probability. For example, at the sweep velocity
$v=0.005\Delta_0^2$ our data did not converge for temperatures below
$T=4\Delta_0$ and are hence not shown. According to our physical picture,
the main effects of the bath occur during the resonance time window. Outside of it,
only multi-phonon processes can occur, which are
strongly suppressed due to the weak coupling. This directly affects our numerical
scheme which is thus not sensitive to the very long times needed to perform a Landau-Zener
experiment (from $t=-\infty$ to $t=+\infty$). In contrast,
for the Landau-Zener probability
all relaxation effects outside the resonance time window are additionally suppressed
by thermal occupation numbers of the phonons. This effect does not influence the
excitation survival probability due to spontaneous decay. These multi-phonon
processes outside the resonance time window, however, require long
memory in the bath and may thus be critical for the convergence of our numerics. We should
emphasize that our simple physical picture even explains why a considerably larger
numerical effort is needed to obtain
converged results for the excitation survival probability.
In summary, at weak coupling our simple physical picture explains both the Landau-Zener and the excitation survival probability qualitatively in full and quantitatively to satisfactory accuracy. Both are unmodified by the dissipative effects of the environment for large velocities, $v\gg\Delta_0^{2}$. For slow velocities, $v\lesssim\Delta_0^{2}$, both are strongly influenced by relaxation. Here a striking difference emerges: the excited state can relax by spontaneous emission, whereas the ground state can only absorb a phonon when one is thermally occupied. Thus the Landau-Zener probability at vanishing temperature is unmodified by the presence of the bath, whereas the excitation survival probability is strongly altered, i.e., it decreases rapidly for sweep velocities smaller than $v_a$, which is determined by the competition of driving and relaxation.
\subsection{Medium to strong couplings}
Surprisingly, our simple picture still holds qualitatively for stronger damping,
when the Golden Rule is not expected to hold.
Increasing $\alpha$ enhances relaxation, and $\tau_r$ and $\tau_{d,r}$ decrease.
Thus, the sweep velocity of the minimum in the Landau-Zener probability as well
as $v_a$ in the excitation survival probability increase for larger coupling
strengths $\alpha$ (at fixed temperature).
This is confirmed by Fig.~\ref{figH8} where $P$ is shown for
the same temperatures as in Fig.~\ref{figH5}, but for a larger value of
$\alpha=0.02$. Fig.~\ref{figH9} shows the corresponding excitation
survival probability for $\alpha=0.02$.
The minimum of $P$ and $v_a$ for $Q$ are still observable for $\alpha=0.02$ for
temperatures $\Delta_0\lesssim T\lesssim 4\Delta_0$.
At higher temperatures, only a shoulder remains for $P$ in Fig.~\ref{figH8} and
$Q$ in Fig.~\ref{figH9} does not exceed $1/2$ anymore.
Results for $P$ and $Q$ for even stronger coupling $\alpha=0.2$ are shown in
Fig.~\ref{figH10}. For $\alpha=0.2$, the local extrema in $P$ disappear, but a monotonic growth of
$P$ with decreasing sweep velocity is still in line with our simple picture. For
coupling strengths $\alpha\ge1/\sqrt{2}$ no bath influence is expected
anymore~\cite{LZAo1991} for the Landau-Zener probability, consistent with our
data. Therefore, we focus on $\alpha\le0.2$. At the same time the excitation
survival probability is expected to follow $Q_{\rm sc}=P_0(1-P_0)$ at strong
coupling and low temperatures~\cite{LZKa1998}. As seen in Fig.~\ref{figH10}, $Q$
indeed approaches $Q_{\rm sc}$ with decreasing temperature for $\alpha=0.2$.
This striking behavior of the excitation survival probability at strong
coupling opens a road to determine the coupling strengths in experimental
systems. Under weak coupling conditions, the minimum in the Landau-Zener probability
allows one to obtain $\alpha$. For strong coupling, the minimum vanishes but the
excitation survival probability still shows a clear peak. Especially the
temperature dependence of the peak clearly separates strong from weak coupling.
For strong coupling, the peak height increases with temperature while for
weak coupling, the peak height decreases with
increasing temperature.
\subsection{Dependence of $P$ and $Q$ on the system-bath coupling}
Next, we investigate the dependence of $P$ and $Q$ on the system-bath coupling
in the regime of weak coupling.
Fig.\ \ref{figH11} shows the Landau-Zener probability for different $\alpha$ at
a fixed temperature $T=4 \Delta_0$. For increasing coupling, $v_{\rm min}$
shifts to larger velocities. In fact, $v_{\rm min}$ depends linearly on
$\alpha$, see inset of Fig.\ \ref{figH11}.
The linear dependence is also predicted by our model, i.e., $v_{\rm
min}=15.8\alpha\Delta_0^2$,
in very good agreement with the fit $v_{\rm min}=17.54\alpha\Delta_0^2$.
The decreasing maximum $P(v_{\rm max})$ results from the shifting minimum.
Another remarkable fact is that the Landau-Zener probability $P(v_{\rm min})$ at
the minimum velocity is independent of $\alpha$, as predicted by our
physical picture, see Eq.~(\ref{pvmin}). We estimate the averaged splitting
$\overline{\Delta}_r=T/2=2\Delta_0$, in fair agreement with
$\overline{\Delta}_r=2.476\Delta_0$, obtained with $P(v_{\rm min})=\halb
[1+\tanh(\overline{\Delta}_r/2T)]$ from the data in Fig.\ \ref{figH11}. This
agreement strongly supports our physical picture that relaxation dominates in
the intermediate temperature range for small sweep velocities.
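Inverting $P(v_{\rm min})=\halb[1+\tanh(\overline{\Delta}_r/2T)]$ for the averaged splitting is a one-liner; for illustration (Python, with $P(v_{\rm min})$ set to the value implied by the quoted fit):
\begin{verbatim}
import numpy as np

T = 4.0                  # temperature in units of Delta_0
P_min = 0.65             # illustrative value of P(v_min) read off the data
Delta_bar = 2 * T * np.arctanh(2 * P_min - 1)
print(f"averaged splitting: {Delta_bar:.3f} Delta_0")   # ~ 2.48 Delta_0
\end{verbatim}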
The excitation survival probability $Q$ is shown in Fig.~\ref{figH12} for
different $\alpha$ at a fixed temperature $T=4 \Delta_0$. For increasing
coupling, $v_a$ shifts to larger velocities. As $v_{\rm min}$ for the
Landau-Zener probability before, $v_a$ for the excitation survival probability now
depends linearly on $\alpha$, see inset of Fig.~\ref{figH12}.
Again, the linear dependence is predicted by our model, i.e.,
$v_a=32.3\alpha\Delta_0^2$ (taking the previously determined factor $f_a=3.04$
into account),
in very good agreement with the fit $v_a=37.4\alpha\Delta_0^2$.
The decreasing peak height in $Q$ results from the shifting $v_a$.
\subsection{Dependence on cut-off frequency}
There is an additional time scale provided by the bath dynamics, which
determines how fast the bath relaxes to its own thermal equilibrium
due to the coupling to the system. It is given by the reorganization energy
\cite{WeissBuch} and depends on the cut-off frequency $\omega_c$
of the bath spectrum. In turn, the relaxation rate
(\ref{rate}) also depends on $\omega_c$ and relaxation is strongly suppressed when
$\Delta_t>\omega_c$.
Fig.\ \ref{figH13} shows $P$ and Fig.\ \ref{figH14} shows $Q$ for different $\omega_c$,
ranging down to $\omega_c =0.5\,\Delta_0$. Such small values of the cut-off
frequency describe slow bath fluctuations, a situation, for instance,
typical for the biomolecular exciton dynamics in a protein-solvent
environment \cite{BioTh2008}.
With decreasing cut-off frequencies, the minimum in $P(v)$ as well as $v_a$ in $Q$
shift to smaller $v$, as qualitatively expected from Eqs.~(\ref{rate}) and (\ref{downrate}), respectively.
We note that this is rather
surprising since a small $\omega_c$ also induces strong non-Markovian effects.
At the same time, the Landau-Zener probability $P(v_{\rm min})$ decreases.
With decreasing $\omega_c$, the resonance time $t_r$ and the averaged energy
splitting $\overline{\Delta}_r$ also decrease. For cut-off frequencies
$T\le\omega_c$, we expect $v_{\rm min}=0.19\exp(-2\Delta_0/\omega_c)$, in fair
agreement with the fit $v_{\rm min}=0.23\exp(-2\Delta_0/\omega_c)$. This is
shown in the upper inset in Fig.\ \ref{figH13}. The lower inset shows $P(v_{\rm
min})$ versus $\omega_c$. The solid lines are predictions from our model and are
in fair agreement with the data. A similar analysis for $v_a$ was not possible since for
small $\omega_c$ the excitation survival probability $Q(v)$ did not
sharply fall below $\halb$ and thus $v_a$ could not be determined unambiguously.
\section{Summary}
We have investigated the dissipative Landau-Zener problem by means of the numerically
exact quasiadiabatic propagator path-integral \cite{QUAPI1,QUAPI2,QUAPI3}
approach for an Ohmic bath.
Thereby we discussed the Landau-Zener probability (to end up in the ground state
when starting in the ground state) as well as the excitation survival probability (to end up in the excited state when starting in the excited state).
In the limits of large and small sweep velocities and low temperatures, our results coincide with
analytical predictions \cite{LZAo1991,LZKa1998,LZWu2006,LZZu2008}.
In the intermediate regime, when the
sweep velocities are comparable to the minimal Landau-Zener gap and the temperatures are intermediate, we have identified novel
non-monotonic dependencies of the Landau-Zener probabilities
on the sweep velocity, temperature, system-bath coupling strength
and cut-off frequency. This parameter range is clearly not accessible
by perturbative means.
The observed behavior can be understood in rather simple physical terms as a nontrivial
competition between relaxation and Landau-Zener driving.
The main difference between the Landau-Zener and the excitation survival probability results from the simple fact that the excited state can always decay via spontaneous emission, while the ground state needs a thermally excited phonon in order to become excited. Thus, even at vanishing temperature, the excitation survival probability decreases strongly for small enough driving speed, whereas the Landau-Zener probability does so only at high temperatures.
As advanced experimental set-ups nowadays allow for a rather
comprehensive control of the parameters, these
novel features should be accessible with available experimental techniques.
We thank V. Peano and S. Ludwig for discussions and acknowledge support by the
Excellence Initiative of the German Federal and State Governments.
Complex engineered systems are known to exhibit unintended states in their collective dynamics that often disrupt their function \cite{Strogatz2005, buldyrev2010catastrophic, Havlin2012, Helbing2013, Schroder2018}. In complex mobility systems, examples include the emergence of congestion \cite{loder2019understanding, Helbing2000b}, anomalous random walks in human travel patterns \cite{brockmann2006scaling} and cascading failures of mobility networks \cite{bashan2013extreme, li2015percolation, zeng2019switch}. As urban mobility becomes increasingly self-organized and digitized, mobility services increasingly employ dynamic pricing schemes \cite{dynamicPricingGeneral, AmazonDynamicPricing, schafer2015decentral, LyftPricing2019, UberSurge2019}. Dynamic pricing in general serves two main purposes (Fig.~\ref{fig:Fig1}a). First, it adjusts the price of a product or service to compensate for changes in its intrinsic base cost.
Second, it creates incentives for all market participants to equilibrate demand-supply imbalances by increasing the price if demand exceeds supply and vice versa.
A higher price both imposes higher costs to customers incentivizing them to decrease their demand and, at the same time, offers higher profit for identical service to suppliers, in turn motivating them to increase their supply.
However, recent reports on on-demand ride-hailing \cite{ABC7News2019a, ABC7News2019b, Mohlmann2017} indicate that dynamic pricing may have the opposite effect and instead \emph{cause} demand-supply imbalances.
Here we quantitatively demonstrate the existence of these imbalances by comparing price time series and demand estimates for ride-hailing services.
In a game theoretic analysis we reveal the incentive structure for drivers to induce anomalous supply shortages as a generic feature of dynamic pricing.
This observation suggests that similar dynamics should emerge independent of the location or industry.
Comparing price time series for 137 locations in 59 urban areas across six continents we indeed find price dynamics reflecting anomalous supply shortages in several cities around the world.
\newpage
\section*{Results}
Dynamic pricing schemes in mobility services are commonly applied by on-demand mobility service providers, such as Lyft and Uber \cite{LyftPricing2019, UberSurge2019}.
For Uber, the price of the service (the total fare for a ride) decomposes into the same two parts described above \cite{UberSurge2019}, base cost $p_\mathrm{base}$ and surge fee $p_\mathrm{surge}$,
\begin{equation}
p = p_\mathrm{base} + p_\mathrm{surge}(D,S) \,,
\end{equation}
as illustrated in Figure~\ref{fig:Fig1}b for trips from Reagan National Airport (DCA) to Union Station in Washington, D.C. (see Methods and Supplementary Material for more information).
\newpage
\begin{figure*}
\centering
\includegraphics{FIG1_dynamicPricing_v15.pdf}
\caption{\textbf{Dynamic pricing in on-demand mobility}. \textbf{a}, Schematic illustration of dynamic pricing. The total price separates into the base cost of the product or service and a supply and demand dependent surge fee. Three fundamental mechanisms underlying price changes are (i) changes of the base cost, (ii) demand exceeding current supply levels and (iii) supply shortage compared to current demand. Price adaptations (ii) and (iii) are intended to drive the system back to a supply-demand equilibrium. \textbf{b}, The total fare for Uber ride-hailing services similarly decomposes into base cost and surge fee.
Base cost depend on trip duration and reflect current traffic conditions while surge fees result from supply-demand imbalances. Both effects are illustrated here for trips between Reagan National Airport (DCA) and Washington Union Station in Washington, D.C., USA.
During commuting hours (grey) base cost increase because of longer expected trip duration
during rush-hour.
The slower speed effectively reduces the supply of available drivers as they spend more time in traffic and naturally causes accompanying surge fees. During late evening and nighttime, the total fare exhibits repeated price surges triggered by supply-demand imbalances (dashed box) not reflected in the demand dynamics (passenger capacity of airplanes landing in DCA). \textbf{c}, Supporting the previous observation, no apparent correlation exists between the surge fee and the demand dynamics during the evening hours (20:00 - 02:00), even at five and 38 minute delays, the two local maxima of the correlation function (see Supplementary Material for a more detailed analysis).
}
\label{fig:Fig1}
\end{figure*}
\clearpage
The first component (base cost) are regular fees for a ride
\begin{equation}
p_\mathrm{base} = p_0 + p_t \, \Delta t + p_l \, \Delta l \,,
\end{equation}
including one-off fees $p_0$ as well as trip fees $p_t$ and $p_l$, proportional to the duration $\Delta t$ and distance $\Delta l$ of the trip, similar to the fare for a typical taxi cab.
These base cost increase, for example, during times of heavy traffic, such as morning and evening commuting hours (grey shading in Fig.~\ref{fig:Fig1}b) when the trip duration $\Delta t$ increases due to congestion.
The second component (surge fee $p_\mathrm{surge}$) implements Uber's \emph{surge pricing} algorithm \cite{UberSurge2019, Garg2019} and reflects the time evolution of supply-demand imbalances.
The surge fee
increases due to persistent supply-demand imbalance during commuting hours.
Longer trip duration means that drivers spend more time in traffic serving the same number of customers which effectively reduces the supply of available drivers compared to the demand and causes an increase of the surge fee.
These price surges are meant to incentivize customers to delay their request, reducing the current demand, as well as to incentivize drivers to offer their service in areas or at times with high demand, increasing the supply.
As illustrated in Fig.~\ref{fig:Fig1}b, during the evening the system settles
to constant base cost, reflecting constant trip duration in uncongested traffic.
Yet,
even under these apparent equilibrium conditions, the surge fee exhibits a series of short, repeated price surges
(dashed box in Fig.~\ref{fig:Fig1}b) that are not reflected in the demand dynamics (Fig.~\ref{fig:Fig1}c).
Consistent with this observation, recent reports \cite{ABC7News2019a, ABC7News2019b, Mohlmann2017} suggest that Uber drivers at DCA and other locations cause artificial supply shortages on purpose to induce these price surges.
This behavior enables drivers to increase their revenue by capitalizing on the increased total fare.
Still, a couple of key questions remain open.
First, what is the dynamic origin of these non-equilibrium dynamics and under which conditions do they emerge?
Second, can this non-equilibrium state be identified from available data without direct observation of the supply dynamics?
\newpage
\begin{figure*}[h]
\centering
\vspace{5cm}
\includegraphics{FIG2_gametheory_v10.pdf}
\vspace{3cm}
\end{figure*}
\newpage
\begin{figure*}
\centering
\caption{
\textbf{Incentive structure in dynamic pricing}. \textbf{a}, A two player game captures the fundamental incentives for drivers.
Both drivers compete for a fixed average number of customers $1 \le D \le 2$. The drivers may choose to temporarily switch off their apps to induce an artificial supply shortage and additional surge fees (see Methods and Supplementary Material for details). (top left): If both drivers keep their apps on, both earn $p_\mathrm{low}$ (\$) with probability $D/2$. (top right and bottom left): If one driver switches their app off, the total fare increases to $p_\mathrm{mid}$ (\$\$). However, the other driver exploits their first-mover advantage to secure a customer, earning guaranteed $p_\mathrm{mid}$, while the offline driver only earns $(D-1) \, p_\mathrm{mid}$ from the remaining demand. (bottom right): If both drivers switch off their apps, they induce a larger supply deficit and thus a larger surge fee, resulting in the total fare $p_\mathrm{high}$ (\$\$\$). Both drivers again share the demand equally when they go back online.
\textbf{b}, Phase diagram of the resulting Nash equilibria in the two player game. (i): If the demand is sufficiently large,
the game is trivial and both drivers always switch off their app, triggering anomalous supply shortages (orange).
At low demand
the game becomes a prisoner's dilemma \cite{PrisonersDilemma2019} or stag hunt \cite{Osborne1994} and both drivers remain online (green). (ii) and (iii): As the demand becomes more elastic and decreases as the price increases, drivers switching off their app risk missing out on a customer completely and the parameter range promoting artificial price surges becomes smaller (orange). Drivers are more likely to both remain online (green).
\textbf{c}, A dynamic game with multiple drivers (see Methods and Supplementary Material for details) qualitatively reproduces the observed dynamics (compare DCA, Fig.~\ref{fig:Fig1}b): Sustained non-zero surge fees occur during commuting hours (grey). During non-commuting hours, drivers cooperate to induce artificial supply shortages and create price surges to optimize their collective profit.
}
\label{fig:Fig2}
\end{figure*}
\clearpage
A first principles game theoretic description captures the fundamental incentives underlying the anomalous supply shortages:
$S = 2$ drivers are competing for a fixed demand $D$ aiming to maximize their expected profit (Fig.~\ref{fig:Fig2}a). For illustration, we take a piecewise linear function such that drivers earn the total fare
\begin{eqnarray*}
p^\prime(S,D) = \begin{cases}
p_\mathrm{base} & \text{if}\quad S \ge D \\
p_\mathrm{base} + p_\mathrm{surge}^\mathrm{max} \, \frac{D-S}{D} & \text{else}
\end{cases}
\end{eqnarray*}
when they serve a customer, where $p_\mathrm{base}$ denotes the (constant) base cost and $p_\mathrm{surge}^\mathrm{max}$ denotes the maximum possible surge fee when $S = 0$ (see Methods and Supplementary Material for details).
Each driver has the option to temporarily not offer their service, contributing to an artificial supply shortage, $S < 2$.
As drivers turn off their app, the fare increases from $p_\mathrm{low} = p^\prime(2,D)$ with both drivers online over $p_\mathrm{mid} = p^\prime(1,D) \ge p_\mathrm{low}$ as one driver goes offline to $p_\mathrm{high} = p^\prime(0,D) \ge p_\mathrm{mid}$ when both drivers withhold their service.
While drivers who do not offer their service would typically miss out on a customer, the use of online mobile applications in most ride-sharing services enables them to quickly change their decision.
Turning their app back on, they can capitalize on the additional surge fee and earn the higher total fare by quickly accepting a customer before the dynamic pricing algorithm reacts (Fig.~\ref{fig:Fig2}a, see Methods for details).
Figure~\ref{fig:Fig2}b illustrates the phase diagram of the resulting Nash equilibria.
When the demand is inelastic and does not change as the price increases [Fig.~\ref{fig:Fig2}b, panel (i)], the payoff structure of the game changes from a prisoner's dilemma \cite{PrisonersDilemma2019} over a stag hunt \cite{Osborne1994} to a trivial game as demand increases.
At low demand, the high risk of completely missing out on a customer if the other driver remains online disincentivizes switching off the app.
At high demand, this risk disappears and drivers always profit from inducing artificial supply shortages to earn the additional surge fee.
As the demand becomes more elastic, i.e. the demand decreases in response to an increase of the total fare as
\begin{align}
D^\prime(p^\prime, D) = D \, (1 - \delta\, (p^\prime - p_\mathrm{base})) \,
\end{align}
where $\delta$ denotes the price elasticity of the demand,
the risk of missing out on a customer increases and the range of parameters where artificial price surges are incentivized becomes smaller [Fig.~\ref{fig:Fig2}b, panel (ii) and (iii)].
In general the specific conditions promoting artificial price surges depend on the details of the demand dynamics. Nonetheless, the supply-side incentives remain qualitatively unchanged and are a generic property of dynamic pricing schemes.
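While the phase boundaries in Fig.~\ref{fig:Fig2}b depend on these details, the classification itself is elementary to reproduce. The sketch below (Python) tabulates the pure Nash equilibria of the two player game with the linear surge fare and elastic demand defined above; the clamping of negative demand and the cap of one ride per driver are our (natural) conventions, and the normalization follows the Methods:
\begin{verbatim}
p_base, p_max = 1.0, 1.0 / 0.30        # normalization as in the Methods

def price(S, D):
    """Total fare p'(S, D) with the linear surge fee used above."""
    return p_base if S >= D else p_base + p_max * (D - S) / D

def payoffs(D, delta):
    """Row-player payoffs of the ON/OFF game; demand is clamped at 0
    and each driver serves at most one customer (our conventions)."""
    p_mid, p_high = price(1, D), price(0, D)
    Dm = max(D * (1 - delta * (p_mid - p_base)), 0.0)
    Dh = max(D * (1 - delta * (p_high - p_base)), 0.0)
    u_on_on = 0.5 * D * p_base             # share demand at base fare
    u_on_off = min(Dm, 1.0) * p_mid        # first mover secures a ride
    u_off_on = max(Dm - 1.0, 0.0) * p_mid  # only residual demand left
    u_off_off = 0.5 * Dh * p_high          # equal split at full surge
    return u_on_on, u_on_off, u_off_on, u_off_off

def classify(D, delta):
    a, b, c, d = payoffs(D, delta)
    ne_on, ne_off = a >= c, d >= b         # pure Nash equilibria
    if ne_off and not ne_on:
        return "OFF dominant (artificial surge)"
    if ne_on and ne_off:
        return "stag hunt"
    if ne_on and d > a:
        return "prisoner's dilemma"
    return "ON dominant"

for D in [1.1, 1.5, 1.9]:
    print(D, classify(D, 0.0), "|", classify(D, 0.30))
\end{verbatim}
For inelastic demand ($\delta=0$), high demand yields the trivial OFF-dominant game, while elastic demand ($\delta=0.30$) suppresses the artificial surge region, in line with Fig.~\ref{fig:Fig2}b.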
To isolate the impact of these supply side incentives, we simulate a time-continuous game under constant conditions (constant demand, a constant number of drivers and a constant price elasticity of demand) where the ON-OFF-decisions of the drivers are the only remaining dynamics (Fig.~\ref{fig:Fig2}c).
Drivers react to the current conditions and can choose to turn their app on or off at any time.
They contribute to an artificial supply shortage if sufficiently many other idle drivers are willing to also participate, following their mean-field optimal strategy.
To avoid never making profit, however, individual drivers remain offline only for a short amount of time, explicitly fixing the timescale of potential artificial price surges (see Supplementary Material for details).
The simulations reproduce qualitatively the same non-equilibrium price dynamics as observed in the recorded price data (compare Fig.~\ref{fig:Fig1}b):
Increases of the trip duration during commuting hours (grey shading in Fig.~\ref{fig:Fig2}c) are accompanied by a sustained supply-demand imbalance and surge fees without drivers turning off their app.
At other times, the drivers create short, artificial price surges to maximize their profit.
This result demonstrates that the systemic incentives in dynamic pricing schemes alone are sufficient to cause anomalous supply dynamics.
The fact that these incentives are generic to dynamic pricing schemes suggests that artificial supply shortages and non-equilibrium surge dynamics emerge independent of the location.
However, direct observation of the supply dynamics, e.g. of the number and location of online drivers, is typically impossible as this information is not publicly available.
Even with the above results, a bottom-up prediction
is practically not feasible since the exact conditions under which these dynamics are promoted depend on the specific details of the trip, the local demand dynamics, publicly unavailable details on the surge pricing algorithm as well as additional external influences such as local legislation.
We overcome these obstacles by exploiting the characteristic temporal structure of the surge dynamics (compare Fig.~\ref{fig:Fig1}b) to identify locations with similar dynamics that are
characteristic for
artificial supply shortages.
Based only on the price time series, without requiring further input on demand or supply or the specific dynamic pricing algorithm, we quantify the timescales of normalized price changes $\Delta p$
for 137 different routes in 59 urban areas across six continents (Fig.~\ref{fig:Fig3}a, see Methods for details).
The distribution of price changes separates into a slow and fast timescale and a contribution where the price does not change
\begin{equation}
P\left(\Delta p\right) = w_\mathrm{base} \, P_\mathrm{base}\left(\Delta p; \sigma_\mathrm{base}\right) + w_\mathrm{surge} \, P_\mathrm{surge}\left(\Delta p; \sigma_\mathrm{surge}\right) + w_0 \, \delta(\Delta p) \,.
\end{equation}
The slow price changes $P_\mathrm{base}\left(\Delta p; \sigma_\mathrm{base}\right)$ describe changes of the base cost varying as slowly as traffic conditions change during the day.
The fast price changes $P_\mathrm{surge}\left(\Delta p; \sigma_\mathrm{surge}\right)$ are associated with sudden changes of the surge fee.
The last term $w_0 \, \delta(\Delta p)$ describes times when the price remains constant and contributes only at $\Delta p = 0$, where $\delta$ represents the Dirac delta distribution and $w_0$ the remaining weight $w_0 = 1 - w_\mathrm{base} - w_\mathrm{surge}$.
Characterizing the contribution $w_\mathrm{surge}$ of the surge fee and the magnitude $\sigma_\mathrm{surge}$ of the associated price changes with a maximum likelihood Gaussian mixture model fit (see Methods for details)
\begin{equation}
P_x(\Delta p; \sigma_x) = \frac{1}{\sqrt{2 \pi \sigma_x^2}}\,e^{-\frac{\Delta p^2}{2\sigma_x^2}}
\end{equation}
with $x \in \left\{\mathrm{base}, \mathrm{surge}\right\}$ we find
locations without surge activity (Fig.~\ref{fig:Fig3}b and c) as well as locations with strong but infrequent price surges (Fig.~\ref{fig:Fig3}e).
Importantly, we also identify several locations with price change characteristics similar to those observed at DCA, with a high magnitude and contribution of surge price changes, suggesting strong and frequent price surges potentially driven by anomalous supply dynamics (compare Fig.~\ref{fig:Fig3}d).
\newpage
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{FIG3_classifications_v15.pdf}
\caption{\textbf{Characterizing non-equilibrium surge dynamics}.
The timescales of price changes characterize the surge dynamics at different locations.
These price changes separate into small, slow changes $P_\mathrm{base}$ corresponding to varying base costs and fast, strong changes $P_\mathrm{surge}$ corresponding to the surge fee (red and blue line in the histograms in \textbf{(b)}-\textbf{(e)}, respectively).
\textbf{a}, Characterizing locations by the total weight $w_\mathrm{surge}$ of the surge component of the price change distribution
and the magnitude $\sigma_\mathrm{surge}$ of the associated price changes
reveals several locations [e.g., Warsaw, Montreal, Chicago, New York City [city(1) and station(2)] and Chennai] with similar characteristics to DCA (see Fig.~\ref{fig:Fig4} and Methods and Supplementary Material for more details and additional examples).
\textbf{b} and \textbf{c}, Locations with low surge strength $\sigma_\mathrm{surge}$ exhibit no significant surge activity and no price changes on a fast time scale, shown here for Johannesburg (JNB, South Africa) and Brussels (BRU, Belgium).
\textbf{d}, Locations with high surge strength $\sigma_\mathrm{surge}$ and small surge contribution $w_\mathrm{surge}$ exhibit relatively few price surges (San Francisco, USA).
\textbf{e}, Locations with high surge strength $\sigma_\mathrm{surge}$ and high surge contribution $w_\mathrm{surge}$ exhibit a large number of fast price surges potentially driven by artificially induced supply shortages. Figure~\ref{fig:Fig4} confirms that the surge fee dynamics at these locations is indeed similar to the dynamics observed at DCA (Washington D.C., USA).
}
\label{fig:Fig3}
\end{figure*}
\clearpage
Indeed, all of the identified locations exhibit qualitatively similar non-equilibrium surge fee dynamics with a large number of repeated price surges, in particular during evening hours, demonstrating that the phenomenon is ubiquitous (Fig.~\ref{fig:Fig4}, see Supplementary Material for additional examples).
While these results do not directly imply that the price surges at these locations are artificially induced, the similarity to confirmed artificial price surges and the universality of the incentives for drivers makes it a likely conclusion.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{FIG4_SurgeCities_v5.pdf}
\caption{\textbf{Identifying non-equilibrium surge dynamics and anomalous supply shortages}. Repeated price surges similar to those observed at DCA (compare Fig.~\ref{fig:Fig3}e) emerge in locations across the globe (America, Asia and Europe) and independent of type of origin (airport, train station and other prominent locations). The surge dynamics at the six locations identified in Fig.~\ref{fig:Fig3}a is qualitatively and statistically similar to DCA.
The time scale separation indicates that all these observed surge dynamics are similar to characteristic dynamics
of anomalous supply shortages.
In particular, sustained periods with non-zero surge fee likely reflect a real supply-demand imbalance at that time while periods with repeated surge peaks are characteristic for price surges induced by artificial supply shortages (e.g. Warsaw evening, Montreal evening, Chicago evening, New York City afternoon and evening).
}
\label{fig:Fig4}
\end{figure*}
\clearpage
\section*{Discussion}
In summary, we
quantitatively demonstrated the emergence of non-equilibrium price dynamics in on-demand mobility systems
at various locations
across the globe.
We
showed that the fundamental incentives
sufficient for
promoting
their
emergence
through supply anomalies
constitute a generic property of dynamic pricing schemes.
In particular, these incentives are independent of the urban area, the specific route, or the exact type of mobility service and should even apply across industries.
Our methodology to classify the price dynamics of
on-demand mobility systems without explicit knowledge about the time-resolved demand and supply evolution
enables
a direct, systematic search for
supply
anomalies based on price time series only.
Furthermore, characterizing
the incentives and the conditions that promote artificially induced price surges enables targeted action to suppress the emergence of
such supply anomalies.
Specific
actions may include
offering ride-sharing options \cite{Vazifeh2018, Tachet2017, Santi2014, Molkenthin2019} (effectively lowering the demand, compare Fig.~\ref{fig:Fig2}b)
or providing more
or
alternative public transport options (effectively increasing the price elasticity of demand, compare Fig.~\ref{fig:Fig2}b).
In particular, our results suggest that limiting the maximum surge fee, as
done
in response to the initial reports from DCA \cite{NPR2019} and frequently discussed as potential legislation \cite{Patel2019, ETTech2019}, is not an effective response and may even result in the opposite effect if the demand is highly elastic (compare Chennai, Fig.~\ref{fig:Fig4}).
In general,
with the emergence of digital platforms, sharing economies and autonomous vehicle fleets, mobility services are becoming increasingly self-organized and complex
such that
new, potentially unintended collective dynamics will emerge \cite{Millard-Ball2019, Helbing2000b, Strogatz2005, Havlin2012, Helbing2013, Schroder2018, zeng2019switch}.
Our results provide conceptual insights into these dynamics to support the creation and regulation of fair, efficient and transparent publicly available
mobility services \cite{Vazifeh2018, Tachet2017, Santi2014, auer2015dynamics, OKeeffe2019, Molkenthin2019}.
\clearpage
\section*{Methods}
\textbf{Data sources and acquisition}.
In this work we recorded approximately 28 million ride-hailing price estimates for 137 routes
of \textit{Uber} rides
in 59 urban areas across six continents between 19-05-31 and 19-06-25. We distinguish between four types of routes based on the origin location:
63 airport, 23 convention center, 12 train station and 39 city trips (see Supplementary Data for detailed information and precise GPS coordinates of the different routes).
For each route, we prompted total fare requests with a fixed interval via Uber's \textit{price estimate} API endpoint recording the price estimates for each route every 2 to 30 seconds. Per request, the API returned lower and upper total fare estimates for all Uber products operating in the local area
as well as estimated distance and duration of the trip
which we equipped with the request timestamp.
Using Uber's \textit{products} API endpoint, we complemented the price estimate data with information on local booking fee, price per minute, price per mile, distance unit, minimum fees and the currency code parameter per product and location. We convert all price estimates to
US Dollars
based on currency exchange rates provided by the European Central Bank for the date of recording.
In all our analyses, we work with the lower estimate of the local economy product (\textit{UberX}, \textit{UberGO} in India).\\
\textbf{Base cost}.
To determine the base cost (sum of pickup fee, trip fee and surcharges) of a trip we first compute the trip fee based on the price per mile, price per minute and the estimated trip length and duration. We add the pickup fee obtained from the Uber \textit{products} API. Since data on the surcharges (e.g. airport fees or tolls) of individual trips is not available, we take surcharges to be constant for each trip. We subtract the pickup fee and trip fee from the price estimate and take the minimum value of this remaining surge fee and surcharge cost as estimate of the surcharges, such that zero surge fee occurs at least once in the recorded price estimates.\\
\textbf{Surge fee}.
To estimate the surge fee time series we subtract the base cost of the respective product from the total fare estimate. Since the available price estimates are rounded to integer values, the recorded price estimate may not reflect all changes of the trip fare (especially for shorter trips with lower absolute total fare). This leads to small fluctuations in the extracted surge fee that do not correspond to actual surge activity. \\
\textbf{Airport arrival data}.
To estimate the demand for rides at airports, we record the number of arrivals at each of the 63 airports where we recorded price estimates. We collected aircraft landing times, call signs and type of aircraft using flightradar24's open API in the corresponding time frame, as well as information on the different aircraft's current seat configuration obtained via flightera.net. We disregard entries without call signs or real landing times. In rare cases where no seat configuration was available, we estimate the number of seats as the average of all recorded flights with the same aircraft model (or the average over all aircraft models if no other similar model was recorded).\\
\textbf{Airport demand}.
We estimate
the demand for ride-hailing services as
proportional to the number of seats of all arriving airplanes (implying a constant fraction of potential Uber customers).
To create a continuous time series from the discrete arrival events of individual airplanes we compute a five minute moving average to create equidistant records every minute. This also slightly reduces the strong variations between minutes with and without arrivals.
Because we have much more frequent but not equally spaced data for the Uber price estimates,
we use the same procedure and compute a five minute moving average of the surge fee for every minute. This leaves us with the same granularity of the data as for the deplanements.
Using
these
data, we compute the cross-correlation between the Uber surge fee estimates and the deplanement data at the corresponding airport. In Fig.~1c we show the scatterplot at the timelag (deplanement earlier than surge) where this correlation is maximal for the illustrated window from 20:00 to 02:00 of the surge fee.\\
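In condensed form, the procedure amounts to the following (Python with pandas; \texttt{surge\_series} and \texttt{seat\_series} are placeholder names for the recorded series, assumed to be indexed by timestamp):
\begin{verbatim}
import pandas as pd

def prepare(series):
    """Resample to a 1-minute grid, then 5-minute moving average."""
    return series.resample("1min").mean().rolling(5, min_periods=1).mean()

def lagged_corr(surge_series, seat_series, max_lag=60):
    """Correlation with deplanements shifted earlier by `lag` minutes."""
    s, d = prepare(surge_series), prepare(seat_series)
    return {lag: s.corr(d.shift(lag)) for lag in range(max_lag + 1)}

# corr = lagged_corr(surge_series, seat_series)
# best_lag = max(corr, key=corr.get)
\end{verbatim}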
\newpage
\textbf{Comparison of surge dynamics}. To compare and characterize the surge dynamics for different trips we normalize the absolute surge fee time series by the base cost at that time, yielding an effective surge factor. For these normalized time series, we compute the per minute changes $\Delta p$ between consecutive time points (time $t$ in minutes),
\begin{eqnarray}
\Delta p(t) &=& \frac{ \textrm{total~fare}(t) }{ \textrm{base~cost}(t) } - \frac{ \textrm{total~fare}(t-1) }{ \textrm{base~cost}(t-1) } \label{eq:supp_normalized_price} \\
&=& \frac{ \textrm{surge~fee}(t) }{ \textrm{base~cost}(t) } - \frac{ \textrm{surge~fee}(t-1) }{ \textrm{base~cost}(t-1) } \,.\nonumber
\end{eqnarray}
To quantify and compare the statistical properties of the surge factor time series we split the price changes into three contributions. We take any data point with $\Delta p^2 < 10^{-7}$ to belong to a Dirac delta distribution at zero (not shown in the histograms) and fit a Gaussian mixture model with two Gaussian distributions to the remaining data.
Taking both distributions
to
have a mean of zero (no price change on average)
yields
\begin{eqnarray*}
\mathrm{Prob}\left(\Delta p\right) &=& w_0 \, \delta(\Delta p) \nonumber \\
&+& w_\mathrm{base} \, \frac{1}{\sqrt{2 \pi \sigma_\mathrm{base}^2}} \, e^{-\frac{\Delta p^2}{2\,\sigma_\mathrm{base}^2}} \nonumber \\
&+& w_\mathrm{surge} \, \frac{1}{\sqrt{2 \pi \sigma_\mathrm{surge}^2}} \, e^{-\frac{\Delta p^2}{2\,\sigma_\mathrm{surge}^2}}
\end{eqnarray*}
where the weight $w_\mathrm{surge}$ defines the \textit{surge contribution} and the standard deviation $\sigma_\mathrm{surge}$ is the \textit{normalized surge strength} used to characterize the surge dynamics. \\
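Concretely, the delta weight and the two-component fit can be obtained with a few lines of expectation maximization, exploiting that both Gaussians have zero mean; a minimal sketch (Python; the initialisation and iteration count are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

def fit_mixture(dp, iters=200):
    """EM fit of the two zero-mean Gaussians (base + surge).
    Points with dp^2 < 1e-7 belong to the delta peak at zero."""
    w0 = np.mean(dp**2 < 1e-7)
    dp = dp[dp**2 >= 1e-7]
    w = 0.5                                  # surge weight (nonzero part)
    s2 = np.array([0.1, 10.0]) * np.var(dp)  # (base, surge) variances
    for _ in range(iters):
        pdf = np.exp(-0.5 * dp[:, None]**2 / s2) / np.sqrt(2 * np.pi * s2)
        r = pdf * np.array([1 - w, w])
        r /= r.sum(axis=1, keepdims=True)    # responsibilities
        w = r[:, 1].mean()
        s2 = (r * dp[:, None]**2).sum(axis=0) / r.sum(axis=0)
    # overall weights: w_base = (1-w0)(1-w), w_surge = (1-w0) w
    return w0, (1 - w0) * w, np.sqrt(s2)
\end{verbatim}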
\textbf{Two player game -- minimal theoretical model}.
The results presented in the manuscript (Fig.~\ref{fig:Fig2}B) are obtained with normalized parameters $p_\mathrm{base} = 1$ and $\delta \in \left\{0, 0.15, 0.30\right\}$, allowing up to $p_\mathrm{surge}^\mathrm{max} = 1/0.30 \approx 3.33$ before no customer orders a ride at the maximum surge fee. See Supplementary Material for a detailed description.\\
\textbf{Dynamic multiplayer game}.
For the dynamic multiplayer game, we consider a single origin location with $N=160$ drivers. Upon completing a trip, drivers return to the origin location after a total round-trip time $t_s$ uniformly distributed in $\left[\left<t_s\right> - 5, \left<t_s\right> + 5\right]$ minutes. We increase the round-trip time from the base value $\left<t_s\right> = 30$ minutes to $\left<t_s\right> = 60$ minutes in the morning and afternoon (starting at 08:00, increasing linearly up to the maximum at 09:30, and decreasing back to the base value until 11:00; similarly in the afternoon from 15:00 to the maximum at 18:00 and back until 20:00).
The base cost $p_\mathrm{base}$ depend linearly on the round-trip time as $p_\mathrm{base} = 1 + \left<t_s\right>/2 \in \left[16, 31\right]$ USD as the round-trip time changes during the day. Similar to the two-player game, we
take
a linear price dependence for the surge pricing as
\begin{equation}
p^\prime(t) = \begin{cases}
p_\mathrm{base} \quad &\text{if} \quad N_\mathrm{idle}(t) \ge N_\mathrm{thresh} \\
p_\mathrm{base} + p_\mathrm{surge}^\mathrm{max} \left(1 - \frac{N_\mathrm{idle}(t)}{N_\mathrm{thresh}}\right) \quad &\text{else}
\end{cases} \nonumber
\end{equation}
based on the number $N_\mathrm{idle}$ of online drivers at the trip origin and the number of drivers $N_\mathrm{thresh}$ before the surge fee becomes non-zero. We
take
$N_\mathrm{thresh} = \lambda \, \left<t_s\right>$, where $\lambda = 2$ requests per minute describes the demand modeled as a Poisson process in time.
We model responses of the price to the current system state (number of available drivers and round-trip time) as instantaneous.
The behavior of customers and drivers is as follows:
Each customer $i$ is assigned a uniformly random maximum price $p_\mathrm{max,i} \in [p_\mathrm{base}, p_\mathrm{max}]$ they are willing to pay, where we take the price of Uber Black as $p_\mathrm{max} = 54$ USD. When the customer makes a request, they check the current total fare. If the current total fare is smaller than $p_\mathrm{max,i}$, the customer orders the ride. If the total fare is higher or no drivers are online and idle, the customer waits and checks again every 2 minutes.
After 10 minutes without ordering a ride, the customer leaves the system.
At every point in time the drivers decide whether to switch their app off or on. They make this decision based on the (mean field) optimal strategy to optimize their collective payoff. A driver switches off their app only if two conditions are fulfilled: first, if there are sufficiently many drivers available and willing to be offline to induce a non-zero surge fee. Second, if the price is less than the (mean field) optimal value for the drivers given the current system state. Each driver remains offline for at most 20 minutes. After this time, the driver only considers going offline again after serving a customer (drivers try to obtain similar individual profits whereas their optimal strategy based on maximizing their collective profit would be for some drivers to be always offline).
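For orientation, a strongly condensed sketch of this simulation loop is given below (Python). It keeps only the essential ingredients, i.e., Poisson demand, the linear surge fare and a crude threshold rule that stands in for the mean-field optimal ON/OFF strategy; the value of $p_\mathrm{surge}^\mathrm{max}$ and the withdrawal target are placeholders of ours, and customer patience is omitted:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, lam = 160, 2.0                      # drivers, mean requests per minute
p_base, p_surge_max, p_cap = 16.0, 40.0, 54.0   # p_surge_max: placeholder
t_round, t_off = 30, 20                # round-trip / offline times (minutes)
N_thresh = int(lam * t_round)          # surge threshold (see text)

def fare(n_idle):
    return p_base + p_surge_max * max(0.0, 1 - n_idle / N_thresh)

T = 24 * 60                            # one day at minute resolution
returning = np.zeros(T + t_round + 1, dtype=int)
back_on = np.zeros(T + t_off + 1, dtype=int)
idle, offline, fares = N, 0, np.empty(T)
for t in range(T):
    idle += returning[t] + back_on[t]  # drivers rejoin the queue
    offline -= back_on[t]
    if offline == 0 and idle > N_thresh // 2:
        # crude stand-in for the mean-field strategy: withdraw enough
        # drivers to push the fare to half of the maximum surge
        n_off = idle - N_thresh // 2
        idle -= n_off
        offline += n_off
        back_on[t + t_off] += n_off
    p = fare(idle)
    for _ in range(rng.poisson(lam)):  # customer requests this minute
        if idle > 0 and p <= rng.uniform(p_base, p_cap):
            idle -= 1
            returning[t + t_round] += 1
    fares[t] = p
\end{verbatim}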
\newpage
\nocite{AmazonDynamicPricing, UberSurge2019, LyftPricing2019, UberCities2019, UberSurge2019, Garg2019, UberPriceEstimates2019, Garg2019, UberNewSurge2018, UberPriceEstimates2019, UberLaunchDates2014, ABC7News2019a, DCTaxiMarketShare2019, ABC7News2019a, ABC7News2019a, ABC7News2019a, UberNewSurge2018, Garg2019, Skyrms2003, PrisonersDilemma2019, UberStatistics2019, ABC7News2019a}
\bibliographystyle{unsrt}
\label{appendixA}
In this section, we provide a more detailed presentation of the regularisation involved in defining the modular operator via the resolvent, i.e. \eqref{Sigmat1}. We shall consider the case of the plane, but the generalisation to the other cases is straightforward. After replacing the resolvent \eqref{eq:f1} and changing to the variable $z=(1-\lambda)/\lambda$, the operator to be computed is
\begin{align}\label{}
\Sigma_t=-\frac{1}{2\pi i}\oint_\Gamma \frac{dz}{z}z^{it} \left[ \frac{z}{1+z}-(-z)^{i\tilde t}G \right]\,,
\end{align}
where the contour $\Gamma$ is depicted in Fig.~\ref{fig:cont_3}. This integral is not well defined, as the contributions at both the origin and at infinity are ill defined. Nevertheless, it is easy to perform a regularisation to give it a precise meaning as follows. In order to tame both the origin and infinity, consider instead the function \cite{Fries:2019ozf}
\begin{align}\label{eq:Sigmae}
\Sigma^{(\epsilon)}_t&=-\frac{1}{2\pi i}\oint_\Gamma \frac{dz}{z}z^{it} \left[ \frac{z}{1+z}-G(-z)^{i\tilde t} \right] \cdot \frac{\left( 1-\epsilon^{-1} \right)z}{(z-\epsilon)\left( z-\epsilon^{-1} \right)}\,.
\end{align}
The last fraction contains the regulating function. When $z\to 0$, this vanishes linearly, which cancels the pole at the origin rendering the integrand bounded there. As $z\to \infty$ it decays linearly, which provides the necessary falloff for the contribution at infinity to converge. Finally, we see that
\begin{align}\label{}
\Sigma_t=\lim_{\epsilon\to 0} \Sigma_t^{(\epsilon)}\,.
\end{align}
The regularised integral $\Sigma_t^{(\epsilon)}$ is now easily computed. We have
\begin{align}\label{}
\Sigma^{(\epsilon)}_t&=-\frac{1}{2\pi i}(1-\epsilon^{-1})\oint_{\Gamma} \frac{dz}{z-\epsilon}z^{it} \left[ \frac{z}{1+z}-G(-z)^{i\tilde t} \right]\frac{1}{z-\epsilon^{-1}}\,,
\end{align}
which possesses two single poles, at $z=\epsilon$ and $z=\epsilon^{-1}$ respectively. The first term in the integral vanishes, because the function is holomorphic in a small neighbourhood around $\Gamma$ which does not contain any of the poles. This explains the transition from \eqref{Sigmat1} to \eqref{Sigmat2} in the main text. This is consistent with the resolvent method because, with our approach, the entire contribution should come from the branch cut of the resolvent, which is contained in the second term. Thus we are left with
\begin{align}\label{eq:Sigmae2}
\Sigma^{(\epsilon)}_t=\frac{1}{2\pi i}(1-\epsilon^{-1})G\oint_{\Gamma} dz\frac{z^{it} (-z)^{i\tilde t} }{(z-\epsilon)(z-\epsilon^{-1})}\,.
\end{align}
Now the discontinuity along $\mathbb R^+$ implies that just above and below the cut we have
\begin{align}\label{}
\left( -(z\pm i0^+) \right)^{i\tilde t}=e^{\pm \pi \tilde t} z^{i\tilde t}\,.
\end{align}
Again, since after regularisation the integrand is bounded in a vicinity of the origin, there is no contribution around the origin, and we are left with
\begin{align}
\Sigma^{(\epsilon)}_t=\frac{1}{2\pi i}(1-\epsilon^{-1})G \left( e^{\pi\tilde t}-e^{-\pi\tilde t} \right)\int_0^\infty dz\frac{z^{i(t+\tilde t)} }{(z-\epsilon)(z-\epsilon^{-1})}\,.
\end{align}
This last integral is readily evaluated. Relabelling the regulator as $\epsilon=-ie^{-2\pi m}$, in the limit $m\to\infty$ it yields \eqref{Sigmasol}.
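As an independent sanity check, the remaining $z$-integral, multiplied by the prefactor $(1-\epsilon^{-1})$, can be evaluated numerically and observed to converge for growing $m$; a small sketch (Python with scipy; the value of $t+\tilde t$ is arbitrary, and the tiny lower cut-off merely avoids evaluating the bounded integrand exactly at the origin):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

s = 0.7                                    # sample value of t + t~

def regularised_integral(m):
    eps = -1j * np.exp(-2 * np.pi * m)     # regulator eps = -i e^{-2 pi m}
    f = lambda z: z**(1j * s) / ((z - eps) * (z - 1 / eps))
    re = quad(lambda z: f(z).real, 1e-12, np.inf, limit=400)[0]
    im = quad(lambda z: f(z).imag, 1e-12, np.inf, limit=400)[0]
    return (1 - 1 / eps) * (re + 1j * im)

for m in [0.5, 1.0, 1.5, 2.0]:
    print(m, regularised_integral(m))      # converges with increasing m
\end{verbatim}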
\section{Discussion}
\label{sec:conclusions}
In this paper we computed the modular flow for the chiral fermion CFT in $1+1$ dimensions,
for entangling regions consisting of arbitrary sets of disjoint intervals. Working in the framework of functional calculus, we derived two important formulae:
1) The modular flow of the field operators \eqref{tomitapsi}, 2) the associated modular two-point
function \eqref{eq:mod2pt}. This was done for the cases of the vacuum and thermal states, both on
the infinite line and on the circle, giving an extensive overview of cases that are
usually of interest\,\cite{DiFrancesco:1997nk}.
A central element in our analysis is that we made extensive use of the resolvent method.
This technique has allowed us to resolve two main obstacles in the understanding of fermionic entanglement. First, we computed modular flow directly, bypassing the need for the modular Hamiltonian. This is important because although the modular Hamiltonian for the cases of interest was known\,\cite{Casini:2009vk,Klich:2015ina,Fries:2019ozf,PhysRevD.100.025003}, determining the flow from the Hamiltonian is in general very involved and remained unknown. The second problem one faces is that, even with the knowledge of the operator flow, finding the modular two-point function analytically appears hopeless at first sight. The reason is that, as discussed in section \ref{sec:mod_two}, the operator flow generically involves solving transcendental equations whose solutions are unknown. However, we have shown that the resolvent yields directly the modular correlator in closed analytic form, without even the need of considering such equations.
On the circle, we considered both antiperiodic (A) and periodic (P) boundary conditions, yielding
closed form expressions for both modular flows, including the extra terms that appear due to a zero mode
contribution in the periodic sector. These extra terms appear in the modular two-point function as a branch
cut --- a feature which, to our knowledge, is novel for exact solutions and which can be
traced back to the non-locality of the associated modular flow, as we discussed in section \ref{subsec:mod2pt-poles}. On the torus, we considered the $\nu=2,3$ sectors, showing that the flow leads to an infinite bi-local set of couplings. Moreover, we illustrated how the analytic structure of the two vacua on the cylinder arises as the low temperature limit of the exact result on the torus.
These solutions display a spectrum of degrees of non-locality. To discuss them, let us
restrict to the periodic case: Starting at high temperature, the correlations are
dominated by thermal fluctuations. Entanglement is thus suppressed and the reduced density
matrix is still a thermal state, albeit with a different temperature $\beta/L$ due to
the introduction of the region size $L$ as an additional scale. As a result, modular flow
coincides with regular, local, time evolution. While this is to be expected in fairly
general theories, it can also be seen explicitly from our results in
eq.~\eqref{tdZ_torus},\eqref{psi_t_torus} and figure~\ref{fig:torus}: In the limit of
small $\beta$, the $Z$-terms vanish, and a field initially localized at $y$ is
transported to the point $x = y - t\beta/L$. As we lower the temperature, we see that
modular flow adds bi-local couplings between an infinite discrete set of points, one for
each value of $k$ in eq.~\eqref{tdZ_torus}. As we derived in
eq.~\eqref{eq:sigma_series},\eqref{eq:dirac-comb}, these additional couplings can be
traced back to the anti-periodicity of the thermal propagator, i.e., the KMS condition in
physical time. Finally, as the temperature approaches absolute zero, the discrete
couplings condense to a continuum, which we derived in eq.~\eqref{sigma_R}.
As we saw in this paper, we have explicit results for mixed states in regions that consist of arbitrary sets of disjoint intervals. We cover finite temperature states, as well as the perfect mixture of degenerate ground states. It is remarkable how far one can actually go: this method provides not only a recipe to determine modular flow ‘in principle’, but, by virtue of the power of complex analysis, allows one to determine modular flow in closed analytic form.
Let us point out some future directions. In addition to naturally continuing the work on the modular theory for free fermions on
the torus, our results may give a starting point for computations of more
general modular flows. For example, the methods presented in this paper can readily be applied to excited states in the
fermion CFT, since the corresponding resolvent is known \cite{Klich:2015ina}. We
expect to find additional non-local terms there.
Another possible direction for further progress lies in the generalization to higher
dimensions, and even to massive (i.e.~non-conformal) theories: Since the resolvent formulae that we derived are valid
for \emph{any} free fermionic theory, they can be readily applied if one has the
corresponding resolvent at hand, which in turn can always be found by solving an integral
equation.
Finally, since the algebra of higher spin fields embeds into
the free fermion CFT \cite{Bischoff:2011mx}, it will be interesting to see if similar
methods can be used to derive modular flows there. We expect this to have applications to higher
spin AdS/CFT\,\cite{Campoleoni:2010zq,Gaberdiel:2010pz,Castro:2011zq,Ammon:2013hba,deBoer:2013vca} and to bulk reconstruction in this context. A starting point in this direction will be to explore the modular flow of higher spin operators by adapting the techniques presented here.
\bigskip
{\bf Acknowledgements}
We are grateful to Haye Hinrichsen for discussions.
The work of IR is funded by the Gravity, Quantum Fields and Information group at AEI, which is generously supported by the Alexander von Humboldt Foundation and the Federal Ministry for Education and Research through the Sofja Kovalevskaja Award. PF is supported by the DFG project HI 744/9-1.
\section{Modular flows: An overview}
\label{sec:modular-flows-overview}
\subsection{A lightning overview of modular theory}
Before we specialize to the free fermion, let us first recall some basic notions of
(Tomita-Takesaki) modular theory. Consider a von Neumann algebra ${\mathcal R}$ in standard form,
i.e. acting on a Hilbert space ${\mathcal{H}}$ with a cyclic separating vector $\gns$. We
can then define the Tomita conjugation $S$ by
\begin{equation}
S {\mathcal{O}}\gns := {\mathcal{O}}^\dagger\gns
\end{equation}
for all operators ${\mathcal{O}}\in{\mathcal R}$. Since $\gns$ is cyclic and separating, this defines $S$ on
a dense subspace of ${\mathcal{H}}$. Furthermore, one can show that $S$ is closable, hence, admits a
unique polar decomposition
\begin{equation}
S = J \Delta^{1/2}\,,
\end{equation}
where $J$ is antiunitary and $\Delta$ is positive. We denote $J$ and $\Delta$ as the
modular conjugation and modular operator, respectively.
The importance of these operators stems from Tomita's theorem, which states that
\begin{equation}
J {\mathcal R} J^\dagger = {\mathcal R}' \qtext{and} \Delta^{{\mathrm i} t} {\mathcal R} \Delta^{-{\mathrm i} t} = {\mathcal R}\,,
\end{equation}
i.e. $J$ intertwines ${\mathcal R}$ and its commutant ${\mathcal R}'$ (the set of bounded operators in ${\mathcal{H}}$ that commute with those in ${\mathcal R}$) while the modular flow
\begin{equation}
\sigma_t({\mathcal{O}}) := \Delta^{{\mathrm i} t} {\mathcal{O}} \Delta^{-{\mathrm i} t}
\end{equation}
preserves ${\mathcal R}$. Hence we see that modular flow is a
symmetry of ${\mathcal R}$ arising just from the algebraic structure. The generator $K$ of this
symmetry, in the sense that
\begin{equation}
{\mathrm e}^{-{\mathrm i} t K} := \Delta^{{\mathrm i} t}\,,
\end{equation}
is referred to as the modular Hamiltonian. The reason for this is that, just like time
evolution, modular flow obeys a Kubo-Martin-Schwinger (KMS) condition: For
${\mathcal{O}}_1,{\mathcal{O}}_2 \in {\mathcal R}$, the modular correlation functions
\begin{equation}\label{eq:strips}
\gnsev{{\mathcal{O}}_1 \sigma_t({\mathcal{O}}_2)} \qtext{and} \gnsev{\sigma_t({\mathcal{O}}_2) {\mathcal{O}}_1}
\end{equation}
admit analytic continuations to the strips $-1 \leq \Im(t) \leq 0$ and
$0 \leq \Im(t) \leq 1$, respectively, where they satisfy
\begin{equation}\label{eq:kms}
\gnsev{{\mathcal{O}}_1 \sigma_t({\mathcal{O}}_2)} = \gnsev{\sigma_{t+{\mathrm i}}({\mathcal{O}}_2) {\mathcal{O}}_1}\,.
\end{equation}
This is to be compared with the original KMS condition for a state
$\rho = {\mathcal{Z}}^{-1} {\mathrm e}^{-\beta H}$ of inverse temperature $\beta$: Denoting time evolution by
$\alpha_t({\mathcal{O}}) := {\mathrm e}^{{\mathrm i} t H} {\mathcal{O}} {\mathrm e}^{-{\mathrm i} t H}$, we have
\begin{equation}
{\mathrm{Tr}}[\rho {\mathcal{O}}_1 \alpha_t({\mathcal{O}}_2)] = {\mathcal{Z}}^{-1} {\mathrm{Tr}}[{\mathrm e}^{-{\mathrm i}(t-{\mathrm i}\beta) H} {\mathcal{O}}_1 {\mathrm e}^{{\mathrm i} t H} {\mathcal{O}}_2] = {\mathrm{Tr}}[\rho \alpha_{t-{\mathrm i}\beta}({\mathcal{O}}_2){\mathcal{O}}_1]\,.
\end{equation}
Therefore, $\sigma_t$ behaves like a time evolution with respect to the Hamiltonian $-K$
in a state of inverse temperature $-1$. In other words: If we only consider operators in
${\mathcal R}$, the vector $\gns$ behaves like a thermal state $\rho_{\mathcal R} = {\mathcal{Z}}^{-1} {\mathrm e}^{-K}$. This
coincides with the definition of modular flow in terms of ``reduced density
matrices'' \cite{Witten:2018zxz}. As a word of caution, we would like to mention here that
reduced density matrices do not exist as operators in a genuine quantum field theory (QFT)
due to the universal divergence of vacuum
entanglement \cite{Haag:1992hx,Brattelli:1997fuh,Witten:2018zxz}. However, in the
following, we will use the formal analogy to finite dimensional systems (where reduced
density matrices do exist) to derive relevant formulae. While not entirely rigorous, this
method has proven to be very useful in the past and was confirmed in many
cases \cite{Araki:1971id,Hollands:2019hje}.
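Since this formal analogy underlies most of what follows, it is worth checking it explicitly in a finite-dimensional toy model, where all the objects above are honest matrices. The following minimal sketch (our own illustration; the random Hamiltonian and operators are arbitrary choices) verifies the thermal KMS condition by evaluating the Heisenberg evolution at complex time via matrix exponentials:
\begin{verbatim}
# Numerical check of the thermal KMS condition in a finite-dimensional
# toy model: Tr[rho O1 alpha_t(O2)] = Tr[rho alpha_{t-i*beta}(O2) O1],
# with Heisenberg evolution evaluated at complex time via expm.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, beta, t = 4, 1.3, 0.7

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                 # a random Hermitian Hamiltonian
rho = expm(-beta * H)
rho /= np.trace(rho)                     # thermal state at temperature 1/beta

O1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def alpha(z, O):
    """Heisenberg evolution O -> e^{izH} O e^{-izH} at complex time z."""
    return expm(1j * z * H) @ O @ expm(-1j * z * H)

lhs = np.trace(rho @ O1 @ alpha(t, O2))
rhs = np.trace(rho @ alpha(t - 1j * beta, O2) @ O1)
print(abs(lhs - rhs))                    # zero up to roundoff
\end{verbatim}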
We now specialize to QFT in flat spacetime: In the Haag-Kastler approach to QFT, a von
Neumann algebra ${\mathcal R}$ is associated to each (causally complete) region in spacetime,
typically denoted by the same symbol. This algebra can be thought of as consisting of the
(bounded) operators that have support in ${\mathcal R}$. Since the associated modular flow
preserves this algebra and, hence, the associated region, it is tempting to ask to what
extent it has a geometric or physical meaning. Remarkably, this question has an affirmative
answer in the following scenario: If $\gns$ is the vacuum state and ${\mathcal R}$ a Rindler wedge,
then $K$ is nothing but the (appropriately scaled) generator of Lorentz boosts that
preserves this wedge \cite{Bisognano:1975ih}. Furthermore, we can sometimes use additional
symmetries (e.g. conformal symmetry) to generalize this geometric action to other regions,
such as lightcones and double cones, or even to other states, such as thermal states \cite{Hislop1982zhn}.
Finally, let us elaborate further on the KMS condition in the context of fermionic
theories. As an example, we consider eq.~\eqref{eq:kms} for the case of two field operators
${\mathcal{O}}_1 = \psi(x)$ and ${\mathcal{O}}_2 = \psi^\dagger(y)$. As stated above, the two functions defined in \eqref{eq:strips} are analytic when restricted to the lower and upper unit strips, respectively. Therefore, it is natural to define the \textit{modular} two-point function \cite{Haag:1967zfg,Haag:1992hx,Hollands:2019hje}
\begin{equation}\label{eq:master}
G_{\text{mod}}(x,y;t) :=
\begin{cases}
-\gnsev{\sigma_t(\psi^\dagger(y))\psi(x)} &\text{for } 0 < \Im(t) < 1\\
+\gnsev{\psi(x) \sigma_t(\psi^\dagger(y))} &\text{for } -1 < \Im(t) < 0.
\end{cases}
\end{equation}
This function, defined by a \textit{different} correlator in each strip, satisfies the
following antiperiodicity as a consequence of the KMS condition \eqref{eq:kms}:
\begin{equation}
\label{eq:kms2}
G_{\text{mod}}(x,y;t) = - G_{\text{mod}}(x,y;t+{\mathrm i})\ \ ,\ \ -1 < \Im(t) < 0
\end{equation}
allowing us to continue $G_{\text{mod}}$ to arbitrary non-integer imaginary parts of
$t$.
The reason for defining \eqref{eq:master} is its relation to the
anticommutator. While $G_{\text{mod}}$ is analytic in the lower and upper strips by
construction, the interesting question regards its regularity properties along
$\Im (t)=0$ (and, hence, at all integer imaginary parts due to antiperiodicity). Now it is
easy to see that the discontinuity of $G_{\text{mod}}$ across the real axis is given by the
anticommutator between a field and the modular evolved field:
\begin{equation}\label{eq:dGmod}
G_{\text{mod}}(x,y;t-{\mathrm i} 0^+) - G_{\text{mod}}(x,y;t+{\mathrm i} 0^+) = \gnsev{\{\psi(x), \sigma_t(\psi^\dagger(y))\}}\ \ ,\ \ t\in\mathbb R\,.
\end{equation}
This condition becomes particularly useful in cases where the modular evolution is given
by a smearing of the field (this will be precisely the case for Gaussian free fermion
states),
\begin{equation}
\sigma_t\big(\psi^\dagger(y)\big) = \int_V \dif[d]x \psi^\dagger(x) \Sigma_t(x,y)\,,
\end{equation}
with which \eqref{eq:dGmod} becomes
\begin{equation}
G_{\text{mod}}(x,y;t-{\mathrm i} 0^+) - G_{\text{mod}}(x,y;t+{\mathrm i} 0^+) = \Sigma_t(x,y)
\end{equation}
by the canonical anticommutation relation
$\{\psi(x),\psi^\dagger(y)\}=\delta(x-y)$. Equivalently, we can use the antiperiodicity
\eqref{eq:kms2} to rewrite this purely in terms of the function on the lower strip,
\begin{equation}
\label{eq:kms3}
G_{\text{mod}}(x,y;t-{\mathrm i} 0^+) + G_{\text{mod}}(x,y;t-{\mathrm i}+{\mathrm i} 0^+) = \Sigma_t(x,y).
\end{equation}
This relation is important because it relates the analytic structure of the modular
correlator to the locality properties of the modular flow, via the kernel $\Sigma_t$ of
the operator flow. If the flow under consideration is local, i.e., if
$\Sigma_t(x,y) \propto \delta(x-y)$, the right hand side vanishes almost everywhere and we
obtain antiperiodicity of the two-point function in imaginary time. Conversely, this
means that every failure of such regularity (e.g. a branch cut of $G_{\text{mod}}$) is a
clear sign of non-locality in the modular flow.
In section \ref{sec:mod_two} we will explicitly determine $G_{\text{mod}}(t)$ for the
two-dimensional chiral fermion. We will confirm that for the cases where modular flow is
local or bi-local (such as a single interval on the plane, antiperiodic cylinder, or
torus), the modular correlator restricted to the lower strip yields a function that is
analytic everywhere away from isolated simple poles. The salient example where analyticity
fails is a single interval in the periodic vacuum on the cylinder, where the flow is
completely non-local and $G_{\text{mod}}$ has branch cuts.
\subsection{Modular flows for free fermions}
\label{subsec:mod_flow_fermions}
For the rest of the paper, we restrict to Gaussian states in a fermionic theory. Moreover, we
work on a Cauchy slice and consider only subregions
$V$ of that slice. In this case, we can formally decompose the modular Hamiltonian as
\begin{equation}
K = \int_V \dif[d]x \int_V \dif[d]y \psi^\dagger(x)k(x,y)\psi(y) + \int_{V^c} \dif[d]x \int_{V^c} \dif[d]y \psi^\dagger(x)k^c(x,y)\psi(y)\,,
\end{equation}
where $V^c$ is the complement of $V$ in the Cauchy slice. The absence of mixing terms
between $V$ and $V^c$ reflects the fact that modular flow preserves ${\mathcal R}$ and
${\mathcal R}'$, which are associated to $V$ and $V^c$, respectively. Mathematically, the above kernels $k, k^c$ have to be understood in the
distributional sense, i.e. they are only defined when integrated against suitably smooth
test functions. For the remainder of the text we restrict to $k$, as the calculation of $k^c$ is completely analogous.
To derive an explicit formula for $k$, we require that the ``reduced density
matrix'' ${\mathcal{Z}}^{-1} {\mathrm e}^{-K}$ reproduces the correct expectation values for operators with
support in $V$. By Wick's theorem it is enough to reproduce the propagator
\begin{equation}
G(x,y) := \gnsev{\psi(x)\psi^\dagger(y)}
\end{equation}
in the subregion $x,y\in V$, which allows one to derive the relation \cite{Araki:1971id,Peschel:2003xzh}
\begin{equation}
{\mathrm e}^{-k} = \frac{1-G|_V}{G|_V}\,,
\end{equation}
where $G|_V$ is the restriction of $G$ to $V$. In a similar manner, we arrive at
\begin{equation}\label{tomitapsi}
\sigma_t\big(\psi^\dagger(y)\big) = \int_V \dif[d]x \psi^\dagger(x) \Sigma_t(x,y) \qtext{with} \Sigma_t = \bigg[\frac{1-G|_V}{G|_V}\bigg]^{{\mathrm i} t}
\end{equation}
and thus (for $-1 < \Im(t) \leq 0$)
\begin{equation}
\label{eq:mod2pt}
G_\mathrm{mod}(x,y;t) = \gnsev{\psi(x)\sigma_t\big(\psi^\dagger(y)\big)} = \bigg(G|_V\bigg[\frac{1-G|_V}{G|_V}\bigg]^{{\mathrm i} t}\bigg)(x,y)\,.
\end{equation}
Here and below, we use the compact notation $\Sigma_t$ omitting its space-time dependence,
but it should always be kept in mind that it is a linear operator acting on
functions. Similarly for other operators such as the resolvent.
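To make these formal expressions concrete, it is instructive to evaluate them in a finite-dimensional setting, where reduced density matrices do exist. The sketch below is our own illustration, assuming as input the ground-state correlations of a half-filled hopping chain (a standard lattice stand-in for the free fermion); it computes the kernel $k$ and the flow kernel $\Sigma_t$ by diagonalising the restricted propagator:
\begin{verbatim}
# Sketch of the Peschel relation e^{-k} = (1-G|_V)/G|_V and of the flow
# kernel Sigma_t = [(1-G|_V)/G|_V]^{it} for a lattice free fermion:
# ground state of a hopping chain at half filling, restricted to ell sites.
import numpy as np

ell = 12
d = np.arange(ell)[:, None] - np.arange(ell)[None, :]
with np.errstate(divide='ignore', invalid='ignore'):
    C = np.sin(np.pi * d / 2) / (np.pi * d)    # <psi_i^dag psi_j>
np.fill_diagonal(C, 0.5)
G = np.eye(ell) - C                            # G_ij = <psi_i psi_j^dag>

w, U = np.linalg.eigh(G)                       # spectrum lies in (0, 1)
assert w.min() > 0 and w.max() < 1

k = U @ np.diag(np.log((1 - w) / w)) @ U.conj().T   # kernel of K on V

def Sigma(t):
    """Flow kernel of eq. (tomitapsi) at modular time t."""
    return U @ np.diag(((1 - w) / w) ** (1j * t)) @ U.conj().T

print(np.allclose(Sigma(0), np.eye(ell)))      # flow is trivial at t = 0
\end{verbatim}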
The problem is thus reduced to computing functions of the (restricted) propagator
$G|_V$. Since this is a bounded operator -- its spectrum is contained in the interval
$[0,1]$ -- we can use functional calculus to write
\begin{equation}
\label{eq:cauchy}
f(G|_V) = \frac 1{2\pi{\mathrm i}} \oint_{\gamma} \dif \lambda f(\lambda) \frac 1{\lambda-G|_V}\,,
\end{equation}
where $1/(\lambda-G|_V)$ is the resolvent of $G|_V$ and $\gamma$ denotes a counter-clockwise
contour that tightly wraps around the spectrum
$[0,1]$, as shown in figure~\ref{fig:cont_1}. Eq.~\eqref{eq:cauchy} can easily be seen to
be correct in an eigendecomposition of $G|_V$ and implies that the resolvent
$1/(\lambda-G|_V)$, as a function of $\lambda$, is analytic in a neighbourhood of $[0,1]$,
but not along the interval itself: If the spectrum of $G|_V$ is discrete, we expect a
simple pole whenever $\lambda$ approaches an eigenvalue. For continuous portions of the
spectrum, this culminates in a branch cut. In any closed form expression of the resolvent,
such a branch cut will only be visible as a pair of branch points, and one might be tempted
to assume that the precise location of the cut is indeterminate. However, as just
discussed, one should always keep in mind that the branch cut is situated
along $[0,1]$.
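Eq.~\eqref{eq:cauchy} is also straightforward to test numerically: discretizing the contour and comparing with a direct eigendecomposition reproduces $f(G|_V)$ to machine precision. In the sketch below (our own illustration) we take the entire function $f=\exp$, so that no branch cuts enter, and a small random symmetric matrix with spectrum inside $(0,1)$ stands in for $G|_V$:
\begin{verbatim}
# Check of the functional calculus formula (eq:cauchy): a discretized
# contour integral of f(lambda) times the resolvent reproduces f(G|_V).
# Here f = exp (entire, so no branch cuts inside the contour) and G is a
# random symmetric matrix with spectrum pushed into (0, 1).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
S = A + A.T
G = 0.5 * np.eye(5) + 0.2 * S / np.linalg.norm(S, 2)  # spectrum in (0.3, 0.7)

N = 400
theta = 2 * np.pi * np.arange(N) / N
lam = 0.5 + 0.75 * np.exp(1j * theta)          # circle enclosing [0, 1]
dlam = 0.75j * np.exp(1j * theta) * (2 * np.pi / N)

F = sum(np.exp(l) * np.linalg.inv(l * np.eye(5) - G) * dl
        for l, dl in zip(lam, dlam)) / (2j * np.pi)
print(np.max(np.abs(F - expm(G))))             # agrees to machine precision
\end{verbatim}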
Let us compare eq.~\eqref{eq:cauchy} with the general definition of a function of the operator $G|_V$, given by
\begin{equation}
\label{spectral}
f(G|_V) = \int \dif{E_\lambda}\, f(\lambda) = \int \dif\lambda \od{E_\lambda}{\lambda} f(\lambda)\,,
\end{equation}
where $E_\lambda$ is the spectral measure of $G|_V$. Assuming the contributions to
eq.~\eqref{eq:cauchy} at $0$ and $1$ vanish, we obtain
\begin{equation}
\od{E_\lambda}{\lambda} =
\frac 1{2\pi{\mathrm i}} \bigg[\frac 1{\lambda-G|_V-{\mathrm i} 0^+} - \frac 1{\lambda-G|_V+{\mathrm i} 0^+}\bigg],
\end{equation}
which characterizes the spectral measure completely. Note that the requirement of
vanishing contributions at $0$ and $1$ also imposes a regularity constraint on the
function $f$. For the computations in section~\ref{sec:op_flow}, it will turn out that
this constraint is violated and we will have to work with eq.~\eqref{eq:cauchy} directly.
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{contourz_1.pdf_tex}
\caption{Contour used in \eqref{eq:cauchy}. The resolvent $(\lambda-G)^{-1}$ must
possess a cut along $[0,1]$, the spectrum of $G$.}
\label{fig:cont_1}}
\end{figure}
To proceed any further, it is necessary to find the resolvent for the state and region under
consideration. To this end, we can make the ansatz
\begin{equation}
\label{eq:res}
\frac 1{\lambda -G|_V} = \frac 1\lambda + \frac{F_\lambda}{\lambda^2}\,,
\end{equation}
which turns the functional equation $(\lambda-G|_V) \cdot (\lambda-G|_V)^{-1} = 1$ into the
integral equation
\begin{equation}
\label{eq:integral-equation}
-G(x,y) + F_\lambda(x,y) - \frac 1\lambda \int_V\dif[d]z G(x,z)F_\lambda(z,y) = 0\,, \quad x,y\in V\,.
\end{equation}
Notice that while this equation is valid for fermions in arbitrary dimensions, the
solutions are only known in two dimensions, which is the focus of the next subsection.
\subsection{Resolvent for the chiral fermion}
\label{subsec:resolvent}
Due to recent developments \cite{Fries:2019ozf}, \eqref{eq:integral-equation} is well
understood in the special case where we are dealing with a chiral fermion in one dimension
and $V = \bigcup_n [a_n,b_n]$ is a finite union of disjoint intervals. This is because
there the propagator is a Cauchy kernel, so the integral equation can be reduced to yet another complex analysis problem, in which the resolvent has a branch cut along $V$. We omit the details here and just state the results in the
following cases, classified by the domain/periodicities of $G$ (a small numerical sketch of the functions $Z$ defined below follows the list):
\begin{itemize}
\item No periodicity (the entire complex plane) -- the corresponding propagator is given by
\begin{equation}
\label{eq:g1}
G(x,y) = \frac 1{2\pi{\mathrm i}}\frac 1{x-y-{\mathrm i} 0^+}\,.
\end{equation}
The solution is
\begin{equation}
\label{eq:f1}
F_\lambda(x,y) = - \frac\lambda{1-\lambda} G(x,y) \bigg[- \frac{1-\lambda}\lambda\bigg]^{{\mathrm i} [Z(x)-Z(y)]}\,,
\end{equation}
with
\begin{equation}
\label{eq:z1}
Z(x) = \frac 1{2\pi} \log \bigg[-\prod_n \frac{a_n-x}{b_n-x}\bigg]\,.
\end{equation}
\item One periodicity, taken to be $1$ without loss of generality (the complex cylinder)
-- the propagators are
\begin{align}
G(x,y) &= \frac 1{2{\mathrm i}} \csc \pi(x-y-{\mathrm i} 0^+) \label{eq:g2}\,, \\
G(x,y) &= \frac 1{2{\mathrm i}} \cot \pi(x-y-{\mathrm i} 0^+) \label{eq:g3}\,,
\end{align}
depending on the choice of antiperiodic or periodic boundary conditions,
respectively. The corresponding solutions are
\begin{align}
F_\lambda(x,y) &= - \frac\lambda{1-\lambda} G(x,y) \bigg[- \frac{1-\lambda}\lambda\bigg]^{{\mathrm i} [Z(x)-Z(y)]}\,, \label{eq:f2} \\
F_\lambda(x,y) &= - \frac\lambda{1-\lambda} \bigg[G(x,y) + \frac 12 \frac{[-(1-\lambda)/\lambda]^L-1}{[-(1-\lambda)/\lambda]^L+1}\bigg]
\bigg[- \frac{1-\lambda}\lambda\bigg]^{{\mathrm i} [Z(x)-Z(y)]}\,, \label{eq:f3}
\end{align}
with the total length $L = \sum_n (b_n-a_n)$ of $V$ and
\begin{equation}
\label{eq:z23}
Z(x) = \frac 1{2\pi} \log \bigg[-\prod_n \frac{\sin \pi(a_n-x)}{\sin \pi(b_n-x)}\bigg]\,.
\end{equation}
\item Two periodicities $1, \tau$ (the complex torus) -- here, the propagators are
\begin{equation}
\label{eq:g4}
G^{(\nu)}(x,y;\tau) = \frac{\eta^3(\tau)}{{\mathrm i}\vartheta_1(x-y-{\mathrm i} 0^+|\tau)} \frac{\vartheta_\nu(x-y|\tau)}{\vartheta_\nu(0|\tau)}
\end{equation}
with $\nu=2,3$ denoting the periodic-antiperiodic (PA) and antiperiodic-antiperiodic
(AA) boundary conditions, respectively. The conventions for Jacobi theta and Dedekind eta functions are the same as in \cite{DiFrancesco:1997nk}. The solutions of \eqref{eq:integral-equation}
are now
\begin{align}
F^{(\nu)}_\lambda(x,y) &= - \frac\lambda{1-\lambda} G^{(\nu)}(x,y;\tau,Lh) \bigg[- \frac{1-\lambda}\lambda\bigg]^{{\mathrm i} [Z(x)-Z(y)]}\,, \label{eq:f4}
\end{align}
where $h$ is defined by ${\mathrm e}^{2\pi h} := -\frac{1-\lambda}\lambda$ and
\begin{align}
G^{(\nu)}(x,y;\tau,\mu) &= \frac{\eta^3(\tau)}{{\mathrm i}\vartheta_1(x-y-{\mathrm i} 0^+|\tau)} \frac{\vartheta_\nu(x-y-{\mathrm i}\mu|\tau)}{\vartheta_\nu(-{\mathrm i}\mu|\tau)}\,, \\
Z(x) &= \frac 1{2\pi} \log \bigg[-\prod_n \frac{\vartheta_1(a_n-x|\tau)}{\vartheta_1(b_n-x|\tau)}\bigg] \label{eq:z4}\,.
\end{align}
Note that $G^{(\nu)}(x,y;\tau,Lh)$ is the propagator of a state with chemical potential $Lh$, i.e. we have the series representations
\begin{align}
G^{(2)}(x,y;\tau,Lh) &= \sum_{k\in{\mathbb{Z}}}' \frac{{\mathrm e}^{-2\pi{\mathrm i} k(x-y-{\mathrm i} 0^+)}}{1+{\mathrm e}^{2\pi ({\mathrm i} k \tau-Lh)}} = \sum_{k\in{\mathbb{Z}}}' \frac{{\mathrm e}^{-2\pi{\mathrm i} k(x-y-{\mathrm i} 0^+)}}{1+[-(1-\lambda)/\lambda]^{-L}{\mathrm e}^{2\pi {\mathrm i} k \tau}}\,, \label{eq:gg2}\\
G^{(3)}(x,y;\tau,Lh) &= \sum_{k\in{\mathbb{Z}}+1/2}' \frac{{\mathrm e}^{-2\pi{\mathrm i} k(x-y-{\mathrm i} 0^+)}}{1+[-(1-\lambda)/\lambda]^{-L}{\mathrm e}^{2\pi {\mathrm i} k \tau}}\,, \label{eq:gg3}
\end{align}
where the symbol $\sum'$ denotes that the sums have to be ordered symmetrically to ensure convergence.
\end{itemize}
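As announced before the list, we close this subsection with a small numerical sketch of the functions $Z$ (our own illustration, for an arbitrary choice of $V$): it checks that $Z$ increases monotonically on each component of $V$, the property that will guarantee exactly one solution of $t+Z(x)-Z(y)=0$ per interval below:
\begin{verbatim}
# Sketch of the functions Z entering the resolvents above, for an
# arbitrary union of intervals V: on each component, Z increases
# monotonically from -infinity at a_n to +infinity at b_n.
import numpy as np

intervals = [(0.1, 0.4), (0.6, 0.9)]           # arbitrary choice of V

def Z_plane(x):                                # eq. (eq:z1)
    p = np.prod([(a - x) / (b - x) for a, b in intervals], axis=0)
    return np.log(-p) / (2 * np.pi)

def Z_cylinder(x):                             # eq. (eq:z23), both sectors
    p = np.prod([np.sin(np.pi * (a - x)) / np.sin(np.pi * (b - x))
                 for a, b in intervals], axis=0)
    return np.log(-p) / (2 * np.pi)

for a, b in intervals:
    x = np.linspace(a + 1e-6, b - 1e-6, 2001)
    assert np.all(np.diff(Z_plane(x)) > 0)     # monotone on each interval
    assert np.all(np.diff(Z_cylinder(x)) > 0)
\end{verbatim}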
\section{Introduction}
Recent years have witnessed significant progress in our understanding of the role that entanglement, as well as other ideas from quantum information theory, play in the context of high energy physics, including Quantum Field Theory (QFT) and gravity. A remarkable example of this exchange of ideas between different research areas is the Ryu-Takayanagi formula \cite{Ryu:2006bv} for the entanglement entropy in the AdS/CFT correspondence, which generalises the Bekenstein-Hawking area law for the black hole entropy.
One of the key concepts that have aided this progress is that of \textit{modular flow}. Loosely speaking, the modular flow $\sigma_t$ of an operator is given by a generalised time evolution with the density matrix $\rho$ itself, $\sigma_t({\mathcal{O}}):=\rho^{{\mathrm i} t} {\mathcal{O}} \rho^{-{\mathrm i} t}$.
An important property of this flow is that, when introduced in expectation values, it must satisfy a periodicity condition in imaginary time known as the Kubo-Martin-Schwinger (KMS) condition. Originally introduced within algebraic QFT~\cite{Takesaki:1970kop,Haag:1992hx,Brattelli:1997fuh,Borchers:2000pv,Takesaki:2003ght}, modular flow and its associated generator, the modular Hamiltonian, have found applications across a wide spectrum of topics due to their close connection to quantum information measures. This includes modular theory \cite{Lashkari:2018nsl,Witten:2018zxz,Lashkari:2019ixo}, relative entropy in QFT~\cite{Sarosi:2017rsq,Casini:2017roe,Blanco:2017akw}, entropy and energy inequalities~\cite{Casini:2008cr,Blanco:2013lea,Faulkner:2016mzt,Balakrishnan:2017bjg,Ceyhan:2018zfg}, conformal field theories \cite{Lashkari:2015dia,Cardy:2016fqc,Lashkari:2018oke,Long:2019fay}, and bulk reconstruction in gauge/gravity duality~\cite{Casini:2011kv,Blanco:2013joa,Jafferis:2014lza,Jafferis:2015del,Lashkari:2013koa,Koeller:2017njr,Czech:2017zfq,Chen:2018rgz,Belin:2018juv,Abt:2018zif,Faulkner:2018faa,Jefferson:2018ksk,Czech:2019vih,deBoer:2019uem,Arias:2020qpg}. For free fermions, the use of the resolvent method was first introduced in \cite{Casini:2009vk} to study the vacuum on the plane, and subsequently for the cylinder \cite{Klich:2015ina}, and torus \cite{Fries:2019ozf,PhysRevD.100.025003}. The modular two-point function on the cylinder was studied in \cite{Hollands:2019hje}.
Despite the many contexts in which modular flow appears, there are very few cases where its action is explicitly known.
In the general context of QFT, the vacuum modular flow in a Rindler wedge is fixed by Poincaré symmetry alone\,\cite{Bisognano:1975ih}, while conformal symmetry fixes it for diamond shaped geometries\,\cite{Hislop1982zhn}. Anything beyond these cases, be it the choice of another state or a different region, depends on the details of the theory under consideration and is largely unknown.
Many discussions are concerned with universal properties of modular flows, or deal with highly symmetric configurations where the flow has a geometric (local) interpretation. In a generic case however, we expect to see many forms of non-localities. An initially local operator that is subject to non-local modular flow acquires contributions from spacelike separated regions with increasing modular time. Quantum entanglement of spacetime seems to play a crucial role here. Therefore it is of great interest to obtain further detailed understanding of explicit realizations of modular flow for specific cases.
The example of the chiral fermion is rich enough for understanding non-universal behaviour in detail,
but still simple enough for explicit computations. Recently, this has led to novel results on the modular Hamiltonian and relative entropy for disconnected regions \cite{Casini:2009vk,Klich:2015ina,Klich:2017qmt,Fries:2019ozf,Fries:2019acy,PhysRevD.100.025003,Hollands:2019hje}. In this paper we go beyond these studies and derive results for the modular flow itself. We provide explicit formulae that may find direct applications in studies of fermionic entanglement.
Let us briefly state our main results. We consider density matrices reduced to an arbitrary set of disjoint intervals $V = \bigcup_n [a_n,b_n]$. Modular flow of a fermion operator localised at $y\in V$ is given by the convolution
\begin{align}\label{eq:psi_1}
\sigma_t\left( \psi^\dag(y) \right)=\int_V\dif{x}\psi^\dag(x) \Sigma_t(x,y) \, ,
\end{align}
where the kernel is a function of the correlator,
\begin{align}\label{}
\Sigma_t=\left( \frac{1-G|_V}{G|_V} \right)^{{\mathrm i} t}.
\end{align}
Here the reduced propagator $G|_V$ is understood as a linear operator acting on smooth functions via convolution. This approach avoids the computation of the modular Hamiltonian and instead directly yields the flow. We determine this kernel, and the explicit formulae for the modular flow are given in \eqref{sigmasol} (for the plane or cylinder (A)), \eqref{sigma_R} (cylinder (P)) and \eqref{psi_t_torus} (torus), which are illustrated in figures \ref{fig:two_ints}, \ref{fig:Ramond}, and \ref{fig:torus} respectively. Here, the notation P and A refers to periodic and antiperiodic boundary conditions for the fermions on the spatial circle, in other words the Ramond and Neveu-Schwarz sector. As a second novel result, we explicitly compute the modular two-point function, which is given by
\begin{align}\label{eq:Gmod_1}
G_{\text{mod}}(x,y;t)=\langle \psi(x)\, \sigma_t( \psi^\dag(y))\rangle\,.
\end{align}
The final results are, in the same order as above, \eqref{eq:g-mod12}, \eqref{eq:g-mod-cyl} and \eqref{eq:g-mod-torus}. A remarkable feature of this approach is that, although $\sigma_t(\psi^\dag)$ generically involves solving higher-degree polynomials or transcendental equations, the modular two-point function can be determined analytically by direct integration, without the need to solve such equations.
An important concept in our paper will be that of different degrees of \textit{locality} of the modular flow. This can be understood directly from \eqref{eq:psi_1}. We call a flow \textit{completely non-local} if the kernel $\Sigma_t(x,y)$ is a smooth function of $x,y$ supported on the entire region $V$, since it mixes operators along the entire region. If however $\Sigma_t(x,y)\sim \delta \left( f(x,y) \right)$ for some function $f$, the integral in \eqref{eq:psi_1} will localise to a discrete set of isolated contributions, namely the zeroes of $f$. Generically, these solutions will be non trivial, in the sense that $x\neq y$ at $t=0$. We call these solutions \textit{bi-local}, since they couple pairs of distinct points. Finally, if there is a solution such that $x=y$ at $t=0$, we call it \textit{local}. We will use this terminology throughout the text. As we shall see, one of the essential features of our operator flow \eqref{eq:psi_1} is that it changes its locality properties depending on the temperature and the spin boundary conditions. In turn, this manifests itself in the structure of poles and cuts of the modular correlator \eqref{eq:Gmod_1}.
The paper is organized as follows. In section \ref{sec:modular-flows-overview} we specify the objects that we aim to compute: the modular flow of fermion operators and the modular two-point function. To this effect, we first review some basic notions of Tomita-Takesaki theory, a mathematical framework relevant for local quantum field theories. In section \ref{subsec:mod_flow_fermions} we introduce the particular physical system we focus on -- the free chiral fermion -- together with the necessary tools we will use to study modular flow. The main ingredient is holomorphic functional calculus, including the method of the resolvent. Section \ref{subsec:resolvent} includes a list of all known fermion resolvents. Section \ref{sec:op_flow} contains the first new results of this paper: We apply the above techniques to find the modular flow of the fermion operator for all cases considered.
Section \ref{sec:mod_two} presents our second important result, the modular two-point function. We verify that it obeys all required properties, such as analyticity and the KMS condition. We conclude by a summary and future directions in section \ref{sec:conclusions}.
\section{Modular flow of operators}
\label{sec:op_flow}
In this section we will compute explicitly the modular flow of the fundamental field, $\sigma_t(\psi^\dag)$, from \eqref{tomitapsi}. This is a basic building block that allows us to compute the flow of composite operators.
As explained in section \ref{subsec:mod_flow_fermions}, the task reduces to determining the kernel
\begin{align}\label{eq:Sigmat0}
\Sigma_t=\left( \frac{1-G|_V}{G|_V} \right)^{{\mathrm i} t}\,.
\end{align}
Using the Cauchy formula \eqref{eq:cauchy}, with $f(\lambda)=\left( \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} t}$ and decomposing the resolvent as in \eqref{eq:res} we have
\begin{equation}\label{Sigmat1}
\Sigma_t = \frac 1{2\pi{\mathrm i}} \oint_\gamma \dif\lambda \left( \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} t} \left[ \frac{1}{\lambda}+\frac{F_\lambda}{\lambda^2} \right]\,.
\end{equation}
As it stands, this integral is not completely well defined, as the integrand is both divergent and highly oscillatory near the branch points. However, this should come as no surprise: since we know that $\Sigma_t(x,y)$ represents a distribution, we expect the appearance of Dirac delta distributions. Thus the strategy is to regularise the integral, evaluate it, and finally remove the regulator and identify the remaining distributions.
Here the analytic structure of the integrand is crucial. In addition to the cut associated to the resolvent -- the last factor in \eqref{Sigmat1} -- $f(\lambda)$ has introduced another cut, branched over the same endpoints. The latter cut can be freely chosen as long as it does not overlap with the former. For simplicity, we choose it to run along the real complement, ${\mathbb{R}} \setminus [0,1]$ -- see fig. \ref{fig:cont_2}.
In appendix \ref{appendixA} we provide a rigorous treatment of this integral and evaluate it by residues. Here instead, we proceed with a more straightforward but nevertheless equivalent approach. As explained in appendix \ref{appendixA}, a standard regularisation consists of avoiding the poles at $\lambda=0$ and $\lambda=1$ by shifting them slightly into the complex plane. As a consequence, the integral of the first term in the square brackets, proportional to $1/\lambda$, vanishes. This can be seen as follows: the term has a branch cut along $(-\infty,0)\,\cup\,(1,\infty)$ and a pole at $\lambda=0$, but is holomorphic everywhere else. In particular, the contribution around $\lambda=1$ vanishes due to the KMS requirement, and since both boundary contributions vanish, so does the integral. For more details, we refer to appendix \ref{appendixA}. Thus, we are left with
\begin{equation}\label{Sigmat2}
\Sigma_t = \frac 1{2\pi{\mathrm i}} \oint_\gamma \dif\lambda \left( \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} t} \frac{F_\lambda}{\lambda^2}\,.
\end{equation}
Finally, $F_\lambda$ takes a different form depending on the topology and boundary conditions chosen. We consider them case by case.
\subsection{Plane}
\label{subsec:vacuumflow}
We start with the simplest case. For the vacuum state on the plane \eqref{eq:f1}, $F_\lambda$ takes the form
\begin{align}\label{F_lam}
F_\lambda(x,y)=-\frac{\lambda}{1-\lambda} G(x,y) \left( - \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} \tilde t}\,,
\end{align}
where we have introduced the shorthand notation
\begin{align}\label{tildet}
\tilde t(x,y)=Z(x)-Z(y)
\end{align}
and throughout the text we use $\tilde t=\tilde t(x,y)$ and omit the spacetime dependence, which should nevertheless be kept in mind. The notation will become clear shortly, as we will see that $\tilde t$ plays a role closely analogous to modular time $t$.
Also, it is important to realise that here the propagator $G(x,y)$ has no dependence on $\lambda$ and therefore can be pulled out of the integral (notice however that this will not hold for the cylinder (P) or the torus). Inserted back into \eqref{Sigmat2}, one obtains
\begin{align}\label{Sigma_t}
\Sigma_t(x,y)&=G(x,y)S(x,y)\,,
\end{align}
where we have defined the integral (see fig. \ref{fig:cont_2})
\begin{align}\label{Sxy}
S(x,y)=-\frac{1}{2\pi{\mathrm i}}\oint_{\gamma} \frac{\dif\lambda}{\lambda(1-\lambda)} \left( \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} t} \left( - \frac{1-\lambda}{\lambda} \right)^{{\mathrm i} \tilde t}\,.
\end{align}
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{contourz_2.pdf_tex}
\caption{ Contour for the integral \eqref{Sxy}, with the associated branch cuts indicated. As explained in the text, the regularisation renders the integrand bounded around the origin, and thus the contributions along the dashed semicircles vanish. }
\label{fig:cont_2} }
\end{figure}
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{contourz_3.pdf_tex}
\caption{ Contour for \eqref{Sxy2}, obtained from the previous figure by $z=(1-\lambda)/\lambda$. Again, the contribution of the dashed segment vanishes when properly regularised. }
\label{fig:cont_3} }
\end{figure}
In order to exploit the symmetry and simplify the integral, it is useful to change variable to $z=(1-\lambda)/\lambda$, which maps the cut along $[0,1]$ to ${\mathbb{R}}^{+}$, and ${\mathbb{R}}\setminus [0,1] $ to ${\mathbb{R}}^{-}$, while the image of the contour wraps positively around ${\mathbb{R}}^{+}$, see fig. \ref{fig:cont_3}. To make the branch cuts explicit, we use $-\pi<\arg z<\pi$ for the cut along $\mathbb R^-$ and $0<\arg z<2\pi$ for the one along $\mathbb R^+$. The integral \eqref{Sxy} now reads
\begin{align}\label{Sxy2}
S(x,y)&=\frac{1}{2\pi {\mathrm i}}\oint_{\Gamma} \frac{\dif{z}}{z} z^{{\mathrm i} t} (-z)^{{\mathrm i}\tilde t}\,.
\end{align}
The discontinuity along $\mathbb R^+$ implies that just above and below the cut we have
\begin{align}\label{}
\left( -(z\pm {\mathrm i} 0^+) \right)^{{\mathrm i}\tilde t}={\mathrm e}^{\pm \pi \tilde t} z^{{\mathrm i}\tilde t}\,.
\end{align}
As mentioned above and discussed in detail in the appendix, the regularisation makes the integrand bounded in a neighbourhood of the origin $z=0$, so we can neglect the contribution of the boundary point. Putting everything together, \eqref{Sxy2} can be formally represented as
\begin{align}\label{}
S(x,y)&=-\frac{1}{2\pi {\mathrm i}}\left( {\mathrm e}^{\pi \tilde t}-{\mathrm e}^{-\pi \tilde t} \right)\int_{\mathbb R^+} \dif{z}\, z^{{\mathrm i} (t+\tilde t)-1}\,.
\end{align}
Now we proceed to the regularisation. This integral receives divergent contributions from both $z\to 0$ and $z\to \infty$. Both can be made finite by restricting the integration to $z\in({\mathrm e}^{-2\pi m},{\mathrm e}^{2\pi m})$ where we eventually take $m\to \infty$. The result is
\begin{align}\label{eq:Ssin}
S(x,y)=2{\mathrm i}\sinh \left( \pi \tilde t \right) \frac{\sin 2\pi m(t+\tilde t)}{\pi (t+\tilde t)}\,.
\end{align}
Again, we remind the reader that a rigorous derivation of this via residue analysis is presented in appendix \ref{appendixA}. This expression is actually familiar. If $t+\tilde t(x,y)\neq 0$, the fraction is bounded but wildly oscillating, and therefore vanishes when integrated against regular test functions. In the vicinity of $t+\tilde t(x,y)=0$ instead, the fraction diverges as $m\to\infty$. This is the standard Dirichlet kernel representation of the Dirac distribution; using $\sinh(\pi \tilde t) = -\sinh(\pi t)$ on its support, we obtain
\begin{align}\label{Sxy3}
S(x,y)=-2{\mathrm i} \sinh (\pi t) \delta \left( t+\tilde t \right)\,.
\end{align}
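The distributional limit invoked here is easily checked numerically: smearing the Dirichlet kernel against a test function (a Gaussian, in this sketch of our own) reproduces the value of the test function at the origin already for moderate cutoffs $m$:
\begin{verbatim}
# The Dirichlet kernel sin(2*pi*m*u)/(pi*u) acts as delta(u) on test
# functions; written as 2*m*sinc(2*m*u) to avoid the removable singularity.
import numpy as np
from scipy.integrate import trapezoid

u = np.linspace(-8, 8, 200001)
test = np.exp(-u**2)                           # test function, test(0) = 1
for m in (1, 2, 4, 8):
    kern = 2 * m * np.sinc(2 * m * u)          # = sin(2*pi*m*u)/(pi*u)
    print(m, trapezoid(kern * test, u))        # -> test(0) = 1
\end{verbatim}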
As anticipated, we see that $\tilde t$ indeed plays a role closely analogous to modular time itself. Putting everything back together into \eqref{Sigma_t} and replacing \eqref{tildet}, we learn that the kernel associated to the action of modular flow is
\begin{align}\label{Sigmasol}
\Sigma_t(x,y)=-2{\mathrm i} \sinh(\pi t) G(x,y)\delta(t+Z(x)-Z(y))\,,
\end{align}
whose support is given by the solutions of
\begin{align}\label{tdZ}
t+\tilde t =t+Z(x)-Z(y)=0\,.
\end{align}
This equation and its solutions play a fundamental role in our analysis: they determine which points are non-locally coupled via modular flow, as well as the magnitude of the corresponding coefficients. For a fixed $y$, we shall call $x_\ell=x_\ell(y)$ the solutions for $x$, where the discrete index $\ell$ labels the different intervals in $V$. The most important property of the function $Z$ is that it increases monotonically from $-\infty$ at each left endpoint $a_j$ to $+\infty$ at the right endpoints $b_j$. This guarantees that there exists exactly one solution to this equation per interval.
The action of modular flow finally reads
\begin{align}\label{sigmasol}
\sigma_t\left( \psi^\dag(y) \right)&=-2{\mathrm i} \sinh(\pi t) \sum_\ell \frac{G(x_\ell,y)}{Z'(x_\ell)} \psi^\dag(x_\ell)\,,
\end{align}
where as before $x_\ell$ are the solutions of \eqref{tdZ}. We omit the absolute value in the denominator since $Z(x)$ is monotonically increasing in each interval. In order to gain some intuition, we illustrate these results below with some simple examples.
To conclude here, let us note what happens in the limit $t\to 0$. At zero time, the kernel $\Sigma_t$ in \eqref{eq:Sigmat0} must reduce to the identity, localized at $x=y$. Now, the prefactor in \eqref{sigmasol} vanishes linearly with time. Since the propagator has a (unique) simple pole at coincident points, all terms except the `local' one, whose solution obeys $x_\ell\to y$ as $t\to 0$, vanish in \eqref{sigmasol}.
\paragraph{Rindler space.} This is the best known explicit case, which obeys a universal formula for the vacuum of any QFT on the Rindler wedge \cite{Bisognano:1975ih}. Physically this corresponds to the standard Unruh effect, where modular evolution is nothing but translations along the worldline of observers with constant acceleration. Here the entangling region is $V={\mathbb{R}}^+$, which can be obtained by taking the limits $a\to 0$ and $b\to\infty$ of a single interval $(a,b)$ on the plane \eqref{eq:z1}, yielding
\begin{align}\label{}
Z(x)-Z(y)=\frac{1}{2\pi} \log \frac{x}{y}\,.
\end{align}
The unique solution to \eqref{tdZ} is $x_1={\mathrm e}^{-2\pi t}y$, which inserted back into \eqref{sigmasol} leads to the geometric flow
\begin{align}\label{}
\sigma_t\left( \psi^\dag(y) \right)={\mathrm e}^{-\pi t} \psi^\dag\left( {\mathrm e}^{-2\pi t}y \right)\,,
\end{align}
the prefactor being due to the transformation law of a spin $1/2$ field under Lorentz boosts.
\paragraph{Multiple intervals on the plane.} Here the entangling region is an arbitrary set of disjoint intervals $V=\cup_{i=1}^n (a_i,b_i)$. This was the case solved in the seminal work \cite{Casini:2009vk}. However, it is important to note how their strategy differs from ours. In \cite{Casini:2009vk} the authors first derived the modular Hamiltonian $K_V=-\log \rho_V$ and then used the associated Heisenberg equation $\partial_t \psi={\mathrm i}[\psi,K_V]$. This yields a set of coupled differential equations relating the different $\psi(x_\ell(t))$. On the other hand, we computed modular flow \textit{directly} in terms of the resolvent. This avoids using the modular Hamiltonian itself, and the need for the differential equation, and hands us the solution at once.
For completeness, and in order to compare to the new results, we illustrate this case for two intervals $(a_1,b_1)\cup (a_2,b_2)$. Again, the solutions to \eqref{tdZ} are essential. In this case,
\begin{align}\label{}
Z(x)=\frac{1}{2\pi} \log \bigg[- \frac{(x-a_1)(x-a_2)}{(x-b_1)(x-b_2)}\bigg]
\end{align}
and therefore \eqref{tdZ} leads to a second degree equation, which can be readily solved, although the expression is rather cumbersome. We plot these solutions in Fig.~\ref{fig:two_ints}. This was the first known case of a bi-local or \textit{quasi}-local modular flow that could be solved analytically \cite{Casini:2009vk}. The most important feature is that the flow involves two kinds of terms. The local solution lives in the same interval as $y$, and is continuously connected to it at $t=0$. But there are also bi-local terms, one per interval. Also notice that due to chirality, the solutions move towards the left as modular time $t$ evolves, converging to the left endpoints asymptotically, and similarly go to the right endpoints as $t\to -\infty$.
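In practice, the roots of \eqref{tdZ} are just as conveniently found numerically, using the monotonicity of $Z$ on each interval. The following sketch (our own illustration, with arbitrary endpoints, $y$ and $t$) locates the local and bi-local solutions for two intervals and evaluates the corresponding weights in \eqref{sigmasol}:
\begin{verbatim}
# Locating the points coupled by modular flow for two intervals on the
# plane: solve t + Z(x) - Z(y) = 0 once per interval (Z is monotone there,
# so brentq applies) and evaluate the weights of eq. (sigmasol).
import numpy as np
from scipy.optimize import brentq

intervals = [(0.0, 1.0), (2.0, 3.5)]           # arbitrary endpoints
y, t = 0.3, 0.4

def Z(x):                                      # eq. (eq:z1), scalar x
    p = np.prod([(a - x) / (b - x) for a, b in intervals])
    return np.log(-p) / (2 * np.pi)

def Zp(x):                                     # Z'(x)
    return sum(1/(x - a) + 1/(b - x) for a, b in intervals) / (2 * np.pi)

G = lambda u, v: 1 / (2j * np.pi * (u - v))    # eq. (eq:g1) for u != v

eps = 1e-12
roots = [brentq(lambda x: t + Z(x) - Z(y), a + eps, b - eps)
         for a, b in intervals]
for x_l in roots:                              # local + bi-local solutions
    w = -2j * np.sinh(np.pi * t) * G(x_l, y) / Zp(x_l)
    print(f"x_l = {x_l:.6f}, weight = {complex(w):.6f}")
\end{verbatim}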
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{two_intervals}
\caption{ An illustration of the modular flow $\sigma_t\left( \psi^\dag(y) \right)$ in \eqref{sigmasol} for the union of two intervals $(a_1,b_1)\cup(a_2,b_2)$ (black) on the plane. We plot the function $Z(x)-Z(y)+t$ for fixed $y$ as a function of $x$, for both $t=0$ (dashed curve) and $t>0$ (solid curve). The flow couples the solutions of \eqref{tdZ}, $Z(x)-Z(y)+t=0$, which correspond to the zeroes of the plotted function (blue dots for $t=0$, red for $t>0$). Since $Z(x)$ increases monotonically from $-\infty$ to $\infty$ within each interval, there exists exactly one solution per interval. As modular time evolves, the curve is shifted linearly in time; consequently the zeroes move left under modular flow. The solution contained in the same interval as $y$ is the `local' solution, while the other corresponds to the `non-local' one. }
\label{fig:two_ints}
\end{figure}
This simple example illustrates an important point. In general, the configuration on the plane with $n$ intervals involves polynomials of degree $n$. Thus, solving explicitly for the modular flow of the fermion operator quickly becomes hopeless as we increase $n$. This becomes even more involved for other states like the cylinder or the torus. Now, since these terms will show up in the modular flow of composite operators, it would seem implausible to find any closed analytic results for those cases. However, as we will show in section \ref{sec:mod_two} with the modular two-point function, there exists a remarkable way to circumvent the need to solve \eqref{tdZ}.
\subsection{Cylinder}
\label{subsec:vacuumP}
The vacuum state for the fermion on the cylinder possesses two spin sectors, the antiperiodic (A or Neveu-Schwarz) and the periodic (P or Ramond), depending on the boundary conditions we choose for the fermion along the circle. The modular flow for the antiperiodic sector (for any number of intervals) is identical to that of the plane, provided we use the appropriate propagator~\eqref{eq:g2} and $Z(x)$ given in \eqref{eq:z23}, and therefore we will not elaborate on it. However, this behaviour changes dramatically when we consider the periodic sector.
The periodic sector on the cylinder provides an example of how the present method allows us to go beyond previous results in the literature. This case must be considered separately, since the resolvent~\eqref{eq:f3} contains an extra term, due to the presence of a zero mode. The first term of \eqref{eq:f3} is identical in form to the one derived in the previous section, namely \eqref{Sigma_t}, where again $S(x,y)$ is given by \eqref{Sxy3} and the corresponding correlator on the cylinder (P), \eqref{eq:g3}. This allows us to decompose the modular evolution as
\begin{align}\label{SigmaR}
\Sigma_t(x,y)=G(x,y) S(x,y) + \delta \Sigma_t\,,
\end{align}
where the extra term, associated to the Ramond sector, can be written in the variable $z=(1-\lambda)/\lambda$ as
\begin{align}\label{}
\delta \Sigma_t
&=\frac{1}{4\pi {\mathrm i}}\oint_\Gamma \frac{\dif{z}}{z} z^{{\mathrm i} t} \left( -z \right)^{{\mathrm i}\tilde t} \frac{(-z)^L-1}{(-z)^L+1}\,,
\end{align}
where again we defined $\tilde t=Z(x)-Z(y)$.
Using again \eqref{Sxy2}, this can be brought into the more convenient form
\begin{align}\label{}
\delta \Sigma_t&=\frac{1}{2}S(x,y) + \frac{1}{2\pi {\mathrm i}}\oint_\Gamma \frac{\dif{z}}{z} z^{{\mathrm i} t} \left( -z \right)^{{\mathrm i}\tilde t} \frac{1}{(-z)^L+1}\,.
\end{align}
Although the last factor in the integral does not possess a multiplicative branch cut, one can bring it into such a form using the identity
\begin{equation}
\label{eq:identity}
\frac{1}{1+y}=\frac{{\mathrm i}}{2}\int_{-\infty}^{\infty}\dif{s}\frac{y^{{\mathrm i} s}}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\quad\mathrm{for}\quad y\in{\mathbb{C}}\setminus{\mathbb{R}}^{-}\,,
\end{equation}
where we identify $y=(-z)^L$ for $L\in (0,1)$. Then, the same steps leading to \eqref{Sxy3} yield
\begin{align}\label{}
\delta \Sigma_t
&=\frac{1}{2}S(x,y) + \frac{\sinh(\pi t)}{L\sinh\left( \frac{\pi (t+\tilde t)}{L} \right)}\,.
\end{align}
As we discuss below, this result is quite remarkable. While the first term -- given by \eqref{Sigmasol} -- produces a local flow, the second one does not and leads to complete non-locality. Indeed, as explained in section~\ref{sec:modular-flows-overview}, any contribution to the kernel $\Sigma_t(x,y)$ that is \textit{not} localised (in the sense of being proportional to $\delta(x-y)$) will produce a discontinuity in the modular two-point function, in accordance with the KMS condition \eqref{eq:kms2}.
Finally, inserting back into \eqref{SigmaR}, \eqref{sigmasol} and \eqref{tomitapsi}, we find
\begin{align}\label{sigma_R}
\sigma_t\left( \psi^\dag(y) \right)=-2{\mathrm i} \sinh(\pi t) \sum_\ell \frac{G(x_\ell,y)+1/2}{Z'(x_\ell)}\, \psi^\dag(x_\ell) + \frac{\sinh (\pi t)}{L} \int_V \dif{x} \frac{\psi^\dag(x)}{\sinh \frac{\pi}{L}(t+\tilde t)}\,.
\end{align}
The first term is very similar to the flow on the plane or cylinder (A) of the previous section, but with the correlator shifted by a constant of $1/2$. This factor is not very significant as it can be reabsorbed into the integral; it will show up again in section \ref{subsec:mod2pt-cylinder}. Again, this flow always involves a local (geometric) term, and additional bi-local (quasi-geometric) couplings whenever $V$ contains more than one interval. In particular, the solution for the $x_\ell$ is identical to that on the antiperiodic sector, since this depends only on the function $Z(x)$, which does not depend on the periodicity. Notice that, as mentioned above, at $t=0$ all terms vanish, except the one involving the propagator.
The second piece is another important result of this work. It constitutes an example of a \textit{continuously non-local} modular evolution. The operator $\psi^\dag$, initially localised at $y$ at $t=0$, receives contributions from the entire interval as modular time evolves. The properties of the coefficient $\left( \sinh(\pi (t+\tilde t)/L) \right)^{-1}$ are again determined by those of $Z(x)$: since this function increases monotonically and diverges at the endpoints, the coefficient vanishes at $\partial V$, and has an asymptote at the solutions of \eqref{tdZ}. Just as with the bi-local flows of section \ref{subsec:vacuumflow}, the asymptote moves monotonically from $b_i$ at $t\to -\infty$ to $a_i$ at $t\to\infty$. We illustrate these features in figure \ref{fig:Ramond}, for the case of a single interval.
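These features are easily made quantitative. The small sketch below (our own illustration, with arbitrary parameters) evaluates the non-local kernel of the second term in \eqref{sigma_R} across a single interval, exhibiting the vanishing at the endpoints and the divergence at the position of the local solution:
\begin{verbatim}
# Profile of the completely non-local kernel in eq. (sigma_R) for a single
# interval (a, b): it vanishes at the endpoints and diverges at the
# position of the local solution of t + Z(x) - Z(y) = 0.
import numpy as np

a, b = 0.1, 0.6
L = b - a
y, t = 0.3, 0.5

def Z(x):                                      # eq. (eq:z23), one interval
    return np.log(-np.sin(np.pi*(a - x)) / np.sin(np.pi*(b - x))) / (2*np.pi)

x = np.linspace(a + 1e-4, b - 1e-4, 7)
kern = np.sinh(np.pi*t) / (L * np.sinh(np.pi*(t + Z(x) - Z(y)) / L))
for xi, ki in zip(x, kern):
    print(f"x = {xi:.4f}, kernel = {ki:+.4f}")
\end{verbatim}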
The corresponding modular Hamiltonian for the periodic case was first derived in \cite{Klich:2015ina}. Although the continuously non-local flow appears to have a different nature than the bi-local quasi-geometric terms, it actually results as the zero temperature limit of bi-locality on the torus, as shown in \cite{Fries:2019ozf}. We comment on this in the next subsection.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{Ramond}
\caption{Illustration of the modular flow $\sigma_t\left( \psi^\dag(y) \right)$ of eq. \eqref{sigma_R} for the periodic sector on the cylinder -- one of the novel results of this work -- for a single interval, depicted in a similar manner as figure \ref{fig:two_ints}. Again the blue dot signals the point $y$, while the red dot follows the position of the local contribution to the flow, corresponding to the zero of $Z(x)-Z(y)+t$, for $t=0$ (dashed, opaque) and $t>0$ (solid blue). The novelty is that, in addition, this case contains a completely non-local term, the second term of \eqref{sigma_R}, represented by the dot-dashed line, which is proportional to the non-local kernel. This vanishes at the endpoints and diverges at the position of the local contribution. }
\label{fig:Ramond}
\end{figure}
For multiple intervals on the periodic sector, the flow has three components: the local piece, the bi-local term (one per interval as in the plane), and the completely non-local one, which couples a given point to all intervals.
\subsection{Torus}
The resolvent on the torus possesses a qualitatively new behaviour. As shown in \cite{Fries:2019ozf}, the modular Hamiltonian involves a discrete but infinite set of bi-local couplings. The same behaviour is present for the modular flow and the modular correlator as discussed in section \ref{subsec:mod2pt-torus}. See also \cite{PhysRevD.100.025003} for related work.
The novelty here is that the propagator appearing in $F_\lambda$ (see eq. \eqref{eq:f4}) carries an explicit $\lambda$-dependence, and therefore does not factor out of the integral in \eqref{Sigmat2}, as in the previous cases. Nevertheless, we can still find an analytic solution. Below \eqref{eq:z4} we noted that the propagator appearing in this case can be re-interpreted as the correlator with a chemical potential turned on and has an associated series representation, \eqref{eq:gg2} and \eqref{eq:gg3} for each spin sector. Using this, and again essentially the same steps as in section \ref{subsec:vacuumP} for the cylinder (P), one finds
\begin{align}
\label{eq:sigma_series}
\Sigma^{(\nu)}_t&=\frac{1}{L} \frac{\sinh\left( \pi t \right)}{\sinh \left( \pi \frac{t+\tilde t}{L} \right)}\sum_{k}' {\mathrm e}^{-2\pi {\mathrm i} k\left( x-y+\beta (t+\tilde t)/L \right)}
\end{align}
where the sum runs over $k\in\mathbb Z$ for $\nu=2$ and $k\in\mathbb Z+1/2$ for $\nu=3$. Finally, using the periodic/antiperiodic representations of the Dirac distribution,
\begin{align}
\label{eq:dirac-comb}
\sum_{k\in \mathbb Z}{\mathrm e}^{-2\pi {\mathrm i} k s}=\sum_{k\in\mathbb Z} \delta (s-k)\,,\quad \sum_{k\in \mathbb Z+\frac{1}{2}}{\mathrm e}^{-2\pi {\mathrm i} k s}=\sum_{k\in\mathbb Z} (-1)^k\delta (s-k)
\end{align}
we find that the kernel for the modular operator is given, for $\nu=2,3$, by
\begin{align}\label{}
\Sigma^{(\nu)}_t=\frac{1}{L} \frac{\sinh\left( \pi t \right)}{\sinh \left( \pi \frac{t+\tilde t}{L} \right)} \sum_{k\in \mathbb Z} (-1)^{\nu k}\delta \left( x-y+\beta\frac{t+\tilde t}{L} -k\right)
\end{align}
where we have replaced $\tau={\mathrm i}\beta$ again to emphasize that the argument of the Dirac distribution is strictly real. Here again the argument of the distribution plays a fundamental role: the support of $\Sigma_t$ is located at the roots of
\begin{align}\label{tdZ_torus}
x-y+\frac{\beta}{L} \left( t+Z(x)-Z(y) \right) -k=0\,.
\end{align}
For every integer $k$ and interval $\ell$, \eqref{tdZ_torus} has a single solution, $x_{\ell,k}=x_{\ell,k}(y)$, since again $Z(x)$ is monotonically increasing from $-\infty$ to $+\infty$ within each interval. This is illustrated for a single interval in figure \ref{fig:torus}. Therefore modular flow connects any given point $y\in V$ to an infinite set of other points in $V$. Thus we can write
\begin{align}\label{}
\Sigma^{(\nu)}_t=\frac{1}{L} \frac{\sinh\left( \pi t \right)}{\sinh \left( \pi \frac{t+\tilde t}{L} \right)} \sum_{k\in \mathbb Z} (-1)^{\nu k} \sum_\ell \frac{\delta(x-x_{\ell,k})}{1+\frac{\beta}{L}Z'(x_{\ell,k})}
\end{align}
Finally, the modular flow on the torus is then
\begin{align}\label{psi_t_torus}
\sigma_t (\psi^\dag(y)) = \frac{\sinh\left( \pi t \right)}{L} \sum_\ell \sum_{k\in \mathbb Z} \frac{(-1)^{\nu k}}{\sinh \left( \frac{\pi}{L}\left( t+Z(x_{\ell,k})-Z(y) \right) \right)} \frac{\psi^\dag(x_{\ell,k})}{1+\frac{\beta}{L}Z'(x_{\ell,k})}\,.
\end{align}
This result illustrates the `infinite bi-locality', as was already understood for the modular Hamiltonian \cite{Fries:2019ozf,PhysRevD.100.025003}. As can be seen explicitly from the above equation and depicted in figure \ref{fig:torus}, the modular flow of a field localised at $y$ receives contributions from a discrete but infinite set of points within $V$, even for a single interval. As discussed in \cite{Fries:2019ozf}, this structure illuminates the behaviour at zero temperature. Indeed, when $\beta\to\infty$, the solid curve becomes a straight line with infinite slope, and the red dots in figure \ref{fig:torus} form a regular partition of the interval. For $\nu=2$, this reproduces exactly the definition of the Riemann integral, which leads to the continuous integral for the periodic sector on the cylinder as shown in \eqref{sigma_R}.
It is worth emphasizing once more how explicit \eqref{psi_t_torus} is. Indeed, whereas alternative approaches \cite{Casini:2009vk,Blanco:2019cet} use the modular Hamiltonian itself to write the modular flow in terms of a set of infinitely many coupled differential equations, our method yields at once the full solution.
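The solutions $x_{\ell,k}$ are again conveniently obtained numerically. In the sketch below (our own illustration, with arbitrary parameters) we solve \eqref{tdZ_torus} for a range of integers $k$ on a single interval; as a simplifying assumption we replace the Jacobi theta functions in $Z$ by their leading sine factors, which is accurate up to corrections of order ${\mathrm e}^{-2\pi\beta}$:
\begin{verbatim}
# Solutions of eq. (tdZ_torus) for a single interval: one root per integer
# k, since the left-hand side runs from -infinity to +infinity on (a, b).
# Simplifying assumption: theta_1 is replaced by its leading sine factor,
# accurate for moderately large beta (corrections are O(e^{-2 pi beta})).
import numpy as np
from scipy.optimize import brentq

a, b = 0.1, 0.6
L, beta = b - a, 2.0
y, t = 0.3, 0.4

def Z(x):                            # sine approximation of eq. (eq:z4)
    return np.log(-np.sin(np.pi*(a - x)) / np.sin(np.pi*(b - x))) / (2*np.pi)

def lhs(x, k):
    return x - y + (beta / L) * (t + Z(x) - Z(y)) - k

eps = 1e-9
for k in range(-3, 4):
    x_k = brentq(lhs, a + eps, b - eps, args=(k,))
    print(f"k = {k:+d}, x_k = {x_k:.6f}")
\end{verbatim}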
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{torus}
\caption{Modular flow \eqref{psi_t_torus} for a single interval on the torus. It couples an operator at $y$ to an infinite but discrete set of points (red) within the interval. These are the intersections of the solid curve with the integers, solutions to \eqref{tdZ_torus}, which accumulate at the endpoints. As modular time increases, these solutions move towards the left endpoint. The dashed opaque curve corresponds to $t=0$, while the solid one is for $t>0$. Note the non-trivial dependence of the curve on the ratio $\beta/L$, the two physical scales of the system. }
\label{fig:torus}
\end{figure}
\section{Modular two-point function}
\label{sec:mod_two}
In this section we illustrate the power of the tools laid out in section~\ref{sec:modular-flows-overview} and explicitly calculate the modular two-point function defined in eq.~\eqref{eq:mod2pt}. Following the distinction of cases from section \ref{subsec:resolvent} we may work out the different results for the plane, the cylinder with (anti-)periodic boundary conditions and finally the torus. We will explicitly calculate $G_{\text{mod}}(t)$ in the lower strip $-1<\Im(t)<0$, and later comment on the continuation to the complex plane.
On a formal level, the modular correlator is already determined by the results of the previous section. All we have to do is take the expectation value of $\sigma_t\left( \psi^\dag \right)$ with another fermionic field,
\begin{equation}
G_\mathrm{mod}(x,y;t)=\left\langle \psi(x)\,\sigma_t(\psi^\dagger(y)) \right\rangle
\end{equation}
on the global state considered, where the modular flow of the operator $\sigma_t(\psi^\dagger(y))$ has been computed for all mentioned cases in section \ref{sec:op_flow}. For instance, in the simplest case of the plane or cylinder (A) this yields
\begin{equation}\label{}
G_\mathrm{mod}(x,y;t)=-2{\mathrm i} \sinh(\pi t) \sum_\ell \frac{G(x_\ell,y)G(x,x_\ell)}{Z'(x_\ell)}\,,
\end{equation}
where again $x_\ell(y;t)$ are the solutions of \eqref{tdZ}. In practice however, this formula is not very useful. As mentioned above, the problem of finding the roots $x_\ell$ is in general very hard. Moreover, as mentioned in section \ref{sec:modular-flows-overview}, the modular correlator must be analytic in the strip $-1\leq\Im(t)\leq 0$ and satisfy the boundary condition~\eqref{eq:kms3}, implied by the KMS condition~\eqref{eq:kms}. Both properties are obscured in the above formula, because it was derived for $t\in{\mathbb{R}}$ and the $x_\ell$ depend implicitly on $t$. Therefore, one would wish for an alternative expression which makes these properties manifest.
Fortunately, such an expression exists. In the next section we compute the modular two-point function directly from the resolvent. Since all the integrals involved are convergent, this yields a final result for the modular correlator that depends on $Z(x)-Z(y)$ rather than on the roots $x_k$. Furthermore, the resulting expression can explicitly be shown to satisfy the boundary condition~\eqref{eq:kms3}.
\subsection{Plane}
\label{subsec:mod2pt-plane}
This is a rather trivial case, but the result is a useful building block for the more involved calculations in the other cases. Note that all results hold for arbitrary configurations of intervals. We want to calculate the modular two-point function \eqref{eq:mod2pt} by evaluating the variant of Cauchy's integral formula presented in \eqref{eq:cauchy}. After inserting the resolvent given in \eqref{eq:res}, \eqref{eq:f1} and \eqref{eq:z1}, we get
\begin{eqnarray}
G_\mathrm{mod}(x,y;t) & = & \oint_{\gamma}\frac{\dif{\lambda}}{2\pi{\mathrm i}}\left(\frac{1-\lambda}{\lambda}\right)^{{\mathrm i} t}\delta(x-y)\\
& & -\oint_{\gamma}\frac{\dif{\lambda}}{2\pi{\mathrm i}}\frac{1}{1-\lambda}\left(\frac{1-\lambda}{\lambda}\right)^{{\mathrm i} t} G(x,y) \left[-\frac{1-\lambda}{\lambda}\right]^{{\mathrm i}\tilde{t}}\nonumber\,,
\end{eqnarray}
where we used the shorthand $\tilde{t}=Z(x)-Z(y)$ again and $\gamma$ is a tight counter-clockwise contour around the interval $[0,1]$ as before. The first term vanishes since the integrand is analytic in the integration region. For the second term we find it convenient to substitute $z=(1-\lambda)/\lambda$ and thus we may write
\begin{equation}
\label{eq:idk}
G_\mathrm{mod}(x,y;t)=\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{z^{{\mathrm i} t}}{z\left(1+z\right)}\left(-z\right)^{{\mathrm i}\tilde{t}}G(x,y)\,,
\end{equation}
with $\Gamma$ being a tight counter-clockwise contour around ${\mathbb{R}}^+$. We note that $z^{{\mathrm i} t}$ has a branch cut on ${\mathbb{R}}^{-}$ while $\left(-z\right)^{{\mathrm i}\tilde{t}}$
has a branch cut on ${\mathbb{R}}^{+}$ which makes it impossible to
use the residue theorem. However, we can circumvent this problem with a trick that produces a common branch cut on ${\mathbb{R}}^{+}$ for the whole integrand. This goes as follows.
Assuming $-1<\Im(t)<0$, we can pull tight the integration contour and neglect the contributions at the endpoints $0$ and $\infty$. While $\Im(t)<0$ avoids the divergence at zero, $\Im(t)>-1$ ensures the vanishing of contributions at infinity. Hence, we may write
\begin{eqnarray}
G_\mathrm{mod}(x,y;t) & = & \int_{0}^{\infty}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{z^{{\mathrm i} t}}{z\left(1+z\right)}\left[\left(-z+{\mathrm i} 0^+\right)^{{\mathrm i}\tilde{t}}-\left(-z-{\mathrm i} 0^+\right)^{{\mathrm i}\tilde{t}}\right]G(x,y)\\
& = & \int_{0}^{\infty}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{z^{{\mathrm i}\left(t+\tilde{t}\right)}}{z\left(1+z\right)}\left[e^{-\pi\tilde{t}}-e^{\pi\tilde{t}}\right]G(x,y)\,.
\end{eqnarray}
Notice that we went from a complex contour integral to an
integral along ${\mathbb{R}}^+$ and from $(-z)^{{\mathrm i}\tilde{t}}$
to $z^{{\mathrm i}\tilde{t}}$ by acquiring a factor $[e^{-\pi\tilde{t}}-e^{\pi\tilde{t}}]$.
Now we use the inverse logic and go back to the contour integral
and from $z^{{\mathrm i}(t+\tilde{t})}$ to $(-z)^{{\mathrm i}(t+\tilde{t})}$
by adding a factor $[e^{-\pi(t+\tilde{t})}-e^{\pi(t+\tilde{t})}]^{-1}$. Thus, we arrive at
\begin{equation}
\label{eq:pacman}
G_\mathrm{mod}(x,y;t)= G(x,y)\frac{\sinh\left[\pi\tilde{t}\right]}{\sinh\left[\pi\left(t+\tilde{t}-{\mathrm i} 0^+\right)\right]}\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{\left(-z\right)^{{\mathrm i}\left(t+\tilde{t}\right)}}{z\left(1+z\right)}\,.
\end{equation}
where we made explicit the above assumption that $\Im(t)<0$. The remaining contour integral is solved by closing $\Gamma$ at infinity with a big circle, as illustrated in figure \ref{fig:pacman}. We only pick up the residue at $z=-1$, which is equal to one. So the final
result for the modular two-point function on the plane is
\begin{equation}
\label{eq:g-mod12}
G_\mathrm{mod}(x,y;t)= G(x,y)\frac{\sinh\left[\pi\tilde{t}\left(x,y\right)\right]}{\sinh\left[\pi\left(t+\tilde{t}\left(x,y\right)-{\mathrm i} 0^+\right)\right]}\,.
\end{equation}
This expression was expected, since it is known already, but it has, to the best of our knowledge, not yet been derived directly from the resolvent. As a first consistency check, we see that for $t=0$ we recover the original two-point function. It is also straightforward to check that it satisfies the KMS condition. Since eq.~\eqref{eq:g-mod12} is analytic for all $t \in {\mathbb{C}}$ away from the zeroes of the denominator, we do not have to care about the direction of any limits and can straightforwardly substitute $t\rightarrow t-{\mathrm i}$. Then, the $\sinh$ in the denominator switches sign and thus the entire expression switches sign. This verifies the KMS condition in the form of~\eqref{eq:kms3}, since the right hand side vanishes up to the localized contribution~\eqref{Sigmasol}.
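The sign flip can also be confirmed numerically. In the following one-line check (our own illustration) the prefactor $G(x,y)$ is dropped, since it cancels in the comparison:
\begin{verbatim}
# KMS check of eq. (eq:g-mod12): shifting t -> t - i flips the sign of the
# denominator, hence of G_mod. The prefactor G(x,y) cancels in the
# comparison and is dropped; tt stands for Z(x) - Z(y).
import numpy as np

tt = 0.37                                      # a generic value of Z(x)-Z(y)
Gmod = lambda t: np.sinh(np.pi * tt) / np.sinh(np.pi * (t + tt))

t = 0.8 - 0.2j                                 # a point in the lower strip
print(np.isclose(Gmod(t - 1j), -Gmod(t)))      # True
\end{verbatim}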
\begin{figure}[h]
\def\svgwidth{0.5\linewidth}
\centering{
\input{pacman-contour.pdf_tex}
\caption{The integral in \eqref{eq:pacman} is carried out along the contour $\Gamma$ displayed here. We can close the contour at infinity with a big circle since those contributions vanish. The only non-analyticity in the integration region is the simple pole at $z=-1$ for which we obtain the residue.}
\label{fig:pacman} }
\end{figure}
\subsection{Cylinder}
\label{subsec:mod2pt-cylinder}
Now we will take the spatial direction to be periodic with periodicity 1. For the boundary conditions there are two possibilities. In the first case of antiperiodic boundary conditions, there is no more work to be done. Recalling the details given in section~\ref{subsec:resolvent} we see that the only difference to the case of the plane lies in the definitions of the propagator \eqref{eq:g2} and the function $Z(x)$ \eqref{eq:z23}. However, neither of these definitions played a role in the calculation. Hence, the modular two-point function on the cylinder with antiperiodic boundary conditions is already given by the expression in \eqref{eq:g-mod12}.
The case of periodic boundary conditions is more complicated insofar as the resolvent \eqref{eq:f3} features an additional term. The corresponding contribution to the modular two-point function is
\begin{eqnarray}
\delta G_\mathrm{mod}(x,y;t) & = & -\oint_{\gamma}\frac{\dif\lambda}{2\pi{\mathrm i}}\frac{1}{1-\lambda}\left(\frac{1-\lambda}{\lambda}\right)^{{\mathrm i} t}\left(\frac{\lambda-1}{\lambda}\right)^{{\mathrm i}\tilde{t}}\frac{1}{2}\frac{\left(\frac{\lambda-1}{\lambda}\right)^{L}-1}{\left(\frac{\lambda-1}{\lambda}\right)^{L}+1}\\
& = & \oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{1}{z\left(1+z\right)}z^{{\mathrm i} t}\left(-z\right)^{{\mathrm i}\tilde{t}}\frac{1}{2}\frac{\left(-z\right)^{L}-1}{\left(-z\right)^{L}+1}\,,
\end{eqnarray}
where we used the same definitions for $z$, $\tilde{t}$, $\gamma$ and $\Gamma$ as in the previous subsection. We may split the expression into two terms
\begin{eqnarray}
\delta G_\mathrm{mod}(x,y;t) & = & \frac{1}{2}\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{1}{z\left(1+z\right)}z^{{\mathrm i} t}\left(-z\right)^{{\mathrm i}\tilde{t}} \label{eq:club}\\
& & -\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{1}{z\left(1+z\right)}\frac{z^{{\mathrm i} t}\left(-z\right)^{{\mathrm i}\tilde{t}}}{\left(-z\right)^{L}+1}\nonumber
\end{eqnarray}
after which we recognize the first term to be the same integral as in \eqref{eq:idk} whose solution has been given in \eqref{eq:g-mod12}. Let us refer to the second term as $\Delta G_\mathrm{mod}$. To solve it we make use of the identity \eqref{eq:identity}. It follows that
\begin{equation}
\Delta G_\mathrm{mod}=-\frac{{\mathrm i}}{2}\int_{-\infty}^{\infty}\frac{\dif{s}}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{1}{z\left(1+z\right)}z^{{\mathrm i} t}\left(-z\right)^{{\mathrm i}\left(\tilde{t}+sL\right)}\,.
\end{equation}
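Here, comparing with \eqref{eq:club}, the identity \eqref{eq:identity} enters in the form
\begin{equation}
\frac{1}{\left(-z\right)^{L}+1}=\frac{{\mathrm i}}{2}\int_{-\infty}^{\infty}\dif{s}\,\frac{\left(-z\right)^{{\mathrm i} sL}}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\,,
\end{equation}
which can be checked from the Fourier transform of $1/\sinh$.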
Again, the contour integral is of the same type as in \eqref{eq:idk}, which leaves us with the integral over $s$:
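\begin{equation}
\Delta G_\mathrm{mod}=-\frac{{\mathrm i}}{2}\int_{-\infty}^{\infty}\frac{\dif{s}}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\,\frac{\sinh\left[\pi\left(\tilde{t}+sL\right)\right]}{\sinh\left[\pi\left(t+\tilde{t}+sL\right)\right]}\,.
\end{equation}
We find it convenient to substitute $w={\mathrm e}^{\pi s}$ and arrive at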
\begin{equation}
\label{eq:spade}
\Delta G_\mathrm{mod}=\frac{{\mathrm e}^{-\pi t}}{\pi{\mathrm i}}\int_{0}^{\infty}\frac{\dif{w}}{w^{2}-1+{\mathrm i} 0^+}\cdot\frac{w^{2L}-{\mathrm e}^{-2\pi\tilde{t}}}{w^{2L}-{\mathrm e}^{-2\pi\left(t+\tilde{t}\right)}}\,.
\end{equation}
Our further solution is restricted to the special case $L=\nicefrac{1}{n}$, $n\in{\mathbb{N}}\setminus\{1\}$. Under this restriction we are able to separate the integrand into partial fractions, which results in a sum of simple integrals. To do so, we continue with the substitution $v=w^{\nicefrac{1}{n}}$, yielding
\begin{equation}
\Delta G_\mathrm{mod}^{L=\nicefrac{1}{n}}=\frac{n\,{\mathrm e}^{-\pi t}}{\pi\,{\mathrm i}}\int_{0}^{\infty}\dif{v}\frac{v^{n-1}\left(v^{2}-{\mathrm e}^{-2\pi\tilde{ t}}\right)}{\left(v^{2n}-1+{\mathrm i} 0^+\right)\left(v^{2}-{\mathrm e}^{-2\pi\left( t+\tilde{ t}\right)}\right)}\,.
\end{equation}
A decomposition into partial fractions gives
\begin{eqnarray}
\label{eq:partial-frac}
\Delta G_\mathrm{mod}^{L=\nicefrac{1}{n}} & = & \frac{1}{2\pi{\mathrm i}}\int_{0}^{\infty}\dif{v}\left\{ \frac{\sinh\left[\pi\tilde{ t}\right]}{\sinh\left[\pi\left( t+\tilde{ t}\right)\right]}\cdot\frac{1}{v-1+{\mathrm i} 0^+}\right.\\
& & +\sum_{k=1}^{2n-1}\left(-1\right)^{k}\frac{\sinh\left[\pi\left(\tilde{ t}+{\mathrm i}\frac{k}{n}\right)\right]}{\sinh\left[\pi\left( t+\tilde{ t}+{\mathrm i}\frac{k}{n}\right)\right]}\cdot\frac{1}{v-{\mathrm e}^{{\mathrm i}\pi\frac{k}{n}}}\nonumber \\
& & +\left.n\frac{\sinh\left[\pi t\right]}{\sinh\left[n\,\pi\left(t+\tilde{t}\right)\right]}\left[\frac{1}{v-{\mathrm e}^{-\pi\left(t+\tilde{t}\right)}-{\mathrm i} 0^+}+\frac{\left(-1\right)^{n-2}}{v+{\mathrm e}^{-\pi\left( t+\tilde{ t}\right)}}\right]\right\}\,.\nonumber
\end{eqnarray}
In the third line we made the negative imaginary part of $t$ explicit as a prescription for how to avoid the pole. Although all terms diverge separately, the divergences are guaranteed to cancel in total, since the original integral is finite. Importantly, note that the boundary term of the first summand cancels exactly with the first term of $\delta G_\mathrm{mod}$ in \eqref{eq:club}.
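The finite parts can be made fully explicit: dropping the common divergence at $v\rightarrow\infty$, each simple pole integrates to the logarithm of (minus) its location,
\begin{equation}
\int_{0}^{\infty}\frac{\dif{v}}{v-a}\;\longrightarrow\;-\ln\left(-a\right)\,,
\end{equation}
so that, for instance, the poles at $a={\mathrm e}^{{\mathrm i}\pi\frac{k}{n}}$ contribute ${\mathrm i}\pi\left(1-\frac{k}{n}\right)$, which after division by $2\pi{\mathrm i}$ yields the coefficients $\frac{1}{2}\left(1-\frac{k}{n}\right)$ below. Thus, our final result for the modular two-point function on the cylinder with periodic boundary conditions is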
\begin{equation}
\label{eq:g-mod-cyl}
G_\mathrm{mod}^{L=\nicefrac{1}{n}}(x,y;t)=
G(x,y)\frac{\sinh\left[\pi\tilde{t}\right]}{\sinh\left[\pi\left(t+\tilde{t}\right)\right]}
+\delta G_\mathrm{mod}^{L=\nicefrac{1}{n}}(x,y;t)
\end{equation}
with
\begin{eqnarray}
\label{eq:g-mod3}
\delta G_\mathrm{mod}^{L=\nicefrac{1}{n}}(x,y;t) & = & \frac{1}{2}\sum_{k=1}^{2n-1}\left(1-\frac{k}{n}\right)\left(-1\right)^{k}\frac{\sinh\left[\pi\left(\tilde{ t}+{\mathrm i}\frac{k}{n}\right)\right]}{\sinh\left[\pi\left( t+\tilde{ t}+{\mathrm i}\frac{k}{n}\right)\right]}\\
& & +\frac{n}{2}\frac{\sinh\left[\pi t\right]}{\sinh\left[n\,\pi\left( t+\tilde{ t}\right)\right]}\left[1-\left(1+\left(-1\right)^{n-2}\right){\mathrm i}\left( t+\tilde{ t}\right)\right]\,.\nonumber
\end{eqnarray}
It is straightforward to check that $\delta G_\mathrm{mod}$ vanishes for $t=0$. This is as expected, since we should recover the usual propagator. To see what happens to the KMS condition, we again insert $t\rightarrow t-{\mathrm i}$. The first term clearly switches sign and hence does not contribute to the left hand side of eq.~\eqref{eq:kms3}. For the second term, we have to consider the cases of odd and even $n$ separately. In both cases, we end up with
\begin{align}\label{eq:jump}
\delta G_\mathrm{mod}^{L=\nicefrac{1}{n}}(x,y;t-{\mathrm i} 0^+) + \delta G_\mathrm{mod}^{L=\nicefrac{1}{n}}(x,y;t-{\mathrm i}+{\mathrm i} 0^+)= \frac{n \sinh\left[\pi t\right]}{\sinh\left[n\,\pi\left( t+\tilde{ t}\right)\right]},
\end{align}
reproducing exactly the non-local extra term from~\eqref{sigma_R}. To the best of our knowledge, this is the first time that such a non-local term has been explicitly derived in the literature in the context of modular two-point functions.
\subsection{Torus}
\label{subsec:mod2pt-torus}
Finally, we turn to the modular two-point function on the torus, i.e. we will deal with periodicity 1 in the spatial direction and with periodicity $\tau={\mathrm i}\beta$ in the time direction. Following the labelling in section~\ref{subsec:resolvent}, the boundary conditions will be denoted by $\nu=2,3$. To begin with, the same recipe as in the previous subsections can be applied. We take the necessary information about the resolvent and the propagator from the third key point in section~\ref{subsec:resolvent}. Notably, we can (and will) make use of \eqref{eq:identity} once again.
Let us demonstrate the procedure for $\nu=2$ and then comment on the difference for $\nu=3$. Combining all things mentioned (again with $z$, $\tilde{t}$ and $\Gamma$ defined as before) we have
\begin{equation}
G_\mathrm{mod}(x,y;t)=\frac{{\mathrm i}}{2}\int_{-\infty}^{\infty}\dif{s}\frac{\sum_{k\in{\mathbb{Z}}}'{\mathrm e}^{-2\pi{\mathrm i} k(x-y-{\mathrm i} 0^++\beta s)}}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\oint_{\Gamma}\frac{\dif{z}}{2\pi{\mathrm i}}\frac{z^{{\mathrm i} t}}{z\left(1+z\right)}\left(-z\right)^{{\mathrm i}\left(\tilde{t}-Ls\right)}\,,
\end{equation}
which allows a number of simplifications. The contour integral over $z$ on the right might be an old acquaintance by now. It is of the same type as the one in \eqref{eq:idk}, which was solved by \eqref{eq:g-mod12}. The sum over $k$ on the left can be recognized as a Dirac comb with periodicity~1, based on the identity
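\begin{equation}
\sum_{k\in{\mathbb{Z}}}{\mathrm e}^{-2\pi{\mathrm i} ku}=\sum_{n\in{\mathbb{Z}}}\delta\left(u-n\right)\,,
\end{equation}
here applied with $u=x-y+\beta s$ (the ${\mathrm i} 0^+$ merely regulates the sum, and the primes are simply carried along); the factor $1/\beta$ below stems from $\delta\left(\beta s+x-y-n\right)=\frac{1}{\beta}\delta\left(s+\frac{x-y-n}{\beta}\right)$. Putting these changes into place results in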
\begin{eqnarray}
G_\mathrm{mod}(x,y;t) & = &
\frac{{\mathrm i}}{2\beta}\int_{-\infty}^{\infty}\dif{s}\frac{\sum_{n\in{\mathbb{Z}}}'\delta\left(s+\frac{x-y-n}{\beta}\right)}{\sinh\left(\pi s+{\mathrm i} 0^+\right)}\cdot\frac{\sinh\left[\pi\left(\tilde{t}-Ls\right)\right]}{\sinh\left[\pi\left(t+\tilde{t}-Ls\right)\right]} \\
& = & \frac{{\mathrm i}}{2\beta}\sum_{n\in{\mathbb{Z}}}'\frac{1}{\sinh\left(-\pi\frac{x-y-n}{\beta}+{\mathrm i} 0^+\right)}\cdot\frac{\sinh\left[\pi\left(\tilde{t}+L\frac{x-y-n}{\beta}\right)\right]}{\sinh\left[\pi\left(t+\tilde{t}+L\frac{x-y-n}{\beta}\right)\right]}\,, \label{eq:g-mod-torus}
\end{eqnarray}
which is our final result for the modular two-point function on the torus with PA boundary conditions ($\nu=2$). To see the slight difference for AA boundary conditions ($\nu=3$), it suffices to compare eqs.~\eqref{eq:gg2} and \eqref{eq:gg3}. The sum is now over $k\in{\mathbb{Z}}+1/2$ which produces an additional factor $(-1)^n$ in the final result.
For $t=0$, the second factor in \eqref{eq:g-mod-torus} cancels and we can trace the steps back to $G(x,y)$, as expected. To check the KMS condition, we replace $t\rightarrow t-{\mathrm i}$ and confirm that the expected sign change is produced by the $\sinh$ in the denominator of the second factor, up to the isolated poles which yield local contributions on the right hand side of~\eqref{eq:kms3}.
\subsection{Analytic structure}
\label{subsec:mod2pt-poles}
As a key feature of two-point functions we deem it instructive to analyze the pole structure of the above results. This will give insights into the non-locality of modular flow, since it causes couplings between multiple points or even entire regions.
We illustrate the analytic structure of $G_{\text{mod}}(x,y,t)$ for $t\in\mathbb C$, for some fixed $x,y\in V$. As anticipated in section \ref{sec:modular-flows-overview}, the presence of poles and cuts gives information about whether or not the modular evolution is local. First, we know that $G_{\text{mod}}(x,y,t)$ possesses simple poles along the real axis, located at the solutions to \eqref{tdZ} and \eqref{tdZ_torus} for the plane/cylinder and the torus, respectively. Due to the definition of the modular correlator and the KMS condition \eqref{eq:kms2}, we also know that the function is antiperiodic in imaginary modular time. The precise values of $x,y$ -- and therefore the poles -- will not be important for the discussion.
\paragraph{Plane or cylinder (A).} The modular two-point function in the plane or on the cylinder with antiperiodic boundary conditions was determined in \eqref{eq:g-mod12}. Given fixed $x,y$, the poles are the values of $t$ which satisfy
\begin{equation}\label{eq:sinhtdZ}
\sinh \left[ \pi\left( t+Z(x)-Z(y) \right)\right]=0\,.
\end{equation}
Notice that, since $Z(x)\in\mathbb R$ for $x\in V$ (and similarly for $y$), the solutions of this equation lie at $\Im(t)\in \mathbb Z$. In figure \ref{fig:Gmod-plane} we illustrate the structure of the modular two-point function when continued to the complex plane. The sketch can be easily understood with the aid of figure \ref{fig:two_ints}. As portrayed there, if we fix $y$ and evolve in $t$, the values of $x$ that satisfy \eqref{eq:sinhtdZ} move, and sweep the entirety of $V$ as $t$ varies in $(-\infty,\infty)$. In other words, for fixed $x,y\in V$ there exists a unique value of $t\in\mathbb R$ that solves \eqref{eq:sinhtdZ}. This is a pole of $G_{\text{mod}}$, depicted as a black dot in figure \ref{fig:Gmod-plane}, which gets repeated due to antiperiodicity.
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{Gmod_plane.pdf_tex}
\caption{ Sketch of the analytic structure of the modular correlator $G_{\text{mod}}(x,y,t)$ \eqref{eq:g-mod12} as a function of complex modular time $t$, for fixed $x,y$ belonging to a single interval on the plane or cylinder (A). The function is analytic everywhere except at its simple poles (black dots), their specific location depending on $x,y$. Since the modular flow in this case is local, $G_{\text{mod}}(t)$ possesses no branch cuts. For multiple intervals, the location of the poles is shifted, but no new poles arise. The KMS condition ensures that $G_{\text{mod}}$ is antiperiodic in imaginary time. }
\label{fig:Gmod-plane} }
\end{figure}
\paragraph{Cylinder (P).} On the cylinder with periodic boundary conditions we have the same situation as above, plus the contributions coming from the additional term $\delta G_{\text{mod}}$ in \eqref{eq:g-mod3}. This is the novel ingredient in this result. As explained around \eqref{eq:jump}, this contribution induces a discontinuity along $\Im(t)\in\mathbb Z$, as depicted in figure \ref{fig:Gmod-R}. At first sight, the appearance of a cut might seem surprising, due to its absence in the antiperiodic sector. However, this will be clarified shortly once we discuss the case on the torus and consider the limit of zero temperature.
\begin{comment}
For simplicity let us focus on the special case $L=\nicefrac{1}{2}$, where we have
\begin{equation}\label{dGmodP}
\delta G_\mathrm{mod}^{L=\nicefrac{1}{2}}(x,y;t)=\frac{\sinh\left[\pi t\right]}{\sinh\left[\pi\left( t+\tilde{t}\right)\right]\cosh\left[\pi\left( t+\tilde{t}\right)\right]}\left[\frac{1}{2}-{\mathrm i}\left( t+\tilde{t}\right)\right]\,.
\end{equation}
The fraction on the left has the same pole structure as the main part, so this does not pose a qualitative difference. Much more interesting is the factor on the right which comes with a `naked' $\tilde{t}$, \textit{i.e.} outside the argument of a hyperbolic function. As discussed in section \ref{subsec:resolvent}, $\tilde{t}=Z(x)-Z(y)$ possesses a logarithmic branch cut over $V$ by construction.
\end{comment}
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{Gmod_R.pdf_tex}
\caption{ Sketch of $G_{\text{mod}}(x,y,t)$ \eqref{eq:g-mod-cyl}, for fixed $x,y$ belonging to a single interval on the plane or cylinder (P). The function is analytic in the interior of each strip and antiperiodic amongst strips due to the KMS condition. Just as in the plane, it possesses simple poles (black dots) along $\Im(t)\in \mathbb Z$. However, the novelty here is that in addition it has branch cuts, indicating that the modular flow is completely non-local as explained around \eqref{eq:kms3}. }
\label{fig:Gmod-R} }
\end{figure}
As a remark, notice that there is no cut at $t\in {\mathrm i}\,\mathbb Z$, since by definition the modular two-point function coincides with the propagator at $t=0$.
\paragraph{Torus.} To analyze the pole structure of the modular two-point function for the torus with PA boundary conditions, we reconsider our result \eqref{eq:g-mod-torus}. The first factor inside the sum, although it seems obscure, represents the usual propagator $G(x,y)$ and is not important for our discussion. The second factor determines the interesting poles as the solutions to
\begin{equation}
\sinh\left[ \pi\left( t+Z(x)-Z(y)-\frac{L}{\beta}\left( y-x+k \right) \right)\right]=0\,,\qquad k\in\mathbb Z\,.
\end{equation}
Since we already know the effect that a composition of multiple disjoint intervals has, let us focus on the case of a single interval. Even then there are countably infinitely many solutions for $t$, namely one for each $k$, spaced by $L/\beta$. Hence, the modular two-point function on the torus (PA) shows a non-locality that consists of infinitely many bi-local couplings.
\begin{figure}[h]
\def\svgwidth{.7\linewidth}
\centering{
\input{Gmod_torus.pdf_tex}
\caption{ Sketch of the analytic structure of the modular correlator $G_{\text{mod}}(x,y,t)$ \eqref{eq:g-mod-torus} for fixed $x,y$ belonging to a single interval on the torus. The function is analytic everywhere except at simple poles (black dots), located at the solutions to \eqref{tdZ_torus}, which lie on a lattice of spacings $i$ and $L/\beta$ respectively. In the limit of low temperature $\beta\to\infty$, the real component of the spacing between poles vanishes. This behaviour is the origin of the branch cut in the case of the cylinder (P), figure \ref{fig:Gmod-R}. }
\label{fig:Gmod-torus} }
\end{figure}
In the low-temperature limit $\beta/L\rightarrow\infty$ we note that the poles move closer to each other until they finally form a branch cut, precisely the situation that we found for the cylinder (P). This is consistent with the interpretation of the cylinder as the low-temperature limit of the torus. By the same line of argument, we expect this infinite set of poles not to appear for the torus with AA boundary conditions. Indeed, the factor $(-1)^n$ in the sum of \eqref{eq:g-mod-torus} causes the poles to cancel each other, in a way completely analogous to the modular Hamiltonian \cite{Fries:2019ozf}.
In order to understand how a cut might originate from an infinite set of simple poles, consider the following example:
\begin{equation}
f(x)=\int_{a}^{b} \dif{\lambda}\,\frac{1}{x-\lambda}=\log\left(\frac{a-x}{b-x}\right)\,.
\end{equation}
Here, we produced a function with a branch cut in the interval $[a,b]$ by `summing' over infinitely many simple poles between $a$ and $b$. This is precisely what happens in the low temperature limit of the torus, where the lattice of simple poles creates a branch cut in the modular two-point function. In turn, this implies that for any fixed modular time $t$, the modular flow on the cylinder couples a given point $y$ to the entirety of the region $V$.
\subsection{Motivation and New Trends}\label{sect:Motivation/Introduction}
With the emergence of cloud computing, cloud providers tend to virtualize a range of telecom services by spreading the cloud computing technology toward end users and delivering mobile users' connectivity as a cloud service. In this context, the authors of \cite{6616110} propose the "Follow Me Cloud" (FMC) concept, which allows services to migrate and seamlessly follow the users' mobility.
Accordingly, services are delivered from the storage and computing locations that best suit the current placement of users as well as the present state of the networks. The main idea of FMC is that services follow the users throughout their movements. The "follow-me cloud" approach may be achieved through various mechanisms. One of the key technologies is virtualization, which makes it possible to conveniently move a Virtual Machine (VM) from one host to another without turning it off; this offers dynamic VM placement optimization with negligible impact on performance. In the Network Function Virtualization (NFV) context, we may add or remove virtual machine instances at any moment, in an unexpected manner, depending on the service chaining associated with the customer profiles. This dynamics, despite its numerous benefits, may result in sub-optimal or unstable configurations of the virtual networks. Unfortunately, most current research works overlook this dynamicity of VM placement requests, which is often managed by cloud infrastructure controllers. Moreover, VMs may experience some fluctuations in their resource utilization (e.g., a mobile application server and a web server may possess identical patterns of incoming workload while using the same CPU). \\
However, it is crucial to be able to specify the application requirements of virtual machine placement and to know the status of each server in advance, so as to properly define the constraints required for modeling a reasonable configuration. It is also highly important to identify which services are stateless and which ones are stateful in order to ensure good performance. Hence, the critical issue lies in finding the optimal VM allocation and packing that reduces the number of physical nodes, so that the datacenter (DC) administrator can shut down the idle nodes, while diminishing the number of migrations, meeting the Service Level Agreement (SLA) defined with users, and avoiding non-viable placements that could lead to performance degradation. This paper first presents the major issues relevant to the placement of VMs in cloud environments. It then presents the existing optimization approaches and classifies them according to their objective functions.
\subsection{Cloud Computing: on-Demand Networking}\label{sect:Motivation-Introduction}
Cloud computing enables users to consume on-demand computing resources, such as storage, servers, applications and networks, as instances (VMs) instead of building physical infrastructure. These resources may be swiftly provisioned and managed with minimal effort
by cloud providers. Cloud computing provides numerous appealing advantages for companies and end consumers, such as on-demand provisioning of virtual resources, self-service ability, high elasticity, flexibility, scalability, broad network access and resource pooling. Virtualization technologies allow users to package their required computing resources into VMs. A virtual machine placement algorithm is used to define the locations of these VMs at suitable computing centers. As will be discussed in Section II, many issues are considered for the placement of VMs: energy consumption, cost, SLAs, and load. For each issue, different solutions are proposed across several survey and research articles, each treating the problem in its own way and proposing different techniques based on various algorithms, such as deterministic, heuristic or approximation algorithms. Our paper aims at finding the optimal and most effective Virtual Machine Placement (VMP) approach among all existing solutions $i)$ by determining the different problems that may arise in VMP and $ii)$ by classifying the proposed solutions according to their adopted approach/objective. \\
This survey is organized as follows. Section II introduces issues relevant to VMP. The existing VMP solutions are then classified into five objective functions: 1) energy consumption minimization, 2) cost optimization, 3) network traffic minimization, 4) resource utilization, and 5) performance maximization. Each class of VMP solutions is introduced in a separate section (i.e., Sections III - VII). In Section VIII, some concluding remarks are drawn and open research problems are highlighted.
\section{Virtual Machine Placement}\label{sect:II}
\subsection{Placement Issues}\label{sect:Virtual Machine Placement Problem}
VMP is an important problem in cloud computing. It concerns the mapping between physical and virtual machines with the objective of maximizing the usage of available resources \cite{Hyser2007AutonomicVM}. Indeed, VMP is known as the procedure of choosing the most appropriate host where the virtual machine needs to be deployed. The ability to migrate a VM from one physical host to another has made it possible to explore different strategies of VMP according to the constraints expressed in the SLA to match different workloads \cite{Calcavecchia:2012:VPS:2353730.2353807}\cite{5990687}.\\
Large-scale cloud systems incur high costs for cloud owners, particularly in terms of energy consumption: data centers are still the major contributor to the global CO$_2$ emissions of IT services \cite{6477661}. Furthermore, inefficient usage of computing resources (i.e., CPU, memory, storage and bandwidth) may result in high energy consumption. Besides, unneeded virtual machine migrations cause additional management cost \cite{Fu2015} due to VM reconfiguration, destruction or creation of VMs and on-line VM migration, which further generate high energy consumption \cite{Hwang:2013:HVM:2514940.2515012}. Another issue pertains to performance degradation: since aggressive VM consolidation can lead to performance degradation, one has to strike a balance in the utilization of available resources to avoid it \cite{6753800}. In this vein, several papers \cite{Cao2014}, \cite{Zhang2014} focus on dealing with the compromise of minimizing the energy consumption and achieving high performance while maintaining a low level of SLA violations.\\
Along with the prominence of cloud-based services, cloud data centers are affected by both the spatio-temporal variation of end users' demands and the restricted available resources. Hence, the problem resides in how data centers can satisfy a large number of requests for virtual resources under limitations on both computation resources and link capacities \cite{6710563} \cite{6973776}. In the same regard, resource wastage can be caused when multiple VMs are unnecessarily launched on a large number of PMs. Furthermore, the unbalanced exploitation of physical machines introduces a waste of computing resources, and this may impact the placement of VMs \cite{6701467} \cite{5724828} \cite{GAO20131230}. \\
In a cloud computing environment, the data centers are placed while maintaining certain geographic or rational distances among them. This may cause critical issues related to data transfer time and network traffic between data centers \cite{5662521}\cite{6168391}. The problem of virtual machine placement under a large number of demanding constraints is studied in \cite{ILKHECHI2015508}, where A.R. Ilkhechi et al. focus their research on network-aware VMP with multiple traffic-intensive components. This solution will be introduced with further details in Section V. In the same way, numerous applications running in the cloud demand high networking resources, such as intense bandwidth requirements. In this regard, R. Cohen et al. \cite{6566794} focus on the VM placement problem and propose efficient ways of assigning VMs of bandwidth-intensive applications to data centers. In \cite{6466665}, D.S. Dias et al. also consider traffic congestion, connection disruption and network links in their proposed VMP algorithms. The network scalability of modern data centers is another critical issue of VMP \cite{5461930}.\\
The demand for cloud services has significantly increased along with advances in the IT industry \cite{6253575}. Data centers, which house all hardware equipment (e.g., servers, network devices, power and cooling systems) and support online applications \cite{FANG2013179}, require a tremendous amount of energy to process the data of hosted services, resulting in huge operational costs \cite{LI20131222}\cite{5763426}\cite{5724828}. Furthermore, the electricity cost incurred by large-scale data centers is enormous, since cloud service providers frequently pay for the energy consumed and the power used \cite{6114417}. W. Shi et al. \cite{6701467} concentrate on the interplay of performance, revenue and energy cost: the higher the performance levels supplied, the higher the revenues and therefore the higher the energy costs. To cope with this issue, they develop a Multi-level Generalized Assignment algorithm for augmenting the revenue under a limited power budget and SLAs (Section IV). The impact of VMP on other metrics relevant to availability, Quality of Service (QoS), and resource interference is also investigated in other research works such as \cite{6332041}\cite{6622863}\cite{6297100}.
\subsection{Classification of existing VMP solutions}\label{sect:VMP-classification}
This section presents a global classification of the various VMP optimization approaches proposed in the literature. On one hand, the identified optimization alternatives can be divided into two categories: mono-objective or multi-objective optimization problems (Table I). On the other hand, the selection of a possible VMP solution depends on several criteria, as discussed earlier in subsection II-A, which vary according to several conditions and constraints. This results in a heterogeneity of possible formulations to cope with the virtual machine placement problem. Taking into account this diversity of criteria, we categorize the VMP solutions by purpose, i.e. those having the same goal, ranking them into five objective function groups as shown in Table II.\\
\renewcommand{\arraystretch}{2}
{\setlength{\tabcolsep}{0.5cm}
\begin{table*}[!h]
\begin{center}
\begin{tabular}{|p{4cm}|p{10cm}|}
\hline
\cellcolor{lightgray}Optimization Approach & \cellcolor{lightgray}References \\
\hline
Mono-Objective & \cite{6503614}, \cite{Cao2014}, \cite{Fu2015}, \cite{Tang2015}, \cite{6799695}, \cite{DBLP:journals/corr/abs-1011-5064}, \cite{5488431}, \cite{6296853}, \cite{6726449}, \cite{7387769}, \cite{LI20131222}, \cite{6221099}, \cite{6477661}, \cite{6575260}, \cite{5961722}, \cite{5578331}, \cite{6332041}, \cite{6710563}, \cite{doi:10.1080/10798587.2016.1152775}, \cite{6123491}, \cite{5394134}, \cite{Hyser2007AutonomicVM}, \cite{6848063}, \cite{6566794}, \cite{6466665}, \cite{5990687}, \cite{Ortigoza2016DynamicEF}
\\
\hline
Multi-Objective & \cite{6253575}, \cite{Hwang:2013:HVM:2514940.2515012}, \cite{5724828}, \cite{6296866}, \cite{6809418}, \cite{6679894}, \cite{GAO20131230}, \cite{5071526}, \cite{ZHENG201695}, \cite{7416960}, \cite{6701467}, \cite{Song2014}, \cite{6114417}, \cite{7179432}, \cite{ILKHECHI2015508}, \cite{5662521}, \cite{FANG2013179}, \cite{6297100}, \cite{6952725}, \cite{Pires:2013:MVM:2588611.2588692}, \cite{Anand:2013:VMP:2568486.2568500}, \cite{6753800}, \cite{6973776}, \cite{6168391}, \cite{Kakadia:2013:NVM:2534695.2534702}, \cite{DONG201462}, \cite{TORDSSON2012358}, \cite{iet:/content/books/10.1049/pbte070e_ch4}
\\
\hline
\end{tabular}
\caption{Global Classification of VMP solutions.}
\end{center}
\end{table*}
\begin{table*}[!h]
\begin{center}
\begin{tabular}{|p{4cm}|p{10cm}|}
\hline
\cellcolor{lightgray}Optimization objective & \cellcolor{lightgray}References \\
\hline
Energy Consumption Minimization & \cite{6503614}, \cite{6253575}, \cite{Cao2014}, \cite{Fu2015}, \cite{Tang2015}, \cite{6799695}, \cite{6726449}, \cite{LI20131222}, \cite{Anand:2013:VMP:2568486.2568500}, \cite{6296866}, \cite{DBLP:journals/corr/abs-1011-5064}, \cite{6809418},
\cite{6753800}, \cite{6477661}, \cite{GAO20131230}, \cite{6973776}, \cite{6575260}, \cite{6679894},
\cite{6296853}, \cite{5724828}, \cite{6221099}, \cite{5071526}, \cite{ZHENG201695}, \cite{Hwang:2013:HVM:2514940.2515012}, \cite{5488431}, \cite{Pires:2013:MVM:2588611.2588692}, \cite{100000}
\\
\hline
Cost Optimization & \cite{6253575}, \cite{6114417}, \cite{FANG2013179}, \cite{6701467}, \cite{6123491}, \cite{Pires:2013:MVM:2588611.2588692}, \cite{7179432}, \cite{5394134}, \cite{Hyser2007AutonomicVM}, \cite{5724828}, \cite{6952725}
\\
\hline
Network Traffic Minimization & \cite{5461930}, \cite{6753800}, \cite{6848063}, \cite{ILKHECHI2015508}, \cite{6566794}, \cite{6466665}, \cite{6296866}, \cite{6809418}, \cite{FANG2013179}, \cite{5662521}, \cite{6168391}, \cite{Kakadia:2013:NVM:2534695.2534702},
\cite{DONG201462}, \cite{6679894}, \cite{Pires:2013:MVM:2588611.2588692}, \cite{6297100}, \cite{6952725}, \cite{iet:/content/books/10.1049/pbte070e_ch4}, \cite{7461481}, [71]
\\
\hline
Resource Utilization & \cite{Hwang:2013:HVM:2514940.2515012}, \cite{Song2014}, \cite{6973776}, \cite{6701467}, \cite{GAO20131230}, \cite{Ortigoza2016DynamicEF},
\cite{5724828}, \cite{5990687}, \cite{6710563}, \cite{7179432}, \cite{6168391}, \cite{ZHENG201695}
\\
\hline
QoS Maximization & \cite{Anand:2013:VMP:2568486.2568500}, \cite{5578331}, \cite{5961722}, \cite{6332041}, \cite{6297100}, \cite{TORDSSON2012358}, \cite{5071526}, \cite{ILKHECHI2015508}, \cite{5662521}, \cite{Kakadia:2013:NVM:2534695.2534702}
\cite{doi:10.1080/10798587.2016.1152775}, \cite{iet:/content/books/10.1049/pbte070e_ch4}, \cite{DONG201462}, \cite{7387769}, \cite{7416960}, \cite{7414158}
\\
\hline
\end{tabular}
\caption{Objective function-based classification of VMP solutions.}
\end{center}
\end{table*}
The technical solutions to solve VMP problems through objective functions are classified into heuristics, meta-heuristics, deterministic, and approximation algorithms. The
mentioned solutions along with their objectives/approaches are detailed in Table III.
\section{Energy Consumption Minimization}\label{sect: III}
The majority of VMP algorithms focus on minimizing the energy consumption. Despite the fact that they target the same objective, each VMP solution adopts a different approach; i.e., some minimize the DC power, others reduce the number of PMs turned on, and others focus on minimizing the network power consumption. Each approach is analyzed separately in the following subsections.
\subsection{Reducing the Energy Consumption}\label{sect:Energy consumption minimization}
Recently, minimizing the energy consumption in data centers has become the main focus of cloud providers. Most existing VMP schemes aim to optimize the utilization of either physical server resources or network resources. In contrast, D. Huang et al. \cite{6503614} propose an energy-aware scheme for VMP in DCs that jointly addresses the energy consumption of both network and servers, taking into account the challenges encountered in terms of server capacity or multi-layered dependencies of applications. The proposed approach is treated as a multi-criteria optimization problem allowing to simultaneously reduce the total wastage of resources (maximizing the utilization of PMs) and the communication costs. A fuzzy-logic algorithm is recommended to handle this issue, which provides an efficient way to balance the conflicting goals. Simulation results show the effectiveness of the suggested scheme in terms of energy saving, compared to existing methods such as the bin packing algorithm and random placement. In the same way, M. Tang et al. \cite{Tang2015} propose a genetic algorithm that accounts for the energy consumption of both the communication network and the physical servers in a DC. To enhance the efficiency and performance of their approach, they then present a Hybrid Genetic Algorithm (HGA), i.e. a memetic algorithm, by incorporating: (1) a solution repair procedure which transforms infeasible solutions into feasible ones and resolves all constraint violations (such as CPU overload); (2) a local optimization approach that employs a heuristic mechanism to minimize the number of PMs in the VMP, thereby reducing the energy consumed on the PMs.\\
In the same context of cutting down the total energy consumption, the authors in \cite{6253575} suggest placing numerous VM clusters on diverse servers and thus spreading the arriving applications over these VM replicas, which leads to increased service reliability. Hadi et al. \cite{6253575} propose operating all these copies to serve the incoming demands. The resulting problem is a complex resource allocation problem named MERA ("Multi-dimensional Energy-efficient Resource Allocation"). Hence, a heuristic algorithm is suggested to solve this issue by using Local Search (LS) and Dynamic Programming (DP) methods, where LS attempts to reduce the energy cost by shutting down the underutilized servers while DP initially identifies the number of VM clones to be placed on servers. On the other side, providing a high quality of experience also requires cloud providers to minimize the energy consumption with few Service Level Agreement violations in the DC. To accomplish this goal, Z. Cao et al. \cite{Cao2014} propose a heuristic algorithm based on a tradeoff between low energy and high performance to consolidate VMs in cloud data centers. This framework is based on two important contributions: the first classifies host overload in terms of SLA violation into two types, OverS (Over SLA violation) and OverNS, through an SLA violation decision algorithm (SLAVDA), whereas the second proposes a "minimum power and maximum utilization" (MPMU) policy to find a convenient placement for VM migration by extending the minimum power (MP) policy. According to simulation results with the CloudSim toolkit, this framework accomplishes a better energy-performance tradeoff: compared to earlier works, it reduces energy consumption by 21-34\%, the energy-performance metric by 90\% and SLA violations by 84-92\%.\\
Dynamic consolidation is an efficient technique to minimize the energy consumption and enhance physical resource utilization. The problem resides in migrating VMs from overloaded hosts, which causes extra energy consumption and SLA violations and affects the migration time. To attract users, cloud providers should achieve a high quality of service at the lowest cost by simultaneously reducing the energy consumption and meeting SLAs. The main objective is to evaluate energy-efficiency policies based on SLA violation, energy consumption and VM migration time. The complication arises when the CPU utilization exceeds the thresholds, at which point VM selection and VM placement procedures must be invoked. On the one hand, Xiong et al. \cite{Fu2015} propose a novel VM selection policy named Meets Performance (MP) which takes into account the resource satisfaction intensity in order to accomplish the three aspects mentioned previously. On the other hand, a VM placement technique denominated MCC (Minimum Correlation Coefficient) is introduced to find the suitable host featuring the minimum correlation coefficient where the migrated VM should be placed, since this helps avert performance deterioration in other VMs. Simulation results show that the suggested policies (MP, MCC) accomplish greater performance compared to existing ones regarding the three features.\\
In a cloud computing environment, data centers require enough bandwidth to achieve quality of communication between network functions and network components; hence, the issue that arises is how to save energy and provision bandwidth simultaneously. To deal with this problem, S. Wang et al. \cite{6799695} propose an enhanced VMP scheme named EQVMP (Energy-efficient and QoS-aware VMP) based on computing the minimum server power and decreasing the total network delay while preserving QoS. EQVMP is a multi-objective approach that incorporates three key features: energy saving, hop reduction and load balancing. Energy saving minimizes server power without SLA violation and seeks the suitable energy-efficient multi-resource placement so that each VM meets its requirements, using the Best Fit Decreasing (BFD) algorithm; hop reduction partitions VMs into segments to reduce their network traffic intensity; load balancing aims to ensure transmission without blockage or congestion and updates the VM placement periodically.
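To make the flavor of such bin-packing heuristics concrete, the following minimal Python sketch (our illustration, not the EQVMP implementation; the two-dimensional demands and the uniform host capacity are assumptions) applies a Best Fit Decreasing rule so as to keep the number of active servers low:
\begin{verbatim}
# Minimal Best Fit Decreasing placement sketch (illustrative only).
# Each VM demand and the host capacity are (cpu, ram) vectors.

def fits(load, demand, cap):
    return all(l + d <= c for l, d, c in zip(load, demand, cap))

def best_fit_decreasing(vms, capacity):
    hosts = []  # one load vector per active host
    for vm in sorted(vms, key=sum, reverse=True):  # largest VMs first
        best = None
        for h in hosts:
            if fits(h, vm, capacity):
                # Best fit: prefer the host left with the least slack.
                slack = sum(c - l - d
                            for l, d, c in zip(h, vm, capacity))
                if best is None or slack < best[1]:
                    best = (h, slack)
        if best is None:
            hosts.append(list(vm))   # open a new host only if necessary
        else:
            best[0][:] = [l + d for l, d in zip(best[0], vm)]
    return hosts

vms = [(2, 4), (1, 2), (4, 8), (3, 2)]   # (cpu cores, GB RAM) demands
print(len(best_fit_decreasing(vms, (8, 16))), "hosts used")  # -> 2
\end{verbatim}
Keeping the number of active hosts low is exactly what allows the energy-saving step to switch off the remaining servers.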
\subsection{Minimizing the number of PMs}\label{sect:minimizing-PMs}
With the advent of cloud computing, computing resources are provisioned as on-demand services over networks, destined to meet user needs in a cost-effective manner.
Virtualization techniques are adopted to facilitate the usage of hardware by placing VMs on PMs to satisfy the user demands. However, virtual machine placement in large-scale environments is still a very big challenge that must be tackled, since the heavy use of VMs increases the number of physical machines and hence the power consumption. To cope with this issue, Umesh et al. \cite{DBLP:journals/corr/abs-1011-5064} present an optimal technique for VMP aimed at minimizing the number of required nodes. Two procedures are provided: the first is supported by linear programming (LP) while the second is based on quadratic programming (QP); these approaches provide optimal solutions to the vector bin packing (VBP) problem by exploiting polynomial-time solvability. According to \cite{DBLP:journals/corr/abs-1011-5064}, experiments demonstrate that the "PACKINGVECTORS" algorithm provides an optimal placement of VMs. \\
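For reference, a standard integer-program rendering of this vector bin packing problem (our simplified sketch, not necessarily the exact model of \cite{DBLP:journals/corr/abs-1011-5064}) reads
\begin{eqnarray}
\min\sum_{j}y_{j}\quad\textrm{s.t.} & & \sum_{j}x_{ij}=1\quad\forall i\,,\nonumber\\
& & \sum_{i}d_{i}^{r}\,x_{ij}\leq C_{j}^{r}\,y_{j}\quad\forall j,r\,,\qquad x_{ij},y_{j}\in\{0,1\}\,,\nonumber
\end{eqnarray}
where $x_{ij}=1$ if VM $i$ is placed on PM $j$, $y_{j}$ indicates that PM $j$ is switched on, and $d_{i}^{r}$ and $C_{j}^{r}$ denote the demand of VM $i$ and the capacity of PM $j$ along resource dimension $r$ (CPU, memory, etc.); minimizing $\sum_{j}y_{j}$ directly minimizes the number of required nodes.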
Moreover, Fumio et al. \cite{5488431} propose a redundant configuration of hosting clusters for various applications employing VMs, which aims to minimize the required number of VMs while meeting the performance requirements of online applications under any $k$ host failures. The problem of setting up the virtual machines is presented as an optimization problem and relies on several elements, such as the capacity of the hosting servers, the requisite fault-tolerance level $k$, the total number of applications and their performance requirements; hence, an algorithm for VMP is defined to reach $k$ fault-tolerance according to the specified conditions. The redundant placement yields a hosting server configuration with better reliability in the face of server failures. Another issue for VMP is the loss of performance caused by VM migration, depending on the type of resources and applications, which motivates decreasing the number of migrations and the performance failures. In this context, A. Anand et al. \cite{Anand:2013:VMP:2568486.2568500} reformulate the provisioning algorithms, i.e. ILP and FFD, by including migration overhead in placement decisions, thus aiming to reduce both the number of migrations and the number of hosts used, and further avoid performance loss. \\
In \cite{6726449}, to address the VM Consolidation (VMC) problem, B. C. Ribas et al. propose an artificial-intelligence-based consolidation resting on a new pseudo-Boolean formulation of the VMC problem (PBFVMC). The objective is to place $k$ VMs on $N$ hardware nodes while simultaneously reducing the number of active physical resources. PBFVMC focuses on removing non-linear constraints and equality constraints and on reducing the number of variables; as a result, it can handle hardware sizes four times larger than the previous formulation. With this approach, the solvers spend their running time on optimizing the formula, whereas previous ones spent all the time verifying whether a configuration was suitable or not. The work in \cite{LI20131222} also focuses on minimizing the total energy consumption by reducing the number of running PMs. To increase the resource usage, the size of resource fragments should be minimized as well as their number. Consequently, VMP must be achieved in a resource-balanced way, since the overloaded use of resources is the predominant cause of resource fragments. X. Li et al. \cite{LI20131222} characterize the resource utilization of each PM with a space partitioning process, and further introduce a VMP algorithm named EAGLE that reduces the number of running PMs, balances the resource usage and minimizes the energy consumption. Extensive experiments clearly show the performance of EAGLE, as it can save 15\% of energy in comparison to the First Fit algorithm. In the same regard, R. S. Moorthy \cite{100000} proposes a Constraint Satisfaction based Virtual Machine Placement (CSP) to minimize both the number of running PMs and the completion time of applications by suitably placing VMs across various physical devices. The CSP-VMP algorithm first selects the optimal physical machine for placing the virtual machine, then automatically migrates the virtual machines of an overloaded physical machine to a new physical machine. Experimental results demonstrate the high performance of the proposed approach in terms of user satisfaction.
\subsection{Minimizing the Network Power Consumption}\label{sect:network-power}
Virtualization technology offers a powerful way to improve data centers by allowing high utilization of resources in a single physical server and facilitating workload movement between servers. However, the elasticity allowed by virtualization generates management and scalability challenges that must be handled, so that the data center manager determines where VMs will be placed and how resources are allocated to them while harmonizing the thermal distribution, reducing power consumption and increasing resource usage. As in \cite{5724828}, the objective of minimizing the power consumption forms part of a multi-attribute decision making problem that simultaneously aims to reduce the energy consumption, the consumption costs and the total waste of resources. However, satisfying all these features simultaneously engenders conflicting objectives. J. Xu et al. \cite{5724828} propose a Modified Genetic Algorithm (MGA) to effectively search for suitable solutions for large-scale DCs; accordingly, fuzzy logic is applied on top of this algorithm to combine the conflicting goals. In contrast, \cite{6221099} focuses only on reducing the power consumption (mono-objective) while achieving performance requirements, by proposing an energy-aware framework for reallocating VMs in DCs. This framework computes an efficient feasible placement of VMs regarding SLA constraints (hardware, QoS, availability of services, additional metrics), which should be decoupled to preserve the framework's flexibility. It relies on constraint programming, namely the Choco solver and the Entropy open-source library. The core element of this framework is the optimizer, which is able to cope with the SLA requirements, minimize the energy consumption, reduce CO2 emissions and interconnect separated DCs in a federation. \\
The test results, executed in a federated cloud, show the benefit of this approach, saving 18\% of energy and CO2 emissions for the test case carried out. Moreover, the scalability experiments demonstrate that dividing the problem into various segments is efficient for reducing the time needed to find a solution. Energy consumption is an important factor for big data centers sustaining a considerable number of tenants. The authors in \cite{6753800} deal with the Time-aware VMP-Routing (TVPR) problem, where each tenant requires a given amount of network and server resources for a specified time, the goal being to find the suitable way to map their virtual machines and route their traffic so as to save energy. They formulate the TVPR problem as a MILP optimization approach based on a power utilization model whose objective is to capture the total power consumed in the DC by all components (servers, switches). Moreover, given the NP-hardness of the TVPR problem, a heuristic algorithm is developed to solve the optimization issue. Results demonstrate the efficiency of the proposed algorithm for large DCs in terms of power consumption.
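To fix ideas, a toy version of such a DC power model (a sketch of ours with illustrative constants, not the actual TVPR model) sums an idle and a load-proportional term over the active servers and switches:
\begin{verbatim}
# Toy data-center power model: each active device contributes an idle
# cost plus a load-proportional part; fully idle devices are off.

SERVER_IDLE, SERVER_PEAK = 150.0, 300.0   # illustrative watts
SWITCH_IDLE, SWITCH_PEAK = 60.0, 100.0

def device_power(load, p_idle, p_peak):
    # load in [0, 1]; a device with zero load is assumed switched off
    return 0.0 if load == 0 else p_idle + (p_peak - p_idle) * load

def dc_power(server_loads, switch_loads):
    return (sum(device_power(l, SERVER_IDLE, SERVER_PEAK)
                for l in server_loads)
            + sum(device_power(l, SWITCH_IDLE, SWITCH_PEAK)
                  for l in switch_loads))

# Consolidation pays off: same total load, fewer active devices.
print(dc_power([0.8, 0.8, 0.0], [0.5, 0.0]))  # 620.0 (consolidated)
print(dc_power([0.4, 0.4, 0.8], [0.3, 0.2]))  # 830.0 (spread out)
\end{verbatim}
The idle offsets are what make consolidation, and hence switching off unused servers and switches, worthwhile.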
\subsection{DC Power Consumption/IP and WDM Layer Consumption Minimization}\label{sect:DC-IP-WDM}
Data centers are still the major contributor to the global CO2 emissions of IT services. The crucial issue lies in minimizing the power consumption and optimizing the network delay in large-scale infrastructures. In this perspective, \cite{6477661} introduces an integrated algorithm for extensive cloud systems, where the cloudified services provided by decentralized DCs are coordinated over a federated network. Firstly, a Mixed Integer Linear Programming (MILP) model is introduced to compute the optimal VMP for intra-/inter-datacenter networking in massive cloud systems, considering both local physical resources and network resources, by transforming the network topology into a virtual architecture and allocating the computing resources in DCs. Secondly, to prove the effectiveness of this solution, it has been compared with a benchmark MILP model using the principle of customer loyalty (sending the VM request to the nearest DC with enough capacity and memory in its physical hosts). The proposed holistic solution reduces the global energy consumption, which includes three components: (1) power consumption in the data center, (2) power dissipation in the IP layer and (3) power expenditure in the Wavelength Division Multiplexing layer. Results show the performance of the holistic solution for saving energy and achieving better fairness.
\section{Cost Optimization}\label{sect: IV}
The high cost is considered a challenge for cloud providers due to the huge demand for cloud computing services as well as the multiple geographically distributed data centers. Multiple elements are related to this dilemma, since the cost includes the network power cost, the energy consumption cost, the electricity cost, the total infrastructure cost, the heat dissipation cost, etc. Among the studied articles, a group of them (\cite{6253575} \cite{FANG2013179} \cite{LI20131222} \cite{6114417} \cite{6701467} \cite{6123491}) focus on optimizing the economic costs. This section classifies these works, all aiming to optimize the total cost, according to their objective functions, as presented in the following subsections.
\subsection{Reducing Electricity Costs}
One of the virtual machine placement challenges is the high electricity cost in high performance computing (HPC) clouds, e.g., cloud service providers must pay for the energy consumed and the peak power used. The issue that arises is how to minimize the electricity cost for HPC providers. Studies prove the influence of load placement policies on DC temperatures and DC cooling costs, where cloud services are implemented on several geographically distributed data centers. Kien et al. \cite{6114417} propose dynamic load distribution policies for VM placement and VM migration in DCs to reduce the operating cost, by taking into account the peak power prices, energy consumption and cooling energy prices. An evaluation comparing dynamic cost-aware policies (CA: Cost-Aware distribution, CAM: Cost-Aware distribution with Migration) with baseline policies (RR: Round Robin, WF: Worst Fit, SCA) demonstrates the benefits of the proposed load placement approach in terms of large cost savings. Hence, these policies prove the impact of cooling on the total cost under large changes in data center load, and the need to pre-cool the target DC when necessary to prevent overheating. As seen in Section III-A, \cite{6253575} also aims to decrease the energy cost by switching off the underutilized servers.
\subsection{Network Power Costs }
Statistical analyses carried out in \cite{FANG2013179} reveal that network power represents 10 to 20\% of the total power consumption. Heavy network requirements negatively affect the overall power cost. W. Fang et al. \cite{FANG2013179} introduce this new metric for energy saving by proposing a novel approach called "VMPlanner". This framework aims to optimize both the traffic flow routing and the virtual machine placement by switching off unnecessary network elements to save power. VMPlanner exploits the elasticity of VM migration and the adaptability of traffic flow routing to reduce the use of network links and therefore minimize the network power costs. The optimization problem formulation shows its NP-hardness; VMPlanner then solves the problem by executing three approximation algorithms: (1) BMKP, a Balanced Minimum K-cut Problem, for grouping virtual machines according to traffic; (2) QAP, a Quadratic Assignment Problem, for grouping VMs according to distance; (3) MCFP, a Multi-Commodity Flow Problem, for routing traffic flows between VMs to save power.
\subsection{Revenue Maximization }
Data centers often face the challenge of maximizing their profit under the huge energy costs required to perform their operations. The practical method to enhance the benefit, or ROI, is to minimize the operational cost by improving the PUE (Power Usage Effectiveness), adopting virtualization technology and improving the power efficiency. However, the efficiency of these methods is limited due to fixed conditions. In \cite{6123491}, W. Shi et al. address this problem by exploring the VMP dimension in DCs for increasing the revenue without violating the service level agreement. They first formulate the problem as a "Multi-level Generalized Assignment Problem" (MGAP) to meet the critical requirement of increasing the ROI under the various constraints of power budget and SLA violation, and then propose a first-fit heuristic algorithm to solve it. In \cite{6701467}, maximizing the revenue is part of a multi-objective approach alongside two other objectives, load balancing maximization and resource wastage minimization; Amol C. Adamuthe et al. \cite{6701467} use genetic algorithms and Non-dominated Sorting Genetic Algorithms to solve the multi-objective virtual machine placement problem. In the same way, the authors in \cite{Pires:2013:MVM:2588611.2588692} handle the VMP problem by incorporating three fundamental objectives: (1) minimizing the network traffic, (2) reducing the energy consumption and (3) maximizing the economical revenue. A new Memetic Algorithm (MMA) is proposed to achieve the multi-objective optimization features.
\subsection{Resource cost minimization }
Cloud computing has become one of the most requested services owing to its several benefits, including high performance, elasticity, availability, low cost of services and scalability. Notably, it provides applications with computing power on a shared resource pool. Indeed, a virtualized service is characterized by its flexibility in running several VMs on the same physical machine, scaling the capacity of a VM and live-migrating VMs between hosts. However, this flexibility has a drawback for the IT manager, as system management becomes more complicated. Therefore, the challenge of cloud providers is to manage the virtual services automatically while ensuring high QoS for Internet-based applications and guaranteeing low resource management costs. \cite{Hyser2007AutonomicVM} presents an autonomic virtual resource manager for service hosting platforms that aims to optimize a global utility function based on the SLA fulfillment degree and the operating costs; this paradigm is able to automate the placement of VMs and the dynamic provisioning. The core element of the management architecture is the global decision module, as it handles the two main tasks, VM provisioning and VM packing, which are expressed as two Constraint Satisfaction Problems (CSP) handled by a constraint solver (CS). Results obtained through simulation tests executed on the Xen hypervisor demonstrate that the constraint programming approach is fit to solve the problem.\\
Cloud computing supports a paradigm for end consumers to satisfy their needs in a cost-effective manner. Resource provisioning is a crucial issue in VMP since it prescribes how resources may be allocated. To provision resources, cloud providers may offer two payment plans to consumers: a prepaid (reservation) plan, which is cheaper but may not meet actual demands, and a pay-per-use (on-demand) plan, used to dynamically provision resources. However, when deploying a virtual machine placement, there is a big challenge in optimizing the capacity utilization due to resource allocation cost, over-provisioning and under-provisioning problems. In order to resolve these issues, S. Chaisiri et al. \cite{5394134} present an optimal VMP algorithm implementing optimized resource provisioning operations that aim to reduce the total cost. The algorithm makes its decisions based on Stochastic Integer Programming (SIP) to reserve resources offered by cloud service vendors. The solution proceeds in two phases: the first determines the number of virtual machines provisioned in the reservation plan, whereas the second determines the number of VMs provisioned in the on-demand plan. According to the performance evaluation, on the one hand, if the number of requested VMs is accurately known, all instances (VMs) can be provisioned in the reservation plan, and a Deterministic Integer Programming formulation is employed to reduce the overall amount of resource reservation; on the other hand, if demands and prices are uncertain, a two-phase SIP algorithm is applied. Simulation studies show that the OVMP algorithm based on SIP can reach the lowest total cost.\\
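The gist of the two-phase decision can be illustrated with a small expected-cost computation (a sketch under assumed prices and a toy demand distribution, not the actual OVMP model of \cite{5394134}):
\begin{verbatim}
# Expected cost when reserving R VMs up front and buying any excess
# demand on demand. Prices and demand law are illustrative assumptions.

C_RESERVE, C_ONDEMAND = 5.0, 9.0           # cost per VM per period
demand_dist = {10: 0.3, 20: 0.5, 30: 0.2}  # P(demand = d)

def expected_cost(reserved):
    cost = 0.0
    for d, p in demand_dist.items():
        shortfall = max(0, d - reserved)   # bought on demand
        cost += p * (C_RESERVE * reserved + C_ONDEMAND * shortfall)
    return cost

# First phase: pick the reservation level minimizing expected cost.
best_r = min(range(0, 31), key=expected_cost)
print(best_r, expected_cost(best_r))       # -> 20 118.0
\end{verbatim}
Under-reserving forces expensive on-demand purchases, while over-reserving wastes prepaid capacity; the optimum balances the two.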
Along with the continuous improvement of cloud computing and its advanced virtualization techniques related to network function virtualization, many papers have focused on cost saving through better utilization of computing resources in cloud-based mobile core networks, with the objective of finding the optimal placement of VNFs within the same DC. F.Z. Yousaf et al. \cite{7355586} treat this problem and examine the cost incurred by two deployment strategies based on heuristic constraints and derivations, named Vertical Serial Deployment (VSD) and Horizontal Serial Deployment (HSD), for the initial deployment of VNF/VNFC instances.
\subsection{Connection Cost Minimization}
As a global goal, the authors in \cite{7179432} formulate an optimization problem seeking the suitable VMP to reduce the cost of communication between virtual machines in a data center network. Their primary goal is to develop beneficial algorithms to handle VMs by formulating a VM placement optimization approach. Customers issue a set of requests, each specifying a desired number of VMs; the challenge is then to determine the PMs hosting the requested virtual machines and to build a subnetwork connecting these PMs. In this context, there are two major aspects: the resource limitation (limited performance) of PMs, formulated as a constraint, and the connection cost in each subnetwork, presented as the main objective. The major intention is to reduce the total connection cost by shortening the length of the networks connecting the root node with the physical machines. T. Fukunaga et al. \cite{7179432} present approximation algorithms for each case according to the model's type (centralized or distributed) and the request category (uniform or non-uniform).
\section{Network Traffic Minimization }\label{sect: V }
Several papers study the problem of minimizing network traffic in the cloud, in the sense that they try to enhance the performance of a DC by selecting the most suitable physical machines for virtual machines. In this section, we list some previous works classified according to their target and interest as well as their relevance. We conclude that network traffic minimization can be impacted by several factors, such as data transfer time, average traffic latency, network traffic and network performance.
\subsection{Reducing the Average Traffic Latency}\label{sect:V}
The scalability of data centers can be considered one of the most attractive prospects in the cloud environment, since bandwidth usage among VMs grows very quickly with the high communication rates required by intensive applications in the data center. To tackle this issue, Meng et al. \cite{5461930} propose an optimization approach for placing VMs on host machines, formulated as the Traffic-aware VM Placement Problem (TVMPP), to improve network scalability. TVMPP is optimized by reducing the average traffic latency generated by the IT infrastructure. Their analysis demonstrates the NP-hardness of TVMPP; however, a two-tier approximation algorithm, "Cluster-and-Cut", is proposed to solve the problem effectively even at large scale. This heuristic first partitions hosts and VMs into clusters separately, and then matches them first at the cluster level and afterwards at the individual level. Experiments show that the suggested algorithm can considerably reduce the aggregate traffic and decrease the computation time compared to existing mechanisms that ignore data center networking technologies and the features of traffic models.\\
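The matching step can be illustrated with a deliberately simplified greedy sketch in Python: VM pairs exchanging the most traffic are mapped onto the closest host pairs, one VM per host. The traffic and hop-distance matrices are hypothetical, and the sketch omits the partitioning and min-cut machinery of the actual Cluster-and-Cut algorithm:
\begin{verbatim}
traffic = {  # traffic[(i, j)] = traffic rate between VM i and VM j
    (0, 1): 9.0, (2, 3): 8.0, (0, 2): 1.0,
    (1, 2): 0.5, (0, 3): 0.2, (1, 3): 0.1,
}
dist = [[0, 1, 4, 4],  # dist[a][b] = hop distance between hosts a and b
        [1, 0, 4, 4],
        [4, 4, 0, 1],
        [4, 4, 1, 0]]

placement = {}               # vm -> host (one VM per host here)
free_hosts = set(range(4))
for (i, j), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
    for vm, partner in ((i, j), (j, i)):
        if vm in placement:
            continue
        if partner in placement:   # stay close to the partner's host
            host = min(free_hosts, key=lambda h: dist[h][placement[partner]])
        else:                      # otherwise take any free host
            host = min(free_hosts)
        placement[vm] = host
        free_hosts.discard(host)

cost = sum(rate * dist[placement[i]][placement[j]]
           for (i, j), rate in traffic.items())
print(placement, "total communication cost:", cost)
\end{verbatim}
Here the two chatty VM pairs land on adjacent hosts, so the heavy flows each traverse a single hop.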
In \cite{6848063}, Kuo et al. present VM placement algorithms for a particular situation that consists of allocating VMs to data nodes (DNs) while minimizing the maximum access time between data nodes and virtual machines. The "virtual machine placement for data node problem" (VMPDN) is characterized by allowing each DN to serve only one VM. To solve this problem, the authors first introduce a 3-approximation algorithm based on a threshold technique and then propose a 2-approximation algorithm that subdivides the global problem into smaller subproblems, computes their solutions and adopts the best one. Simulation results reveal that the 2-approximation algorithm is the more efficient one and is capable of minimizing data latency in cloud systems. Moreover, in \cite{ILKHECHI2015508}, DNs can be assigned simultaneously to multiple VMs, while in \cite{6848063} each VM is bound to a single DN.\\
In mobile cloud computing environments, Internet of Things (IoT) based systems using a variety of mobile devices need to operate despite connection failures or degradation. Therefore, mobile cloud service providers can reduce network latency by moving applications close to the user. \cite{iet:/content/books/10.1049/pbte070e_ch4} suggests two cloudlet-based architectures, hierarchical and ring topologies, which aim to fulfill mobile users' requirements. The latency in each architecture is modeled by a continuous-time Markov chain across the different components, namely user nodes, cloudlets and the main cloud. The authors use different scenarios to compare the performance of the proposed architectures. The authors of \cite{7461481} present a mechanism to minimize the one-way delay in wireless networking by introducing a simple algorithm for selecting the ever-changing data plane in a mobile network. This method selects the gateways according to the designated goals of reducing the bottleneck link load, the end-to-end path latency and the network element processing load.
\subsection{Minimizing the Data Transfer Time}
Computation resources have become a critical issue in cloud computing due to the high rate of on-demand provisioning. Current VM placement procedures mostly concentrate on improving the capability and proficiency of computing resource usage without taking the network performance into account, which can lead to placing a VM far away from its data. Consequently, the overall application performance is affected, and it becomes critical to consider network I/O performance, since the latter significantly affects the overall application performance. To solve this issue, Piao et al. \cite{5662521} offer network-aware approaches for placing and migrating VMs to minimize the time of transferring data between the VMs (the application) and the related data (the data storage). The VM placement approach aims to reduce the total data access latency subject to the available computing capacity, whereas the VM migration approach is triggered when the execution time exceeds the SLA threshold due to variable network conditions that influence how users retrieve data and degrade the perceived performance. Simulations in CloudSim 2.0 show that the advised technique reliably reduces the execution time of each task. In the same way, K. Zamanifar et al. \cite{6168391} propose a new VMP algorithm to reduce the data transfer time by simultaneously optimizing the virtual machine placement and the storage allocation.
\subsection{Minimizing the network traffic}
In recent years, data centers have been progressively adopted by enterprises to run a variety of applications and provide different services over a shared infrastructure. However, service misplacement can cause several problems such as network link overload, congestion and connection disruption. Indeed, network connectivity is a major influence on any data center decision, and it can be tackled through VM migration by altering the traffic matrix and re-allocating services in a different way. Daniel et al. \cite{6466665} present a VMP algorithm to reallocate virtual machines on DC servers based on memory usage, overall CPU and the network traffic matrix. The first stage of this VMP algorithm collects data from the VMs and the DC topology (data acquisition), the second partitions the servers with a high level of connectivity, and the last clusters VMs by determining the amount of exchanged traffic, using graph theory to handle all the virtual servers. As a result, this solution enhances the quality of network traffic and the availability of bandwidth in the DC; it is able to shift about 80\% of the core traffic so that it is consolidated at the edge of the network. However, allocating many VMs with high bandwidth demands close together in the DC can create congestion over their shared links, despite the advantage of balancing link utilization. \\
In \cite{6566794}, R. Cohen et al. consider virtual machine placement with heavy bandwidth demands and aim to maximize the revenue of the overall traffic delivered by VMs to the root node of the DC. Their scenario is similar to a storage network with intense storage needs. The mathematical formulation of the bandwidth-constrained VM placement optimization problem demonstrates its hardness. Consequently, to overcome this complexity, they provide two approximation algorithms for an arbitrary weight function. The first is a greedy algorithm that gives a 3-approximate solution with partitioned flow for the bandwidth-constrained placement problem, and can be turned into a 6-approximation by a rounding procedure, whereas the second, a rounding algorithm, yields a 24-approximate integral solution based on fractional linear programming. The advantage of the rounding algorithm is its faster execution even on larger instances. They also address special cases of weight functions and graph topologies by using symmetric trees and considering the revenue as a simple function of allocated bandwidth; the results of numerical simulations display the efficiency of the proposed algorithms over I/O traces exported from an IBM data center. \\
Furthermore, A.R. Ilkhechi et al. \cite{ILKHECHI2015508} concentrate on network- and traffic-aware VMP with multiple traffic-intensive components. Their main objective is to maximize a satisfaction metric defined over the performance of placing a VM on a specific PM and associated with the global traffic congestion in the overall network. They introduce heuristic and greedy approaches for allocating a group of virtual machines to a group of physical machines, which provide excellent solutions given the sink-flow demands and the communication pattern of the VMs. B. Zhang et al. \cite{6296866} develop an efficient algorithm to consolidate VMs on PMs while ensuring high scalability of data centers. The VMP problem is treated as a multi-criteria function aiming to i) reduce the received and transmitted traffic inside a data center and ii) save power cost by minimizing the number of online PMs. In this sense, a heuristic approach is proposed to consolidate dynamic VMs and a greedy algorithm is used to manage VM requirements.\\
In the same vein as \cite{ILKHECHI2015508} and \cite{6296866}, T. Yapicioglu et al. \cite{6809418} take the communication pattern of VMs into account with the main objectives of reducing inter-rack traffic, minimizing networking cost and decreasing the average path length of traffic, while minimizing the number of network components and operating servers to save energy. They propose a clustering algorithm that bundles VMs according to their communication rates by placing virtual machines that communicate frequently in the same rack. Simulation results show that the traffic-aware VMP algorithm yields effective results, with a greater proportion of intra-rack traffic and a lower average number of hops per flow compared to a first-fit algorithm. The authors further propose three heuristic algorithms: (1) a \textbf{Greedy Algorithm} (GA), (2) \textbf{Repeated Greedy Algorithms} (RGA), and (3) \textbf{Optimal Network Function Placement for Load Balancing Traffic} (ONPL). Simulation results demonstrate the effectiveness of the proposed algorithms.
\subsection{Maximizing The Network Performance}
Dong et al. \cite{DONG201462} propose a VMP approach based on multiple constrained resources to improve network performance in the cloud. They attempt to decrease the maximum utilization of network links in order to reduce network congestion and balance the overall distribution of data traffic. Reducing the total communication traffic is modeled as a Quadratic Assignment Problem (QAP), which is NP-hard. To solve this combinatorial problem, Dong et al. \cite{DONG201462} propose an Ant Colony Optimization (ACO) algorithm combined with 2-opt local search (LS). Simulation analysis of the proposed algorithms on different topologies, such as Tree, VL2 and Fat-Tree, yields better optimization results than simulated annealing (SA), local search (LS) and a clustering algorithm \cite{6679894}. As a result, the Maximum Link Utilization (MLU) is reduced by 20\% and the number of links between applications is reduced by 37\%. For the same purpose, the authors of \cite{6679894} suggest a VMP strategy built on a novel two-phase heuristic algorithm to reduce network congestion by minimizing the power consumption of physical resources and network components while guaranteeing satisfactory network performance. \cite{Kakadia:2013:NVM:2534695.2534702} also aims to ensure high performance and an optimal network utilization in big data centers by consolidating VMs using network awareness. D. Kakadia et al. \cite{Kakadia:2013:NVM:2534695.2534702} first propose a "VM-Cluster formation algorithm" to group the VMs with reference to their traffic exchange patterns, and then suggest a greedy consolidation algorithm, the "VM-Cluster placement algorithm", for placing VM-Clusters so as to centralize traffic within the same group. This solution can save 70\% of the internal bandwidth and improve application performance by 60\%.\\
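The 2-opt local search component is easy to isolate. The Python sketch below, with hypothetical traffic and distance matrices, repeatedly swaps the hosts of two VMs whenever the swap lowers the total communication cost; this is a generic polishing step of the kind applied to ACO-built placements, and the ACO construction phase itself is omitted:
\begin{verbatim}
traffic = [[0, 5, 1, 0],   # traffic[i][j]: rate between VMs i and j
           [5, 0, 2, 1],
           [1, 2, 0, 6],
           [0, 1, 6, 0]]
dist = [[0, 1, 3, 3],      # dist[a][b]: cost between hosts a and b
        [1, 0, 3, 3],
        [3, 3, 0, 1],
        [3, 3, 1, 0]]

def cost(place):           # place[v] = host of VM v (one VM per host)
    n = len(place)
    return sum(traffic[i][j] * dist[place[i]][place[j]]
               for i in range(n) for j in range(i + 1, n))

place = [0, 2, 1, 3]       # some initial placement, e.g. ACO-built
improved = True
while improved:            # stop at a 2-opt local optimum
    improved = False
    for i in range(len(place)):
        for j in range(i + 1, len(place)):
            before = cost(place)
            place[i], place[j] = place[j], place[i]
            if cost(place) < before:
                improved = True                          # keep the swap
            else:
                place[i], place[j] = place[j], place[i]  # undo it
print(place, "cost:", cost(place))
\end{verbatim}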
In the telecommunication domain, mobile operators need an efficient mobile cloud to meet their customers' requirements, thereby dealing with intense mobile data traffic and a modest average revenue per user (ARPU). In this context, the authors of \cite{Taleb:2013:GRA:2507924.2508000} propose a heuristic algorithm to diminish and avert frequent gateway relocations in the carrier cloud, thus reducing the gateway relocation cost while guaranteeing efficient placement of virtual network functions. Similarly, \cite{7248929} also relies on avoiding the relocation of mobility anchor gateways (S-GW) by placing their VNFs away from the UEs, and ensures QoE by placing the data anchor gateway (PDN-GW) VNFs nearer to the mobile devices. To achieve these two goals, the authors of \cite{Taleb:2013:GRA:2507924.2508000} propose VNF placement algorithms based on linear programming to find an efficient placement of VNFs for both PDN-GW and S-GW in the carrier cloud, so as to create an elastic mobile core network conforming to a virtual 5G network infrastructure. In the same way, \cite{7355586} proposes a fine-grained scheme based on computing reference resource affinity score (RRAS) values of each hosted VM for optimal management and placement decisions of VNFs. This approach can optimize the Life Cycle Management (LCM) operations on the VNF instances and reduce the occurrences of expensive VM management operations.
\section{Resource Utilization}\label{sect: VI}
Using the lowest number of servers is an important factor to consider when seeking power efficiency in data centers. Consolidation of virtual machines (VMs) on servers means collecting numerous virtual machines on a single physical server, thereby increasing resource utilization. It makes it possible to shut down idle servers, which yields a great deal of energy saving. VM consolidation is realized either in a static manner, by allocating physical resources to VMs according to peak consumer demand (over-provisioning), which wastes resources since workloads are rarely at their peak, or in a dynamic manner, where VM capacities change in accordance with the current workload requirements, which helps to use data center resources effectively. In this context, this section focuses on maximizing resource utilization through minimizing resource wastage, maximizing resource usage and increasing elasticity.
\subsection{Maximizing resource usage}
To deal with scalability and energy consumption problems, F. Song et al. \cite{Song2014} propose an optimization-based virtual machine placement algorithm that takes into account both server constraints and the dependencies between VMs and application levels, so as to maximize resource allocation and minimize the data center transmission traffic within a short duration. Their principal objective is to minimize the number of physical hosts, improve scalability and decrease energy consumption. Targeting a similar goal, N. Trung et al. \cite{6973776} aim not only to increase resource utilization but also to balance the utilization of resources across several dimensions in order to minimize the number of running servers. They propose a Max Balanced Resource Utilization (Max-BRU) algorithm characterized by multiple resource-constraint metrics, such as the $d$-th dimensional resource utilization ratio and the resource balance ratio, which allow finding the most appropriate server on which to deploy virtual machines in large DCs. In-depth simulations show that the Max-BRU algorithm balances the use of resources and makes it more efficient by reducing the number of active physical servers.\\
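The flavor of such balance-aware server selection can be captured in a few lines of Python. The scoring rule below, which favors high but balanced post-placement utilization, is our own simplification and not the exact Max-BRU metric; the capacities, current loads and demand vector are hypothetical:
\begin{verbatim}
servers = [  # per dimension (CPU, RAM, net): total capacity, current load
    {"cap": (32, 128, 10), "load": (10, 100, 2)},
    {"cap": (32, 128, 10), "load": (16, 64, 5)},
    {"cap": (64, 256, 40), "load": (8, 32, 4)},
]
demand = (8, 16, 2)          # resource demand of the VM to place

def score(s):
    util = [(l + d) / c for c, l, d in zip(s["cap"], s["load"], demand)]
    if any(u > 1.0 for u in util):            # VM does not fit
        return -1.0
    balance = 1.0 - (max(util) - min(util))   # 1.0 = perfectly balanced
    return (sum(util) / len(util)) * balance  # utilization x balance

best = max(range(len(servers)), key=lambda i: score(servers[i]))
print("place the VM on server", best)         # -> server 1 here
\end{verbatim}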
With the emergence of energy consumption concerns in cloud computing architectures, there is an increasing need for energy-aware resource management in data centers \cite{Hwang:2013:HVM:2514940.2515012}, taking into consideration the relation between multiple kinds of resources (e.g., bandwidth, CPU, network usage, disk space, memory size) and the VMs' resource requests. I. Hwang et al. \cite{Hwang:2013:HVM:2514940.2515012} characterize the resource demands as random variables (RVs) described by two metrics: the expected mean and the standard deviation. They then formulate the VM consolidation problem as a Multi-Capacity Stochastic Bin Packing (MCSBP) problem and suggest a heuristic (First Fit) algorithm to solve it. This approach provides efficient results with rational resource management.
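A minimal sketch of this stochastic first-fit idea follows. Each VM demand per dimension is a (mean, standard deviation) pair; a VM fits on a host if the mean plus a safety margin of $K$ standard deviations stays within capacity in every dimension, assuming independent demands so that variances add. The capacities, demands and the value of $K$ are hypothetical:
\begin{verbatim}
import math

K = 2.0                      # safety factor (controls overflow probability)
CAP = (32.0, 64.0)           # host capacity: (CPU, RAM)

def fits(host_vms, vm):
    """vm = ((mean_cpu, mean_ram), (std_cpu, std_ram))."""
    for d in range(len(CAP)):
        mean = sum(v[0][d] for v in host_vms) + vm[0][d]
        var = sum(v[1][d] ** 2 for v in host_vms) + vm[1][d] ** 2
        if mean + K * math.sqrt(var) > CAP[d]:   # effective demand too big
            return False
    return True

vms = [((8, 16), (2, 4)), ((12, 8), (3, 2)),
       ((10, 30), (1, 5)), ((6, 10), (2, 2))]

hosts = []                   # each host is the list of VMs placed on it
for vm in vms:               # first fit: first host where the VM fits
    host = next((h for h in hosts if fits(h, vm)), None)
    if host is None:
        host = []
        hosts.append(host)
    host.append(vm)

print(len(hosts), "hosts used")   # -> 2 with these numbers
\end{verbatim}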
\subsection{Minimizing the resource wastage}
The unbalanced use of residual resources can impact the placement of virtual machines and cause a waste of resources \cite{5724828}. Fig. 1 depicts an example of resource wastage where a host has little available memory but a lot of unused CPU capacity, which prevents the host from accepting a new virtual machine due to the lack of memory. Hence the need to balance the use of resources across different dimensions by reducing the residual resources wasted on a server, $W$, defined as the sum of the gaps between the minimum residual resource $R_i$ and the other resources $R_k$, as in Eq.~(1). As seen in Section II, J. Xu et al. \cite{5724828} consider the VMPP as a multi-attribute optimization problem of simultaneously minimizing the thermal dissipation costs, the energy consumption and the total resource wastage. They propose a modified genetic algorithm (GA) to effectively select suitable solutions and introduce fuzzy logic to combine the various objectives. A small numeric sketch of this wastage measure is given after Fig. 1.
\begin{equation}
W = \sum_{k \neq i}^{n} \left| R_i - R_k \right|
\end{equation}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.9]{Capture.PNG}
\caption{Resource wastage}
\label{Resource wastage}
\end{figure}
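A small numeric illustration of Eq.~(1), with hypothetical normalized residual vectors, shows how the measure penalizes hosts whose leftover resources are skewed toward one dimension:
\begin{verbatim}
def wastage(residual):
    """Sum of gaps between the scarcest residual resource and the rest."""
    r_min = min(residual)
    return sum(r - r_min for r in residual)

hosts = {   # normalized residual (CPU, RAM, disk) after a placement
    "A": (0.05, 0.60, 0.40),   # little CPU left, much RAM: skewed
    "B": (0.30, 0.35, 0.25),   # balanced residuals
}
for name, res in hosts.items():
    print(name, "W =", round(wastage(res), 2))
# -> A W = 0.9, B W = 0.15 : the balanced host wastes far less
\end{verbatim}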
While in \cite{6701467} minimizing resource wastage is combined with maximizing profit and maximizing load balancing, A. C. Adamuthe et al. \cite{6701467} propose genetic algorithms to solve the VMP problem in data centers; simulations show that NSGA-II gives good and diversified solutions compared to simple NSGA and GA algorithms. In a similar manner, Y. Gao et al. \cite{GAO20131230} propose a multi-objective ant colony system (ACS) algorithm to reduce both the aggregate energy consumption and the resource wastage, based on a permutation VM assignment. Their analysis shows its high performance compared to multi-objective genetic algorithms, the Max-Min Ant System (MMAS) algorithm and a bin packing algorithm. Meanwhile, \cite{ZHENG201695} proposes a novel solution called VMPMBBO to tackle the virtual machine consolidated placement (VMcP) problem. This approach aims to simultaneously decrease the energy consumption and the resource wastage, based on a biogeography-based optimization strategy.
\subsection{Elasticity maximization}
Given the workload dynamics of modern cloud applications, elasticity is a very important issue for cloud service providers to address in order to deal with under-provisioning (saturation) and over-provisioning (under-utilization) of cloud resources. To tackle the problem of placing VMs in cloud infrastructure management, the VMP problem should be solved dynamically to accommodate the typical workloads of modern applications. In this context, the authors of \cite{Ortigoza2016DynamicEF} propose a taxonomy to discern the possible challenges for cloud service providers (CSPs) in dynamic environments, classified by elasticity and overbooking, considering: (1) vertical elasticity, to dynamically adjust the capacities of virtual resources inside a VM, and (2) horizontal elasticity, to adjust the number of VMs.\\
Elasticity in terms of resource utilization refers to how the data center can satisfy the high resource requirements of VMs within the capacity limitations of links and physical machines \cite{6710563}. K. Li et al. \cite{6710563} focus on maximizing elasticity to deal with restricted resources and heterogeneous user demands; they propose a ranked virtual machine placement algorithm for a multi-layer cluster that heuristically places VMs in succession from the upper layer to the lower layer. As a result, this scheme furnishes approximately optimal results. On the other hand, as noted in \cite{5990687}, the variability of service workloads may unbalance resource utilization, so elastic services are used to match these workload fluctuations through on-demand capacity allocation. This variability depends on the Service Level Agreement, so the problem lies in increasing the cloud provider's profit from SLA-compliant flexible placement in virtualized data centers. In this vein, \cite{5990687} copes with the variety of placement restrictions by proposing a new combinatorial optimization algorithm capable of finding the optimal solution; the problem is named the Elastic Services Placement Problem (ESPP) and allows service suppliers to increase their profit through SLA-conforming placement. Since ESPP is NP-hard, a simple transformation is used, presenting ESPP as a multi-unit combinatorial auction, and a delayed column generation method is then implemented to acquire an efficient solution in a reasonable amount of time. This procedure delivers suitable results for loading large resource pools quickly and effectively.\\
According to \cite{Ortigoza2016DynamicEF}, elasticity is classified into two types, vertical and horizontal: vertical elasticity refers to the capability of cloud services to dynamically change the resource capacities of VMs, while horizontal elasticity expresses the ability to adjust the number of virtual machines; elasticity is an important matter for coping with under-provisioning and over-provisioning problems. J. Ortigoza et al. \cite{Ortigoza2016DynamicEF} propose a dynamic approach based on three pertinent parameters, the VMs' resource capacities, the number of VMs and the VMs' resource usage (referred to as overbooking), in order to efficiently serve application workloads and customers' requests for virtual resources.
\section{Quality of Service Maximization}\label{sect: III}
The complexity of virtual machine placement in Software-Defined Networking (SDN) is reflected in controller placement problems \cite{7511136}: given a network topology and a response time bound, how many controllers are needed, where should they be placed and which switches are assigned to each controller. Since QoS is the primary concern of network operators in the placement of SDN controllers, the authors of \cite{7416960} propose three heuristic algorithms (an incremental greedy algorithm, a primal-dual-based algorithm and a network-partition-based algorithm) to solve the controller placement problem with QoS requirements. To ensure good QoS, many important metrics must be taken into consideration. These constraints are expressed in the SLA between the customer and the cloud provider. Therefore, to achieve a high quality of service, performance maximization and high availability must be taken into account; this is the subject of this section.
\subsection{High availability}
Virtual machine placement is a critical problem that must be tackled while satisfying multiple constraints such as QoS, energy conservation, performance, security, etc. The high availability of all applications running in VMs is considered one of the most critical management concerns for VM placement. If a virtual machine is marked as k-resilient, it can be moved to a non-failing host without moving the other virtual machines. This technique is based on a combination of live migration and Hardware Predicted Failure Analysis (HwPFA) alarms, with the purpose of keeping VMs running despite host failures. A non-resilient VMP only satisfies the resource feasibility constraints and does not provide high-availability properties; adding resiliency constraints yields the resilient virtual machine placement problem, denoted RVP. The challenge in solving the NP-hard RVP problem is its second-order logic constraints; consequently, E. Bin et al. \cite{5961722} propose an efficient solution that converts the constraints of the RVP problem into simpler constraints, independent of the failure sequence, by adding shadow VMs to the placement computation. Hence, the transformed problem enables a Constraint Programming (CP) scheme to provide an optimal solution in a short time. Results exhibit the benefits of the proposed approach in optimizing load balancing, reusing backups on production hosts and ensuring high availability by evacuating VMs from failed hosts without reshuffling other VMs.\\
As cloud computing is an on-demand service, customers must be able to count on the constant availability of their VMs even under variable workloads. \cite{6332041} highlights the difference between vertical and horizontal resizing, aiming to provide a cloud model with the best availability by developing an Availability-Aware Placement algorithm. The vertical policy scales up the resources of existing VMs, while the horizontal policy adds resources by creating new VMs. However, the authors assume that all the physical hosts are identical, which is unrealistic in practice.\\
Changing trends in the telecommunications industry will reveal whether the cloud, NFV and SDN can be effective paradigms for operators to manage their operations automatically. However, availability becomes an important issue when using data centers built from commodity off-the-shelf (COTS) hardware, since such hardware exhibits high failure probabilities. Consequently, in network function virtualization technology, the availability of the cloud infrastructure must be treated from the physical layer up to the hypervisor layer, and resiliency techniques need to be integrated into software design and service delivery \cite{7414158}.
\subsection{Performance maximization}
Provisioning resources in cloud computing makes it possible to improve resource usage by placing virtual machines on a limited number of physical machines while meeting the required capacities. The capacity demands of most applications are itemized in terms of memory usage, storage and network bandwidth. However, the extra CPU requirement makes the capacity requirement incomplete, and this may be resolved by migrating VMs, which itself can cause a loss of application performance. In this vein, A. Anand et al. \cite{Anand:2013:VMP:2568486.2568500} aim to fulfill SLA performance by including virtual machine migration (VMM) and CPU overhead constraints in the VMPP, to minimize the effect on application performance along with achieving the crucial goal of reducing the number of hosts. They modify traditional ILP and First Fit Decreasing (FFD) algorithms to incorporate the proposed CPU overhead and migration constraints. The ILP algorithm gives optimal results but takes a long time, whereas the FFD heuristics are scalable and much faster than ILP, yielding suboptimal solutions usable for real-time decision making. As a result, involving VM migrations and CPU overheads in the placement algorithm reduces the number of migrations by more than 84\%.
\\
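A stripped-down version of such an overhead-aware First Fit Decreasing heuristic is sketched below in Python. The per-VM CPU surcharge stands in for the VMM overhead; the 5\% figure, the host capacity and the demand list are hypothetical, and the migration constraints of \cite{Anand:2013:VMP:2568486.2568500} are omitted:
\begin{verbatim}
HOST_CPU = 100.0
OVERHEAD = 0.05          # hypothetical CPU surcharge per hosted VM

def effective(demand):   # demand as seen by the host, incl. VMM cost
    return demand * (1.0 + OVERHEAD)

vms = [45.0, 30.0, 70.0, 20.0, 10.0, 25.0]    # CPU demands
hosts = []                                    # remaining capacity per host

for d in sorted(vms, reverse=True):           # decreasing order
    need = effective(d)
    for i, free in enumerate(hosts):          # first host with room
        if free >= need:
            hosts[i] = free - need
            break
    else:
        hosts.append(HOST_CPU - need)         # open a new host

print(len(hosts), "hosts used")               # -> 3 with these numbers
\end{verbatim}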
Virtualization technology has been an integral aspect of the cloud computing environment. Despite its manageability and utilization advantages, virtual machines negatively affect performance compared to native execution, even when the instances are executed on a dedicated server. \cite{5578331} focuses on the main benefits and effects of multi-core architectures that may limit the performance impact of virtualization, by examining the performance results of the multicore cache system on applications operating within the Xen hypervisor. OProfile and Xenoprof use the hardware performance monitors to provide information about resource consumption and the current status of operating systems and applications. Various strategies are used for placing virtual machines onto physical CPUs (a Harpertown processor), summarized in three cases according to the number of virtual machines residing on a single node (single, two and four VM placement); all these placement approaches focus only on the second layer of cache sharing (the last-level cache in the Harpertown processor). The simulation results show the performance benefit of cache-aware VM placement in contrast to the Xen default VCPU scheduler.\\
Moreover, \cite{TORDSSON2012358} proposes a cloud brokering mechanism where the virtual machines of a service are deployed across multiple clouds to maximize performance while considering various limitations in terms of budget, load balance, service configuration, etc., using scheduling algorithms based on integer programming formulations and price-performance placement tradeoffs. Cloud computing has become a highly requested service due to its multiple benefits, including low cost of services, scalability, elasticity, high performance and availability. Notably, it provides and assigns applications with computing power from a shared resource pool. Indeed, a virtualized service is characterized by its flexibility in running several VMs on the same physical server, scaling a virtual machine's capacity and live migration between hosts. However, this flexibility has a drawback for IT managers, as system management becomes more complicated. The key challenge for cloud service providers is how to manage virtual machines automatically while ensuring high QoS for hosted applications and low resource management costs. \cite{5071526} details a dynamic placement of VMs on a set of PMs that minimizes the number of migrations and the number of active physical servers, providing an optimal configuration that takes SLA constraints into account. H. Nguyen et al. \cite{5071526} present an autonomic virtual resource manager for service hosting platforms that aims to optimize a global utility function based on the SLA fulfillment degree and the operating cost; this framework is able to automate the dynamic placement of VMs. The core element of the management system architecture is the global decision module, as it deals with two main features, VM provisioning and VM packing, which are formulated as two Constraint Satisfaction Problems (CSPs) managed by a constraint solver (CS). Simulation results obtained with the Xen hypervisor demonstrate that the constraint programming approach solves the optimization problem.
\subsection{Minimizing the resource interference}
Virtualization is basically a number of VMs sharing multiple resources, including memory, network bandwidth, computing power, CPU, etc. Thus, the issue of isolation needs to be dealt with \cite{Kim:2012:VMP:2401603.2401656}. In a competitive IT market, cloud providers should minimize SLA violations \cite{6297100} to meet consumers' expectations. Incubator \cite{Kim:2012:VMP:2401603.2401656} is a server bundle that measures the traffic dispersion and requirements of instances. The concept of placing VMs with contrasting traffic diffusion within the same server minimizes the over-subscription of resources and increases the utilization of network links. The authors of \cite{Kim:2012:VMP:2401603.2401656} propose a system that measures the time-varying traffic of tenants' VMs over a month and then uses a stable marriage algorithm to place these VMs so as to reduce network over-subscription. \cite{6297100} investigates the interference-aware VMP (IAVMP) problem to ensure the quality-of-service requirements of user subscriptions and efficiently maximize the network I/O performance. The authors formulate IAVMP as an integer linear program but, due to its complexity, propose a polynomial heuristic algorithm to tackle the problem, which guarantees high performance compared to other VMP algorithms.
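The intuition of pairing VMs with contrasting traffic patterns can be shown with a toy Python sketch: among hypothetical hourly traffic profiles, the pair whose combined peak is lowest is the best candidate for co-location. This is a drastic simplification of the month-long measurement and stable-marriage matching described above:
\begin{verbatim}
import itertools

profiles = {   # hypothetical traffic per time slot
    "web":   [9, 8, 7, 2, 1, 1],   # busy in the morning
    "batch": [1, 1, 2, 8, 9, 8],   # busy at night
    "db":    [5, 5, 5, 5, 5, 5],   # flat
    "cache": [8, 7, 6, 2, 2, 1],   # morning-heavy, like "web"
}

def combined_peak(a, b):   # worst-case load if a and b share a server
    return max(x + y for x, y in zip(profiles[a], profiles[b]))

best = min(itertools.combinations(sorted(profiles), 2),
           key=lambda p: combined_peak(*p))
print("co-locate", best, "combined peak:", combined_peak(*best))
# -> ('batch', 'web'): their peaks interleave, so the sum stays low
\end{verbatim}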
\subsection{Reliability}
Reliability is an important aspect of guaranteeing a high quality of service; given the number of running VMs in cloud data centers, it is arduous for cloud services to sustain VM performance due to software and hardware faults that cause VM failures. Therefore, enhancing reliability is a challenging issue in virtual machine placement problems. A. Zhou et al. \cite{7387769} propose a new redundant VMP algorithm to improve the reliability of cloud services. This approach, called OPVMP (\textit{optimal redundant virtual machine placement}), aims to reduce the lost time and the network resource consumption under fault-tolerance requirements. It is based on a three-step process, each step defined by an algorithm: (1) efficient VM placement, (2) host server election and (3) recovery strategy agreement; using a heuristic approach, it is able to select the suitable host servers and determine the optimal VM placement.\\
In the same vein of ensuring high reliability of cloud applications, the authors of \cite{doi:10.1080/10798587.2016.1152775} propose a VM placement scheme that flexibly chooses the fault-tolerant methodology (SelfAdaptionFTPlace) of cloud applications. The system architecture of SelfAdaptionFTPlace consists of three stages: (1) convert the application demands into constraint patterns, (2) select the flexible fault-tolerant procedures, and (3) solve the VMP problem. The constraint model considers three factors, resource consumption, failure rate and response time; a two-phase approach is adopted to settle the problem, where the first phase solves the fault-tolerant strategy of the VMP considering the continuously changing constraint parameters of cloud applications, while the second solves the VMP problem based on the solution of the first stage. Results demonstrate that SelfAdaptionFTPlace obtains better performance than existing methods (RandomFTPlace, NOFTPlace and ResourceFTPlace).
\section{Optimal VNF placement in 5G Network}
The telecom world is changing rapidly, becoming virtualized and cloudified. Software-Defined Networking (SDN), Network Function Virtualization (NFV) and mobile cloud edge computing are the key elements of a global economic trend toward 5G. The technology of the fifth-generation network can be divided into four areas:
\\
\noindent \textbf{Network softwarization}: Network softwarization is a general trend toward deploying, configuring and updating network functions and/or equipment through software programming. These virtualized network functions are dynamically allocated in cloud computing, considering the software reliability and the storage capabilities.
\\
\noindent \textbf{Network management/Orchestration}: NFV and SDN technologies constitute the foundation for managing the life cycle of logically isolated network partitions, called "slices". When creating a slice, the management and orchestration functions (NFV-MANO) provide the primary capabilities: selecting the requested functions, launching them on a virtualization platform, and connecting them via virtual networks created on the physical infrastructure.
\\
\noindent \textbf{Fronthaul/backhaul}: while the requirements of 5G will not be finalized until 2020, capacity and coverage are the most important aspects that must be emphasized in every evolution of cellular networks. To achieve these requirements, mobile operators will turn to small cells through the addition of remote radio heads (RRHs) operated with baseband units (BBUs). The mobile fronthaul (MFH) is the transport network connecting RRHs to BBUs, and the mobile backhaul (MBH) is the transport network connecting BBUs with core network functions, such as the MME, S-GW/P-GW, etc.
\\
\noindent \textbf{Multi-access edge computing}: provides service capabilities to application developers on networking infrastructure based on virtualization technologies. The goal is to define a set of APIs that enable the creation of virtual network functions (VNFs) responding to all the needs of a mobile communications network, including security, orchestration and portability, to deliver ultra-low latency and high bandwidth to applications.
\subsection{VNF placement Issues and Challenges}
The compelling deployment of network function virtualization requires multi-objective functions, such as minimizing CAPEX and OPEX, reducing network latency and decreasing the number of active nodes in the network. However, NFV placement that considers the trade-offs between these objectives may cause many conflicts, since hosting multiple VNFs on the same hardware can lead to scalability issues \cite{7608294}; for example, minimizing the number of active nodes may increase the aggregated traffic on physical links, which affects network latency, while reducing network latency may increase cost due to the redundancy of resources needed to deploy VNFs. Three Integer Linear Programming (ILP) models are proposed in \cite{7608294} to solve the VNF placement problem with VNF service chaining while ensuring resiliency against single-link, single-node and single node/link failures.
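As a generic illustration of the kind of ILP used for such problems (the notation below is ours and not that of \cite{7608294}), let the binary variable $x_{f,n}$ indicate that VNF $f$ is placed on node $n$ and $y_n$ that node $n$ is active. A basic node-minimizing placement then reads
\[
\min \sum_{n} y_n
\qquad \mbox{s.t.} \qquad
\sum_{n} x_{f,n} = 1 \;\;\forall f,
\qquad
\sum_{f} r_f \, x_{f,n} \leq C_n \, y_n \;\;\forall n,
\qquad
x_{f,n},\, y_n \in \{0,1\},
\]
where $r_f$ is the resource demand of VNF $f$ and $C_n$ the capacity of node $n$; the chaining, latency and resiliency requirements of \cite{7608294} enter as additional constraints on top of this core.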
The diverse network functions forming the 5G infrastructure are placed on VMs that can be moved among various PMs (physical machines or servers). Power consumption can then be minimized by shutting down unused resources; however, the resources each network function requires are not known exactly, and placing more virtual network functions on fewer physical resources may deteriorate the users' experience of the service and violate SLAs. \\
The challenge of placing virtual network functions (VNFs) along service chains is studied in \cite{7859379} to guarantee traffic- and energy-aware cost reduction, by proposing an algorithm that combines a sampling-based Markov approximation (MA) with matching theory to find an efficient solution that saves functional costs and network traffic costs. This algorithm, named SAMA, is based on two phases: i) identify the nodes where VNFs may be implemented, and ii) place VNFs so as to reduce the overall cost incurred in the infrastructure. The sampling reduces the state space of possible alternatives, which directly influences the convergence time.
In order to support increasing traffic, mobile operators will need to introduce a number of small cells through the addition of base stations or remote radio heads (RRHs) operated with baseband units (BBUs) and connected by mobile fronthaul/backhaul. The consolidation of mobile backhaul and fronthaul inside a single carrier network builds a new architecture named 5G Crosshaul. This anticipated structure for 5G networks \cite{pub2665423} requires fully integrated management of network resources in an elastic, scalable and cost-effective manner, achieved by integrating an SDN/NFV control framework. \\
The VNF placement objective is to reduce resource expenditure and improve reliability. The authors of \cite{pub2665423} propose two key SDN/NFV applications addressing energy utilization and cost-effective resource use: the Energy Management and Monitoring Application (EMMA) and the Resource Management Application (RMA), with the intention of enhancing performance and decreasing costs. The main goal of EMMA is to minimize the power consumption of a millimeter-wave mesh 5G network by turning off the mmWave components with low traffic requirements. Meanwhile, the goal of RMA is to manage Crosshaul resources, maximize resource utilization in a cost-efficient way and sustain the variable demand of 5G Points of Attachment (5G PoAs). Fig. 2, taken from \cite{pub2665423}, shows the physical infrastructure, in which the Crosshaul is split into three differentiated layers.\\
The modeling work of \cite{Cao2017} for the VNF placement optimization problem involves multiple optimization objectives, namely achieving minimal bandwidth consumption and the smallest maximum link utilization simultaneously; four genetic algorithms are proposed using the frameworks of two existing algorithms, MOGA (multiple objective genetic algorithm) and NSGA-II (non-dominated sorting genetic algorithm): Greedy-MOGA, Greedy-NSGA-II, Random-MOGA and Random-NSGA-II. Numerical analysis shows that Greedy-NSGA-II performs best among the four proposed algorithms. The critical issue for mobile operators' sustainability in terms of 5G relies on providing high service performance and immense data rate connectivity; the system scalability should be flexible enough to admit a large number of mobile applications.\\
Taking these requirements into account, consider for example a video streaming service where customers attach to the network to watch video streams and continuously consume resources. The service will be impacted in terms of play-out starvation, packet loss and startup delay when the statically allocated resources (i.e., physical memory, CPU, cache, buffer, swap) are insufficient to handle the number of connections, which degrades users' QoE. However, if the resources are over-provisioned, energy consumption will increase and revenue will diminish. To overcome these limitations, \cite{7511377} proposes a virtual infrastructure that dynamically expands or reduces the resources to provide an optimal usage of resources and ensure a good QoE for the provided services.
\subsection{Toward 5G Slicing}
Regarding the virtual machine placement problems already discussed, one needs to understand how telecom providers will bypass the issues related to virtual network function placement in 5G networks. In this context, network slicing offers a number of significant advantages that are particularly useful for the conception of next generation networks (NGNs) \cite{7926923}. Slicing provides flexible VNF placement that may enhance network performance and reduce operating costs. It addresses the deployment of several logical networks as separate business operations on a common physical infrastructure \cite{9d560172e1ba408fb355946fb8627734}. With the tremendous growth of cloud-based technologies toward integrated 5G infrastructures, diverse architectures have already been proposed, for example in \cite{Nikaein:2015:NSE:2795381.2795390}, \cite{9d560172e1ba408fb355946fb8627734} and \cite{7503760}, which provide the means to support the expected service diversity, flexible deployments and network slicing. The authors of \cite{Nikaein:2015:NSE:2795381.2795390} display a pattern for a 5G-ready architecture and an NFV-based network store along with network slicing for 5G applications. The objective of the proposed network store is to produce programmable VNFs and furnish 5G slices that perfectly match end users' demands. Meanwhile, network slicing aims to exploit virtual networks on the physical infrastructure by isolating the virtual resources and ensuring high performance of the virtual networks.\\
In the same way, the authors of \cite{7503760} deliver a new architectural design for open cloud-based 5G communications. The architectural pattern proposed in \cite{7503760} considers network slicing as a key idea in cloud-based RANs for increasing the scalability of current RAN systems. Based on these recent works \cite{Nikaein:2015:NSE:2795381.2795390}, \cite{7503760}, the network slices architecture illustrated in Fig. 3 is split into three layers:\\
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{Capture1.PNG}
\end{center}
\caption{5G crosshaul physical architecture}
\label{5G crosshaul physical architecture}
\end{figure*}
\noindent\textbf{The business layer} is an application and network functions marketplace used to provision diverse use cases (e.g., high mobility, speed, IoT). It establishes a slice that encodes all the information requested from the service layer to create the desired service.
\\
\noindent\textbf{The service layer} sustains the configuration, management and scaling of the services' operational bundle according to their specific use-case requirements defined in the "slice manifest". Based on decision making, it accomplishes network life-cycle service management and has direct access to the network information requested by the VNA.
\\
\noindent\textbf{The infrastructure layer} maintains the reconfigurable cloud ecosystem in real time and uses virtualization for high-speed services.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Capture2.PNG}
\caption{5G Network Slicing architectures (\cite{Nikaein:2015:NSE:2795381.2795390}, \cite{7503760}) }
\label{5G Network Slicing architectures }
\end{figure*}
Network slicing will become more relevant and more important in the context of 5G networks. It is created to deliver services to many vertical industries: for example, both SDN and NFV virtualization are effectively creating a programmable network-as-a-service that exposes certain network capabilities to the business layer, and those services can then be tailored to different vertical industries. In particular, some industries will need very low latency and high security, while others will require high bandwidth, which means each slice will have different characteristics. The user experience, however, remains unaffected, since network slices are isolated from each other in both the user plane and the control plane. \\
According to a white paper published by Ericsson, future 5G networks will have a flexible structure in which network slices separately allocate capacity, speed and coverage. Hence, with network slicing technology, a single physical network can be split into multiple virtual networks, allowing operators to provide various services to different customer slices.
\section{Conclusion}
Virtualization technologies, heavily used by cloud computing environments, enable virtual machines (VMs) to be transferred between physical systems. In such a competitive field, cloud providers seek new ways to achieve optimal virtual machine placement by addressing various problems such as energy consumption, high cost, performance degradation and SLA violation. According to previous research, there are broadly two types of solutions, multi-objective and mono-objective; in particular, we classified those solutions into five objective functions as follows: (1) minimizing the energy consumption, (2) optimizing cost, (3) minimizing network traffic, (4) balancing resource utilization and (5) ensuring a high quality of service. Various protocols, heuristics, algorithms and architectures were surveyed. The virtual machine placement problem has been solved with heuristic, meta-heuristic, deterministic and approximation algorithms, depending on the angle from which the VMP issue is treated. The majority of works use heuristic and/or meta-heuristic algorithms, since they provide good-quality solutions. Finding an optimal solution under all the constraints facing VM and VNF placement in 5G networks will be the subject of our future work.
\newpage
\includepdf[pages={1-2}]{Chart.pdf}